Tuesday, 1 February 2011
On electronic wastefulness
People are becoming more and more aware of the problems of our civilization. Environmental and economic problems have strengthened the criticism of consumer culture, monetary power and political systems, and all kinds of countercultural movements are thriving. At the same time, however, ever more people are increasingly dependent on digital technology, which gets produced, bought, used and abandoned in greater masses than ever, causing an ever bigger impact on the world in the form of waste and pollution.
Because of this, I have decided to finally summarize my thoughts on how digital technology reflects the malfunctions of our civilization. I became a hobbyist programmer as a schoolkid in the mid-eighties, and fifteen years later I became a professional software developer. Despite all this baggage, I'm going to attempt to keep my words simple enough for laypeople to understand. Those who want to be convinced by citations and technical argumentation will get them at some later time.
Counter-explosion
For over fifty years, the progress of digital technology has been following the so-called Moore's law, which predicts that the number of transistors that fit on a microchip doubles every two-or-so years. This means that it is possible to produce digital devices that are of the same physical size but have ever more memory, ever more processing speed and ever greater overall capabilities.
Moore's law itself is not evil, as it also means that it is possible to perform the same functions with ever less energy and raw material. However, people are people and behave like people: whenever it becomes possible to do something more easily and with less consumption, they start doing more of it. This phenomenon is called the "rebound effect", after a medical term of the same name. It can be seen in many kinds of things: more fuel-efficient cars make people drive more, and fewer calories in food make weight-losers eat more. The worst case is when the actual saving becomes negative: a thing that is supposed to reduce consumption actually increases it instead.
In information technology, the most prominent form of the rebound effect is the bloating of software, which takes place at the same explosive rate as the improvement of hardware. This phenomenon is called Wirth's law. If we took a time machine ride back to 1990 and told our contemporaries that desktop computers would become a thousand times faster in twenty years, they would surely assume that almost anything would happen instantaneously with them. If we then corrected them by saying that software programs still take time to start up in the 2010s and that it is sometimes painful to tolerate their slowness and unresponsiveness, they wouldn't believe it. How is it even possible to write programs so poorly that they don't run smoothly on a futuristic computer a thousand times more powerful? This fact would become even harder to believe if we told them that it also applies to things like word processors, which are used for more or less exactly the same things as before.
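The "thousand times faster" figure follows directly from Moore's law: ten doublings in the twenty years from 1990 to 2010. As a trivial back-of-the-envelope check (illustrative arithmetic only):

```python
# Back-of-the-envelope check of the "thousand times faster" figure:
# Moore's law predicts a doubling roughly every two years, so twenty
# years (1990 to 2010) give ten doublings.
years = 20
doubling_period = 2
speedup = 2 ** (years // doubling_period)
print(speedup)  # 1024 -- roughly a thousandfold
```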
One reason for the unnecessary largeness, slowness and complexity of software is the dominant economic ideal of indefinite growth, which makes us believe that bigger things are always better and that it is better to sell customers more than they need. Another reason is that rapid cycles of hardware upgrade make software developers indifferent: even if an application program were mindlessly slow and resource-consuming even on the latest hardware, no one would notice it a couple of years later when the hardware is a couple of times faster. Nearly any excuse is valid for bloat. If it is possible to shorten software development cycles even slightly by stacking all kinds of abstraction frameworks and poorly implemented scripting languages on top of one another, it will be done.
The bloat phenomenon annoys people more and more in their daily life, as all kinds of electric appliances, starting from the simplest flashlight, contain increasingly complex digital technology, which drowns the user in uncontrollable masses of functionality and strange software bugs. The digitalization of television, for example, brought a whole bunch of computer-style immaturity to the TV-watching experience. I've even seen an electric kitchen stove that wouldn't heat up until the user had first set the integrated digital clock. Diverse functionality itself is not evil, but if the mere existence of extra features disrupts the use of the basic ones, something is totally wrong.
Even though many things in our world tend to swell and complexify, it is difficult to find a physical-world counterpart to software bloat, as the amount of matter and living space on our planet does not increase exponentially. It is not possible to double the size of one's apartment every two years in order to fit in more useless stuff. It is not possible to increase the complexity of official paperwork indefinitely, as it would require more and more food and accommodation space for the expanding army of bureaucrats. In the physical world, it is sometimes necessary to evaluate what is necessary and how to compress the whole in order to fit more. Such necessity does not exist in the digital world, however; there, it is possible to constantly inhale and never exhale.
Disposability
The prevailing belief system of today's world equates well-being with material abundance. The more production and consumption there is, the more well-being there is, and that's it. Even though the politicians in rich countries don't want to confess this belief so clearly anymore, they still use concepts such as "gross national product", "economic growth" and "standard of living" which are based on the idealization of boundless abundance.
As it is the holy responsibility of all areas of production to grow indefinitely, it is important to increase consumption regardless of whether it is sensible or not. If it is not possible to increase consumption in natural ways, planned obsolescence comes to the rescue. Some decades ago, people bought washing machines and television sets for the twenty years to follow, but today's consumers have the "privilege" of buying at least four of each during the same timespan, as the lifespans of these products have been deliberately shortened.
The scheduled breaking of electric appliances is now easier than ever, as most of them have an integrated microprocessor running a program of some kind. It is technically possible, for example, to hide a timer in this program, causing the device to either "break" or start misbehaving shortly after the warranty is over. This kind of sabotage may be beneficial for the sales of smaller and cheaper devices, but it is not necessary in the more complex ones; in their case, the bloated poor-quality software serves the same purpose.
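To make the mechanism concrete, a hypothetical "warranty timer" of the kind described above could be as trivial as the sketch below. Every name and number here is invented for illustration, and no real product is being described:

```python
import time

# Hypothetical illustration of a "warranty timer" hidden in device
# firmware. All names and figures are invented for this sketch.
WARRANTY_SECONDS = 2 * 365 * 24 * 3600  # a two-year warranty period


def should_misbehave(first_boot, now=None):
    """Return True once the warranty period has elapsed."""
    now = time.time() if now is None else now
    return now - first_boot > WARRANTY_SECONDS


# The device works fine during the warranty...
assert not should_misbehave(0, now=WARRANTY_SECONDS - 1)
# ...and starts "breaking" shortly afterwards.
assert should_misbehave(0, now=WARRANTY_SECONDS + 1)
```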
Computers get upgraded especially when the software somehow becomes intolerably slow or even impossible to run. This change can take place even if the computer is used for exactly the same things as before. Bloat makes new versions of familiar software more resource-consuming, and when reforms are introduced on familiar websites, they tend to bloat up as well. In addition, some operating systems tend to slow down "automatically", but this is fortunately something that can be fixed by the user.
The experience of slowness, in its most annoying form, is caused by overly long response times. The response time is the time between the user's action and the indication that the action has been registered. Whenever the user moves the mouse, the cursor on the screen must immediately match the movement. Whenever the user presses a letter key on the keyboard, the same letter must appear on the screen immediately. Whenever the user clicks a button on the screen, the graphic of the button must change immediately. According to usability research, the response time must be less than 1/10 of a second or the system feels laggy. If it takes more than a second, the user's blood pressure is already rising. After ten seconds, the user is convinced that "the whole piece of junk has locked up".
Slow response times are usually regarded as an indicator that the device is slow and that it is necessary to buy a new one. This is a misconception, however. Slow response times indicate nothing but an indifferent attitude to software design. Every computing device that has become available during the last thirty years is completely capable of delivering the response within 1/10 of a second in every possible situation. Despite this fact, the software of the 2010s is still usually designed in such a way that the response is provided only once the program has finished all the supposedly more urgent tasks. What is supposed to be more important than serving the user? In the mainframe era, there were quite a few such things, but in today's personal computing, this should never be the case. Fixing response time problems would be a way to permanently make technology more comfortable to use, as well as to help users tolerate the actual slowness. The industry, however, is strangely indifferent to these problems. Response times are, from its point of view, something that "gets fixed" automatically, at least for a short while and in some areas, with each hardware upgrade.
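The design principle itself is simple: acknowledge the user's action immediately and defer the heavy work. The sketch below illustrates it with a plain worker thread and a queue in place of a real GUI event loop; the function names are invented for illustration:

```python
import queue
import threading
import time

# Sketch of the responsiveness principle: give instant feedback on the
# "UI thread" and push the slow work onto a background worker, instead
# of making the user wait for all "more urgent" tasks to finish first.
tasks = queue.Queue()


def worker():
    """Run deferred jobs off the UI thread, one at a time."""
    while True:
        job = tasks.get()
        if job is None:
            break
        job()  # the slow part runs here, never blocking the UI
        tasks.task_done()


threading.Thread(target=worker, daemon=True).start()


def on_button_click():
    print("button pressed")           # instant feedback, well under 0.1 s
    tasks.put(lambda: time.sleep(1))  # the slow operation is deferred


on_button_click()  # returns immediately; the user is never kept waiting
tasks.join()       # a real program would keep running its event loop here
```

A real toolkit would repaint the button instead of printing, but the structure is the same: the acknowledgment never waits behind the slow work.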
Response time problems are just a single example of how the industry considers it more important to invent new features than to fix problems that irritate the basic user. A product that has too few problems may make consumers too satisfied. So satisfied that they don't feel like buying the next slightly "better" model which replaces old problems with new ones. Companies that want to ensure their growth prefer to do everything multiple times in slightly substandard ways instead of seeking any kind of perfection. Satisfaction is the worst enemy of unnecessary growth.
Is new hardware any better?
I'm sure that most readers have at least heard about the problems caused by the rat race of upgrade and overproduction. The landfills in rich countries are full of perfectly functioning items that interest no one. Having anything repaired is stupid, as it is nearly always easier and cheaper to just buy new stuff. Selling used items is difficult, as most people won't accept them even for free. Production eats up more and more natural resources despite all the efforts of "greening up" the production lines and recycling more and more raw material.
The role of software in the overproduction cycle of digital technology, however, is not so widely understood. Software is the soul of every microprocessor-based device, and it defines most of what it is like to use the device and how much of its potential can be used. Bad software can make even good hardware useless, whereas ingenious software can make even a humble device do things that the original designer could never have imagined. It is possible to both lengthen and shorten product lifetimes via software.
New hardware is often advocated with new features that are not actually features of the hardware but of the software it runs. Most of the features of the so-called "smartphones", for example, are completely software-based. It would be perfectly possible to rewrite the software of an old and humble cellphone in order to give it a bunch of features that would effectively turn it into a "smartphone". Of course, it is not possible to do complete impossibilities with software; there is no software trick that makes a camera-less phone take photos. Nevertheless, the general rule is that hardware is much more capable than its default software. The more the hardware advances, the more contrast there is between the capabilities of the software and the potential of the hardware.
If we consider the various tasks for which personal computers are used nowadays, we will notice that only a small minority of them actually requires much from the hardware. Of course, bad software may make some tasks feel more demanding than they actually are, but that's another issue. For instance, most of the new online services, from Facebook to Youtube and Spotify, could very well be implemented so that they run on the PCs of the late 1990s. Actually, it would be possible to make them run more smoothly than the existing versions run on today's PCs. Likewise, with better operating systems and other software, we could make the same old hardware feel faster and more comfortable to use than today's hardware. From this we can conclude that the computing power of the 2000s is neither useful, necessary nor pleasing for most users -- unless we count the pseudo-benefit that it makes bad and slow software easier to tolerate, of course.
Let us now imagine that the last ten years in personal computing went a little bit differently -- that most of the computers sold to the great masses had been "People's Computers" with a fixed hardware setup. This would have meant that hardware performance remained constant for the last ten years. The 2011 of this alternate universe would probably be somewhat similar to our 2011, and some things could even be better. All the familiar software programs and online services would be there, they would just have been implemented more wisely. The use of the computers would have become faster and more comfortable over the years, but this would have been due to the improvement of software, not hardware. Ordinary people would never need to think about "hardware requirements", as the fixedness of the hardware would ensure that all software, services and peripherals work. New computers would probably be lighter and more energy-efficient, as the lack of competition in performance would have moved the competition to these areas. These are not just fringe utopian ideas; anyone can reach similar conclusions by studying the history of home computing, where several computer and console models have remained constant for ten years or more.
Of course it is easy to come up with ideas of tasks that demand more processing power than what was available to common people ten years ago or even today. A typical late-1990s desktop PC, for example, plays ordinary DVD-quality movies perfectly but may have major problems with the HD resolutions that are fashionable in the early 2010s. Similarly, by increasing the numbers, it is possible to come up with imaginary resolutions that are out of the reach of even the most expensive special-purpose equipment available today. For many people, this is exactly what technological progress means -- increase in numerical measures, the possibility to do the same old things in ever greater scales. When a consumer replaces an old TV with a new one, he or she gets a period of novelty vibes from the more magnificent picture quality. After a couple of years, the consumer can buy another TV and get the novelty vibes once again. If we had access to unlimited natural resources, it would be possible to go on with this vanity cycle indefinitely, but still without improving anyone's quality of life to any considerable extent.
Most of the technological progress facilitated by the personal computing resources of the 2000s has been quantitative -- doing the same old stuff that became possible in the 1990s but with bigger numbers. Editing movies and pictures that have ever more pixels, running around in 3D video game worlds that have ever more triangles. It is difficult to even imagine a computational task relevant to an ordinary person that would require the number-crunching power of a 2000s home computer due to its nature alone, without any quantitative exaggeration. This could very well be regarded as an indicator that we already have enough processing power for a while. The software and user culture are lagging so far behind the hardware improvements that it would be better to concentrate on them instead and leave the hardware in the background.
Helplessness
In addition to the senseless abundance of material items, today's people are also disturbed by a senseless abundance of information. Information includes not only the ever expanding flood of video, audio and text coming from the various media, but also the structural information incorporated in material and immaterial things. The expansion of this structural information manifests as increasing complexity of everything: consumer items, society's systems, cultural phenomena. Those who want to understand the tools they use and the things that affect their life must absorb ever greater amounts of structural information about them. Many people have already given up on understanding and just try to get along.
Many frown upon people who can't boil an egg or drive a nail into a wall without a special-purpose egg-boiler or nailgun, or who are not even interested in how the groceries come to the store or the electricity to the wall socket. However, the expanding flood of information and the complexification of everything may eventually result in a world where neo-helplessness and poor common knowledge are the normal condition. In computing, complexification has already gone so far that even many experts don't dare to understand how the technology works but prefer to guess and randomize.
Someone who wants to master a tool must build a mental model of its operation. If the tool is a very simple one, such as a hammer, the mental model builds up nearly automatically after a very short study. If someone who uses a hammer accidentally hits their finger with it, they will probably blame themselves instead of the hammer, as the functionality of a hammer can be understood perfectly even by someone who is not very capable of using it. However, when a computer program behaves against the user's will, the user will probably blame the technology instead of themselves. In situations like this, the user's mental model of how the program works does not match its actual functionality.
The more bloated a software program is, the more effort the user needs to invest in order to build an adequate mental model. Some programs are even marketing-minded enough to impose their new and glorious features on the user. This doesn't help at all in forming the mental model. Besides, most users don't have the slightest interest in extensive exploration but rather use a simple map and learn to tolerate the uncertainty caused by its rudimentariness. When we also consider that programs may change their functionality quite a lot between versions, even enthusiasts turn cynical and frustrated when their precious mental maps become obsolete.
Many software programs try to fix the complexity problem by increasing the complexity instead of decreasing it. This mostly manifests as "intelligence". An "intelligent" program monitors the user, guesses their intents and possibly suggests various courses of action based on them. For example, a word processor may offer help in writing a letter, or a file manager may suggest things to do with a newly inserted memory stick. The users are offered all kinds of controlled ready-made functionality and "wizards" even for tasks they would surely prefer to do by themselves, at least if they had a chance to learn the normal basic functionality. If the user is forced to use specialized features before learning the basic ones, he or she will be totally helpless in situations where a special-purpose feature for the particular task does not exist -- just like someone who can use egg-boilers and nailguns but not kettles or hammers.
Technology exists to make things easier to do and to facilitate otherwise impossible tasks. However, if a technological appliance becomes so complex that its use is more like random guessing than goal-oriented controlling, we can say that the appliance no longer serves its purpose and that the user has been taken over by the technology. For this reason, it is increasingly important to keep things simple and controllable. Simplicity, of course, does not mean mere superficial pseudo-simplicity that hides the internal complexity, but the avoidance of complexity on all levels. The user cannot be in full control without having some kind of an idea of what the tool is doing at any given time.
In software, it may be useful to reorder the complexity so that there is a simple core program from which any additional complexity is functionally separated until the user deliberately activates it. This would make programs feel reliable and controllable even with simple mental maps. An image processing program, for example, could resemble a simple paint program at its core level, and its functionality could be learned perfectly after a very short testing period. All kinds of auxiliary functions, automations and other specialities could be easily found if needed, and the user could extend the core with them depending on their particular needs. Still, their existence would never disturb those users who don't need them. Regardless of the user's level, the mental map would always match how the program actually works, and the program would therefore never surprise the user by acting against his or her expectations.
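A minimal sketch of this "simple core, deliberately activated extras" structure might look like the following; all class and function names are invented for illustration:

```python
# Sketch of a "simple core + opt-in extensions" design. The core paint
# program knows only a few basic operations; anything fancier lives in
# an extension that does not exist for the user until explicitly
# enabled. All names here are invented for illustration.

class PaintCore:
    def __init__(self):
        # The entire core: a small, learnable set of commands.
        self.commands = {"draw": self.draw, "erase": self.erase}

    def draw(self):
        return "drawing"

    def erase(self):
        return "erasing"

    def enable_extension(self, extension):
        # Extras appear only after a deliberate activation by the user.
        self.commands.update(extension())


def blur_extension():
    """An optional, separately packaged feature."""
    return {"blur": lambda: "blurring"}


app = PaintCore()
assert sorted(app.commands) == ["draw", "erase"]  # simple mental map
app.enable_extension(blur_extension)              # deliberate opt-in
assert app.commands["blur"]() == "blurring"
```

The point of the structure is that the user's mental map of the core stays valid no matter how many extensions exist, because nothing is added to it without the user's own action.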
Software is rarely built like this, however. There is not much interest in the market for movements that make technology genuinely more approachable and comprehensible. Consumer masses who feel helpless in the face of technology are, after all, easier to control than masses of people who know what they are doing (or at least think so). It is much more beneficial for the industry to feed the helplessness by drowning people in trivialities, distancing them from the basics and perhaps even subjecting them to the power of an all-guessing artificially intelligent assistant algorithm.
Changing the world
I have now discussed all kinds of issues, of which I have mostly accused bad software, and of whose badness I have mostly accused the economic system that idealizes growth and material abundance. But is it possible to do something about these issues? If most of the problems are indeed software-related, then couldn't they be resolved by producing better software, perhaps even outside of the commercial framework if necessary?
When calling for a counter-force to commercial software development, the free and open-source software (FOSS) movement is the one most commonly mentioned. FOSS software has mostly been produced as volunteer work without monetary income, but as the results of the work can be freely duplicated and used as the basis of new work, it has managed to have a much greater impact than volunteer work usually does. The greatest impact has been among technology professionals and hobbyists, but even laypeople may recognize names such as Linux, Firefox and OpenOffice (the latter two of which, however, were originally proprietary software).
FOSS is not bound to the requirements of the market. Even in cases where it is developed by corporations, people operating outside the commercial framework can contribute to it and base new projects on it. FOSS therefore has, in theory, the full potential to be independent of all the misanthropic design choices caused by the market. However, FOSS suffers from most of these problems just as much as proprietary software does, and it even has a whole bunch of its own extra problems. The reasons for this can be found in the history of the movement. Since the beginning, the FOSS movement has mostly concentrated on cloning existing software without spending much energy on questioning the dominant design principles. The philosophers of the movement tend to be more concerned about legal and political issues than technical ones: "How can we maximize our legal rights?" instead of "How should we design our software so that it benefits the whole of humanity instead of just the expert class?"
I am convinced that FOSS would be able to give the world much more than what it has already given if it could form a stronger contrast between itself and the growth-centric industry. In order to strengthen the contrast, we need a powerful manifesto. This manifesto would need to profoundly denounce all the disturbances to technological progress caused by the growth ideology, and it would need to state the principles on which software design should be based in order to benefit human beings and nature in the best possible way. Of course, this manifesto wouldn't exist merely for reinventing the wheel, but also for re-evaluating existing technology and redirecting its progress towards the better.
But what can ordinary people do? Even a superficial awareness of the causes of the problems is better than nothing. One can easily learn to recognize many types of problems, such as those related to response times. One can also learn to blame the right thing instead of superficially crying that "the computer is slow" or "the computer is misbehaving". Changes in language are also a nice way of spreading awareness. If people in general learned to blame software instead of hardware, they would probably also learn to demand software-based solutions to their problems instead of needlessly purchasing new hardware.
When hardware purchases are justifiable, those concerned about the environment will prefer second-hand hardware over new, as long as it has enough power for the given purposes. It is a common misconception to assume that new hardware always consumes less power than old -- actually, the trend has more often been exactly the opposite. During the ten years from the mid-1990s to the mid-2000s, for example, the power consumption of a typical desktop PC (excluding the monitor) increased tenfold, as the industry was more zealous about increasing processing power than improving energy efficiency. Power consumption curves for video game consoles have been even steeper. Of course, there are many examples of positive development as well. For example, CRT screens are worth replacing with similarly sized LCD screens, and laptops also typically consume less than comparable desktop PCs.
There is a strong market push towards discontinuing all kinds of service and repair activity. Especially in the case of cellphones and other small gadgets, "service" more and more often means that the gadget is sent to the manufacturer, which dismantles it for raw material and sends a new gadget to the customer. For this reason, it may be reasonable to consider how feasible do-it-yourself repair is when choosing a piece of hardware. As all forms of DIY culture seem to be waning due to a lack of interest, it is worthwhile to support them in all possible ways in order to ensure that there will still be someone in the future who can repair something.
Of course, we all hope that the world will change so that the human- and nature-friendly ways of doing things are always the most beneficial ones, even in "the reality of numbers and charts". Such a change will probably take longer than a few decades, however, regardless of the volume of the political quarrel. It may therefore not be wise to wait indefinitely for the system to change, as it is already possible to participate in practical countercultural activity today -- even in things related to digital technology.
Friday, 3 September 2010
The Future of Demo Art: The Demoscene in the 2010s
Written by Ville-Matias Heikkilä a.k.a. viznut/pwp, released on the web on 2010-09-03. Also available in PDF format.
Introduction
An end of a decade is often regarded as an end of an era. Around the new year 2009-2010, I was thinking a lot about the future of demo art, which I have been involved with since the mid-nineties. The mental processes that led to this essay were also inspired by various events of 2010, such as the last Breakpoint party ever, as well as Markku Reunanen's licentiate thesis on the demoscene.

First of all, I want to make it clear that I'm not going to discuss "the death of the scene". It's not even a valid scenario for me. The demo culture is already 25 years old, and during these years it has shown its ability to adapt to the changes in its technological and cultural surroundings, so it's not very wise to question this ability. Instead, I want to speculate about what kind of changes might take place during the next ten years. What is the potential of the artform in the 2010s, and what kind of challenges and opportunities is it going to face?
After the nineties
Back in the early nineties, demo art still represented the technological cutting edge of what home computers were able to show. You couldn't download and play back real-life music or movies, and even if you could, the quality was poor and the file sizes prohibitive. It was possible to scan photographs and paintings, but the quality could still be tremendously improved with some skilled hand-pixelling. Demos frequently showed things that other computer programs, such as video games, did not, and this made them hot currency among masses of computer hobbyists far beyond the actual demoscene. As a result, the subculture experienced a constant influx of young and enthusiastic newcomers who wanted to become kings of computer art.

After the nineties, the traditional weapons of the demoscene became more or less ineffective. Seeing a demo on a computer screen is no longer a unique experience, as demos have the whole corpus of audiovisual culture to compete with. Programming is no longer a fashionable way of expressing creativity, as there is ready-made software easily available for almost any purpose. The massive, diverse hordes of the Internet make you feel small in comparison; the meaning of life is no longer to become a legend, but to sit in your own subcultural corner with an introvert attitude of "you make it, you watch it". Young and enthusiastic people interested in art or programming have hundreds of new paths to choose from, and only a few pick the good, old and thorny path of demomaking.
There are many people who miss the "lost days of glory" of their teens. To them, demos have lost their "glamor" and are now becoming more and more irrelevant. I see things a little bit differently, however.
Consider an alternative history where the glamor was never lost, and the influx of enthusiastic teenagers always remained constant. Year after year, you would have witnessed masses of newbies making the same mistakes all over again. You would also have noticed that you were "becoming too old for this shit" and looked for a totally different channel for your creativity. The average career of a demo artist would thus have remained quite short, so there would never have been veteran artists with strong and refined visions, and thus no chance for the artform to grow up. Therefore, I don't see it as a bad thing at all that demos are no longer as fashionable as they used to be.
There have been many changes in the demo culture during the last ten years. Most of them can be thought of as adaptations to the changing social and technological surroundings, but you can also think of them as belonging to a growth process. As your testosterone levels have lowered, you are no longer as arrogant about your underground trueness as you used to be. As you have gathered more experience and wisdom about life and the world, you can appreciate the diversity around you much better than you used to. More outreach and less fight, you know.
When thinking about the growth process, one should also consider how the relationship between the demoscene and the technology industry has changed. In the eighties, it was all about piracy. In the nineties, people forgot about piracy and started to dream about careers in the software industry. Today, most sceners already have a job, so they have started to regard their free-time activity as a relief from their career rather than as something that would support it.
Especially those who happen to be coders "on both sides" tend to have an urge to separate the two worlds in some way or another by emphasizing the aspects that differentiate democoding from professional programming. You can't be very creative, independent, experimental or low-level in most programming jobs, so you'll want to be that in your artistic endeavours. You may want to choose totally different platforms, methods and technical approaches so that your leisure activity actually feels like leisure activity.
Thus, although many demosceners work in the software industry, the two worlds seem to be drifting apart. And it is not just because of the separation of work and freetime, but also because of the changes in the industry and the world in general.
Although the complexity of everything in human culture has been steadily increasing for a couple of centuries already, there has been a very dramatic acceleration during the past few decades, especially in technology. This means, among other things, that there are more and more prepackaged blackboxes and less and less room for do-it-yourself activities.
Demo art was born in a cultural environment that advocated hobbyist programming and thorough bitwise understanding of one's gear. The technical ambitions of democoders were in complete harmony with the mainstream philosophy of that era's homecomputing. During the following decades, however, the mainstream philosophy degraded from do-it-yourself into passive consumerism, while the demoscene continued to cultivate its original values and attitudes. So, like it or not, demos are now in a "countercultural" zone.
While demos have less and less appeal to the mainstream industry where the "hardcore" niches are gradually disappearing, they are becoming increasingly interesting to all kinds of starving artists, grassroots hippies, radical do-it-yourself guys and other "countercultural" people. And if you want your creative work to make any larger-scale sense in the future world, I guess it might be worthwhile to start hanging around with these guys as well.
Core Demoscene Activity
The changes during the last ten years have made the demoscene activity somewhat vague. In the nineties, you basically made assembly code, pixel graphics and tracker music, and that was it. The scene was the secret cult that maintained the highest technical standards in all of these "underground" forms of creativity. Nowadays, everyone you know uses computers for creativity, some of them even being better at it than you, and most computer-aided creativity falls under some valid competition category at demoparties. Almost any deviantART user could submit their work to an average graphics compo, and sometimes even win it. As almost anything can be a "demoscene production", being a "demoscener" is no longer about what your creative methods are like, but whom you hang around with.

When talking about demo art, it is far too easy to concentrate on the social background ("the scene") instead of the actual substance of the artform and the kind of activity that makes it unique. For the purposes of this essay, I have therefore attempted to extract and define something that I call "Core Demoscene Activity". It is something I regard as the unique essence of demo art, the pulsating heart that gives it its life. All the other creative activities of demo art stem from the core activity, either directly or indirectly.
When defining "core demoscene activity", we first need to define what it isn't. The first things to rule out are the social aspects such as participating in demoscene events. These are important in upholding the social network, but they are not vital for the existence of demos. Making demos is supposed to be the reason for attending parties, not the other way around.
The core activity is not just "doing creative things with a computer" either. Everyone does it, even your mother. And not even "making non-interactive realtime animations", as there are other branches of culture that do the same thing -- the VJ and machinima communities, for example. Demos do have their own esthetic sensibilities, yes, but we are now looking for something more profound than that.
The most essential thing, in my opinion, is the program code. And not just any tame industry-standard code that fulfills some given specifications, but the wild and experimental code that does something that opens up new and unpredicted possibilities. Possibilities that are simply out of the reach of existing software tools. Although there are other areas of computer culture that practise non-compromising hard-core programming, I think the demoscene approach is unique enough to serve as the basis of a complete definition.
The core activity of the demoscene is very technical. Exploration and novel exploitation of various possible hardware and software platforms. Experimentation with new algorithms, mathematical formulas and novel technical concepts. Stretching the expressive power of the byte. You can remove musicians, graphicians and conceptual experimenters, but you cannot remove hardcore experimental programming without destroying the essence of demo art.

The values and preferences of demoscene-style programming are very similar to those of traditional hackers (of the MIT tradition). A major difference, however, seems to be that a traditional hacker determines the hack value of a program primarily by looking at the code, while a demo artist primarily looks at the audiovisual output. An ingenious routine alone is not enough; it must also be presented well, so that non-programmers are also able to appreciate the hack value. A lot of effort is put into presentational tweaking in order to maximize the audiovisual impact. This relationship between code and presentation is another unique thing in demo art.
Here is a short and somewhat idealized definition of "Core Demoscene Activity":
- Core Demoscene Activity is the activity that leads to the discovery of new techniques to be used in demo art.
- Everything in Core Demoscene Activity needs to directly or indirectly support the discovery of new kinds of audiovisual output. Either something not seen on your platform before, or something not seen anywhere before.
- The exploration should ideally concentrate on things that are beyond the reach of existing software tools, libraries or de-facto standard methods. This usually requires a do-it-yourself approach that starts from the lowest available level of abstraction.
- General-purpose solutions or reusable code are never required on this level, so they should not interfere with the research. Rewrite from scratch if necessary.
Of course, the core activity alone is not enough, as the new discoveries need to be incorporated into actual productions, which also often include a lot of content created with non-programmatic methods. So, here is a four-level scheme that classifies the various creative activities of demo art based on their methodological distance from the "core". Graphically, this could be presented as nested circles. Note that the scheme is not supposed to be interpreted as a hierarchy of "eliteness" or "trueness"; it is just one possible way of talking about things.
- First Circle / Core Demoscene Activity: Hardcore experimental programming. Discovery of new techniques, algorithms, formulas, theories, etc. which are put in use on the Second Circle.
- Second Circle Activity: Application-level programming. Demo composition, presentational tweaking of effect code, content creation via programming, development of specialized content creation tools (trackers, demomakers, softsynths), etc.
- Third Circle Activity: Content creation with experimental, specialized and "highly non-standard" tools. Musical composition with trackers, custom softsynths or chip music software; pixel and character graphics; custom content creation software (such as demomakers), etc.
- Fourth Circle Activity: Content creation with "industry-standard tools" including high-profile software and "real-life" instruments. Most of the bitmap graphics, 3D modelling and music in modern "full-size" demos have been created with fourth-circle techniques. Design/storyboard work also falls in the fourth circle. Blends rather seamlessly with mainstream computer-aided creativity.
It should be noted that the experimental or even "avant-garde" attitude present in the Core Activity can also be found on the other levels. This also makes the Fourth Circle important: while it is possible to do conceptual experimentation on any level, general-purpose industry-standard tools are often the best choices when trying out a random non-technical idea.
The four-circle scheme seems to be applicable to some other forms of digital art as well. In the autumn of 2009, the discovery of the Mandelbulb, an outstanding 3D variant of the classic Mandelbrot set, inspired me to look into the fractal art community. The mathematical experimentation that led to the discovery of the Mandelbulb formula was definitely a kind of "core activity". Some time later, an "easy-to-use" rendering tool called "Mandelbulber" was released to the community, in what I would classify as "second-circle" activity. The availability of such a tool made it possible for the non-programmers of the community to use the newly discovered mathematical structure in their art, in activities that would fall on the third and fourth circles.

Is it only about demos?
The artistic production central to demo culture is, obviously, the demo. According to the current mainstream definition, a demo is a stand-alone computer program that shows an audiovisual presentation, a couple of minutes long, using real-time rendering. It remains exactly the same from run to run, and you can't interact with it. But is this all? Is there something that demo artists can give to the world besides demos?

I'm asking this for a reason. The whole idea of a demo, defined in this way, sounds somewhat redundant to laymen. What is the point in emphasizing real-time rendering in something that might just as well be a prerendered video? Isn't it kind of wasteful to use a clever technical discovery to only show a fixed set of special cases? In order to let the jewels of Core Demoscene Activity shine in their full splendor, there should be a larger scale of equally glorified ways of demonstrating them. Such as interactive art. Or dynamic non-interactive art. Maybe games. Virtual toys. Creative toys or games. Creative tools. Or something in the vast gray areas between the previously mentioned categories.
The idea of a "non-interactive realtime show" is, of course, tightly knit with the standard format of demoparty competitions. Demos are optimized for a single screening for a large audience, and it is therefore preferable that you can fix as many things as possible beforehand. Realtime rendering wasn't enforced as a rule until the video playback capabilities of home computers had become decent enough to be regarded as a threat to the dominance of hardcore program code.
But it's not all about party screenings. There are many other types of venues in the world, and there are, for example, people who still actually bother to download demoscene productions for watching at home. These people may even desire more from their downloaded programs than just a couple of minutes of entertainment. There may be spectators who, for example, would like to create their own art with the methods used in the demo. Of the categories mentioned before, I would therefore like to elevate creative toys and tools to a special position.
It has been proven that creative tools originating in the demoscene can give rise to completely new creative subcultures. Take trackers, for example. The PC tracker scene of the nineties was much wider than the demoscene that gave it the tools to work with. In the vast mosaic of today's Internet world, there is room for all kinds of niches. Release a sufficiently interesting creative tool, and with some luck, you'll inspire a bunch of freaks to find their own preferred means of creativity. The freaks may even form a tight-knit community around your tool and raise you to a kind of legend status you can't achieve with demo compo victories alone.
Back in the testosterone-filled days, you frowned upon those who used certain creative tools without understanding their deep technicalities. But nowadays, you may already realize the importance of "laymen" exploring the expressive possibilities of your ingenious routine or engine. If you are turned off by the fact that "everyone" is able to (ab)use your technical idea, you should move on and invent an even better one. The Core Activity is about the continuous pushing of boundaries, not about jealously guarding your invention for as long as you can.
Now, is there a risk that the demoscene will "bland out" if "non-demo productions" receive as much praise and glory as the "actual" demos? I don't think so. To me, what defines the demoscene is the Core Activity and not the "realtime non-interactive production". As long as you nurture the hardcore spirit, it manifests itself in all kinds of things you produce, regardless of how static, realtime, bouncy or cubistic they are.
Parties and social networks
An important staple in keeping demo culture alive is the demoparty. It both strengthens the social bonds and motivates the people involved to create and release new material. Of course, extensive remote communication has always been there, but flesh-and-blood meetings are the ones that strengthen the relationships to span years and decades.

As there are so many people who have deeply dedicated themselves to demo art for so many years, I am convinced that there will be demoscene parties in 2020 as well. Only a global disaster of an apocalyptic scale can stop them from taking place.
While pure insider parties may be enough for keeping the demoscene alive, they are not enough for keeping it strong and vital. There is a need for fruitful contacts between demo artists and other relevant people, such as other kinds of artists and potential newcomers. High-profile mainstream computer parties, such as Assembly, have been successful in establishing these contacts in the past, but much of the potential for success has faded out during the last decade, as an average demo artist has less and less in common with an average Assembly visitor.

I think it is increasingly vital for demo artists to actively establish connections with other islets of creative culture they can relate to. The other high-profile Finnish demoparty, Alternative Party, has been very adventurous in this area. Street and museum exhibitions that bring demo art to "random" people may be fruitful as well, even in surprising ways. When looking for contacts, restricting oneself to "geeky subcultures" is not very relevant anymore, as everyone uses computers and digital storage formats nowadays, and being creative with them -- even in ways relevant to demo art -- does not require unusual levels of technological obsession.
Crosscultural contacts, in general, have the potential of giving demosceners more room to breathe. While a typical demoparty environment strongly encourages a specific type of artwork (i.e. demos), other cultural contexts may inspire demo artists to create totally different kinds of artifacts. I'm also sure that many experimental artists would be happy to try out some unique creative tools that the demo community may be able to give to them, so the collaboration may work well in both directions.
Real and virtual platforms
The relationship between demo artists and computing platforms has changed dramatically during the past ten years. Back in the nineties, you had a limited number of supported platforms with separate scenes and competitions. Nowadays, you can choose nearly any hardware or software platform you like, and different platforms often share the same competitions. Due to the existence of decent emulators and easy video capture, the scene is no longer divided by gear ownership. Anyone can watch demos from any platform, or even try to develop for almost any platform without owning the real hardware. Also, as the average age of demosceners has risen, platform fanboyism is now far less common.

The freedom is not as full as it could be, however. There are people who build their own demo hardware and are praised for this, but what about creating your own entirely software-based "virtual platforms"? Most demo artists don't even think about this idea. Of course, there are many coders who have created ad-hoc integrated virtual machines in order to, for example, improve the code density in 4K demos, but "actual" platforms are still something that need to be defined by the industry. In the past, it even required quite a tedious process before a new hardware platform became accepted by the community.
So, why would we need virtual platforms in the first place? Let's talk about the expressive power of chip music, for example. There are various historical soundchips that have different sets of features and limitations, and after using several of them, a musician may not be completely satisfied with any single chip. Instead, he or she may imagine a "perfect soundchip" that has the exact combination of features and limitations that inspires him/her in the best possible way. It may be a slight improvement on a favorite chip or a completely new design. Still, someone who composes for a virtual chip rather than an authentic historical chip may not be regarded as very "true". There is still a certain history-fetishism that discourages this kind of activity. In my earlier essay about Computationally Minimal Art, however, I expressed my belief that the historical timeline will lose its meaning in the near future. This will make "non-historical experimentation" more acceptable.

It is already relatively acceptable to run demos with emulators instead of real hardware, even in competitions, so I think it's only a matter of time before completely virtual platforms (or "fake emulators") become common. For many, this will be a blessing. Artists will be happier and more productive working with instruments that lack the design imperfections they used to hate, and the audience will be happier as it gets new kinds of esthetic forms to appreciate.
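To make the idea a bit more concrete: a "virtual soundchip" is ultimately nothing more than a written specification plus a piece of software that renders it. The following Python sketch is entirely hypothetical (it describes no existing chip or tool); it defines a chip whose limitations -- two channels, square waves only, 4-bit volume -- are an artistic choice rather than a historical accident:

```python
# A hypothetical "perfect soundchip": deliberately limited on purpose.
# Two channels, square waves only, 4-bit volume. All names and values
# here are illustrative, not a description of any real chip.
SAMPLE_RATE = 8000

def square_channel(freq, volume, num_samples):
    """Render one square-wave channel; volume is clamped to 4 bits (0-15)."""
    volume = max(0, min(volume, 15))
    period = SAMPLE_RATE / freq          # samples per waveform cycle
    return [volume if (i % period) < period / 2 else -volume
            for i in range(num_samples)]

def mix(*channels):
    """Mix channels by plain summation, as a simple chip might."""
    return [sum(samples) for samples in zip(*channels)]

frame = mix(square_channel(440, 15, 160), square_channel(660, 8, 160))
print(frame[:4])  # [23, 23, 23, 23] -- both channels start high
```

An artist could keep tweaking exactly these constraints until the instrument feels right, which is a freedom no fixed historical chip can offer.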
Virtual platforms may also introduce new problems, however. One of them is that none of the achieved technical feats can be appreciated if the platform is not well understood by the audience: if you participate in a 256-byte competition with a demo written for your own separate virtual machine, it is always reasonable for the spectator to suspect that you have cheated by transferring logic from the demo code into the virtual machine implementation. You could, for example, put an entire music synthesizer in your virtual machine and just use a couple of bytes in the demo code to drive it. If you want your technical feats appreciated, the platform needs to pass some kind of a community acceptance process beforehand.
On the other hand, virtual platforms may eventually become mandatory for certain technical feats. It is already difficult in modern operating systems, for example, to create very small executables that access the graphics and sound hardware. As the platforms "improve", it may eventually become impossible to do certain things from within, say, a four-kilobyte executable. In cases like this, the community may need to solve the problem with a commonly accepted "virtual platform", i.e. a loader that allows running executables given in a format that has less overhead. Such a loader may also be used for fixing various compatibility problems that are certain to arise when new versions of operating systems come out.
A few years from now, we may have a plethora of virtual machines attempting to represent "the ultimate demo platform". There will be a need for classifying these machines and deciding on their validity in various technical competitions. Despite all the possible problems and controversies they are going to introduce, I'm going to embrace their arrival.
But what about actual hardware platforms, then? I guess there won't be as much of a difference by 2020 anymore. FPGA implementations of classic hardware have already been available for several years, and I assume it won't take long until it is common to synthesize both emulators and physical hardware from the same source code. Once we reach the point where it is easy for anyone to use a printer-type device to produce a piece of hardware from a downloadable file, I don't think it'll really matter so much to anyone whether something is running virtually or physically.
Regarding the next decade of the mainstream hardware industry, I think the infamous Moore's law makes it all quite predictable and obvious: things that were not previously possible in real time will be easy to do in real time. There will be smaller video projectors and all that. Mobile platforms will be as powerful as today's high-end PCs, so you won't be able to get "oldschool kicks" from them anymore. If you want such kicks from an emerging technology, you won't have many niches left; conductive ink may be one of the last possibilities. Before 2020, your local grocery store will probably be selling milk in packages that have ink-based circuits displaying animations, and before that happens, I'm sure that the demoscene will be having lots of fun with the technology.
Paths of initiation
It is already a commonly accepted view that the demoscene needs newcomers to remain vital, and that they need to be actively recruited since the influx is no longer as overwhelming as it used to be. This view represents a dramatic change from the underground-elitist attitudes of the nineties, when potential newcomers were often forced through a tight social filter that was supposed to separate the gifted individuals from the "lamers". Requiring guidance was a definitive sign of weakness; if you couldn't figure out the path of initiation on your own, no one was going to help you. You simply got stuck in the filter and never got in.

In my experience, it is not very difficult to get people interested in demo art as long as you manage to pull the right strings. It is also relatively easy to get them to participate in demoscene events. But getting them involved in the various creative activities is a much more complex task, especially when talking about the inner-circle activities that require programming. It is not about a lack of will or determination but more about uncertainty about how to get started.
A lot of consideration should be put into the paths of initiation during the following decade. Instead of generalizing from their own past experiences, recruiters should listen to the stories of the recent newcomers. What kind of paths have they taken? What kind of niches have they found relevant? What have been the most difficult challenges in getting involved? Success stories and failure stories should both be listened to.
I'm now going to present some of my own ideas and observations about how democoder initiation works in today's world and how it does not. These are all based on my personal experiences with recent newcomers and not on any objective research, so feel free to disagree.
First, I want to outline my own theory about programming pedagogy. This is something I regard as a meaningful "hands-on" path for hobbyist programmers in general, not only for aspiring democoders. Lazy academic students (whose minds get "mutilated beyond recovery" by a careless choice of first language) may prefer a more theoretical route, but this three-phase model is something I have seen work even for the young and the practical-minded, from one decade to another.
- First phase: Toy Language. It should have an easy learning curve and reward your efforts as soon as possible. It should encourage you to experiment and gradually give you the first hints of a programming mindset. Languages such as BASIC and HTML+PHP have been popular in this phase among actual hobbyists.
- Second phase: Assembly Language. While your toy language had a lot of different building blocks, you now have to get along with a limited selection. This immerses you into a "virtual world" where every individual choice you make has a tangible meaning. You may even start counting bytes or clock cycles, especially if you chose a somewhat restricted platform.
- Third phase: High Level Language. After working on the lowest level of abstraction, you now have the capacity for understanding the higher ones. The structures you see in C or Java code are abstractions of the kind of structures you built from your "Lego blocks" during the previous phase. You now understand why abstractions are important, and you may also eventually begin to understand the purposes of different higher-level programming techniques and conventions.
Based on this theory, I think it is a horrible mistake to recommend the modern PC platform (with Win32, DirectX/OpenGL, C++ and so on) to an aspiring democoder who doesn't have in-depth prior knowledge of programming. Even though it might be relatively easy to get "outstanding" visual results, the programmer may become frustrated by his or her vague understanding of how and why their programs work.
Not everyone has the mindset for learning an actual "oldschool platform" on their own, however. I therefore think it might be useful to develop an "educational demoscene platform" that is easy to learn, simple in structure, fun to experiment with and "hardcore" enough to promote a proper attitude. It might even be worthwhile to incorporate the platform in some kind of a game that motivates the player to go through varying "challenges". Putting the game online and binding it to a social networking site may also motivate some people quite a lot and give the project some additional visibility.
Conclusion
We have now covered many different aspects of the future of demo art in the 2010s, and it is now time to summarize. If we crystallize the prognosis into a single word, "diversity" might be a good choice.

It indeed seems that the diversity in what demo artists produce will continue to increase in all areas. There will be more platforms available, many of them designed by the artists themselves. There will be more alternatives to the traditional realtime non-interactive demo, especially via the various "new" venues provided by "crosscultural contacts". And I'm sure that the range of conceptual and esthetic experimentation will broaden as well.
Back in the nineties, most demo artists were "playing the same game", with the same rules and relatively similar goals. After that, the challenges became much more individual, with different artists finding their very own niches to operate in. There are still "major categories" today, but as the new decade continues, they will have less and less meaning compared to the more individual quests. This may also reduce the competitive aspect of the demo culture: as everyone is playing their own separate game, it is no longer possible to compare the players. Perhaps, at some point in time, someone will even question the validity of the traditional compo format.

Another keyword for the next decade could be "openness". It will show both in the increased outreach and in "crossculturality". There will be an increasing number of demo artists who operate in other contexts besides the good old demoscene, and perhaps there will also be more and more "outsiders" who want to try out the "demoscene way" for a change, without intentions of becoming more integral members of the subculture.
In the nineties, many in the scene were dreaming about careers in the video game industry. After that, there have been similar dreams about the art world: gaining acceptance, perhaps even becoming professional artists. The dreams about the video game industry came true for many, so I'm convinced that the dreams about the art world will come true as well.
Sunday, 18 April 2010
Behind "Dramatic Pixels"
I released a minimalistic demo called "Dramatic Pixels" at Breakpoint 2010. It is an experiment in narrative using very minimal visual output: three colored character blocks ("big pixels") moving on an entirely black background, synchronized to musical accompaniment. (CSDB, Pouet.net)
I was expecting the demo to cause very mixed reactions in the audience, but to my surprise, it actually won the competition it was in (4-kilobyte Commodore 64 demo) and the reception has been almost entirely positive. This -- along with the fact that a somewhat similar production was released by Skrju and Triebkraft for the ZX Spectrum just two months earlier -- inspired me to write this short essay about the philosophy behind this production. And besides, visy/trilobit has also blogged about "Dramatic Pixels" recently, so I think I am obliged to do the same.
Background
For quite some time already, I have been on a philosophical excursion into the nature of "hard-core" digital creativity, especially the deep essences of the demoscene and the "8-bit" culture. The biggest visible result of this excursion so far has been my recent essay about Computationally Minimal Art, which, among other things, separates the ideas of "optimalism" and "reductivism". I have noticed that audiovisual digital culture (including the demoscene) has traditionally been very optimalist in nature, aiming at fitting as much complexity as possible within given boundaries. The opposite approach, reductivism, which embraces minimal complexity itself as an esthetic goal, is very seldom used by the demoscene, however.

In December 2009, I was pondering how to express "complex real-world phenomena" such as human emotions via "extreme reductivism". I was planning to design a low-pixel "video game character" that shows a wide range of emotions with facial and bodily expressions, and I particularly wanted to find out the minimum number of facial pixels required to express all the nine emotional responses (rasas) of the Indian theatre. When minimizing the number of pixels, however, I realized that facial expressions might not in fact be necessary at all; movement patterns and rhythms alone seemed to be enough for differentiating fear from bravery, or certainty from uncertainty. If the character only needs to move around for full expressive power, its pixel pattern can very well be reduced to a single pixel.
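The claim that movement alone can carry emotion is easy to test with a toy sketch. The following Python fragment is purely my own illustration (it has nothing to do with the actual demo code): it generates two single-pixel trajectories towards the same target, one advancing steadily and one hesitantly, and already the difference in rhythm reads as "brave" versus "fearful" without a single facial pixel:

```python
def brave_path(distance):
    """A steady, unbroken advance towards the target: reads as confidence."""
    return [1] * distance  # one step forward on every beat

def fearful_path(distance):
    """Two steps forward, one step back: the hesitation reads as fear."""
    steps, position = [], 0
    while position < distance:
        for move in (1, 1, -1):
            steps.append(move)
            position += move
            if position >= distance:
                break
    return steps

# Both pixels cover the same distance, but with very different rhythms.
print(len(brave_path(6)), len(fearful_path(6)))  # 6 14
```

The two functions are of course caricatures, but they show the principle: personality lives in the movement pattern, not in the sprite.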
I quickly did a couple of experiments with this idea of "pixel drama". As the results were convincing enough, I started to plan a minimalistic movie using only single-pixel characters. As the movie was quite likely to be implemented as a demoscene production, I thought it would be important to have a somewhat "operatic" approach, synchronizing the visual action with a strong musical accompaniment.
After some initial sketches, I didn't really think about the idea for a couple of months. But less than a week before the Breakpoint party, I decided to implement it on the C-64. The choice of platform could have been just about anything, however, from the VCS to Win32. The C-64 just seemed like the best and easiest choice considering the competition categories available at Breakpoint. The size of the demo ended up being about 1.5 kilobytes, and I later also released a 1K version where the introductory text was removed.
The demo itself
Technically, everything in "Dramatic Pixels" is centered around the music player routine, which is also responsible for the choreography: the bytes that encode the notes of the lead channel also contain bits that control the movement of the pixels. To be exact, every time a new note is played by the lead instrument, exactly one of the three pixels takes a single step towards one of the four cardinal directions. This is an intentional technical decision that ties the pixel movement seamlessly to the music. Internally, the whole show is a series of looping sequences that are both musical and visual at the same time.

All the actual musical notes, by the way, are encoded by only two bits each. These two bits form an index into a four-note set, which is defined by two variables (indicating base pitch and harmonic structure). These variables are manipulated on the fly by a higher-level control routine that is also responsible for the other macro-level changes in the demo. I prefer to encode melodies in this way rather than as absolute pitches, as a more "indirect" approach makes the encoding more compact and closer to the essence of the musical structure. And, in the case of this demo, I wanted some minimalism (or maybe serialism) in the musical score as well, and the possibility to repeat the same patterns in different modes helps with this goal.
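As a rough illustration of what such a packed format could look like, here is a Python sketch of decoding one lead-channel byte into a note and a choreography step. The exact bit layout, names and note set below are my own guesses for illustration only; the real implementation is the 6502 code in the released source:

```python
# Hypothetical layout of one lead-channel byte (NOT the demo's actual format):
#   bits 0-1: note index into the current four-note set
#   bits 2-3: which of the three pixels takes a step
#   bits 4-5: step direction (one of the four cardinal directions)
DIRECTIONS = {0: (0, -1), 1: (1, 0), 2: (0, 1), 3: (-1, 0)}  # N, E, S, W

def decode_event(byte, base_pitch, note_set):
    """Turn one packed byte into (pitch, pixel index, movement delta)."""
    note_index = byte & 0b11
    pixel_index = (byte >> 2) & 0b11
    direction = (byte >> 4) & 0b11
    return base_pitch + note_set[note_index], pixel_index, DIRECTIONS[direction]

# The four-note set is defined by a base pitch and a "harmonic structure":
# changing either one replays the same packed pattern in a new key or mode.
minor_set = [0, 3, 7, 12]  # root, minor third, fifth, octave (illustrative)
print(decode_event(0b011011, 48, minor_set))  # (60, 2, (1, 0))
```

The compactness comes from the indirection: the melody data never stores absolute pitches, so the higher-level control routine can transpose or re-mode entire sequences just by rewriting two variables.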
The 6502 assembly source code of the 1K version is available for those who are interested. It should be relatively easy to port to any 6502-based platform (with the music player probably requiring most work), so I've been planning on releasing separate versions for VIC-20 and Atari 2600 as well.
So, what about the story, then? Most of the interpretations I've heard have been somewhat similar and close to my own intentions, so I think my decisions about the audiovisual language have been relatively successful: Red and Blue meet, fall in love, become estranged, cheat on each other with Green, and in the end everyone gets killed. However, there are some portions that are apparently more difficult to interpret.

When I created the characters, I had no intentions of assigning genders to the pixels. Still, some people have interpreted Red as male and Blue as female. This probably stems from the differences in the base pitches (when Blue moves, the pitch is an octave higher than when Red moves), but the personalities of the pixels may also matter. Red is more stereotypically masculine, taking more of the initiative, while Blue mostly responds to these initiatives. I don't know whether the interpretations would have been different if I had chosen Blue to be the initiator.
The second part, where Red and Blue spend time on the opposite sides of the screen, is perhaps the most difficult to follow. I intended this part to represent everyday life where both pixels have their own daytime activities and only see each other at home very briefly in the evenings (and don't pay much attention to one another even then). Also, the workplaces are so far away that the pixels can't see each other cheating until Red decides to get closer to Blue's workplace. And no, Green does not represent two different pixel personalities depending on the partner -- it's the same despicable creature in all cases. The part is intentionally slightly too long and repetitive in order to emphasize the frustration that repetitive everyday routines may lead to.

Comparison to the Spectrum demo
I would now like to compare "Dramatic Pixels" to the 256-byte Spectrum demo I mentioned earlier, "A true story from the life of a lonely cell" by Sq/Skrju and Psndcj/Triebkraft. Although I'm trying to follow the Spectrum demoscene due to some very visionary groups therein, this demo was so recent that I never managed to even hear about it until I had finished "Dramatic Pixels".

In both demos, there are three characters represented by solid-colored blocks. The blocks express emotion mostly by the way they move. In "A true story", all movement happens in one dimension, so it is basically all about back-and-forth movement in varying rhythms. "Dramatic Pixels" can be very easily seen as a refinement of this concept, adding a musical accompaniment and another dimension (although it may very well have worked in 1D as well). The stories in both demos are based on the love triangle model, although my story is a little bit more complex.
"Great minds think alike", yes, but the coincidence still baffles me. Is it really just a coincidence or a result of some external factors? Deep thoughts about the state of the demoscene, perhaps combined with some general angst about the potential of the art form in the 2010s, were part of the mental process that led me to create "Dramatic Pixels". I haven't discussed this with Sq, but perhaps there was something similar going on in his mind as well.
To add an additional spice to the mystery: the recent video game inspired short film called "Pixels" was put on the web on the same day (2010-04-07) as I put the video of "Dramatic Pixels" on Youtube.
The bigger purpose
For some time already, I have been writing pretty words about "thinking out of the box" in the demoscene context. But pretty words are hollow unless you back them up with some practical evidence, such as an actual demo.

I considered it important to finish "Dramatic Pixels" for Breakpoint, as I had just recently released my essay about Computationally Minimal Art. I wanted to release a production that would support some of its ideas, especially the equality of reductivism as a "boundary-pushing" approach.
Anyway, I hope this experiment broke some new ground that would inspire some further experimentation in computational minimalism. I think traditional minimalists have already done quite a lot of "basic research" during the last hundred years or so, so I would like the inspired productions to choose a fresh route by emphasizing those areas that are unique in the computational approach.
Monday, 15 March 2010
Defining Computationally Minimal Art (or, taking the "8" out of "8-bit")
![[Icon Watch designed by &design]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/iconwatch.jpg)
Introduction
"Low-tech" and "8-bit" are everywhere nowadays. Not only are the related underground subcultures thriving, but "retrocomputing esthetics" seems to pop up every now and then in mainstream contexts as well: obvious chip sounds can be heard in many pop music songs, and there are many examples of "old video game style" in TV commercials and music videos. And there are even "pixel-styled" physical products, such as the pictured watch sold by the Japanese company "&design". I'm not a grand follower of popular culture, but it seems to me that the trend is increasing.
The most popular and widely accepted explanation for this phenomenon is the "nostalgia theory", i.e. "People of the age group X are collectively rediscovering artifacts from the era Y". But I'm convinced that there's more to it -- something more profound that is gradually integrating "low-tech" or "8-bit" into our mainstream cultural imagery.
Many people have become involved with low-tech esthetics via nostalgia, but I think it is only the first phase. Many don't experience this phase at all and jump directly to the "second phase", where pixellated graphics or chip sounds are simply enjoyed the way they are, totally ignoring the
historical baggage. There is even an apparent freshness or novelty value for some people. This happens with audiences that are "too young" (like the users of Habbo Hotel) or otherwise more or less unaffected by the "oldskool electronic culture" (like many listeners of pop music).
Since the role of specific historical eras and computer/gaming artifacts is diminishing, I think it is important to provide a neutral conceptual basis for "low-tech esthetics"; an independent and universal definition that does not refer to the historical timeline or some specific cultural technology. My primary goal in this article is to provide this definition
and label it as "Computationally Minimal Art". We will also be looking for support for the universality of Computationally Minimal Art and finding ur-examples that are even older than electricity.
A definition: Computationally Minimal Art
Once we strip "low-tech esthetics" of its historical and cultural connections, we will be left with "pixellated shapes and bleepy sounds" that share an essential defining element. This element stems from what is common to the old computing/gaming hardware in general, and it is perfectly possible to describe it in generic terms, without mentioning specific platforms or historical eras.
![[Space Invaders sprite]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/spaceinvader.gif)
The defining element is LOW COMPUTATIONAL COMPLEXITY, as expressed in all aspects of the audiovisual system: the complexity of the platform (i.e. the number of transistors or logic gates in the hardware), the complexity of the software (i.e. the length in bits of the program code and static data), as well as the time complexity (i.e. how many state changes the computational
tasks require). A more theoretical approach would eliminate the differentiation of software and hardware and talk about description/program length, memory complexity and time complexity.
There's little more that needs to be defined; all the important visible and audible features of "low-tech" emerge from the various kinds of low complexity. Let me elaborate with a couple of examples:
- A low computing speed leads to a low number of processed and output bits per time frame. In video output, this means low resolutions and limited color schemes. In audio output, this means simple waveforms on a low number of discrete channels.
- A short program+data length, combined with a low processing speed, makes it preferable to have a small set of small predefined patterns (characters, tiles, sprites) that are extensively reused.
- A limited amount of temporary storage (emerging from the low hardware complexity) also supports the former two examples via the small amount of available video memory.
- In general, the various types of low complexity make it possible for a human being (with some expertise) to "see the individual bits with a naked eye and even count them".
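To make the "countable bits" point concrete, here is a Python sketch of an 8x8 one-bit sprite stored as eight bytes; the invader-like shape is my own, not actual game data:

```python
# An 8x8 one-bit sprite as eight bytes: the entire description is
# 64 bits, and every one of them is individually visible in the output.
SPRITE = bytes([0b00011000,
                0b00111100,
                0b01111110,
                0b11011011,
                0b11111111,
                0b00100100,
                0b01011010,
                0b10100101])

for row in SPRITE:
    # Render each bit, most significant first, as '#' (set) or '.' (clear).
    print(''.join('#' if row & (1 << (7 - bit)) else '.' for bit in range(8)))

print(len(SPRITE) * 8, "bits")  # 64 bits
```

The shape in the printout can be checked against the bytes bit by bit, with the naked eye -- which is exactly what "human-countable bits" means here.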
In order to complete the definition, we still have to know what "low" means. It may not be wise to go for an arbitrary threshold here ("less than X transistors in logic, less than Y bits of storage and less than Z cycles per second"), so I would like to define it as "the lower the better". Of course, this does not mean that a piece of low-tech artwork would ideally consist of one flashing pixel and static square-wave noise, but that the most essential elements of this artistic branch are those that persist the longest when the complexity of the system approaches zero.
Let me therefore dub the idealized definition of "low-tech art" as Computationally Minimal Art (CMA).
To summarize: "Computationally Minimal Art is a form of discrete art governed by a low computational complexity in the domains of time, description length and temporary storage. The most essential features of Computationally Minimal Art are those that persist the longest when the
various levels of complexity approach zero."
How to deal with the low complexity?
Traditionally, of course, low complexity was the only way to go. The technological and economical conditions of the 1970s and 1980s made the microelectronic artist bump into certain "strict boundaries" very soon, so the art needed to be built around these boundaries regardless of the artist's actual esthetic ideals. Today, on the other hand, immense and virtually unlimited amounts of computing capacity are available to practically everyone who desires it, so computational minimalism is nearly always a conscious choice. There are, therefore, clear differences in how the low complexity has been dealt with in different eras and
disciplines.
I'm now going to define two opposite approaches to low complexity in computational art: optimalism (or "oldschool" attitude), which aims at pushing the boundaries in order to fit in "as much beauty as possible", and reductivism (or "newschool" attitude), which idealizes the low complexity itself as a source of beauty.
Disclaimer: All the exaggeration and generalization is intentional! I'm intending to point out differences between various extremities, not to portray any existing "philosophies" accurately.
Optimalism
Optimalism is a battle of maximal goals against a minimal environment. There are absolute predefined boundaries that provide hard upper limits for the computational complexity, and these boundaries are then pushed by fitting as much expressive power as possible between them. This approach is the one traditionally applied to mature and static hardware platforms by the
video game industry and the demoscene, and it is characterized by the appreciation of optimization in order to reach a high content density regardless of the limitations.
![[Frog, Landscape and a lot of Clouds by oys]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/pixelgfxexample.gif)
A piece of traditional European-style pixel graphics ("Frog, Landscape and a lot of Clouds" by oys) exemplifies many aspects of optimalism. The resolution and color constraints of a video mode (in this case, non-tweaked C-64
multicolor) provide the hard limits, and it is the responsibility of the artist to fill up the space as wisely and densely as possible. Large single-colored areas would look "unfinished", so they are avoided, and if it is possible to fit in more detail or dithering somewhere, it should be done. Leaving an available color unused is also to be avoided -- an idea which leads to the infamous "Dutch color scheme" when applied to high/truecolor video modes.
When applied to chip music, the optimalist dogma dictates, among other things, filling in all the silent parts and avoiding "simple beeps". Altering the values of as many sound chip registers per frame as possible is considered efficient use of the chip. This adds to the richness of the sound, which is thought to correlate with the quality of the music.
![[Artefacts by Plush]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/demoexample.gif)
On platforms such as the Commodore 64, the demoscene and the video game industry seem to have held relatively similar ideals. Once an increased computing capacity becomes available, however, an important difference between these cultures is revealed. Whenever the video game
industry gets more disk space or other computational resources, it will try to use them up as aggressively as possible, without starting any optimization efforts until the new boundaries have been reached. The demoscene, on the other hand, values optimality and content density so much that it often prefers to stick to old hardware or artificial boundaries in order to keep the "art of optimality" alive. The screenshot is from the 4K demo "Artefacts" by Plush (C-64).
Despite the cultural differences, however, the core esthetic ideal of optimalism is always "bigger is better"; that an increased perceived content complexity is a requirement for increased beauty. Depending on the circumstances, more or less pushing of boundaries is required.
Reductivism
Reductivism is the diametrical opposite of optimalism. It is the appreciation of minimalism within a maximal set of possibilities, embracing the low complexity itself as an esthetic goal. The approach can be equated with the artistic discipline of minimal art, but it should be remembered that the idea is much older than that. Pythagoras, who lived around 2500 years ago, already appreciated the role of low complexity -- in the form of mathematical beauty such as simple numerical ratios -- in music and art.
The reductivist approach does not lead to a similar pushing of boundaries as optimalism, and in many cases, strict boundaries aren't even introduced. Regardless, a kind of pushing is possible -- by exploring ever simpler structures and their expressive power -- but most reductivists don't seem to be interested in this aspect. It is usually enough that the output comes out as "minimal enough" instead of being "as minimal as possible".
![[VVVVVV by Terry Cavanagh]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/vvvvvv.gif)
The visuals of the recent acclaimed Flash-based platformer game, VVVVVV, are a good example of computational minimalism with a reductivist approach. The author, Terry Cavanagh, has not only chosen a set of voluntary "restrictions" (reminiscent of mature computer platforms) to guide the
visual style, but keeps to a reductivist attitude in many other aspects as well. Just look at the "head-over-heels"-type main sprite -- it is something that a child would be able to draw in a minute, and yet it is perfect in the same iconic way as the Pac-Man character is. The style totally serves its purpose: while it is charming in its simplicity and downright naivism, it
shouts out loud at the same time: "Stop looking at the graphics, have fun with the actual game instead!"
![[Thrust]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/thrust0.gif)
Although reductivism may be regarded as a "newschool" approach, it is possible to find some slightly earlier examples of it as well. The graphics of the 1986 computer game Thrust, for example, have been drawn with simple geometrical lines and arcs. The style is reminiscent of older vector-based arcade games such as Asteroids and Gravitar, and it definitely serves a technical purpose on such hardware. But on home computers with bitmapped screens and sprites, the approach can only be an esthetic one.
Optimalism versus Reductivism
Optimalism and reductivism sometimes clash, and an example of this can be found in the chip music community. After a long tradition of optimalism through the efforts of the video game industry and the demoscene, a new kind of cultural branch was born. This branch, sometimes mockingly called "cheaptune", seems to get most of its kicks from the unrefined roughness of the pure squarewave rather than the pushing of technological and musical boundaries that has been characteristic of the "oldschool way". To an optimalist, a reductivist work may feel lazy or unskilled, while an optimalist work may feel "too full" or "too refined" to a reductivist mindset.
Still, when working within constraints, there is room for both approaches. Quite often, an idea is good for both sides; a simple and short algorithm, for example, may be appreciated by an optimalist because the saved bytes leave room for something more, while a reductivist may regard the technical concept as beautiful in its own right.
Comparison to Low-Complexity Art
Now I would like to compare my definition of Computationally Minimal Art to another concept with a somewhat similar basis: Jürgen Schmidhuber's Low-Complexity Art.
![[A low-complexity face picture by Juergen Schmidhuber]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/lcaface.gif)
While CMA is an attempt to formalize "low-tech computer art", Schmidhuber's LCA comes from another direction, being connected to an ages-old tradition that attempts to define beauty by mathematical simplicity. The specific mathematical basis used in Schmidhuber's theory is Kolmogorov complexity, which defines the complexity of a given string of information (such as a picture) as the length of the shortest computer program that outputs it. Kolmogorov's theory works on a high level of generalization, so the choice of language does not matter as long as you
stick to it.
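Kolmogorov complexity itself is uncomputable, but any general-purpose compressor gives an upper bound on it. The following Python sketch (my own illustration, not Schmidhuber's actual method) shows how a highly regular "image" compresses far better than random noise:

```python
import random
import zlib

# Two 64x64 one-byte-per-pixel "images": a structured XOR texture,
# and uniform random noise with a fixed seed.
random.seed(0)
regular = bytes((x ^ y) for y in range(64) for x in range(64))
noise   = bytes(random.randrange(256) for _ in range(64 * 64))

# The compressed length approximates (an upper bound on) the
# Kolmogorov complexity of each image.
print(len(zlib.compress(regular, 9)))  # small: the structure is exploitable
print(len(zlib.compress(noise, 9)))    # close to the raw 4096 bytes: no structure
```

In LCA terms, the XOR texture is the kind of input a "mental compressor" would handle well, while the noise image gives it nothing to work with.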
Schmidhuber sees, in "down-to-earth coder terms", that the human mind contains a built-in "compressor" that attempts to represent sensory input in a form as compact as possible. Whenever this compression process succeeds well, the input is perceived as esthetically pleasing. It is a well-studied fact that people generally perceive symmetry and regularity as more beautiful than asymmetry and irregularity, so this hypothesis of a "mental compressor" cannot be dismissed as just an arbitrary crazy idea.
Low-Complexity Art tests this hypothesis by deliberately producing graphical images that are as compressible as possible. One of the rules of LCA is that an "informed viewer" should be able to perceive the algorithmic simplicity quite easily (which also effectively limits the time complexity of the algorithm, I suppose). Schmidhuber himself has devised a system based
on indexed circle segments for his pictures.
![[Superego by viznut/pwp]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/superego.gif)
The above picture is from "Superego", a tiny PC demo I made in 1998. The picture takes some tens of bytes and the renderer takes less than 100 bytes of x86 code. Unfortunately, there is only one such picture in the demo, although the 4K space could have easily contained tens of pictures. This is because the picture design process was so tedious and counter-intuitive --
something that Schmidhuber has encountered with his own system as well. Anyway, when I encountered Schmidhuber's LCA a couple of years after this experiment, I immediately realized its relevance to size-restricted demoscene productions -- even though LCA is clearly a reductivist approach as opposed to the optimalism of the mainstream demoscene.
What Low-Complexity Art has in common with Computationally Minimal Art is the concern about program+data length; a minimalized Kolmogorov complexity has its place in both concepts. The relationship with other types of complexity is different, however. While CMA is concerned about all the types of complexity of the audiovisual system, LCA leaves time and memory complexity out of the rigid mathematical theory and into the domain of a "black box" that processes sensory input in the human brain. This makes LCA much more theoretical and psychological than CMA, which is mostly concerned about "how the actual bits move". In other words, LCA makes you look at
visualizations of mathematical beauty and ignore the visualization process, while CMA assigns utmost importance to the visualizer component as well.
Psychological considerations
Now, an important question: why would anyone want to create Computationally Minimal Art for purely esthetic reasons -- novelty and counter-esthetic values aside? After all, those "very artificial bleeping sounds and flashing pixels" are quite alien to an untrained human mind, aren't they? And even many fans admit that a prolonged exposure to those may cause headache.
It is quite sensible to assume that the perception mechanisms of the human species, evolved over hundreds of millions of years, are "optimized" for perceiving the natural world, a highly complex three-dimensional environment with all kinds of complex lighting and shading conditions. The extremely brief technological period has not yet managed to alter the "built-in defaults" of the human mind in any way. Studies show, for example, that people all over the world prefer to be surrounded by wide-open landscapes with some water and trees here and there -- a preference that was fixed in our minds during our millions of years on the African savannah.
![[Synchiropus splendidus, photographed by Luc Viatour]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/natureexample.jpg)
So, the untrained mind prefers a photorealistic, high-fidelity sensory input, and that's it? No, it isn't that simple, as the natural surroundings haven't evolved independently from the sensory mechanisms of their inhabitants. Fruits and flowers prefer to be symmetric and vivid-colored because animals prefer them that way, and animals prefer them that way because it is beneficial for their survival to like those features, and so on. The natural world is full of signalling which is a result of millions of years of coevolutionary feedback loops, and this is also an important source for our own sense of esthetics. (The fish in the picture, by the way, is a Synchiropus splendidus, photographed by Luc
Viatour.)
I'm personally convinced that natural signalling has a profound preference for low complexity. Symmetries, regularities and strong contrasts are important because they are easy and effortless to detect, and the implementation requires a relatively low amount of genetic coding on both
the "transmitter" and "receiver" sides. These are completely analogous to the various types of computational complexity.
So, why does enjoying Computationally Minimal Art require "mental training" in the first place? I think it is not because of the minimality itself but because of certain peculiarities that arise from the higher complexity of the natural world. We can't see individual atoms or even cells, so we haven't evolved a built-in sense for pixel patterns. Also, the sound generation
mechanisms in nature are mostly optimized to the constraints of pneumatics rather than electricity, so we don't really hear squarewave arpeggios in the woods (although some birds may come quite close).
But even though CMA requires some special adjustment from the human mind, it is definitely not alone in this area. Our cultural surroundings are full of completely unnatural signals that need similar adjustments. Our music uses instruments that sound totally different from any animal, and
practically all musical genres (apart from the simplest lullabies, I think) require an adjustment period. So, I don't think there's anything particularly "alien" in electronic CMA apart from the fact that it hasn't yet been integrated into our mainstream culture.
CMA unplugged
The final topic we cover here is the extent to which Computationally Minimal Art, under our strict definition, can be found. As the definition is independent of technology, it is possible to find ur-examples that predate computers or even electricity.
In our search, we are ignoring the patterns found in the natural world because none of them seem to be discrete enough -- that is, they fail to have "human-countable bits". So, we'll limit ourselves to the artifacts found in human culture.

Embroidery is a very old area of human culture that has its own tradition of pixel patterns. I guess everyone familiar with electronic pixel art has seen cross-stitch works that immediately bring pixel graphics to mind. The similarities have been widely noted, and there have been [quite many craft projects](http://www.spritestitch.com/) inspired by old video games. But is this just a superficial resemblance or can we count it as Computationally Minimal Art?
![[Traditional monochrome bird patterns in cross-stitch]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/crossstitchexample1.jpg)
Cross-stitch patterns are discrete, as they use a limited set of colors and a rigid grid form which dictates the positions of each of the X-shaped, single-colored stitches. "Individual bits are perceivable" because each pixel is easily visible and the colors of the "palette" are usually easy to tell apart. The low number of pixels limits the maximum description length, and one doesn't need to keep many different things in mind while working either. Thus, cross-stitch satisfies all the parts of the definition of Computationally Minimal Art.
What about the minimization of complexity? Yes, it is also there! Many traditional patterns in textiles are actually algorithmic or at least highly repetitive rather than "fully hand-pixelled". This is somewhat natural, as the old patterns have traditionally been memorized, and the memorization is much easier if mnemonic rules can be applied.
There are also some surprising similarities with electronic CMA. Many techniques (like knitting and weaving) proceed one complete row of "pixels" at a time (analogous to the raster scan of TV-like displays), and often, the set of colors is changed between rows, which corresponds very well to the use of raster synchronization in oldschool computer graphics. There are even peculiar technique-specific constraints in color usage, just like there are similar constraints in many old video chips.
![[Pillow from 'Introduction to Fair Isle']](http://www.pelulamu.net/countercomplex/computationally-minimal-art/algoembroideryexample.jpg)
The picture above (source) depicts a pillow knitted with the traditional Fair Isle technique. It is apparent that there are two colors per "scanline", and these colors are changed between specific lines (compare to rasterbars). The patterns are based on sequential repetition, with the sequence changing on a per-scanline basis.
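The row-based structure can be sketched in a few lines of Python; the pattern data below is a toy example of my own, not the pictured pillow:

```python
# Each "scanline" is a (foreground, background, sequence) triple; the
# sequence is tiled across the full width. The per-row color pairs are
# the knitting counterpart of per-scanline palette changes (rasterbars).
ROWS = [
    ('R', 'w', '#..#'),
    ('R', 'w', '.##.'),
    ('B', 'y', '#.#.'),
    ('B', 'y', '.#.#'),
]

WIDTH = 16
for fg, bg, seq in ROWS:
    tiled = (seq * (WIDTH // len(seq) + 1))[:WIDTH]  # repeat sequence to width
    print(''.join(fg if c == '#' else bg for c in tiled))
```

Note how little data defines the whole fabric: a handful of short sequences plus per-row color pairs, just like the "mnemonic rules" that make traditional patterns memorizable.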
Perhaps the most interesting embroidery patterns from the CMA point of view are the oldest ones that remain popular. Over the centuries, the traditional patterns of various cultures have reached a kind of multi-variable optimality, minimizing the algorithmic and technical complexity while maximizing the eye-pleasingness of the result. These patterns may very well
be worth studying by electronic CMA artists as well. Things like this are also an object of study for the field of ethnomathematics, so that's another word you may want to look up if you're interested.
What about the music department, then? Even though human beings have written music down in discrete notation formats for a couple of millennia already, the notes alone are not enough for us. CMA emphasizes the role of the rendering, and the performance therefore needs to be discrete as well. As it seems that every live performance has at least some non-discrete variables, we will need to limit ourselves to automatic systems.
![[A musical box]](http://www.pelulamu.net/countercomplex/computationally-minimal-art/musicbox.jpg)
The earliest automatic music was mechanical, and arguably the simplest conceivable automatic music system is the musical box. Although the musical box isn't exactly discrete, as the barrel rotates continuously rather than stepwise, I'm sure that the pins have been positioned with an engineer's accuracy as guided by written music notation. So, it should be discrete enough to satisfy our demands, and we may very well declare the musical box as being the mechanical counterpart of chip music.
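In CMA terms, such a barrel reduces to a discrete pin grid: one axis is time in fixed subdivisions, the other is the set of teeth on the comb. A Python sketch with a made-up melody (the comb tuning and pin positions are invented for illustration):

```python
# A musical box as a pin grid: a pin at (time_step, tooth) means that
# tooth is plucked at that step of the barrel's rotation.
TEETH = ['C', 'D', 'E', 'G', 'A']  # a small pentatonic comb, say

PINS = {(0, 'C'), (1, 'E'), (2, 'G'), (3, 'E'), (4, 'C')}  # a tiny melody
STEPS = 5

for tooth in TEETH:
    row = ''.join('o' if (t, tooth) in PINS else '.' for t in range(STEPS))
    print(f"{tooth} |{row}|")
```

The whole score is a small set of human-countable pin positions -- the same "visible bits" quality that the definition of CMA asks for.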
Conclusion
I hope these ideas can provide food for thought for people interested in the various forms of "low-tech" electronic art as well as computational art or "discrete art" in general. I particularly want people to realize the universality of Computationally Minimal Art and how it works very well outside of the rigid "historical" contexts it is often confined to.
I consciously skipped all the cultural commentary in the main text on my quest for proving the universality of my idea, so perhaps it's time for that part now.
In this world of endless growth and accumulation, I see Computationally Minimal Art as standing for something more sustainable, tangible and crafty than what the growth-oriented "mainstream cultural industry" provides. CMA represents the kind of simplicity and timelessness that is totally immune to the industrial trends of fidelity maximization and planned obsolescence. It is something that can be brought to perfection by an individual artist,
without hiring a thousand-headed army of specialists.
As we are in the middle of a growth phase, we can only guess what kind of forms Computationally Minimal Art will take in the future, and what kind of position it will eventually acquire in our cultural framework. We are living in interesting times indeed.