Friday, 17 June 2011
We need a Pan-Hacker movement.
The pioneering hackers felt that computers had changed their lives for the better and therefore wanted to share this new means of improvement with everyone else. They thought everyone should have access to a computer, and not just any kind of access but an unlimited, non-institutionalized one. Something like a cheap personal computer, for example. Eventually, in the seventies, some adventurous hackers bootstrapped the personal computer industry, which led to the so-called "microcomputer revolution" in the early eighties.
The era was filled with hopes and promises. All kinds of new possibilities were now at everyone's fingertips. It was assumed that programming would become a new form of literacy, something every citizen should be familiar with -- after all, using a computer to its fullest potential has always required programming skill. "Citizens' computer courses" were broadcast on TV and radio, and parents bought cheap computers for their kids to ensure a bright future for the next generation. Some prophets even went so far as to suggest that personal computers could augment people's intellectual capacities or even expand their consciousness in the way psychedelic drugs were thought to.
In the nineties, however, reality struck back. Selling a computer to everyone was apparently not enough to automatically turn its users into superhuman creatures. As a matter of fact, digital technology actually seemed to dumb a lot of people down, making them helpless and dependent rather than liberating them. Hardware and software have become ever more complex, and it is already quite difficult to build reliable mental models of them or even be aware of all the automation that takes place. Instead of actually understanding and controlling their tools, people just make educated guesses about them and pray that everything works out right. We are increasingly dependent on digital technology but have less and less control over it.
So, what went wrong? Hackers opened the door to universal hackerdom, but the masses didn't enter. Are most people just too stupid for real technological awareness, or are the available paths to it too difficult or time-consuming? Is the industry deliberately trying to dumb people down with excessive complexity, or is it just impossible to make advanced technology any simpler to genuinely understand? In any case, the hacker movement has somewhat forgotten the idea of making digital technology more accessible to the masses. It's a pity, since the world needs this idea now more than ever. We need to give common people back the possibility to understand and master the technology they use. We need to let them ignore the wishes of the technological elite and regain control of their own lives. We need a Pan-Hacker movement.
What does "Pan-Hacker" mean? I'll be giving three interpretations that I find equally relevant, emphasizing different aspects of the concept: "everyone can be a hacker", "everything can be hacked" and "all hackers together".
The first interpretation, "everyone can be a hacker", expands on the core idea of oldschool hackerdom, the idea of making technology as accessible as possible to as many as possible. The main issue is no longer the availability of technology, however, but the way the various pieces of technology are designed and what kinds of user cultures form around them. Ideally, technology should be designed so that it invites the user to seize control, play around for fun and gradually develop an ever deeper understanding in a natural way. User cultures that encourage users to invent new tricks should be embraced and supported, and there should be different "paths of hackerdom" for all kinds of people with all kinds of interests and cognitive frameworks.
The second interpretation, "everything can be hacked", embraces the trend of extending the concept of hacking beyond the technological sphere. The generalized idea of hacking is relevant to all kinds of human activities, and the principles of in-depth understanding and hands-on access apply to all aspects of life. As the apparent complexity of the world is constantly increasing, it is particularly important to maintain and develop people's ability to understand the world and the things that affect their lives.
The third interpretation, "all hackers together", wants to eliminate the various schisms between the existing hacker subcultures and bring them into fruitful co-operation. There is, for example, a popular text, Eric S. Raymond's "How To Become A Hacker", that represents a somewhat narrow-minded "orthodox hackerdom" which sees the free/open-source software culture as the only hacker culture worth contributing to. It frowns upon all non-academic hacker subcultures, especially the ones that use handles (such as the demoscene, which is my own primary reference point to hackerdom). We need to get rid of this kind of segregation and realize that there are many equally valid paths suitable for many kinds of minds and ambitions.
Now that I've mentioned the demoscene, I would like to add that all kinds of artworks and acts that bring people closer to the deep basics of technology are also important. I've been very glad about the increasing popularity of chip music and circuit-bending, for example. The Pan-Hacker movement should actively look for new ways of "showing off the bits" to different kinds of audiences in diverse contexts.
I hope my writeup has given someone some food for thought. I would like to elaborate on my philosophy even further and perhaps do some cartography of the existing "Pan-Hacker" activity, but perhaps I'll return to that at some later time. Before that, I'd like to hear your thoughts and visions about the idea. What kinds of groups should I look into? What kinds of projects could a Pan-Hacker movement participate in? Is there still something we need to define or refine?
Monday, 6 June 2011
Ancient binary symbolism and why it is relevant today
Divination was probably the earliest human application for binary arrays. There are several systems in Eurasia and Africa that assign fixed semantics to bitstrings of various lengths. The Chinese I Ching gives meanings to the 3- and 6-bit arrays, while the systems used in the Middle East, Europe and Africa tend to prefer groups of 4 and 8 bits.
These systems of binary mysticism have been haunting me for quite a few years already. As someone who has been playing around with bits since childhood, I have found the idea of ancient archetypal meanings for binary numbers very attractive. However, when studying the actual systems in order to uncover the archetypes, I have always encountered a lot of noise that has blocked my progress. It has been a little frustrating: behind the noise, there are clear hints of an underlying logic and an original protosemantics, but whenever I have tried to filter out the noise, the solution has escaped my grasp.
Recently, however, I finally came up with a solution that satisfies my sense of esthetics. I even pixelled a set of "binary tarot cards" for showing off the discovery:

For a more complete summary, you may want to check out this table that contains a more elaborate set of meanings for each array and also includes all the traditional semantics I have based them on.
Of course, I'm not claiming that this is some kind of a "proto-language" from which all the different forms of binary mysticism supposedly developed. It is just an attempt to find an internally consistent set of meanings that match the various traditional semantics as closely as possible.
Explanation
In my analysis, I have translated the traditional binary patterns into modern Leibnizian binary numbers using the following scheme:
This is the scheme that works best for I Ching analysis. The bits at the bottom are considered heavier and more significant, and they change less frequently, so the normal big-endian reading starts from the bottom. The "yang" line, consisting of a single element, maps quite naturally to the binary "1", especially given that both "yang" and "1" are commonly associated with activity.

I have drawn each "card picture" based on the African shape of the binary array (represented as rows of one or two stones). I have left the individual "stones" clearly visible so that the bitstrings can be read out from the pictures alone. Some of the visual associations are my own, but I have also tried to use traditional associations (such as 1111=road/path, 0110=crossroads, 1001=enclosure) whenever they feel relevant and universal enough.
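To make the translation scheme concrete, here is a minimal sketch in Python. It is my own illustration rather than part of the original material; the function name and the 'yang'/'yin' strings are made up for the example:

    # Lines are listed from bottom to top, yang = 1 and yin = 0, and the
    # bottom line ends up as the most significant bit: the big-endian
    # reading starts from the bottom, as described above.
    def lines_to_number(lines):
        """lines: sequence of 'yang'/'yin' strings, bottom line first."""
        value = 0
        for line in lines:
            value = (value << 1) | (1 if line == 'yang' else 0)
        return value

    # A trigram with yang at the bottom and two yins above it: binary 100.
    print(lines_to_number(['yang', 'yin', 'yin']))  # prints 4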
In addition to visual associations, the traditional systems have also formed semantics by opposition: if the array 1111 means "journey", "change" and "death", its inversion 0000 may obtain the opposite meanings: "staying at home", "stability" and "life". The visual associations of 0000 itself no longer matter as much.
The two operations used for creating the symmetry groups are inversion and mirroring. These can be found in all families of binary divination: symmetric arrays are always paired with their inversions (e.g. 0000 with 1111), and asymmetric arrays with their mirror images (e.g. 0111 with 1110).
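In code, the two operations are simple bit manipulations. A minimal sketch, assuming 4-bit arrays (the helper names are my own):

    # Inversion flips every bit; mirroring reverses the bit order.
    def invert(x, bits=4):
        return x ^ ((1 << bits) - 1)

    def mirror(x, bits=4):
        return int(format(x, '0{}b'.format(bits))[::-1], 2)

    # Symmetric arrays pair with their inversions: 0000 <-> 1111.
    print(format(invert(0b0000), '04b'))  # prints 1111
    # Asymmetric arrays pair with their mirror images: 0111 <-> 1110.
    print(format(mirror(0b0111), '04b'))  # prints 1110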
Because of the profound role of the symmetry groups, I haven't arranged the arrays in numerical order but in a 4x4 arrangement that emphasizes their mutual relationships via inversion and mirroring. Each row in the "binary tarot" picture represents a group with similar properties:
- The top row contains the four symmetrical arrays (which remain the same when mirrored).
- The second row contains the arrays for which mirroring and inversion are equivalent.
- The two bottom rows represent the two groups whose members can be derived from each other solely by mirroring and inversion.
The arrays in the top two groups have an even parity, while those in the bottom two groups have an odd parity. This difference is important at least in Al-Raml and related systems, where the array that gets the role of a "judge" in a divination table must have an even parity; otherwise there is an error in the calculation.
The members of each row can be derived from one another by eXclusive-ORing them with a symmetrical array (0000, 1111, 0110 or 1001). For this reason, I have also organized the arrangement as a XOR table.
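As a sketch of the idea, the following Python snippet generates such a XOR table from four row representatives and checks the parity of each row. The representatives are my own picks for illustration; the actual picture may order the rows and columns differently:

    # Each row is generated by XORing a representative with the four
    # symmetrical arrays; parity is the number of 1-bits modulo 2.
    SYMMETRICAL = [0b0000, 0b1111, 0b0110, 0b1001]

    def parity(x):
        return bin(x).count('1') % 2  # 0 = even, 1 = odd

    for rep in [0b0000, 0b0101, 0b0001, 0b0010]:
        row = [rep ^ s for s in SYMMETRICAL]
        print(' '.join(format(a, '04b') for a in row),
              '| parity:', 'even' if parity(rep) == 0 else 'odd')

    # The first two rows come out with even parity and the last two
    # with odd parity, matching the grouping described above.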
The color schemes used in the card pictures are based on the colors in various 16-color computer palettes and don't carry further symbolism (even though 0010 happens to have the meaning of "red" in Al-Raml and Geomancy as well). Other than that, I have abstained from any modern technological connections.
But why?
Our subjective worlds are full of symbolism that brings various mental categories together. We associate numbers, letters, colors and even playing cards with various real-world things. We may have superstitions about them or give them unique personalities. Synesthetes even do this involuntarily, so I guess it is quite a basic trait of the human mind.
Binary numbers, however, have remained quite dry in this area. We don't really associate them with anything else, so they remain alien to us. Even experts who are constantly dealing with binary technology prefer to hide them or abstract them away. This alienation, combined with the increasing role of digitality in our lives, is the reason why I think there should be more exposure for the various branches of binary symbolism.
In many cultures, binary symbolism has attained a role so central that people base their conceptions of the world on it. A lot of traditional Chinese cosmology is basically commentary on the I Ching. The Yoruba of West Africa use the eight-bit arrays of the Ifa system as "hash codes" to index their whole oral tradition. Some other West African peoples -- the Fon and the Ewe -- extend this principle far enough to give every person an eight-bit "kpoli" or "life sign" at birth.
I guess the best way to bring some binary symbolism into our modern technological culture might be to use it in art. Especially the kinds of art, such as pixel art, chip music and demoscene productions, that embrace the bits and bring them forward instead of hiding them. This is still just a meta-level idea, however, and I can't yet tell how to implement it in practice. But once I've progressed with it, I'll let you know for sure!
Thursday, 2 June 2011
What should big pixels look like?
Impressive, yes, but as with all such algorithms, the first question that came to my mind was: "But does it manage dithering and antialiasing?" The paper explicitly answers this question: no.

All the depixelization algorithms so far have been successful only with a specific type of pixel art: cartoonish work with clear lines and not too many details. This kind of pixel art may have been mainstream in Japan, but in the Western sphere, especially in Europe, there has been a strong tradition of optimalism: the tendency to maximize the amount of detail and shading within the limited grid of pixels. An average pixel artwork on the Commodore 64 or the ZX Spectrum contains an extensive amount of careful manual dithering. If we wish to find a decent general-purpose pixel art depixelization algorithm, it will definitely need to take care of that.
I once experimented with writing an undithering filter that attempts to smooth out dithering while keeping non-dithering-related pixels intact. The filter works as follows (a rough sketch in code follows the list):
- Flag a pixel as a dithering candidate if it differs enough from its cardinal neighborhood (no more than one of the four neighbors is more similar to the reference pixel than the neighbor average is).
- Extend the area of dither candidates: flag a pixel if at least five of its eight neighbors are flagged. Repeat until no new pixels are flagged.
- For each flagged pixel, replace its color with the weighted average of all the flagged pixels within the surrounding 3x3 rectangle.
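Here is a rough sketch of the filter in Python for a single-channel (grayscale) image, written after the fact to illustrate the three steps. The exact thresholds and the weighting of the final average are not specified above, so uniform weights are assumed:

    import numpy as np

    def undither(img):
        # img: 2D float array; borders are left untouched for simplicity.
        h, w = img.shape
        flagged = np.zeros((h, w), dtype=bool)

        # Step 1: flag dithering candidates. A pixel qualifies if at most
        # one of its four cardinal neighbors is closer in value to the
        # pixel than the average of those neighbors is.
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                p = img[y, x]
                nb = [img[y-1, x], img[y+1, x], img[y, x-1], img[y, x+1]]
                avg = sum(nb) / 4.0
                closer = sum(1 for n in nb if abs(n - p) < abs(avg - p))
                if closer <= 1:
                    flagged[y, x] = True

        # Step 2: grow the flagged area. A pixel joins if at least five
        # of its eight neighbors are flagged; repeat until stable.
        while True:
            grown = flagged.copy()
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if not flagged[y, x]:
                        if flagged[y-1:y+2, x-1:x+2].sum() >= 5:
                            grown[y, x] = True
            if (grown == flagged).all():
                break
            flagged = grown

        # Step 3: replace each flagged pixel with the average of the
        # flagged pixels in its surrounding 3x3 window.
        out = img.copy()
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                if flagged[y, x]:
                    window = img[y-1:y+2, x-1:x+2]
                    mask = flagged[y-1:y+2, x-1:x+2]
                    out[y, x] = window[mask].mean()
        return out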
The undithering works well enough within the smooth areas, and hq4x is even able to recognize the undithered areas as gradients and smooth them a little bit further. However, when looking at the border between the nose and the background, we'll notice careful manual antialiasing that even adds some lonely dithering pixels to smooth out the staircasing. My algorithm doesn't recognize these lonely pixels as dithering, and neither does it recognize the loneliest pixels on the outskirts of dithered gradients as dithering. It is a difficult task to algorithmically detect whether a pixel is intended as a dithering pixel or as a contour/detail pixel. Detecting antialiasing would be a totally different task, requiring a totally new set of processing stages.

There still seems to be a lot of work to do. But suppose that, some day, we discover the ultimate depixelization algorithm: an image recognition and rerendering pipeline that successfully recognizes and interprets contours, gradients, dithering, antialiasing and everything else in all conceivable cases, and rerenders it all in high resolution and color without any distracting artifacts. Would that be the holy grail? I wouldn't say so.
The fact is that we already have the ultimate depixelization algorithm -- the one running in the visual subsystem of the human brain. It is able to fill in amazing amounts of detail when coping with small amounts of reliable data. It handles noisiness and blurriness better than any digital system. It can extrapolate very well from low-complexity shapes such as silhouette drawings or groups of blurry dots on a CRT screen.
A fundamental problem with the "unlimited resolution" approach of pixel art upscaling is that it attempts to fill in details that aren't there -- a task in which the human brain is vastly superior. Replacing blurry patterns with crisp ones can even effectively turn off the viewer's visual imagination: a grid of blurry dots in the horizon can be just about anything, but if they get algorithmically substituted by some sort of crisp blobs, the illusion disappears. I think it is outright stupid to waste computing resources and watts for something that kills the imagination.
The reason why pixel art upscaling algorithms exist in the first place is that sharp rectangular pixels (the result of nearest-neighbor upscaling) look bad. And I have to agree with this. Too easily recognizable pixel boundaries distract the viewer from the art. The scaling algorithms designed for video scaling partially solve this problem with their interpolation, but the results are still quite bad for the opposite reason -- because there is no respect for the nature of the individual pixel.
When designing a general-purpose pixel art upscaling algorithm, I think the best route would go somewhere between the "unlimited resolution" approach and the "blurry interpolation" approach. Probably something like CRT emulation with some tasteful improvements. Something that keeps the pixels blurry enough for the visual imagination to work while still keeping them recognizable and crisp enough so that the beauty of the patterns can be appreciated.
Nevertheless, I was very fascinated by the Kopf-Lischinski algorithm -- not because of how it would improve existing art, but because of its potential for providing nice, organic and blobby pixels to paint new art with. A super-low-res pixel art painting program that implements this kind of algorithm would make a wonderful toy and perhaps even revitalize the pixel art scene in a new and refreshing way. Such a revitalization would also be a triumph for the idea of Computationally Minimal Art, which I have been advocating.
Tuesday, 17 May 2011
Is it possible to unite transhumanism and degrowth?
Transhumanism, in short, advocates the use of technology for turning humans into something better: creatures with ridiculously long lifespans, ridiculous levels of intelligence and ridiculous amounts of pleasure in life. In order to endlessly improve all the statistics, the transcendental mankind needs more and more energy and raw material. One planet is definitely not enough -- we need to populate an ever bigger portion of the universe with ever bigger brainlike structures. To me, these ideas sound like an extreme glorification of our current number one memetic plague, the ideology of endless economic growth. What a pity, since some of the stuff does make some sense.
Fortunately, there seems to be more room for diversity in transhumanist thought than that. As there are currents such as "Christian Transhumanism" or "Social-Democratic Transhumanism", would it be possible to devise something like "Degrowthian Transhumanism" as well? Something that denounces the growth ideology while still advocating scientific and technological progress in order to transform mankind into something better? This would be a form of transhumanist philosophy even I might be able to appreciate and sympathize with. But could such a bastard child of two seemingly conflicting ideologies be anything other than oxymoronic and inconsistent? Let's find out.
The degrowth movement, as the name says, advocates the contraction of economies by downscaling production, as it views the excessive production and consumption of today's societies as detrimental to both the environment and the quality of human life. Consumers need to exchange their materialist lifestyles for voluntary simplicity, and producers need to abandon practices like planned obsolescence that artificially keep production volumes up. Downscaling will also give people more free time, which can be used for noble quests such as charity and self-cultivation. These goals may sound agreeable on their own, even to a transhumanist, but what about technological progress? Downscaling the industries would also slow it down or even reverse it, wouldn't it?
Actually, quite many technologies make it possible to do "more with less" and therefore to scale dependency networks down. Personal computers, for example, have successfully replaced an arsenal of special-purpose gadgets ranging from typewriters and television sets to expensive studio equipment. 3D printing technology will reduce the need for specialized mass production, and once we get nanobots to assist in it, we will no longer require mines or factories at all. A degrowthian transhumanist may want to emphasize this kind of potential in emerging technologies and advocate their use for downshifting instead of upshifting. A radical one may even take the provocative stance that only those technologies that reduce overhead are genuinely progressive.
A degrowthian transhumanist may want to advocate immaterial developments whenever possible: memetics, science, software. Information is much more lightweight to process than matter or energy and therefore an obvious point of focus for anyone who wants to support both degrowth and technological progress. Most people in our time do not understand how crucial software is in making hardware perform well, so a degrowthian transhumanist may want to shout it out every now and then. It is entirely possible that we already have the hardware for launching a technological singularity, we just don't have the software yet. We may all have savant potential in our brains, we just haven't found the magic formula to unleash it with. In general, we don't need new gadgetry as much as we need in-depth understanding of what we already have.
In a downshifted society, people will have a lot of free time. A degrowthian transhumanist may therefore be more willing to adopt time-consuming methods of self-improvement than the mainline transhumanist who fantasizes about quick and easy mass-produced magic such as instant IQ pills. Wisdom is a classical example of a psychological feature that takes time to build up, so a degrowthian transhumanist may want to put a special emphasis on it. Using intelligence without wisdom may have catastrophic results, so we need superhuman wisdom to complement superhuman intelligence, and maybe even artificial wisdom to complement artificial intelligence. The quest for immortality is no longer just about an individual desire to live as long as possible, but about having individuals and societies that get wise enough to use their superhuman capacities in a non-catastrophic way.
So, how would a race of degrowthian superhumans spend their lives? By reducing and reducing until there's nothing left but ultimate reclusion? I don't think so. The degrowth movement is mostly a reaction to the present state of the world and not a dogma that should be taken to its logical extreme. Once we have scaled our existence down to some kind of sustainable moderateness, we won't need to degrow any further. Degrowthian transhumans might therefore very well colonize a planet or moon every now and then; they just wouldn't regard expansion as an end in itself. In general, these creatures would be rather serious about the principles of moderation and the middle way in everything they do. They would probably also be more independent and self-sufficient than their extropian counterparts, who, for their part, would have more brain cells, gadgets and space to toy around with.
This was a thought experiment I carried out in order to clarify my own relationship with the transhumanist ideology. I tried to find a fundamental disagreement between the core transhumanist ideas and my personal philosophy but didn't find one. Still, I don't think I'll be calling myself a transhumanist (even a degrowthian one) any time soon; I just wanted to be sure how much I can sympathize with this bunch of freaks. I also considered my point of view fresh enough to write up and share with others, so there you are, hope you liked it.
Tuesday, 1 February 2011
On electronic wastefulness
People are becoming more and more aware of the problems of our civilization. Environmental and economic problems have strengthened the criticism of consumer culture, monetary power and political systems, and all kinds of countercultural movements are thriving. At the same time, however, ever more people are increasingly dependent on digital technology, which gets produced, bought, used and abandoned in greater masses than ever, causing an ever bigger impact on the world in the form of waste and pollution.
Because of this, I have decided to finally summarize my thoughts on how digital technology reflects the malfunctions of our civilization. I became a hobbyist programmer as a schoolkid in the mid-eighties, and fifteen years later a professional software developer. Despite all this baggage, I'm going to attempt to keep my words simple enough for ordinary people to understand. Those who want to be convinced by citations and technical argumentation will get them at some later time.
Counter-explosion
For over fifty years, the progress of digital technology has been following the so-called Moore's law, which predicts that the number of transistors that fit on a microchip doubles every two years or so. This means it is possible to produce digital devices of the same physical size but with ever more memory, ever more processing speed and ever greater overall capabilities.
Moore's law itself is not evil, as it also means that it is possible to perform the same functions with ever less energy and raw material. However, people are people and behave like people: whenever it becomes possible to do something more easily and with less consumption, they start doing more of it. This phenomenon is called the "rebound effect", after a medical term of the same name. It can be seen in many kinds of things: less fuel-consuming cars make people drive more, and fewer calories in food make weight-losers eat more. The worst case is when the actual savings turn negative: a thing that is supposed to reduce consumption actually increases it instead.
In information technology, the most prominent form of the rebound effect is the bloating of software, which proceeds at the same explosive rate as the improvement of hardware. This phenomenon is called Wirth's law. If we took a time machine ride back to 1990 and told the contemporaries that desktop computers would become a thousand times faster over the following twenty years, they would surely assume that almost anything would happen instantaneously on them. If we then corrected them by saying that software programs still take time to start up in the 2010s and that it is sometimes painful to tolerate their slowness and unresponsiveness, they wouldn't believe it. How is it even possible to write programs so poorly that they don't run smoothly on a futuristic computer a thousand times more powerful? This fact would become even harder to believe if we told them that it also applies to things like word processors, which are used for more or less exactly the same things as before.
One reason for the unnecessary largeness, slowness and complexity of software is the dominant economic ideal of indefinite growth, which makes us believe that bigger things are always better and that it is better to sell customers more than they need. Another reason is that rapid hardware upgrade cycles make software developers indifferent: even if an application program is mindlessly slow and resource-consuming on the latest hardware, no one will notice a couple of years later, when the hardware is a couple of times faster. Nearly any excuse is valid for bloat. If it is possible to shorten software development cycles even slightly by stacking all kinds of abstraction frameworks and poorly implemented scripting languages on top of one another, it will be done.
The bloat phenomenon annoys people more and more in their daily life, as all kinds of electric appliances, starting from the simplest flashlight, contain increasingly complex digital technology, which drowns the user in uncontrollable masses of functionality and strange software bugs. The digitalization of television, for example, brought a whole bunch of computer-style immaturity to the TV-watching experience. I've even seen an electric kitchen stove that wouldn't heat up until the user had first set the integrated digital clock. Diverse functionality itself is not evil, but if the mere existence of extra features disrupts the use of the basic ones, something is totally wrong.
Even though many things in our world tend to swell and complexify, it is difficult to find a physical-world counterpart to software bloat, as the amount of matter and living space on our planet does not increase exponentially. It is not possible to double the size of one's apartment every two years in order to fit in more useless stuff. It is not possible to increase the complexity of official paperwork indefinitely, as it would require more and more food and accommodation space for the expanding army of bureaucrats. In the physical world, it is sometimes necessary to evaluate what is essential and how to compress the whole in order to fit in more. No such necessity exists in the digital world, however; there, it is possible to constantly inhale and never exhale.
Disposability
The prevailing belief system of today's world equates well-being with material abundance. The more production and consumption there is, the more well-being there is, and that's it. Even though the politicians in rich countries don't want to confess this belief so clearly anymore, they still use concepts such as "gross national product", "economic growth" and "standard of living" which are based on the idealization of boundless abundance.
As it is the holy responsibility of all areas of production to grow indefinitely, it is important to increase consumption regardless of whether it is sensible or not. If it is not possible to increase consumption in natural ways, planned obsolescence comes to the rescue. Some decades ago, people bought washing machines and television sets to last the following twenty years, but today's consumers have the "privilege" of buying at least four of each during the same timespan, as the lifespans of these products have been deliberately shortened.
The scheduled breaking of electric appliances is now easier than ever, as most of them have an integrated microprocessor running a program of some kind. It is technically possible, for example, to hide a timer in this program, causing the device to either "break" or start misbehaving shortly after the warranty is over. This kind of sabotage may be beneficial for the sales of smaller and cheaper devices, but it is not necessary in the more complex ones; in their case, the bloated poor-quality software serves the same purpose.
Computers get upgraded especially when the software somehow becomes intolerably slow or even impossible to run. This change can take place even if the computer is used for exactly the same things as before. Bloat makes new versions of familiar software more resource-consuming, and when reforms are introduced on familiar websites, they tend to bloat up as well. In addition, some operating systems tend to slow down "automatically", but this is fortunately something that can be fixed by the user.
The experience of slowness, in its most annoying form, is caused by overly long response times. The response time is the time between the user's action and the indication that the action has been registered. Whenever the user moves the mouse, the cursor on the screen must immediately match the movement. Whenever the user presses a letter key on the keyboard, the same letter must appear on the screen immediately. Whenever the user clicks a button on the screen, the graphic of the button must change immediately. According to usability research, the response time must be less than 1/10 of a second or the system feels laggy. When it takes more than a second, the user's blood pressure is already rising. After ten seconds, the user is convinced that "the whole piece of junk has locked up".
Slow response times are usually regarded as an indicator that the device is slow and that it is necessary to buy a new one. This is a misconception, however. Slow response times indicate nothing other than an indifferent attitude to software design. Every computing device that has become available during the last thirty years is completely capable of delivering the response within 1/10 of a second in every possible situation. Despite this fact, the software of the 2010s is still usually designed in such a way that the response is provided once the program has first finished all the more urgent tasks. What is supposed to be more important than serving the user? In the mainframe era, there were quite many such things, but in today's personal computing, this should never be the case. Fixing the response time problems would be a way to permanently make technology more comfortable to use, as well as to help users tolerate the actual slowness. The industry, however, is strangely indifferent to these problems. From its point of view, response times are something that "get fixed" automatically at hardware upgrades, at least for a short while and in some areas.
Response time problems are just a single example of how the industry considers it more important to invent new features than to fix problems that irritate the basic user. A product that has too few problems may make consumers too satisfied. So satisfied that they don't feel like buying the next slightly "better" model which replaces old problems with new ones. Companies that want to ensure their growth prefer to do everything multiple times in slightly substandard ways instead of seeking any kind of perfection. Satisfaction is the worst enemy of unnecessary growth.
Is new hardware any better?
I'm sure that most readers have at least heard about the problems caused by the rat race of upgrade and overproduction. The landfills in rich countries are full of perfectly functioning items that interest no one. Having anything repaired is stupid, as it is nearly always easier and cheaper to just buy new stuff. Selling used items is difficult, as most people won't accept them even for free. Production eats up more and more natural resources despite all the efforts of "greening up" the production lines and recycling more and more raw material.
The role of software in the overproduction cycle of digital technology, however, is not so widely understood. Software is the soul of every microprocessor-based device, and it defines most of what it is like to use the device and how much of its potential can be used. Bad software can make even good hardware useless, whereas ingenious software can make even a humble device do things the original designer could never have imagined. It is possible to both lengthen and shorten product lifetimes via software.
New hardware is often marketed with new features that are not actually features of the hardware but of the software it runs. Most of the features of so-called "smartphones", for example, are completely software-based. It would be perfectly possible to rewrite the software of an old and humble cellphone to give it a bunch of features that would effectively turn it into a "smartphone". Of course, it is not possible to do complete impossibilities with software; there is no software trick that makes a camera-less phone take photos. Nevertheless, the general rule is that hardware is much more capable than its default software. The more the hardware advances, the greater the contrast between the capabilities of the software and the potential of the hardware.
If we consider the various tasks for which personal computers are used nowadays, we will notice that only a small minority of them actually require much from the hardware. Of course, bad software may make some tasks feel more demanding than they actually are, but that's another issue. For instance, most of the new online services, from Facebook to Youtube and Spotify, could very well be implemented to run on the PCs of the late 1990s. Actually, it would be possible to make them run more smoothly than the existing versions run on today's PCs. Likewise, with better operating systems and other software, we could make the same old hardware feel faster and more comfortable to use than today's hardware. From this we can conclude that the computing power of the 2000s is neither useful, necessary nor pleasing for most users. Unless we count the pseudo-benefit that it makes bad and slow software easier to tolerate, of course.
Let us now imagine that the last ten years of personal computing went a little bit differently -- that most of the computers sold to the great masses had been "People's Computers" with a fixed hardware setup. This would have meant that hardware performance remained constant for those ten years. The 2011 of this alternate universe would probably be somewhat similar to our 2011, and some things could even be better. All the familiar software programs and on-line services would be there, they would just have been implemented more wisely. The use of the computers would have become faster and more comfortable over the years, but this would have been due to the improvement of software, not hardware. Ordinary people would never need to think about "hardware requirements", as the fixedness of the hardware would ensure that all software, services and peripherals work. New computers would probably be lighter and more energy-efficient, as the lack of competition in performance would have moved the competition to these areas. These are not just fringe utopian ideas; anyone can draw similar conclusions by studying the history of home computing, where several computer and console models have remained unchanged for ten years or more.
Of course, it is easy to come up with tasks that demand more processing power than what was available to common people ten years ago or even today. A typical late-1990s desktop PC, for example, plays ordinary DVD-quality movies perfectly but may have major problems with the HD resolutions that are fashionable in the early 2010s. Similarly, by increasing the numbers, it is possible to come up with imaginary resolutions that are out of reach of even the most expensive special-purpose equipment available today. For many people, this is exactly what technological progress means -- an increase in numerical measures, the possibility to do the same old things on ever greater scales. When a consumer replaces an old TV with a new one, he or she gets a period of novelty vibes from the more magnificent picture quality. After a couple of years, the consumer can buy another TV and get the novelty vibes once again. If we had access to unlimited natural resources, it would be possible to keep this vanity cycle going indefinitely, yet without improving anyone's quality of life to any considerable extent.
Most of the technological progress facilitated by the personal computing resources of the 2000s has been quantitative -- doing the same old stuff that became possible in the 1990s, but with bigger numbers. Editing movies and pictures that have ever more pixels, running around in 3D video game worlds that have ever more triangles. It is difficult to even imagine a computational task relevant to an ordinary person that would require the number-crunching power of a 2000s home computer due to its nature alone, without any quantitative exaggeration. This could very well be regarded as an indicator that we already have enough processing power for a while. The software and user culture are lagging so far behind the hardware improvements that it would be better to concentrate on them instead and leave the hardware in the background.
Helplessness
In addition to the senseless abundance of material items, today's people are also disturbed by a senseless abundance of information. Information includes not only the ever expanding flood of video, audio and text coming from the various media, but also the structural information incorporated in material and immaterial things. The expansion of this structural information manifests as the increasing complexity of everything: consumer items, social systems, cultural phenomena. Those who want to understand the tools they use and the things that affect their lives must absorb ever greater amounts of structural information about them. Many people have already given up on understanding and just try to get along.
Many frown upon people who can't boil an egg or drive a nail into a wall without a special-purpose egg-boiler or nailgun, or who are not even interested in how the groceries come to the store or the electricity to the wall socket. However, the expanding flood of information and the complexification of everything may eventually result in a world where neo-helplessness and poor common knowledge are the normal condition. In computing, complexification has already gone so far that even many experts no longer dare to understand how the technology works but prefer to guess and randomize.
Someone who wants to master a tool must build a mental model of its operation. If the tool is a very simple one, such as a hammer, the mental model builds up nearly automatically after a very short study. Someone who accidentally hits their finger with a hammer will probably blame themselves instead of the hammer, as the functionality of a hammer can be understood perfectly even by someone who is not very capable of using it. However, when a computer program behaves against the user's will, the user will probably blame the technology instead of themselves. In situations like this, the user's mental model of how the program works does not match its actual functionality.
The more bloated a software program is, the more effort the user needs to invest in order to build an adequate mental model. Some programs are even marketing-minded enough to impose their new and glorious features on the user, which doesn't help at all in forming the mental model. Besides, most users don't have the slightest interest in extensive exploration but rather use a simple map and learn to tolerate the uncertainty caused by its rudimentariness. When we also consider that programs may change their functionality quite a lot between versions, even enthusiasts turn cynical and frustrated when their precious mental maps become obsolete.
Many software programs try to fix the complexity problem by increasing the complexity instead of decreasing it. This mostly manifests as "intelligence". An "intelligent" program monitors the user, guesses their intents and possibly suggests various courses of action based on those intents. For example, a word processor may offer help in writing a letter, or a file manager may suggest things to do with a newly inserted memory stick. Users are offered all kinds of controlled ready-made functionality and "wizards" even for tasks they would surely prefer to do by themselves, at least if they had a chance to learn the normal basic functionality. If the user is forced to use specialized features before learning the basic ones, he or she will be totally helpless in situations where a special-purpose feature for the particular task does not exist. Just like someone who can use egg-boilers and nailguns but not kettles or hammers.
Technology exists to make things easier to do and to facilitate otherwise impossible tasks. However, if a technological appliance becomes so complex that using it is more like random guessing than goal-oriented controlling, we can say that the appliance no longer serves its purpose and that the user has been taken over by technology. For this reason, it is increasingly important to keep things simple and controllable. Simplicity, of course, does not mean mere superficial pseudo-simplicity that hides the internal complexity, but the avoidance of complexity on all levels. The user cannot be in full control without having some kind of an idea of what the tool is doing at any given time.
In software, it may be useful to reorder the complexity so that there is a simple core program from which any additional complexity is functionally separated until the user deliberately activates it. This would make programs feel reliable and controllable even with simple mental maps. An image processing program, for example, could resemble a simple paint program at its core level, and its core functionality could be learned perfectly after a very short testing period. All kinds of auxiliary functions, automations and other specialties could be easily found if needed, and the user could extend the core with them depending on their particular needs. Still, their existence would never disturb those users who don't need them. Regardless of the level of the user, the mental map would always match how the program actually works, and the program would therefore never surprise the user by acting against his or her expectations.
Software is rarely built like this, however. There is not much interest in the market for movements that make technology genuinely more approachable and comprehensible. Consumer masses who feel helpless in regard to technology are, after all, easier to control than masses of people who know what they are doing (or at least think so). It is much more beneficial for the industry to feed the helplessness by drowning people in trivialities, distancing them from the basics and perhaps even subjecting them to the power of an all-guessing artificially-intelligent assistant algorithm.
Changing the world
I have now discussed all kinds of issues, most of which I have blamed on bad software, and the badness of the software I have mostly blamed on the economic system that idealizes growth and material abundance. But is it possible to do something about these issues? If most of the problems are indeed software-related, then couldn't they be solved by producing better software, perhaps even outside the commercial framework if necessary?
When calling for a counter-force to commercial software development, the free and open-source software (FOSS) movement is the one most commonly mentioned. FOSS has mostly been produced as volunteer work without monetary income, but as the results of the work can be freely duplicated and used as the basis of new work, it has managed to make a much greater impact than volunteer work usually does. The greatest impact has been among technology professionals and hobbyists, but even laypeople may recognize names such as Linux, Firefox and OpenOffice (the latter two of which, however, originated as proprietary software).
FOSS is not bound to the requirements of the market. Even in cases where it is developed by corporations, people operating outside the commercial framework can contribute to it and base new projects on it. FOSS has therefore, in theory, the full potential of being independent of all the misanthropic design choices caused by the market. However, FOSS suffers from most of these problems just as much as proprietary software, and it even has a whole bunch of its own extra problems. Reasons for this can be found in the history of the movement. Since the beginning, the FOSS movement has mostly concentrated on cloning existing software without spending too much energy on questioning the dominant design principles. The philosophers of the movement tend to be more concerned about legal and political issues instead of technical ones: "How can we maximize our legal rights?" instead of "How should we design our software so that it would benefit the whole humanity instead of just the expert class?"
I am convinced that FOSS would be able to give the world much more than it already has if it could form a stronger contrast between itself and the growth-centric industry. In order to strengthen the contrast, we need a powerful manifesto. This manifesto would need to profoundly denounce all the disturbances to technological progress caused by the growth ideology, and it would need to state the principles on which software design should be based in order to benefit human beings and nature in the best possible way. Of course, this manifesto wouldn't exist exclusively for reinventing the wheel, but also for re-evaluating existing technology and redirecting its progress towards the better.
But what can ordinary people do? Even a superficial awareness of the causes of the problems is better than nothing. One can easily learn to recognize many types of problems, such as those related to response times. One can also learn to blame the right thing instead of superficially crying that "the computer is slow" or "the computer is misbehaving". Changes in language are also a nice way of spreading awareness. If people in general learned to blame software instead of hardware, then they would probably also learn to demand software-based solutions for their problems instead of needlessly purchasing new hardware.
When hardware purchases are justifiable, those concerned about the environment will prefer second-hand hardware to new, as long as it has enough power for the given purposes. It is a common misconception to assume that new hardware always consumes less power than old -- the trend has actually more often been exactly the opposite. During the ten years from the mid-1990s to the mid-2000s, for example, the power consumption of a typical desktop PC (excluding the monitor) increased tenfold, as the industry was more zealous about increasing processing power than improving energy efficiency. The power consumption curves of video game consoles have been even steeper. Of course, there are many examples of positive development as well. For example, CRT screens are worth replacing with similarly-sized LCD screens, and laptops also typically consume less than comparable desktop PCs.
There is a strong market push towards discontinuing all kinds of service and repair activity. Especially in the case of cellphones and other small gadgets, "service" more and more often means that the gadget is sent to the manufacturer, which dismantles it for raw material and sends a new gadget to the customer. For this reason, it may be reasonable to consider the ease of do-it-yourself activity when choosing a piece of hardware. As all forms of DIY culture seem to be waning due to a lack of interest, it is worthwhile to support them in all possible ways in order to ensure that there will still be someone in the future who can repair something.
Of course, we all hope that the world will change in such a way that the human- and nature-friendly ways of doing things will always be the most beneficial ones, even in "the reality of numbers and charts". Such a change will probably take longer than a few decades, however, regardless of the volume of the political quarrel. It may therefore not be wise to wait indefinitely for the system to change, as it is already possible to participate in practical countercultural activity today. Even in things related to digital technology.
Friday, 3 September 2010
The Future of Demo Art: The Demoscene in the 2010s
Written by Ville-Matias Heikkilä a.k.a. viznut/pwp, released on the web on 2010-09-03. Also available in PDF format.
Introduction
An end of a decade is often regarded as an end of an era. Around the new year 2009-2010, I was thinking a lot about the future of demo art, which I have been involved with since the mid-nineties. The mental processes that led to this essay were also inspired by various events of 2010, such as the last Breakpoint party ever, as well as Markku Reunanen's licentiate thesis on the demoscene.

First of all, I want to make it clear that I'm not going to discuss "the death of the scene". It's not even a valid scenario for me. The demo culture is already 25 years old, and during these years, it has shown its ability to adapt to changes in its technological and cultural surroundings, so it's not very wise to question this ability. Instead, I want to speculate on what kinds of changes might take place during the next ten years. What is the potential of the artform in the 2010s, and what kinds of challenges and opportunities is it going to face?
After the nineties
Back in the early nineties, demo art still represented the technological cutting edge of what home computers were able to show. You couldn't download and play back real-life music or movies, and even if you could, the quality was poor and the file sizes prohibitive. It was possible to scan photographs and paintings, but the quality could still be tremendously improved with some skilled hand-pixelling. Demos frequently showed things that other computer programs, such as video games, did not, and this made them hot currency among masses of computer hobbyists far beyond the actual demoscene. As a result, the subculture experienced a constant influx of young and enthusiastic newcomers who wanted to become kings of computer art.

After the nineties, the traditional weapons of the demoscene became more or less ineffective. Seeing a demo on a computer screen is no longer a unique experience, as demos have the whole corpus of audiovisual culture to compete with. Programming is no longer a fashionable way of expressing creativity, as there is ready-made software easily available for almost any purpose. The massive, diverse hordes of the Internet make you feel small in comparison; the meaning of life is no longer to become a legend, but to sit in your own subcultural corner with an introverted attitude of "you make it, you watch it". Young and enthusiastic people interested in arts or programming have hundreds of new paths to choose from, and only a few pick the good, old and thorny path of demomaking.
There are many people who miss the "lost days of glory" of their teens. To them, demos have lost their "glamor" and are becoming more and more irrelevant. I see things a little bit differently, however.
Consider an alternative history where the glamor was never lost and the influx of enthusiastic teenagers remained constant. Year after year, you would have witnessed masses of newbies making the same mistakes all over again. You would also have noticed that you were "becoming too old for this shit" and looked for a totally different channel for your creativity. The average career of a demo artist would thus have remained quite short, so there would never have been veteran artists with strong and refined visions, and thus no chance for the artform to grow up. Therefore, I don't see it as a bad thing at all that demos are no longer as fashionable as they used to be.
There have been many changes in the demo culture during the last ten years. Most of them can be thought of as adaptations to the changing social and technological surroundings, but you can also think of them as belonging to a growth process. As your testosterone levels have lowered, you are no longer as arrogant about your underground trueness as you used to be. As you have gathered more experience and wisdom about life and the world, you can appreciate the diversity around you much better than you used to. More outreach and less fight, you know.
When thinking about the growth process, one should also consider how the relationship between the demoscene and the technology industry has changed. In the eighties, it was all about piracy. In the nineties, people forgot about the piracy and started to dream about careers in the software industry. Today, most sceners already have a job, so they have started to regard their freetime activity as a relief from their career rather than as something that supports it.
Especially those who happen to be coders "on both sides" tend to have an urge to separate the two worlds in some way or another by emphasizing the aspects that differentiate democoding from professional programming. You can't be very creative, independent, experimental or low-level in most programming jobs, so you'll want to be that in your artistic endeavours. You may want to choose totally different platforms, methods and technical approaches so that your leisure activity actually feels like leisure activity.
Thus, although many demosceners work in the software industry, the two worlds seem to be drifting apart. And it is not just because of the separation of work and freetime, but also because of the changes in the industry and the world in general.
Although the complexity of everything in human culture has been steadily increasing for a couple of centuries already, there has been a very dramatic acceleration during the past few decades, especially in technology. This means, among other things, that there are more and more prepackaged black boxes and less and less room for do-it-yourself activity.
Demo art was born in a cultural environment that advocated hobbyist programming and thorough bitwise understanding of one's gear. The technical ambitions of democoders were in complete harmony with the mainstream philosophy of that era's homecomputing. During the following decades, however, the mainstream philosophy degraded from do-it-yourself into passive consumerism, while the demoscene continued to cultivate its original values and attitudes. So, like it or not, demos are now in a "countercultural" zone.
While demos have less and less appeal to the mainstream industry where the "hardcore" niches are gradually disappearing, they are becoming increasingly interesting to all kinds of starving artists, grassroots hippies, radical do-it-yourself guys and other "countercultural" people. And if you want your creative work to make any larger-scale sense in the future world, I guess it might be worthwhile to start hanging around with these guys as well.
Core Demoscene Activity
The changes during the last ten years have made demoscene activity somewhat vague. In the nineties, you basically made assembly code, pixel graphics and tracker music, and that was it. The scene was the secret cult that maintained the highest technical standards in all of these "underground" forms of creativity. Nowadays, everyone you know uses computers for creativity, some of them even better at it than you, and most computer-aided creativity falls under some valid competition category at demoparties. Almost any deviantART user could submit their work to an average graphics compo and sometimes even win it. As almost anything can be a "demoscene production", being a "demoscener" is no longer about what your creative methods are like, but about whom you hang around with.
When talking about demo art, it is far too easy to concentrate on the social background ("the scene") instead of the actual substance of the artform and the kind of activity that makes it unique. For the purposes of this essay, I have therefore attempted to extract and define something that I call "Core Demoscene Activity". It is something I regard as the unique essence of demo art, the pulsating heart that gives it its life. All the other creative activities of demo art stem from this core activity, either directly or indirectly.
When defining "core demoscene activity", we first need to define what it isn't. The first things to rule out are the social aspects such as participating in demoscene events. These are important in upholding the social network, but they are not vital for the existence of demos. Making demos is supposed to be the reason for attending parties, not the other way around.
The core activity is not just "doing creative things with a computer" either. Everyone does it, even your mother. And not even "making non-interactive realtime animations", as there are other branches of culture that do the same thing -- the VJ and machinima communities, for example. Demos do have their own esthetic sensibilities, yes, but we are now looking for something more profound than that.
The most essential thing, in my opinion, is the program code. And not just any tame industry-standard code that fulfills some given specifications, but the wild and experimental code that opens up new and unpredicted possibilities. Possibilities that are simply out of the reach of existing software tools. Although there are other areas of computer culture that practise non-compromising hard-core programming, I think the demoscene approach is unique enough to work as the basis of a wholesome definition.
The core activity of the demoscene is very technical: exploration and novel exploitation of various possible hardware and software platforms; experimentation with new algorithms, mathematical formulas and novel technical concepts; stretching the expressive power of the byte. You can remove musicians, graphicians and conceptual experimenters, but you cannot remove hardcore experimental programming without destroying the essence of demo art.
The values and preferences of demoscene-style programming are very similar to those of traditional hackers (of the MIT tradition). A major difference, however, seems to be that a traditional hacker determines the hack value of a program primarily by looking at the code, while a demo artist primarily looks at the audiovisual output. An ingenious routine alone is not enough; it must also be presented well, so that non-programmers are also able to appreciate the hack value. A lot of effort is put into presentational tweaking in order to maximize the audiovisual impact. This relationship between code and presentation is another unique thing about demo art.
Here is a short and somewhat idealized definition of "Core Demoscene Activity":
- Core Demoscene Activity is the activity that leads to the discovery of new techniques to be used in demo art.
- Everything in Core Demoscene Activity needs to directly or indirectly support the discovery of new kinds of audiovisual output: either something not seen on your platform before, or something not seen anywhere before.
- The exploration should ideally concentrate on things that are beyond the reach of existing software tools, libraries or de-facto standard methods. This usually requires a do-it-yourself approach that starts from the lowest available level of abstraction.
- General-purpose solutions or reusable code are never required on this level, so they should not interfere with the research. Rewrite from scratch if necessary.
Of course, the core activity alone is not enough, as the new discoveries need to be incorporated into actual productions, which often also include a lot of content created with non-programmatic methods. So, here is a four-level scheme that classifies the various creative activities of demo art based on their methodological distance from the "core". Graphically, this could be presented as nested circles. Note that the scheme is not supposed to be interpreted as a hierarchy of "eliteness" or "trueness"; it is just one possible way of talking about things.
- First Circle / Core Demoscene Activity: Hardcore experimental programming. Discovery of new techniques, algorithms, formulas, theories, etc. which are put in use on the Second Circle.
- Second Circle Activity: Application-level programming. Demo composition, presentational tweaking of effect code, content creation via programming, development of specialized content creation tools (trackers, demomakers, softsynths), etc.
- Third Circle Activity: Content creation with experimental, specialized and "highly non-standard" tools. Musical composition with trackers, custom softsynths or chip music software; pixel and character graphics; custom content creation software (such as demomakers), etc.
- Fourth Circle Activity: Content creation with "industry-standard tools" including high-profile software and "real-life" instruments. Most of the bitmap graphics, 3D modelling and music in modern "full-size" demos have been created with fourth-circle techniques. Design/storyboard work also falls in the fourth circle. Blends rather seamlessly with mainstream computer-aided creativity.
It should be noted that the experimental or even "avant-garde" attitude present in the Core Activity can also be found on the other levels. This also makes the Fourth Circle important: while it is possible to do conceptual experimentation on any level, general-purpose industry-standard tools are often the best choices when trying out a random non-technical idea.
The four-circle scheme seems to be applicable to some other forms of digital art as well. In the autumn of 2009, the discovery of the Mandelbulb, an outstanding 3D variant of the classic Mandelbrot set, inspired me to look into the fractal art community. The mathematical experimentation that led to the discovery of the Mandelbulb formula was definitely a kind of "core activity". Some time later, an "easy-to-use" rendering tool called "Mandelbulber" was released to the community, in what I would classify as "second-circle" activity. The availability of such a tool made it possible for the non-programmers of the community to use the newly discovered mathematical structure in their art, in activities that would fall on the third and fourth circles.
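For the curious, the first-circle discovery in question is easy to sketch. The following Python fragment is my own illustrative paraphrase of the commonly cited power-8 "triplex" iteration (as popularized by Daniel White and Paul Nylander), not code from any actual renderer:

    import math

    def mandelbulb_escape(cx, cy, cz, power=8, max_iter=32, bailout=2.0):
        # Iterate v -> v^power + c in spherical coordinates and return
        # the number of iterations before the point escapes (or max_iter).
        x = y = z = 0.0
        for i in range(max_iter):
            r = math.sqrt(x*x + y*y + z*z)
            if r > bailout:
                return i
            theta = math.acos(z / r) if r > 0 else 0.0
            phi = math.atan2(y, x)
            rn = r ** power
            x = rn * math.sin(power * theta) * math.cos(power * phi) + cx
            y = rn * math.sin(power * theta) * math.sin(power * phi) + cy
            z = rn * math.cos(power * theta) + cz
        return max_iter

A tool like Mandelbulber then wraps a formula like this in ray-marching, lighting and a user interface -- which is exactly the step from the first circle to the second.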
Is it only about demos?
The artistic production central to demo culture is, obviously, the demo. According to the current mainstream definition, a demo is a stand-alone computer program that shows an audiovisual presentation, a couple of minutes long, using real-time rendering. It remains exactly the same from run to run, and you can't interact with it. But is this all? Is there something that demo artists can give to the world besides demos?
I'm asking this for a reason. The whole idea of a demo, defined in this way, sounds somewhat redundant to laymen. What is the point in emphasizing real-time rendering in something that might just as well be a prerendered video? Isn't it kind of wasteful to use a clever technical discovery only to show a fixed set of special cases? In order to let the jewels of Core Demoscene Activity shine in their full splendor, there should be a larger range of equally glorified ways of demonstrating them. Such as interactive art. Or dynamic non-interactive works. Maybe games. Virtual toys. Creative toys or games. Creative tools. Or something in the vast gray areas between these categories.
The idea of a "non-interactive realtime show" is, of course, tightly knit with the standard format of demoparty competitions. Demos are optimized for a single screening for a large audience, and it is therefore preferrable that you can fix as many things as possible beforehand. Realtime rendering wasn't enforced as a rule until video playback capabilities of home computers had become decent enough to be regarded as a threat to the dominance of hardcore program code.
But it's not all about party screenings. There are many other types of venues in the world, and there are, for example, people who still actually bother to download demoscene productions for watching at home. These people may even desire more from their downloaded programs than just a couple of minutes of entertainment. There may be spectators who, for example, would like to create their own art with the methods used in the demo. Of the categories mentioned before, I would therefore like to elevate creative toys and tools to a special position.
It has already been proven that creative tools originating in the demoscene can give rise to completely new creative subcultures. Take trackers, for example: the PC tracker scene of the nineties was much wider than the demoscene that gave it its tools. In the vast mosaic of today's Internet world, there is room for all kinds of niches. Release a sufficiently interesting creative tool, and with some luck, you'll inspire a bunch of freaks to find their own preferred means of creativity. The freaks may even form a tight-knit community around your tool and raise you to a kind of legend status you could never achieve with demo compo victories alone.
Back in the testosterone-filled days, you frowned upon those who used certain creative tools without understanding their deep technicalities. Nowadays, however, you may already realize the importance of "laymen" exploring the expressive possibilities of your ingenious routine or engine. If you are turned off by the fact that "everyone" is able to (ab)use your technical idea, you should move on and invent an even better one. The Core Activity is about the continuous pushing of boundaries, not about jealously guarding your invention for as long as you can.
Now, is there a risk that the demoscene will "bland out" if "non-demo productions" receive as much praise and glory as the "actual" demos? I don't think so. To me, what defines the demoscene is the Core Activity, not the "realtime non-interactive production". As long as you nurture the hardcore spirit, it manifests itself in all kinds of things you produce, regardless of how static, realtime, bouncy or cubistic they are.
Parties and social networks
An important staple in keeping demo culture alive is the demoparty. It both strengthens the social bonds and motivates the people involved to create and release new material. Of course, extensive remote communication has always been there, but flesh-and-blood meetings are what strengthen the relationships to span years and decades.
As there are so many people who have deeply dedicated themselves to demo art for so many years, I am convinced that there will be demoscene parties in 2020 as well. Only a global disaster of apocalyptic scale could stop them from taking place.
While pure insider parties may be enough for keeping the demoscene alive, they are not enough for keeping it strong and vital. There is a need for fruitful contacts between demo artists and other relevant people, such as other kinds of artists and potential newcomers. High-profile mainstream computer parties, such as Assembly, have been successful in establishing these contacts in the past, but much of that potential has faded during the last decade, as the average demo artist has less and less in common with the average Assembly visitor.
I think it is increasingly vital for demo artists to actively establish connections with other islets of creative culture they can relate to. The other high-profile Finnish demoparty, Alternative Party, has been very adventurous in this area. Street and museum exhibitions that bring demo art to "random" people may be fruitful as well, even in surprising ways. When looking for contacts, restricting oneself to "geeky subcultures" is no longer very relevant, as everyone uses computers and digital storage formats nowadays, and being creative with them -- even in ways relevant to demo art -- does not require unusual levels of technological obsession.
Crosscultural contacts, in general, have the potential of giving demosceners more room to breathe. While a typical demoparty environment strongly encourages a specific type of artwork (i.e. demos), other cultural contexts may inspire demo artists to create totally different kinds of artifacts. I'm also sure that many experimental artists would be happy to try out some of the unique creative tools that the demo community may be able to give them, so the collaboration may work well in both directions.
Real and virtual platforms
The relationship between demo artists and computing platforms has changed dramatically during the past ten years. Back in the nineties, you had a limited number of supported platforms with separate scenes and competitions. Nowadays, you can choose nearly any hardware or software platform you like, and different platforms often share the same competitions. Thanks to decent emulators and easy video capture, the scene is no longer divided by gear ownership: anyone can watch demos from any platform, or even develop for almost any platform without owning the real hardware. Also, as the average age of demosceners has risen, platform fanboyism is now far less common.
The freedom is not as complete as it could be, however. There are people who build their own demo hardware and are praised for it, but what about creating your own entirely software-based "virtual platforms"? Most demo artists don't even think about this idea. Of course, many coders have created ad-hoc integrated virtual machines in order to, for example, improve the code density in 4K demos, but "actual" platforms are still something that need to be defined by the industry. In the past, it was quite a tedious process for a new hardware platform to become accepted by the community.
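To illustrate what such an ad-hoc integrated virtual machine might look like, here is a sketch of the general idea in Python, with an instruction set invented for the occasion -- not code from any actual 4K demo:

    import math

    # Hypothetical opcodes for a tiny stack-based effect language.
    PUSH, ADD, MUL, SIN, OUT = range(5)

    def run(bytecode, t):
        # Interpret one frame of "demo logic" at time t, which is
        # pre-loaded on the stack.
        stack = [t]
        out = []
        pc = 0
        while pc < len(bytecode):
            op = bytecode[pc]; pc += 1
            if op == PUSH:                 # next byte is a constant
                stack.append(bytecode[pc]); pc += 1
            elif op == ADD:
                b = stack.pop(); stack.append(stack.pop() + b)
            elif op == MUL:
                b = stack.pop(); stack.append(stack.pop() * b)
            elif op == SIN:                # one byte buys a whole sine call:
                stack.append(math.sin(stack.pop()))
            elif op == OUT:                # emit a value, e.g. a coordinate
                out.append(stack.pop())
        return out

    # 100 * sin(3t) in eight bytes of bytecode -- far denser than the
    # equivalent native code:
    print(run([PUSH, 3, MUL, SIN, PUSH, 100, MUL, OUT], t=0.5))

The point is the density: each opcode can hide an arbitrary amount of native machinery behind a single byte, which is exactly why such machines help in size-restricted categories.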
So, why would we need virtual platforms in the first place? Let's take the expressive power of chip music as an example. There are various historical soundchips with different sets of features and limitations, and after using several of them, a musician may not be completely satisfied with any single chip. Instead, he or she may imagine a "perfect soundchip" that has the exact combination of features and limitations that is the most inspiring: a slight improvement on a favorite chip, perhaps, or a completely new design. Still, someone who composes for a virtual chip rather than an authentic historical one may not be regarded as very "true"; a certain history-fetishism still discourages this kind of activity. In my earlier essay about Computationally Minimal Art, however, I expressed my belief that the historical timeline will lose its meaning in the near future, which will make "non-historical experimentation" more acceptable.
It is already relatively acceptable to run demos in emulators instead of on real hardware, even in competitions, so I think it's only a matter of time before completely virtual platforms (or "fake emulators") become common. For many, this will be a blessing. Artists will be happier and more productive working with instruments that lack the design imperfections they used to hate, and the audience will be happier as it gets new kinds of esthetic forms to appreciate.
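Returning to the "perfect soundchip" idea for a moment: one channel of such an imaginary chip is simple to sketch. The register set in this deliberately naive Python fragment is entirely my own invention; the point is only that the designer gets to choose the limitations:

    RATE = 44100  # output sample rate

    def render_channel(freq_hz, duty, volume, seconds):
        # One square-wave channel with a freely chosen "register set":
        # pitch, duty cycle and volume. A real virtual chip would pick
        # its limitations deliberately, since they shape the esthetics.
        samples = []
        phase, step = 0.0, freq_hz / RATE
        for _ in range(int(seconds * RATE)):
            phase = (phase + step) % 1.0
            samples.append(volume if phase < duty else -volume)
        return samples

    # One second of a 440 Hz square wave with a 25% duty cycle:
    wave = render_channel(440.0, 0.25, 0.5, 1.0)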
Virtual platforms may also introduce new problems, however. One of them is that no technical feat can be appreciated if the platform is not well understood by the audience. If you enter a 256-byte competition with a demo written for your own separate virtual machine, it is perfectly reasonable for the spectator to assume that you have cheated by transferring logic from the demo code into the virtual machine implementation. You could, for example, put an entire music synthesizer in your virtual machine and use just a couple of bytes of demo code to drive it. If you want your technical feats appreciated, the platform needs to pass some kind of community acceptance process beforehand.
On the other hand, virtual platforms may eventually become mandatory for certain technical feats. It is already difficult in modern operating systems, for example, to create very small executables that access the graphics and sound hardware. As the platforms "improve", it may eventually become impossible to do certain things from within, say, a four-kilobyte executable. In cases like this, the community may need to solve the problem with a commonly accepted "virtual platform", i.e. a loader that allows running executables given in a format that has less overhead. Such a loader may also be used for fixing various compatibility problems that are certain to arise when new versions of operating systems come out.
Some years from now, we may have a plethora of virtual machines attempting to represent "the ultimate demo platform". There will be a need to classify these machines and decide on their validity in various technical competitions. Despite all the possible problems and controversies they are going to introduce, I'm going to embrace their arrival.
But what about actual hardware platforms, then? I guess there won't be much of a difference by 2020 anymore. FPGA implementations of classic hardware have already been available for several years, and I assume it won't take long until it is common to synthesize both emulators and physical hardware from the same source code. Once we reach the point where anyone can use a printer-type device to produce a piece of hardware from a downloadable file, I don't think it will matter much to anyone whether something is running virtually or physically.
Regarding the next decade of the mainstream hardware industry, I think the infamous Moore's law makes it all quite predictable and obvious: things that were not previously possible in real time will be easy to do in real time. There will be smaller video projectors and all that. Mobile platforms will be as powerful as today's high-end PCs, so you won't be able to get "oldschool kicks" from them anymore. If you want such kicks from an emerging technology, you won't have many niches left; conductive ink may be one of the last possibilities. Before 2020, your local grocery store will probably be selling milk in packages that have ink-based circuits displaying animations, and before that happens, I'm sure that the demoscene will be having lots of fun with the technology.
Paths of initiation
It is already a commonly accepted view that the demoscene needs newcomers to remain vital, and that they need to be actively recruited since the influx is no longer as overwhelming as it used to be. This view represents a dramatic change from the underground-elitist attitudes of the nineties, when potential newcomers were often forced thru a tight social filter that was supposed to separate the gifted individuals from the "lamers". Requiring guidance was a definitive sign of weakness; if you couldn't figure out the path of initiation on your own, no one was going to help you. You simply got stuck in the filter and never got in.
In my experience, it is not very difficult to get people interested in demo art as long as you manage to pull the right strings. It is also relatively easy to get them to participate in demoscene events. But getting them involved in the various creative activities is a much more complex task, especially when talking about the inner-circle activities that require programming. It is not about a lack of will or determination, but rather about uncertainty over how to get started.
A lot of consideration should be put into the paths of initiation during the following decade. Instead of generalizing from their own past experiences, recruiters should listen to the stories of recent newcomers. What kinds of paths have they taken? What kinds of niches have they found relevant? What have been the most difficult challenges in getting involved? Success stories and failure stories should both be listened to.
I'm now going to present some of my own ideas and observations about how democoder initiation works in today's world and how it does not. These are all based on my personal experiences with recent newcomers and not on any objective research, so feel free to disagree.
First, I want to outline my own theory of programming pedagogy. This is something I regard as a meaningful "hands-on" path for hobbyist programmers in general, not only for aspiring democoders. Lazy academic students (whose minds get "mutilated beyond recovery" by a careless choice of first language) may prefer a more theoretical route, but this three-phase model is something I have seen work even for the young and the practical-minded, from one decade to another.
- First phase: Toy Language. It should have an easy learning curve and reward your efforts as soon as possible. It should encourage you to experiment and gradually give you the first hints of a programming mindset. Languages such as BASIC and HTML+PHP have been popular in this phase among actual hobbyists.
- Second phase: Assembly Language. While your toy language had a lot of different building blocks, you now have to get along with a limited selection. This immerses you into a "virtual world" where every individual choice you make has a tangible meaning. You may even start counting bytes or clock cycles, especially if you chose a somewhat restricted platform.
- Third phase: High Level Language. After working on the lowest level of abstraction, you now have the capacity for understanding the higher ones. The structures you see in C or Java code are abstractions of the kind of structures you built from your "Lego blocks" during the previous phase. You now understand why abstractions are important, and you may also eventually begin to understand the purposes of different higher-level programming techniques and conventions.
Based on this theory, I think it is a horrible mistake to recommend the modern PC platform (with Win32, DirectX/OpenGL, C++ and so on) to an aspiring democoder who doesn't have in-depth prior knowledge of programming. Even though it might be possible to get "outstanding" visual results with relative ease, the programmer may become frustrated by their vague understanding of how and why their programs work.
Not everyone has the mindset for learning an actual "oldschool platform" on their own, however. I therefore think it might be useful to develop an "educational demoscene platform" that is easy to learn, simple in structure, fun to experiment with and "hardcore" enough for promoting a proper attitude. It might even be worthwhile to incorporate the platform in some kind of a game that motivates the player to go thru varying "challenges". Putting the game online and binding it to a social networking site may also motivate some people quite a lot and give the project some additional visibility.
Conclusion
We have now covered many different aspects of the future of demo art in the 2010s, and it is now time to summarize. If we crystallize the prognosis into a single word, "diversity" might be a good choice.
It indeed seems that the diversity in what demo artists produce will continue to increase in all areas. There will be more platforms available, many of them designed by the artists themselves. There will be more alternatives to the traditional realtime non-interactive demo, especially via the various "new" venues provided by "crosscultural contacts". And I'm sure that the range of conceptual and esthetic experimentation will broaden as well.
Back in the nineties, most demo artists were "playing the same game", with the same rules and relatively similar goals. Since then, the challenges have become much more individual, with different artists finding their very own niches to operate in. There are still "major categories" today, but as the new decade continues, they will have less and less meaning compared to the more individual quests. This may also reduce the competitive aspect of demo culture: as everyone is playing their own separate game, it is no longer possible to compare the players. Perhaps, at some point, someone will even question the validity of the traditional compo format.
Another keyword for the next decade could be "openness". It will show both in the increased outreach and in "crossculturality". There will be an increasing number of demo artists who operate in other contexts besides the good old demoscene, and perhaps also more and more "outsiders" who want to try out the "demoscene way" for a change, without any intention of becoming more integral members of the subculture.
In the nineties, many in the scene were dreaming about careers in the video game industry. After that, there have been similar dreams about the art world: gaining acceptance, perhaps even becoming professional artists. The dreams about the video game industry came true for many, so I'm convinced that the dreams about the art world will come true as well.
Sunday, 18 April 2010
Behind "Dramatic Pixels"
I released a minimalistic demo called "Dramatic Pixels" at Breakpoint 2010. It is an experiment in narrative using very minimal visual output: three colored character blocks ("big pixels") moving on an entirely black background, synchronized to musical accompaniment. (CSDB, Pouet.net)
I was expecting the demo to cause very mixed reactions in the audience, but to my surprise, it actually won the competition it was in (4-kilobyte Commodore 64 demo) and the reception has been almost entirely positive. This -- along with the fact that a somewhat similar production was released by Skrju and Triebkraft for the ZX Spectrum just two months earlier -- inspired me to write this short essay about the philosophy behind this production. And besides, visy/trilobit has also blogged about "Dramatic Pixels" recently, so I think I am obliged to do the same.
Background
For quite some time already, I have been on a philosophical excursion into the nature of "hard-core" digital creativity, especially the deep essences of the demoscene and "8-bit" culture. The biggest visible result of this excursion so far has been my recent essay about Computationally Minimal Art, which, among other things, separates the ideas of "optimalism" and "reductivism". I have noticed that audiovisual digital culture (including the demoscene) has traditionally been very optimalist in nature, aiming at fitting as much complexity as possible within given boundaries. The opposite approach, reductivism, which embraces minimal complexity itself as an esthetic goal, is very seldom used by the demoscene.
In December 2009, I was pondering how to express "complex real-world phenomena" such as human emotions via "extreme reductivism". I was planning to design a low-pixel "video game character" that shows a wide range of emotions with facial and bodily expressions, and I particularly wanted to find out the minimum number of facial pixels required to express all nine emotional responses (rasas) of the Indian theatre. When minimizing the number of pixels, however, I realized that facial expressions might not be necessary at all; movement patterns and rhythms alone seemed to be enough for differentiating fear from bravery, or certainty from uncertainty. If the character only needs to move around for full expressive power, its pixel pattern can very well be reduced to a single pixel.
I quickly did a couple of experiments with this idea of "pixel drama". As the results were convincing enough, I started to plan a minimalistic movie using only single-pixel characters. As the movie was most likely going to be implemented as a demoscene production, I thought it would be important to take a somewhat "operatic" approach, synchronizing the visual action with a strong musical accompaniment.
After some initial sketches, I didn't really think about the idea for a couple of months. But less than a week before the Breakpoint party, I decided to implement it on the C-64. The choice of platform could have been just about anything, however, from the VCS to Win32; the C-64 just seemed like the best and easiest choice considering the competition categories available at Breakpoint. The size of the demo ended up being about 1.5 kilobytes, and I later also released a 1K version with the introductory text removed.
The demo itself
Technically, everything in "Dramatic Pixels" is centered around the music player routine, which is also responsible for the choreography: the bytes that encode the notes of the lead channel also contain bits that control the movement of the pixels. To be exact, every time a new note is played by the lead instrument, exactly one of the three pixels takes a single step towards one of the four cardinal directions. This is an intentional technical decision that ties the pixel movement seamlessly to the music. Internally, the whole show is a series of looping sequences that are both musical and visual at the same time.
All the actual musical notes, by the way, are encoded by only two bits each. These two bits form an index into a four-note set, which is defined by two variables (indicating base pitch and harmonic structure). These variables are manipulated on the fly by a higher-level control routine that is also responsible for the other macro-level changes in the demo. I prefer to encode melodies this way rather than as absolute pitches, as a more "indirect" approach makes the encoding more compact and closer to the essence of the musical structure. And, in the case of this demo, I wanted some minimalism (or maybe serialism) in the musical score as well, and the possibility to repeat the same patterns in different modes helps towards this goal.
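To make the scheme concrete, here is a rough Python paraphrase of the decoding logic. The actual demo is 6502 assembly, and the exact bit layout and variable names below are my own guesses for illustration, not lifted from the source:

    # State manipulated by the higher-level control routine:
    base_pitch = 40           # base pitch of the current four-note set
    harmony = (0, 3, 7, 12)   # intervals defining the set's harmonic structure

    pixels = [[10, 10], [20, 10], [15, 20]]    # x, y of the three pixels
    DIRS = ((0, -1), (0, 1), (-1, 0), (1, 0))  # up, down, left, right

    def play_event(byte):
        # Decode one lead-channel event: two bits select a note from the
        # current four-note set, and the remaining bits make exactly one
        # pixel take one step in a cardinal direction.
        note = byte & 0b11
        direction = (byte >> 2) & 0b11
        which = (byte >> 4) & 0b11       # 0..2 selects the pixel
        dx, dy = DIRS[direction]
        pixels[which][0] += dx
        pixels[which][1] += dy
        return base_pitch + harmony[note]  # pitch handed to the player

    # A looping sequence is then just a list of such bytes -- musical
    # and visual at the same time:
    for b in (0b000000, 0b010101, 0b101010, 0b011110):
        play_event(b)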
The 6502 assembly source code of the 1K version is available for those who are interested. It should be relatively easy to port to any 6502-based platform (with the music player probably requiring most work), so I've been planning on releasing separate versions for VIC-20 and Atari 2600 as well.
So, what about the story, then? Most of the interpretations I've heard have been fairly similar and close to my own intentions, so I think my decisions about the audiovisual language have been relatively successful: Red and Blue meet, fall in love, become estranged, cheat on each other with Green, and in the end everyone gets killed. However, there are some portions that are apparently more difficult.
When I created the characters, I had no intention of assigning genders to the pixels. Still, some people have interpreted Red as male and Blue as female. This probably stems from the differences in the base pitches (when Blue moves, the pitch is an octave higher than when Red moves), but the personalities of the pixels may also matter. Red is more stereotypically masculine, taking more initiative, while Blue mostly responds to these initiatives. I don't know whether the interpretations would have been different if I had chosen Blue to be the initiator.
The second part, where Red and Blue spend time on opposite sides of the screen, is perhaps the most difficult to follow. I intended this part to represent everyday life, where both pixels have their own daytime activities and only see each other at home very briefly in the evenings (and don't pay much attention to one another even then). Also, the workplaces are so far away that the pixels can't see each other cheating until Red decides to get closer to Blue's workplace. And no, Green does not represent two different pixel personalities depending on the partner -- it's the same despicable creature in all cases. The part is intentionally slightly too long and repetitive, in order to emphasize the frustration that repetitive everyday routines may lead to.
Comparison to the Spectrum demo
I would now like to compare "Dramatic Pixels" to the 256-byte Spectrum demo I mentioned earlier, "A true story from the life of a lonely cell" by Sq/Skrju and Psndcj/Triebkraft. Although I try to follow the Spectrum demoscene due to some very visionary groups therein, this demo was so recent that I never even heard about it until I had finished "Dramatic Pixels".
In both demos, there are three characters represented by solid-colored blocks. The blocks express emotion mostly by the way they move. In "A true story", all movement happens in one dimension, so it is basically all about back-and-forth movement in varying rhythms. "Dramatic Pixels" can easily be seen as a refinement of this concept, adding a musical accompaniment and another dimension (although it may very well have worked in 1D too). The stories in both demos are based on the love triangle model, although my story is a little more complex.
"Great minds think alike", yes, but the coincidence still baffles me. Is it really just a coincidence or a result of some external factors? Deep thoughts about the state of the demoscene, perhaps combined with some general angst about the potential of the art form in the 2010s, were part of the mental process that lead me to create "Dramatic Pixels". I haven't discussed this with Sq, but perhaps there was something similar going on in his mind as well.
To add extra spice to the mystery: the recent video-game-inspired short film "Pixels" was put on the web on the same day (2010-04-07) as I put the video of "Dramatic Pixels" on YouTube.
The bigger purpose
For some time already, I have been writing pretty words about "thinking outside the box" in the demoscene context. But pretty words are hollow unless you back them up with some practical evidence, such as an actual demo.
I considered it important to finish "Dramatic Pixels" for Breakpoint, as I had just recently released my essay about Computationally Minimal Art. I wanted to release a production that would support some of its ideas, especially the equal validity of reductivism as a "boundary-pushing" approach.
Anyway, I hope this experiment broke some new ground and will inspire further experimentation in computational minimalism. Traditional minimalists have already done quite a lot of "basic research" during the last hundred years or so, so I would like the inspired productions to choose a fresh route by emphasizing the areas that are unique to the computational approach.