Thursday, 2 April 2015
My first twenty years on the demoscene
Back in 1994, I got involved in some heated BBS discussions. I thought the computer culture of the time had been infected by a horrible disease. IBM PC compatible software was getting slow and bloated, and no one seemed to even question the need for regular hardware upgrades. I totally despised the way PC hardware was being marketed to middle-class idiots and even declared the 486 PC the computer of choice for dumb and spoiled kids. I was using an 8088 PC at the time and promised myself not to buy any computing hardware that wasn't considered obsolete by consumption-oriented people. This decision has held up quite well to this day. Nowadays, it is rather easy to get even "non-obsolete" hardware for free, so there has been very little need to actually buy anything but minor spare parts.
In the autumn of 1994, I released a couple of silly textmode games to spread my counterpropaganda. "Gamer Lamer" was about a kid who gathered "lamer points" by buying computers and games with his father's money. "Micro$oft Simulator", on the other hand, was a very simple economic simulator centered on releasing new Windows versions and suing people. I released these games under the group title PWP ("Pers-Wastaiset Produktiot" or "anti-arse productions"), which was a kind of inside joke to begin with. The Finnish computer magazines of the time had been using the word "perusmikro" ("baseline microcomputer") for new and shiny 486 PCs, and this had inspired me to call them "persmikro" ("arse microcomputer").
At that time, Finnish BBSes were full of people who visited demoparties even without being involved with the demoscene. I wanted to meet users of my favorite boards more often, so I started visiting the events as well. In order not to be just another hang-around loser, I always entered a production into the PC 64k intro competition, starting from 1996.
(The demo screenshots are Youtube links, by the way.)
Of course, I wanted to rebel against the demoscene status quo. I saw PC demos as "persmikro" software (after all, they were too bloated to download at 2400 bps and didn't work on my 8088), and I was also annoyed by their conceptual emptiness. I decided that all PWP demos should run on an 8088 in textmode or CGA, be under 32 kilobytes in size and have some meaningful content. The aforementioned "Gamer Lamer" or "Pelulamu" character became the main hero in these productions. PWP demos have always been mostly my own solo productions, but sometimes other people contributed material as well – mostly graphics but sometimes music too.
The first three demos I released (the "Demulamu" trilogy) were disqualified from their respective competitions. Once I had developed some skill and style, I actually became quite successful. In 1997, I came second in the 64k competition of the Abduction demoparty with "Isi", and in 1998, I won the competition with "Final Isi".
My demos were often seen as "cheap", pleasing crowds with "jokes" instead of talent. I wanted to prove to the naysayers that I had technical skills as well. In 1997, I had managed to get myself an "obsolete" 386 motherboard and VGA and started to work on a "technically decent" four-kilobyte demo for that year's Assembly party. The principle of meaningful content held: I wanted to tell a story instead of just showing rotating 3D objects. "Helium" eventually came first in the competition. Notably, it had optional Adlib FM music (eating up about 300 bytes of code and data) at a time when music was generally disallowed in the 4k size class.
My subsequent PC 4k demos were not as successful, so I abandoned the category. Nevertheless, squeezing out individual bytes in size-optimized productions made me realize that profound discoveries and challenges might be waiting within tight constraints. Since the Unix/Linux world I was starting to get into wasn't a very rewarding demo platform, I decided to go 8-bit.
In 1998, there was a new event called Alternative Party which wanted to promote alternative demoscene platforms and competitions. The leading demoscene platforms of the time (386+ PC and AGA Amiga) were not allowed, but anything else was. I sympathized with the idea from the beginning and decided to try my hand at some VIC-20 demo code. "Bouncing Ball 2" won the competition and started a kind of curse: every time I ever participated in the demo competition at Alternative Party, I ended up first (1998, 2002, 2003 and 2010).
Alternative Party was influential in removing platform restrictions from other Finnish demoparties as well, which allowed me to use the unexpanded VIC-20 as my primary target platform just about anywhere. I felt quite good about this. There hadn't been many VIC-20 demos before, so there was still a lot of untapped potential in the hardware. I liked the raw and dirty esthetics of the platform, the hard-core memory constraints of the unexpanded machine, as well as the fact that the platform itself could be regarded as a political statement. I often won competitions with the VIC-20 against much more powerful machines, which kind of confirmed that I was on the right track.
Around 2001-2003, there were several people who actively released VIC-20 demos, so there was some technical competition within the platform as well. New technical tricks were found all the time, and emulators often lagged behind the development. In 2003, I won the Alternative Party with a demo, "Robotic Warrior", that used a singing software speech synthesizer. The synth later became a kind of trademark for my demo productions. Later that year, I made my greatest hardware-level discovery ever – that the square-wave audio channels of the VIC-I chip actually use shift registers instead of mere flip-flops. Both the speech synth and the "Viznut waveforms" can be heard in "Robotic Liberation" (2003), which I still regard as a kind of "magnum opus" for my VIC-20 work.
Although I released some "purely technical" demos (like the "Impossiblator" series), most of my VIC-20 productions have political or philosophical commentary of some kind. For example, "Robotic Warrior" and "Robotic Liberation", despite being primarily technical show-offs, are dystopian tales on the classic theme of machines rising against people.
I made demos for some other 8-bit platforms as well. "Progress Without Progress" (2006) is a simple Commodore 64 production that criticizes economic growth and consumption-oriented society (with a SID-enhanced version of my speech synthesizer). I also released a total of three 4k demos for the C-64 for the German parties Breakpoint and Revision. I never cared very much about technical excellence or "clean esthetics" when working on the C-64, as other sceners were concentrating on these aspects. For example, "Dramatic Pixels" (2010) is above all an experiment in minimalistic storytelling.
A version of my speech synth can also be heard in Wamma's Atari 2600 demo "(core)", and some of my VCS code can be seen in Trilobit's "Doctor" as well. I found the Atari 2600 platform very inspiring, as it has many of the characteristics and constraints I appreciate in the VIC-20, sometimes in an even more extreme form.
When I was bored with new technical effects for the VIC-20, I created tools that would allow me to emphasize art over technology. "The Next Level" (2007) was the first example of this, combining "Brickshop32" animation with my trusted speech synth. I also wrote a blog post about its development. The dystopian demo "Future 1999" (2009) combines streamed character-cell graphics with sampled speech. "Large Unified Theory" (2010), a story about enlightenment and revolution, was the last production where I used BS32.
Perhaps the hurried 128-kilobyte MS-DOS demo "Human Resistance" (2011) should be mentioned here as well. In the vein of my earlier dystopian demos, it tells of a resistance group that has achieved victory against a supposedly superior artificial intelligence by using the most human aspects of the human mind. I find these themes very relevant to the thoughts I am processing right now.
Around 2009-2011, I spent a lot of time contemplating the nature of the demoscene and computing platforms, as seen in many of my blog posts from that period. See e.g. "Putting the demoscene in a context", "Defining Computationally Minimal Art" and "The Future of Demo Art" (which are also available on academia.edu). I got quoted in the first ever doctoral dissertation about demos (Daniel Botz: Kunst, Code und Maschine), which also gave me some new food for thought. This started to form the basis of the philosophical ideas about technology that I am refining right now.
Extreme minimalism in code and data size had fascinated me since my first 4k demos. I felt there was a lot of untapped potential in extremely simple and chaotic systems (as hinted by Stephen Wolfram's work). The C-64 4k demo "False Dimension" (2012) is a collection of Rorschach-like "landscape photographs" generated from 16-bit pseudorandom seeds. I also wanted to push the limits of sub-256-byte size classes, but since real-world platforms tend to be quite problematic with tiny program sizes, I wanted a clean virtual machine for this purpose. "IBNIZ" (2011) was born out of this desire.
When designing IBNIZ, I wanted to get a grasp of how much math would actually be needed for all-inclusive music synthesis. Experimentation with this gave birth to "Bytebeat", an extremely minimalistic approach to code-based music. It became quite a big thing, with more than 100000 viewers for the related Youtube videos. I even wrote an academic article about it.
After Bytebeat, I consciously began to distance myself from the demoscene in order to have more room for different kinds of social and creative endeavours. The focus on non-interactive works seemed limiting to me, especially when I was pondering the "Tetris effects" of social media mechanisms or technology in general. However, my only step toward interactive works has been a single participation in Ludum Dare. I had founded an oldschool computer magazine called "Skrolli" in autumn 2012, and a lot of my resources went there.
Now that I have improved my self-management skills, I feel I might be ready for some vaguely demoscene-related software projects once again. One of the projects I have been thinking about is "CUGS" (Computer Underground Simulator) which would attempt to create a game-like social environment that would encourage creative and skill-oriented computer subcultures to thrive (basically replicating some of the conditions that allowed the demoscene to form and prosper). However, my head is full of other kinds of ideas as well, so what will happen in the next few months remains to be seen.
Thursday, 25 September 2014
Choosing low-tech visual styles for games
The theme of the contest was "connected worlds". I made a game called Quantum Dash that experiments with parallel universes as a central game mechanic. The player operates in three universes at the same time, and when connecting "interdimensional cords", the differences between these universes explosively cancel each other. The "Dash" part in the name refers to the Boulder Dash style grid physics I used. I found the creation process very refreshing, and I am quite happy with the result considering the circumstances, so I will very likely continue making games (or at least rapid prototypes thereof).
My relationship with computer games became somewhat dissonant during the nineties. At that time, the commercial industry became radically more centralized and profit-oriented. Eccentric European coder-auteur-heroes disappeared from computer magazines, giving way to American industry giants and their campaigns. There was also the rise of the "gamer" subculture that I considered rather repulsive from early on due to its glorification of hardware upgrades and disinterest in real computer skills.
Profit maximization in the so-called serious game industry is largely driven by a specific, Hollywood-style "bigger is better" approach to audiovisual esthetics. That is, a striving for photorealism. This approach is, of course, very appealing to shareholders: It is easy to imagine the grail -- everyone knows what the real world looks like -- but no one will ever reach it despite getting closer all the time. Increases in processing power and development budgets quite predictably map to increases in photorealism. There is also inherent obsolescence: yesterday's near-photorealism looks bad compared to today's near-photorealism, so it is easy to make consumers desire revamped versions of earlier titles instead of anything new.
In the early noughties, the cult of photorealism was still so dominant that even non-commercial and small-scale game productions followed it. Thus, independent games often looked like inadequate, "poor man's" versions of AAA games. But the cult was starting to lose its grip: independent games were already looking for new paths. In his spring 2014 paper, game researcher Jesper Juul gives 2005 as an important year in this respect: since 2005, the Grand Prize winners of the Independent Games Festival have invariably followed styles that diverge from the industrial mainstream.
Juul defines "Independent Style" as follows: "Independent Style is a representation of a representation. It uses contemporary technology to emulate low-tech and usually “cheap” graphical materials and visual styles, signaling that a game with this style is more immediate, authentic and honest than are big-budget titles with high-end 3-dimensional graphics."
The most prominent genre within I.S. is what Juul calls "pixel style", reminiscent of older video game technology and also overlapping with the concept of "Computationally Minimal Art" I formulated a few years ago. My game, Quantum Dash, also fits in this substyle. I found the stylistic approach appealing because it is quick and easy to implement from scratch in a limited time. Part of this easiness stems from the fact that CMA is native to the basic fabric of digital electronic computers. Another attractive aspect is the long tradition of low-tech video games, which makes it easy to reflect on prior work and use the established esthetic language.
Another widely used approach simulates art made with physical materials such as cut-out paper (And Yet It Moves) or wax pastels on paper (Crayon Physics). Both this approach and the aforementioned pixel style apparently refer to older technologies, which makes it tempting to generalize the idea of past references to other genres of I.S. as well. However, I think Juul somewhat stumbles when extending this to styles that don't have a real historical predecessor: "The pixel style 3d games Minecraft and Fez also cannot refer to an earlier time when 3d games were commonly made out of large volumetric pixels (voxels), so like Crayon Physics Deluxe, the historical reference is somewhat counterfactual, but still suggests a simpler, if nonexistent, earlier technology."
I think it would be more fruitful to concentrate on complexity than history when analyzing Independent Style. The esthetic possibility space of modern computing is mind-bogglingly large. It is easy to get lost in all the available potential complexity. However, by introducing constraints and stylistic choices that dramatically reduce the complexity, it is easier even for a solo artist to explore and grasp the space. The constraints and choices don't need to refer to any kind of history -- real or counterfactual -- to be effective.
The voxel style in Minecraft can still be considered somewhat historical -- a 3D expansion of grid-based 2D games such as Boulder Dash. However, I suspect that the esthetic experimentation in independent games will eventually lead to a much wider variety of styles and constraints -- including a bunch that cannot be explained with historical references.
The demoscene has been experimenting with different visual styles for a long time. Even at times when technical innovation was the primary concern, the goal was to find new things that just look good -- and realism was just one possible way of looking good. In 1996, when realtime raytracing was a hot new photorealistic thing among democoders, there was a production called Paper by Psychic Link that dropped jaws with its paper-inspired visuals -- a decade before paper simulation became trendy in the independent games scene. Now that the new PC hardware no longer challenges the demo artist the way it used to, there is much more emphasis on stylistic experimentation in non-constrained PC demos.
Because of this longer history of active experimentation, I think it would be useful for many more independent game developers to look for stylistic inspiration in demoscene works. Of course, not all the tricks and effects adapt well to games, but the technological and social conditions in their production are quite similar to those in low-budget games. After all, demos are real-time-rendering computer programs produced by small groups without budgets, usually over relatively short time periods, so there's very little room for "big-budget practices" there.
Here's a short list of demos with unique esthetic elements that might be able to inspire game esthetics as well. Two of them are for 8-bit computers and the rest for (semi-)modern PCs.
- Metamorphosis by ASD
- IX by Moppi Productions
- Your Song is Quiet part 2 by Inward and TPOLM
- Royal Temple Ball by Synesthetics
- Antifact by Limp Ninja
- Weed by Triebkraft and 4th Dimension
- hwr2 by Kosmoplovci
Still, due to my background, I want to put effort into choosing a set of simple and lightweight esthetic approaches to be used. They will definitely be computationally minimal, but I want to choose some fresh techniques in order to contrast favorably against the square-pixel style that is already quite mainstream in independent games. But that'll be a topic for another post.
Tuesday, 5 August 2014
The resource leak bug of our civilization
A couple of months ago, Trixter of Hornet released a demo called "8088 Domination", which shows off real-time video and audio playback on the original 1981 IBM PC. This demo, among many others, contrasts favorably against today's wasteful use of computing resources.
When people try to explain the wastefulness of today's computing, they commonly offer something I call the "tradeoff hypothesis". According to this hypothesis, the wastefulness of software is compensated by flexibility, reliability, maintainability, and perhaps most importantly, cheap programming work. Even Trixter himself favors this explanation.
I used to believe in the tradeoff hypothesis as well. I saw demo art on extreme platforms as a careful craft that attains incredible feats while sacrificing generality and development speed. However, during recent years, I have become increasingly convinced that the portion of true tradeoff is quite marginal. An ever-increasing portion of the waste comes from abstraction clutter that serves no purpose in final runtime code. Most of this clutter could be eliminated with more thoughtful tools and methods without any sacrifices. What we have been witnessing in the computing world is nothing utilitarian but a reflection of a more general, inherent wastefulness that stems from the internal issues of contemporary human civilization.
The bug
Our mainstream economic system is oriented towards maximal production and growth. This effectively means that participants are forced to maximize their portions of the cake in order to stay in the game. It is therefore necessary to insert useless and even harmful "tumor material" into one's own economic portion in order to avoid losing one's position. This produces an ever-growing global parasite fungus that manifests as things like black boxes, planned obsolescence and artificial creation of needs.
Using a software development metaphor, it can be said that our economic system has a fatal bug. A bug that continuously spawns new processes that allocate more and more resources without releasing them afterwards, eventually stopping the whole system from functioning. Of course, "bug" is a somewhat normative term, and many bugs can actually be reappropriated as useful features. However, resource leak bugs are very seldom useful for anything else than attacking the system from the outside.
Bugs are often regarded as necessary features by end-users who are not familiar with alternatives that lack the bug. This also applies to our society. Even if we realize the existence of the bug, we may regard it as a necessary evil because we don't know about anything else. Serious politicians rarely talk about trying to fix the bug. On the contrary, it is actually getting more common to embrace it instead. A group that calls itself "Libertarians" even builds its ethics on it. Another group called "Extropians" takes the maximization idea to the extreme by advocating an explosive expansion of humankind into outer space. In the so-called Kardashev scale, the developmental stage of a civilization is straightforwardly equated with how much stellar energy it can harness for production-for-its-own-sake.
How the bug manifests in computing
What happens if you give this buggy civilization a virtual world where the abundance of resources grows exponentially, as in Moore's law? Exactly: it adopts the extropian attitude, aggressively harnessing as many resources as it can. Since the computing world is virtually limitless, it can serve as an interesting laboratory example where the growth-for-its-own-sake ideology takes a rather pure and extreme form. Nearly every methodology, language and tool used in the virtual world focuses on cumulative growth while neglecting many other aspects.
Result: alienation
The demoscene insight
What to do?
Saturday, 5 January 2013
I founded a new "oldschool" computer magazine.
In September 2012, I founded Skrolli, a new Finnish computer magazine. This turn in my life surprised even myself.
It started from an image that went viral. Produced by my friend CCR with a lot of ideas from me, it was a faux magazine cover speculating what the longest-living Finnish home computing magazine, MikroBitti, would be like today if it had never renewed itself after the eighties. The magazine happens to be somewhat iconic to those Finns who got immersed in computing before the turn of the millennium, so it reached some relevant audience quite efficiently.
The faux cover was meant to be a joke, but the abundance of comments like "I would definitely subscribe to this kind of magazine" made me seriously consider the possibility of actually creating something like it. I put up a simple web page stating the idea of a new "countercultural" computer magazine that is somewhat similar to what MikroBitti used to be like. In just a few days, over a hundred people showed up on the dedicated IRC channel, and here we are.
Bringing the concept of an oldschool microcomputer magazine to the present era needs some thoughtful reflection. The world has changed a lot; computer hobbyists no longer exist as a unified group, for example. Everyone uses a computer for leisure, and it is sometimes difficult to draw a line between those who are interested in the applications and those who are genuinely interested in the technology. Different activities also have their own subcultures with their own communication channels, and it is often hard to relate to someone whose subculture has a very different basis.
Skrolli defines computer culture as something where the computational aspects are irreducible. It is possible to create visual art or music completely without digital technology, for example, but once the computer becomes the very material (as in the case of pixel art or chip music), the creative activity becomes relevant to our magazine. Everything where programming or other direct access to the computational mechanisms is involved is also relevant, of course.
I also chose to target the magazine to my own language group. In a nation of six million, the various subcultures are closer to one another, so it is easier to build a common project that spans the whole scale. The continuing existence of large computer hobbyist events in this country might also simplify the task. If the magazine had been started in English or even German, there would have been a much greater risk of appealing only to a few specialized niches.
In order to keep myself motivated, I have been considering the possibility that Skrolli will actually start a new movement. Something that brings the computational aspects of computer enthusiasm back to daylight and helps the younger generation find a true, non-compromising relationship with digital technology. Once the movement starts growing on its own, without being tied to a single project, language barriers will no longer exist for it.
I will be busy with this stuff for at least a couple of months until we get the first few issues printed (yes, it will be primarily a paper magazine as a statement against short-lived journalism). After that, it is somewhat likely that I will finish the projects I temporarily abandoned: there will probably be a JIT-enabled version of IBNIZ, and the IBNIZ democoding contest I promised will be arranged. Stay tuned!
Saturday, 17 March 2012
"Fabric theory": talking about cultural and computational diversity with the same words
I'm sure this sounds quite meta, vague or superficial when explained this way, but I'm convinced that the similarities are far more profound than most people assume. In order to bring these concepts together, I've chosen to use the English word "fabric" to refer to the set of form-giving characteristics of languages, computers or just about anything. I've picked this word partly because of its dual meaning, i.e. you can consider a fabric a separate, underlying, form-giving framework just as well as an actual material from which the different artifacts are made. You may suggest a better word if you find one.
Fabrics
The fabric of a human language stems (primarily) from its grammar and vocabulary. The principle of linguistic relativity, also known as the Sapir-Whorf hypothesis, suggests that language defines a lot about what our ways of thinking end up being like, and there is even a bunch of experimental support for this idea. The stronger, classical version of the hypothesis, stating that languages build hard barriers that actually restrict what kind of ideas are possible, is very probably false, however. I believe that all human languages are "human-complete", i.e. they are all able to express the same complete range of human thoughts, although the expression may become very cumbersome in some cases. In most Indo-European languages, for example, it is very difficult to talk about people without mentioning their real or assumed genders all the time, and it may be very challenging to communicate mathematical ideas in an Aboriginal language that has a very rudimentary number system.

Many programmers seem to believe that the Sapir-Whorf hypothesis also works with programming languages. Edsger Dijkstra, for example, was definitely quite Whorfian when stating that teaching BASIC programming to students made them "mentally mutilated beyond hope of regeneration". The fabric of a programming language stems from its abstract structure, not much unlike those of natural languages, although a major difference is that the fabrics of programming languages tend to be much "purer" and more clear-cut, as they are typically geared towards specific application areas, computation paradigms and software development philosophies.
Beyond programming languages there are computer platforms. In the context of audiovisual computer art, the fabric of a hardware platform stems both from its "general-purpose" computational capabilities and the characteristics of its special-purpose circuitry, especially the video and sound hardware. The effects of the fabric tend to be the clearest in the most restricted platforms, such as 8-bit home computers and video game consoles. The different fabrics ("limitations") of different platforms are something that demoscene artists have traditionally been concerned about. Nowadays, there is even an academic discipline with an expanding series of books, "Platform Studies", that asks how video games and other forms of computer art have been shaped by the fabrics of the platforms they've been made for.
The fabric of a human culture stems from a wide memetic mess including things like taboos, traditions, codes of conduct, and, of course, language. In modern societies, a lot stems from bureaucratic, economic and regulatory mechanisms. Behavior-shaping mechanisms are also very prominent in things like video games, user interfaces and interactive websites, where they form a major part of the fabric. The fabric of a musical instrument stems partly from its user interface and partly from its different acoustic ranges and other "limitations". It is indeed possible to extend the "fabric theory" to quite a wide variety of concepts, even though it may get a little bit far-fetched at times.
Noticing one's own box
In many cases, a fabric can become transparent or even invisible. Those who only speak one language can find it difficult to think beyond its fabric. Likewise, those who only know about one culture, one worldview, one programming language, one technique for a specific task or one just-about-anything need some considerable effort to even notice the fabric, let alone expand their horizons beyond it. History shows that this kind of mental poverty leads even some very capable minds into quite disastrous thoughts, ranging from general narrow-mindedness and a false sense of objectivity to straightforward religious dogmatism and racism.

In the world of computing, difficult-to-notice fabrics come out as standards, de-facto standards and "best practices". Jaron Lanier warns about "lock-ins", restrictive standards that are difficult to outthink. MIDI, for example, enforces a specific, finite formalization of musical notes, effectively narrowing the expressive range of a lot of music. A major concern raised by "You are not a gadget" is that technological lock-ins of on-line communication (e.g. those prominent in Facebook) may end up trivializing humanity in a way similar to how MIDI trivializes music.
Of course, there's nothing wrong with standards per se. Standards, also including constructs such as lingua francas and social norms, can be very helpful or even vital to humanity. However, when a standard becomes an unquestionable dogma, there's a good chance for something evil to happen. In order to avoid this, we always need individuals who challenge and deconstruct the standards, keeping people aware of the alternatives. Before we can think outside the box, we must first realize that we are in a box in the first place.
Constraints
In order to make a fabric more visible and tangible, it is often useful to introduce artificial constraints to "tighten it up". In a human language, for example, one can adopt a form of constrained writing, such as a type of poetry, to bring up some otherwise-invisible aspects of the linguistic fabric. In normal, everyday prose, words are little more than arbitrary sequences of symbols, but when working under tight constraints, their elementary structures and mutual relationships become important. This is very similar to what happens when programming in a constrained environment: previously irrelevant aspects, such as machine code instruction lengths, suddenly become relevant.

Constrained programming has long traditions in a multitude of hacker subcultures, including the demoscene, where it has obtained a very prominent role. Perhaps the most popular type of constraint in all hacker subcultures in general is the program length constraint, which sets an upper limit to the size of either the source code or the executable. It seems to be a general rule that working with ever smaller program sizes brings the programmer ever closer to the underlying fabric: in larger programs, it is possible to abstract away a lot of it, but under tight constraints, the programmer-artist must learn to avoid abstraction and embrace the fabric the way it is. In the smallest size classes, even such details as the ordering of sound and video registers in the I/O space become form-giving, as seen in the sub-32-byte C-64 demos by 4mat of Ate Bit, for example.
Mind-benders
Sometimes a language or a platform feels tight enough even without any additional constraints. A lot of this feeling is subjective, caused by the inability to express oneself in the previously learned way. When learning a new human language that is completely different from one's mother tongue, one may feel restricted when there's no counterpart for a specific word or grammatical construct. When encountering such a "boundary", the learner needs to rethink the idea in a way that goes around it. This often requires some mind-bending. The same phenomenon can be encountered when learning different programming languages, e.g. learning a declarative language after only knowing imperative ones.
Among both human and programming languages, there are experimental languages that have been deliberately constructed as "mind-benders", having the kind of features and limitations that force the user to rethink a lot of things when trying to express an idea. Among constructed human languages, a good example is Sonja Elen Kisa's minimalistic "Toki Pona" that builds everything from just over 120 basic words. Among programming languages, the mind-bending experiments are called "esoteric programming languages", with the likes of Brainfuck and Befunge often mentioned as examples.
In computer platforms, there's also a lot of variance in "objective tightness". Large amounts of general-purpose computing resources make it possible to accurately emulate smaller computers; that is, a looser fabric may sometimes completely engulf a tighter one. Because of this, the experience of learning a "bigger" platform after a "smaller" one is not usually very mind-bending compared to the opposite direction.
Nothing is neutral
Now, would it be possible to create a language or a computer that would be totally neutral, objective and universal? I don't think so. Trying to create something that lacks fabric is like trying to sculpt thin air, and fabrics are always built from arbitrarities. Whenever something feels neutral, the feeling is usually deceptive.
Popular fabrics are often perceived as neutral, although they are just as arbitrary and biased as the other ones. A tribe that doesn't have very much contact with other tribes typically regards its own language and culture as "the right one" and everyone else as strange and deviant. When several tribes come together, they may choose one language as their supposedly neutral lingua franca, and a sufficiently advanced group of tribes may even construct a simplified, bland mix-up of all of its member languages, an "Esperanto". But even in this case, the language is by no means universal; the fabric that is common between the source languages is still very much present. Even if the language is based on logical principles, i.e. a "Lojban", the chosen set of principles is arbitrary, not to mention all the choices made when implementing those principles.
Powerful computers can usually emulate many less powerful ones, but this does not make them any less arbitrary. On the contrary, modern IBM PC compatibles are full of arbitrary design choices stacked on one another, forming a complex spaghetti of historical trials and errors that would make no sense at all if designed from scratch. The modern IBM PC platform therefore has a very prominent fabric, and the main reason why it feels so neutral is its popularity. Another reason is that the other platforms share a lot of the same design choices, making today's computer platforms much less diverse than what they were a couple of decades ago. For example, how many modern platforms can you name that use something other than RGB as their primary colorspace, or something other than a power of two as their word length?
Diversity is diminishing in many other areas as well. In countries with an astounding diversity, like Papua New Guinea, many groups are abandoning their unique native languages and cultures in favor of bigger and more prestigious ones. I see some of that even in my own country, where many young and intelligent people take pride in "thinking in English", erroneously assuming that second-language English would be somehow more expressive for them than their mother tongue. In a dystopian vision, the diversity of millennia-old languages and cultures is getting replaced by a global English-language monoculture where all the diversity is subcultural at best.
Conclusion
It indeed seems to be possible to talk about human languages, cultures, programming languages, computing platforms and many other things with similar concepts. These concepts also seem so useful at times that I'm probably going to use them in subsequent articles as well. I also hope that this article, despite its length, gives some food for thought to someone.
Now, go to the world and embrace the mind-bending diversity!
Friday, 30 December 2011
IBNIZ - a hardcore audiovisual virtual machine and an esoteric programming language
Some days ago, I finished the first public version of my audiovisual virtual machine, IBNIZ. I also showed it off on YouTube with the following video:
As demonstrated by the video, IBNIZ (Ideally Bare Numeric Impression giZmo) is a virtual machine and a programming language that generates video and audio from very short strings of code. Technically, it is a two-stack machine somewhat similar to Forth, but with the major exception that the stack is cyclical and also used as an output buffer. Also, as every IBNIZ program is implicitly inside a loop that pushes a set of loop variables on the stack on every cycle, even an empty program outputs something (i.e. a changing gradient as video and a constant sawtooth wave as audio).
How does it work?
To illustrate how IBNIZ works, here's how the program ^xp is executed, step by step:
[figure: step-by-step execution of the program ^xp]
So, in short: on every loop cycle, the VM pushes the values T, Y and X. The operation ^ XORs the values Y and X and xp pops off the remaining value (T). Thus, the stack gets filled with color values where the Y coordinate is XORed by the X coordinate, resulting in the infamous "XOR texture".
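A hand-simulation of the same steps in Python may make this concrete. Note that this uses plain integers rather than IBNIZ's fixed-point words, so it only illustrates the stack discipline, not the real VM:

```python
def run_xor_pixel(t, y, x):
    # The implicit loop pushes T, Y, X for every pixel.
    stack = [t, y, x]
    # '^' pops X and Y and pushes Y XOR X.
    a, b = stack.pop(), stack.pop()
    stack.append(b ^ a)
    # 'x' exchanges the top two values, 'p' pops: together they drop T.
    stack[-1], stack[-2] = stack[-2], stack[-1]
    stack.pop()
    return stack[0]  # the color value left on the stack for this pixel

# Fill one 256x256 frame: the classic XOR texture.
frame = [[run_xor_pixel(0, y, x) for x in range(256)] for y in range(256)]
```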
The representation in the figure was somewhat simplified, however. In reality, IBNIZ uses 32-bit fixed-point arithmetic where the values for Y and X fall between -1 and +1. IBNIZ also runs the program in two separate contexts with separate stacks and internal registers: the video context and the audio context. To illustrate this, here's how an empty program is executed in the video context:
[figure: step-by-step execution of the empty program in the video context]
The colorspace is YUV, with the integer part of the pixel value interpreted as U and V (roughly corresponding to hue) and the fractional part interpreted as Y (brightness). The empty program runs in the so-called T-mode where all the loop variables -- T, Y and X -- are entered in the same word (16 bits of T in the integer part and 8+8 bits of Y and X in the fractional). In the audio context, the same program executes as follows:
[figure: step-by-step execution of the empty program in the audio context]
Just like in the T-mode of the video context, the VM pushes one word per loop cycle. However, in this case, there is no Y or X; the whole word represents T. Also, when interpreting the stack contents as audio, the integer part is ignored altogether and the fractional part is taken as an unsigned 16-bit PCM value.
Also, in the audio context, T increments in steps of 0000.0040 while the step is only 0000.0001 in the video context. This is because we need to calculate 256x256 pixel values per frame (nearly 4 million pixels if there are 60 frames per second) but can make do with considerably fewer PCM samples. In the current implementation, we calculate 61440 audio samples per second (60*65536/64), which is then downsampled to 44100 Hz.
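The rate follows directly from the step size; a quick check of the arithmetic in Python (the constant names here are mine, not IBNIZ's):

```python
FRAMES_PER_SECOND = 60
T_STEP_AUDIO = 0x40    # T advances 0000.0040 per audio sample
T_PER_FRAME = 0x10000  # T advances one integer unit (1.0) per video frame

# Samples per frame: how many 0x40 steps fit into one frame's worth of T.
samples_per_frame = T_PER_FRAME // T_STEP_AUDIO             # 1024
samples_per_second = FRAMES_PER_SECOND * samples_per_frame  # 61440
```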
The scheduling and main-looping logic is the only somewhat complex thing in IBNIZ. All the rest is very elementary, something that can be found as instructions in the x86 architecture or as words in the core Forth vocabulary. Basic arithmetic and stack-shuffling. Memory load and store. An if/then/else structure, two kinds of loop structures and subroutine definition/calling. Also an instruction for retrieving user input from keyboard or pointing device. Everything needs to be built from these basic building blocks. And yes, it is Turing complete, and no, you are not restricted to the rendering order provided by the implicit main loop.
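As a rough illustration of how little machinery the core needs, here is a toy Python sketch covering a handful of IBNIZ-like operations. It uses plain integers instead of 16.16 fixed point and only a tiny subset of the real instruction set, so treat it as a sketch of the idea rather than a reference implementation:

```python
def run(program, t, y, x):
    """Evaluate a tiny IBNIZ-like RPN program for one (T, Y, X) tuple."""
    stack = [t, y, x]  # the implicit loop pushes the loop variables
    for op in program:
        if op == '^':    # bitwise XOR of the top two values
            a, b = stack.pop(), stack.pop()
            stack.append(b ^ a)
        elif op == '+':  # add the top two values
            a, b = stack.pop(), stack.pop()
            stack.append(b + a)
        elif op == '*':  # multiply (the real VM does this in fixed point)
            a, b = stack.pop(), stack.pop()
            stack.append(b * a)
        elif op == 'x':  # exchange the top two values
            stack[-1], stack[-2] = stack[-2], stack[-1]
        elif op == 'p':  # pop the top value
            stack.pop()
    return stack
```

For example, run("^xp", 7, 3, 5) leaves [6] on the stack, matching the XOR-texture walkthrough above.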
The full instruction set is described in the documentation. Feel free to check it out and experiment with IBNIZ on your own!
So, what's the point?
The IBNIZ project started in 2007 with the codename "EDAM" (Extreme-Density Art Machine). My goal was to participate in the esoteric programming language competition at the same year's Alternative Party, but I didn't finish the VM in time. The project therefore fell into the background. Every now and then, I returned to the project for a short while, maybe revising the instruction set a little bit or experimenting with different colorspaces and loop variable formats. There was no great driving force to inspire me to finish the VM until mid-2011, after some quite successful experiments with very short audiovisual programs. Once some of my musical experiments spawned a trend that eventually even got a name of its own, "bytebeat", I really had to push myself to finally finish IBNIZ.
The main goal of IBNIZ, from the very beginning, was to provide a new platform for the demoscene. Something without the usual drawbacks of the real-world platforms when writing extremely small demos. No headers, no program size overhead in video/audio access, extremely high code density, enough processing power and preferably a machine language that is fun to program with. Something that would have the potential to displace MS-DOS as the primary platform for sub-256-byte demoscene productions.
There are also other considerations. One of them is educational: modern computing platforms tend to be mind-bogglingly complex and highly abstracted and lack the immediacy and tangibility of the old-school home computers. I am somewhat concerned that young people whose mindset would have made them great programmers in the eighties find their mindset totally incompatible with today's mainstream technology and therefore get completely driven away from programming. IBNIZ will hopefully be able to serve as an "oldschool-style platform" in a way that is rewarding enough for today's beginning programming hobbyists. Also, as the demoscene needs all the new blood it can get, I envision that IBNIZ could serve as a gateway to the demoscene.
I also see that IBNIZ has potential for glitch art and livecoding. By taking a nondeterministic approach to experimentation with IBNIZ, the user may encounter a lot of interesting visual and aural glitch patterns. As for livecoding, I suspect that the compactness of the code as well as the immediate visibility of the changes could make an IBNIZ programming performance quite enjoyable to watch. The live gigs of the chip music scene, for example, might also find use for IBNIZ.
About some design choices and future plans
IBNIZ was originally designed with an esoteric programming language competition in mind, and indeed, the language has already been likened to the classic esoteric language Brainfuck by several critical commentators. I'm not that sure about the similarity with Brainfuck, but it does have strong conceptual similarities with FALSE, the esoteric programming language that inspired Brainfuck. Both IBNIZ and FALSE are based on Forth and use one-character-long instructions, and the perceived awkwardness of both comes from unusual, punctuation-based syntax rather than deliberate attempts at making the language difficult.
When contrasting esotericity with usefulness, it should be noted that many useful, mature and well-liked languages, such as C and Perl, also tend to look like total "line noise" to the uninitiated. Forth, on the other hand, tends to look like a mess of random unrelated strings to people unfamiliar with the RPN syntax. I therefore don't see how the esotericity of IBNIZ would hinder its usefulness any more than the usefulness of C, Perl or Forth is hindered by their syntaxes. A more relevant concern would be, for example, the lack of label and variable names in IBNIZ.
There are some design choices that often get questioned, so I'll perhaps explain the rationale for them:
- The colors: the color format has been chosen so that more sensible and neutral colors are more likely than "coder colors". YUV has been chosen over HSV because there is relatively universal hardware support for YUV buffers (and I also think it is easier to get richer gradients with YUV than with HSV).
- Trigonometric functions: I pondered for a long while whether to include SIN and ATAN2 and I finally decided to do so. A lot of demoscene tricks, including all kinds of rotating and bouncing things as well as more advanced stuff such as raycasting, depend on the availability of trigonometry. Both of these operations can be found in the FPU instruction set of the x86 and are relatively fundamental mathematical stuff, so we're not going into library bloat here.
- Floating point vs fixed point: I considered floating point for a long while as it would have simplified some advanced tricks. However, IBNIZ code is likely to use a lot of bitwise operations, modular bitwise arithmetic and indefinitely running counters which may end up being problematic with floating-point. Fixed point makes the arithmetic more concrete and also improves the implementability of IBNIZ on low-end platforms that lack FPU.
- Different coordinate formats: TYX-video uses signed coordinates because most effects look better when the origin is at the center of the screen. The 'U' opcode (userinput), on the other hand, gives the mouse coordinates in unsigned format to ease pixel-plotting (you can directly use the mouse coordinates as part of the framebuffer memory address). T-video uses unsigned coordinates for making the values linear and also for easier coupling with the unsigned coordinates provided by 'U'.
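The fixed-point choice above can be illustrated with a minimal Python sketch, assuming a 16.16 split (matching the 0000.0000 notation used earlier) and modular 32-bit wraparound; the function name is invented for illustration:

```python
MASK = 0xFFFFFFFF  # 32-bit words wrap around modularly
ONE = 0x10000      # 1.0 in 16.16 fixed point

def fixmul(a, b):
    # Multiply two 16.16 fixed-point values, keeping 32 bits of result.
    return ((a * b) >> 16) & MASK

half = ONE // 2
assert fixmul(half, half) == ONE // 4  # 0.5 * 0.5 == 0.25

# An indefinitely running counter simply wraps past zero instead of
# accumulating floating-point error:
t = MASK - 0x20
t = (t + 0x40) & MASK
```

This is exactly the kind of modular bitwise arithmetic that floating point does not give for free.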
Right now, all the existing implementations of IBNIZ are rather slow. The C implementation is completely interpretive without any optimization phase prior to execution. However, a faster implementation with some clever static analysis is quite high on the to-do list, and I expect a considerable performance boost once native-code JIT compilers come into use. After all, if we are ever planning to displace MS-DOS as a sizecoding platform, we will need to get IBNIZ to run at least faster than DOSBox.
The use of externally-provided coordinate and time values will make it possible to scale a considerable portion of IBNIZ programs to a vast range of different resolutions from character-cell framebuffers on 8-bit platforms to today's higher-than-high-definition standards. I suspect that a lot of IBNIZ programs can be automatically compiled into shader code or fast C-64 machine language (yes, I've made some preliminary calculations for "Ibniz 64" as well). The currently implemented resolution, 256x256, however, will remain as the default resolution that will ensure compatibility. This resolution, by the way, has been chosen because it is in the same class as 320x200, the most popular resolution of tiny MS-DOS demos.
At some point of time, it will also become necessary to introduce a compact binary representation of IBNIZ code -- with variable bit lengths primarily based on the frequency of each instruction. The byte-per-character representation already has a higher code density than the 16-bit x86 machine language, and I expect that a bit-length-optimized representation will really break some boundaries for low size classes.
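The kind of saving a frequency-based encoding could give can be estimated with a standard Huffman construction. This is a hypothetical sketch with an invented toy "corpus" of instruction characters, not real IBNIZ statistics:

```python
import heapq
from collections import Counter

def huffman_lengths(freqs):
    """Return a {symbol: code length in bits} dict for the given frequencies."""
    # Each heap entry: (total frequency, tiebreaker, {symbol: depth so far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every symbol in them one bit deeper.
        merged = {s: bits + 1 for s, bits in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

corpus = "^xp^xp*+dd^x*p^"  # invented instruction mix for illustration
lengths = huffman_lengths(Counter(corpus))
total_bits = sum(lengths[c] for c in corpus)
# With variable bit lengths, frequent instructions cost well under 8 bits each.
```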
An important milestone will be a fast and complete version that runs in a web browser. I expect this to make IBNIZ much more available and accessible than it is now, and I'm also planning to host an IBNIZ programming contest once a sufficient web implementation is on-line. There is already a Javascript implementation but it is rather slow and doesn't support sound, so we will still have to wait for a while. But stay tuned!
Tuesday, 15 November 2011
Materiality and the demoscene: when does a platform feel real?
I've just finished reading Daniel Botz's 428-page PhD dissertation "Kunst, Code und Maschine: Die Ästhetik der Computer-Demoszene".
The book is easily the best literary coverage of the demoscene I've seen so far. It is basically a history of demos as an artform with a particular emphasis on the esthetic aspects of demos, going very deeply into different styles and techniques and their development, often in relation to the features of the three "main" demoscene platforms (C-64, Amiga and PC).
What impressed me the most in the book and gave me most food for thought, however, was the theoretical insight. Botz uses the late Friedrich Kittler's conception of media materiality as a theoretical device to explain how the demoscene relates to the hardware platforms it uses, often contrasting the relationship to that of the mainstream media art. In short: the demoscene cares about the materiality of the platforms, while the mainstream art world ignores it.
To elaborate: mainstream computer artists regard computers as tools, universal "anything machines" that can translate pure, immaterial, technology-independent ideas into something that can be seen, heard or otherwise experienced. Thus, ideas come before technology. Demosceners, however, have an opposite point of view; for them, technology comes before ideas. A computer platform is seen as a material that can be brought into different states, in a way comparable to how a sculptor brings blocks of stone into different forms. The possibilities of a material can be explored with direct, uncompromising interaction such as low-level programming. The platform is not neutral; its characteristics are essential to what demos written for it end up being like. While a piece of traditional computer art can often be safely removed from its specific technological context, a demo is no longer a demo if the platform is neglected.
The focus on materiality also results in a somewhat unusual relationship with technology. For most people, computer platforms are just evolutionary stages on a timeline of innovation and obsolescence. A device serves for a couple of years before getting abandoned in favor of a new model that is essentially the same with higher specs. The characteristics of a digital device boil down to numerical statistics in the spirit of "bigger is better". The demoscene, however, sees its platforms as something more multi-faceted. An old computer or gaming console may be interesting as an artistic material just because of its unique combination of features and limitations. It is fine to have historical, personal or even political reasons for choosing a specific platform, but they're not necessary; the features of the system alone are enough to spark someone's creative enthusiasm. As so many people misunderstand the relationship between demoscene and old hardware as a form of "retrocomputing", it is very delightful to see such an accurate insight into it.
But is it really that simple?
I'm not entirely familiar with the semantic extent of "materiality" in media studies, but it is apparent that it primarily refers to physicality and concreteness. On many occasions, Botz contrasts materiality against virtuality, which, I think, is an idea that stems from Gilles Deleuze. This dichotomy is simple and appealing, but I disagree with Botz in how central it is to what the demoscene is doing. After all, there are, for example, quite a few 8-bit-oriented demoscene artists who fully approve of virtualization. Artists who don't care whether their works are shown with emulators or real hardware at parties, as long as the logical functionality is correct. Some even produce art for the C-64 without having ever owned a material C-64. Therefore, virtualization is definitely not something that is universally frowned upon on the demoscene. It is apparently also possible to develop a low-level, concrete material relationship with an emulated machine, a kind of "material" that is totally virtual to begin with!
Computer programming is always somewhat virtual, even in its most down-to-the-metal incarnations. Bits aren't physical objects; concentrations of electrons only get the role of bits from how they interact with the transistors that form the logical circuits. A low-level programmer who strives for a total, optimal control of a processor doesn't need to be familiar with these material interactions; just knowing the virtual level of bits, registers, opcodes and pipelines is enough. The number of abstraction layers between the actual bit-twiddling and the layer visible to the programmer doesn't change what programming a processor feels like. A software emulator or an FPGA reimplementation of the C-64 can deliver the same "material feeling" to the programmer as the original, NMOS-based C-64. Also, if the virtualization is perfect enough to model the visible and audible artifacts that stem from the non-binary aspects of the original microchips, even a highly experienced enthusiast can be fooled.
I therefore think it is more appropriate to consider the "feel of materiality" that demosceners experience to stem from the abstract characteristics of the platform than its physicality. Programming an Atari VCS emulator running in an x86 PC on top of an operating system may very well feel more concrete than programming the same PC directly with the x86 assembly language. When working with a VCS, even a virtualized one, a programmer needs to be aware of the bit-level machine state at all times. There's no display memory in the VCS; the only way to draw something on the screen is by telling the processor to put specific values in specific video chip registers at specific clock cycles. The PC, however, does have a display memory that holds the pixel values of the on-screen picture, as well as a video chip that automatically refreshes its contents to the screen. A PC programmer can therefore use very generic algorithms to render graphics in the display memory without caring about the underlying hardware, while on the VCS everything needs to be thought out from the specific point of view of the video chip and the CPU.
It seems that the "feel of materiality" has a great deal to do with complexity -- of both the platform and the manipulated data. A high-resolution picture, taking up megabytes of display memory, looks nearly identical on a computer screen regardless of whether it is internally represented in RGB or YUV colorspace. However, when we get a pixel artist to create versions of the same picture for various formats that use less than ten kilobytes of display memory, such as PC textmode or C-64 multicolor, the specific features and constraints of each format shine out very clearly. High levels of complexity allow for generic, platform-independent and general-purpose techniques whereas low levels of complexity require the artist to form a "material relationship" with the format.
Low complexity and the "feel of materiality" are also closely related to the "feel of total control" which I regard as an important state that demosceners tend to reach for. The lower the complexity of a platform, the easier it is to reach a total understanding of its functionality. Quite often, coders working on complex platforms choose to deliberately lower the perceived complexity by concentrating on a reduced, "essential" subset of the programming interface and ignoring the rest. Someone who codes for a modern PC, for example, may want to ignore the polygonal framework of the 3D API altogether and exclusively concentrate on shader code. Those who write softsynths, even for tiny size classes, tend to ignore high-level synthesis frameworks that may be available on the OS and just use a low-level PCM-soundbuffer API. Subsets that provide nice collections of powerful "Lego blocks" are the way to go. Even though bloated system libraries may very well contain useful routines that can be discovered and abused in things like 4-kilobyte demos, most democoders frown upon this idea and may even consider it cheating.
Emulators, virtual platforms and reduced programming interfaces are ways of creating pockets of lowered complexity within highly complex systems -- pockets that feel very "material" and controllable for a crafty programmer. Even virtual platforms that are highly abstract, idealistic and mathematical may feel "material". The "oneliner music platform", merely defined as a C-like expression syntax that calculates PCM sample values, is a recent example of this. All of its elements are defined on a relatively high level, with no specification of any kind of low-level machine, virtual or otherwise. Nevertheless, a kind of "material characteristic" or "immanent esthetics" still emerges from this "platform", both in how the short formulas tend to sound and in what kind of hacks and optimizations are better than others.
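For reference, this is roughly all the machinery the "oneliner music platform" consists of -- a minimal Python rendering of a bytebeat-style expression (the particular formula is chosen here just for illustration):

```python
def sample(t):
    # One bytebeat-style expression: t is the running sample counter,
    # the result is truncated to an unsigned 8-bit PCM value.
    return (t * (42 & (t >> 10))) & 0xFF

# Render one second of 8 kHz mono audio as raw headerless 8-bit PCM.
pcm = bytes(sample(t) for t in range(8000))
```

Writing pcm to a file and playing it back as raw unsigned 8-bit PCM is the whole pipeline: no headers, no synthesis framework, just the expression.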
The "oneliner music platform" is perhaps an extreme example, but in general, purely virtual platforms have been there for a while already. Things like Java demos, as well as multi-platform portable demos, have been around since the late 1990s, although they've usually remained quite marginal. For some reason, however, Botz seems to ignore this aspect of the demoscene nearly completely, merely stating that multi-platform demos have started to appear "in recent years" and that the phenomenon may grow bigger in the future. Perhaps this is a deliberate bias chosen to avoid topics that don't fit well within Botz's framework. Or maybe it's just an accident. I don't know.
Conclusion
To summarize: when Botz talks about the materiality of demoscene platforms, he often refers to phenomena that, in my opinion, could be more fruitfully analyzed with different conceptual devices, especially complexity. Wherever the dichotomy of materiality and immateriality comes up, I see at least three separate conceptual dimensions working under the hood:
1. Art vs craft (or "idea-first" vs "material-first"). This is the area where Botz's theory works very well: the demoscene is, indeed, more crafty or "material-first" than most other communities of computer art. However, the material (i.e. the demo platform) doesn't need to be material (i.e. physical); the crafty approach works equally well with emulated and purely virtual platforms. The "artsy" approach, leading to conceptual and "avant-garde" demos, has gradually become more and more accepted, though there's still a lot of crafty attitude in "art demos" as well. I consider chip musicians, circuit-benders and homebrew 8-bit developers about as crafty on average as demosceners, by the way.
2. Physicality vs virtuality. There's a strong presence of classic hardware enthusiasm on the demoscene as well as people who build their own hardware, and they definitely are in the right place. However, I don't think the physical hardware aspect is as important in the demoscene as, for example, in the chip music, retrogaming and circuit-bending communities. On the demoscene, it is more important to demonstrate the ability to do impressive things in limited environments than to be an owner of specific physical gear or to know how to solder. A C-64 demo can be good even if it is produced with an emulator and a cross-compiler. Also, as demo platforms can be very abstract and purely virtual as well and still be appealing to the subculture, I don't think there's any profound dogma that would drive demosceners towards physicality.
3. Complexity. The possibility of forming a "material relationship" with an emulated platform shows that the perception of "materiality", "physicality" and "controllability" is more related to the characteristics of the logical platform than to how many abstraction layers there are under the implementation. A low computational complexity, either in the form of platform complexity or program size, seems to correlate with a "feeling of concreteness" as well as the prominence of "emergent platform-specific esthetics". What I see as the core methodology of the demoscene seems to work better at low than high levels of complexity and this is why "pockets of lowered complexity" are often preferred by sceners.
Don't get me wrong: despite all the disagreements and my somewhat Platonist attitude to abstract ideas in general, I still think virtuality and immateriality have been getting too much emphasis in today's world and we need some kind of a countercultural force that defends the material. Botz also covers possible countercultural aspects of the demoscene, deriving them from the older hacker culture, and I found all of them very relevant. My basic disagreement comes from the fact that Botz's theory doesn't entirely match with how I perceive the demoscene to operate, and the subculture as a whole cannot therefore be put under a generalizing label such as "defenders and lovers of the materiality of the computer".
Anyway, I really enjoyed reading Botz's book and especially appreciated the theoretical insight. I recommend the book to everyone who is interested in the demoscene, its history and esthetic variety, AND who reads German well. I studied the language for about five years at school but I still found the text quite difficult to decipher in places. I therefore sincerely hope that my problems with the language haven't led me to any critical misunderstandings.
Friday, 28 October 2011
Some deep analysis of one-line music programs