
Thursday, 9 April 2015

Bringing magic back to technology

Back in 2011, I was one of the discoverers of "Bytebeat", a class of very short computer programs that generate music. These programs received quite a lot of attention because they seem far too short for the complex musical structures they output. I wrote several technical articles about Bytebeat (arxiv, countercomplex 1, countercomplex 2) as well as a Finnish-language academic article about the social dynamics of the phenomenon. Those who just need a quick glance may want to check out one of the YouTube videos.
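
For readers who have never run one, below is a minimal sketch of a Bytebeat program, written out longhand in C rather than as a one-liner. The formula is in the style of the early examples; the raw output is meant to be interpreted as roughly 8 kHz unsigned 8-bit mono audio (on Linux, piping it into aplay with its default raw settings usually works).

    #include <stdio.h>

    /* A minimal Bytebeat sketch: feed an incrementing counter t through a
       short arithmetic formula and write the low 8 bits of the result to
       stdout as raw unsigned 8-bit samples (interpret at ~8 kHz, mono). */
    int main(void)
    {
        for (unsigned long t = 0; ; t++) {
            unsigned char sample = t * ((t >> 12 | t >> 8) & 63 & t >> 4);
            putchar(sample);
        }
    }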

The popularity of Bytebeat can be partially explained with the concept of "hack value", especially in the context of Hakmem-style hacks -- very short programs that seem to outgrow their size. The Jargon File gives the following formal definition for "hack value" in the context of very short visual programs, display hacks:
"The hack value of a display hack is proportional to the esthetic value of the images times the cleverness of the algorithm divided by the size of the code."
Bytebeat programs apparently have a high hack value in this sense. The demoscene, being distinct from the MIT hacker lineage, does not really use the term "hack value". Still, its own ultra-compact artifacts (executables of 4096 bytes and less) are judged in a very similar manner. I might just replace "cleverness of the algorithm" with something like "freshness of the output compared to earlier work".
Another related hacker concept is "magic", which the Jargon File defines as follows:
1. adj. As yet unexplained, or too complicated to explain; compare automagically and (Arthur C.) Clarke's Third Law: "Any sufficiently advanced technology is indistinguishable from magic." "TTY echoing is controlled by a large number of magic bits." "This routine magically computes the parity of an 8-bit byte in three instructions." 
2. adj. Characteristic of something that works although no one really understands why (this is especially called black magic). 
3. n. [Stanford] A feature not generally publicized that allows something otherwise impossible, or a feature formerly in that category but now unveiled. 
4. n. The ultimate goal of all engineering & development, elegance in the extreme; from the first corollary to Clarke's Third Law: "Any technology distinguishable from magic is insufficiently advanced".
Short programs with a high hack value are magical especially in the first two senses. How and why Bytebeat programs work was often a mystery even to their discoverers. Even when some theory about them was devised, it was often quite difficult to understand or apply. Bitwise arithmetic, in particular, tends to be put to very esoteric uses in Bytebeat.

The hacker definition of magic indirectly suggests that highly advanced and elegant engineering should be difficult to understand. Indecipherable program code has even been celebrated in contests such as the IOCCC. This idea is highly countercultural. In the mainstream software industry, clever hacks are despised: all code should be as easy as possible to understand and maintain. The mystical aspects of hacker subcultures are there to compensate for the dumb, odorless and dehumanizing qualities of industrial chores.

Magic appears in the Jargon File in two ways. Terms such as "black magic", "voodoo programming" and "cargo cult programming" represent cases where the user doesn't know what they are doing and may not even aspire to know. Another aspect is exemplified by terms such as "deep magic" and "heavy wizardry": there, the technology may be difficult to understand or chaotic to control, but at least some talented individuals have managed to do so. These aspects could be called "wild" and "domesticated", respectively, or alternatively "superstition" and "esoterica".

Most technology used to be magical in the wild/superstitious way. Cultural evolution does not require individual innovators to understand how their innovations work. Fermentation, for example, had been used for thousands of years without anyone having seen a micro-organism. Despite this, cultural evolution can find very good solutions if enough time is given: traditional craft designs often have a kind of optimality that is very difficult to attain from scratch even with the help of modern science. (See e.g. Robert Boyd et al.'s articles about cultural evolution of technology)

Science and technology have countless examples of "wild magic" getting "domesticated". An example from computer music is the Karplus-Strong string model. Earlier models of acoustic simulation had been constructed via rational analysis alone, so they were prohibitively expensive for real-time synthesis. Then, Karplus and Strong accidentally discovered a very resource-efficient model due to a software bug, and nowadays it is pretty standard textbook material without much magical glamor at all.
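
To give an idea of how little the domesticated version of that accident actually requires, here is a minimal sketch of the basic Karplus-Strong loop. The delay-line length, duration and raw 8-bit output format are arbitrary choices of mine, but the core of the algorithm really is just a noise-filled buffer whose samples are repeatedly averaged with their neighbours, which is enough to produce a decaying plucked-string tone.

    #include <stdio.h>
    #include <stdlib.h>

    #define N       100     /* delay-line length; pitch = sample rate / N */
    #define SAMPLES 40000   /* ~5 seconds at an 8 kHz playback rate       */

    /* Basic Karplus-Strong plucked string: fill a short delay line with
       noise ("pluck"), then read it back cyclically, replacing each sample
       with the average of itself and its neighbour (a crude low-pass
       filter). Output: raw unsigned 8-bit samples on stdout. */
    int main(void)
    {
        unsigned char buf[N];
        for (int i = 0; i < N; i++)
            buf[i] = rand() & 0xff;

        for (int t = 0; t < SAMPLES; t++) {
            int i = t % N;
            putchar(buf[i]);
            buf[i] = (buf[i] + buf[(i + 1) % N]) / 2;
        }
        return 0;
    }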

Magic and rationality support each other. In good technology, they would coexist in symbiosis. Industrialization, however, brought a cult of obsolescence that prevented this kind of relationship. Traditions, time-proven designs, intuitive understanding and irreducible wisdom started to get obsoleted by one-dimensional reductive analysis. Nowadays, "magic" is only tolerated as bursts of inspiration that must be captured within reductivist frameworks before they break something.

In the 20th century, utilitarian industrial engineering started to get obsoleted by its bastard offspring, tumorous engineering. This is what I discussed in my earlier essay "The resource leak bug of our civilization". Accumulation of bloat and complexity for their own sake is making technology increasingly difficult to rationally understand and control. In computing, where tumorous engineering dominates, designers are already longing for the utilitarian industry where simplicity, controllability, resource-efficiency and expertise were still valued.

When advocating the reintroduction of magic, one must be careful not to endorse the kind of superstitious thinking that already has a good hold on how people relate to technology. Devices that hide their internal logic and instead base their interfaces on guessing what the user wants are a kind of Aladdin's lamp to most people. You don't really understand how they work, but at least their spirits fulfill your wishes as long as you don't make them angry.

The way magic manifests itself in traditional technology is diametrically opposed to this. The basic functional principles of a bow, a canoe or a violin can be learned via simple observation and experimentation. The mystery lies elsewhere: in the evolutionary design details that are difficult to rationally explain, in the otherworldly talent and wisdom of the master crafter, in the superhuman excellence of the skilled user. If the design has been improved over generations, even minor improvements become difficult to make, which gives it an aura of perfection.

The magic we need more of in today's technological world is of the latter kind. We should strive to increase depth rather than outward complexity, human virtuosity rather than consumerism, flexibility rather than effortlessness. The mysteries should invite attempts at understanding and exploitation rather than blind reliance or worship; this is also the key difference between esoterica and superstition.

One definition of magic, compatible with that in the Jargon File, is that it breaks people's preconceptions of what is possible. In order to challenge and ridicule today's technological bloat, we should particularly aim at discoveries that are "far too simple and random to work but still do". New ways to use and combine the available grassroots-level elements, for instance.

A Bytebeat formula is a simple arrangement of digital-arithmetic operations that have been elementary to computers since the very beginning. It is apparently something that should have been discovered decades ago, but it wasn't. Hakmem contains a few "sound hacks" that could have evolved into Bytebeat if a wide enough counter had been introduced into them, but there are no indications that this ever took place. It is mind-boggling to think that the space of very short programs remains so uncharted that random excursions into it can still churn out interesting new structures after seventy years of computing.

Now consider that we are surrounded by millions of different natural "building blocks" such as plants, micro-organisms and geological materials. I honestly believe that, despite hundreds of thousands of years of cultural evolution, their combinatorial space is nowhere near fully charted. For instance, it could be possible to find a rather simple and rudimentary technique that would make micro-organisms transform sand into a building material superior to everything we know today. A favorite fantasy scenario of mine is a small self-sufficient town that builds advanced spacecraft from scratch with "grassroots-level" techniques that seem magical to our eyes.

How to develop this kind of magic? Rational analysis and deterministic engineering will help us to some extent, but we are dealing with systems so chaotic and multidimensional that decades of random experimentation would be needed for many crucial leaps forward. And we don't really have those decades if we want to beat our technological cancer.

Fortunately, the same Moore's law that empowers tumorous engineering also provides a way out. Computers make it possible to manage chaotic systems in ways other than neurotic modularization. Today's vast computational capacities can be used to simulate the technological trial-and-error of cultural evolution with various levels of accuracy. Of course, simulations often fail, but at least they can give us a compass for real-world experimentation. Another important compass is "hack value" or "scientific intuition" -- the modern manifestations of the good old human sense of wonder that has been providing fitness estimates for cultural evolution since time immemorial.
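
As a toy illustration of the simplest possible form of such simulated trial-and-error, the sketch below randomly mutates the three shift constants of a Bytebeat-like formula and keeps a mutation whenever a crude "interestingness" score improves. The score used here (counting changes in the output stream) is an arbitrary stand-in of my own; finding a fitness estimate that actually tracks human wonder is exactly where hack value and intuition come in.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Crude stand-in for "interestingness": how often does the output byte
       change over the first 65536 samples of t*((t>>a|t>>b)&63&t>>c)? */
    static int score(int a, int b, int c)
    {
        int changes = 0;
        unsigned char prev = 0;
        for (unsigned long t = 0; t < 65536; t++) {
            unsigned char s = t * ((t >> a | t >> b) & 63 & t >> c);
            if (s != prev) changes++;
            prev = s;
        }
        return changes;
    }

    /* Random-mutation hill climbing over the formula parameters:
       mutate one constant at a time, keep the change only if it improves. */
    int main(void)
    {
        srand((unsigned)time(NULL));
        int a = 12, b = 8, c = 4;
        int best = score(a, b, c);

        for (int gen = 0; gen < 1000; gen++) {
            int na = a, nb = b, nc = c;
            switch (rand() % 3) {
            case 0: na = 1 + rand() % 15; break;
            case 1: nb = 1 + rand() % 15; break;
            case 2: nc = 1 + rand() % 15; break;
            }
            int s = score(na, nb, nc);
            if (s > best) {
                a = na; b = nb; c = nc; best = s;
                printf("gen %d: t*((t>>%d|t>>%d)&63&t>>%d), score %d\n",
                       gen, a, b, c, best);
            }
        }
        return 0;
    }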

Saturday, 14 March 2015

Counteracting alienation with technological arts and crafts

The alienating effects of modern technology have been discussed a lot during the past few centuries. Prominent thinkers such as Marx and Heidegger have pointed out how people get reduced to one-dimensional resources or pieces of machinery. Later on, grasping the real world has become increasingly difficult due to the ever-complexifying network of interface layers. I touched on this topic a little in an earlier text of mine.

How to solve the problem? Discussion tends to polarize into quarrels between techno-utopians ("technological progress will automatically solve all the problems") and neo-luddites ("the problems are inherent in technology, so we should avoid it altogether"). I looked for a more constructive view and found it in Albert Borgmann.

According to Borgmann, the problem is not in technology or consumption per se, but in the fact that we have given them primary importance in our lives. To solve the problem, Borgmann proposes that we give that importance to something more worthwhile instead – something he calls "focal things and practices". His examples include music, gardening, running, and the culture of the table. Technological society would be there to protect these focalities instead of trying to make them obsolete.

In general, focal things and practices are things that somehow reflect the whole of human existence; things in which self-expression, excellence and deep meanings can be cultivated. Traditional arts and crafts often seem to fulfill the requirements, but Borgmann becomes skeptical whenever high technology gets involved. Computers and modern cars easily alienate the hands-on craftsperson with their blackboxed microelectronics.

Perhaps the most annoying part of Eric S. Raymond's "How To Become A Hacker" is the one titled "Points For Style". Raymond states there that an aspiring hacker should adopt certain non-computer activities such as language play, sci-fi fandom, martial arts and musical practice. This sounds to me like the enforcement of a rather narrow subcultural stereotype, but reading Borgmann made me realize an important point there: computer activities alone aren't enough even for computer hackers – they need to be complemented by something more focal.

Worlds drifting apart

So far so good: we should maintain a world of focal things supported by a world of high-tech things. The former is quite earthly, so everything that involves computing and such belongs to the latter. But what if these two worlds drift too far apart?

Borgmann believes that focal things can clarify technology. The contrast between focal and technological helps people put high-tech in proper roles and demand more tangibility from it. If the technology is material enough, its material aspects can be deepened by the materiality of the focal things. When dealing with information technology, however, Borgmann's idea starts losing relevance. Virtual worlds no longer speak a material language, so focal traditions no longer help grasp their black boxes. Technology becomes a detached, incomprehensible bubble of its own – a kind of "necessary evil" for those who put the focal world first.

In order to keep the two worlds anchored together, I suppose we need to build some islands between them. We need things and practices that are tangible and human enough to be earthed by "real" focal practices, but high-tech enough to speak the high-tech language.

Hacker culture provides one possible key. The principles of playful exploration and technological self-expression can be expanded to many other technologies besides computing. Even if "true focality" can't be reached, the hacker attitude at least counteracts passive alienation. Art and craft building on the assumed essence of a technology can be powerful in revealing the human-approachable dimensions of that technology.

How many hackers do we need?

I don't think it is necessary for every user of a complex technology to actively anchor it to reality. However, I do think everyone's social circle should include people who do. Assuming a minimal Dunbar's number of 100, we can deduce that at least one percent of the users of any given technology in any social group should be part of a "hacker culture" that anchors it.

Anchoring a technology requires a relationship deeper than what mere rational expertise provides. I would suggest that at least 10% of the users of a technology (preferably a majority, however) should have a solid rational understanding of it, and at least 10% of those should be "hackers" (which again yields the one-in-a-hundred figure of the previous paragraph). A buffer of "casual experts" between superficial and deep users would also have some sociodynamical importance.

We also need to anchor those technologies that we don't use directly but which are used for producing the goods we consume. Since everyone eats food and wears clothes, every social circle needs to have some "gardening hackers" and "textile hackers" or something with a similar anchoring capacity. In a scenario where agriculture and textile industry are highly automated, some "automation hackers" may be needed as well.

Computing needs to be anchored from two sides – physical and logical. The physical aspect could be well supported by basic electronics craft or something like ham radio, while the logical side could be nurtured by programming-centered arts, maybe even by recreational mathematics.

The big picture

Sophisticated automation leaves people with increasing amounts of free time. Meanwhile, knowledge of and control over technology are held by ever fewer people. It is therefore quite reasonable to use the extra free time for activities that help keep technology in people's hands. A network of technological crafters may also provide alternative infrastructure that decreases dependence on the dominant machinery.

In an ideal world, people would be constantly aware of the skills and interests present in their various social circles. They would be ready to adopt new interests depending on which technologies need stronger anchoring. The society in general would support the growth and diversification of those groups that are too small or demographically too uniform.

At their best, technological arts would have a profound positive effect on how the majority experiences technology – even when practiced by only a few. They would inspire awe, appreciation and fascination in the masses but at the same time invite them to try to understand the technology.

This was my humble suggestion for one possible way to counteract technological alienation. I hope I managed to be inspiring.

Sunday, 7 September 2014

How I view our species and our world

My recent blog post "The resource leak bug of our civilization" has gathered some interest, especially after getting noticed by Ran Prieur in his blog. I therefore decided to translate another essay to give it a wider context. Titled "A few words about humans and the world", it is intended as a kind of holistic summary of my worldview, and it is especially intended for people who have had difficulties in understanding the basis of some of my opinions.

---

This writeup is supposed to be concise rather than convincing. It therefore skips a lot of argumentation, linking and breakdowns that might be considered necessary by some. I'll get back to them in more specific texts.

1. Constructions

Humans are builders. We build not only houses, devices and production machinery, but also cultures, conceptual systems and worldviews. Various constructions can be useful as tools; however, we also have an unfortunate tendency to chain ourselves to them.

Right now, humankind has chained itself to the worship of abundance: it is imperative to produce and consume more and more of everything. Quantitative growth is imagined to be the same thing as progress. Especially during the last hundred years, the theology of abundance has invaded such deep and profound levels that most people don't even realize its effect. It's not just about consumerism on a superficial level, but about the whole economic system and worldview.

Extreme examples of growth ideology can easily be found in the digital world, where it manifests in an intensified, second-power form. What happens if worshippers of abundance get their hands on a virtual world where the amount of available resources increases exponentially? Right, they start bloating up the use of resources, sometimes even for its own sake. It is not at all uncommon to require a thousand times more memory and computational power than necessary for a given task. Mindless complexity and purposeless activity are equated with technological advancement. The tools and methods the virtual world is built with have been designed from the point of view of idealized expansion, so it is difficult to even imagine alternatives.

I have some background in a branch of hacker culture, the demoscene, where the highest ideal is to use minimal resources in an optimal way. The nature of the most valued progress there is condensing rather than expanding: doing new things under ever stricter limitations. This has helped me perceive the distortions of the digital world and their counterparts in the material world.

In everyday life, the worship of growth shows up, above all, as the complexification of everything. It is becoming increasingly difficult to understand various socio-economic networks or even the functionality of ordinary technological devices. This alienates people from the basics of their lives. Many try to fight this alienation by creating pockets of understandability. Escapism, conservatism and extremism rise. On the other hand, there is also an increase in do-it-yourself culture and a longing for a more self-sufficient way of life. People should be encouraged towards these latter, positive means of countering alienation rather than towards channels that increase conflict.

An ever greater portion of techno-economic structures consists of useless clutter, so-called economic tumors. They form when various decision-makers attempt to keep their acquired slices of the cake as big as possible. Unnecessary complexity slows down and narrows progress instead of being a requirement for it. Expansion needs to be balanced with contraction -- you can't breathe in without breathing out.

The current phase of expansion is finally about to end, since the fossil fuels that made it possible are getting scarcer, and we still know of no equally powerful replacement. As the phase took so long, the transition into contraction will be difficult for many. An ever larger portion of the economy will escape into the digital world, where it is possible to maintain the unrealistic swelling longer than in the material world.

Dependencies of production can be depicted as a pyramid where the things on the higher levels are built from the things below. In today's world, people always try to build on the top, so the result looks more like a shaky tower than a pyramid. Most new things could easily be built at lower levels. The lowest levels of the pyramid could also be strengthened by giving more room to various self-sufficient communities, local production and low-tech inventions. Technological and cultural evolution is not a one-dimensional road where "forward" and "backward" are the only alternatives. Rather, it is a network of possibilities branching out in every direction, and even its strange side-loops are worth knowing.

2. Diversity

It is often assumed that growth increases the number of available options. In principle, this is true -- there are more and more different products on store shelves -- but their differences are more and more superficial. The same is true of ways of life: it is increasingly difficult to choose a way of life that isn't attached to the same chains of production or models of thinking as every other way of life. The alternatives boil down to the same basic consumer-whoredom.

Proprietors overstandardize the world with their choices, but this is probably not a very conscious activity. When there are enough decision-makers playing the same game with the same rules, the world will eventually shape itself around these rules (including all the ingrained bugs and glitches). Conspiracy theories or incarnations of evil are therefore not required to explain what's going on.

The human-built machinery is getting increasingly complex, so it is also increasingly difficult to talk about it in concrete terms. Many therefore seek help from conceptual tools such as economic theories, legal terminology or ideologies, and subsequently forget that these are just tools. Nowadays, money- and production-centered ways of conceptualizing the world have become so dominant that people often don't realize that there are alternatives.

Diversity helps nature adapt to changes and recover from disasters. For the same reason, human culture should be as diverse as possible, especially now that the future is very uncertain and we have already started to crash into the wall. It is necessary to make it considerably easier to choose radically different ways of life. Much more room should be given to experimental societies. Small and unique languages and cultures should be treasured.

There's no one-size-fits-all model that would be best for everyone. However, I believe that most people would be happiest in a society that actively maintains human rights and makes certain that no one is left behind. The dictatorship of the majority, however, is not such a crucial feature of a political system in a world where everyone can freely choose a suitable system. Regardless, dissidents should be given enough room in every society: not everyone necessarily has the chance to choose a society, and excessive unanimity tends to be quite harmful anyway.

3. Consciousness

Thousands of years ago, the passion for construction became so overwhelming that the quest for mental refinement didn't keep pace. I regard this as the main reason why human beings are so prone to becoming slaves of their constructs. Rational analysis is the only mental skill that has been nurtured somewhat sufficiently, and even rational analysis often becomes just a tool for various emotional outbursts and desires. Even very intelligent people may be completely adrift in their emotions and motivations, making them inclined to adopt ridiculously one-dimensional thought constructs.

Putting one's own herd before everyone else is an example of an attitude that may work among small hunter-gatherer groups, but which should no longer have a place in modern civilization. A population that has the intellectual faculties to build global networks of cause and effect should also have the ability to make decisions at the corresponding level of understanding instead of being driven by pre-intellectual instincts.

Assuming that humankind still wants to maintain complex societal and technological structures, it should fill its consciousness gap. Any school system should teach the understanding and control of one's own mind at least as seriously as reading and writing. New practical mental methods, suitable for an ever greater variety of people, should be developed at least as passionately as new material technology.

For many people, a worldview is still primarily a way of expressing one's herd instincts. They argue and even fight about whose worldview is superior. Hopefully, the future will bring a more individual attitude towards them: there is no single "truth" but different ways of conceptualizing reality. A way that is suitable for one mind may even be destructive to another. Science produces facts and theories that can be used as building blocks for different worldviews, but it is not possible to put these worldviews into an objective order of preference.

4. Life

The purposes of life for individual human beings stem from their individual worldviews, so it is futile to suggest rules-of-thumb that suit all of them. It is much easier to talk about the purpose of biological life, however.

The basic nature of life, based on how life is generally defined, is active self-preservation: life continuously maintains its form, spreads and adapts into different circumstances. The biological role of a living being is therefore to be part of an ecosystem, strengthening the ecosystem's potential for continued existence.

The longer there is life on Earth, the more likely it is to expand into outer space at some point in time. This expansion may already take place during the human era, but I don't think we should specifically strive for it before we have learned how to behave non-destructively. However, I'm all for the production of raw materials and energy in space if it helps us abstain from raping our home planet.

At their best, intelligent lifeforms could function as a kind of gardeners: gardeners that strengthen and protect the life of their respective homeworlds and help spread it to other spheres. However, I don't dare suggest that the current human species has the prerequisites for this kind of role. At this moment, we are so lost that we couldn't even become a galactic plague.

Some people regard the human species as a mistake of evolution and want us to abandon everything that differentiates us from other animals. I see no problem per se in the natural behavior of Homo sapiens, however: there's just an unfortunate imbalance of traits. We should therefore not abandon reason, abstraction or constructiveness but rebalance them with more conscious self-improvement and mental refinement.

5. The end of the world

It is not possible to save the world if that means saving the current societies and consumer-centric lifestyles. At most, we can soften the crash a little. It is therefore more relevant to concentrate on activities that make the postapocalyptic world more life-friendly.

As there is still an increasing amount of communications technology and automation in the world, and the privileged even have increasingly more free time, these facilities should be used right now for sowing the seeds for a better world. If we start building alternative constructs only when the circumstances force us to, the transition will be extremely painful.

People increasingly dwell in bubbles of ease facilitated by technology. It is therefore a good idea to bring suitable signals and facilities into these bubbles. Video game technology, for example, can be used to help people reclaim their minds, lives and material environments. Entertainment in general can be used to increase interest in such reclamation.

Many people imagine progress as a kind of unidirectional growth curve and therefore regard the postapocalyptic era as a "return to the past". However, the future world is more likely to become radically different from any previous historical era -- regardless of some possibly "old-fashioned" aspects. It may therefore be more relevant to use fantasy rather than history to envision the future.

Sunday, 14 July 2013

Slower Moore's law wouldn't be that bad.

Many aspects of the world of computing are dominated by Moore's law -- the phenomenon that the density of integrated circuits tends to double every two years. In mainstream thought, this is often equated with progress -- a deterministic forward-march towards the universal better along a metaphorical one-dimensional path. In this essay, I'm creating a fictional alternative timeline to bring up some more dimensions. A more moderate pace in Moore's law wouldn't necessarily be that bad after all.

Question: What if Moore's law had been progressing at a half speed since 1980?

I won't try to explain the point of divergence. I just accept that, since 1980, certain technological milestones would have been rarer and further apart. As a result, certain quantities would have doubled only once every four years instead of every two years. RAM capacities, transistor counts, hard disk sizes and clock frequencies would have reached the 1990 level only in the year 2000, and in the year 2013, we would be at the 1996 level with regard to these variables.
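
(To make the arithmetic explicit: under the half-speed assumption, the real-world hardware level corresponding to a Slow-Moore year y is 1980 + (y - 1980) / 2, so the Slow-Moore year 2000 sits at the real-world 1990 level and 2013 at roughly the 1996 level.)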

I'm excluding some hardware-related variables from my speculation. Growth in telecommunications bandwidth, including the spread of broadband, is more related to infrastructural development than to Moore's law. I also consider the technological development of things like batteries, radio transceivers and LCD screens to be unrelated to Moore's law, so their progress would have been more or less unaffected, apart from things like framebuffer and DSP logic.

1. Most milestones of computing culture would not have been postponed.

When I mentioned "the 1996 level", many readers probably envisioned a world where we would be "stuck in the year 1996" in all computing-related aspects: noisy desktop Pentiums running Windows 95s and Netscape Navigators, with users staring in awe at rainbow-colored, static, GIF-animation-plagued websites over landline dialup connections. This says something about mainstream views of computer culture: everything is so one-dimensionally techno-determinist that even progress in purely software- and culture-related aspects is difficult to envision without its supposed hardware prerequisites.

My view is that progress in computing and some other high technology has always been primarily cultural. Things don't become market hits straight after they're invented, and they don't get invented straight after they become technologically possible. For example, there were touchscreen-based mobile computers as early as 1993 (Apple Newton), but it took until 2010 before the cultural aspects were right for their widespread adoption (iPad). In the Slow-Moore world, therefore, a lot of people would have tablets just like in our world, although they probably wouldn't have very many colors.

The mainstream adoption of the Internet would have taken place in the mid-1990s just like in the real world. 1987-equivalent hardware would have been completely sufficient for the boom to take place. Public online services such as Videotex and BBSes had been available since the late 1970s, and Minitel had already gathered millions of users in France in the 1980s, so even a dumb text terminal would have sufficed on the client side. The power of the Internet compared to its competitors was its global, free and decentralized nature, so it would have taken off among common people even without graphical web browsers.

Assuming that the Internet had become popular with character-based interfaces rather than multimedia-enhanced hypertext documents, its technical timeline would have become somewhat different. Terminal emulators would have eventually accumulated features in the same way as Netscape-like browsers did in the real world. RIPscrip is a real-world example of what could have become dominant: graphics images, GUI components and even sound and video on top of a dumb terminal connection. "Dynamic content" wouldn't require horrible kludges such as "AJAX" or "dynamic HTML", as the dumb terminal approach would have been interactive and dynamic enough to begin with. The gap between graphical and text-based applications would be narrower, as well as the gap between "pre-web" and "modern" online culture.

The development of social media was purely culture-driven: Facebook would have been technically possible already in the 1980s -- feeds based on friend lists don't require more per-user computation than, say, IRC channels. What was needed was cultural development: several "generations" of online services were required before all the relevant ideas came up. In general, most online services I can think of could have taken place in some form or another, about the same time as they appeared in the real world.

The obvious exceptions would be those services that require a prohibitive amount of server-side storage. An equivalent of Google Street View would perhaps just show rough shapes of the buildings instead of actual photographs. YouTube would focus on low-bitrate animations (something like Flash) rather than on full videos, as the default storage space available per user would be quite limited. Client-side video/audio playback wouldn't necessarily be an issue, since MPEG decompression hardware was already available in some consumer devices in the early 1990s (Amiga CD32) and would have therefore been feasible in the Slow-Moore year 2004. Users would just be more sensitive about disk space and would therefore avoid video formats for content that doesn't require actual video.

All the familiar video games would be there, as the resource-hogging aspects of games can generally be scaled down without losing the game itself. It could even be argued that there would be far more "AAA" titles available, assuming that the average budget per game would be lower due to lower fidelity requirements.

Domestic broadband connections would be there, but they would be more often implemented via per-apartment ethernet sockets than via per-apartment broadband modems. The amount of DSP logic required by some protocols (*DSL) would make per-apartment boxes rather expensive compared to the installation of some additional physical wires. In rural areas, traditional telephone modems would still be rather common.

Mobile phones would be very popular. Their computational specs would be rather low, but most of them would still be able to access Internet services and run downloadable third-party applications. Neither of these requires a lot of power -- in fact, every microprocessor is designed to run custom code to begin with. Very few phones would have built-in cameras, however -- the development of cheap and tiny digital camera sensors has a lot to do with Moore's law. Also, the global digital divide would be greater -- there wouldn't be extremely cheap handsets available in poor countries.

It must be emphasized here that even though IC feature sizes would be at the "1996 level", we wouldn't be building devices from the familiar 1996 components. The designs would be far more advanced and logic-efficient. Hardware development would have been more about "reinventing the wheel" than about accumulating as much intellectual property as possible on a single chip. RISC and Transputer architectures would have displaced x86-like CISCs long ago and perhaps even given way to ingenious inventions we can't even imagine.

Affordable 3D printers would be just around the corner, just like in the real world. Their developmental bottlenecks have more to do with the material printing process itself than anything Moorean. Similarly, the setbacks in the progress of virtual reality helmets have more to do with optics and head-tracking sensors than semiconductors.

2. People would be more conscious about the use of computing resources.

As mentioned before, digital storage would be far less abundant than in the real world. Online services would still have tight per-user disk quotas and many users would be willing to actually pay for more space. Even laypeople would have a rather good grasp about kilobytes and megabytes and would often put effort in choosing efficient storage formats. All computer users would need to regularly choose what is worth keeping and what isn't. Online privacy would generally be better, as it would be prohibitively expensive for service providers to neurotically keep the complete track record of every user.

As global Internet backbones would have considerably lower capacities than local and mid-range networks, users would actually care about where each server is geographically located. Decentralized systems such as IRC and Usenet would therefore never have given way to centralized services. Search engines would be technically more similar to YaCy than to Google, and social media more similar to Diaspora than to Facebook. Even the equivalent of Wikipedia would be a network of thousands of servers -- a centralized site would have ended up being killed by deletionists. Big businesses would be embracing this "peer-to-peer" world instead of expanding their own server farms.

In general, Internet culture would be more decentralized, ephemeral and realtime than in the real world. Live broadcasts would be more common than vlogs or podcasts. Much less data would be permanently stored, so people would have relatively small digital footprints. Big companies would have far less power over users.

Attitudes towards software development would be quite different, especially with regard to efficiency and optimization. In the real world, wasteful use of computational resources is systematically overlooked because "no one will notice the problem in the future anyway". As a result, we have incredibly powerful computers whose software still suffers from mainframe-era problems such as ridiculously high UI latencies. In a Slow-Moore world, such problems would have been solved long ago: after all, all you need is good user-level control over how the operating system prioritizes different pieces of code and data, and some will to use it.

Another problem in real-world software development is the accumulation of abstraction layers. Abstraction is often useful during development, as it speeds up the process and simplifies maintenance, but most of the resulting dependencies are a complete waste of resources in the final product. A lot of this waste could be eliminated automatically with advanced static analysis and other methods. From the vast contrast between carefully size-optimized hobbyist hacks and bloated mainstream software, we might guess that some mind-boggling optimization ratios could be reached. However, the use and development of such tools has been seriously lagging behind because of the attitude problems caused by Moore's law.

In a Slow-Moore world, the use of computing resources would be extremely efficient compared to current standards. This wouldn't mean that hand-coded assembly would be particularly common, however. Instead, we would have something like "hack libraries": huge collections of efficient solutions for various problems, from low-level to high-level, from specific to generic. All tamed, tested and proven in their respective parameter ranges. Software development tools would have intelligent pattern-matchers that would find efficient hacks from these libraries, bolt them together in optimal arrangements and even optimize the bolts away. Hobbyists and professionals alike would be competing in finding ever smarter hacks and algorithms to include in the "wisdombase", thus making all software incrementally more resource-efficient.

3. There would still be a gap between digital and "real" content.

Regardless of how efficiently hardware resources are used, unbreakable limits always exist. In a Slow-Moore world, for instance, film photography would still be superior in quality to digital photography. Also, since digital culture would be far more resource-conscious, large resolutions wouldn't even be desirable in purely digital contexts.

Spreading "memes" as bitmap images is a central piece of today's Internet culture. Even snippets of on-line discussions get spread as bitmapped screenshots. Wasteful, yes, but compatible and therefore tolerable. The Slow-Moore Internet would probably be much more compatible with low-bit formats such as plaintext or vector and character graphics.

Since the beginning of digital culture, there has been a desire to import content from "meatspace" into the digital world. At first, people did it in laborious ways: books were typed into text files, paintings and photographs were repainted with graphics editors, songs were covered with tracker programs. Later, automatic methods appeared: pictures could be scanned, songs could be recorded and compressed into MP3-like formats. However, it took some time before straight automatic imports could compete against skillful manual effort. In low resolutions, skillful pixel-pushing still makes a difference, and synthesized songs take a fraction of the space of an equivalent MP3 recording. Eventually, the difference diminished, and no one cared about it any longer.

In a Slow-Moore world, the timeline of digital media would have been vastly different. A-priori-digital content would still have vast advantages over imported media. Artists looking for worldwide appreciation via the Internet would often choose to make the effort to learn born-digital methods instead of just digitizing their analog works. As a result, many traditional disciplines of computer art would have grown enormous. Demoscene and low-bit techniques such as procedural content generation and tracker-like synthesized music would be the mainstream norm in Internet culture instead of anything "underground".

Small steps towards photorealism and higher fidelity would still be able to impress large audiences, as they would still notice the difference. However, in a resource-conscious online culture, there would also probably be a strong countercultural movement against "high-bit" -- a movement seeking to embrace the established "Internet esthetics" instead of letting it be taken over and marginalized by imports.

Record and film companies would definitely be suing people for importing, covering and spreading their copyrighted material. However, they would still be able to sell it in physical formats because of their superior quality. There would also be a class of snobs who hate all "computer art" and the related esthetics while preferring "real, physical formats".

4. Conclusion

A Slow-Moore world would be somewhat "backwards" in some respects but far more sensible or even more advanced in others. As a demoscener with an ever-growing conflict with today's industry-standard attitudes, I would probably prefer to live with a more moderate level of Moorean inflation. However, a Netflix fan who likes high-quality digital photography and doesn't mind being under surveillance would probably choose otherwise.

The point of my thought experiment was to justify my view that the idea of a linear tech tree strongly tied to Moore's law is a banal oversimplification. There are many other dimensions that need to be taken into account as well.

The alternative timeline may also be used as inspiration for real-world projects. I would definitely like to see whether an aggressively optimizing code generation tool based on "hack libraries" could be feasible. I would also like to see the advent of a mainstream operating system that doesn't suck.

Nevertheless: Down with Moore's law fetishism! It's time for a more mature technological vision!

Monday, 15 March 2010

Defining Computationally Minimal Art (or, taking the "8" out of "8-bit")

[Icon Watch designed by &design]

Introduction


"Low-tech" and "8-bit" are everywhere nowadays. Not only are the related underground subcultures thriving, but "retrocomputing esthetics" seems to pop up every now and then in mainstream contexts as well: obvious chip sounds can be heard in many pop music songs, and there are many examples of "old video game style" in TV commercials and music videos. And there are even "pixel-styled" physical products, such as the pictured watch sold by the Japanese company "&design". I'm not a grand follower of popular culture, but it seems to me that the trend is increasing.


The most popular and widely accepted explanation for this phenomenon is the "nostalgia theory", i.e. "People of the age group X are collectively rediscovering artifacts from the era Y". But I'm convinced that there's more to it -- something more profound that is gradually integrating "low-tech" or "8-bit" into our mainstream cultural imagery.


Many people have become involved with low-tech esthetics via nostalgia, but I think that is only the first phase. Many don't experience this phase at all and jump directly to the "second phase", where pixellated graphics and chip sounds are simply enjoyed the way they are, totally ignoring the historical baggage. For some people there is even an apparent freshness or novelty value. This happens with audiences that are "too young" (like the users of Habbo Hotel) or otherwise more or less unaffected by the "oldskool electronic culture" (like many listeners of pop music).


Since the role of specific historical eras and computer/gaming artifacts is diminishing, I think it is important to provide a neutral conceptual basis for "low-tech esthetics"; an independent and universal definition that does not refer to the historical timeline or some specific cultural technology. My primary goal in this article is to provide this definition and label it as "Computationally Minimal Art". We will also be looking for support for the universality of Computationally Minimal Art and finding ur-examples that are even older than electricity.


A definition: Computationally Minimal Art


Once we strip "low-tech esthetics" of its historical and cultural connections, we are left with "pixellated shapes and bleepy sounds" that share an essential defining element. This element stems from what is common to old computing and gaming hardware in general, and it is perfectly possible to describe it in generic terms, without mentioning specific platforms or historical eras.


[Space Invaders sprite]

The defining element is LOW COMPUTATIONAL COMPLEXITY, as expressed in all aspects of the audiovisual system: the complexity of the platform (i.e. the number of transistors or logic gates in the hardware), the complexity of the software (i.e. the length in bits of the program code and static data), as well as the time complexity (i.e. how many state changes the computational tasks require). A more theoretical approach would eliminate the differentiation of software and hardware and talk about description/program length, memory complexity and time complexity.


There's little more that needs to be defined; all the important visible and audible features of "low-tech" emerge from the various kinds of low complexity. Let me elaborate with a couple of examples:


  • A low computing speed leads to a low number of processed and output bits per time frame. In video output, this means low resolutions and limited color schemes. In audio output, this means simple waveforms on a low number of discrete channels.

  • A short program+data length, combined with a low processing speed, makes it preferable to have a small set of small predefined patterns (characters, tiles, sprites) that are extensively reused (see the sketch after this list).

  • A limited amount of temporary storage (emerging from the low hardware complexity) also supports the former two examples via the small amount of available video memory.

  • In general, the various types of low complexity make it possible for a human being (with some expertise) to "see the individual bits with a naked eye and even count them".
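
As a minimal sketch of what this means in practice (my own generic illustration, not tied to any particular platform), consider an 8x8 one-bit sprite stored in just eight bytes and decoded to text; at this scale, every bit of the artwork is individually visible and countable:

    #include <stdio.h>

    /* An 8x8 one-bit "sprite" in 8 bytes: each byte is one row, each bit
       one pixel. The shape is a rough invader-like figure made up for this
       illustration. Decoding it to '#' and '.' makes every bit visible. */
    int main(void)
    {
        const unsigned char sprite[8] = {
            0x18, 0x3C, 0x7E, 0xDB, 0xFF, 0x24, 0x5A, 0xA5
        };
        for (int y = 0; y < 8; y++) {
            for (int x = 0; x < 8; x++)
                putchar(sprite[y] & (0x80 >> x) ? '#' : '.');
            putchar('\n');
        }
        return 0;
    }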

In order to complete the definition, we still have to know what "low" means. It may not be wise to go for an arbitrary threshold here ("less than X transistors in logic, less than Y bits of storage and less than Z cycles per second"), so I would like to define it as "the lower the better". Of course, this does not mean that a piece of low-tech artwork would ideally consist of one flashing pixel and static square-wave noise, but that the most essential elements of this artistic branch are those that persist the longest when the complexity of the system approaches zero.


Let me therefore dub the idealized definition of "low-tech art" as Computationally Minimal Art (CMA).


To summarize: "Computationally Minimal Art is a form of discrete art governed by a low computational complexity in the domains of time, description length and temporary storage. The most essential features of Computationally Minimal Art are those that persist the longest when the various levels of complexity approach zero."


How to deal with the low complexity?


Traditionally, of course, low complexity was the only way to go. The technological and economical conditions of the 1970s and 1980s made the microelectronic artist bump into certain "strict boundaries" very soon, so the art needed to be built around these boundaries regardless of the artist's actual esthetic ideals. Today, on the other hand, immense and virtually unlimited amounts of computing capacity are available to practically everyone who desires it, so computational minimalism is nearly always a conscious choice. There are, therefore, clear differences in how low complexity has been dealt with in different eras and disciplines.


I'm now going to define two opposite approaches to low complexity in computational art: optimalism (or "oldschool" attitude), which aims at pushing the boundaries in order to fit in "as much beauty as possible", and reductivism (or "newschool" attitude), which idealizes the low complexity itself as a source of beauty.


Disclaimer: All the exaggeration and generalization is intentional! I'm intending to point out differences between various extremities, not to portray any existing "philosophies" accurately.


Optimalism


Optimalism is a battle of maximal goals against a minimal environment. There are absolute predefined boundaries that provide hard upper limits for the computational complexity, and these boundaries are then pushed by fitting as much expressive power as possible between them. This approach is the one traditionally applied to mature and static hardware platforms by the video game industry and the demoscene, and it is characterized by the appreciation of optimization in order to reach a high content density regardless of the limitations.


[Frog, Landscape and a lot of Clouds by oys]

A piece of traditional European-style pixel graphics ("Frog, Landscape and a lot of Clouds" by oys) exemplifies many aspects of optimalism. The resolution and color constraints of a video mode (in this case, non-tweaked C-64 multicolor) provide the hard limits, and it is the responsibility of the artist to fill up the space as wisely and densely as possible. Large single-colored areas would look "unfinished", so they are avoided, and if it is possible to fit in more detail or dithering somewhere, it should be done. Leaving an available color unused is also to be avoided -- an idea which leads to the infamous "Dutch color scheme" when applied to high/truecolor video modes.


When applied to chip music, the optimalist dogma tells the composer, among other things, to fill in all the silent parts and avoid "simple beeps". Altering the values of as many sound chip registers per frame as possible is thought to be efficient use of the chip. This adds to the richness of the sound, which is thought to correlate with the quality of the music.


[Artefacts by Plush]

On platforms such as the Commodore 64, the demoscene and the video game industry seem to have had relatively similar ideals. Once increased computing capacity becomes available, however, an important difference between these cultures is revealed. Whenever the video game industry gets more disk space or other computational resources, it will try to use them up as aggressively as possible, without starting any optimization efforts until the new boundaries have been reached. The demoscene, on the other hand, values optimality and content density so much that it often prefers to stick to old hardware or artificial boundaries in order to keep the "art of optimality" alive. The screenshot is from the 4K demo "Artefacts" by Plush (C-64).


Despite the cultural differences, however, the core esthetic ideal of optimalism is always "bigger is better"; that an increased perceived content complexity is a requirement for increased beauty. Depending on the circumstances, more or less pushing of boundaries is required.


Reductivism


Reductivism is the diametrical opposite of optimalism. It is the appreciation of minimalism within a maximal set of possibilities, embracing the low complexity itself as an esthetic goal. The approach can be equated with the artistic discipline of minimal art, but it should be remembered that the idea is much older than that. Pythagoras, who lived around 2500 years ago, already appreciated the role of low complexity -- in the form of mathematical beauty such as simple numerical ratios -- in music and art.


The reductivist approach does not lead to a similar pushing of boundaries as optimalism, and in many cases, strict boundaries aren't even introduced. Regardless, a kind of pushing is possible -- by exploring ever simpler structures and their expressive power -- but most reductivists don't seem to be interested in this aspect. It is usually enough that the output comes out as "minimal enough" instead of being "as minimal as possible".


[VVVVVV by Terry Cavanagh]

The visuals of the recent acclaimed Flash-based platformer game VVVVVV are a good example of computational minimalism with a reductivist approach. The author, Terry Cavanagh, has not only chosen a set of voluntary "restrictions" (reminiscent of mature computer platforms) to guide the visual style, but also keeps to a reductivist attitude in many other aspects. Just look at the "head-over-heels"-type main sprite -- it is something that a child would be able to draw in a minute, and yet it is perfect in the same iconic way as the Pac-Man character. The style totally serves its purpose: while it is charming in its simplicity and downright naivism, it shouts out loud at the same time: "Stop looking at the graphics, have fun with the actual game instead!"


[Thrust]

Although reductivism may be regarded as a "newschool" approach, it is possible to find some slightly earlier examples of it as well. The graphics of the 1986 computer game Thrust, for example, were drawn with simple geometrical lines and arcs. The style is reminiscent of older vector-based arcade games such as Asteroids and Gravitar, and it definitely serves a technical purpose on such hardware. But on home computers with bitmapped screens and sprites, the approach can only be an esthetic one.


Optimalism versus Reductivism


Optimalism and reductivism sometimes clash, and an example of this can be found in the chip music community. After a long tradition of optimalism through the efforts of the video game industry and the demoscene, a new kind of cultural branch was born. This branch, sometimes mockingly called "cheaptoon", seems to get most of its kicks from the unrefined roughness of the pure squarewave rather than from the pushing of technological and musical boundaries that has been characteristic of the "oldschool way". To an optimalist, a reductivist work may feel lazy or unskilled, while an optimalist work may feel "too full" or "too refined" to a reductivist mindset.


Still, when working within constraints, there is room for both approaches. Quite often, an idea is good for both sides: a simple and short algorithm, for example, may be appreciated by an optimalist because the saved bytes leave room for something more, while a reductivist may regard the technical concept as beautiful in its own right.


Comparison to Low-Complexity Art


Now I would like to compare my definition of Computationally Minimal Art to another concept with a somewhat similar basis: Jürgen Schmidhuber's Low-Complexity Art.


[A low-complexity face picture by Juergen Schmidhuber]

While CMA is an attempt to formalize "low-tech computer art", Schmidhuber's LCA comes from another direction, being connected to an ages-old tradition that attempts to define beauty by mathematical simplicity. The specific mathematical basis used in Schmidhuber's theory is Kolmogorov complexity, which defines the complexity of a given string of information (such as a picture) as the length of the shortest computer program that outputs it. Kolmogorov's theory works at a high level of generalization: the choice of programming language affects the result only by a constant overhead, so it does not matter as long as you stick to it.
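To make the idea concrete, here is a toy example in C (my own illustration, not from Schmidhuber's work): a 256-character string with an obvious regular structure can be printed by a program far shorter than the string itself, whereas a "random" string of the same length generally cannot.

/* Toy illustration of Kolmogorov complexity: the 256-character output
 * "ABABAB..." has an obvious regular structure, so it can be produced
 * by a program much shorter than itself. The length of the shortest
 * such program (in some fixed language) is its Kolmogorov complexity. */
#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 256; i++)
        putchar("AB"[i & 1]);   /* alternate between 'A' and 'B' */
    putchar('\n');
    return 0;
}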


Schmidhuber's view, put in "down-to-earth coder terms", is that the human mind contains a built-in "compressor" that attempts to represent sensory input in a form as compact as possible. Whenever this compression process succeeds well, the input is perceived as esthetically pleasing. It is a well-studied fact that people generally perceive symmetry and regularity as more beautiful than asymmetry and irregularity, so this hypothesis of a "mental compressor" cannot be dismissed as just an arbitrary crazy idea.
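A very crude way to play with this intuition on a computer -- my own sketch, not part of Schmidhuber's theory -- is to feed a regular pattern and a noisy one to a general-purpose compressor such as zlib and compare the resulting sizes; the regular pattern shrinks to a small fraction of the noise.

/* Crude sketch of the "regularity compresses well" intuition: compare
 * how zlib compresses a simple striped byte pattern versus pseudo-random
 * noise of the same length. Build with: cc compressdemo.c -lz */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

static unsigned long compressed_size(const unsigned char *buf, unsigned long len)
{
    uLongf outlen = compressBound(len);
    unsigned char *out = malloc(outlen);
    compress(out, &outlen, buf, len);   /* zlib's one-shot compressor */
    free(out);
    return outlen;
}

int main(void)
{
    enum { N = 4096 };
    static unsigned char stripes[N], noise[N];
    for (int i = 0; i < N; i++) {
        stripes[i] = (i & 8) ? 0xFF : 0x00;  /* regular 8-byte stripes */
        noise[i]   = rand() & 0xFF;          /* pseudo-random mess     */
    }
    printf("stripes: %lu bytes after compression\n", compressed_size(stripes, N));
    printf("noise:   %lu bytes after compression\n", compressed_size(noise, N));
    return 0;
}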


Low-Complexity Art tests this hypothesis by deliberately producing graphical images that are as compressible as possible. One of the rules of LCA is that an "informed viewer" should be able to perceive the algorithmic simplicity quite easily (which also effectively limits the time complexity of the algorithm, I suppose). Schmidhuber himself has devised a system based
on indexed circle segments for his pictures.


[Superego by viznut/pwp]

The above picture is from "Superego", a tiny PC demo I made in 1998. The picture takes some tens of bytes and the renderer takes less than 100 bytes of x86 code. Unfortunately, there is only one such picture in the demo, although the 4K space could easily have contained tens of pictures. This is because the picture design process was so tedious and counter-intuitive -- something that Schmidhuber has encountered with his own system as well. Anyway, when I encountered Schmidhuber's LCA a couple of years after this experiment, I immediately realized its relevance to size-restricted demoscene productions -- even though LCA is clearly a reductivist approach as opposed to the optimalism of the mainstream demoscene.
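To give a rough feel for the division of labor between data and renderer -- this is not the actual Superego code, just a sketch in C with made-up picture data -- a few dozen bytes can describe a picture as a list of filled circles, and a small routine can rasterize them:

/* Not the Superego renderer itself, only a sketch of the general idea:
 * the "picture" is 24 bytes describing six filled circles (x, y, radius,
 * shade), and a small loop rasterizes them into a 256x256 greyscale
 * image, written to stdout in the plain PGM format. */
#include <stdio.h>

static const unsigned char picture[] = {   /* made-up data, 4 bytes per circle */
    128, 128, 110,  40,
    128, 128,  90, 200,
    128, 128,  70,  40,
     80,  80,  20, 255,
    176,  80,  20, 255,
    128, 170,  40, 120,
};

int main(void)
{
    printf("P2\n256 256\n255\n");
    for (int y = 0; y < 256; y++) {
        for (int x = 0; x < 256; x++) {
            int shade = 0;                          /* background */
            for (unsigned i = 0; i < sizeof picture; i += 4) {
                int dx = x - picture[i], dy = y - picture[i + 1];
                int r  = picture[i + 2];
                if (dx * dx + dy * dy <= r * r)     /* inside this circle? */
                    shade = picture[i + 3];
            }
            printf("%d\n", shade);
        }
    }
    return 0;
}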


What Low-Complexity Art has in common with Computationally Minimal Art is the concern about program+data length; a minimized Kolmogorov complexity has its place in both concepts. The relationship with other types of complexity is different, however. While CMA is concerned with all the types of complexity of the audiovisual system, LCA leaves time and memory complexity out of the rigid mathematical theory, relegating them to the domain of a "black box" that processes sensory input in the human brain. This makes LCA much more theoretical and psychological than CMA, which is mostly concerned with "how the actual bits move". In other words, LCA makes you look at visualizations of mathematical beauty and ignore the visualization process, while CMA assigns the utmost importance to the visualizer component as well.


Psychological considerations


Now, an important question: why would anyone want to create Computationally Minimal Art for purely esthetic reasons -- novelty and counter-esthetic values aside? After all, those "very artificial bleeping sounds and flashing pixels" are quite alien to an untrained human mind, aren't they? And even many fans admit that prolonged exposure to them may cause a headache.


It is quite sensible to assume that the perception mechanisms of the human species, evolved during hundreds of millions of years, are "optimized" for perceiving the natural world, a highly complex three-dimensional environment with all kinds of complex lighting and shading conditions. The extremely brief technological period has not yet managed to alter the "built-in defaults" of the human mind in any way. Studies show, for example, that people all over the world prefer to be surrounded by wide-open landscapes with some water and trees here and there -- a preference that was fixed in our minds during our millions of years on the African savannah.


[Synchiropus splendidus, photographed by Luc Viatour]

So, the untrained mind prefers photorealistic, high-fidelity sensory input, and that's it? No, it isn't that simple, as the natural surroundings haven't evolved independently of the sensory mechanisms of their inhabitants. Fruits and flowers prefer to be symmetric and vivid-colored because animals prefer them that way, and animals prefer them that way because it is beneficial for their survival to like those features, and so on. The natural world is full of signalling that is the result of millions of years of coevolutionary feedback loops, and this is also an important source for our own sense of esthetics. (The fish in the picture, by the way, is a Synchiropus splendidus, photographed by Luc Viatour.)


I'm personally convinced that natural signalling has a profound preference for low complexity. Symmetries, regularities and strong contrasts are important because they are easy and effortless to detect, and their implementation requires a relatively low amount of genetic coding on both the "transmitter" and "receiver" sides. These considerations are completely analogous to the various types of computational complexity.


So, why does enjoying Computationally Minimal Art require "mental training" in the first place? I think it is not because of the minimality itself but because of certain peculiarities that arise from the higher complexity of the natural world. We can't see individual atoms or even cells, so we haven't evolved a built-in sense for pixel patterns. Also, the sound generation mechanisms in nature are mostly optimized for the constraints of pneumatics rather than electricity, so we don't really hear squarewave arpeggios in the woods (although some birds may come quite close).


But even though CMA requires some special adjustment from the human mind, it is definitely not alone in this area. Our cultural surroundings are full of completely unnatural signals that need similar adjustments. Our music uses instruments that sound totally different from any animal, and practically all musical genres (apart from the simplest lullabies, I think) require an adjustment period. So, I don't think there's anything particularly "alien" in electronic CMA apart from the fact that it hasn't yet been integrated into our mainstream culture.


CMA unplugged


The final topic we cover here is the extent to which Computationally Minimal Art, using our strict definition, can be found. As the definition is independent of technology, it is possible to find ur-examples that predate computers or even electricity.


In our search, we are ignoring the patterns found in the natural world because none of them seem to be discrete enough -- that is, they fail to have "human-countable bits". So, we'll limit ourselves to the artifacts found in human culture.


[Bubble Bobble cross-stitch from spritestitch.com]

Embroidery is a very old area of human culture that has its own tradition of pixel patterns. I guess everyone familiar with electronic pixel art has seen cross-stitch works that immediately bring pixel graphics to mind. The similarities have been widely noted, and there have been quite many craft projects inspired by old video games (see http://www.spritestitch.com/). But is this just a superficial resemblance, or can we count it as Computationally Minimal Art?


[Traditional monochrome bird patterns in cross-stitch]

Cross-stitch patterns are discrete, as they use a limited set of colors and a rigid grid that dictates the position of each of the X-shaped, single-colored stitches. "Individual bits are perceivable" because each pixel is easily visible and the colors of the "palette" are usually easy to tell apart. The low number of pixels limits the maximum description length, and one doesn't need to keep many different things in mind while working either. Thus, cross-stitch fulfills all the parts of the definition of Computationally Minimal Art.


What about the minimization of complexity? Yes, it is also there! Many traditional patterns in textiles are actually algorithmic or at least highly repetitive rather than "fully hand-pixelled". This is somewhat natural, as the old patterns have traditionally been memorized, and the memorization is much easier if mnemonic rules can be applied.


There are also some surprising similarities with electronic CMA. Many techniques (like knitting and weaving) proceed one complete row of "pixels" at a time (analogous to the raster scan of TV-like displays), and often, the set of colors is changed between rows, which corresponds very well to the use of raster synchronization in oldschool computer graphics. There are even peculiar technique-specific constraints in color usage, just like there are similar constraints in many old video chips.


[Pillow from 'Introduction to Fair Isle']

The picture above (source) depicts a pillow knitted with the traditional Fair Isle technique. It is apparent that there are two colors per "scanline", and these colors are changed between specific lines (compare to rasterbars). The patterns are based on sequential repetition, with the sequence changing on a per-scanline basis.
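The analogy is literal enough to be written out as code. The following sketch (with a made-up motif and made-up "yarn" characters, not taken from any traditional chart) generates a pattern one row at a time, repeating a short bit sequence across each row and picking the row's two colors from a per-row table -- very much in the spirit of rasterbars:

/* A made-up illustration of the Fair Isle / rasterbar analogy: each row
 * repeats an 8-stitch motif horizontally and renders it with two "yarns"
 * chosen per row, the way a knitter changes colors between rows (or a
 * demo changes its palette per scanline). */
#include <stdio.h>

int main(void)
{
    /* One byte per row: an 8-stitch motif, repeated across the row. */
    static const unsigned char motif[8] = {
        0x18, 0x3C, 0x7E, 0xFF, 0x7E, 0x3C, 0x18, 0x00
    };
    /* Two "yarn" characters per row, changed every couple of rows. */
    static const char yarn[8][2] = {
        { '.', '#' }, { '.', '#' },
        { '-', '@' }, { '-', '@' },
        { '.', '%' }, { '.', '%' },
        { '-', '*' }, { '-', '*' },
    };

    for (int row = 0; row < 8; row++) {
        for (int x = 0; x < 32; x++) {                  /* motif repeated 4 times */
            int bit = (motif[row] >> (7 - (x & 7))) & 1;
            putchar(yarn[row][bit]);                    /* pick one of two yarns  */
        }
        putchar('\n');
    }
    return 0;
}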


Perhaps the most interesting embroidery patterns from the CMA point of view are the oldest ones that remain popular. Over the centuries, the traditional patterns of various cultures have reached a kind of multi-variable optimality, minimizing the algorithmic and technical complexity while maximizing the visual appeal of the result. These patterns may very well be worth studying by electronic CMA artists as well. Patterns like these are also an object of study for the field of ethnomathematics, so that's another word you may want to look up if you're interested.


What about the music department, then? Even though human beings have written music down in discrete notation formats for a couple of millennia already, the notes alone are not enough for us. CMA emphasizes the role of the rendering, and the performance therefore needs to be discrete as well. As it seems that every live performance has at least some non-discrete variables, we will need to limit ourselves to automatic systems.


[A musical box]

The earliest automatic music was mechanical, and arguably the simplest conceivable automatic music system is the musical box. Although the musical box isn't exactly discrete, as the barrel rotates continuously rather than stepwise, I'm sure that the pins have been positioned with an engineer's accuracy, as guided by written music notation. So, it should be discrete enough to satisfy our demands, and we may very well declare the musical box the mechanical counterpart of chip music.
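As a small illustration of why this counts as discrete (a sketch with a made-up tune, not the mechanism of any particular musical box), the entire "program" of a musical box can be written down as a table of pin coordinates: one comb tooth per pitch and one angular step per time unit.

/* Made-up sketch: a short tune written as (tooth, time step) pairs, and
 * the corresponding pin positions on the barrel. The whole piece reduces
 * to a small, fully discrete table -- the musical box's "program". */
#include <stdio.h>

int main(void)
{
    /* Hypothetical melody: comb tooth index and time step for each note. */
    static const int notes[][2] = {
        { 0, 0 }, { 2, 2 }, { 4, 4 }, { 5, 6 },
        { 4, 8 }, { 2, 10 }, { 0, 12 }, { 0, 14 },
    };
    const int steps_per_turn = 16;   /* time resolution of one barrel revolution */

    for (unsigned i = 0; i < sizeof notes / sizeof notes[0]; i++) {
        double angle = 360.0 * notes[i][1] / steps_per_turn;
        printf("pin: tooth %d at %5.1f degrees\n", notes[i][0], angle);
    }
    return 0;
}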


Conclusion


I hope these ideas can provide food for thought for people interested in the various forms of "low-tech" electronic art as well as computational art or "discrete art" in general. I particularly want people to realize the universality of Computationally Minimal Art and how well it works outside of the rigid "historical" contexts it is often confined to.


I consciously skipped all the cultural commentary in the main text in my quest to prove the universality of my idea, so perhaps it's time for that part now.


In this world of endless growth and accumulation, I see Computationally Minimal Art as standing for something more sustainable, tangible and crafty than what the growth-oriented "mainstream cultural industry" provides. CMA represents the kind of simplicity and timelessness that is totally immune to the industrial trends of fidelity maximization and planned obsolescence. It is something that can be brought to perfection by an individual artist, without hiring a thousand-headed army of specialists.


As we are in the middle of a growth phase, we can only guess what forms Computationally Minimal Art will take in the future, and what kind of position it will eventually acquire in our cultural framework. We are living in interesting times indeed.