Thursday, 9 April 2015

Bringing magic back to technology

Back in 2011, I was one of the discoverers of "Bytebeat", a class of very short computer programs that generate music. These programs received quite a lot of attention because they seem far too short for the complex musical structures they output. I wrote several technical articles about Bytebeat (arxiv, countercomplex 1, countercomplex 2) as well as a Finnish-language academic article about the social dynamics of the phenomenon. Those who just need a quick glance may want to check out one of the YouTube videos.
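
To give a concrete idea of the genre, here is a minimal player built around one of the formulas from the first experiments. This is only a sketch that assumes the classic conventions: raw unsigned 8-bit samples written to standard output, to be played at 8000 Hz by piping into a playback tool.

    /* A complete Bytebeat player: every output byte is a pure function
       of the sample counter t, implicitly truncated to 8 bits by putchar().
       Listen with e.g.:  ./a.out | aplay -r 8000 -f U8 */
    #include <stdio.h>

    int main(void) {
        for (unsigned t = 0;; t++)
            putchar(t * ((t >> 12 | t >> 8) & (63 & t >> 4)));
    }

All the musical structure comes from the interplay of one multiplication and a handful of bitwise operations on the counter.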

The popularity of Bytebeat can be partially explained by the concept of "hack value", especially in the context of Hakmem-style hacks -- very short programs that seem to outgrow their size. The Jargon File gives the following formal definition for "hack value" in the context of very short visual programs, display hacks:
"The hack value of a display hack is proportional to the esthetic value of the images times the cleverness of the algorithm divided by the size of the code."
Bytebeat programs apparently have a high hack value in this sense. The demoscene, being distinct from the MIT hacker lineage, does not really use the term "hack value". Still, its own ultra-compact artifacts (executables of 4096 bytes or less) are judged in a very similar manner. I might just replace "cleverness of the algorithm" with something like "freshness of the output compared to earlier work".
Another related hacker concept is "magic", which the Jargon File defines as follows:
1. adj. As yet unexplained, or too complicated to explain; compare automagically and (Arthur C.) Clarke's Third Law: "Any sufficiently advanced technology is indistinguishable from magic." "TTY echoing is controlled by a large number of magic bits." "This routine magically computes the parity of an 8-bit byte in three instructions." 
2. adj. Characteristic of something that works although no one really understands why (this is especially called black magic). 
3. n. [Stanford] A feature not generally publicized that allows something otherwise impossible, or a feature formerly in that category but now unveiled. 
4. n. The ultimate goal of all engineering & development, elegance in the extreme; from the first corollary to Clarke's Third Law: "Any technology distinguishable from magic is insufficiently advanced".
Short programs with a high hack value are magical especially in the first two senses. How and why Bytebeat programs work was often a mystery even to their discoverers. Even when some theory about them was devised, it was often quite difficult to understand or apply. Especially bitwise arithmetic tends to have very esoteric uses in Bytebeat.

The hacker definition of magic indirectly suggests that highly advanced and elegant engineering should be difficult to understand. Indecipherable program code has even been celebrated in contests such as the IOCCC. This idea is highly countercultural. In the mainstream software industry, clever hacks are despised: all code should be as easy as possible to understand and maintain. The mystical aspects of hacker subcultures are there to compensate for the dumb, odorless and dehumanizing qualities of industrial chores.

Magic appears in the Jargon File in two ways. Terms such as "black magic", "voodoo programming" and "cargo cult programming" represent cases where the user doesn't know what they are doing or may not even strive to. Another aspect is exemplified by terms such as "deep magic" and "heavy wizardry": there, the technology may be difficult to understand or chaotic to control, but at least there are some talented individuals who have managed to. These aspects could be called "wild" and "domesticated", respectively, or alternatively "superstition" and "esoterica".

Most technology used to be magical in the wild/superstitious way. Cultural evolution does not require individual innovators to understand how their innovations work. Fermentation, for example, had been used for thousands of years without anyone having seen a micro-organism. Despite this, cultural evolution can find very good solutions if enough time is given: traditional craft designs often have a kind of optimality that is very difficult to attain from scratch even with the help of modern science. (See e.g. Robert Boyd et al.'s articles about cultural evolution of technology)

Science and technology have countless examples of "wild magic" getting "domesticated". An example from computer music is the Karplus-Strong string model. Earlier models of acoustic simulation had been constructed via rational analysis alone, so they were prohibitively expensive for real-time synthesis. Then, Karplus and Strong accidentally discovered a very resource-efficient model due to a software bug, and nowadays it is pretty standard textbook material without much magical glamor at all.
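
To show how little remains once this particular magic has been domesticated, here is a minimal Karplus-Strong sketch (the parameter values are arbitrary illustrative choices), using the same raw 8-bit output convention as the Bytebeat player above: a delay line is seeded with noise, and each sample is then averaged with its neighbor, which lowpass-filters the loop into a decaying plucked-string tone.

    /* Minimal Karplus-Strong string: the delay-line length N sets the
       pitch (samplerate / N), the averaging provides the decay.
       Listen with e.g.:  ./a.out | aplay -r 8000 -f U8 */
    #include <stdio.h>
    #include <stdlib.h>

    #define N 64                      /* 8000 Hz / 64 = 125 Hz */

    int main(void) {
        unsigned char line[N];
        for (int i = 0; i < N; i++)   /* excitation: a burst of noise */
            line[i] = rand() & 255;
        for (int t = 0; t < 16000; t++) {       /* two seconds */
            int i = t % N;
            putchar(line[i]);
            /* the "string" itself: a neighbor average, i.e. a lowpass */
            line[i] = (line[i] + line[(i + 1) % N]) / 2;
        }
        return 0;
    }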

Magic and rationality support each other. In good technology, they would coexist in symbiosis. Industrialization, however, brought a cult of obsolescence that prevented this kind of relationship. Traditions, time-proven designs, intuitive understanding and irreducible wisdom started to get obsoleted by one-dimensional reductive analysis. Nowadays, "magic" is only tolerated as bursts of inspiration that must be captured within reductivist frameworks before they break something.

In the 20th century, utilitarian industrial engineering started to get obsoleted by its bastard offspring, tumorous engineering. This is what I discussed in my earlier essay "The resource leak bug of our civilization". Accumulation of bloat and complexity for their own sake is making technology increasingly difficult to rationally understand and control. In computing, where tumorous engineering dominates, designers are already looking back with longing to the utilitarian industry where simplicity, controllability, resource-efficiency and expertise were still valued.

When advocating the reintroduction of magic, one must be careful not to endorse the kind of superstitious thinking that already has a good hold on how people relate to technology. Devices that hide their internal logic and instead base their interfaces on guessing what the user wants are a kind of Aladdin's lamp to most people. You don't really understand how they work, but at least their spirits fulfill your wishes as long as you don't make them angry.

The way magic manifests itself in traditional technology is diametrically opposite to this. The basic functional principles of a bow, a canoe or a violin can be learned via simple observation and experimentation. The mystery lies elsewhere: in the evolutionary design details that are difficult to rationally explain, in the otherworldly talent and wisdom of the master crafter, in the superhuman excellence of the skilled user. If the design has been improved over generations, even minor improvements are difficult to make anymore, which gives it an aura of perfection.

The magic we need more of in today's technological world is of the latter kind. We should strive to increase depth rather than outward complexity, human virtuosity rather than consumerism, flexibility rather than effortlessness. The mysteries should invite attempts at understanding and exploitation rather than blind reliance or worship; this is also the key difference between esoterica and superstition.

One definition of magic, compatible with that in the Jargon File, is that it breaks people's preconceptions of what is possible. In order to challenge and ridicule today's technological bloat, we should particularly aim at discoveries that are "far too simple and random to work but still do". New ways to use and combine the available grassroots-level elements, for instance.

A Bytebeat formula is a simple arrangement of digital-arithmetic operations that have been elementary to computers since the very beginning. It is apparently something that should have been discovered decades ago, but it wasn't. Hakmem contains a few "sound hacks" that could have evolved into Bytebeat if a wide enough counter had been introduced into them, but there are no indications that this ever took place. It is mind-boggling that the space of very short programs remains so uncharted that random excursions there can churn out new interesting structures even after seventy years.
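
To illustrate how decisive the counter width is, consider a hypothetical example (not an actual Hakmem hack): the tiny formula t & (t >> 8) is completely silent on an 8-bit counter, since t >> 8 is then always zero, but on a wide counter the very same expression unfolds into a self-similar melody known as the "Sierpinski harmony".

    /* A tiny formula: silent on a narrow counter, melodic on a wide one.
       Listen with e.g.:  ./a.out | aplay -r 8000 -f U8 */
    #include <stdio.h>

    int main(void) {
        for (unsigned t = 0;; t++)
            putchar(t & (t >> 8));    /* an 8-bit t would make this t & 0 */
    }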

Now consider that we are surrounded by millions of different natural "building blocks" such as plants, micro-organisms and geological materials. I honestly believe that, despite hundreds of thousands of years of cultural evolution, their combinatory space is nowhere near fully charted. For instance, it could be possible to find a rather simple and rudimentary technique that would make micro-organisms transform sand into a building material superior to everything we know today. A favorite fantasy scenario of mine is a small self-sufficient town that builds advanced spacecraft from scratch with "grassroots-level" techniques that seem magical to our eyes.

How to develop this kind of magic? Rational analysis and deterministic engineering will help us to some extent, but we are dealing with systems so chaotic and multidimensional that decades of random experimentation would be needed for many crucial leaps forward. And we don't really have those decades if we want to beat our technological cancer.

Fortunately, the same Moore's law that empowers tumorous engineering also provides a way out. Computers make it possible to manage chaotic systems in ways other than neurotic modularization. Today's vast computational capacities can be used to simulate the technological trial-and-error of cultural evolution at various levels of accuracy. Of course, simulations often fail, but at least they can give us a compass for real-world experimentation. Another important compass is "hack value" or "scientific intuition" -- the modern manifestations of the good old human sense of wonder that has been providing fitness estimations for cultural evolution since time immemorial.

Saturday, 14 March 2015

Counteracting alienation with technological arts and crafts

The alienating effects of modern technology have been discussed a lot during the past few centuries. Prominent thinkers such as Marx and Heidegger have pointed out how people get reduced into one-dimensional resources or pieces of machinery. Later on, grasping the real world has become increasingly difficult due to the ever-complexifying network of interface layers. I touched on this topic a little bit in an earlier text of mine.

How to solve the problem? Discussion tends to polarize into quarrels between techno-utopians ("technological progress will automatically solve all the problems") and neo-luddites ("the problems are inherent in technology, so we should avoid it altogether"). I looked for a more constructive view and found it in Albert Borgmann.

According to Borgmann, the problem is not in technology or consumption per se, but in the fact that we have given them primary importance in our lives. To solve the problem, Borgmann proposes that we give that importance to something more worthwhile instead – something he calls "focal things and practices". His examples include music, gardening, running, and the culture of the table. Technological society would be there to protect these focalities instead of trying to make them obsolete.

In general, focal things and practices are those that somehow manage to reflect the whole of human existence. Something where self-expression, excellence and deep meanings can be cultivated. Traditional arts and crafts often seem to fulfill the requirements, but Borgmann becomes skeptical whenever high technology gets involved. Computers or modern cars easily alienate the hands-on craftsperson with their blackboxed microelectronics.

Perhaps the most annoying part of Eric S. Raymond's "How To Become A Hacker" is the one titled "Points For Style". There, Raymond states that an aspiring hacker should adopt certain non-computer activities such as language play, sci-fi fandom, martial arts and musical practice. This sounds to me like the enforcement of a rather narrow subcultural stereotype, but reading Borgmann made me realize an important point there: computer activities alone aren't enough even for computer hackers – they need to be complemented by something more focal.

Worlds drifting apart

So far so good: we should maintain a world of focal things supported by a world of high-tech things. The former is quite earthly, so everything that involves computing and such belongs to the latter. But what if these two worlds drift too far apart?

Borgmann believes that focal things can clarify technology. The contrast between focal and technological helps people put high-tech in proper roles and demand more tangibility from it. If the technology is material enough, its material aspects can be deepened by the materiality of the focal things. When dealing with information technology, however, Borgmann's idea starts losing relevance. Virtual worlds no longer speak a material language, so focal traditions no longer help grasp their black boxes. Technology becomes a detached, incomprehensible bubble of its own – a kind of "necessary evil" for those who put the focal world first.

In order to keep the two worlds anchored together, I suppose we need to build some islands between them. We need things and practices that are tangible and human enough to be earthed by "real" focal practices, but high-tech enough to speak the high-tech language.

Hacker culture provides one possible key. The principles of playful exploration and technological self-expression can be expanded to many other technologies besides computing. Even if "true focality" can't be reached, the hacker attitude at least counteracts passive alienation. Art and craft building on the assumed essence of a technology can be powerful in revealing the human-approachable dimensions of that technology.

How many hackers do we need?

I don't think it is necessary for every user of a complex technology to actively anchor it to reality. However, I do think everyone's social circle should include people who do. Assuming a minimal Dunbar's number of 100 (that is, everyone's social circle contains about a hundred people, at least one of whom should be such an anchorer), we can deduce that at least one percent of the users of any given technology in any social group should be part of a "hacker culture" that anchors it.

Anchoring a technology requires a relationship deeper than what mere rational expertise provides. I would suggest that at least 10% of the users of a technology (preferably a majority, however) should have a solid rational understanding of it, and that at least 10% of these should be "hackers" (ten percent of ten percent: again one percent of all users). A buffer of "casual experts" between superficial and deep users would also have some sociodynamical importance.

We also need to anchor those technologies that we don't use directly but which are used for producing the goods we consume. Since everyone eats food and wears clothes, every social circle needs to have some "gardening hackers" and "textile hackers", or something with a similar anchoring capacity. In a scenario where agriculture and the textile industry are highly automated, some "automation hackers" may be needed as well.

Computing needs to be anchored from two sides – physical and logical. The physical aspect could be well supported by basic electronics craft or something like ham radio, while the logical side could be nurtured by programming-centered arts, maybe even by recreational mathematics.

The big picture

Sophisticated automation leaves people with increasing amounts of free time. Meanwhile, knowledge of and control over technology are held by ever fewer people. It is therefore quite reasonable to use the extra free time for activities that help keep technology in people's hands. A network of technological crafters may also provide an alternative infrastructure that decreases dependence on the dominant machinery.

In an ideal world, people would be constantly aware of the skills and interests present in their various social circles. They would be ready to adopt new interests depending on which technologies need stronger anchoring. The society in general would support the growth and diversification of those groups that are too small or demographically too uniform.

At their best, technological arts would have a profound positive effect on how the majority experiences technology – even when practiced by only a few. They would inspire awe, appreciation and fascination in the masses but at the same time invite them to try to understand the technology.

This was my humble suggestion for a possible way to counteract technological alienation. I hope I managed to be inspiring.

Sunday, 7 September 2014

How I view our species and our world

My recent blog post "The resource leak bug of our civilization" has gathered some interest, especially after getting noticed by Ran Prieur in his blog. I therefore decided to translate another essay to give it a wider context. Titled "A few words about humans and the world", it is intended as a kind of holistic summary of my worldview, especially for people who have had difficulties in understanding the basis of some of my opinions.

---

This writeup is supposed to be concise rather than convincing. It therefore skips a lot of argumentation, linking and breakdowns that might be considered necessary by some. I'll get back to them in more specific texts.

1. Constructions

Humans are builders. We build not only houses, devices and production machinery, but also cultures, conceptual systems and worldviews. Various constructions can be useful as tools; however, we also have an unfortunate tendency to chain ourselves to them.

Right now, humankind has chained itself to the worship of abundance: it is imperative to produce and consume more and more of everything. Quantitative growth is imagined to be the same thing as progress. Especially during the last hundred years, the theology of abundance has penetrated such deep and profound levels that most people don't even notice its effect. It's not just about consumerism on a superficial level, but about the whole economic system and worldview.

Extreme examples of growth ideology can easily be found in the digital world, where it manifests in raised-to-the-power-two form. What happens if worshippers of abundance get their hands on a virtual world where the amount of available resources increases exponentially? Right, they will start bloating up the use of resources, sometimes even for its own sake. It is not at all uncommon to require a thousand times more memory and computational power than necessary for a given task. Mindless complexity and purposeless activities are equated with technological advancement. The tools and methods the virtual world is being built with have been designed from the point of view of idealized expansion, so it is difficult to even imagine alternatives.

I have some background in a branch of hacker culture, the demoscene, where the highest ideal is to use minimal resources in an optimal way. The nature of the most valued progress there is condensing rather than expanding: doing new things under ever stricter limitations. This has helped me perceive the distortions of the digital world and their counterparts in the material world.

In everyday life, the worship of growth shows up, above all, as the complexification of everything. It is becoming increasingly difficult to understand various socio-economic networks or even the functionality of ordinary technological devices. This alienates people from the basics of their lives. Many try to fight this alienation by creating pockets of understandability. Escapism, conservatism and extremism rise. On the other hand, there is also an increase in do-it-yourself culture and a longing for a more self-sufficient way of life. People should be encouraged towards these latter, positive means of countering alienation instead of channels that increase conflict.

An ever greater portion of techno-economical structures consists of useless clutter, so-called economic tumors. These form when various decision-makers attempt to keep their acquired pieces of the cake as big as possible. Unnecessary complexity slows progress down and makes it one-sided instead of being a requirement for it. Expansion needs to be balanced with contraction -- you can't breathe in without breathing out.

The current phase of expansion is finally about to end, since the fossil fuels that made it possible are getting scarcer, and we still don't know of an equally powerful replacement. As the phase took so long, the transition into contraction will be difficult for many. An ever larger portion of the economy will escape into the digital world, where it is possible to maintain the unrealistic swelling for longer than in the material world.

Dependencies of production can be depicted as a pyramid where the things on the higher levels are built from the things below. In today's world, people always try to build at the very top, so the result looks more like a shaky tower than a pyramid. Most new things could easily be built at lower levels. The lowest levels of the pyramid could also be strengthened by giving more room to various self-sufficient communities, local production and low-tech inventions. Technological and cultural evolution is not a one-dimensional road where "forward" and "backward" are the only alternatives. Rather, it is a network of possibilities burgeoning in every direction, and even its strange side-loops are worth knowing.

2. Diversity

It is often assumed that growth increases the number of available options. In principle, this is true -- there are more and more different products on store shelves -- but their differences are more and more superficial. The same is true of ways of life: it is increasingly difficult to choose a way of life that isn't attached to the same chains of production or models of thinking as every other way of life. The alternatives boil down to the same basic consumer-whoredom.

Proprietors overstandardize the world with their choices, but this is probably not a very conscious activity. When there are enough decision-makers playing the same game with the same rules, the world will eventually shape itself around these rules (including all the ingrained bugs and glitches). Conspiracy theories or incarnations of evil are therefore not required to explain what's going on.

The human-built machinery is getting ever more complex, so it is also increasingly difficult to talk about it in concrete terms. Many therefore seek help from conceptual tools such as economic theories, legal terminology or ideologies, and subsequently forget that these are just tools. Nowadays, money- and production-centered ways of conceptualizing the world have become so dominant that people often don't realize that there are alternatives.

Diversity helps nature adapt to changes and recover from disasters. For the same reason, human culture should be as diverse as possible especially now that the future is very uncertain and we have already started to crash into the wall. It is necessary to make it considerably less difficult to choose radically different ways of life. Much more room should be given to experimental societies. Small and unique languages and cultures should be treasured.

There's no one-size-fits-all model that would be best for everyone. However, I believe that most people would be happiest in a society that actively maintains human rights and makes certain that no one is left behind. The dictatorship of the majority, however, is not such a crucial feature of a political system in a world where everyone can freely choose a suitable system. Regardless, dissidents should be given enough room in every society: not everyone necessarily has the chance to choose a society, and excessive unanimity tends to be quite harmful anyway.

3. Consciousness

Thousands of years ago, the passion for construction became so overwhelming that the quest for mental refinement didn't keep pace. I regard this as the main reason why human beings are so prone to becoming slaves of their constructs. Rational analysis is the only mental skill that has been nurtured somewhat sufficiently, and even rational analysis often becomes just a tool for various emotional outbursts and desires. Even very intelligent people may be completely lost when it comes to their emotions and motivations, making them inclined to adopt ridiculously one-dimensional thought constructs.

Putting one's own herd before everyone else is an example of an attitude that may work among small hunter-gatherer groups but should no longer have a place in modern civilization. A population that has the intellectual facilities to build global networks of cause and effect should also have the ability to make decisions on the corresponding level of understanding instead of being driven by pre-intellectual instincts.

Assuming that humankind still wants to maintain complex societal and technological structures, it should fill its consciousness gap. Any school system should teach the understanding and control of one's own mind at least as seriously as reading and writing. New practical mental methods, suitable for an ever greater variety of people, should be developed at least as passionately as new material technology.

For many people, a worldview is still primarily a way of expressing one's herd instincts. They argue and even fight about whose worldview is superior. I hope the future will bring a more individual attitude towards worldviews: there is no single "truth", only different ways of conceptualizing reality. A way that is suitable for one mind may even be destructive to another. Science produces facts and theories that can be used as building blocks for different worldviews, but it is not possible to put these worldviews into an objective order of preference.

4. Life

The purposes of life for individual human beings stem from their individual worldviews, so it is futile to suggest rules-of-thumb that suit all of them. It is much easier to talk about the purpose of biological life, however.

The basic nature of life, based on how life is generally defined, is active self-preservation: life continuously maintains its form, spreads and adapts to different circumstances. The biological role of a living being is therefore to be part of an ecosystem, strengthening the ecosystem's potential for continued existence.

The longer there is life on Earth, the more likely it is to expand into outer space at some point. This expansion may already take place during the human era, but I don't think we should specifically strive for it before we have learned to behave non-destructively. However, I'm all for the production of raw materials and energy in space, if it helps us abstain from raping our home planet.

At their best, intelligent lifeforms could function as some sort of gardeners: gardeners who strengthen and protect the life of their respective homeworlds and help spread it to other spheres. However, I don't dare to suggest that the current human species has the prerequisites for this kind of role. At this moment, we are so lost that we couldn't even become a galactic plague.

Some people regard the human species as a mistake of evolution and want us to abandon everything that differentiates us from other animals. I see no problem per se in the natural behavior of homo sapiens, however: there's just an unfortunate imbalance of traits. We shouldn't therefore abandon reason, abstractions or constructivity but rather rebalance them with more conscious self-improvement and mental refinement.

5. The end of the world

It is not possible to save the world, if it means saving the current societies and consumer-centric lifestyles. At most, we can soften the crash a little bit. It is therefore more relevant to concentrate on activities that make the postapocalyptic world more life-friendly.

As there is still an increasing amount of communications technology and automation in the world, and the privileged even have ever more free time, these facilities should be used right now for sowing the seeds of a better world. If we start building alternative constructs only when circumstances force us to, the transition will be extremely painful.

People increasingly dwell in easiness bubbles facilitated by technology. It is therefore a good idea to bring suitable signals and facilities into these bubbles. Video game technology, for example, can be used to help people reclaim their minds, lives and material environments. Entertainment in general can be used to increase interest in such reclamation.

Many people imagine progress as a kind of unidirectional growth curve and therefore regard the postapocalyptic era as a "return to the past". However, the future world is more likely to become radically different from any previous historical era -- regardless of some possible "old-fashioned" aspects. It may therefore be more relevant to use fantasy rather than history to envision the future.

Tuesday, 5 August 2014

The resource leak bug of our civilization


A couple of months ago, Trixter of Hornet released a demo called "8088 Domination", which shows off real-time video and audio playback on the original 1981 IBM PC. This demo, among many others, contrasts favorably with today's wasteful use of computing resources.

When people try to explain the wastefulness of today's computing, they commonly offer something I call the "tradeoff hypothesis". According to this hypothesis, the wastefulness of software is compensated for by flexibility, reliability, maintainability, and perhaps most importantly, cheap programming work. Even Trixter himself favors this explanation.

I used to believe in the tradeoff hypothesis as well. I saw demo art on extreme platforms as a careful craft that attains incredible feats while sacrificing generality and development speed. However, during recent years, I have become increasingly convinced that the portion of true tradeoff is quite marginal. An ever-increasing portion of the waste comes from abstraction clutter that serves no purpose in the final runtime code. Most of this clutter could be eliminated with more thoughtful tools and methods without any sacrifices. What we have been witnessing in the computing world is nothing utilitarian but a reflection of a more general, inherent wastefulness that stems from the internal issues of contemporary human civilization.

The bug


Our mainstream economic system is oriented towards maximal production and growth. This effectively means that participants are forced to maximize their portions of the cake in order to stay in the game. It is therefore necessary to insert useless and even harmful "tumor material" into one's own economic portion in order to avoid losing one's position. This produces an ever-growing global parasite fungus that manifests as things like black boxes, planned obsolescence and the artificial creation of needs.

Using a software development metaphor, it can be said that our economic system has a fatal bug: a bug that continuously spawns new processes that allocate more and more resources without ever releasing them, eventually stopping the whole system from functioning. Of course, "bug" is a somewhat normative term, and many bugs can actually be reappropriated as useful features. However, resource leak bugs are very seldom useful for anything other than attacking a system from the outside.

Bugs are often regarded as necessary features by end-users who are not familiar with alternatives that lack the bug. This also applies to our society. Even if we realize the existence of the bug, we may regard it as a necessary evil because we don't know of anything else. Serious politicians rarely talk about trying to fix the bug. On the contrary, it is actually becoming more common to embrace it. A group that calls itself "Libertarians" even builds its ethics on it. Another group, called "Extropians", takes the maximization idea to the extreme by advocating an explosive expansion of humankind into outer space. On the so-called Kardashev scale, the developmental stage of a civilization is straightforwardly equated with how much stellar energy it can harness for production-for-its-own-sake.

How the bug manifests in computing


What happens if you give this buggy civilization a virtual world where the abundance of resources grows exponentially, as in Moore's law? Exactly: it adopts the extropian attitude, aggressively harnessing as many resources as it can. Since the computing world is virtually limitless, it can serve as an interesting laboratory example where the growth-for-its-own-sake ideology takes a rather pure and extreme form. Nearly every methodology, language and tool used in the virtual world focuses on cumulative growth while neglecting many other aspects.

To concretize, consider web applications. There is a plethora of different browser versions and hardware configurations. It is difficult for developers to take all this diversity into account, so the problem has been solved by encapsulation: monolithic libraries (such as jQuery) that provide cross-browser-compatible utility blocks for client-side scripting. Also, many websites share similar basic functionality, so it would be a waste of labor time to implement everything specifically for each application. This problem has also been solved with encapsulation: huge frameworks and engines that can be customized for specific needs. These masses of code have usually been built upon previous masses of code (such as PHP) that were designed for exactly the same purpose. Frameworks encapsulate legacy frameworks, and eventually, most of the computing resources are wasted by the intermediate bloat. Accumulation of unnecessary code dependencies also makes software more bug-prone, and debugging becomes increasingly difficult because of the ever-growing pile of potentially buggy intermediate layers.

Software developers tend to use encapsulation as the default strategy for just about everything. It may feel like a simple, pragmatic and universal choice, but this feeling is mainly due to the tools and the philosophies they stem from. The tools make it simple to encapsulate and accumulate, and the industrial processes of software engineering emphasize these ideas. Alternatives remain underdeveloped. Mainstream tools make it far more cumbersome to do things like metacoding, static analysis and automatic code transformations, which would be far more relevant than static frameworks for problems such as cross-browser compatibility.

Tell a bunch of average software developers to design a sailship. They will do a web search for available modules. They will pick a wind power module and an electric engine module, which will be attached to some kind of floating module. When someone mentions aero- or hydrodynamics, the group will respond by saying that elementary physics is a far too specialized area, and that it is cheaper and more straightforward to just combine pre-existing modules and pray that the combination works sufficiently well.

Result: alienation


The way of building complex systems from more-or-less black boxes is also the way our industrial society is constructed. Computing just takes it to a greater extreme. Modularity in computing therefore relates very well to the technology criticism of philosophers such as Albert Borgmann.

In his 1984 book, Borgmann uses the term "service interface", which even sounds like software development terminology. Service interfaces often involve money. People who have a paid job, for example, can be regarded as modules that try to fulfill a set of requirements in order to remain acceptable pieces of the system. When spending the money, they can be regarded as modules that consume services produced by other modules. What happens beyond the interface is considered irrelevant, and this irrelevance is a major source of alienation. Compare someone who grows and chops their own wood for heating to someone who works in the forest industry and buys firewood with the paycheck. In the former case, it is easier to get genuinely interested in all the aspects of forests and wood because they directly affect one's life. In the latter case, fulfilling the unit requirements is enough.

The way of perceiving the world as modules or devices operated via service interfaces is called "device paradigm" in Borgmann's work. This is contrasted against "focal things and practices" which tend to have a wider, non-encapsulated significance to one's life. Heating one's house with self-chopped wood is focal. Also arts and crafts have a lot of examples of focality. Borgmann urges a restoration of focal things and practices in order to counteract the alienating effects of the device paradigm.

It is increasingly difficult for computer users to avoid technological alienation. Systems become increasingly complex, and genuine interest towards their inner workings may be discouraged. If you learn something about a system, the information probably won't stay current for very long. If you modify it, subsequent software updates will break your modifications. It is extremely difficult to develop a focal relationship with a modern technological system. Even hard-core technology enthusiasts tend to ignore most aspects of the systems they are interested in. As ever-complexifying computer systems grow ever more deeply ingrained in our society, they become increasingly difficult to grasp even for those who are dedicated to understanding them. Eventually even they will give up.

Chopping one's own wood may be a useful way to counteract the alienation of the classic industrial society, as oldschool factories and heating stoves still have some basics in common. In order to counteract the alienation caused by computer technology, however, we need to find new kinds of focal things and practices that are more computerish. If they cannot be found, they need to be created. Crafting with low-complexity computer and electronic systems, including the creation of art based on them, is my strongest candidate for such a focal practice among those that already exist in subcultural form.

The demoscene insight


I have been programming since my childhood, for nearly thirty years, and I have been involved with the demoscene for nearly twenty. During this time, I have accumulated a lot of angst towards various trends in computing.

Extreme categories of the demoscene -- namely, eight-bit democoding and extremely short programs -- have been helpful for me in managing this angst. These branches of the demoscene are a useful countercultural mirror that contrasts with the trends of industrial software development and helps one grasp its inherent problems.

Other subcultures have been far less useful for me in this endeavour. The mainstream of open source / free software, for example, is a copycat culture, despite its strong ideological dimension. It does not actively question the philosophies and methodologies of the growth-obsessed industry but actually embraces them when creating duplicate implementations of growth-obsessed software ideas.

Perhaps the strongest countercultural trend within the demoscene is the move of focus towards ever tighter size limitations, or as they say, "4k is the new 64k". This trend is diametrically opposite to what the growth-oriented society is doing, and it forces one to rethink even the deepest "best practices" of industrial software development. Encapsulation, for example, is still quite prominent in the 4k category (4klang is a monolith), but in the 1k and smaller categories, finer methods are needed. When going downwards in size, paths considered dirty by the mainstream need to be embraced. Efficient exploration and taming of chaotic systems needs tools that are deeply different from those used before. Stephen Wolfram's ideas presented in "A New Kind of Science" can perhaps provide useful insight for this endeavour.

Another important countercultural aspect of the demoscene is its relationship with computing platforms. The mainstream regards platforms as neutral devices that can be used to reach a predefined result, while the demoscene regards them as a kind of raw material that has a specific essence of its own. Size categories may also split platforms into subplatforms, each of which has its own essence. The mainstream wants to hide platform-specific characteristics by encapsulating them into uniform straitjackets, while the demoscene is more keen to find suitable esthetic approaches for each category. In Borgmannian terms, demoscene practices are more focal.

Demoscene-inspired practices may not be the wisest choice for pragmatic software development. However, they can be recommended for the development of a deeper relationship with technology and for diminishing the alienating effects of our growth-obsessed civilization.

What to do?


I am convinced that our civilization is already falling and this fall cannot be prevented. What we can do, however, is create seeds for something better. Now is the best time for doing this, as we still have plenty of spare time and resources especially in rich countries. We especially need to propagate the seeds towards laypeople who are already suffering from increasing alienation because of the ever more computerized technological culture. The masses must realize that alternatives are possible.

A lot of our current civilization is constructed around the resource leak bug. We must therefore deconstruct the civilization down to its elementary philosophies and develop new alternatives. Countercultural insights may be useful here. And since hacker subcultures have been forced to deal with the resource leak bug in its most extreme manifestation for some time already, their input can be particularly valuable.

Saturday, 17 March 2012

"Fabric theory": talking about cultural and computational diversity with the same words

In recent months, I have been pondering a lot about certain similarities between human languages, cultures, programming languages and computing platforms: they are all abstract constructs capable of giving a unique form or flavor to anything that is made with them or stems from them. Different human languages encourage different types of ideas, ways of expression, metaphors and poetry while discouraging others. Different programming languages encourage different programming paradigms, design philosophies and algorithms while discouraging others. The different characteristics of different computing platforms, musical instruments, human cultures, ideologies, religions or subcultural groups all similarly lead to specific "built-in" preferences in expression.

I'm sure this sounds quite meta, vague or superficial when explained this way, but I'm convinced that the similarities are far more profound than most people assume. In order to bring these concepts together, I've chosen to use the English word "fabric" to refer to the set of form-giving characteristics of languages, computers or just about anything. I've picked this word partly because of its dual meaning, i.e. you can consider a fabric a separate, underlying, form-giving framework just as well as an actual material from which the different artifacts are made. You may suggest a better word if you find one.

Fabrics

The fabric of a human language stems (primarily) from its grammar and vocabulary. The principle of linguistic relativity, also known as the Sapir-Whorf hypothesis, suggests that language defines a lot about what our ways of thinking end up being like, and there is even a bunch of experimental support for this idea. The stronger, classical version of the hypothesis, stating that languages build hard barriers that actually restrict what kinds of ideas are possible, is very probably false, however. I believe that all human languages are "human-complete", i.e. they are all able to express the same complete range of human thoughts, although the expression may become very cumbersome in some cases. In most Indo-European languages, for example, it is very difficult to talk about people without mentioning their real or assumed genders all the time, and it may be very challenging to communicate mathematical ideas in an Aboriginal language that has a very rudimentary number system.

Many programmers seem to believe that the Sapir-Whorf hypothesis also works with programming languages. Edsger Dijkstra, for example, was definitely quite Whorfian when stating that teaching BASIC programming to students made them "mentally mutilated beyond hope of regeneration". The fabric of a programming language stems from its abstract structure, not unlike those of natural languages, although a major difference is that the fabrics of programming languages tend to be much "purer" and more clear-cut, as they are typically geared towards specific application areas, computation paradigms and software development philosophies.

Beyond programming languages there are computer platforms. In the context of audiovisual computer art, the fabric of a hardware platform stems both from its "general-purpose" computational capabilities and the characteristics of its special-purpose circuitry, especially the video and sound hardware. The effects of the fabric tend to be the clearest in the most restricted platforms, such as 8-bit home computers and video game consoles. The different fabrics ("limitations") of different platforms are something that demoscene artists have traditionally been concerned about. Nowadays, there is even an academic discipline with an expanding series of books, "Platform Studies", that asks how video games and other forms of computer art have been shaped by the fabrics of the platforms they've been made for.

The fabric of a human culture stems from a wide memetic mess including things like taboos, traditions, codes of conduct, and, of course, language. In modern societies, a lot stems from bureaucratic, economic and regulatory mechanisms. Behavior-shaping mechanisms are also very prominent in things like video games, user interfaces and interactive websites, where they form a major part of the fabric. The fabric of a musical instrument stems partly from its user interface and partly from its different acoustic ranges and other "limitations". It is indeed possible to extend the "fabric theory" to quite a wide variety of concepts, even though it may get a little bit far-fetched at times.

Noticing one's own box

In many cases, a fabric can become transparent or even invisible. Those who only speak one language can find it difficult to think beyond its fabric. Likewise, those who only know about one culture, one worldview, one programming language, one technique for a specific task or one just-about-anything need some considerable effort to even notice the fabric, let alone expand their horizons beyond it. History shows that this kind of mental poverty leads even some very capable minds into quite disastrous thoughts, ranging from general narrow-mindedness and false sense of objectivity to straightforward religious dogmatism and racism.

In the world of computing, difficult-to-notice fabrics come out as standards, de-facto standards and "best practices". Jaron Lanier warns about "lock-ins", restrictive standards that are difficult to outthink. MIDI, for example, enforces a specific, finite formalization of musical notes, effectively narrowing the expressive range of a lot of music. A major concern raised by "You Are Not a Gadget" is that the technological lock-ins of on-line communication (e.g. those prominent in Facebook) may end up trivializing humanity in a way similar to how MIDI trivializes music.

Of course, there's nothing wrong with standards per se. Standards, also including constructs such as lingua francas and social norms, can be very helpful or even vital to humanity. However, when a standard becomes an unquestionable dogma, there's a good chance for something evil to happen. In order to avoid this, we always need individuals who challenge and deconstruct the standards, keeping people aware of the alternatives. Before we can think outside the box, we must first realize that we are in a box in the first place.

Constraints

In order to make a fabric more visible and tangible, it is often useful to introduce artificial constraints to "tighten it up". In a human language, for example, one can adopt a form of constrained writing, such as a type of poetry, to bring up some otherwise-invisible aspects of the linguistic fabric. In normal, everyday prose, words are little more than arbitrary sequences of symbols, but when working under tight constraints, their elementary structures and mutual relationships become important. This is very similar to what happens when programming in a constrained environment: previously irrelevant aspects, such as machine code instruction lengths, suddenly become relevant.

Constrained programming has long traditions in a multitude of hacker subcultures, including the demoscene, where it has obtained a very prominent role. Perhaps the most popular type of constraint in all hacker subcultures in general is the program length constraint, which sets an upper limit to the size of either the source code or the executable. It seems to be a general rule that working with ever smaller program sizes brings the programmer ever closer to the underlying fabric: in larger programs, it is possible to abstract away a lot of it, but under tight constraints, the programmer-artist must learn to avoid abstraction and embrace the fabric the way it is. In the smallest size classes, even such details as the ordering of sound and video registers in the I/O space become form-giving, as seen in the sub-32-byte C-64 demos by 4mat of Ate Bit, for example.

Mind-benders

Sometimes a language or a platform feels tight enough even without any additional constraints. A lot of this feeling is subjective, caused by the inability to express oneself in the previously learned way. When learning a new human language that is completely different from one's mother tongue, one may feel restricted when there's no counterpart for a specific word or grammatical construct. When encountering such a "boundary", the learner needs to rethink the idea in a way that goes around it. This often requires some mind-bending. The same phenomenon can be encountered when learning different programming languages, e.g. learning a declarative language after only knowing imperative ones.

Among both human and programming languages, there are experimental languages that have been deliberately constructed as "mind-benders", having the kind of features and limitations that force the user to rethink a lot of things when trying to express an idea. Among constructed human languages, a good example is Sonja Elen Kisa's minimalistic "Toki Pona" that builds everything from just over 120 basic words. Among programming languages, the mind-bending experiments are called "esoteric programming languages", with the likes of Brainfuck and Befunge often mentioned as examples.

In computer platforms, there's also a lot of variance in "objective tightness". Large amounts of general-purpose computing resources make it possible to accurately emulate smaller computers; that is, a looser fabric may sometimes completely engulf a tighter one. Because of this, the experience of learning a "bigger" platform after a "smaller" one is not usually very mind-bending compared to the opposite direction.

Nothing is neutral

Now, would it be possible to create a language or a computer that would be totally neutral, objective and universal? I don't think so. Trying to create something that lacks fabric is like trying to sculpt thin air, and fabrics are always built from arbitrarities. Whenever something feels neutral, the feeling is usually deceptive.

Popular fabrics are often perceived as neutral, although they are just as arbitrary and biased as the other ones. A tribe that doesn't have very much contact with other tribes typically regards its own language and culture as "the right one" and everyone else as strange and deviant. When several tribes come together, they may choose one language as their supposedly neutral lingua franca, and a sufficiently advanced group of tribes may even construct a simplified, bland mix-up of all of its member languages, an "Esperanto". But even in this case, the language is by no means universal; the fabric that is common between the source languages is still very much present. Even if the language is based on logical principles, i.e. a "Lojban", the chosen set of principles is arbitrary, not to mention all the choices made when implementing those principles.

Powerful computers can usually emulate many less powerful ones, but this does not make them any less arbitrary. On the contrary, modern IBM PC compatibles are full of arbitrary design choices stacked on one another, forming a complex spaghetti of historical trials and errors that would make no sense at all if designed from scratch. The modern IBM PC platform therefore has a very prominent fabric, and the main reason why it feels so neutral is its popularity. Another reason is that the other platforms share a lot of the same design choices, making today's computer platforms much less diverse than they were a couple of decades ago. For example, how many modern platforms can you name that use something other than RGB as their primary colorspace, or something other than a power of two as their word length?

Diversity is diminishing in many other areas as well. In countries with astounding diversity, like Papua New Guinea, many groups are abandoning their unique native languages and cultures in favor of bigger and more prestigious ones. I see some of this even in my own country, where many young and intelligent people take pride in "thinking in English", erroneously assuming that second-language English is somehow more expressive for them than their mother tongue. In a dystopian vision, the diversity of millennia-old languages and cultures gets replaced by a global English-language monoculture where all the diversity is subcultural at best.

Conclusion

It indeed seems to be possible to talk about human languages, cultures, programming languages, computing platforms and many other things with similar concepts. These concepts also seem so useful at times that I'm probably going to use them in subsequent articles as well. I also hope that this article, despite its length, gives some food for thought to someone.

Now, go to the world and embrace the mind-bending diversity!

Wednesday, 7 September 2011

A new propaganda tool: Post-Apocalyptic Hacker World

I visited the Assembly demo party this year, after a break of two years. It felt more relevant than it had in a while, because I had an agenda.

For a year or so, I have been actively thinking about the harmful aspects of people's relationships with technology. It is already quite apparent to me that we are increasingly under the control of our own tools, letting them make us stupid and dependent. Unless, of course, we promote a different world, a different way of thinking -- one that allows us to remain in control.

So far, I've written a couple of blog posts about this. I've been nourishing myself with the thoughts of prominent people such as Jaron Lanier and Douglas Rushkoff who share the concern. I've been trying to find ways of promoting the aspects of hacker culture I represent. Now I felt that the time was right for a new branch -- an artistic one based on a fictional world.

My demo "Human Resistance", which came 2nd in the oldskool demo competition, was my first excursion into this new branch. Of course, it has some echoes of my earlier productions such as "Robotic Liberation", but the setting is new. Instead of showing ruthless machines genociding helpless mankind, we are dealing with a culture of ingenious hackers who manage to outthink a superhuman intellect that dominates the planet.



"Human Resistance" was a relatively quick hack. I was too hurried to fix the problems in the speech compressor or to explore the real potential of Tau Ceti -style pseudo-3D rendering. The text, however, came from my heart, and the overall atmosphere was quite close to what I intended. It introduces a new fictional world of mine, a world I've temporarily dubbed "Post-Apocalyptic Hacker World" (PAHW). I've been planning to use this world not only in demo productions but also in at least one video game. I haven't released anything interactive for like fifteen years, so perhaps it's about time for a game release.

Let me elaborate on the setting of this world a little bit.

Fast-forward to a post-singularitarian era. Machines control all the resources of the planet. Most human beings, seduced by the endless pleasures of procedurally generated virtual worlds, have voluntarily uploaded their minds into so-called "brain clusters", where they have lost their humanity and individuality, becoming mere components of a global superhuman intellect. Only those people with a lot of willpower and a strong philosophical stance against dehumanization have remained in their human bodies.

Once the machines initiated an operation called "World Optimization", they started to regard natural formations (including all biological life) as harmful and unpredictable externalities. As a result, planet Earth has been transformed into something far more rigid, orderly and geometric. Forests, mountains, oceans and clouds no longer exist. Strange, lathe-like artifacts protrude from vast, featureless plains. Those who have studied ancient pop culture immediately notice a resemblance to some of the 3D computer graphics of the 1980s. The real world now looks like the computed reality of Tron or the futuristic terrains of video games such as Driller, Tau Ceti and Quake Minus One.

Only a tiny fraction of biological human beings survived World Optimization. These people, who collectively call themselves "hackers", managed to find and exploit the blind spots of algorithmic logic, making it possible for them to establish secret, self-reliant underground fortresses where human life can still struggle on. It has become a necessity for all human beings to dedicate as much of their mental capacity as possible to outthinking the brain clusters in order to eventually conquer them.

Many of the tropes in Post-Apocalyptic Hacker World are quite familiar. A human resistance movement fighting against a machine-controlled world -- haven't we seen this many times already? Yes, we have, but I also think my approach is novel enough to form a basis for some cutting-edge social, technological and political commentary. By emphasizing things like total cognitive freedom and a radical understanding of things' inner workings in the futuristic hacker culture, it may be possible to get people to realize their importance in the real world as well. It is also quite possible to include elements from real-life hacker cultures and mindsets in the world, adding to its depth and interest.

The "PAHW game" (still without a better title) is already in an advanced stage of pre-planning. It is going to become a hybrid CRPG/strategy game with random-generated worlds, very loose scripting and some very unique game-mechanical elements. This is just a side project so it may take a while before I have anything substantial to show, but I'll surely let you know once I have. Stay tuned!

Sunday, 24 July 2011

Don't submit yourself to a game machine!

(This is a translation of a post in my Finnish blog)

Some generations ago, when people said they were playing a game, they usually meant a social leisure activity that followed a commonly decided set of rules. The devices used for gaming were very simple, and the games themselves were purely in the minds of the players. It was possible to play thousands of different games with a single constant deck of cards, and it was possible for anyone to invent new games and variants.

Technological progress brought us "intelligent" gaming devices that reduced the possibility of negotiation. It is not possible to suggest an interesting rule variant to a pinball machine or a one-armed bandit; the machine only implements the rules it was built for. Changing the game requires technical skill and a lot of time, something most people don't have. As a matter of fact, most people aren't even interested in the exact rules of the game; they just care about the fun.

Nowadays, people have submitted ever bigger portions of their lives to "gaming machines" that make things at least superficially easier and simpler, but whose internal rules they don't necessarily understand at all. A substantial portion of today's social interaction in developed countries, for example, takes place in on-line social networking services. Under the hood, these services calculate things like message visibility -- that is, which messages and whose messages are supposed to be more important for a given user. For most people, however, it seems to be completely OK that a computer owned by a big, distant corporation makes such decisions for them using a secret set of rules. They just care about the fun.

It has always been easy to use the latest media to manipulate people, as it takes time for an audience to develop criticism. When writing was a new thing, most people would regard any text as the "word of God", true just because it was written. In comparison, today's people have a thick wall of criticism against any kind of non-interactive propaganda, be it textual, aural or visual, but whenever a game-like interaction is introduced, we often become completely vulnerable. In short, we know how to be critical about an on-line news item but not about the "like" and "share" buttons under it.

Video games, in many ways, surpass traditional passive media in their potential for mental manipulation. A well-known example is the so-called Tetris effect, caused by prolonged playing of a pattern-matching game. The game of Tetris "programs" its player to constantly analyze the on-screen wall of blocks and mentally fit the different types of tetrominoes into it. When the player stops playing after several hours, the "program" may remain active, causing the player to continue mentally fitting tetrominoes into outdoor landscapes or whatever else they see in their environment. Other kinds of games may have other kinds of effects. I have personally experienced an "adventure game effect" that caused me to unwillingly think about real-world things and locations from the point of view of "progressing in the script". Therefore, I don't think it is a very far-fetched idea that spending a lot of time on an interactive website gives our brains permission to adapt to its "game mechanics" and unnoticeably alter the way we look at the world.

So, is this a real threat? Are they already trying to manipulate our minds by game-mechanical means, and how? There has perhaps been even too much criticism of Facebook compared to other social networking sites, but I'm using it as an example here because it is currently the most familiar one to a wide audience.

As many people probably understand already, Facebook's customer base doesn't consist of the users (who pay nothing for the service) but of marketeers who want their products to be sold. The users can be thought of as mere raw material that can be refined to better fit the requirements of the market. This is most visible in the user profile mechanic, which encourages users to define themselves primarily with multiple choices and product fandom. The only space in the profile that allows for longer free text has been laid out below all the "more important things". Marketeers don't want personal profile pages but reliable statistics, high-quality consumption-habit databases and easily controllable consumers.

The most prominent game-mechanical element in Facebook is "Like", which affects nearly everything on the site. It is a simple and easily processable signal whose use is particularly encouraged. In its internal game, Facebook scores users according to how active "likers" they are, and gives more visibility to the messages of users who score higher. Moderate users of Facebook, who use their whole brains to consider what to "Like" and what to share, gain fewer points and less visibility. This is how Facebook rewards the "virtuous" users and punishes the "sinful" ones.

What about those users who actually want to understand the inner workings of the service in order to use it better for their own purposes? Facebook makes this very difficult, and I believe it is on purpose. The actual rules of the game haven't been documented anywhere, so users need to rely on intuitive guesses or experimentation. If a user actually manages to reverse-engineer part of the black box, he or she can never trust that it will continue to work in the same way. The changes in the rules of the internal game can be totally unpredictable. This discourages users from even trying to understand the game they are playing and encourages them to entrust the control of their private lives to the computers of a big, distant company.

Of course, Facebook is not representative of all forms of on-line sociality. The so-called imageboards, for example, are diametrically opposite to Facebook in many areas: totally uncommercial and simple-to-understand sites where real names or even pseudonyms are rarely used. As these sites function totally differently from Facebook, it can be guessed that they also affect their users' brains in a different way.

Technically, imageboards resemble discussion boards, but with the game-mechanical difference that they encourage faster, more spontaneous communication, which usually feels more like a loud attention-whoring contest than actual discussion. A lot of imageboard culture can be explained as a mere consequence of the mechanics. The fact that images are often more prominent than text in threads makes it possible for users to superficially skim over the pictures and only focus on the parts that seize their attention. This contributes to the fast tempo that invites users to react very quickly and spontaneously, usually without any means of identification, as if as part of a rebellious mob. Beliefs in radical anonymity and hivemind power have ultimately become something like the core values of imageboard culture.

The possibility of anonymous commentary gives us a much greater sense of freedom than we get by using our real names or even long-term pseudonyms. Anonymous provocateurs don't need to be afraid of losing face. They feel free to troll around from the bottom of their hearts, looking for the moments of "lulz" they get by winding someone up. The behavior is probably familiar to anyone who has read anonymous comments on news websites or toilet walls. Imageboards just take this kind of behavior to its logical extreme, basing all of their social interaction on spontaneous mob behavior.

Critics of on-line culture, such as Lanier and Rushkoff, have often expressed their concern about how on-line socialization trivializes our view of other people. Instead of interacting with living people with rich personalities, we seem to be increasingly dealing with lists, statistics and faceless mobs that we interact with via "Like", "Block" and "Add Friend" buttons. I'm also concerned about this. Even when we understand on a rational level that this is just an abstraction required for the medium to work, we may accidentally and unnoticeably become programmed by the "Tetris effects" of these media. Awareness and criticism may very well reduce the risk, but I don't believe they can make anyone totally immune.

So, what can we do? Should we abandon social networking sites altogether to save the humanity of the human race? I don't think denialism helps anything. Instead, we should learn how to use the potential of interactive social technology in constructive rather than destructive ways. We should develop new game mechanics that, instead of promoting collective stupidity and dehumanization, augment the positive sides of humanity and encourage us to improve ourselves. But is this anything the great masses could become interested in? Do they even care anymore whether they remain independent individuals? Perhaps not, but we can still hope for the best.

Friday, 17 June 2011

We need a Pan-Hacker movement.

Some decades ago, computers weren't nearly as common as they are today. They were big and expensive, and access to them was very privileged. Still, there was a handful of people who had the chance to toy around with a computer in their leisure time and get a glimpse of what total, personal access to a computer might be like. It was among these people, mostly students at MIT and similar institutions, that the computer hacker subculture was born.

The pioneering hackers felt that computers had changed their lives for the better and therefore wanted to share this new method of improvement with everyone else. They thought everyone should have access to a computer, and not just any kind of access but an unlimited, non-institutionalized one. Something like a cheap personal computer, for example. Eventually, in the seventies, some adventurous hackers bootstrapped the personal computer industry, which led to the so-called "microcomputer revolution" of the early eighties.

The era was filled with hopes and promises. All kinds of new possibilities were now at everyone's fingertips. It was assumed that programming would become a new form of literacy, something every citizen should be familiar with -- after all, using a computer to its fullest potential has always required programming skill. "Citizens' computer courses" were broadcast on TV and radio, and parents bought cheap computers for their kids to ensure a bright future for the next generation. Some prophets even went as far as to suggest that personal computers could augment people's intellectual capacities or even expand their consciousness in the way psychedelic drugs were thought to.

In the nineties, however, reality struck back. Selling a computer to everyone was apparently not enough to automatically turn them into superhuman creatures. As a matter of fact, digital technology actually seemed to dumb a lot of people down, making them helpless and dependent rather than liberating them. Hardware and software have become ever more complex, and it is already quite difficult to build reliable mental models of them or even be aware of all the automation that takes place. Instead of actually understanding and controlling their tools, people just make educated guesses about them and pray that everything works out right. We are increasingly dependent on digital technology but have less and less control over it.

So, what went wrong? Hackers opened the door to universal hackerdom, but the masses didn't enter. Are most people just too stupid for real technological awareness, or are the available paths to it too difficult or time-consuming? Is the industry deliberately trying to dumb people down with excessive complexity, or is it just impossible to make advanced technology any simpler to genuinely understand? In any case, the hacker movement has somewhat forgotten the idea of making digital technology more accessible to the masses. It's a pity, since the world needs this idea now more than ever. We need to give common people back the possibility to understand and master the technology they use. We need to let them ignore the wishes of the technological elite and regain control of their own lives. We need a Pan-Hacker movement.

What does "Pan-Hacker" mean? I'll be giving three interpretations that I find equally relevant, emphasizing different aspects of the concept: "everyone can be a hacker", "everything can be hacked" and "all hackers together".

The first interpretation, "everyone can be a hacker", expands on the core idea of oldschool hackerdom: making technology as accessible as possible to as many people as possible. The main issue is no longer the availability of technology, however, but the way the various pieces of technology are designed and what kinds of user cultures form around them. Ideally, technology should be designed so that it invites the user to seize control, play around for fun and gradually develop an ever deeper understanding in a natural way. User cultures that encourage users to invent new tricks should be embraced and supported, and there should be different "paths of hackerdom" for all kinds of people with all kinds of interests and cognitive frameworks.

The second interpretation, "everything can be hacked", embraces the trend of extending the concept of hacking beyond the technological sphere. The generalized idea of hacking is relevant to all kinds of human activities, and all aspects of life are amenable to the principles of in-depth understanding and hands-on access. As the apparent complexity of the world is constantly increasing, it is particularly important to maintain and develop people's ability to understand the world and all the things that affect their lives.

The third interpretation, "all hackers together", wants to eliminate the various schisms between the existing hacker subcultures and bring them into fruitful co-operation. There is, for example, a popular text, Eric S. Raymond's "How To Become A Hacker", that represents a somewhat narrow-minded "orthodox hackerdom" which sees the free/open-source software culture as the only hacker culture worth contributing to. It frowns upon all non-academic hacker subcultures, especially the ones that use handles (such as the demoscene, which is my own primary reference point to hackerdom). We need to get rid of this kind of segregation and realize that there are many equally valid paths suitable for many kinds of minds and ambitions.

Now that I've mentioned the demoscene, I would like to add that all kinds of artworks and acts that bring people closer to the deep basics of technology are also important. I've been very glad about the increasing popularity of chip music and circuit-bending, for example. The Pan-Hacker movement should actively look for new ways of "showing off the bits" to different kinds of audiences in many kinds of diverse contexts.

I hope my writeup has given someone some food for thought. I would like to elaborate my philosophy even further and perhaps do some cartography of the existing "Pan-Hacker" activity, but perhaps I'll return to that at some later time. Before that, I'd like to hear your thoughts and visions about the idea. What kinds of groups should I look into? What kinds of projects could a Pan-Hacker movement participate in? Is there still something we need to define or refine?

Monday, 6 June 2011

Ancient binary symbolism and why it is relevant today

It is a well-known fact that the human use of binary strings (or even binary numbers, see Pingala) predates electronics and automatic calculators by thousands of years.

Divination was probably the earliest human application for binary arrays. There are several systems in Eurasia and Africa that assign fixed semantics to bitstrings of various lengths. The Chinese I Ching gives meanings to the 3- and 6-bit arrays, while the systems used in the Middle East, Europe and Africa tend to prefer groups of 4 and 8 bits.

These systems of binary mysticism have been haunting me for many years now. As someone who has been playing around with bits since childhood, I have found the idea of ancient archetypal meanings for binary numbers very attractive. However, when studying the actual systems in order to find the archetypes, I have always encountered a lot of noise that has blocked my progress. It has been a little frustrating: behind the noise, there are clear hints of an underlying logic and an original protosemantics, but whenever I have tried to filter out the noise, the solution has escaped my grasp.

Recently, however, I finally came up with a solution that satisfies my sense of esthetics. I even pixelled a set of "binary tarot cards" for showing off the discovery:


For a more complete summary, you may want to check out this table that contains a more elaborate set of meanings for each array and also includes all the traditional semantics I have based them on.

Of course, I'm not claiming that this is some kind of a "proto-language" from which all the different forms of binary mysticism supposedly developed. It is just an attempt to find an internally consistent set of meanings that match the various traditional semantics as closely as possible.

Explanation

In my analysis, I have translated the traditional binary patterns into modern Leibnizian binary numbers using the following scheme:

This is the scheme that works best for I Ching analysis. The bits on the bottom are considered heavier and more significant, and they change less frequently, so the normal big-endian reading starts from the bottom. The "yang" line, consisting of a single element, maps quite naturally to the binary "1", especially given that both "yang" and "1" are commonly associated with activity.
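
As a concrete illustration of this reading order, here is a minimal sketch in Python (my own code, not part of any traditional system):

    # Convert an I Ching figure to a Leibnizian binary number.
    # Lines are listed from the bottom up: a solid "yang" line is 1,
    # a broken "yin" line is 0. The bottom line is the heaviest,
    # i.e. the most significant bit.
    def figure_to_number(lines):
        value = 0
        for line in lines:               # bottom line first
            value = (value << 1) | line
        return value

    # All six lines yang (the hexagram "Heaven") reads as 111111 = 63:
    print(figure_to_number([1, 1, 1, 1, 1, 1]))   # 63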

I have drawn each "card picture" based on the African shape of the binary array (represented as rows of one or two stones). I have left the individual "stones" clearly visible so that the bitstrings can be read out from the pictures alone. Some of the visual associations are my own, but I have also tried to use traditional associations (such as 1111=road/path, 0110=crossroads, 1001=enclosure) whenever they feel relevant and universal enough.

In addition to visual associations, the traditional systems have also formed semantics by opposition: if the array 1111 means "journey", "change" and "death", its inversion 0000 may obtain the opposite meanings: "staying at home", "stability" and "life". The visual associations of 0000 itself no longer matter as much.

The two operations used for creating symmetry groups are inversion and mirroring. These can be found in all families of binary divination: symmetric arrays are always paired with their inversions (e.g. 0000 with 1111), and asymmetric arrays with their mirror images (e.g. 0111 with 1110).
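
To make the two operations concrete, here is a minimal sketch in Python, with 4-bit arrays written as strings (the function names are my own):

    # Inversion flips every bit; mirroring reverses their order.
    def invert(bits):
        return ''.join('1' if b == '0' else '0' for b in bits)

    def mirror(bits):
        return bits[::-1]

    print(invert('1111'))   # '0000': a symmetric array pairs with its inversion
    print(mirror('0111'))   # '1110': an asymmetric array pairs with its mirror image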

Because of the profound role of symmetry groups, I haven't represented the arrays in a numerical order but in a 4x4 arrangement that emphasizes the mutual relationships via inversion and mirroring. Each of the rows in the "binary tarot" picture represents a group with similar properties:
  • The top row contains the four symmetrical arrays (which remain the same when mirrored).
  • The second row contains the arrays for which mirroring and inversion are equivalent.
  • The two bottom rows represent the two groups whose members can be derived from each other solely by mirroring and inversion.
The semantics within each group are interrelated. For example, the third row ("up", "in", "out", "down") can be labelled "the directions". In order to emphasize this, I have chosen a pair of dichotomies for each row. For example, the row of the directions uses the dichotomies "far-near" and "horizontal-vertical", and the array called "up" combines the poles "far"+"vertical". All the dichotomies can be found in my summary table.

The arrays in the top two groups have even parity, while those in the bottom two groups have odd parity. This difference is important at least in Al-Raml and related systems, where the array taking the role of the "judge" in a divination table must have even parity; otherwise there is an error in the calculation.
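
In code, parity is simply the number of 1-bits modulo 2; another illustrative sketch:

    # Even parity (0) characterizes the top two rows,
    # odd parity (1) the bottom two.
    def parity(bits):
        return bits.count('1') % 2

    print(parity('0110'))   # 0: even parity, acceptable as a "judge"
    print(parity('0111'))   # 1: odd parity, would indicate a miscalculation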

The members of each row can be derived from one another by eXclusive-ORing them with a symmetrical array (0000, 1111, 0110 or 1001). For this reason, I have also organized the arrangement as a XOR table.
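
The four symmetrical arrays are closed under XOR, so they form a small group, and XORing any member of a row with each of them enumerates the whole row. A sketch of the idea:

    # The symmetrical arrays form a four-element XOR group.
    KEYS = [0b0000, 0b1111, 0b0110, 0b1001]

    def row_of(member):
        # Derive a complete row of the 4x4 table from any one member.
        return [format(member ^ key, '04b') for key in KEYS]

    print(row_of(0b0000))   # ['0000', '1111', '0110', '1001']: the top row
    print(row_of(0b0001))   # ['0001', '1110', '0111', '1000']: an odd-parity row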

The color schemes used in the card pictures are based on the colors in various 16-color computer palettes and don't carry further symbolism (even though 0010 happens to have the meaning of "red" in Al-Raml and Geomancy as well). Other than that, I have abstained from any modern technological connections.

But why?

Our subjective worlds are full of symbolism that brings various mental categories together. We associate numbers, letters, colors and even playing cards with various real-world things. We may have superstitions about them or give them unique personalities. Synesthetes even do this involuntarily, so I guess it is quite a basic trait of the human mind.

Binary numbers, however, have remained quite dry in this area. We don't really associate them with anything else, so they remain alien to us. Even experts who constantly deal with binary technology prefer to hide them or abstract them away. This alienation, combined with the increasing role of digitality in our lives, is the reason why I think there should be more exposure for the various branches of binary symbolism.

In many cultures, binary symbolism has attained a role so central that people base their conceptions of the world on it. A lot of traditional Chinese cosmology is basically commentary on the I Ching. The Yoruba of West Africa use the eight-bit arrays of the Ifa system as "hash codes" to index their whole oral tradition. Some other West African peoples -- the Fon and the Ewe -- extend this principle far enough to give every person an eight-bit "kpoli" or "life sign" at birth.

I guess the best way to bring some binary symbolism into our modern technological culture might be to use it in art -- especially the kinds of art, such as pixel art, chip music and demoscene productions, that embrace the bits, bringing them forward instead of hiding them. This is still just a meta-level idea, however, and I can't yet tell how to implement it in practice. But once I've progressed with it, I'll let you know for sure!

Tuesday, 1 February 2011

On electronic wastefulness

Many things are horribly wrong in this world.

People are becoming more and more aware of this. Environmental and economic problems have strengthened the criticism of consumer culture, monetary power and political systems, and all kinds of countercultural movements are thriving. At the same time, however, ever more people are increasingly dependent on digital technology, which gets produced, bought, used and abandoned in greater masses than ever, causing an ever bigger impact on the world in the form of waste and pollution.

Because of this, I have decided to finally summarize my thoughts on how digital technology reflects the malfunctions of our civilization. I became a hobbyist programmer as a schoolkid in the mid-eighties, and fifteen years later I became a professional software developer. Despite all this baggage, I'm going to attempt to keep my words simple enough for common people to understand. Those who want to be convinced by citations and technical argumentation will get those at some later time.

Counter-explosion

For over fifty years, the progress of digital technology has been following the so-called Moore's law, which predicts that the number of transistors that fit on a microchip doubles roughly every two years. This means that it is possible to produce digital devices that are of the same physical size but have ever more memory, ever more processing speed and ever greater overall capabilities.
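
To put numbers on the doubling (my own back-of-the-envelope arithmetic, assuming a two-year doubling period):

    # Doubling every two years, over twenty years:
    print(2 ** (20 // 2))   # 1024, i.e. roughly a thousandfold

This is where the thousandfold figure in the thought experiment below comes from.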

Moore's law itself is not evil, as it also means that it is possible to perform the same functions with ever less energy and raw material. However, people are people and behave like people: whenever it becomes possible to do something more easily and less consumingly, they start doing more of it. This phenomenon is called the "rebound effect", after a medical term of the same name. It can be seen in many kinds of things: less fuel-consuming cars make people drive more, and foods with fewer calories make dieters eat more. The worst case is when the actual savings become negative: a thing that is supposed to reduce consumption actually increases it instead.

In information technology, the most prominent form of rebound effect is the bloating of software, which takes place at the same explosive rate as the improvement of hardware. This phenomenon is called Wirth's law. If we took a time machine ride back to 1990 and told our contemporaries that desktop computers would become a thousand times faster in twenty years, they would surely assume that almost anything would happen instantaneously with them. If we then corrected them by saying that software programs still take time to start up in the 2010s and that it is sometimes painful to tolerate their slowness and unresponsiveness, they wouldn't believe it. How is it even possible to write programs so poorly that they don't run smoothly on a futuristic computer a thousand times more powerful? This fact would become even harder to believe if we told them that it also applies to things like word processors, which are used for more or less exactly the same things as before.

One reason for the unnecessary largeness, slowness and complexity of software is the dominant economic ideal of indefinite growth, which makes us believe that bigger things are always better and that it is better to sell customers more than they need. Another reason is that rapid cycles of hardware upgrade make software developers indifferent: even if an application program is mindlessly slow and resource-consuming on the latest hardware, no one will notice a couple of years later, when the hardware is a couple of times faster. Nearly any excuse is valid for bloat. If it is possible to shorten software development cycles even slightly by stacking all kinds of abstraction frameworks and poorly implemented scripting languages on top of one another, it will be done.

The bloat phenomenon annoys people more and more in their daily lives, as all kinds of electric appliances, starting from the simplest flashlight, contain increasingly complex digital technology, which drowns the user in uncontrollable masses of functionality and strange software bugs. The digitalization of television, for example, brought a whole bunch of computer-style immaturity to the TV-watching experience. I've even seen an electric kitchen stove that wouldn't heat up until the user had first set the integrated digital clock. Diverse functionality itself is not evil, but if the mere existence of extra features disrupts the use of the basic ones, something is totally wrong.

Even though many things in our world tend to swell and complexify, it is difficult to find a physical-world counterpart to software bloat, as the amount of matter and living space on our planet does not increase exponentially. It is not possible to double the size of one's apartment every two years in order to fit in more useless stuff. It is not possible to increase the complexity of official paperwork indefinitely, as it would require more and more food and accommodation for an expanding army of bureaucrats. In the physical world, it is sometimes necessary to evaluate what is essential and how to compress the whole in order to fit more in. No such necessity exists in the digital world, however; there, it is possible to constantly inhale and never exhale.

Disposability

The prevailing belief system of today's world equates well-being with material abundance. The more production and consumption there is, the more well-being there is, and that's it. Even though politicians in rich countries no longer want to confess this belief so openly, they still use concepts such as "gross national product", "economic growth" and "standard of living", which are based on the idealization of boundless abundance.

As it is the holy responsibility of all areas of production to grow indefinitely, it is important to increase consumption regardless of whether it is sensible or not. If it is not possible to increase consumption in natural ways, planned obsolescence comes to the rescue. Some decades ago, people bought washing machines and television sets for the twenty years to follow, but today's consumers have the "privilege" of buying at least four of each during the same timespan, as the lifespans of these products have been deliberately shortened.

The scheduled breaking of electric appliances is now easier than ever, as most of them contain an integrated microprocessor running a program of some kind. It is technically possible, for example, to hide a timer in this program, causing the device to either "break" or start misbehaving shortly after the warranty is over. This kind of sabotage may be beneficial for the sales of smaller and cheaper devices, but it is not necessary for the more complex ones; in their case, bloated poor-quality software serves the same purpose.

Computers get upgraded especially when the software somehow becomes intolerably slow or even impossible to run. This change can take place even if the computer is used for exactly the same things as before. Bloat makes new versions of familiar software more resource-consuming, and when familiar websites are redesigned, they tend to bloat up as well. In addition, some operating systems tend to slow down "automatically", but this, fortunately, is something the user can fix.

The experience of slowness, in its most annoying form, is caused by overly long response times. The response time is the time between the user's action and the indication that the action has been registered. Whenever the user moves the mouse, the cursor on the screen must immediately match the movement. Whenever the user presses a letter key on the keyboard, the same letter must appear on the screen immediately. Whenever the user clicks a button on the screen, the graphic of the button must change immediately. According to usability research, the response time must be less than 1/10 of a second or the system feels laggy. Once it exceeds a second, the user's blood pressure is already rising. After ten seconds, the user is convinced that "the whole piece of junk has locked up".

Slow response times are usually regarded as an indicator that the device is slow and that it is necessary to buy a new one. This is a misconception, however. Slow response times indicate nothing other than an indifferent attitude to software design. Every computing device that has become available during the last thirty years is completely capable of delivering the response within 1/10 of a second in every possible situation. Despite this fact, the software of the 2010s is still usually designed in such a way that the response is provided only once the program has finished all the "more urgent" tasks first. What is supposed to be more important than serving the user? In the mainframe era, there were quite a few such things, but in today's personal computing, this should never be the case. Fixing the response time problems would permanently make technology more comfortable to use, as well as help users tolerate the actual slowness. The industry, however, is strangely indifferent to these problems. Response times are, from its point of view, something that "gets fixed" automatically, at least for a short while and in some areas, with hardware upgrades.
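
The cure is not exotic: acknowledge the user's action first and do the heavy lifting afterwards. A minimal sketch in Python (the function names are hypothetical):

    import threading
    import time

    def on_button_click():
        show_pressed_state()     # immediate feedback, well within 1/10 s
        threading.Thread(target=heavy_task).start()

    def show_pressed_state():
        print("button pressed")  # stands in for redrawing the button

    def heavy_task():
        time.sleep(5)            # stands in for the slow processing that
        print("task finished")   # would otherwise block the feedback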

Response time problems are just one example of how the industry considers it more important to invent new features than to fix problems that irritate the basic user. A product that has too few problems may make consumers too satisfied -- so satisfied that they don't feel like buying the next, slightly "better" model, which replaces old problems with new ones. Companies that want to ensure their growth prefer to do everything multiple times in slightly substandard ways instead of seeking any kind of perfection. Satisfaction is the worst enemy of unnecessary growth.

Is new hardware any better?

I'm sure most readers have at least heard about the problems caused by the rat race of upgrading and overproduction. The landfills of rich countries are full of perfectly functioning items that interest no one. Having anything repaired is considered stupid, as it is nearly always easier and cheaper to just buy new stuff. Selling used items is difficult, as most people won't accept them even for free. Production eats up more and more natural resources despite all the efforts to "green up" production lines and recycle ever more raw material.

The role of software in the overproduction cycle of digital technology, however, is not so widely understood. Software is the soul of every microprocessor-based device, and it defines most of what it is like to use the device and how much of its potential can be used. Bad software can make even good hardware useless, whereas ingenious software can make even a humble device do things the original designer could never have imagined. It is possible to both lengthen and shorten product lifetimes via software.

New hardware is often advertised with new features that are not actually features of the hardware but of the software it runs. Most of the features of so-called "smartphones", for example, are completely software-based. It would be perfectly possible to rewrite the software of an old and humble cellphone to give it a bunch of features that would effectively turn it into a "smartphone". Of course, software cannot do the completely impossible; there is no software trick that makes a camera-less phone take photos. Nevertheless, the general rule is that hardware is much more capable than its default software. The more the hardware advances, the greater the contrast between the capabilities of the software and the potential of the hardware.

If we consider the various tasks for which personal computers are used nowadays, we notice that only a small minority of them actually demands a lot from the hardware. Of course, bad software may make some tasks feel more demanding than they actually are, but that's another issue. For instance, most of the new online services, from Facebook to Youtube and Spotify, could very well be implemented so that they run on the PCs of the late 1990s. Actually, it would be possible to make them run more smoothly than the existing versions run on today's PCs. Likewise, with better operating systems and other software, we could make the same old hardware feel faster and more comfortable to use than today's hardware. From this we can conclude that the computing power of the 2000s is neither useful, necessary nor pleasing for most users. Unless we count the pseudo-benefit that it makes bad and slow software easier to tolerate, of course.

Let us now imagine that the last ten years in personal computing had gone a little differently -- that most of the computers sold to the great masses had been "People's Computers" with a fixed hardware setup. This would have meant that hardware performance remained constant for the last ten years. The 2011 of this alternate universe would probably be somewhat similar to our 2011, and some things could even be better. All the familiar software programs and on-line services would be there; they would just have been implemented more wisely. The use of computers would have become faster and more comfortable over the years, but this would have been due to the improvement of software, not hardware. Ordinary people would never need to think about "hardware requirements", as the fixedness of the hardware would ensure that all software, services and peripherals work. New computers would probably be lighter and more energy-efficient, as the lack of competition in performance would have moved the competition to these areas. These are not just fringe utopian ideas; anyone can reach similar conclusions by studying the history of home computing, where several computer and console models have remained constant for ten years or more.

Of course, it is easy to come up with tasks that demand more processing power than what was available to common people ten years ago or even today. A typical late-1990s desktop PC, for example, plays ordinary DVD-quality movies perfectly but may have major problems with the HD resolutions that are fashionable in the early 2010s. Similarly, by increasing the numbers, it is possible to come up with imaginary resolutions that are out of reach of even the most expensive special-purpose equipment available today. For many people, this is exactly what technological progress means -- an increase in numerical measures, the possibility to do the same old things on ever greater scales. When a consumer replaces an old TV with a new one, he or she gets a period of novelty vibes from the more magnificent picture quality. After a couple of years, the consumer can buy another TV and get the novelty vibes once again. If we had access to unlimited natural resources, it would be possible to go on with this vanity cycle indefinitely, yet still without improving anyone's quality of life to any considerable extent.

Most of the technological progress facilitated by the personal computing resources of the 2000s has been quantitative -- doing the same old stuff that became possible in the 1990s, but with bigger numbers: editing movies and pictures that have ever more pixels, running around in 3D video game worlds that have ever more triangles. It is difficult to even imagine a computational task relevant to an ordinary person that would require the number-crunching power of a 2000s home computer by its nature alone, without any quantitative exaggeration. This could very well be regarded as an indicator that we already have enough processing power for a while. The software and user culture are lagging so far behind the hardware improvements that it would be better to concentrate on them instead and leave the hardware in the background.

Helplessness

In addition to the senseless abundance of material items, today's people are also disturbed by a senseless abundance of information. Information includes not only the ever-expanding flood of video, audio and text coming from the various media, but also the structural information incorporated in material and immaterial things. The expansion of this structural information manifests as the increasing complexity of everything: consumer items, societal systems, cultural phenomena. Those who want to understand the tools they use and the things that affect their lives must absorb ever greater amounts of structural information about them. Many people have already given up on understanding and just try to get along.

Many frown upon people who can't boil an egg or drive a nail into a wall without a special-purpose egg-boiler or nailgun, or who are not even interested in how the groceries come to the store or the electricity to the wall socket. However, the expanding flood of information and the complexification of everything may eventually result in a world where neo-helplessness and poor common knowledge are the normal condition. In computing, complexification has already gone so far that even many experts no longer dare to understand how the technology works but prefer to guess and randomize.

Someone who wants to master a tool must build a mental model of its operation. If the tool is a very simple one, such as a hammer, the mental model builds up nearly automatically after a very short study. If someone using a hammer accidentally hits their finger with it, they will probably blame themselves instead of the hammer, as the functionality of a hammer can be understood perfectly even by someone who is not so capable of using it. However, when a computer program behaves against the user's will, the user will probably blame the technology instead of themselves. In situations like this, the user's mental model of how the program works does not match its actual functionality.

The more bloated a software program is, the more effort the user needs to invest in order to build an adequate mental model of it. Some programs are even marketing-minded enough to impose their new and glorious features on the user, which doesn't help at all in forming the mental model. Besides, most users don't have the slightest interest in extensive exploration; they rather use a simple map and learn to tolerate the uncertainty caused by its rudimentariness. When we also consider that programs may change their functionality quite a lot between versions, even enthusiasts turn cynical and frustrated as their precious mental maps become obsolete.

Many software programs try to fix the complexity problem by increasing the complexity instead of decreasing it. This mostly manifests as "intelligence". An "intelligent" program monitors the user, guesses their intent and possibly suggests various courses of action based on it. For example, a word processor may offer help in writing a letter, or a file manager may suggest things to do with a newly inserted memory stick. Users are offered all kinds of controlled, ready-made functionality and "wizards" even for tasks they would surely prefer to do by themselves, at least if they had a chance to learn the normal basic functionality. If the user is forced to use specialized features before learning the basic ones, he or she will be totally helpless in situations where no special-purpose feature for the particular task exists. Just like someone who can use egg-boilers and nailguns but not kettles or hammers.

Technology exists to make things easier to do and to facilitate otherwise impossible tasks. However, if a technological appliance becomes so complex that its use is more like random guessing than goal-oriented controlling, we can say that the appliance no longer serves its purpose and that the user has been taken over by the technology. For this reason, it is increasingly important to keep things simple and controllable. Simplicity, of course, does not mean mere superficial pseudo-simplicity that hides the internal complexity, but the avoidance of complexity on all levels. The user cannot be in full control without having some kind of idea of what the tool is doing at any given time.

In software, it may be useful to reorder the complexity so that there is a simple core program from which any additional complexity is functionally separated until the user deliberately activates it. This would make programs feel reliable and controllable even with simple mental maps. An image processing program, for example, could resemble a simple paint program at its core, and this core functionality could be learned perfectly after a very short testing period. All kinds of auxiliary functions, automations and other specialties could easily be found when needed, and the user could extend the core with them depending on their particular needs. Still, their existence would never disturb those users who don't need them. Regardless of the level of the user, the mental map would always match how the program actually works, and the program would therefore never surprise the user by acting against his or her expectations.
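
A hypothetical sketch of such a structure in Python (all names invented for illustration):

    class PaintCore:
        """A small core that can be learned completely in minutes."""
        def __init__(self, width, height):
            self.pixels = [[0] * width for _ in range(height)]
            self.extensions = {}

        def set_pixel(self, x, y, color):
            self.pixels[y][x] = color

        def activate(self, name, extension_class):
            # An extension affects nothing until the user
            # deliberately activates it here.
            self.extensions[name] = extension_class(self)

    class FloodFill:
        """An optional extra; the core never depends on it."""
        def __init__(self, core):
            self.core = core
        # ...the fill algorithm itself would go here...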

Software is rarely built like this, however. There is not much interest in the market for movements that make technology genuinely more approachable and comprehensible. Consumer masses who feel helpless in regard to technology are, after all, easier to control than masses of people who know what they are doing (or at least think so). It is much more beneficial for the industry to feed the helplessness by drowning people in trivialities, distancing them from the basics and perhaps even submitting them to the power of an all-guessing, artificially intelligent assistant algorithm.

Changing the world

I have now discussed all kinds of issues, most of which I have blamed on bad software, and whose badness I have in turn mostly blamed on the economic system that idealizes growth and material abundance. But is it possible to do something about these issues? If most of the problems are indeed software-related, then couldn't they be resolved by producing better software, perhaps even outside the commercial framework if necessary?

When calling for a counter-force to commercial software development, the free and open-source software (FOSS) movement is most commonly mentioned. FOSS has mostly been produced as volunteer work without monetary income, but as the results of the work can be freely duplicated and used as the basis of new work, it has managed to make a much greater impact than volunteer work usually does. The greatest impact has been among technology professionals and hobbyists, but even laypeople may recognize names such as Linux, Firefox and OpenOffice (the latter two of which, however, were originally proprietary software).

FOSS is not bound to the requirements of the market. Even in cases where it is developed by corporations, people operating outside the commercial framework can contribute to it and base new projects on it. FOSS therefore has, in theory, the full potential to be independent of all the misanthropic design choices caused by the market. However, FOSS suffers from most of these problems just as much as proprietary software does, and it even has a whole bunch of its own extra problems. Reasons for this can be found in the history of the movement. Since the beginning, the FOSS movement has mostly concentrated on cloning existing software without spending too much energy on questioning the dominant design principles. The philosophers of the movement tend to be more concerned with legal and political issues than technical ones: "How can we maximize our legal rights?" instead of "How should we design our software so that it benefits the whole of humanity instead of just the expert class?"

I am convinced that FOSS would be able to give the world much more than it already has if it could form a stronger contrast between itself and the growth-centric industry. In order to strengthen the contrast, we need a powerful manifesto. This manifesto would need to profoundly denounce all the disturbances to technological progress caused by the growth ideology, and it would need to state the principles on which software design should be based in order to benefit human beings and nature in the best possible way. Of course, this manifesto wouldn't exist exclusively for reinventing the wheel, but also for re-evaluating existing technology and redirecting its progress towards the better.

But what can ordinary people do? Even a superficial awareness of the causes of the problems is better than nothing. One can easily learn to recognize many types of problems, such as those related to response times. One can also learn to blame the right thing instead of superficially crying that "the computer is slow" or "the computer is misbehaving". Changes in language are also a nice way of spreading awareness. If people in general learned to blame software instead of hardware, they would probably also learn to demand software-based solutions to their problems instead of needlessly purchasing new hardware.

When hardware purchases are justifiable, those concerned about the environment will prefer second-hand hardware over new, as long as it is powerful enough for the given purposes. It is a common misconception to assume that new hardware always consumes less power than old -- actually, the trend has more often been exactly the opposite. During the ten years from the mid-1990s to the mid-2000s, for example, the power consumption of a typical desktop PC (excluding the monitor) increased tenfold, as the industry was more zealous about increasing processing power than improving energy efficiency. The power consumption curves of video game consoles have been even steeper. Of course, there are many examples of positive development as well. For example, CRT screens are worth replacing with similarly sized LCD screens, and laptops typically consume less than comparable desktop PCs.

There is a strong market push towards discontinuing all kinds of service and repair activity. Especially in the case of cellphones and other small gadgets, "service" more and more often means that the gadget is sent to the manufacturer, which dismantles it for raw material and sends a new gadget to the customer. For this reason, it may be reasonable to consider how amenable a piece of hardware is to do-it-yourself work when choosing it. As all forms of DIY culture seem to be waning due to lack of interest, it is worthwhile to support them in every possible way, to ensure that there will still be someone in the future who can repair something.

Of course, we all hope that the world will change so that the human- and nature-friendly ways of doing things become the most beneficial ones even in "the reality of numbers and charts". Such a change will probably take longer than a few decades, however, regardless of the volume of the political quarrel. It may therefore not be wise to wait indefinitely for the system to change, as it is already possible to participate in practical countercultural activity today. Even in things related to digital technology.