Sunday, 14 July 2013

A slower Moore's law wouldn't be that bad.

Many aspects of the world of computing are dominated by Moore's law -- the phenomenon that the density of integrated circuits tends to double every two years. In mainstream thought, this is often equated with progress -- a deterministic forward-march towards the universal better along a metaphorical one-dimensional path. In this essay, I'm creating a fictional alternative timeline to bring up some more dimensions. A more moderate pace of Moore's law wouldn't necessarily be that bad after all.

Question: What if Moore's law had been progressing at half speed since 1980?

I won't try to explain the point of divergence. I just accept that, since 1980, certain technological milestones would have been fewer and farther between. As a result, certain quantities would have doubled only once every four years instead of every two years. RAM capacities, transistor counts, hard disk sizes and clock frequencies would have reached the 1990 level only in the year 2000, and in the year 2013 we would be at the 1996 level with regard to these variables.

I'm excluding some hardware-related variables from my speculation. Growth in telecommunications bandwidth, including the spread of broadband, is more related to infrastructural development than to Moore's law. I also consider the technological development of things like batteries, radio transceivers and LCD screens to be unrelated to Moore's law, so their progress would have been more or less unaffected, apart from things like framebuffers and DSP logic.

1. Most milestones of computing culture would not have been postponed.

When I mentioned "the 1996 level", many readers probably envisioned a world where we would be "stuck in the year 1996" in all computing-related aspects. Noisy desktop Pentiums running Windows 95s and Netscape Navigators, with users staring in awe at rainbow-colored, static, GIF-animation-plagued websites over landline dialup connections. This tells a lot about mainstream views of computer culture: everything is so one-dimensionally techno-determinist that even progress in purely software- and culture-related aspects is difficult to envision without their supposed hardware prerequisites.

My view is that progress in computing and some other high technology has always been primarily cultural. Things don't become market hits straight after they're invented, and they don't get invented straight after they're technologically possible. For example, there were touchscreen-based mobile computers as early as 1993 (Apple Newton), but it took until 2010 before the cultural aspects were right for their widespread adoption (iPad). In the Slow-Moore world, therefore, a lot of people would have tablets just like in our world, even though these probably wouldn't have very many colors.

The mainstream adoption of the Internet would have taken place in the mid-1990s just like in the real world. 1987-equivalent hardware would have been completely sufficient for the boom to take place. Public online services such as Videotex and BBSes had been available since the late 1970s, and Minitel had already gathered millions of users in France in the 1980s, so even a dumb text terminal would have sufficed on the client side. The power of the Internet compared to its competitors was its global, free and decentralized nature, so it would have taken off among common people even without graphical web browsers.

Assuming that the Internet had become popular with character-based interfaces rather than multimedia-enhanced hypertext documents, its technical timeline would have become somewhat different. Terminal emulators would have eventually accumulated features in the same way as Netscape-like browsers did in the real world. RIPscrip is a real-world example of what could have become dominant: graphics images, GUI components and even sound and video on top of a dumb terminal connection. "Dynamic content" wouldn't require horrible kludges such as "AJAX" or "dynamic HTML", as the dumb terminal approach would have been interactive and dynamic enough to begin with. The gap between graphical and text-based applications would be narrower, as well as the gap between "pre-web" and "modern" online culture.
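
To make this a bit more concrete, here is a minimal C sketch (my own illustration, not anything historical) of "dynamic content" over a plain terminal connection, using nothing but standard VT100/ANSI escape sequences; the displayed value is an invented placeholder for whatever a real service would push down the same character stream.

    #include <stdio.h>
    #include <unistd.h>

    /* "Dynamic content" over a dumb terminal: repaint a status line in
       place using standard VT100/ANSI escape sequences. The value shown
       is a made-up placeholder. */
    int main(void)
    {
        printf("\033[2J");                     /* clear the screen          */
        for (int tick = 0; tick < 10; tick++) {
            printf("\033[H");                  /* move cursor to top left   */
            printf("server load: %2d %%\n", (tick * 7) % 100);
            fflush(stdout);                    /* push the bytes out now    */
            sleep(1);                          /* wait for the next update  */
        }
        return 0;
    }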

The development of social media was purely culture-driven: Facebook would have been technically possible already in the 1980s -- feeds based on friend lists don't require more per-user computation than, say, IRC channels. What was needed was cultural development: several "generations" of online services were required before all the relevant ideas came up. In general, most online services I can think of could have taken place in some form or another, about the same time as they appeared in the real world.

The obvious exceptions would be those services that require a prohibitive amount of server-side storage. An equivalent of Google Street View would perhaps just show rough shapes of the buildings instead of actual photographs. YouTube would focus on low-bitrate animations (something like Flash) rather than on full videos, as the default storage space available per user would be quite limited. Client-side video/audio playback wouldn't necessarily be an issue, since MPEG decompression hardware was already available in some consumer devices in the early 1990s (Amiga CD32) and would have therefore been feasible in the Slow-Moore year 2004. Users would just be more sensitive about disk space and would therefore avoid video formats for content that doesn't require actual video.

All the familiar video games would be there, as the resource-hogging aspects of games can generally be scaled down without losing the game itself. It could even be argued that there would be far more "AAA" titles available, assuming that the average budget per game would be lower due to lower fidelity requirements.

Domestic broadband connections would be there, but they would be more often implemented via per-apartment ethernet sockets than via per-apartment broadband modems. The amount of DSP logic required by some protocols (*DSL) would make per-apartment boxes rather expensive compared to the installation of some additional physical wires. In rural areas, traditional telephone modems would still be rather common.

Mobile phones would be very popular. Their computational specs would be rather low, but most of them would still be able to access Internet services and run downloadable third-party applications. Neither of these requires a lot of power -- in fact, every microprocessor is designed to run custom code to begin with. Very few phones would have built-in cameras, however -- the development of cheap and tiny digital camera cells has a lot to do with Moore's law. Also, the global digital divide would be greater -- there wouldn't be extremely cheap handsets available in poor countries.

It must be emphasized here that even though IC feature sizes would be at the "1996 level", we wouldn't be building devices from the familiar 1996 components. The designs would be far more advanced and logic-efficient. Hardware milestones would have been more about "reinventing the wheel" than about accumulating as much intellectual property as possible on a single chip. RISC and Transputer architectures would have displaced X86-like CISCs a long time ago and perhaps even given way to ingenious inventions we can't even imagine.

Affordable 3D printers would be just around the corner, just like in the real world. Their developmental bottlenecks have more to do with the material printing process itself than anything Moorean. Similarly, the setbacks in the progress of virtual reality helmets have more to do with optics and head-tracking sensors than semiconductors.

2. People would be more conscious about the use of computing resources.

As mentioned before, digital storage would be far less abundant than in the real world. Online services would still have tight per-user disk quotas, and many users would be willing to actually pay for more space. Even laypeople would have a rather good grasp of kilobytes and megabytes and would often put effort into choosing efficient storage formats. All computer users would need to regularly choose what is worth keeping and what isn't. Online privacy would generally be better, as it would be prohibitively expensive for service providers to neurotically keep a complete track record of every user.

As global Internet backbones would have considerably lower capacities than local and mid-range networks, users would actually care about where each server is geographically located. Decentralized systems such as IRC and Usenet would therefore never have given way to centralized services. Search engines would be technically more similar to YaCy than Google, social media more similar to Diaspora than Facebook. Even the equivalent of Wikipedia would be a network of thousands of servers -- a centralized site would have ended up being killed by deletionists. Big businesses would be embracing this "peer-to-peer" world instead of expanding their own server farms.

In general, Internet culture would be more decentralized, ephemeral and realtime than in the real world. Live broadcasts would be more common than vlogs or podcasts. Much less data would be permanently stored, so people would have relatively small digital footprints. Big companies would have far less power over users.

Attitudes towards software development would be quite different, especially in regards to efficiency and optimization. In the real world, wasteful use of computational resources is systematically overlooked because "no one will notice the problem in the future anyway". As a result, we have incredibly powerful computers whose software still suffers from mainframe-era problems such as ridiculously high UI latencies. In a Slow-Moore world, such problems would have been solved a long time ago: after all, all you need is good user-level control over how the operating system prioritizes different pieces of code and data, and some will to use it.
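
To sketch what such user-level control might look like with today's real-world interfaces (standard POSIX calls, so this is an analogy rather than a proposal), a latency-critical program can already raise its own scheduling priority, hint at the data it will need, and keep its pages resident; the file name below is purely hypothetical.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/resource.h>
    #include <unistd.h>

    /* Sketch of user-level prioritization hints. "ui_font.dat" is a
       hypothetical file standing in for any latency-critical data. */
    int main(void)
    {
        /* Ask for a higher scheduling priority for this process
           (negative values normally require extra privileges). */
        if (setpriority(PRIO_PROCESS, 0, -5) != 0)
            perror("setpriority");

        /* Hint that a latency-critical file will be needed soon,
           so the kernel can prefetch and cache it. */
        int fd = open("ui_font.dat", O_RDONLY);
        if (fd >= 0) {
            posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
            close(fd);
        }

        /* Keep this process's pages resident in RAM instead of
           letting them be swapped out behind the user's back. */
        if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
            perror("mlockall");

        return 0;
    }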

Another problem in real-world software development is the accumulation of abstraction layers. Abstraction is often useful during development, as it speeds up the process and simplifies maintenance, but most of the resulting dependencies are a complete waste of resources in the final product. A lot of this waste could be eliminated automatically by the use of advanced static analysis and other methods. From the vast contrast between carefully size-optimized hobbyist hacks and bloated mainstream software, we might guess that some mind-boggling optimization ratios could be reached. However, the use and development of such tools has been seriously lagging behind because of the attitude problems caused by Moore's law.
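
Today's toolchains already give a small taste of this. In the deliberately over-abstracted C sketch below (the names are invented for illustration), an optimizing compiler such as gcc with -O2, or -flto across source files, inlines and constant-folds the whole getter chain down to a single constant return.

    #include <stdio.h>

    /* A deliberately over-abstracted way to obtain a constant. */
    struct config { int bits_per_pixel; };

    static int get_bits(const struct config *c) { return c->bits_per_pixel; }

    static int bytes_per_pixel(void)
    {
        struct config c = { 8 };
        /* Three "layers": a struct, a getter and some arithmetic. With
           optimizations on (e.g. gcc -O2, or -flto across files), all of
           this is inlined and constant-folded down to "return 1;". */
        return get_bits(&c) / 8;
    }

    int main(void)
    {
        printf("%d\n", bytes_per_pixel());   /* prints 1 */
        return 0;
    }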

In a Slow-Moore world, the use of computing resources would be extremely efficient compared to current standards. This wouldn't mean that hand-coded assembly would be particularly common, however. Instead, we would have something like "hack libraries": huge collections of efficient solutions for various problems, from low-level to high-level, from specific to generic. All tamed, tested and proven in their respective parameter ranges. Software development tools would have intelligent pattern-matchers that would find efficient hacks from these libraries, bolt them together in optimal arrangements and even optimize the bolts away. Hobbyists and professionals alike would be competing in finding ever smarter hacks and algorithms to include in the "wisdombase", thus making all software incrementally more resource-efficient.
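
To make the idea a bit less abstract, a hypothetical entry in such a "hack library" could look like the sketch below: a well-known branchless trick documented with the exact conditions under which it is valid, so that a tool could substitute it for the naive version automatically.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical "hack library" entry.
     * Problem:   absolute value of a 32-bit signed integer.
     * Hack:      branchless version based on an arithmetic right shift.
     * Valid for: targets where >> on signed integers is arithmetic
     *            (practically all current ones), and x != INT32_MIN. */
    static int32_t abs32_naive(int32_t x)
    {
        return x < 0 ? -x : x;            /* readable reference version  */
    }

    static int32_t abs32_hack(int32_t x)
    {
        int32_t mask = x >> 31;           /* 0 if x >= 0, -1 if x < 0    */
        return (x ^ mask) - mask;         /* flips the sign, no branches */
    }

    int main(void)
    {
        printf("%d %d\n", abs32_naive(-123), abs32_hack(-123));  /* 123 123 */
        return 0;
    }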

3. There would still be a gap between digital and "real" content.

Regardless of how efficiently hardware resources are used, unbreakable limits always exist. In a Slow-Moore world, for instance, film photography would still be superior in quality to digital photography. Also, since the digital culture would be far more resource-conscious, large resolutions wouldn't even be desirable in purely digital contexts.

Spreading "memes" as bitmap images is a central piece of today's Internet culture. Even snippets of on-line discussions get spread as bitmapped screenshots. Wasteful, yes, but compatible and therefore tolerable. The Slow-Moore Internet would probably be much more compatible with low-bit formats such as plaintext or vector and character graphics.

Since the beginning of digital culture, there has been a desire to import content from "meatspace" into the digital world. At first, people did it in laborious ways: books were typed into text files, paintings and photographs were repainted with graphics editors, songs were covered with tracker programs. Later, automatic methods appeared: pictures could be scanned, songs could be recorded and compressed into MP3-like formats. However, it took some time before straight automatic imports could compete against skillful manual effort. At low resolutions, skillful pixel-pushing still makes a difference. Synthesized songs take a fraction of the space of an equivalent MP3 recording. Eventually, the difference diminished, and no one cared about it any longer.

In a Slow-Moore world, the timeline of digital media would have been vastly different. A-priori-digital content would still have vast advantages over imported media. Artists looking for worldwide appreciation via the Internet would often choose to take the effort to learn born-digital methods instead of just digitizing their analog works. As a result, many traditional disciplines of computer art would have grown enormously. Demoscene and low-bit techniques such as procedural content generation and tracker-like synthesized music would be the mainstream norm in Internet culture instead of anything "underground".

Small steps towards photorealism and higher fidelity would still be able to impress large audiences, as they would still notice the difference. However, in a resource-conscious online culture, there would also probably be a strong countercultural movement against "high-bit" -- a movement seeking to embrace the established "Internet esthetic" instead of letting it be taken over and marginalized by imports.

Record and film companies would definitely be suing people for importing, covering and spreading their copyrighted material. However, they would still be able to sell it in physical formats because of their superior quality. There would also be a class of snobs who hate all "computer art" and all the related esthetics while preferring "real, physical formats".

4. Conclusion

A Slow-Moore world would be somewhat "backwards" in some respects but far more sensible or even more advanced in others. As a demoscener with an ever-growing conflict against today's industry-standard attitudes, I would probably prefer to live with a more moderate level of Moorean inflation. However, a Netflix fan who likes high-quality digital photography and doesn't mind being under surveillance would probably choose otherwise.

The point of my thought experiment was to justify my view that the idea of a linear tech tree strongly tied to Moore's law is a banal oversimplification. There are many other dimensions that need to be taken into account as well.

The alternative timeline may also be used as inspiration for real-world projects. I would definitely like to see whether an aggressively optimizing code generation tool based on "hack libraries" could be feasible. I would also like to see the advent of a mainstream operating system that doesn't suck.

Nevertheless: Down with Moore's law fetishism! It's time for a more mature technological vision!

Saturday, 5 January 2013

I founded a new "oldschool" computer magazine.

Maybe it's a sensible time to tell a bit about what I've been up to for the past few months.

In September 2012, I founded Skrolli, a new Finnish computer magazine. This turn in my life surprised even myself.

It started from an image that went viral. Produced by my friend CCR with a lot of ideas from me, it was a faux magazine cover speculating what the longest-living Finnish home computing magazine, MikroBitti, would be like today if it had never renewed itself after the eighties. The magazine happens to be somewhat iconic to those Finns who got immersed in computing before the turn of the millennium, so it reached a relevant audience quite efficiently.

The faux cover was meant to be a joke, but the abundance of comments like "I would definitely subscribe to this kind of magazine" made me seriously consider the possibility of actually creating something like it. I put up a simple web page stating the idea of a new "countercultural" computer magazine that is somewhat similar to what MikroBitti used to be like. In just a few days, over a hundred people showed up on the dedicated IRC channel, and here we are.

Bringing the concept of an oldschool microcomputer magazine to the present era needs some thoughtful reflection. The world has changed a lot; computer hobbyists no longer exist as a unified group, for example. Everyone uses a computer for leisure, and it is sometimes difficult to draw a line between those who are interested in the applications and those who are genuinely interested in the technology. Different activities also have their own subcultures with their own communication channels, and it is often hard to relate to someone whose subculture has a very different basis.

Skrolli defines computer culture as something where the computational aspects are irreducible. It is possible to create visual art or music completely without digital technology, for example, but once the computer becomes the very material (as in the case of pixel art or chip music), the creative activity becomes relevant to our magazine. Everything where programming or other direct access to the computational mechanisms is involved is also relevant, of course.

I also chose to target the magazine at my own language group. In a nation of five and a half million, the various subcultures are closer to one another, so it is easier to build a common project that spans the whole scale. The continuing existence of large computer hobbyist events in this country might also simplify the task. If the magazine had been started in English or even German, there would have been a much greater risk of appealing only to a few specialized niches.

In order to keep myself motivated, I have been considering the possibility that Skrolli will actually start a new movement. Something that brings the computational aspects of computer enthusiasm back to daylight and helps the younger generation find a true, non-compromising relationship with digital technology. Once the movement starts growing on its own, without being tied to a single project, language barriers will no longer exist for it.

I will be busy with this stuff for at least a couple of months until we get the first few issues printed (yes, it will be primarily a paper magazine as a statement against short-lived journalism). After that, it is somewhat likely that I will finish the projects I temporarily abandoned: there will probably be a JIT-enabled version of IBNIZ, and the IBNIZ democoding contest I promised will be arranged. Stay tuned!

Thursday, 19 April 2012

The relationship between "New Aesthetic" and Computationally Minimal Art

A couple of weeks ago, something called "New Aesthetic" was brought to my attention. It is difficult to find any sort of coherent definition for the idea, but it seems like an umbrella label for a wide variety of visual things that somehow look computational, often in not-so-computational contexts. The main spreader of the meme is apparently a Tumblr blog that collects pictures of things such as pixellated glitches in textiles, real-life voxel sculptures, mugs decorated with website graphics, digitally glitched photographs, satellite images as well as all kinds of other things that evoke suitably futuristic associations.

Despite the profound vagueness of the umbrella term, it is not difficult to notice the general trend it refers to. Just a decade ago, a computationally inspired real-life object would have been a unique novelty item, but nowadays there are such things all around us. I mentioned an aspect of this trend back in 2010 in my article on Computationally Minimal Art, where I noticed that "retrocomputing esthetics" is not just thriving in its respective subcultures (such as demoscene or chip music scene) but popping up every now and then in mainstream contexts as well -- often completely without the historical or nostalgic vibe usually associated with retrocomputing.

As the concept of "New Aesthetic" overlaps a lot of my ponderings, I now feel like building some semantics in order to relate the ideas to one another:

"New Aesthetics", as I see it, is a rather vague umbrella term that contains a wide variety of things but has a major subset that could be called "Computationally Inspired".

"Computationally Inspired" is anything that brings the concepts and building blocks of the "digital world" into non-native contexts. T-shirts, mugs and other real-life objects decorated with big-pixel art or website imagery are obvious examples. In a wide sense, even anything that makes the basic digital building blocks more visible within a digital context might be "Computationally Inspired" as well: big-pixel low-fi computer graphics on a new high-end computer, for example.

"Computationally Minimal" is anything that uses a very low amount of computational resources, often making the digital building blocks such as pixels very discernible. Two years ago, I defined "Computationally Minimal Art" as follows: "[A] form of discrete art governed by a low computational complexity in the domains of time, description length and temporary storage. The most essential features of Computationally Minimal Art are those that persist the longest when the various levels of complexity approach zero."

We can see that Computationally Inspired and Computationally Minimal have a lot of overlap, but neither is a subset of the other. Cross-stitch patterns are CM almost by definition, as they have a limited number of discrete "pixels" with a limited number of different colors, but they are not CI unless they depict something that comes from the "computer world", such as video game characters. On the other hand, a sculpture based on a large amount of digitally corrupted data is definitely CI but falls outside the definition of CM due to the size of the source data.

What CM and CI and especially their intersection have in common, however, is the tendency to show off discrete digital data and/or computational processes, which gives them a lot of esthetic similarity. In CI, this is usually a goal in itself, while in CM, it is most often a side-effect of the related goal of low computational complexity. In either case, however, the visual result often looks like big-pixel graphics. This has caused confusion among many New Aesthetic bloggers who use adjectives such as "retro", "8-bit" or "nostalgic" when referring to this phenomenon, when what they are witnessing is simply the way the essence of digital technology tends to manifest itself visually.
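
For a concrete taste of how low complexity in time, description length and temporary storage manifests itself, consider a bytebeat-style one-liner in C: a few dozen characters of source, no stored data and a constant amount of work per output byte, yet the result has a recognizable character of its own. (The particular formula here is just one simple example of the genre.)

    #include <stdio.h>

    /* A minimal bytebeat-style generator: every output byte is computed
       directly from the sample counter t. Listen by piping the output
       into a raw 8-bit, 8 kHz audio sink, for example:
           ./a.out | aplay -r 8000 -f U8                                */
    int main(void)
    {
        for (unsigned t = 0; ; t++)
            putchar(t * (42 & t >> 10));   /* tiny formula, audible structure */
    }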

There has been a lot of on-line discussion revolving around the New Aesthetic during the past month, and a lot of it seems like pseudo-intellectual, reality-detached mumbo-jumbo to me. In order to gain some insight and substance, I would like to recommend that all the bloggers take a serious look at the demoscene and other established forms of computer-centric expression. You may also find out that a lot of this stuff is actually not that new to begin with; it has just been gaining a lot of new momentum recently.

Saturday, 17 March 2012

"Fabric theory": talking about cultural and computational diversity with the same words

In recent months, I have been pondering a lot about certain similarities between human languages, cultures, programming languages and computing platforms: they are all abstract constructs capable of giving a unique form or flavor to anything that is made with them or stems from them. Different human languages encourage different types of ideas, ways of expression, metaphors and poetry while discouraging others. Different programming languages encourage different programming paradigms, design philosophies and algorithms while discouraging others. The different characteristics of different computing platforms, musical instruments, human cultures, ideologies, religions or subcultural groups all similarly lead to specific "built-in" preferences in expression.

I'm sure this sounds quite meta, vague or superficial when explained this way, but I'm convinced that the similarities are far more profound than most people assume. In order to bring these concepts together, I've chosen to use the English word "fabric" to refer to the set of form-giving characteristics of languages, computers or just about anything. I've picked this word partly because of its dual meaning, i.e. you can consider a fabric a separate, underlying, form-giving framework just as well as an actual material from which the different artifacts are made. You may suggest a better word if you find one.

Fabrics

The fabric of a human language stems (primarily) from its grammar and vocabulary. The principle of linguistic relativity, also known as the Sapir-Whorf hypothesis, suggests that language shapes a lot of what our ways of thinking end up being like, and there is even a bunch of experimental support for this idea. The stronger, classical version of the hypothesis, stating that languages build hard barriers that actually restrict what kind of ideas are possible, is very probably false, however. I believe that all human languages are "human-complete", i.e. they are all able to express the same complete range of human thoughts, although the expression may become very cumbersome in some cases. In most Indo-European languages, for example, it is very difficult to talk about people without mentioning their real or assumed genders all the time, and it may be very challenging to communicate mathematical ideas in an Aboriginal language that has a very rudimentary number system.

Many programmers seem to believe that the Sapir-Whorf hypothesis also works with programming languages. Edsger Dijkstra, for example, was definitely quite Whorfian when stating that teaching BASIC programming to students made them "mentally mutilated beyond hope of regeneration". The fabric of a programming language stems from its abstract structure, not unlike those of natural languages, although a major difference is that the fabrics of programming languages tend to be much "purer" and more clear-cut, as they are typically geared towards specific application areas, computation paradigms and software development philosophies.

Beyond programming languages there are computer platforms. In the context of audiovisual computer art, the fabric of a hardware platform stems both from its "general-purpose" computational capabilities and the characteristics of its special-purpose circuitry, especially the video and sound hardware. The effects of the fabric tend to be the clearest in the most restricted platforms, such as 8-bit home computers and video game consoles. The different fabrics ("limitations") of different platforms are something that demoscene artists have traditionally been concerned about. Nowadays, there is even an academic discipline with an expanding series of books, "Platform Studies", that asks how video games and other forms of computer art have been shaped by the fabrics of the platforms they've been made for.

The fabric of a human culture stems from a wide memetic mess including things like taboos, traditions, codes of conduct, and, of course, language. In modern societies, a lot stems from bureaucratic, economic and regulatory mechanisms. Behavior-shaping mechanisms are also very prominent in things like video games, user interfaces and interactive websites, where they form a major part of the fabric. The fabric of a musical instrument stems partly from its user interface and partly from its different acoustic ranges and other "limitations". It is indeed possible to extend the "fabric theory" to quite a wide variety of concepts, even though it may get a little bit far-fetched at times.

Noticing one's own box

In many cases, a fabric can become transparent or even invisible. Those who only speak one language can find it difficult to think beyond its fabric. Likewise, those who only know about one culture, one worldview, one programming language, one technique for a specific task or one just-about-anything need some considerable effort to even notice the fabric, let alone expand their horizons beyond it. History shows that this kind of mental poverty leads even some very capable minds into quite disastrous thoughts, ranging from general narrow-mindedness and a false sense of objectivity to straightforward religious dogmatism and racism.

In the world of computing, difficult-to-notice fabrics come out as standards, de-facto standards and "best practices". Jaron Lanier warns about "lock-ins", restrictive standards that are difficult to outthink. MIDI, for example, enforces a specific, finite formalization of musical notes, effectively narrowing the expressive range of a lot of music. A major concern raised by "You Are Not a Gadget" is that the technological lock-ins of on-line communication (e.g. those prominent in Facebook) may end up trivializing humanity in a way similar to how MIDI trivializes music.
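
The narrowing MIDI performs is easy to state in code: a continuous pitch gets snapped onto a grid of 128 equal-tempered note numbers (A4 = 440 Hz is note 69), and whatever falls between the grid points is simply discarded unless extra machinery such as pitch bend is bolted on top. A minimal sketch of that quantization:

    #include <math.h>
    #include <stdio.h>

    /* Snap an arbitrary frequency onto the MIDI note grid:
       note = 69 + 12 * log2(f / 440 Hz), rounded to the nearest
       integer and clamped to the 7-bit range 0..127. */
    static int midi_note(double freq_hz)
    {
        int note = (int)lround(69.0 + 12.0 * log2(freq_hz / 440.0));
        if (note < 0)   note = 0;
        if (note > 127) note = 127;
        return note;
    }

    int main(void)
    {
        /* 450 Hz is audibly sharper than A4, but MIDI hears "note 69"
           in both cases. */
        printf("440 Hz -> note %d\n", midi_note(440.0));
        printf("450 Hz -> note %d\n", midi_note(450.0));
        return 0;
    }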

Of course, there's nothing wrong with standards per se. Standards, also including constructs such as lingua francas and social norms, can be very helpful or even vital to humanity. However, when a standard becomes an unquestionable dogma, there's a good chance for something evil to happen. In order to avoid this, we always need individuals who challenge and deconstruct the standards, keeping people aware of the alternatives. Before we can think outside the box, we must first realize that we are in a box in the first place.

Constraints

In order to make a fabric more visible and tangible, it is often useful to introduce artificial constraints to "tighten it up". In a human language, for example, one can adopt a form of constrained writing, such as a type of poetry, to bring up some otherwise-invisible aspects of the linguistic fabric. In normal, everyday prose, words are little more than arbitrary sequences of symbols, but when working under tight constraints, their elementary structures and mutual relationships become important. This is very similar to what happens when programming in a constrained environment: previously irrelevant aspects, such as machine code instruction lengths, suddenly become relevant.

Constrained programming has long traditions in a multitude of hacker subcultures, including the demoscene, where it has obtained a very prominent role. Perhaps the most popular type of constraint in all hacker subcultures in general is the program length constraint, which sets an upper limit to the size of either the source code or the executable. It seems to be a general rule that working with ever smaller program sizes brings the programmer ever closer to the underlying fabric: in larger programs, it is possible to abstract away a lot of it, but under tight constraints, the programmer-artist must learn to avoid abstraction and embrace the fabric the way it is. In the smallest size classes, even such details as the ordering of sound and video registers in the I/O space become form-giving, as seen in the sub-32-byte C-64 demos by 4mat of Ate Bit, for example.

Mind-benders

Sometimes a language or a platform feels tight enough even without any additional constraints. A lot of this feeling is subjective, caused by the inability to express oneself in the previously learned way. When learning a new human language that is completely different from one's mother tongue, one may feel restricted when there's no counterpart for a specific word or grammatical construct. When encountering such a "boundary", the learner needs to rethink the idea in a way that goes around it. This often requires some mind-bending. The same phenomenon can be encountered when learning different programming languages, e.g. learning a declarative language after only knowing imperative ones.

Among both human and programming languages, there are experimental languages that have been deliberately constructed as "mind-benders", having the kind of features and limitations that force the user to rethink a lot of things when trying to express an idea. Among constructed human languages, a good example is Sonja Elen Kisa's minimalistic "Toki Pona" that builds everything from just over 120 basic words. Among programming languages, the mind-bending experiments are called "esoteric programming languages", with the likes of Brainfuck and Befunge often mentioned as examples.

In computer platforms, there's also a lot of variance in "objective tightness". Large amounts of general-purpose computing resources make it possible to accurately emulate smaller computers; that is, a looser fabric may sometimes completely engulf a tighter one. Because of this, the experience of learning a "bigger" platform after a "smaller" one is not usually very mind-bending compared to the opposite direction.

Nothing is neutral

Now, would it be possible to create a language or a computer that would be totally neutral, objective and universal? I don't think so. Trying to create something that lacks fabric is like trying to sculpt thin air, and fabrics are always built from arbitrarities. Whenever something feels neutral, the feeling is usually deceptive.

Popular fabrics are often perceived as neutral, although they are just as arbitrary and biased as the other ones. A tribe that doesn't have very much contact with other tribes typically regards its own language and culture as "the right one" and everyone else as strange and deviant. When several tribes come together, they may choose one language as their supposedly neutral lingua franca, and a sufficiently advanced group of tribes may even construct a simplified, bland mix-up of all of its member languages, an "Esperanto". But even in this case, the language is by no means universal; the fabric that is common between the source languages is still very much present. Even if the language is based on logical principles, i.e. a "Lojban", the chosen set of principles is arbitrary, not to mention all the choices made when implementing those principles.

Powerful computers can usually emulate many less powerful ones, but this does not make them any less arbitrary. On the contrary, modern IBM PC compatibles are full of arbitrary design choices stacked on one another, forming a complex spaghetti of historical trials and errors that would make no sense at all if designed from scratch. The modern IBM PC platform therefore has a very prominent fabric, and the main reason why it feels so neutral is its popularity. Another reason is that the other platforms share a lot of the same design choices, making today's computer platforms much less diverse than what they were a couple of decades ago. For example, how many modern platforms can you name that use something other than RGB as their primary colorspace, or something other than a power of two as their word length?

Diversity is diminishing in many other areas as well. In countries with an astounding diversity, like Papua New Guinea, many groups are abandoning their unique native languages and cultures in favor of bigger and more prestigious ones. I see some of that even in my own country, where many young and intelligent people take pride in "thinking in English", erroneously assuming that second-language English is somehow more expressive for them than their mother tongue. In a dystopian vision, the diversity of millennia-old languages and cultures is getting replaced by a global English-language monoculture where all the diversity is subcultural at best.

Conclusion

It indeed seems to be possible to talk about human languages, cultures, programming languages, computing platforms and many other things with similar concepts. These concepts also seem so useful at times that I'm probably going to use them in subsequent articles as well. I also hope that this article, despite its length, gives some food for thought to someone.

Now, go to the world and embrace the mind-bending diversity!