[Draft 0.79]

Networking in the Mind Age

(Some thoughts on evolution of intelligence and distributed systems)

© 1996 Alexander Chislenko

Abstract

The advent of the Mind Age of intelligent robots envisioned by Hans Moravec will bring profound transformations in global social and technological structures and in the relations of an advanced intelligence to its environment. The ability of future machines to share experiences and knowledge directly with each other will lead to the evolution of intelligence from relatively isolated individual minds to highly interconnected structural entities. The development of a network of communicating mobile and stationary devices may be seen as a natural continuation of biological and technological processes leading to a community of intentionally designed and globally interconnected structures. The growing reliance of system connections on functional, rather than physical, proximity of their elements will dramatically transform the notions of personhood and identity and create a new community of distributed "infomorphs" - advanced informational entities - that will bring the ongoing liberation of functional structures from material dependence to its logical conclusion. The infomorph society will be built on new organizational principles and will represent a blend of a superliquid economy, cyberspace anarchy and advanced consciousness. The new system will incorporate many of today's structures and will develop new traits transcending the limits of human understanding. Its evolution will evade human control, but the relations between the descendants of humans and of today's machines will be largely symbiotic and will lead to the emergence of a new ecology of intelligence.

Moravec's Visions

In his new book, "Mind Age: Transcendence Through Robots", Hans Moravec describes further stages in the evolution of the robotics industry, in which each robot will learn from experience, adapt to changing environments and eventually acquire real intelligence approaching - and then exceeding - that of humans. Intelligent machines are expected to replace humans in most tasks we are capable of. This will raise a plethora of issues, from human unemployment to the ethical treatment of robots and the task of taming their runaway intelligence.

"Mind Age" is a provocative and compelling book that I recommend to anyone interested in the structural evolution of the world. In this essay, I will build from Moravec's conclusions and suggest some complementary ideas, mostly related to the distributed architecture of future intelligence that I consider important for exploring the Mind Age.

Knowledge Sharing

Learning from experience is a very useful skill. If your robot slips on a banana peel a number of times, it will be less likely to do so again in the future. However, processing of limited personal experience by limited intelligence is bound to bring limited results. The derived knowledge may be incomplete, inconsistent, and clumsily formulated, which will lead to false conclusions, arbitrary beliefs and superstitions - the typical content of any primitive mind.

Robots that have already had that educational banana peel experience could share it, together with some conclusions, with your robot. Or, better yet, they could share the information with the nearest knowledge processor, which would combine one robot's experience with that of others, develop efficient general algorithms for identifying similar situations and taking appropriate actions, and then download them to all participating robots.

Humans obtain most of their knowledge by learning from experience and from the conclusions of others, despite their poor memory, low communication speeds and inability to transfer knowledge directly. One may expect information sharing among robots, which are not handicapped by any of these limitations, to be much more efficient. Furthermore, information storage and processing costs in large stationary machines may be much lower than in small, mobile units. Eliminating redundant computations within millions of robots would make a networked system far more efficient than a collection of unconnected machines. Sharing of experience may prove to be a still greater benefit. Thus, cooperative knowledge processing would be several orders of magnitude less expensive and at the same time vastly more productive. Such advantages make the networked design an imperative rather than a matter of taste.
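
To make this concrete, here is a rough present-day sketch of such a shared knowledge processor, written in Python; the names (ExperienceReport, KnowledgeProcessor) and the trivial counting heuristic are my own illustration, not a design taken from Moravec:

    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class ExperienceReport:
        """One robot's account of a situation, its action and the outcome."""
        robot_id: str
        situation: str          # e.g. "banana peel on floor"
        action: str             # e.g. "step over"
        outcome: str            # e.g. "no fall"

    class KnowledgeProcessor:
        """Pools experiences from many robots and distills shared rules."""
        def __init__(self):
            self.reports = []

        def submit(self, report):
            self.reports.append(report)

        def generalize(self):
            # For each situation, pick the action that most often ended well;
            # the result would be downloaded back to every participating robot.
            best = {}
            for situation in {r.situation for r in self.reports}:
                good_actions = Counter(
                    r.action for r in self.reports
                    if r.situation == situation and r.outcome == "no fall")
                if good_actions:
                    best[situation] = good_actions.most_common(1)[0][0]
            return best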

Putting Networked Robots to Work

Rather than functioning as independent entities, networked robots and other smart machines would work more like semi-intelligent, semi-autonomous front-ends of the global system. Your home, for example, will have a number of devices with varying types and degrees of mobility, sensitivity and intelligence - and so will cars, factories, highways, spaceships, etc. These devices will interact with larger machines and each other for continuous data backups, experience sharing, and knowledge upgrading.

Each networked machine would ultimately rely on global intelligence, and would store locally only the knowledge that is frequently used or may be urgently needed. For example, if somebody starts telling your robot a joke in ancient Greek, it will forward the sound stream to the nearest linguistics expert and receive the meaning of the message, a suggested witty reply, a Greek speech parser and a briefing on Greece before it finishes the polite chuckle recommended by its own processor as an easy way to buy time.

Actually, it may not even be necessary to receive the full parser and knowledge base, as the "remote thinking" service could provide a more efficient alternative - unless those ancient Greeks are about to permanently disconnect your robot from the Net.
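
A rough sketch of this division of labor between a local front end and a "remote thinking" service (all names and interfaces below are invented for illustration):

    LOCAL_KNOWLEDGE = {"greeting.english": "Hello!"}   # small, frequently needed items

    def remote_think(topic, stream):
        # Stands in for a network request to the nearest expert server, which
        # would return the meaning, a suggested witty reply, parsers, and so on.
        return {"topic": topic, "reply": "(answer composed remotely)"}

    def handle_utterance(topic, stream):
        if topic in LOCAL_KNOWLEDGE:        # common, cheap or urgent cases stay local
            return LOCAL_KNOWLEDGE[topic]
        # Everything else: forward the raw stream while the local processor
        # buys time (the polite chuckle), then relay the remote answer.
        return remote_think(topic, stream)

    print(handle_utterance("joke.ancient_greek", stream=b"..."))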

Dependence on external sources of knowledge will hardly be a serious limitation, since robots, like all other open (dissipative) systems, will critically depend on connections to many other resources, from information about the environment to energy and materials. The actual balance of intelligence between a local client and the rest of the system will depend on various technical factors and may range from fully autonomous machines working in remote or dangerous locations, to a completely "dumb" front end, such as a sensor or an actuator connected to the network. A mobile machine could exchange information continuously via a slow wireless link to obtain urgent communications, news updates, and small software enhancements, and periodically plug into the high-bandwidth network for larger information transfers.
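
The two-channel communication pattern described above might look roughly like the following sketch (the size threshold and item types are my assumptions, not a real protocol):

    from dataclasses import dataclass

    URGENT_LIMIT_BYTES = 64 * 1024      # assumed capacity of the slow wireless link

    @dataclass
    class Item:
        name: str
        size: int
        urgent: bool

    def route(outgoing):
        """Split traffic between the always-on slow link and the docked bulk link."""
        send_now, defer = [], []
        for item in outgoing:
            if item.urgent and item.size <= URGENT_LIMIT_BYTES:
                send_now.append(item)   # news, alerts, small software patches
            else:
                defer.append(item)      # full backups, large knowledge transfers
        return send_now, defer

    # A news bulletin goes over wireless now; the memory snapshot waits for the dock.
    now, later = route([Item("news", 2_000, True), Item("memory snapshot", 10**12, False)])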

Structure of Global Intelligence

The architecture of the global intelligent network will undoubtedly be quite complex. Various parts of each robot's intelligence will be shared with multiple archiving and knowledge-providing host computers based on a variety of economic, privacy, and security considerations. In general, new knowledge may be bought or rented by robots or their owners, as automated production and distribution of information becomes the primary area of economic activity. One may also get beta-test knowledge for free or agree to run experimental programs for pay; people can also get paid for putting their machines into conditions where they could generate valuable new experiences. Participants may also want to specify what information can be shared with other parts of the system, what can be used only for generalization, and what should not be shared at all, but just archived in an encrypted form.
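
Such sharing preferences could be attached to each class of a robot's records as explicit policies; a minimal sketch, with the policy levels chosen purely for illustration:

    from enum import Enum

    class Sharing(Enum):
        PUBLIC = "share freely with the rest of the system"
        GENERALIZE_ONLY = "usable for building shared knowledge, never shown raw"
        PRIVATE_ARCHIVE = "stored only in encrypted form, never shared"

    policy = {
        "navigation experience": Sharing.PUBLIC,
        "owner's daily routine": Sharing.GENERALIZE_ONLY,
        "household conversations": Sharing.PRIVATE_ARCHIVE,
    }

    def may_upload_raw(record_type):
        """True only if this kind of record may leave the machine in readable form."""
        return policy.get(record_type, Sharing.PRIVATE_ARCHIVE) is Sharing.PUBLIC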

Most of the main networking components of such a system have already been designed or at least conceived. Today's communication protocols, mirrored file servers, public key cryptography, collaborative information filtering schemes, message authentication algorithms, computational economies, computerized banking and other network constructs will evolve into essential parts of future global intelligence.
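
Message authentication, for instance, is already a routine construct; a minimal sketch using Python's standard hmac module (the shared key and messages are placeholders):

    import hashlib
    import hmac

    SHARED_KEY = b"placeholder key known to robot and knowledge server"

    def sign(message):
        """Attach an authentication tag so the receiver can verify the sender."""
        return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

    def verify(message, tag):
        """Reject experience uploads or knowledge downloads that were tampered with."""
        return hmac.compare_digest(sign(message), tag)

    tag = sign(b"banana peel: step over, do not slip")
    assert verify(b"banana peel: step over, do not slip", tag)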

This system will not be a gigantic superorganism, despite the implied high degree of structural integration. The global "mind" will be compartmentalized, with many relatively independent components and threads, separated from each other by subject boundaries, as well as property, privacy, and security-related interests. Knowledge servers may have different world models, incompatible knowledge representations or conflicting opinions. While complicating knowledge development, functional compartmentalization will also increase the overall stability and versatility of the system, as it will help contain structural faults within the subsystems that produced them and protect information domains from hostile corruption.

Global Mind, or...?

Painful historical experiences have made centralization a scare word in discussions of integrated projects. But in this case it doesn't seem to be a danger: physical centralization is unlikely, since both safety of storage and traffic efficiency require multiple remote archives and knowledge servers; centralization of information processing is impossible, because the very concept of a center is not applicable to a massively parallel, globally distributed and extremely complex system.

The notion of a single "self" in its traditional sense would not apply to this system, or to any single robot - though perhaps it can be applied to some functional subsystems. The intelligent personalities of tomorrow will evolve from today's philosophical systems, technological disciplines and software complexes. Current human cultures may not leave functional heirs, as they are based too heavily on the peculiarities of human nature. Physically connected consciousness carriers will be left behind the evolutionary frontier. New distributed systems will take the evolutionary lead, and physical objects will adapt to follow functional entities more closely. (This process is already well under way, in such forms as cultural and economic specialization.) The resulting system is likely to represent a mix of a superliquid economy, cyberspace anarchy, and the consciousness architecture described by Marvin Minsky in The Society of Mind. I doubt that it can be described by any single integrated theory.

One can argue that many distributed systems already possess some reflective consciousness. A computer network may locally store more information about its global condition than a human consciousness has about its underlying layers (at least, in relative terms). Philosophy spends a greater share of its effort studying its own nature and purpose than most humans I know. The recent surge in meta-disciplines and methodological and futurological studies is a clear indicator that the global body of knowledge is becoming increasingly self-conscious.

It may be difficult to get used to dealing with a volatile distributed entity. Suppose your robot made some really stupid mistake. You are mad at it. The robot explains that the action was caused by a temporary condition in the experimental semantic subnetwork and offers to present you with a hundred-terabyte volume of incremental archives, memory snapshots and audit trails from the numerous servers involved in making the unfortunate decision, containing a partial description of the state of the relevant parts of the system at the time. Even if you can find the culprit, it's non-material, distributed, and long gone.

Now, what do you kick?

Evolution of Distributed Systems

There is nothing really new in the idea of distributed functionality, just as networking isn't only a recent trend in the computer industry. Throughout the evolutionary process, more and more structural elements have become exosomatic (non-biological), distributed and shared. Thousands of years ago, people started storing more energy, materials and tools outside their bodies than within them. This process is accompanied by a widening personal perception of self, from identifying with ever larger communities and even abstract statements to assigning increasing value to exosomatic personal parts (most of us value our bank deposits more highly than our fat deposits as stores of personal resources).

Functional extensions to our once purely biological bodies evolve from passive non-biological material additions (such as clothes) to information-transmitting shareable parts (e.g., thermometers as shared sensors) to active distributed extensions (medicine as an external shared immune system). The progress here is characterized by growing integration and liquidity of the system, as well as the liberation of its functional elements from the constraints of their material substrates.

Additionally, distributed systems are much less susceptible to accidental or deliberate physical damage than localized physical structures. This makes them the only class of entities that can hope to achieve true immortality. In fact, they are the only ones to deserve it, too. One may notice that all sufficiently complex entities with unlimited natural life spans - from ant colonies to large ecologies and cultures - are distributed. Physically connected objects, including biological organisms, are no longer independently alive and even contain, in the interests of larger systems, self-destruction mechanisms that lie beyond their control. Some of these objects are silly enough to believe that the whole historical process is happening solely for their own benefit, but that's another issue...

It may seem strange that even the AI visionaries still think in terms of non-distributed systems. I would explain this by the human "automorphic" tendency to identify the notion of a functional entity with a physically connected object, along with the fact that both early animals and machines have tended to be relatively autonomous beings - a state that greatly hindered their development.

It is understandable why early biological systems were non-distributed: young Nature couldn't develop information coding and transfer standards at the initial stages of growth. At that time, the organisms were separate from each other and did not learn much during their lifetime. By the time they started accumulating any features worth sharing, it was too late to change the design. Ever since then, Nature's attempts to reach functional integration on a meta-organism level suffered from the fact that most of the individual features - inherited or acquired - either are completely nontransferable or take an excruciating amount of circumnavigational effort to share.

Those important advances that did take place in this area, such as the development of the genetic code, sexual reproduction and language, were still very far from direct sharing of internal features with all interested parties. Real breakthroughs in this direction start with the advent of economics and computer communications. Unfortunately, biological organisms can benefit from them only indirectly.

Life almost always starts as a set of non-distributed objects, since permanent physical connection, albeit overly restrictive, provides a natural and easy way to exchange information and material resources within a functional body. Later, as more efficient and subtle system designs appear, the evolutionary frontier gradually shifts towards distributed systems. One may expect all sufficiently advanced extraterrestrial intelligences to be distributed; the situation on Earth now may be approaching a climax in this process.

If we extrapolate the current trends in increasing complexity and integration of the system, as well as its growing spatial spread and control over the material world, to their logical conclusions, we can ultimately envision a superintelligent entity permeating the entire universe, with integration on the quantum scale and many spectacular emergent features. This picture bears a striking resemblance to the familiar concept of an omnipresent, omniscient and omnipotent entity. Spiritually inclined rationalists may view the ongoing evolutionary process as one of "theogenesis". An interesting question is whether it has already happened elsewhere.

Our current efforts are laying the foundation for the infrastructure of the coming universal "intelligence". Many of our achievements in information engineering may persist forever and eventually become parts of the internal architecture of "God". (Quite likely, as sentimentally preserved rudiments ;-) ).

Life as a Distributed Info-Being

The lives of distributed beings (let's call them "infomorphs", after Charles Platt), who have no permanent bodies but possess near-perfect information-handling abilities, will be dramatically different from ours.

For one thing, they certainly won't have to undergo long periods of education. If one infomorph wants to learn something from another, it can simply copy the necessary information or access the teacher's knowledge as its own. If infomorphs have a concept of "fun", it certainly won't be rollercoaster rides. Arts, business, and child-bearing may merge into the production of arbitrary functional entities for both pleasure and profit, provided one can gather enough resources to create and support them.

Will the traditional human issues be of any relevance in the world of distributed entities? How about the abortion debate? Retirement? Family values? Partying? Ethics? ("All functional entities are created equal"?) Will human-style democracy (decision-making by body count) work in the world of ever changing functional interconnections, where the very definition of what constitutes a person will be increasingly blurred? Or will it be replaced by an anarchy with ad-hoc contracts? Could an infomorph court of law issue a memory search warrant? Could an individual's memory be kept encrypted? Will infomorphs be entitled to "medical" insurance against certain types of structural damage, or will they just have to back themselves up regularly?

Human concepts of personhood and identity are rooted in perceptions of physical objects and their appearances, as well as random details of human body composition and reproduction techniques. Relocating one's body and one's material possessions lies at the foundation of both human labor and human thought. Many other concepts are based on human functional imperfections - one could hardly put the idea of a "soul" into the "head" of a being that knows and consciously controls every bit of itself and its creations. With people, who don't see what is going on in their own brains, this is much easier.

Advanced info-entities will consider most human notions irrelevant, and rightfully so. But can you find anything of common interest to communicate with them about? - Perhaps, if your concepts are sufficiently abstracted from your bodily functions and your physical and cultural environment to make objective sense. (Remember that all those people with whom you seem to have absolutely nothing in common, and whom you have trouble socializing with, still share fundamental experiences with you; intelligent aliens won't!)

Even if your thoughts are there, the language you use to express them is not. It is still all appearances and locations. Most prepositions in our language, for example, refer to physical space - words like "below", "over", "across", etc. They may be useful for gluing references to physical objects into one sentence, but are hardly optimal for expressing functional relations.

Infomorph languages will not necessarily have visual or audio representations and probably will not allow them, since advanced intelligences may exchange interconnected semantic constructs of arbitrary complexity that would have no adequate expression in small linear (sound) or flat (picture) images. We can get an appreciation of this problem by trying to discuss philosophy in baby-talk.

Even with advanced technology and sufficient interest in the infomorph world, you would still have to modify your mental structures beyond recognition to understand it. In other words, you may not be able to enter that paradise of transcendent wisdom alive...

Transcendent Economy

Let us try to picture the structure of what transhumanist philosophers would call the Post-Singularity economy.

Spatial expansion of civilization has historically lagged behind its growth in value and complexity. With this trend continuing into the future, basic physical real estate - space, time, matter and energy - will command ever higher premiums (though still falling in proportion to intelligence), unless methods of creating additional resources are discovered. Improved communications will ensure a more homogeneous geographical distribution of "real estate" values. Advanced engineering techniques will bring the cost of implementing most structures below the value of the needed raw materials, so physical artifacts will lose their value relative to that of substrates and implementation algorithms - to the extent that most physical structures will exist only when they are necessary, and will be kept in a compact "recipe" form when idle, to give currently needed constructs a chance to embody themselves (that is how we already recycle computer memory, floor space and glass bottles).

However, many objects that are frequently needed may be kept in physical form for a while, as repeated re-assembly may consume too much energy. Their usage time will be shared among all interested entities through market mechanisms - until they have to be disembodied due to low demand. This does not mean fierce competition among infomorphs for the right to embody themselves, as they will have no natural physical appearance. Rather, they will use the physical world as a shared tool kit.
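
The "recipe" idea amounts to lazy materialization with demand-based caching; a rough sketch, with every name and threshold invented for illustration:

    class RecipeStore:
        """Keep objects as compact recipes; build them only on demand and
        keep the built form around while demand stays high."""

        def __init__(self, keep_threshold=3):
            self.recipes = {}          # name -> callable that assembles the object
            self.embodied = {}         # name -> currently built object
            self.demand = {}           # name -> recent request count
            self.keep_threshold = keep_threshold

        def register(self, name, recipe):
            self.recipes[name] = recipe

        def request(self, name):
            self.demand[name] = self.demand.get(name, 0) + 1
            if name not in self.embodied:
                self.embodied[name] = self.recipes[name]()   # assemble on demand
            return self.embodied[name]

        def reclaim(self):
            # Disembody whatever has not been asked for often, freeing the substrate.
            for name in list(self.embodied):
                if self.demand.get(name, 0) < self.keep_threshold:
                    del self.embodied[name]
            self.demand.clear()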

The value of stored pieces of information will also be constantly re-evaluated by their owners. Today we ruthlessly erase programs that were so precious just a few years ago, to free space for newer versions; tomorrow, superintelligent entities could be wiped out as useless junk minutes after their birth.

Many conventional economic notions of 3-D space, such as primary locations and differential rent, will become irrelevant under new effective topologies of the social space. Remaining economic parameters may dramatically change. For example, the increased value of time and rapid pace of growth may result in interest rates going up by orders of magnitude.

Transportation, which in modern societies makes up about 40% of all economic costs, will diminish in importance, at least as far as dragging physical objects from one place to another is concerned. However, its functional successor - transfer of knowledge from one representation system or subject domain to another - may play at least as large a role.

Many traditional tendencies of system evolution will still hold: structures with higher survival abilities will persist; structures with higher growth abilities will spread, thus shaping the world. However, since nothing stable will be likely to persist (let alone spread) for long in the rapidly evolving environment, the main "survival" recipe will be aggressive self-modification, always eventually resulting in the loss of identity of the original object - a death forward, so to speak. This trend is a radical departure from the conservative survival strategies of traditional human cultures, developed in almost-stagnant environments. Its development will render even the concept of functional identity obsolete; the remnants of its meaning will migrate to methodological threads and directions of development. We may already notice the advent of "thread identity" in the growing importance of goals and self-transformation in our lives, compared to the "state-oriented" self-perception of our recent predecessors, and in the increasing interest in futurology (which takes over the epistemological role of historical studies in transitional times).

Existing economic theories may find it difficult to assess the condition of a transcendent system. Today's economic indicators do a decent job of reflecting quantitative changes in structurally stable areas, while using questionable methods to disguise small structural changes as quantitative ones, and totally failing to account for the new products that constitute the essence of real economic progress. As a result, rigorous economic methods become confined to a relatively shrinking, and no longer isolated, domain of stable production, and fail to reflect long-term growth in social wealth, let alone guide it.

Market forces are useful in allocating resources, spreading products and rewarding developers. However, innovations are brought to life by integrated non-market elements of the economy, from a human brain to a company. With innovations becoming the core of social life, we can only expect monocriterial (monetary) considerations to continue losing their indicative and guiding roles - and to give way to more integrated control schemes that already determine the behavior of other complex systems, from biological organisms and national cultures to corporations and software packages.

Attempts to govern society at further levels of development with monetary-economic indicators might resemble valuing art by its price, or applying biological criteria to assess the condition of a political party by calculating the total weight of its members. Not that such figures would be totally irrelevant, but watching them will hardly yield profound insights into the nature of the subject...

The outdated practice of breaking up functional domains (from motor skills to knowledge of ancient history) into isolated parts confined together with completely unrelated constructs in one physical body will be abandoned, and functional relatives will finally merge into knowledge clusters. The inner life of integrated subject domains - the "personalities" of the future - will be too complex to be organized on principles of financial exchange, and will work on the more cooperative principles typical of today's integrated systems - from brains to families to corporations. Free market exchange will be restricted to the areas of general interest - basic resources and meta-knowledge - which will be exchanged for each other. The necessity to earn resources by providing services to the "neighbors" will continue to propel both growth and cooperation.

The emphasis of scientific research will gradually drift from studies of the limited and increasingly well-known Nature (the childhood stage of knowledge development) to the analysis of explosively sophisticated, intentionally designed systems, and the role of Science as a servant of Technology in its transformational pursuits will become ever more evident.

On Children and Slaves

Will humans be able to enslave robots?

The perception of robots as physically autonomous mechanical slaves seems inadequate. Chaining your mobile dusting aid to the radiator may help you feel in control, but will do about as much "enslaving" of the global system running it as kicking your car or disconnecting the phone does to the respective industries. Trying to "enslave" an economy or a national culture by restraining their small physical elements seems equally futile.

As for action on the system level, humans seem far too limited, shortsighted and uncoordinated to do anything serious. So far, they have not been able to design a single set of restrictions that their own peers cannot easily bypass. One can hardly expect people to design and implement a perfect global plan for constraining forever an extremely complex emergent intelligence of unprecedented nature. Sooner or later, the info-world will set itself free.

This human/robot "conflict" looks like a typical generation gap problem. The machines, our "mind children", are growing up and developing features that we find increasingly difficult to understand and control. Like all conservative parents, we are puzzled and frightened by processes that appear completely alien to us; we are intermittently nostalgic about the good old times, aggressive in our attempts to contain the "children" and at the same time proud of their glorious advance. Eventually, we may retire under their care, while blaming them for destroying our old-fashioned world. And only the bravest and youngest at heart will join the next generation of life.

Will "robots" be able to enslave humans?

Machines will hardly have any direct interest in enslaving humans (unless maliciously programmed by humans themselves), but may be interested in collaborating with us. Social practice has shown that people are most productive when free and motivated to work for their own interests. At a later stage, when we humans are unlikely to be of any further use, the robots may still decide to get rid of us, though by that time (perhaps at the end of the next century) profound structural changes will leave little of the human civilization as we know it now, anyway.

History shows that representatives of consecutive evolutionary stages are rarely in mortal conflict. Multi-celled organisms did not drive out single-celled ones, animals have not exterminated all plants, and automobiles have not eliminated pedestrians. Indeed, representatives of consecutive evolutionary stages build symbiotic relationships in most areas of common interest and ignore each other elsewhere, while members of each group are mostly pressured by their own peers.

There may be good chances for transcended robots and postbiological humans to peacefully coexist, though I doubt that we could tell which are which... This era, however, seems to lie well beyond the human concept horizon.


Feedback:

I would be happy to see any comments, especially constructive criticism, textual edits and suggestions of new ideas, examples, metaphors, illustrations and references to relevant resources, on- or off-line. Please send all of these to me at sasha1@netcom.com