V. 0.84-5 - 15 April, 1997 Sasha

Intelligent Information Filters
and
Enhanced Reality

© 1996,1997 Alexander Chislenko

Preface

I started to think seriously about the ideas of augmented perception and personalized views of reality after reading a number of Internet messages containing proposals to introduce language standards for on-line communications. Frequently, people suggest restricting certain forms of expression or polishing the language of the posts to make them less offensive and more generally understandable. While looking forward to the advantages of improved communications, I wanted to see them provided by tools that would at the same time make the language mix of the Net more free and diverse.

In this article, I suggest that active information filtering technologies may help us approach this goal for both textual and multimedia information. I also pursue this concept further, discussing the introduction of augmented perception and Enhanced Reality (ER), and share some observations and predictions about the transformations in people's perception of the world and themselves in the course of technological progress.

Text Translation and Its Consequences

Many of us are used to having incoming e-mail filtered, decrypted, formatted and shown in our favorite colors and fonts. These techniques can be taken further. Customization of spelling (e.g., American to British or archaic to modern) would be a straightforward process. Relatively simple conversions could also let you see any text with your favorite date and time formats, use metric or British measures, implement obscenity filters, abbreviate or expand acronyms, omit or include technical formulas, personalize synonym selection and punctuation rules, and use alternative numeric systems and alphabets (including phonetic and pictographic). Text could also be digested for a given user, translated to his native language and even read aloud with his favorite actor's voice.
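As a minimal sketch of how such reader-side customization might work, the individual conversions could be chained into a simple pipeline. The profile fields, spelling table and unit rule below are invented illustrations, not a description of any existing tool:

    import re

    # Hypothetical reader profile: which conversions this particular reader wants.
    PROFILE = {
        "spelling": {"color": "colour", "center": "centre"},   # American -> British
        "acronyms": {"IMHO": "in my humble opinion"},
        "units": "metric",
    }

    def customize_spelling(text, profile):
        """Swap spellings according to the reader's preference table."""
        for src, dst in profile["spelling"].items():
            text = re.sub(rf"\b{src}\b", dst, text)
        return text

    def expand_acronyms(text, profile):
        """Spell out acronyms the reader asked to see expanded."""
        for short, full in profile["acronyms"].items():
            text = re.sub(rf"\b{short}\b", full, text)
        return text

    def convert_units(text, profile):
        """Convert simple 'N miles' phrases to kilometres for metric readers."""
        if profile["units"] != "metric":
            return text
        return re.sub(r"(\d+(?:\.\d+)?)\s*miles?\b",
                      lambda m: f"{float(m.group(1)) * 1.609:.1f} km", text)

    def personalize(text, profile=PROFILE):
        """Run an incoming message through the reader's filter chain."""
        for stage in (customize_spelling, expand_acronyms, convert_units):
            text = stage(text, profile)
        return text

    print(personalize("IMHO the color of the sign 2 miles ahead is wrong."))
    # -> "in my humble opinion the colour of the sign 3.2 km ahead is wrong."

Each stage is independent, so readers could mix and match conversions the way they already mix mail filters.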

My friend Gary Bean suggested a possible implementation of "cliché translators" that would explicitly convey the meaning of a sentence that is known to the translator, but not necessarily to the reader. For example, the phrase "that's an interesting idea" might be translated as "I have serious reservations about this". In the reverse operation, words and phrases can be replaced with politically correct euphemisms.
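Such a cliché translator could start as little more than a phrase table applied to incoming text. The entries below are invented examples of the idea, not Gary Bean's actual proposal:

    import re

    # Invented phrase table: surface cliché -> what it is conventionally taken to mean.
    CLICHES = {
        "that's an interesting idea": "I have serious reservations about this",
        "with all due respect": "I am about to disagree with you",
        "let's take this offline": "I do not want to discuss this here",
    }

    def translate_cliches(text):
        """Replace known clichés with their conventional readings (case-insensitive)."""
        for phrase, meaning in CLICHES.items():
            text = re.sub(re.escape(phrase), meaning, text, flags=re.IGNORECASE)
        return text

    print(translate_cliches("Well, that's an interesting idea."))
    # -> "Well, I have serious reservations about this."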

After the recent Communications Decency Act, Robert Carr developed a remarkable "HexOn Exon" program that allows the user to convert obscene words in messages into the names of the senators responsible for this Act, and vice versa. Besides presenting a humorous attempt to bypass the new obscenity censorship, this program demonstrates that allocating both responsibilities and rights for the contents of a message among multiple authoring and filtering agencies may not be easy.

Translation between various dialects and jargons, though difficult, should still take less effort than the translation between different natural languages, since only a part of message semantics has to be processed. Good translation filters would give "linguistic minorities" -- speakers of languages ranging from Pig Latin to E-Prime and Loglan -- a chance to practice their own languages while communicating with the rest of the world.

Some jargon filters have already been developed: you can enjoy reading Ible-Bay, the Pig Latin version of the Bible, or use the Dialectizer program to convert your English texts to Fudd or Cockney.

Such translation agents would allow rapid linguistic and cultural diversification, to the point where the language you use to communicate with the world could diverge from everybody else's as far as the requirement of general semantic compatibility may allow. It is interesting that today's HTML Guide already calls for the "divorce of content from representation", suggesting that you should focus on what you want to convey rather than on how people will perceive it.

Some of these features will require full-scale future artificial intelligence, such as the "sentient translation programs" described by Vernor Vinge in "A Fire Upon The Deep". In the meantime, they could be successfully emulated by human agents.

Surprisingly, even translations between different measurement systems can be difficult. For example, your automatic translator might have trouble converting such expressions as "a few inches away", "the temperature will be in the 80s" or "a duck with two feet". A proficient translator might be able to convey the original meaning, but the best approach would be to write the message in a general semantic form which would store the information explicitly, indicating in the examples above where the terms refer to measurements, whether you insist on the usage of the original system, and the intended degree of precision. As long as the language is expressive enough, it is suitable for the task - and this requirement is purely semantic; symbol sets, syntax, grammar and everything else can differ dramatically.

A translation agent would interactively convert natural-language texts to this semantic lingua franca and interpret them back according to a given user profile. It could also reveal additional parts of the document depending on users' interests, competence in the field, and access privileges.
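One way to picture this general semantic form is a small record that stores the value, the intended precision, and whether the author insists on a particular unit system, which the reader-side agent then renders according to the user profile. The field names and rendering rules below are illustrative assumptions only:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Quantity:
        """A measurement stored semantically rather than as surface text."""
        value_si: float                      # value in SI units (metres, in this example)
        precision_si: float                  # intended precision, in the same units
        force_system: Optional[str] = None   # set when the author insists on one system

    UNITS = {"metric": ("centimetres", 0.01), "imperial": ("inches", 0.0254)}

    def render(q: Quantity, profile_system: str) -> str:
        """Interpret the stored quantity for a given reader profile."""
        name, size = UNITS[q.force_system or profile_system]
        value = q.value_si / size
        if q.precision_si > q.value_si / 3:   # loose precision: avoid false exactness
            return f"a few {name}"
        return f"about {value:.0f} {name}"

    # "A few inches away", stored once and shown to two different readers:
    away = Quantity(value_si=0.08, precision_si=0.04)
    print(render(away, "imperial"))   # -> "a few inches"
    print(render(away, "metric"))     # -> "a few centimetres"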

Currently, we can structure our mental images any way we want so long as we can translate them to a common language. This has led to relatively stable standardized languages and a great variability among minds. Likewise, intelligent software translators could let us make our languages as liberated as our minds and push the communication standards beyond our biological bodies. (It really means just further exosomatic expansion of the human functional body, but the liberation still goes beyond the traditional human interpretation of "skin-encapsulated" personal identity.)

So will there be more variety or more standardization? Most likely both, as flexible translation will help integrate knowledge domains currently isolated by linguistic and terminological barriers, and at the same time will protect linguistically adventurous intellectual excursions from the danger of losing contact with the semantic mainland. Intelligent translators could facilitate the development of more comprehensive semantic architectures that would make the global body of knowledge at the same time more diverse and more coherent.

Information may be stored and transmitted in the general semantic form. With time, an increasing number of applications can be expected to use the enriched representation as their native mode of operation. Client translation software will provide an emulation of the traditional world of "natural" human interactions while humans still remain to appreciate it. The semantic richness of the system will gradually shift away from biological brains, just as data storage, transmission and computation have in recent history. Humans will enjoy growing benefits from the system they launched, but at the expense of understanding the increasingly complex "details" of its internal structure, and for a while will keep playing an important role in guiding the flow of events. Later, after the functional entities liberate themselves from the realm of flesh that gave birth to them, the involvement of humans in the evolutionary process will be of little interest to anybody except humans themselves.

Enhanced Multimedia

Similar image transformation techniques can be applied to multimedia messages. Recently, a video system was introduced that allows you to "soften the facial features" of the person on the screen. Advanced real-time video filters could remove wrinkles and pimples from your face or from the faces of your favorite political figures, caricature their opponents, give your mother-in-law a Klingon persona on your video-phone, re-clothe people in your favorite fashion, and replace visual clutter in the background with something tasteful.

It also seems possible to augment human senses with transparent external information pre-processors. For example, if your audio/video filters notice an object of potential interest that fails to differ from its signal environment enough to catch your attention, the filters can amplify or otherwise differentiate (move, flash, change pitch, etc.) the signal momentarily, to give you enough time to focus on the object, but not enough to realize what triggered your attention. In effect, you would instantly see your name in a text or find Waldo in a puzzle as easily as you would notice a source of loud noise or a bright light.
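A toy model of such a transparent attention filter, assuming each perceived object already carries a machine-estimated salience score (all names and numbers below are invented for illustration):

    # Objects matching the user's interests get a brief salience boost -- long enough
    # to cross the attention threshold, short enough that the user never learns why.
    INTERESTS = {"own_name", "waldo"}
    ATTENTION_THRESHOLD = 0.7
    BOOST = 0.4
    BOOST_FRAMES = 3

    def filter_frame(objects, frame_no, boosted_until):
        """objects: list of (tag, salience). Returns the adjusted saliences for this frame."""
        adjusted = []
        for tag, salience in objects:
            if tag in INTERESTS and salience < ATTENTION_THRESHOLD:
                boosted_until.setdefault(tag, frame_no + BOOST_FRAMES)
            if frame_no < boosted_until.get(tag, 0):
                salience = min(1.0, salience + BOOST)
            adjusted.append((tag, salience))
        return adjusted

    boosted_until = {}
    scene = [("background_noise", 0.2), ("own_name", 0.4)]
    for frame in range(5):
        print(frame, filter_frame(scene, frame, boosted_until))
    # "own_name" is pushed above the threshold for frames 0-2 only, then released.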

While such filters do not have to be transparent, they may be a way to provide a comfortable "natural" feeling of augmented perception for the next few generations of humans, until the forthcoming integration of technological and neural processing systems makes such kludgy patches obsolete.

Some non-transparent filters can already be found in military applications. Called "target enhancements", they allow military personnel to see the enemy's tanks and radars nicely outlined and blinking.

More advanced filtering techniques could put consistent dynamic edits into the perceived world (a rough sketch of how several such filters might be chained follows the list below).

o Volume controls could sharpen your senses by allowing you to adjust the level of the signal or zoom in on small or distant objects.

o Calibration tools could expand the effective spectral range of your perception by changing the frequency of the signal to allow you to hear ultrasound or perceive X-rays and radio waves as visible light.

o Conversions between different types of signals may allow you, for example, to "see" noise as fog while enjoying quiet, or convert radar readings from decelerating pedestrians in front of you into images of red brake lights on their backs.

o Artificial annotations to perceived images would add text tags with names and descriptions to chosen objects, append warning labels with skull and crossbones on boxes that emit too much radiation, and surround angry people with red auras (serving as a "cold reading" aid for wanna-be psychics).

o Reality filters may help you filter all signals coming from the world the way your favorite mail reader filters your messages, based on your stated preferences or advice from your peers. With such filters you may choose to see only the objects that are worthy of your attention, and completely remove useless and annoying sounds and images (such as advertisements) from your view.

o Perception utilities would give you additional information in a familiar way -- project clocks, thermometers, weather maps, and your current EKG readings upon [the image of] the wall in front of you, or honk a virtual horn every time a car approaches you from behind. They could also build on existing techniques that present us with recordings of the past and forecasts of the future to help people develop an immersive trans-temporal perception of reality.

o "World improvement" enhancements could paint things in new colors, put smiles on faces, "babify" figures of your incompetent colleagues, change night into day, erase shadows and improve landscapes.

o Finally, completely artificial additions could project northern lights, meteorites, and supernovas upon your view of the sky, or populate it with flying toasters, virtualize and superimpose on the image of the real world your favorite mythical characters and imaginary companions, and provide other educational and recreational functions.
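As promised above, here is a crude sketch of how a few of these filters might be composed over a single perceived scene. The scene representation, filter names and annotation table are all invented for illustration and stand in for far heavier signal-processing machinery:

    # Each percept is a plain dict; filters are small functions composed in order.
    ANNOTATIONS = {"unmarked_box": "WARNING: radiation source"}

    def remove_ads(scene):
        """Reality filter: drop objects the user has marked as clutter."""
        return [obj for obj in scene if obj["kind"] != "advertisement"]

    def annotate(scene):
        """Artificial annotations: attach text tags to chosen objects."""
        for obj in scene:
            if obj["kind"] in ANNOTATIONS:
                obj["label"] = ANNOTATIONS[obj["kind"]]
        return scene

    def downshift_ultrasound(scene):
        """Calibration: move out-of-range audio into the audible band."""
        for obj in scene:
            if obj.get("audio_hz", 0) > 20_000:
                obj["audio_hz"] = 15_000
        return scene

    def enhance(scene, filters=(remove_ads, annotate, downshift_ultrasound)):
        """Apply the user's chosen perception filters in sequence."""
        for f in filters:
            scene = f(scene)
        return scene

    scene = [
        {"kind": "advertisement"},
        {"kind": "unmarked_box"},
        {"kind": "bat_call", "audio_hz": 45_000},
    ]
    print(enhance(scene))
    # The advertisement disappears, the box gets its warning label,
    # and the bat call is shifted down into the audible range.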

I would call the resulting image of the world Enhanced Reality (ER).

Structure of Enhanced Reality

One may expect that as long as there are things left to do in the physical world, there will be interest in applying ER technology to improve our interaction with real objects, while Virtual Reality (VR) in its traditional sense of pure simulation can provide us with safe training environments and high-bandwidth fiction. Later, as ER becomes considerably augmented with artificial enhancements, and VR incorporates a large amount of archived and live recordings of the physical world, the distinctions between the two technologies may blur.

Some of the interface enhancements can be made common, temporarily or permanently, for large communities of people. This would allow people to interact with each other using, and referring to, the ER extensions as if they were parts of the real world, thus elevating the ER entities from individual perceptions to parts of shared, if not objective, reality. Some such enhancements could follow existing metaphors. A person who has a reputation as a liar could appear to have a long nose. Entering a high-crime area, people may see the sky darken and hear distant funeral music. Changes in global political and economic situations with possible effects on some ethnic groups may be translated into bolts of thunder and other culture-specific omens.

Other extensions could be highly individualized. It is already possible, for example, to create personalized traffic signs. Driving by the same place, an interstate truck driver may see a "no go" sign projected on his windshield, while the driver of the car behind him will see a sign saying "Bob's house - next right". More advanced technologies may create personalized interactive illusions that would be loosely based on reality and propelled by real events, but would show the world the way a person wants to see it. The transparency of the illusion would not be important, since people are already quite good at hiding bitter or boring truths behind a veil of pleasant illusions. Many people even believe that their entirely artificial creations (such as music or temples) either "reveal" the truth of the world to them or, in some sense, "are" the truth. Morphing unwashed Marines into singing angels or naked beauties would help people reconcile their dreams with their observations.

Personal illusions should be built with some caution, however. The joy of seeing the desired color on the traffic light in front of you may not be worth the risk. As a general rule, the more control you want over the environment, the more careful you should be in your choice of filters. However, if the system creating your personal world also takes care of all your real needs, you may feel free to live in any fairy tale you like.

In many cases, ER may provide us with more true-to-life information than our "natural" perception of reality. It could edit out mirages, show us our "real" images in a virtual mirror instead of the mirror images provided by the real mirror, or allow us to see into -- and through -- solid objects. It could also show us many interesting phenomena that human sensors cannot perceive directly. Giving us knowledge of these things has been a historical role of science. Merging the obtained knowledge with our sensory perception of the world may be the most important task of Enhanced Reality.

Historical Observations

People have been building artificial symbolic "sur-realities" for quite a while now, though their artifacts (from art to music to fashions to traffic signs) have been mostly based on the physical features of the perceived objects. Shifting some of the imaging workload to the perception software may make communications more balanced, flexible, powerful and inexpensive.

With time, a growing proportion of objects of interest to an intelligent observer will be entirely artificial, with no inherent "natural" appearance. Image modification techniques then may be incorporated into integrated object designs that would simultaneously interface with a multitude of alternative intelligent representation agents.

The implementation of ER extensions would vary depending on the available technology. At the beginning, it could be a computer terminal, later a headset, then a brain implant. The implant can be internal in more than just the physical sense, as it can actually post- and re-process information supplied by biological sensors and other parts of the brain. The important thing here is not the relative functional position of the extension, but the fact of intentional redesign of perception mechanisms -- a prelude to the era of comprehensive conscious self-engineering. The ultimate effects of these processes may appear quite confusing to humans, as the emergence of things like personalized reality and fluid distributed identity could undermine their fundamental biological and cultural assumptions regarding the world and the self. The resulting "identity" architectures will form the kernel of trans-human civilization.

The advancement of human input processing beyond the skin boundary is not a novel phenomenon. In the audiovisual domain, it started with simple optics and hearing aids centuries ago and is now making rapid progress with all kinds of recording, transmitting and processing machinery. With such development, "live" contacts with the "raw world" data might ultimately become rare, and could be considered inefficient, unsafe and even illegal. This may seem an exaggeration, but this is exactly what has already happened during the last few thousand years to our perception of a more traditional resource -- food. Using nothing but one's bare hands, teeth and stomach for obtaining, breaking up, and consuming naturally grown food is quite unpopular in all modern societies for these very reasons. In the visual domain, contacts with objects that have not been intentionally enhanced for one's perception (in other words, looking at real, unmanipulated, unpainted objects without glasses) are still rather frequent for many people, but the enhancement process is gaining momentum, in both usage time and the intensity of the enhancements.

The rapid progress of technological artifacts and the still-stagnant construction of the human body create an imperative for the continuing gradual migration of all aspects of human functionality beyond the boundaries of the biological body, with human identity becoming increasingly exosomatic (non-biological).

Truth vs. Convenience

Enhanced Reality could bring good news to privacy lovers. If the filters prove sufficiently useful to become an essential part of the [post]human identity architecture, the ability to filter information about your body and other possessions out of the unauthorized observer's view may be implemented as a standard feature of ER client software. In Privacy-Enhanced Reality, you can be effectively invisible.

Of course, unless you are forced to "wear glasses", you can take them off any time and see things the way they "are" (i.e., processed only by your biological sensors and filters that had been developed by the blind evolutionary process for jungle conditions and obsolete purposes). In my experience, though, people readily abandon the "truth" of implementation details for the convenience of the interface and, as long as the picture looks pleasing, have little interest in peeking into the binary or HTML source code or studying the nature of the physical processes they observe -- or listening to those who understand them. Most likely, your favorite window into the real world is already not the one with the curtains -- it's the one with the controls...

Many people seem already quite comfortable with the thought that their environment might have been purposefully created by somebody smarter than themselves, so the construction of ER shouldn't come to them as a great epistemological shock.

Canonization of chief ER engineers (probably, well-deserved) could help these people combine their split concepts of technology and spirituality into the long-sought-after "holistic worldview".

Biofeedback and Self-Perception

Perception enhancements may also be used for augmenting people's view of their favorite object of observation -- themselves. Biological evolution has provided us with a number of important self-sensors, such as physical pain, that supply us with information about the state of our bodies, restrict certain actions and change our emotional states. Nature invented these to push our primitive ancestors into taking actions they wouldn't be able to select rationally. Unfortunately, pain is not a very accurate indicator of our bodily problems. Many serious conditions do not produce any pain until it is too late to act. Pain focuses our attention on symptoms of the disease rather than causes, and is non-descriptive, uncontrollable, and often counterproductive.

Technological advances may provide us with the informational, restrictive and emotional functions of pain without most of the above handicaps. Indicators of important, critical, or abnormal bodily functions could be put on output devices such as a monitor, watch or even your skin. It is possible to restrain your body slightly when, for example, your blood pressure climbs too high, and to emulate other restrictive effects of pain. It may also be possible to create "artificial symptoms" of some diseases. For example, showing a patient a graph demonstrating a spectral divergence of his alpha and delta rhythms that may indicate some neurotransmitter deficiency may not be very useful. It would be much better to give the patient a diagnostic device that is easier to understand and more "natural-looking":

- "Hello, Doctor, my toenails turned green!"
- "Don't worry, it's a typical arti-symptom of the
XYZ condition, I'm sending you the pills".
(Actually, a watch may serve a lot better than toenails as a display.)
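As a sketch of this idea, an "artificial symptom" layer could be little more than a table mapping hard-to-read internal indicators onto familiar displays. The readings, thresholds and messages below are invented placeholders, not medical values:

    # (reading name, threshold, "natural-looking" symptom shown to the patient)
    ARTI_SYMPTOMS = [
        ("alpha_delta_divergence", 0.6, "watch face turns green: call your doctor"),
        ("blood_pressure_systolic", 160, "watch face pulses red: sit down and rest"),
    ]

    def artificial_symptoms(readings):
        """Return the displayable symptoms triggered by the current readings."""
        return [message
                for name, threshold, message in ARTI_SYMPTOMS
                if readings.get(name, 0) > threshold]

    print(artificial_symptoms({"alpha_delta_divergence": 0.75,
                               "blood_pressure_systolic": 120}))
    # -> ['watch face turns green: call your doctor']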

Sometimes, direct feedback generating real pain may be implemented for patients who do not feel it when their activities approach dangerous thresholds. For example, a non-removable, variable-strength earclip that would cause increasing pain in your ear when your blood sugar climbs too high may dissuade you from having that extra piece of cake. A similar clip could make a baby cry out for help every time its EKG readings go bad. A more ethical solution with improved communication could be provided by attaching this clip to the doctor's ear. "I feel your pain..."
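A graded version of such a feedback clip might simply scale its intensity with how far a reading exceeds its safe limit; the numbers here are arbitrary placeholders, not medical values:

    def clip_feedback(blood_sugar, safe_limit=8.0, max_intensity=1.0):
        """Return earclip intensity (0..1), rising with the excess over the safe limit."""
        excess = blood_sugar - safe_limit
        if excess <= 0:
            return 0.0
        return min(max_intensity, excess / 4.0)   # full strength at 4 units over the limit

    for reading in (6.5, 9.0, 13.0):
        print(reading, "->", clip_feedback(reading))
    # 6.5 -> 0.0, 9.0 -> 0.25, 13.0 -> 1.0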

Similar techniques could be used to connect inputs from external systems to human biological receptors. Wiring exosomatic sensors to our nervous systems may allow us to better feel our environments, and start perceiving our technological extensions as parts of our bodies (which they already are). On the other hand, poor performance of your company could now give you a real pain in the neck...

Distant Future

Subsequent technological advances in ER, biofeedback and other areas will lead to a further blurring of the demarcation lines between biological and technological systems, bodies and tools, selves and possessions, personalities and environments. These advances will eventually bring to life a world of complex self-engineered interconnected entities that may keep showing emulated "natural" environments to the few remaining [emulations of?] "natural" humans, who would never look behind the magic curtain for fear of seeing that crazy functional soup...

Terminological Exercises: ER <-> EP ->...-> IE -> ?! -> .

You must realize that most ER technologies suggested in this article have little to do with changing reality and everything to do with changing our perception of it. Though ER techniques still change the-world-as-we-see-it, it would be more accurate to call them EP, for Enhanced Perception, and reserve the term ER for conceptualizing traditional technologies. The traditional technologies have always been aimed at improvement of human perception of the environment, from digestion of physical objects by the stomach (cooking) to digestion of info-features by the brain (time/clock). Since there is hardly any functional difference in how and at what stage the clock face and other images are added to our view of the world, and as the technologies will increasingly intermix, an appropriate general term may be Enhanced Interface of Self with the Environment - and, as in the case of biofeedback, the Enhanced Interface of Self with Self.

With future waves of structural change dissolving the borders between self and environment, the term may generalize into Harmonization of Structural Interrelations. Still later, when interfaces become so smooth and sophisticated that human-based intelligence will hardly be able to tell where the system core ends and interface begins, we'd better just call it Improvement of Everything. Immediately after that, we will lose any understanding of what is going on and what constitutes an improvement, and should not try to name things anymore. Not that it would matter much if we did...

Social Implications

We can imagine that progress in human information processing will face some of the usual social difficulties. Your angry "Klingon" relatives may find unexpected allies among "proboscically enhanced" (a.k.a. long-nosed) people protesting against using their alternative standard of beauty as a negative stereotype. The girl next door may be wary that your "re-clothing" filters leave her in Eve's dress. Parents could be suspicious that their clean-looking kids appear to each other as tattooed skin-heads or bloodthirsty demons, or that they replace their obscenity masks with the popular "Beavis and Butthead" obscenity-enhancement filter. Extreme naturalists will demand that the radiant icons of the Microsoft logo and Coca-Cola bottle gracefully crossing their sky be replaced by sentimental images of the sun and the moon that once occupied their place. Libertarians would lobby their governments for "freedom of impression" laws, while drug enforcement agencies may declare that the new perception-altering techniques are just a technological successor of simple chemical drugs, and should be prohibited for not providing an approved perception of reality.

My readers often tell me that if any version of Enhanced, Augmented or Annotated Reality gets implemented, it might be abused by people trying to manipulate other people's views and force perceptions upon them. I realize that all human history is filled with people's attempts to trick themselves and others into looking at the world through the wrong glasses, and new powerful technologies may become very dangerous tools if placed in the wrong hands, so adding safeguards to such projects seems more than important.

Unfortunately though, a description of any idea sufficiently complex for protecting the world from such disasters wouldn't fit into an article that my contemporaries would take time to read. So I just do what I can -- clean my glasses and observe the events -- and share some impressions.


________________________________________________

If you are interested in my more general and long-term views on evolution of intelligence, personhood and identity, you can access my essays on Cyborgs and Mind Age and other resources related to these topics via my Web home page at http://www.lucifer.com/~sasha/home.html.

I am grateful to Ron Hale-Evans, Bill Alexander, and Gary Bean for inspiration and discussions that helped me shape this text.


