Re: virus: Re: Rationality

Alex Williams
Fri, 7 Mar 1997 09:14:35 -0500 (EST)

> > Precisely, just bits of memetic programming/memetic agents interacting.
> > Tell me, can you predict the exact results of, say, the Amazon river
> > basin, over the next thirty-thousand years?
> Not personally, but I'm sure a reasonable approximation is possible.

But I'm not asking for a rough approximation; I can make a rough
approximation of /your/ behaviour over the next 5yrs given sufficient
knowledge of you. Does that invalidate your `free will'? I'm asking
for an exact prediction, an exact statement of the state of all the
entities within the Amazon river basin over 30,000yrs. Surely there
are fewer discrete biological entities in the ARB than memes in our
heads, and they interact much, much more slowly.

> Are you talking about the life within the river basin, or the actual
> shape and flow of the river itself? Sorry, you've confused me as
> to where this is coming from.

The river itself would be too easy ... I'm referring to the state and
existence/offspring of every biological agent within the ARB. In
other words, it's `sufficiently complex' that you cannot predict its
exact future states, much like your mind, and as such the `fate' of
the ARB is indistinguishable from your `free will.'

> > Can you predict what a Tierran environment will look like after
> > 50,000,000 generations?
> Given enough time, I suspect that it would be possible to recreate
> a number of possible outcomes.

But that's not good enough, you see. I can do that in regard to your
mind. Does that invalidate your `free will'?

> The difference (and I'm assuming here, because I don't know) is that
> we think we are conscious. Or at least, I do. I don't know about
> you and your island :) I could be deluding myself that I'm conscious,
> but to delude myself, what would I have to delude? My consciousness
> presumably. Ok, now break away from that for a moment, and turn to
> the computer OS. Your computer's not sentient/conscious (that
> was the assumption I made). If it were, it would be akin to some
> kind of killing to unplug it :) If, therefore, we connect the
> 2 ideas, and model ourselves as just highly complex operating systems,
> we run into the problem of our self-awareness. This is why I
> think that we cannot be /just/ a meme-complex.

The meme <I'm conscious> needs absolutely no referent in the Real
World(tm). It's just a very ingrained meme that affects a lot of the
behaviours that your memetic emergence produces. It's not a delusion,
it's just not true, which is an entirely different thing.

If I create a program that exhibits all the traits of life, is it
murder to terminate the program? Tierra itself models organisms that
it is possible to define as `life.' If I write a program that insists
that it's conscious, how are you going to prove to /it/, not
necessarily to an observer but to /it/, that it's not?

Self-awareness is just an internal model of ourselves; it's really not
all that spectacular a trick at all, not some vital essence but just a
"good trick", as others have discussed in other threads. It's good for
both modeling yourself in a predictive fashion and double-checking
your modeling assumptions as you run the model on past experiences.
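That two-way use of a self-model can be sketched in a few lines of Python (my own hypothetical sketch; the names SelfModel, predict, and recalibrate are inventions for illustration):

```python
# Hypothetical sketch: self-awareness as an internal self-model used
# (1) predictively, to guess what "I" would do, and (2) retrospectively,
# to double-check the model against what actually happened.

class SelfModel:
    def __init__(self):
        self.history = []   # (situation, actual_action) pairs seen so far
        self.expected = {}  # situation -> action the model predicts

    def predict(self, situation):
        # Predictive use: what do I think I would do here?
        return self.expected.get(situation, "unknown")

    def recalibrate(self, situation, actual_action):
        # Double-checking use: fold a past experience back into the model.
        self.history.append((situation, actual_action))
        self.expected[situation] = actual_action

m = SelfModel()
m.recalibrate("offered chocolate", "accept")
print(m.predict("offered chocolate"))  # -> "accept"
```

Nothing here is spectacular, which is rather the point: the "good trick" is just bookkeeping over your own past behaviour.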

Self-awareness appears to be just a good software trick;
consciousness requires only that the entity assert that it's
conscious. I don't see a catch.