Re: virus: Re: parroting

Nathan Russell (frussell@frontiernet.net)
Sun, 23 Aug 1998 20:04:25 -0400


Eric Boyd wrote:

> Hi,
>
> "the great tinkerer" <gr8tinkerer@hotmail.com> wrote:
> > artificial intelligence cannot work until you can program a
> > reaction to every action.
>
> Pardon me, but bullshit! All one needs is a reaction scheme -- general
> rules, and a fall-back position. Whether one even needs that for
> "intelligence" is another question entirely -- do humans have a reaction
> for every action? I doubt it...
>
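
A reaction scheme like that is cheap to build, too. Here's a minimal sketch
in Python (the rules themselves are made up; the point is just general rules
plus a fall-back, not a programmed reaction to every action):

    # A tiny rule-based "reaction scheme": general rules plus a fall-back.
    # The specific rules are hypothetical -- what matters is that unmatched
    # actions drop through to the fall-back position.

    def rule_greeting(action):
        if "hello" in action.lower():
            return "hello back"

    def rule_question(action):
        if action.endswith("?"):
            return "let me think about that"

    RULES = [rule_greeting, rule_question]

    def react(action):
        for rule in RULES:
            reaction = rule(action)
            if reaction is not None:
                return reaction
        return "shrug"  # the fall-back position

    print(react("Hello there"))  # hello back
    print(react("What is AI?"))  # let me think about that
    print(react("xyzzy"))        # shrug
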
> > intelligence is the ability to react.
>
> In that case, certain forms of bacteria score highly on intelligence
> tests...
>
> > intelligence feeds off of consciousness and awareness.
>
> No... both consciousness and awareness are *properties* of intelligence.
>
> > we learn from observing: a book, a person, an email etc.
> > can you teach a computer to program itself by what it sees?
>
> Vision, perhaps not yet (anybody read anything more about the Ping-Pong
> robot?). However, if you're looking for computers which learn from their
> past actions (and the actions of those they encounter), you need look no
> farther than the chess programs. There was also a checkers program in the
> 1950s -- Arthur Samuel's -- which "bootstrapped" itself so far that it
> easily beat its programmer...
>
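
The trick in that checkers program was, roughly, surprisingly small: nudge
the weights of a linear board-evaluation function toward the evaluation of
the position that actually followed it in play. A toy sketch of that kind
of update (the features, game trace, and learning rate here are all
invented for illustration):

    import random

    def evaluate(weights, features):
        # Linear evaluation: weighted sum of board features.
        return sum(w * f for w, f in zip(weights, features))

    def learn_from_game(weights, positions, alpha=0.05):
        # positions: feature vectors in the order they occurred in one game.
        # Move each position's evaluation toward its successor's.
        for before, after in zip(positions, positions[1:]):
            error = evaluate(weights, after) - evaluate(weights, before)
            for i, f in enumerate(before):
                weights[i] += alpha * error * f
        return weights

    # e.g. features = (material balance, mobility, center control)
    weights = [random.uniform(-1, 1) for _ in range(3)]
    game = [(0.0, 0.2, 0.1), (0.1, 0.3, 0.2), (0.5, 0.4, 0.3)]  # made-up trace
    print(learn_from_game(weights, game))
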
> > I'd like to see a computer that can install a new hard drive in
> > itself when it runs out of disk space, and can fix itself when
> > it has a problem.
>
> And I'd like to see a human who could grow a second brain, or even just
> regenerate a missing limb. My point is that humans don't know how to
> extend themselves yet either... although we're working on it!
>

Are you implying that earthworms or sponges -- both of which can regenerate
very well -- are superior to us?

>
> > that computer is the first step towards artificial intelligence,
> > the next step of course would be to "teach it to learn."
>
> Again, computers that learn from their own encounters have been built many
> times[1] -- a defining property of intelligence, yes, but it's just not
> quite enough.
>
> Real AI, as Hofstadter says, will involve self-reference (consciousness)
> and "chunking" up. (*induction*/generalization has proven to be a
> difficult concept for AI programs...)
>
> ERiC
>
> [1] In fact, I'm working on such a program myself right now -- an
> evolutionary version of my AI engine for my 3-dimensional tic-tac-toe
> program. Even just the static version usually beats me, but I'm not
> happy, because it's clear that its offense is not very good; its defense
> just prevents me from ever winning...
>
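
For what it's worth, one cheap way to do "an evolutionary version" is plain
hill-climbing on the evaluation weights: mutate, play the mutant against the
parent, keep the winner. A sketch of that idea (play_match() here is a
stand-in for the real engine, and the numbers are arbitrary):

    import random

    def play_match(weights_a, weights_b):
        # Stand-in for an actual game between two engines: pretend that
        # weights closer to some "true" values win the match.
        target = [1.0, -0.5, 0.25]
        def loss(w):
            return sum((wi - ti) ** 2 for wi, ti in zip(w, target))
        return loss(weights_a) < loss(weights_b)

    def evolve(weights, generations=200, sigma=0.1):
        for _ in range(generations):
            mutant = [w + random.gauss(0, sigma) for w in weights]
            if play_match(mutant, weights):
                weights = mutant  # the mutant dethroned the parent
        return weights

    print(evolve([random.uniform(-1, 1) for _ in range(3)]))
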

-Nathan