Re: virus: Re: parroting

Sebastian Kinsey (drsebby@hotmail.com)
Tue, 25 Aug 1998 11:37:14 PDT


>From: Eric Boyd <6ceb3@qlink.queensu.ca>
>Date: Sat, 22 Aug 1998 14:16:21 -0400
>To: virus@lucifer.com
>Subject: Re: virus: Re: parroting
>
>Hi,
>
>"the great tinkerer" <gr8tinkerer@hotmail.com> wrote:
>> artificial intelligence cannot work until you can program a
>> reaction to every action.
>
>Pardon me, but bullshit! All one needs is a reaction scheme -- general
>rules, and a fall-back position. Whether one even needs that for
>"intelligence" is another question entirely -- do humans have a reaction
>for every action? I doubt it...
>
>> intelligence is the ability to react.
>
>In that case, certain forms of bacteria certainly score highly on
>intelligence tests...
>
>> intelligence feeds off of consciousness and awareness.
>
>No... both consciousness and awareness are *properties* of intelligence.
>
>> we learn from observing: a book, a person, an email etc.
>> can you teach a computer to program itself by what it sees?
>
>Vision, perhaps not yet (anybody read anything more about the Ping-Pong
>robot?). However, if you're looking for computers which learn from their
>past actions (and the actions of those they encounter) you need look no
>farther than the chess programs. I believe there was also a checkers
>program in the 1950s (Arthur Samuel's) which "bootstrapped" itself so far
>that it easily beat its programmer...
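
[The idea of a program learning from its own past games can be sketched
very compactly. Below is a toy self-play learner for the game of Nim
(take 1-3 sticks from a heap; whoever takes the last stick wins) -- it is
in the spirit of Samuel's learner, not a reconstruction of it, and the
episode count and exploration rate are arbitrary choices.]

```python
import random

# (heap_size, move) -> (wins, plays) for the player who made the move.
wins = {}

def choose(heap, epsilon=0.1):
    """Pick a move: mostly the one with the best past win rate."""
    moves = [m for m in (1, 2, 3) if m <= heap]
    if random.random() < epsilon:
        return random.choice(moves)          # explore occasionally
    def rate(m):
        w, n = wins.get((heap, m), (0, 1))
        return w / n
    return max(moves, key=rate)              # exploit past experience

def play():
    """One self-play game from a heap of 10; record every move's outcome."""
    heap, history, player = 10, {0: [], 1: []}, 0
    while heap > 0:
        m = choose(heap)
        history[player].append((heap, m))
        heap -= m
        player ^= 1
    winner = player ^ 1                      # whoever just moved won
    for p in (0, 1):
        for key in history[p]:
            w, n = wins.get(key, (0, 0))
            wins[key] = (w + (p == winner), n + 1)

for _ in range(20000):
    play()

# With enough self-play, the greedy choice at heap 10 tends toward
# leaving the opponent a multiple of 4 -- Nim's known losing positions.
print(choose(10, epsilon=0))
```

[Nothing here was told the rules of good play; the table of win rates is
"bootstrapped" purely from the program's own past games.]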
>
>> id like to see a computer that can install a new hard drive in
>> itself when it runs out of disk space, and can fix itself when
>> it has a problem.
>
>And I'd like to see a human who could grow a second brain, or even just
>regenerate a limb which was missing. My point being humans don't know how
>to extend themselves yet either... although we're working on it!
>
>> that computer is the first step towards artificial intelligence,
>> the next step of course would be to "teach it to learn."
>
>Again, computers learning from their own encounters has been done many
>times[1] -- a defining property of intelligence, yes, but it's just not
>quite enough.
>
>Real AI, as Hofstadter says, will involve self-reference (consciousness)
>and "chunking" up. (*induction*/generalization has proven to be a
>difficult concept for AI programs...)
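
[Why induction/generalization is hard is easiest to see in the simplest
possible version of it: "chunking" specific examples into one more general
description. Below is the classic least-general-generalization over
attribute tuples -- the attributes and examples are made up for
illustration.]

```python
# Least general generalization: keep whatever all examples agree on,
# abstract everything else to a wildcard.

WILDCARD = "?"

def lgg(a, b):
    """Generalize two examples: agreement survives, difference is abstracted."""
    return tuple(x if x == y else WILDCARD for x, y in zip(a, b))

def generalize(examples):
    hypothesis = examples[0]
    for ex in examples[1:]:
        hypothesis = lgg(hypothesis, ex)
    return hypothesis

birds = [
    ("robin",   "flies", "lays-eggs"),
    ("sparrow", "flies", "lays-eggs"),
]
print(generalize(birds))  # ('?', 'flies', 'lays-eggs')
```

[Even this toy shows the trap: add a penguin and "flies" is wrongly
abstracted away too. Deciding *which* differences matter is exactly the
part AI programs find difficult.]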
>
>ERiC
>
>[1] In fact, I'm working on such a program myself right now -- an
>evolutionary version of my AI engine for my 3-dimensional tic-tac-tuc-toe
>program. Even just the static version usually beats me, but I'm not happy
>because it's clear that its offense is not very good; its defense just
>prevents me from ever winning...
>
>

I would suggest remembering another word that might alleviate some of
your dilemmas regarding the 'dualities' of intelligence -- "Wisdom". I'm
guessing that is the difficult one to quantify. Intelligence, it seems to
me, is a certain capacity for rational/logical process, whereas wisdom
might relate specifically to biological systems' capacity for pattern
recognition on an astronomical scale. Though we tend to attribute
'magical' qualities to these human talents, I would have to assume they
too are nothing more than number crunching on a grand scale -- likely
requiring that 'uniquely' biological approach, as opposed to the problem
solving engine of your typical computer.
