Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!vlsi_lib
From: vlsi_lib@netcom.com (Gerard Malecki)
Subject: Re: Strong AI and consciousness
Message-ID: <vlsi_libCzHpFF.Cz9@netcom.com>
Organization: VLSI Libraries Incorporated
References: <vlsi_libCzHB5I.Fn7@netcom.com> <3aj4a9$9ct@mp.cs.niu.edu>
Date: Sat, 19 Nov 1994 00:59:39 GMT
Lines: 68

In article <3aj4a9$9ct@mp.cs.niu.edu> rickert@cs.niu.edu (Neil Rickert) writes:
>In <vlsi_libCzHB5I.Fn7@netcom.com> vlsi_lib@netcom.com (Gerard Malecki) writes:
>
>>Should computation be temporal in order to achieve consciousness,
>>going by the definition of strong AI? In other words, should
>>states in the program trace have an isomorphic mapping to real
>>time, so that causality in the states implies a physical causality
>>in the agent performing the computation? What if the states are
>>played backwards?
>
>Playing states backwards!  What a neat idea.  I will get an encrypted
>password from /etc/passwd, get the crypt(3) library function, and
>have it play the states backwards.  I never realized that breaking
>encryption could be so easy. :-)
>
>Clearly there is something wrong with the idea of playing things
>backward.

Not if you believe in anti-causal computers. The fact that they do not 
exist in the real world is a different issue.
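To make the point concrete (a toy sketch of my own, not crypt(3) itself):
"playing the states backwards" just means enumerating a recorded trace in
reverse order. No preimages of the step function are computed, so nothing
gets inverted or broken.

```python
# Toy sketch: a "program trace" as a recorded list of states, with an
# irreversible step function standing in for crypt(3).

def step(x):
    # lossy step: many inputs map to the same output, so it has no inverse
    return (x * x) % 97

def run(x0, n):
    """Execute n steps forward, recording every state."""
    trace = [x0]
    for _ in range(n):
        trace.append(step(trace[-1]))
    return trace

forward = run(5, 4)
backward = list(reversed(forward))

# The backward "playing" contains exactly the same states in reverse
# order; it re-enacts no computation and recovers no hidden input.
```

Whether such a reversed enumeration could be conscious is precisely the
question at issue; that it cannot break encryption is beside the point.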

>
>>I personally do not believe that there could be a conscious stream
>>existing in social security numbers. Unfortunately, strong AI does
>>not seem to address this problem.
>
>Why should it address the problem?  Incidentally, the term "strong AI"
>was invented by Searle, so that he could criticize it.  Whether there
>is such a discipline as strong AI which could answer anything is
>uncertain.
>
>>                                  For strong AI is self-contained in
>>its mathematical abstractions that do not depend on physical reality.
>
>That is one of its strong points.  This means that the validity of AI
>assumptions does not depend on whether realism, Berkeley's idealism,
>or the Cartesian evil demon provides the best description of the way
>things are.  It would be more troublesome if strong AI did require
>realism, for then it would depend upon unproven assumptions.
>
>>I believe it is physics and reality that provides the beef to
>>consciousness. A program trace could only be as conscious as the
>>blueprint of a passenger aircraft could be expected to carry
>>real passengers.
>
>Very few people, if any, would claim that a mere program trace could
>be conscious. 
>

Which only reinforces my viewpoint. From the above, I assume that you
either conclude that strong AI cannot produce consciousness or draw a
distinction between program execution and program trace. But what *is*
program execution? It is a program trace imported into physical reality.
If you claim that the former leads to consciousness while the latter
does not, you can no longer claim that strong AI is independent of
physical realism. The same applies to causality. If strong AI is indeed
decoupled from realism, there is no reason why states played backwards
should not exhibit consciousness, as long as the mapping between state
indices and real time remains isomorphic. If you try to impose on the
execution a *physical* causality relation in direct correspondence to
the causality relation among the states, you cannot justify it without
giving up the basic tenets of strong AI.
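The distinction I am drawing can be sketched in a few lines (my own toy
illustration; the names `execute` and `program` are hypothetical, not
anything from the thread):

```python
# An "execution" unfolds states one after another in physical time;
# the resulting "trace" is the same state sequence as inert data.

def execute(program, state):
    """Run each instruction in order, recording every intermediate state."""
    trace = [state]
    for instr in program:
        state = instr(state)
        trace.append(state)
    return trace

program = [lambda s: s + 1, lambda s: s * 2, lambda s: s - 3]
trace = execute(program, 10)

# Once recorded, the trace can be enumerated in any order, at any speed:
# the state-to-state causal relation is not re-enacted, only listed.
replayed_backwards = list(reversed(trace))
```

If consciousness attaches to the execution but not to the trace, then
something physical about the unfolding must be doing the work, which is
exactly what strong AI's abstraction is supposed to exclude.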


Shankar Ramakrishnan
shankar@vlibs.com

