Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Minsky's new article (was: Roger Penro
Message-ID: <jqbCzH4rx.CK8@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <39d8g2$dlm@coli-gate.coli.uni-sb.de> <jqbCzG3K0.85K@netcom.com> <1994Nov18.134842.27593@oxvaxd>
Date: Fri, 18 Nov 1994 17:33:33 GMT
Lines: 78

In article <1994Nov18.134842.27593@oxvaxd>,  <econrpae@vax.oxford.ac.uk> wrote:
>
>
>In article <jqbCzG3K0.85K@netcom.com>, jqb@netcom.com (Jim Balter) writes:
>> In article <CzDqLI.686@cogsci.ed.ac.uk>,
>> Jeff Dalton <jeff@aiai.ed.ac.uk> wrote: 
>>>So what if you have to do some quantum mechanical stuff rather
>>>than just run programs?
>>>Why is that such a flame-generating
>>>issue?
>> There are two basic reasons, Jeff.  <Reason One snipped>
>>                                                           Two, scientists
>> intuitively understand the importance of Occam's Razor and accurate models
>> to their pursuits.  The Church-Turing thesis has great explanatory power,
>> and challenges to it must be taken very seriously.  For similar reasons
>> psychophysics (paranormal abilities, the Copenhagen Interpretation,
>> Sarfattiism, etc.) gets such a strong reaction, because it has implications
>> for basic models.
>
>hmm? Does Sarfattiism _really_ challenge the Church-Turing thesis? That thesis
>simply says that any function which can be computed by an algorithm can be
>computed by a Turing machine. I don't see why any of this Sarfatti fizzix
>should challenge that.

I said "for *similar* reasons", not the same reasons.  The issue is Occam's
Razor and basic models, not CT per se.  Searle and Penrose challenge CT;
Sarfatti psychophysics does not, that I know of.

>More importantly, I don't even see why Sarfatti's stuff should be regarded as a
>threat to the possibility of AI. All the evidence that is invoked by Sarfatti,
>Penrose et al is physical and biological evidence. All that this evidence can
>_possibly_ show is that
>	
>(1) Humans manage to have certain mental states because of quantum-physical
>properties.

How could the evidence possibly show that?  That's not the direction of
the arguments from Searle and Penrose.  They argue that, in order to have
certain mental states, the "standard" Church-Turing stuff won't do.
They then hypothesize what might yield such states, meat for Searle and
quantum physics for Penrose.

>It does not and cannot follow from (1) that
>
>(2) In order for a creature to have mental state X, that creature must have
>certain quantum-physical properties

Tell that to Penrose and Searle.  Searle has certainly argued that, in order for
a creature to have mental state X, that creature must be made of meat.

>In other words, just because human brains do something in one way (a
>quantum-physical way) it is quite possible that computers could do that thing
>in a different way (without using quantum effects). (Caveat: this does not
>apply to EPR).

Searle and Penrose both say that computers *cannot* do it.  That's the whole
point. That's why there's an argument.  Strong AI folks do not say that
microtubules cannot possibly be involved in human mentation, or that quantum
mechanics definitely doesn't play a role, only that it isn't *necessarily* so,
because computation is sufficient.  Searle and Penrose propose various
thought experiments and other "logical" arguments saying that computation is
*not* sufficient.

>Now (2) does indeed threaten AI. But physical and biological evidence is
>irrelevant to (2). The proper way to argue for (2) is by claiming that there
>are certain sorts of problems which cannot be solved in polynomial time unless
>we use a quantum computer.

You also have to argue that humans are able to solve such problems in polynomial
time.  But all such arguments based upon *behavior* fall flat, since human
behavior is finite and thus modelable with an FSM.  The fact that humans can
sometimes get the right answers to certain elements of certain classes of
problems is not terribly interesting, although fuzzy thinkers from Lucas to
Penrose have believed it to be.
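To make the FSM point concrete, here is a minimal sketch (mine, not Balter's;
the sample transcript is invented): any *finite* record of stimulus-response
behavior, including history-dependent behavior, can be replayed exactly by a
finite-state machine whose states are just the input histories seen so far.
Nothing quantum is required to reproduce the record.

```python
# Invented example transcript: responses keyed by the full input history.
# Because the record is finite, the set of reachable histories (states)
# is finite, so this is a finite-state machine.
transcript = {
    ("2+2?",): "4",
    ("2+2?", "prove Fermat?"): "give me a minute",
}

class TableFSM:
    """Trivial FSM: current state = tuple of inputs seen so far."""
    def __init__(self, table):
        self.table = table
        self.history = ()

    def step(self, stimulus):
        self.history += (stimulus,)
        return self.table.get(self.history, "<unrecorded behavior>")

m = TableFSM(transcript)
print(m.step("2+2?"))           # 4
print(m.step("prove Fermat?"))  # give me a minute
```

Of course this says nothing about *how* the behavior is produced, only that
finite behavioral evidence alone cannot force a super-Turing explanation.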


-- 
<J Q B>
