Language, Consciousness and Intelligence?
Friday, August 22, 2003
 
Marina Brown and Joseph O'Rourke correctly distinguish between various "levels" of functionalism. At some point, functionalism breaks -- but can we guess where?

"low-level" functionalism is the business of replacing individual neurons with functional-equivalent non-biological components. Intelligence (according to the acceptable response to this thought experiment) is maintained.

"high-level" functionalism is the business of placing a human with functional-equivalent non-biological components. (viz: Turing Test) Intelligence may be lost in this process. (i.e., we can have projected intelligence at work).
 
 
"Computing Machinery and Intelligence" -- A.M. Turing, 1950:

Turing's biggest mistake:
"Our hope is that there is so little mechanism in the child-brain that something like it can be easily programmed."

Dr. Turing spent very little time with children, I fear.

Turing's second biggest mistake:
Turing correctly labels as an 'easy contrivance' such devices as "the inclusion in the machine of a record of someone reading a sonnet, with appropriate switching to turn it on from time to time." But what, indeed, is the difference between this "contrivance" and the statistical capture of human behavior -- which represents the state of the art in language-oriented AI?

History's biggest mistake in Turing's wake
Turing makes reference to it: "The game is frequently used in practice under the name of viva voce to discover whether some one really understands something or has 'learnt it parrot fashion'."

We've apparently lost this ability; no wonder the proponents of Strong AI are so sanguine!

The Science Fair dismisses viva voce because 'parrot fashion' is apparently acceptable these days: when students construct an exhibit on "Magnetic Resonance Imaging," they are not expected to do anything beyond 'parrot fashion'!

Loebner Prize contestants simply change the subject -- never attempting to play the viva voce game at all!
 
 
Ran into the very interesting news that statistical methods now represent the state of the art in translation systems. How can we consider such systems to be "intelligent" -- even if they have a functional similarity to intelligence? Such systems (which also represent the state of the art in Speech Recognition and Robust Natural Language Processing) simply accumulate statistics on actual intelligent systems (i.e., humans), and then spit out a best-match response.
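The objection can be made concrete with a toy sketch -- invented data, nothing like a real translation system -- of a "translator" that models no meaning whatsoever, only co-occurrence counts over behavior it has observed:

```python
from collections import Counter, defaultdict

# Toy illustration (hypothetical data): a "translator" with no model of
# meaning. It counts which target word most often accompanied each source
# word in a tiny parallel corpus, then spits out the best match.
parallel_corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

counts = defaultdict(Counter)
for src, tgt in parallel_corpus:
    # Naive word-by-word alignment: count co-occurring pairs.
    for s, t in zip(src.split(), tgt.split()):
        counts[s][t] += 1

def translate(sentence):
    # Best-match lookup: emit the most frequently co-occurring word,
    # or echo the input word when nothing was ever observed.
    return " ".join(
        counts[w].most_common(1)[0][0] if counts[w] else w
        for w in sentence.split()
    )

print(translate("the dog eats"))  # "le chien mange" -- pure statistics
```

The point of the sketch: the system produces a plausible sentence it never saw, yet nothing in it "understands" anything -- it is the sonnet-recording contrivance with bigger tables.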

Some would be inclined to say that that is precisely how humans learn, too, but I am skeptical of the claim. I've had numerous occasions to stop in my tracks and ask my children, "Where did you get that from?" The answer is all too often "nowhere -- I made it up." Hardly a statistical learning response.
 
Language and Consciousness -- beyond Artificial Intelligence
  • Alan Turing's paper
  • David Chalmers' site
  • Ray Kurzweil's site
  • Daniel Dennett's site
  • John Searle's paper
  • Michael Webb's site
  • John McCrone's site
