Create a thinking machine
necrolyte
OK, this is bordering on philosophy, so I might post something similar in the philosophy section, but this will deal more with the scientific side.

How would one program a computer to:
(1) be self-aware
(2) come to a "best possible" solution when no perfect solution is findable
(3) have a desire to learn
(4) feel emotion
(5) send its thoughts through logical loops that consider previous memories and thoughts
(6) formulate opinions based on facts
Ferran
(1) I would think that the easiest way to make a computer self-aware is to program it to create variables (output), and then act upon the variables it creates... (input)? Though, I could be way off.

(2) Go to Google and search for something. It'll show percentages next to the pages you find, indicating relevancy. To make a computer choose the "best possible" solution, have it choose the one closest to 100%, by using whatever method Google uses.

(3) Oo, that's tricky. You'd have to figure out how to make it desire something, in the first place!

(4) Something to do with dealing with input?

(5) Have it programmed to save input and output in reference to each other, so that when it deals with a certain kind of input, its output can be determined by the closest input that it had received before (see the sketch after this post).

(6) I'm not gonna touch this one.

--- Of course, I'm no programmer, so I may be full of BS in regards to this... but most of it seems pretty simple, no?
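A minimal sketch in Python of the memory idea in point (5), assuming inputs can be compared as numeric vectors; the distance function and the sample data are invented purely for illustration:

CODE
# Sketch of idea (5): remember input/output pairs and answer new input
# with the output of the closest previously seen input.
# Assumes inputs are numeric vectors; the metric is a simple stand-in.
import math

memory = []  # list of (input_vector, output) pairs

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def remember(inp, out):
    memory.append((inp, out))

def respond(inp):
    if not memory:
        return None  # nothing seen yet
    _, out = min(memory, key=lambda pair: distance(pair[0], inp))
    return out

remember((0.0, 0.0), "stay")
remember((1.0, 0.0), "move right")
print(respond((0.9, 0.1)))  # -> "move right", the closest stored input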
Sephiroth
I'm a Computer Science major and this is way beyond my skills.
Telum
A computer that runs on binary circuits can't be self-aware. You need more than a collection of yes/no's to become aware.
Sephiroth
Computers don't have the speed needed to run such a program. It wouldn't be as fast as the human brain.

Just for a comparison, imagine two sets of animal lists, one memorized by a person and one saved on a hard drive. In the time it takes the computer to find one type of feline, the human would have found four or five. Computers are linear, while the brain contains many interconnected webs of information.

I don't think emotion would translate well to code.
zkajan
I remember reading that animals make decisions based on "fuzzy logic", that is to say, things influence their decisions which shouldn't necessarily do so. For example, in politics people will choose a leader they think has nice hair, even though his hair won't have any effect on his policies or their effect on the people.
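A toy Python sketch of that kind of "fuzzy" decision: each factor gets a degree of truth between 0 and 1, and a factor that arguably shouldn't matter still leaks into the final score. The weights and example numbers are made up for illustration:

CODE
# Fuzzy-ish preference: degrees of truth instead of hard yes/no,
# with an irrelevant factor (nice hair) deliberately mixed in.

def nice_hair(candidate):       # irrelevant factor, degree in [0, 1]
    return candidate["hair"]

def sound_policies(candidate):  # relevant factor, degree in [0, 1]
    return candidate["policy"]

def appeal(candidate):
    # Weighted blend: the input that "shouldn't" matter still does.
    return 0.3 * nice_hair(candidate) + 0.7 * sound_policies(candidate)

a = {"hair": 0.9, "policy": 0.4}
b = {"hair": 0.2, "policy": 0.6}
print(appeal(a), appeal(b))  # roughly 0.55 vs 0.48: hair tips the balance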
necrolyte
I use fuzzy logic when I'm trying to decide which kitten to buy... (ok horrible joke I know)


The human mind is in essence a super-complex biological computer. Recreating it, even in code, must be possible, as all thoughts which pass through our minds are coded. Our mind simply uses more complex codes.

It also uses logical circuits and networks, as we're saying.

So Telum, couldn't you use basic code as the foundation to construct more complex code? Say, the way basic binary code can be used as the foundation for different codes? Not being a computer expert, excuse my ignorance if I'm missing something.

Sephiroth: with computer technology advancing at the rapid rate it is, aren't technological limitations really only a temporary thing?

This is all like evolution. Basic nervous systems cannot do much, and are not self-aware. They only react in programmed ways, similar to how a computer acts. If you hit the H key, an H comes up in the word processor. If you stimulate a jellyfish's tentacle nerve, it retracts the tentacle to consume whatever the tentacle has caught.

Emotion is the last required brick, naturally. Being the most abstract, it would probably require us to more fully understand the methods we used to create its precursors... the self-awareness and capacity to learn.

...Ferran, I'm thinking about your ideas now and I'm kind of tired, so I'll respond later :)
Deus Ex Machina
QUOTE(Telum @ Sep 1 2004, 07:28 PM)
A computer that runs on binary circuits can't be self-aware. You need more than a collection of yes/no's to become aware.


On the yes/no level, that may be true. However, consider that your brain is a similar collection of binary circuits: if they get input X, they fire; otherwise, they don't (well, that may be a tad simplified...). It's once you start taking meaning out of the circuits (e.g. a string of hex into ASCII) that self-awareness and the like arise. On the switch level, the machine is still being a machine. However, the interactions between the switches hold a separate level of data.
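A toy illustration of that switches picture, in Python: a McCulloch-Pitts style threshold unit is a dumb yes/no on its own, but wiring three of them together computes XOR, which no single unit here computes alone. The particular weights and thresholds are just one workable choice:

CODE
# Each unit fires (1) only when its weighted input crosses a threshold.

def unit(weights, threshold, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor(x, y):
    a = unit([1, 1], 1, [x, y])      # fires on "x OR y"
    b = unit([1, 1], 2, [x, y])      # fires on "x AND y"
    return unit([1, -1], 1, [a, b])  # fires on "OR but not AND"

for x in (0, 1):
    for y in (0, 1):
        print(x, y, xor(x, y))  # 0, 1, 1, 0: more than the parts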

To attempt to answer necrolyte's question: the key wouldn't necessarily be to code in the behaviors you mentioned, but to create a system of organization of data which allows them to arise naturally.

Anywho, I just finished a book (Gödel, Escher, Bach: An Eternal Golden Braid, by Douglas R. Hofstadter) in which the author talks about AI, self-awareness, etc. It's a good read, and old/famous enough that a decently-sized library has a fairly good chance of carrying it.
libvertaruan
QUOTE
To attempt to answer necrolyte's question: the key wouldn't necessarily be to code in the behaviors you mentioned, but to create a system of organization of data which allows them to arise naturally.


It's called bottom-up programming, and that is, I believe, how our brains are programmed to learn.
Russian
QUOTE
How would one program a computer to:
(1) be self-aware


You narrow the scope. Do you want a computer to be 'human'? That's impossible.

How about an autopilot program? One that monitors its speed, its height, and the weather it's flying into, and changes its operations accordingly. It's as 'self-aware' as anything can get, and such programs do exist. You program algorithms for every possible situation, and then you rigorously test it to make sure you haven't forgotten anything. Lots of man-hours and lots of money later, you have a program that's self-aware, but only in a specific scope. It can still make faulty decisions.
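A sketch of what one step of such a narrow-scope autopilot might look like in Python; the state variables, limits, and actions are invented for illustration:

CODE
# Rule-based control: the program monitors its own state and applies
# pre-programmed responses. "Self-aware" only within this tiny scope.

def autopilot_step(state):
    actions = []
    if state["altitude"] < 1000:
        actions.append("climb")
    if state["speed"] < 140:
        actions.append("increase throttle")
    if state["weather"] == "storm":
        actions.append("reroute around weather")
    return actions or ["hold course"]

print(autopilot_step({"altitude": 800, "speed": 150, "weather": "clear"}))
print(autopilot_step({"altitude": 9000, "speed": 150, "weather": "storm"}))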

QUOTE
(2) come to a "best possible" solution when no perfect solution is findable


We can do that. Simple, actually. A program matches its input variables against the required conditions for it to take action. If the variables don't match up, it takes another pre-programmed action. With programs practically anything is possible except independent thought, i.e., a computer program can't write another program, as of today.
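A sketch of that "best possible" matching in Python: score every candidate against the required conditions and take the highest scorer, even when nothing satisfies 100% of them. The conditions and candidates are made up:

CODE
def score(candidate, requirements):
    met = sum(1 for key, want in requirements.items()
              if candidate.get(key) == want)
    return met / len(requirements)  # fraction of conditions satisfied

def best_solution(candidates, requirements):
    return max(candidates, key=lambda c: score(c, requirements))

requirements = {"cheap": True, "fast": True, "reliable": True}
candidates = [
    {"name": "A", "cheap": True,  "fast": False, "reliable": True},
    {"name": "B", "cheap": False, "fast": True,  "reliable": False},
]
winner = best_solution(candidates, requirements)
print(winner["name"], score(winner, requirements))  # best of an imperfect lot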

QUOTE
(3) have a desire to learn


Replace the word desire with design and you have a database management system.

QUOTE
(4) feel emotion


Sure, of course it's possible. But what's the point? There have actually been experiments on this: a robotic face was programmed to match the facial expressions of the people it was talking to.

QUOTE
(5) send its thoughts through logical loops that consider previous memories and thoughts


Been there, done that. Very simple.

QUOTE
(6) formulate opinions based on facts


Replace the term with 'can make decisions based on evidence' and we have the autopilot model above.

But why would you want a computer with 'opinions' and 'emotions'? If you want a friend, go and look for one in the streets.
necrolyte
I would want such a thing to be constructed to see if we can reproduce ourselves using pure technology, devoid of biology. As an experiment, if you will.
libvertaruan
QUOTE
But why would you want a computer with 'opinions' and 'emotions'? If you want a friend, go and look for one in the streets.



Russian, you are a fucking idiot for not understanding what this is about.
Deus Ex Machina
QUOTE(Russian @ Sep 1 2004, 10:07 PM)
You narrow the scope. Do you want a computer to be 'human'? That's impossible.

How about an autopilot program? One that monitors its speed, its height, and the weather it's flying into, and changes its operations accordingly. It's as 'self-aware' as anything can get, and such programs do exist. You program algorithms for every possible situation, and then you rigorously test it to make sure you haven't forgotten anything. Lots of man-hours and lots of money later, you have a program that's self-aware, but only in a specific scope. It can still make faulty decisions.


Why is it impossible for a computer system to mimic the human mind? It (the mind) certainly isn't made of base components much different from a computer's (neurons basically amounting to chemical switches). Regardless, I thought people still gave credit to the Church-Turing thesis.

A key part of what I define as self-awareness is the ability both to 'step back' and look at what you're doing, and to change your behavior based on it. While a sophisticated program of the type you describe may be able to find patterns in what it's doing and optimize its performance accordingly, a human would likely be able to do such a thing up through higher and higher levels, ad infinitum (within the limits of memory and time). In addition, a self-aware program would likely be able to combine behaviors in order to react to new situations (e.g.: situation X is similar to this new thing I'm seeing, so let's see if what I do in situation X will be of any help; alternatively: this new situation is like nothing I've seen before, but situation Y arose from somewhat similar conditions, so maybe my solution to situation Y will be of help; et cetera).


QUOTE
replace the word desire with design and you have a database management system.


There is a distinct difference between actively seeking out information on a topic one had no prior knowledge of (me browsing through a library) and seeking out information based on written instructions (Google spidering the internet).

QUOTE
sure. Of course its possible. But whats the point? Theres actually been experiments on this, a robotic face was programmed to match the facial expressions of people it was talking to.


Having a robot mimic facial expressions is useless unless there is another level of meaning behind the emotions.

QUOTE
But why would you want a computer with 'opnions' and 'emotions'? If you want a friend go and look for one in the streets.


Being able to develop a truly intelligent computer system would be a great way to study and understand how our minds work and how they evolved. Plus it would be pretty damn cool.
zaragosa
I personally think we're practically there. There's nothing in the human hardware that we can't reproduce (at a million times the efficiency, I might add). The only thing that makes humans different is the enormous amount of ready background knowledge that we simply pick up as we go along (commonly called 'creativity' and 'inspiration' and such). The fuzzy logic (the non-logical combination of information), once we figure out how humans use it, can easily be reproduced.
Llywelyn
There is a difference between the model and the system.

We can make a system behave as if it is intelligent, vaguely, but is it actually intelligent, or is it just mimicking intelligent behavior?

I will go out on a limb here and throw my meager vote behind Penrose in saying that intelligence is fundamentally non-computational in nature, which means no collection of computational circuits will ever be "intelligent."

EDIT:
This is not to say, for the record, that intelligence will never be artificially generated. Only that it will never be replicated inside of a Turing-based system. Quantum computers or future developments in physics may bring us there, but we aren't there with today's technology.
Nalvaros
You'd probably want to look up artificial intelligence, which I don't have any experience in.

However, based on what I know about programming I can see no way any existing programs can create an intelligent entity.

Right now, when we write a program, we are basically writing a set of instructions: do this when that happens. I don't believe a set of instructions is capable of becoming self-aware. Certainly we could conceivably write an incredibly complex program that had, say, a response to a million different permutations of a situation, and that, based on the response it makes and the resultant input it gets, compared with a "desired" input, might switch to different responses in the future. But while such a program might mimic intelligence realistically, at the end of the day it is merely following instructions. It isn't making a choice.
Russian
What if its instructions are to make a choice?


Vis-à-vis Blue Junior?
Llywelyn
QUOTE(Russian @ Sep 2 2004, 09:01 AM)
What if its instructions are to make a choice?
Vis-à-vis Blue Junior?



...and then what?

How does it make that choice?

Why?
Russian
For it to win more than one chess game it has to make different choices. Otherwise it would be predictable and easily defeatable.


Don't know the mechanism behind it, though.


Llywelyn
QUOTE(Russian @ Sep 2 2004, 09:10 AM)
For it to win more than one chess game it has to make different choices. Otherwise it would be predictable and easily defeatable.
Don't know the mechanism behind it, though.



The mechanisms that it uses (alpha-beta searches, opening books, kill tables, etc.) are very far removed from sentience or from the set of techniques used by people.
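For reference, a bare-bones Python sketch of such an alpha-beta search: exhaustive game-tree evaluation with pruning, and nothing resembling sentience. The "game tree" here is a made-up nested list whose leaves are position scores:

CODE
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):           # leaf: a static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if beta <= alpha:                # opponent never allows this line
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if beta <= alpha:
            break
    return value

tree = [[3, 5], [6, [9, 2]], [1, 4]]
print(alphabeta(tree, float("-inf"), float("inf"), True))  # -> 6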
Telum
QUOTE(zaragosa @ Sep 2 2004, 05:56 AM)
I personally think we're practically there. There's nothing in the human hardware that we can't reproduce (at a million times the efficiency, I might add). The only thing that makes humans different is the enormous amount of ready background knowledge that we simply pick up as we go along (commonly called 'creativity' and 'inspiration' and such). The fuzzy logic (the non-logical combination of information), once we figure out how humans use it, can easily be reproduced.




If we can reproduce everything with so much more efficiency, why aren't there artificial kidneys or livers on the market?
Deus Ex Machina
QUOTE(Llywelyn @ Sep 2 2004, 04:08 AM)
There is a difference between the model and the system.
I will go out on a limb here and throw my meager vote behind Penrose in saying that intelligence is fundamentally non-computational in nature, which means no collection of computational circuits will ever be "intelligent."



Would it be possible for Jew to explain Penrose's argument for those of us not familiar with it? Google isn't being too helpful, and Wikipedia leaves it at
QUOTE
Some (including Roger Penrose) attack the applicability of the Church-Turing thesis. Others say the mind is not completely physical. Roger Penrose's argument rests on the conception of hypercomputation being possible in our universe. Quantum mechanics and Newtonian mechanics do not allow hypercomputation, but it is thought that some strange spacetimes would. However, there seems to be agreement that our universe is not sufficiently convoluted to allow such hypercomputation.
Llywelyn
QUOTE(Deus Ex Machina @ Sep 2 2004, 04:33 PM)
Would it be possible for Jew to explain Penrose's arguement for those of us not familiar with it? Google isn't being too helpful, and Wikipedia leaves it at



In a nutshell he claims that the brain is functionally noncomputational and that while a computational system may be able to mimic intelligence, it cannot actually be intelligent. His claim is that there cannot be strong AI in a computational system.

Now, a Quantum Computer gets past some of his objections and I am less certain whether a quantum algorithm could be intelligent when executed (and not just simulated), but that is a point that is separate from whether it can be done in a set of circuits :)
kindfluffysteve
The best way to program it is to let it program itself.

Use a genetic algorithm.

This route is what I think will lead to digital sentience.
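A minimal Python sketch of that route: a genetic algorithm mutating and selecting candidate strings toward a toy target. The target, alphabet, and fitness function are stand-ins for illustration, not a recipe for sentience in themselves:

CODE
import random

TARGET = "thinking machine"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

# Start from random noise; nobody "programs in" the answer.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(100)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:20]                     # selection
    population = [mutate(random.choice(survivors))  # reproduction + mutation
                  for _ in range(100)]

print(generation, max(population, key=fitness))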
kindfluffysteve
People can talk about the 1's and 0's as just being fundamentally 1's and 0's,

but this is just an unnecessary way to think about it.

To think in this way is to avoid the idea of something being more than the sum of its parts.

Why can't a collection of 1's and 0's be more than the sum of its parts?

The difference between data and information/knowledge: data is just meaningless 1's and 0's; knowledge is a network of data.

Data structures are things that really do exist. They mean something, they describe something, and yet they are strangely ethereal. An individual bit means nothing; it's the organisation that matters.
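A small Python illustration of that distinction: the same bits are meaningless as raw numbers, meaningful once interpreted, and part of knowledge once linked into a network. The toy "knowledge" entries are invented:

CODE
raw = bytes([104, 105])     # just numbers: "meaningless" data
print(list(raw))            # [104, 105]
print(raw.decode("ascii"))  # "hi" -- the same bits, now interpreted

# Knowledge as a network of data: meaning lives in the links, not the bits.
knowledge = {
    "hi": {"is_a": "greeting", "used_when": "meeting someone"},
    "greeting": {"is_a": "social act"},
}
print(knowledge["hi"]["is_a"])  # -> "greeting"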
zkajan
QUOTE(kindfluffysteve @ Sep 2 2004, 10:31 PM)
The best way to program it is to let it program itself.

Use a genetic algorithm.

This route is what I think will lead to digital sentience.


Yup, kfs has it. Babies are born with a bunch of hardwired instincts (something is wrong with the system: cry; if hungry and a tit is presented: suck; when feces accumulate in the colon: push them out; etc.) and a big blank slate that they fill in mostly during the first couple of years of their life, but learning never really stops.
Deus Ex Machina
QUOTE(Llywelyn @ Sep 2 2004, 06:03 PM)
In a nutshell he claims that the brain is functionally noncomputational and that while a computational system may be able to mimic intelligence, it cannot actually be intelligent.  His claim is that there cannot be strong AI in a computational system.

Now, a Quantum Computer gets past some of his objections and I am less certain whether a quantum algorithm could be intelligent when executed (and not just simulated), but that is a point that is separate from whether it can be done in a set of circuits :)



That's certainly interesting. I can't say I agree with his conclusion, but I haven't seen his methods (I'm assuming they rely on quantum effects and/or something similar, from how you phrased your post), and now I have a name to look into if I ever reach the bottom of my "to read" list.

Anyways, thanks.

On a side note, that Jew-you replacement thing is damned annoying.
zaragosa
QUOTE(Llywelyn @ Sep 2 2004, 12:08 PM)
We can make a system behave as if it is intelligent, vaguely, but is it actually intelligent, or is it just mimicking intelligent behavior?


What's the difference? Intelligence is, for all intents and purposes, a behaviour.

QUOTE(Telum @ Sep 3 2004, 12:21 AM)
If we can reproduce everything with so much more efficiency, why aren't there artificial kidneys or livers on the market?


I didn't mean efficiency size-wise. Very efficient artificial livers and kidneys exist, but they tend to be rather bulky.
Forben
It needs to create its own variables.

A definition of its basic functions will need to be included: if it can move something that is a part of the robot, let it know that that piece can 'move'...

An understanding of some form of reflex/sense algorithm.

'Expansion' slots, so to speak: a way to expand its physical limitations, up to a limit.

A 'location' of read-only, must-obey algorithms that require it to, say, not kill us because we're just a parasite on the earth or some such. This read-only section could be viewed as sort of the DNA-type stuff.

Beyond that would be a bit of the philosophical question of what emotion is or isn't, and whether our emotion would register as emotion to another 'species' that is not geared towards the same style of response/reflex/soul/curiosity 'standards', so to speak. Yes, it could most likely figure out the right responses, but beyond that?
Llywelyn
QUOTE(zaragosa @ Sep 3 2004, 03:43 AM)
What's the difference? Intelligence is, for all intents and purposes, a behaviour.



Simply: there is a difference between the model and the system. Confusing the two is one of the fundamental fallacies.

I can calculate how billiard balls are going to behave on a smooth table, and I can display this output on the screen, but predicting it and displaying it is in no way equivalent to actually rolling the balls on the table.

Being able to predict or anticipate what a conscious entity would say is an entirely different arena than actually being self-aware.
zaragosa
QUOTE(Llywelyn @ Sep 3 2004, 01:32 PM)
I can calculate how billiard balls are going to behave on a smooth table, and I can display this output on the screen, but predicting it and displaying it is in no way equivalent to actually rolling the balls on the table.


But that's not what's going on. We have only the output to evaluate. To change the analogy, imagine that we are measuring some form of output of the game (say, the sound). Now, if we have two boxes, one with a real game being played inside and one with a simulation being played, and we listen to both, why would one be qualitatively different from the other?

Edit: To anticipate a discussion that may follow: if you were to find out that a friend of yours was in fact (and had always been) a robot, would you consider yourself deceived for all the years you had known him/her?
Llywelyn
QUOTE(zaragosa @ Sep 3 2004, 05:56 AM)
But that's not what's going on. We have only the output to evaluate. To change the analogy, imagine that we are measuring some form of output of the game (say, the sound). Now, if we have two boxes, one with a real game being played inside and one with a simulation being played, and we listen to both, why would one be qualitatively different from the other?



That is exactly what is going on. The fact that the process is black-boxed just means that it is even more difficult to know whether the system in question exists inside of the set of circuits, and makes it even less likely.

Let's look at your example. From outside the box they may sound the same, but one is simply reproducing the sounds based on a programmed and highly algorithmic pattern. The other is producing them as a consequence of playing the game. In the first case the game itself doesn't have to be played anywhere; in the second it is being played in real time. That's a huge difference.

Why are they different? One is a model, one is an actual system. One is using a set of algorithms to anticipate and produce sounds in an attempt to mimic the game, the other actually involves a game being played.

Now, given a set of changing inputs and a set of outputs (rather than just analyzing a set of outputs), we can analyze it the same way we analyze a black-boxed circuit, and, as a matter of faith, I do not believe a computer can beat a set of human judges. Enter the Turing Test. Show me an AI which is close to passing an unrestricted Turing Test, since you seem to think we are "practically there" on creating a strong AI.

Last year, first place among the AIs in the Loebner Prize went to Jabberwock. Not even close.

You:...Where do you live?
Jabb:...In the Great Realm of Imagination, care to meet me there?
You:...Sure, I'll meet you at half a figment around the bend from the Disney ride.
Jabb:...You think you can meet me? Not that I think you can't, guest


Ferran
You:...You're flawed.
Jabb:...Sure thing. Most of our suspicions of others are aroused by our knowledge of ourselves. But I just felt I had to make that point strongly.
zaragosa
QUOTE(Llywelyn @ Sep 3 2004, 03:55 PM)
That is exactly what is going on. The fact that the process is black-boxed just means that it is even more difficult to know whether the system in question exists inside of the set of circuits, and makes it even less likely.

(...)

Why are they different? One is a model, one is an actual system. One is using a set of algorithms to anticipate and produce sounds in an attempt to mimic the game, the other actually involves a game being played.


You're missing the point. I'm saying that intelligence is not in the process, but in the output itself (which is of course identical if the model is good). Consider: If you were to find out that a friend of yours was in fact (and had always been) a robot, would you consider yourself deceived for all the years you had known him/her?
Nalvaros
I must disagree. Intelligence cannot simply be the output; otherwise, we could say that computers today are "intelligent". Same with calculators. We throw in some numbers, and they return answers. We could do the same with a mathematician working with pen and paper. Both will return an answer (presumably correct), but it's a far cry to say that calculators are intelligent.
Llywelyn
Zaragosa, you have claimed that we are "practically there."

Please show me where we have come close to having a computer "win" an unrestricted Turing Test such as the one used in the Loebner Prize.
Deus Ex Machina
QUOTE(Nalvaros @ Sep 5 2004, 01:39 AM)
I must disagree. Intelligence cannot simply be the output; otherwise, we could say that computers today are "intelligent". Same with calculators. We throw in some numbers, and they return answers. We could do the same with a mathematician working with pen and paper. Both will return an answer (presumably correct), but it's a far cry to say that calculators are intelligent.



Sure, computers today aren't intelligent by any practical definition of the word, but if intelligence is in more than the output, how would we tell today whether a computer program which seemed to display all the attributes of real intelligence (self-awareness, capability of learning, <insert other criteria here>) in its output is really intelligent or not? Assuming we understood perfectly all that went on inside its 'brain', we wouldn't be able to compare it to our own (or to any model of how intelligence should work on the lower level).
zaragosa
QUOTE(Llywelyn @ Sep 5 2004, 03:47 PM)
Zaragosa, you have claimed that we are "practically there."

Please show me where we have come close to having a computer "win" an unrestricted Turing Test such as the one used in the Loebner Prize.


practically, adv.
3: All but; nearly; almost
zaragosa
QUOTE(Nalvaros @ Sep 5 2004, 09:39 AM)
I must disagree. Intelligence cannot simply be the output. Otherwise, we could say that computers today are "intelligent".


The output rendered by computers today isn't yet the same as that from beings we would normally classify as 'intelligent'.
Llywelyn
QUOTE(zaragosa @ Sep 5 2004, 10:36 AM)
practically, adv.
3: All but; nearly; almost



I quote myself:

QUOTE
Please show me where we have  come close to having a computer "win" an unrestricted Turing Test such as the one used in the Loebner Prize.


Emphasis added.
zaragosa
I never implied that the evolution towards AI was a gradual process. The hardware is relevant only to the degree that it can make fast calculations; the last hurdle is software. So, in my view, what's left is a conceptual, not a technological, feat. That's why I say I think we're practically there.
zkajan
QUOTE(Nalvaros @ Sep 5 2004, 03:39 AM)
I must disagree. Intelligence cannot simply be the output; otherwise, we could say that computers today are "intelligent". Same with calculators. We throw in some numbers, and they return answers. We could do the same with a mathematician working with pen and paper. Both will return an answer (presumably correct), but it's a far cry to say that calculators are intelligent.


I disagree: intelligence is output. Even with humans, all output is a result of input at some point. That doesn't mean stimulus -> immediate response. Hell, you could be walking down the street and remember something that happened when you were 5 years old, and this causes you to think about that experience and some others and produce an output; this thinking may be intentional or even hard-wired (the primitive brain is involved). So in that manner, intelligence is a "black box" type of scenario.
acow
"Thinking" or "conscious"? Or are they the same?

Cause then you get into the whole realm of what consciousness is, and to my knowledge, no one really has a good definition of that...
zaragosa
Neither 'conscious', 'thinking', nor 'intelligence' has ever been satisfactorily defined, in my opinion. By diagnostically used definitions of intelligence (Wechsler tests etc.), even the earliest computers with some elementary programming and a functioning interface have us beat hands down.
Llywelyn
Familiar with Searle's Chinese Room?
Llywelyn
For those who don't know.

Baumgartner summarizes it:

QUOTE
He imagines himself locked in a room, in which there are various slips of paper with doodles on them, a slot through which people can pass slips of paper to him and through which he can pass them out; and a book of rules telling him how to respond to the doodles, which are identified by their shape. One rule, for example, instructs him that when squiggle-squiggle is passed in to him, he should pass squoggle-squoggle out. So far as the person in the room is concerned, the doodles are meaningless. But unbeknownst to him, they are Chinese characters, and the people outside the room, being Chinese, interpret them as such. When the rules happen to be such that the questions are paired with what the Chinese people outside recognize as a sensible answer, they will interpret the Chinese characters as meaningful answers. But the person inside the room knows nothing of this. He is instantiating a computer program -- that is, he is performing purely formal manipulations of uninterpreted patterns; the program is all syntax and has no semantics.
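The room translates almost directly into code. A Python sketch, with placeholder "doodles" standing in for Searle's squiggles: the lookup table produces sensible-looking answers while nothing in the program understands the symbols:

CODE
# Pure syntax, no semantics: match the shape, return the paired shape.

RULE_BOOK = {
    "squiggle-squiggle": "squoggle-squoggle",
    "twist-twist": "loop-loop",
}

def operator(doodle):
    return RULE_BOOK.get(doodle, "blank slip")

print(operator("squiggle-squiggle"))  # looks like a sensible answer outside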
Llywelyn
QUOTE(zaragosa @ Sep 5 2004, 11:59 AM)
I never implied that the evolution towards AI was a gradual process. The hardware is relevant only to the degree that it can make fast calculations; the last hurdle is software. So, in my view, what's left is a conceptual, not a technological, feat. That's why I say I think we're practically there.



In other words, we need a conceptual leap that we haven't made yet and which may not even be makable.

That is not, in my book, "practically" there by any stretch of the imagination.
zaragosa
QUOTE(Llywelyn @ Sep 10 2004, 11:15 AM)
Familiar with Searle's Chinese Room?


Yes. I could reformulate it like this: As soon as we could program intelligence into a computer, we could 'program' it into a contraption with water running through tubes and valves. Surely that cannot be intelligence!
If we are to take that as an argument, it is an emotional one at best.

QUOTE(Llywelyn @ Sep 10 2004, 11:31 AM)
In other words, we need a conceptual leap that we haven't made yet and which may not even be makable.

That is not, in my book, "practically" there by any stretch of  the imagination.


Conceptual leaps happen every day. Someone might have stumbled across AI a few months ago and might now be working on getting it published (or patented...). Since we are already where we need to be (and beyond) hardware-wise, and what is left is essentially sitting down and programming (and educating the thing), and with no reason to assume anything so supernatural that we could not reproduce it, I stand by 'practically there'. But you may call it what you like.
zaragosa
"You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!"
- Von Neumann
Llywelyn
QUOTE(zaragosa @ Sep 10 2004, 02:55 AM)
Yes. I could reformulate it like this: As soon as we could program intelligence into a computer, we could 'program' it into a contraption with water running through tubes and valves. Surely that cannot be intelligence!


Nope, you are confusing the basic argument with a later extension which was meant to counteract a specific objection. Using that extension, however, Searle argues that the system could be internalized completely within the operator, and the operator would still have no understanding of Chinese in order to pass the test.

As Searle points out for that situation: the operator does not need to understand Chinese in order to give the illusion of understanding it based on a set of dictionary responses (this is the fundamental mechanism that modern AIs use in an effort to pass the unrestricted Turing Test, for the record). That is the foundation of the Chinese Room argument, and it holds water (if you will pardon the pun) quite nicely--the operator neither knows nor understands any Chinese, will not learn Chinese simply by passing these symbols back and forth, but to the outside observer will appear fluent.

QUOTE
Conceptual leaps happen every day.


Requiring one to get from point A to point B without any evidence that we are approaching such is, at best, a matter of blind faith.

QUOTE
Someone might have stumbled across AI a few months ago and might now be working on getting it published (or patented...).


It is just a small leap to gravitons as well...

But we haven't found them yet. We might find them tomorrow, or we might never find them. They might not even exist. Saying that we have "practically found them" would be scientifically irresponsible, and saying that we will discover them "any day now" is a matter of pure and unadulterated faith.

QUOTE
Since we are already where we need to be (and beyond) hardware-wise, and what is left is essentially sitting down and programming (and educating the thing), and with no reason to assume anything so supernatural that we could not reproduce it, I stand by 'practically there'. But you may call it what you like.



Quantum Entanglement != supernatural
Hypercomputability != supernatural (Penrose's argument).
The theory that cognition is noncomputable != supernatural.

Simply, the statement that the applicability of the Church-Turing Thesis has been overstated does not require appealing to anything vaguely supernatural.