Utopia-Politics > Science > Create a thinking machine


Posted by: necrolyte Sep 1 2004, 11:17 PM
OK, this is bordering on philosophy, so I might post something similar in the philosophy section, but this will deal more with the scientific side.

How would one program a computer to:
(1) be self-aware
(2) come to a "best possible" solution when no perfect solution is findable
(3) have a desire to learn
(4) feel emotion
(5) send its thoughts through logical loops which consider previous memories and thoughts
(6) formulate opinions based on facts

Posted by: Ferran Sep 1 2004, 11:34 PM
(1) I would think that the easiest way to make a computer self-aware is to program it to create variables (output), and then act upon the variables it creates (input)? Though, I could be way off.

(2) Go to Google and search for something. It'll show percentages next to the pages you find, indicating relevancy. To make a computer choose the "best possible" solution, have it choose the one closest to 100% by using whatever method Google uses.

(3) Oo, that's tricky. You'd have to figure out how to make it desire something, in the first place!

(4) Something to do with dealing with input?

(5) Have it programmed to save input and output in reference to each other, so that when it deals with a certain kind of input, its output can be determined by the closest input it had received before.

(6) I'm not gonna touch this one.

--- Of course, I'm no programmer, so I may be full of BS in regards to this... but most of it seems pretty simple, no?

Posted by: Sephiroth Sep 2 2004, 01:27 AM
I'm a Computer Science major and this is way beyond my skills.

Posted by: Telum Sep 2 2004, 01:28 AM
A computer that runs on binary circuits can't be self-aware. You need more than a collection of yes/no's to become aware.

Posted by: Sephiroth Sep 2 2004, 01:37 AM
Computers don't have the speed needed to run such a program. It wouldn't be as fast as the human brain.

Just for a comparison, imagine two sets of animal lists, one memorized by a person and one saved on a hard drive. In the time it takes the computer to find one type of feline, the human would have found four or five. Computers are linear, while the brain contains many interconnected webs of information.

I don't think emotion would translate well to code.

Posted by: zkajan Sep 2 2004, 02:17 AM
i remember reading that animals make decisions based on "fuzzy logic", that is to say, things influence their decisions which shouldn't necessarily. for example, in politics people will choose a leader they think has nice hair even though his hair won't have any effect on his policies or their effect on the people
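
a toy sketch of that sort of weighting in Python (the traits, weights and numbers are all made up); note how the irrelevant trait still sways the score:

CODE
# toy "fuzzy" voter: candidate traits are degrees between 0 and 1, not yes/no.
# the weights are invented; nice_hair shouldn't matter, but it still moves the score
def appeal(traits):
    weights = {"policy": 0.5, "honesty": 0.3, "nice_hair": 0.2}
    return sum(weights[t] * traits.get(t, 0.0) for t in weights)

print(appeal({"policy": 0.4, "honesty": 0.6, "nice_hair": 0.9}))  # 0.56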

Posted by: necrolyte Sep 2 2004, 03:02 AM
I use fuzzy logic when I'm trying to decide which kitten to buy... (ok horrible joke I know)


The human mind is in essence a super-complex biological computer. Recreating it, even in code, must be possible, as all thoughts which pass through our minds are themselves coded. Our minds simply use more complex codes.

It also uses logical circuits and networks, as we're saying.

So Telum, couldn't you use basic code as the foundation to construct more complex code, the way basic binary code serves as the foundation for different codes? Not being a computer expert, excuse my ignorance if I'm missing something.

Sephiroth: with computer technology advancing at the rapid rate it is, aren't technological limitations really only a temporary thing?

This is all like evolution. Basic nervous systems cannot do much, and are not self-aware. They only react in programmed ways, similar to how a computer acts. If you hit the H key, an H comes up on the word processor. If you stimulate a jellyfish's tentacle nerve, it retracts the tentacle to consume whatever the tentacle has caught.

Emotion is the last required brick, naturally. Being the most abstract, it would probably require us to more fully understand the methods we used to create its precursors: the self-awareness and the capacity to learn.

...Ferran, I'm thinking about your ideas now and I'm kind of tired so I'll respond later :)

Posted by: Deus Ex Machina Sep 2 2004, 03:52 AM
QUOTE(Telum @ Sep 1 2004, 07:28 PM)
A computer that runs on binary circuits can't be self-aware. You need more than a collection of yes/no's to become aware.

On the yes/no level, that may be true. However, consider that your brain is a similar collection of binary circuits: if they get input X, they fire; otherwise, they don't (well, that may be a tad simplified...). It's once you start taking meaning out of the circuits (e.g. a string of hex into ASCII) that self-awareness et al. arises. On the switches level, the machine is still being a machine. However, the interactions between the switches hold a separate level of data.
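
To make the switch level concrete, here's a minimal sketch of such a threshold "neuron" in Python (the weights and threshold are invented): each unit is a pure yes/no device, and the interesting part is the wiring between many of them.

CODE
# A toy threshold "neuron": it fires (1) only if its weighted inputs
# reach a threshold. On this level it's all yes/no switches.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

print(neuron([1, 0, 1], [0.6, 0.9, 0.5], threshold=1.0))  # -> 1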

To attempt to answer necrolyte's question, the key wouldn't be to necessarily code in the behaviors you mentioned, but to create a system of organization of data which allows them to naturally arise.

Anywho, I just finished a book (Godel, Escher, Bach: An Eternal Golden Braid, by Douglas R Hofstadter) in which the author talks about AI, self-awareness, etc.. It's a good read, and old/famous enough so that a decently-sized library has a fairly good chance of carrying it.

Posted by: libvertaruan Sep 2 2004, 04:05 AM
QUOTE
To attempt to answer necrolyte's question, the key wouldn't be to necessarily code in the behaviors you mentioned, but to create a system of organization of data which allows them to naturally arise.


It's called bottom-up programming, and that is, I believe, how our brains are programmed to learn.

Posted by: Russian Sep 2 2004, 04:07 AM
QUOTE
How would one program a computer to:
(1) be self-aware


You narrow the scope. Do you want a computer to be 'human'? That's impossible.

How about an autopilot program? It can monitor its speed, its height, the weather it's flying into, and change its operations accordingly. It's as 'self-aware' as anything gets. And such programs do exist. You program algorithms for every possible situation, and then you rigorously test them to make sure you haven't forgotten anything. Lots of man-hours and money later, you have a program that's self-aware, but only within a specific scope. It can still make faulty decisions.
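
To sketch the idea (every rule and number here is invented; a real autopilot is vastly more involved):

CODE
# a minimal rule-based "autopilot" step: sense, match conditions, act,
# with a fallback when no rule fires (all rules and numbers invented)
def autopilot_step(speed, height, storm_ahead):
    if storm_ahead:
        return "climb above weather"
    if height < 9000:
        return "increase altitude"
    if speed < 400:
        return "increase thrust"
    return "hold course"  # fallback

print(autopilot_step(speed=380, height=9500, storm_ahead=False))  # increase thrust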

QUOTE
(2) come to a "best possible" solution when no perfect solution is findable


We can do that. Simple, actually. A program matches its input variables against the required conditions for it to take action. If the variables don't match up, it takes another pre-programmed action. With programs practically anything is possible except independent thought; i.e., a computer program can't write another program, as of today.
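
Something like this Python sketch (the scoring rule is a stand-in, not how any real system does it):

CODE
# "best possible" when nothing is perfect: score each candidate action
# against the required conditions and take the closest match
def best_option(candidates, requirements):
    def score(option):
        return sum(1 for key, wanted in requirements.items() if option.get(key) == wanted)
    return max(candidates, key=score)

requirements = {"cheap": True, "fast": True, "safe": True}
options = [{"cheap": True, "fast": False, "safe": True},
           {"cheap": False, "fast": True, "safe": False}]
print(best_option(options, requirements))  # the 2-of-3 match wins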

QUOTE
3) has a desire to learn


replace the word desire with design and you have a database management system.

QUOTE
(4) can feel emotion


sure. Of course it's possible. But what's the point? There have actually been experiments on this: a robotic face was programmed to match the facial expressions of the people it was talking to.

QUOTE
(5) sends its thoughts through logical loops which consider previous memories and thoughts


Been there, done that. Very simple.

QUOTE
6) can formulate opinions based on facts


Replace the term with "can make decisions based on evidence" and we have the autopilot model above.

But why would you want a computer with 'opinions' and 'emotions'? If you want a friend, go and look for one in the streets.

Posted by: necrolyte Sep 2 2004, 04:23 AM
I would want such a thing to be constructed to see if we can reproduce ourselves using pure technology devoid of biology. As an experiment if you will.

Posted by: libvertaruan Sep 2 2004, 04:25 AM
QUOTE
But why would you want a computer with 'opinions' and 'emotions'? If you want a friend, go and look for one in the streets.



Russian, you are a fucking idiot to not understand what this is about.

Posted by: Deus Ex Machina Sep 2 2004, 05:50 AM
QUOTE(Russian @ Sep 1 2004, 10:07 PM)
You narrow the scope. Do you want a computer to be 'human'? That's impossible.

How about an autopilot program? It can monitor its speed, its height, the weather it's flying into, and change its operations accordingly. It's as 'self-aware' as anything gets. And such programs do exist. You program algorithms for every possible situation, and then you rigorously test them to make sure you haven't forgotten anything. Lots of man-hours and money later, you have a program that's self-aware, but only within a specific scope. It can still make faulty decisions.


Why is it impossible for a computer system to mimic the human mind? It (the mind) certainly isn't made of base components much different from a computer's (neurons basically amounting to chemical switches). Regardless, I thought people still gave credit to the Church-Turing thesis.

A key part of how I define self-awareness is the ability both to `step back' and look at what you're doing, and to change your behavior based on it. While a sophisticated program of the type you describe may be able to find patterns in what it's doing and optimize its performance accordingly, a human would likely be able to do such a thing up through higher and higher levels, ad infinitum (within the limits of memory and time). In addition, a self-aware program would likely be able to combine behaviors in order to react to new situations (e.g.: situation X is similar to this new thing I'm seeing. Let's see if what I do in situation X will be of any help. Alternatively: this new situation is like nothing I've seen before. However, situation Y arose from somewhat similar conditions. Maybe my solution to situation Y will be of help. Et cetera.)
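
A crude Python sketch of that reuse-by-similarity idea (the overlap metric and the cases are placeholders):

CODE
# keep solved cases, reuse the solution of the most similar one
def similarity(a, b):
    return len(a & b) / len(a | b)  # crude feature overlap

def solve(new_case, memory):
    best_case, solution = max(memory, key=lambda case: similarity(new_case, case[0]))
    return solution

memory = [({"rain", "night"}, "slow down"), ({"fog"}, "lights on")]
print(solve({"rain", "fog"}, memory))  # reuses the "fog" case -> lights on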


QUOTE
replace the word desire with design and you have a database management system.


There is a distinct difference between actively seeking out information on a topic one had no prior knowledge of (me browsing through a library) and seeking out information based on written instructions (Google spidering the internet).

QUOTE
sure. Of course it's possible. But what's the point? There have actually been experiments on this: a robotic face was programmed to match the facial expressions of the people it was talking to.


Having a robot mimic facial expressions is useless unless there is another level of meaning behind the emotions.

QUOTE
But why would you want a computer with 'opinions' and 'emotions'? If you want a friend, go and look for one in the streets.

Being able to develop a truly intelligent computer system would be a great way to study and understand how our minds work and how they evolved. Plus it would be pretty damn cool.

Posted by: zaragosa Sep 2 2004, 09:56 AM
I personally think we're practically there. There's nothing in the human hardware that we can't reproduce (at a million times the efficiency, I might add). The only thing that makes humans different is the enormous amount of ready background knowledge we simply pick up as we go along (commonly called 'creativity' and 'inspiration' and such). The fuzzy logic (illogical combination of information), once we figure out how humans use it, can easily be reproduced.

Posted by: Llywelyn Sep 2 2004, 10:08 AM
There is a difference between the model and the system.

We can make a system behave as if it is intelligent, vaguely, but is it actually intelligent or is it just mimicking intelligent behavior?

I will go out on a limb here and throw my meager vote behind Penrose in saying that intelligence is fundamentally non-computational in nature. Which means no collection of computational circuits will ever be "intelligent."

EDIT:
This is not to say, for the record, that intelligence will never be artificially generated, only that it will never be replicated inside of a Turing-based system. Quantum computers or future developments in physics may bring us there, but we aren't there with today's technology.

Posted by: Nalvaros Sep 2 2004, 11:04 AM
You'd probably want to look up artificial intelligence, which I don't have any experience in.

However, based on what I know about programming I can see no way any existing programs can create an intelligent entity.

Right now, when we write a program, we are basically writing a set of instructions: do this when that happens. I don't believe a set of instructions is capable of becoming self-aware. Certainly we could conceivably write an incredibly complex program that had, say, a response to a million different permutations of a situation, and that, based on the response it makes and the resultant input it gets, compared with a "desired" input, might switch to different responses in the future. But while such a program might mimic intelligence realistically, it is at the end of the day merely following instructions. It isn't making a choice.
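
For what it's worth, the response-switching part can be sketched in a few lines of Python (everything here is invented): it adapts, yet it is still only following its instructions.

CODE
# keep a score per canned response; prefer whichever has worked best so far
scores = {"greet": 0, "ignore": 0}

def respond():
    return max(scores, key=scores.get)

def feedback(response, got, desired):
    scores[response] += 1 if got == desired else -1

choice = respond()
feedback(choice, got="smile", desired="smile")  # reinforce what worked
print(respond())  # still "greet", now with a higher score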

Posted by: Russian Sep 2 2004, 04:01 PM
what if its instructions are to make a choice?


vis-à-vis Blue Junior?

Posted by: Llywelyn Sep 2 2004, 04:05 PM
QUOTE(Russian @ Sep 2 2004, 09:01 AM)
what if its instructions are to make a choice?
vis-à-vis Blue Junior?


...and then what?

How does it make that choice?

Why?

Posted by: Russian Sep 2 2004, 04:10 PM
for it to win more than one chess game it has to make different choices. Otherwise it would be predictable and easily defeated.


Don't know the mechanism behind it though.



Posted by: Llywelyn Sep 2 2004, 04:54 PM
QUOTE(Russian @ Sep 2 2004, 09:10 AM)
for it to win more than one chess game it has to make different choices. Otherwise it would be predictable and easily defeated.
Don't know the mechanism behind it though.


The mechanisms it uses (alpha-beta searches, opening books, killer-move tables, etc.) are very far removed from sentience or from the set of techniques used by people.
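
For the curious, the search half looks roughly like this: bare-bones minimax with alpha-beta pruning, sketched in Python over an abstract game whose `moves` and `evaluate` functions would be supplied by the chess-specific code.

CODE
# minimax with alpha-beta pruning: look ahead, assume the opponent
# plays its best reply, and skip branches that can't change the outcome
def alphabeta(state, depth, alpha, beta, maximizing, moves, evaluate):
    children = moves(state)
    if depth == 0 or not children:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in children:
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False, moves, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line
        return value
    value = float("inf")
    for child in children:
        value = min(value, alphabeta(child, depth - 1, alpha, beta, True, moves, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value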

Posted by: Telum Sep 2 2004, 10:21 PM
QUOTE(zaragosa @ Sep 2 2004, 05:56 AM)
I personally think we're practically there. There's nothing in the human hardware that we can't reproduce (at a million times the efficiency, I might add). The only thing that makes humans different is the enormous amount of ready background knowledge we simply pick up as we go along (commonly called 'creativity' and 'inspiration' and such). The fuzzy logic (illogical combination of information), once we figure out how humans use it, can easily be reproduced.



If we can reproduce everything with so much more efficiency, why aren't there artificial kidneys or livers on the market?

Posted by: Deus Ex Machina Sep 2 2004, 11:33 PM
QUOTE(Llywelyn @ Sep 2 2004, 04:08 AM)
There is a difference between the model and the system.
I will go out on a limb here and throw my meager vote behind Penrose in saying that intelligence is fundamentally non-computational in nature.  Which means no collection of computational circuits will ever be "intelligent."


Would it be possible for Jew to explain Penrose's argument for those of us not familiar with it? Google isn't being too helpful, and Wikipedia leaves it at
QUOTE
Some (including Roger Penrose) attack the applicability of the Church-Turing thesis. Others say the mind is not completely physical. Roger Penrose's argument rests on the conception of hypercomputation being possible in our universe. Quantum mechanics and newtonian mechanics do not allow hypercomputation but it is thought that some strange space times would. However there seems to be agreement that our universe is not sufficiently convoluted to allow such hypercomputation.

Posted by: Llywelyn Sep 3 2004, 12:03 AM
QUOTE(Deus Ex Machina @ Sep 2 2004, 04:33 PM)
Would it be possible for Jew to explain Penrose's argument for those of us not familiar with it? Google isn't being too helpful, and Wikipedia leaves it at


In a nutshell he claims that the brain is functionally noncomputational and that while a computational system may be able to mimic intelligence, it cannot actually be intelligent. His claim is that there cannot be strong AI in a computational system.

Now, a Quantum Computer gets past some of his objections and I am less certain whether a quantum algorithm could be intelligent when executed (and not just simulated), but that is a point that is separate from whether it can be done in a set of circuits :)

Posted by: kindfluffysteve Sep 3 2004, 02:31 AM
the best way to program it is to let it program itself.

use a genetic algorithm.

this route is what I think will lead to digital sentience.
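
a genetic algorithm in miniature, sketched in python (the genome, the fitness function and all the parameters are stand-ins): nobody writes the answer, it gets bred.

CODE
import random

TARGET = [3, 1, 4, 1, 5]  # stand-in for "whatever we want the program to become"

def fitness(genome):
    return -sum(abs(g - t) for g, t in zip(genome, TARGET))

def mutate(genome):
    g = genome[:]
    g[random.randrange(len(g))] += random.choice([-1, 1])
    return g

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 9) for _ in range(5)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)  # survival of the fittest
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children
print(max(population, key=fitness))  # usually at or near TARGET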

Posted by: kindfluffysteve Sep 3 2004, 04:20 AM
people can talk about the 1's and 0's as just being fundamentally 1's and 0's

but this is just an unnecessary way to think about it.

to think in this way is to avoid the idea of something being more than the sum of its parts.

why can't a collection of 1's and 0's be more than the sum of its parts?

the difference between data and information/knowledge: data is just meaningless 1's and 0's. knowledge is a network of data.

data structures are things that really do exist: they mean something, they describe something, and yet they are strangely ethereal. an individual bit means nothing; it's the organisation that matters.
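
a toy sketch in python (the facts are invented): the entries mean nothing alone, the answer falls out of the links.

CODE
facts = {}

def link(a, relation, b):
    facts.setdefault(a, []).append((relation, b))

link("tweety", "is_a", "bird")
link("bird", "can", "fly")

def what_can(thing):
    for relation, other in facts.get(thing, []):
        if relation == "can":
            yield other
        if relation == "is_a":
            yield from what_can(other)  # meaning comes from the organisation

print(list(what_can("tweety")))  # ['fly'], inferred through the network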

Posted by: zkajan Sep 3 2004, 04:28 AM
QUOTE(kindfluffysteve @ Sep 2 2004, 10:31 PM)
the best way to program it is to let it program itself.

use a genetic algorithm.

this route is what I think will lead to digital sentience.

yup, kfs has it. babies are born with a bunch of hardwired instincts (something is wrong with the system: cry; if hungry and a tit is presented: suck; when feces accumulate in the colon: push it out, etc....) and a big blank slate that they fill in during the first couple of years of their life for the most part, but learning never really stops.

Posted by: Deus Ex Machina Sep 3 2004, 04:59 AM
QUOTE(Llywelyn @ Sep 2 2004, 06:03 PM)
In a nutshell he claims that the brain is functionally noncomputational and that while a computational system may be able to mimic intelligence, it cannot actually be intelligent.  His claim is that there cannot be strong AI in a computational system.

Now, a Quantum Computer gets past some of his objections and I am less certain whether a quantum algorithm could be intelligent when executed (and not just simulated), but that is a point that is separate from whether it can be done in a set of circuits :)


That's certainly interesting. I can't say I agree with his conclusion, but I haven't seen his methods (I'm assuming they rely on quantum effects and/or something similar, by how you phrased your post), and now I have a name to look into if I ever reach the bottom of my "to read" list.

Anyways, thanks.

On a side note, that Jew-you replacement thing is damned annoying.

Posted by: zaragosa Sep 3 2004, 10:43 AM
QUOTE(Llywelyn @ Sep 2 2004, 12:08 PM)
We can make a system behave as if it is intelligent, vaguely, but is it actually intelligent or is it just mimicking intelligent behavior?

What's the difference? Intelligence is, for all intents and purposes, a behaviour.

QUOTE(Telum @ Sep 3 2004, 12:21 AM)
If we can reproduce everything with so much more efficiency, why aren't there artificial kidneys or livers on the market?

I didn't mean efficiency size-wise. Very efficient artificial livers and kidneys exist, but they tend to be rather bulky.

Posted by: Forben Sep 3 2004, 11:18 AM
needs to create its own variables.

a definition of its basic functions will need to be included. like, if it can move something that is a part of the robot, let it know that that piece can 'move'...

understanding of some form of reflex/sense algorithm.

'expansion' slots, so to speak: a way to expand its physical limitations, up to a limit.

a 'location' of read-only, must-obey algorithms that require it to, like, not kill us because we're just a parasite on the earth or some such. this read-only section could be viewed as sort of the DNA-type stuff.

beyond that it would be a bit of the philosophical question of what emotion is or isn't, and whether our emotion can count as emotion to something else that isn't geared towards the same style of response/reflex/soul/curiosity 'standards'; it would be like another 'species', so to speak. Yes, it could most likely figure out the right responses, but beyond that?

Posted by: Llywelyn Sep 3 2004, 11:32 AM
QUOTE(zaragosa @ Sep 3 2004, 03:43 AM)
What's the difference? Intelligence is, for all intents and purposes, a behaviour.


Simply: There is a difference between the model and the system. Confusing the two is one of the fundamental fallacies.

I can calculate how billiard balls are going to behave on a smooth table, and I can display this output on the screen, but predicting it and displaying it is in no way equivalent to actually rolling the balls on the table.

Being able to predict or anticipate what a conscious entity would say is an entirely different arena than actually being self-aware.

Posted by: zaragosa Sep 3 2004, 12:56 PM
QUOTE(Llywelyn @ Sep 3 2004, 01:32 PM)
I can calculate how billiard balls are going to behave on a smooth table, and I can display this output on the screen, but predicting it and displaying it is in no way equivalent to actually rolling the balls on the table.

But that's not what's going on. We have only the output to evaluate. To change the analogy, imagine that we are measuring some form of output of the game (say, the sound), and that we have two boxes: one with a real game being played inside, and one with a simulation being played. We listen to both. Why would one be qualitatively different from the other?

Edit: To anticipate a discussion that may follow: If you were to find out that a friend of yours was in fact (and had always been) a robot, would you consider yourself deceived for all the years you had known him/her?

Posted by: Llywelyn Sep 3 2004, 01:55 PM
QUOTE(zaragosa @ Sep 3 2004, 05:56 AM)
But that's not what's going on. We have only the output to evaluate. To change the analogy, imagine that we are measuring some form of output of the game (say, the sound), and that we have two boxes: one with a real game being played inside, and one with a simulation being played. We listen to both. Why would one be qualitatively different from the other?


That is exactly what is going on. The fact that the process is black boxed just means that it is even more difficult to know whether the system in question exists inside of the set of circuits and makes it even less likely.

Let's look at your example. From outside the box they may sound the same, but one is simply reproducing the sounds based on a programmed and highly algorithmic pattern. The other is producing them as a consequence of playing the game. In the first case the game itself doesn't have to be played anywhere, in the second it is being played in real time. That's a huge difference.

Why are they different? One is a model, one is an actual system. One is using a set of algorithms to anticipate and produce sounds in an attempt to mimic the game, the other actually involves a game being played.

Now, given a set of changing inputs and a set of outputs (rather than just analyzing a set of outputs), we can analyze it the same way we analyze a black boxed circuit and, as a matter of faith, I do not believe a computer can beat a set of human judges. Enter the Turing Test. Show me an AI which is close to passing an unrestricted Turing Test, since you seem to think we are "practically there" on creating a strong AI.

Last year, first place among the AIs in the Loebner Prize went to Jabberwock (http://www.abenteuermedien.de/jabberwock/). Not even close.

You:...Where do you live?
Jabb:...In the Great Realm of Imagination, care to meet me there?
You:...Sure, I'll meet you at half a figment around the bend from the Disney ride.
Jabb:...You think you can meet me? Not that I think you can't, guest



Posted by: Ferran Sep 3 2004, 11:43 PM
You:...You're flawed.
Jabb:...Sure thing. Most of our suspicions of others are aroused by our knowledge of ourselves. But I just felt I had to make that point strongly.

Posted by: zaragosa Sep 4 2004, 09:51 AM
QUOTE(Llywelyn @ Sep 3 2004, 03:55 PM)
That is exactly what is going on.  The fact that the process is black boxed just means that it is even more difficult to know whether the system in question exists inside of the set of circuits and makes it even less likely.

(...)

Why are they different?  One is a model, one is an actual system.  One is using a set of algorithms to anticipate and produce sounds in an attempt to mimic the game, the other actually involves a game being played. 

You're missing the point. I'm saying that intelligence is not in the process, but in the output itself (which is of course identical if the model is good). Consider: If you were to find out that a friend of yours was in fact (and had always been) a robot, would you consider yourself deceived for all the years you had known him/her?

Posted by: Nalvaros Sep 5 2004, 07:39 AM
I must disagree. Intelligence cannot simply be the output. Otherwise, we could say that computers today are "intelligent". Same with calculators. We throw in some numbers, and they return answers. We could do the same with a mathematician working with pen and paper. Both will return an answer (presumably correct), but it's a far cry to say that calculators are intelligent.

Posted by: Llywelyn Sep 5 2004, 01:47 PM
Zaragosa, you have claimed that we are "practically there."

Please show me where we have come close to having a computer "win" an unrestricted Turing Test such as the one used in the Loebner Prize.

Posted by: Deus Ex Machina Sep 5 2004, 04:26 PM
QUOTE(Nalvaros @ Sep 5 2004, 01:39 AM)
I must disagree. Intelligence cannot simply be the output. Otherwise, we could say that computers today are "intelligent". Same with calculators. We throw in some numbers, and they return answers. We could do the same with a mathematician working with pen and paper. Both will return an answer (presumably correct), but it's a far cry to say that calculators are intelligent.


Sure, computers today aren't intelligent by any practical definition of the word, but if intelligence is in more than the output, how would we tell today if a computer program, which seemed to display all the attributes of real intelligence (self-awareness, capable of learning, <insert other criteria here>) in its output, is really intelligent or not? Assuming we understood perfectly all that went on inside of its `brain', we wouldn't be able to compare it to our own (or any model of how intelligence should work on the lower level).

Posted by: zaragosa Sep 5 2004, 05:36 PM
QUOTE(Llywelyn @ Sep 5 2004, 03:47 PM)
Zaragosa, you have claimed that we are "practically there."

Please show me where we have come close to having a computer "win" an unrestricted Turing Test such as the one used in the Loebner Prize.

practically, adv.
3: All but; nearly; almost

Posted by: zaragosa Sep 5 2004, 05:38 PM
QUOTE(Nalvaros @ Sep 5 2004, 09:39 AM)
I must disagree. Intelligence cannot simply be the output. Otherwise, we could say that computers today are "intelligent".

The output rendered by computers today isn't yet the same as that from beings we would normally classify as 'intelligent'.

Posted by: Llywelyn Sep 5 2004, 06:28 PM
QUOTE(zaragosa @ Sep 5 2004, 10:36 AM)
practically, adv.
3: All but; nearly; almost


I quote myself:

QUOTE
Please show me where we have *come close* to having a computer "win" an unrestricted Turing Test such as the one used in the Loebner Prize.


Emphasis added.

Posted by: zaragosa Sep 5 2004, 06:59 PM
I never implied that the evolution towards AI was a gradual process. The hardware being relevant only to the degree that it can make fast calculations, the last hurdle is software. So, in my view, what's left is a conceptual, not a technological feat. That's why I say I think we're practically there.

Posted by: zkajan Sep 5 2004, 07:03 PM
QUOTE(Nalvaros @ Sep 5 2004, 03:39 AM)
I must disagree. Intelligence cannot simply be the output. Otherwise, we could say that computers today are "intelligent". Same with calculators. We throw in some numbers, and they return answers. We could do the same with a mathematician working with pen and paper. Both will return an answer (presumably correct), but it's a far cry to say that calculators are intelligent.

i disagree, intelligence is output. even with humans, all output is a result of input at some point. that doesn't mean stimulus -> immediate response. hell, you could be walking down the street and remember something that happened when you were five years old, and this causes you to think about that experience and some others and produce an output, and this thinking may be intentional or even hard-wired (the primitive brain is involved). so in that sense intelligence is a "black box" type scenario

Posted by: acow Sep 10 2004, 08:13 AM
"Thinking" or "conscious"? Or are they the same?

Cause then you get into the whole realm of what consciousness is, and to my knowledge, no one really has a good definition of that...

Posted by: zaragosa Sep 10 2004, 08:52 AM
Neither 'conscious', nor 'thinking', nor 'intelligence' has ever been satisfactorily defined, in my opinion. By diagnostically used definitions of intelligence (Wechsler tests etc.), even the earliest computers with some elementary programming and a functioning interface have us beat hands down.

Posted by: Llywelyn Sep 10 2004, 09:15 AM
Familiar with Searle's Chinese Room?

Posted by: Llywelyn Sep 10 2004, 09:22 AM
For those who don't know.

Baumgartner summarizes it:

QUOTE
He imagines himself locked in a room, in which there are various slips of paper with doodles on them, a slot through which people can pass slips of paper to him and through which he can pass them out; and a book of rules telling him how to respond to the doodles, which are identified by their shape. One rule, for example, instructs him that when squiggle-squiggle is passed in to him, he should pass squoggle-squoggle out. So far as the person in the room is concerned, the doodles are meaningless. But unbeknownst to him, they are Chinese characters, and the people outside the room, being Chinese, interpret them as such. When the rules happen to be such that the questions are paired with what the Chinese people outside recognize as a sensible answer, they will interpret the Chinese characters as meaningful answers. But the person inside the room knows nothing of this. He is instantiating a computer program -- that is, he is performing purely formal manipulations of uninterpreted patterns; the program is all syntax and has no semantics.
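
The rule book is, literally, just a lookup table. A Python sketch of the operator (using the doodle names from the summary above):

CODE
# the operator manipulates symbols with zero understanding of them
rule_book = {"squiggle-squiggle": "squoggle-squoggle"}

def operator(slip):
    # pure syntax, no semantics: the operator never learns Chinese this way
    return rule_book.get(slip, "blank stare")

print(operator("squiggle-squiggle"))  # looks like a sensible answer from outside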

Posted by: Llywelyn Sep 10 2004, 09:31 AM
QUOTE(zaragosa @ Sep 5 2004, 11:59 AM)
I never implied that the evolution towards AI was a gradual process. The hardware being relevant only to the degree that it can make fast calculations, the last hurdle is software. So, in my view, what's left is a conceptual, not a technological feat. That's why I say I think we're practically there.


In other words, we need a conceptual leap that we haven't made yet and which may not even be makable.

That is not, in my book, "practically" there by any stretch of the imagination.

Posted by: zaragosa Sep 10 2004, 09:55 AM
QUOTE(Llywelyn @ Sep 10 2004, 11:15 AM)
Familiar with Searle's Chinese Room?

Yes. I could reformulate it like this: As soon as we could program intelligence into a computer, we could 'program' it into a contraption with water running through tubes and valves. Surely that cannot be intelligence!
If we are to take that as an argument, it is an emotional one at best.

QUOTE(Llywelyn @ Sep 10 2004, 11:31 AM)
In other words, we need a conceptual leap that we haven't made yet and which may not even be makable.

That is not, in my book, "practically" there by any stretch of  the imagination.

Conceptual leaps happen every day. Someone might have stumbled across AI a few months ago and might now be working on getting it published (or patented...). Since we are already where we need to be (and beyond) hardware-wise, and what is left is essentially sitting down and programming (and educating the thing), and with no reason to assume anything so supernatural that we could not reproduce it, I stand by 'practically there'. But you may call it what you like.

Posted by: zaragosa Sep 10 2004, 10:07 AM
"You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!"
- Von Neumann

Posted by: Llywelyn Sep 10 2004, 10:58 AM
QUOTE(zaragosa @ Sep 10 2004, 02:55 AM)
Yes. I could reformulate it like this: As soon as we could program intelligence into a computer, we could 'program' it into a contraption with water running through tubes and valves. Surely that cannot be intelligence!


Nope, you are confusing the basic argument with a later implementation which was to counteract a specific objection. Using that extension, however, Searle argues that the system could be internalized completely within the operator and the operator would still have no understanding of Chinese in order to pass the test.

As Searle points out for that situation: the operator does not need to understand Chinese in order to give the illusion of understanding it based on a set of dictionary responses (this is the fundamental mechanism that modern AIs use in an effort to pass the unrestricted Turing Test, for the record). That is the foundation of the Chinese Room argument, and it holds water (if you will pardon the pun) quite nicely: the operator neither knows nor understands any Chinese, and will not learn Chinese simply by passing these symbols back and forth, but to the outside observer will appear fluent.
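
For a sense of how little machinery that takes, here is the pattern-and-template trick in miniature, the same basic move the original ELIZA made in the 1960s (the rules here are invented):

CODE
import re

rules = [
    (r"I feel (.*)", "Why do you feel {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Go on."),  # catch-all keeps the illusion alive
]

def reply(line):
    for pattern, template in rules:
        match = re.match(pattern, line, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(reply("I feel trapped in a room"))  # Why do you feel trapped in a room?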

QUOTE
Conceptual leaps happen every day.


Requiring one to get from point A to point B without any evidence that we are approaching such is, at best, a matter of blind faith.

QUOTE
Someone might have stumbled across AI a few months ago and might now be working on getting it published (or patented...).


It is just a small leap to gravitons as well...

Yet we haven't found them yet. We might find them tomorrow, but we might never find them. They might not even exist. Saying that we have "practically found them" would be scientifically irresponsible, and saying that we will discover them "any day now" is a matter of pure and unadulterated faith.

QUOTE
Since we are already where we need to be (and beyond) hardware-wise, and what is left is essentially sitting down and programming (and educating the thing), and with no reason to assume anything so supernatural that we could not reproduce it, I stand by 'practically there'. But you may call it what you like.


Quantum Entanglement != supernatural
Hypercomputability != supernatural (Penrose's argument).
The theory that cognition is noncomputable != supernatural.

Simply, the statement that the applicability of the Church-Turing Thesis has been overstated does not require appealing to anything vaguely supernatural.

Posted by: zaragosa Sep 10 2004, 02:56 PM
QUOTE(Llywelyn @ Sep 10 2004, 12:58 PM)
That is the foundation of the Chinese Room argument, and it holds water (if you will pardon the pun) quite nicely: the operator neither knows nor understands any Chinese, and will not learn Chinese simply by passing these symbols back and forth,

Why does that matter? I don't think anyone is trying to say that any individual part of the human brain possesses intelligence (or understanding).

QUOTE
Saying that we have "practically found them" would be scientifically irresponsible, and saying that we will discover them "any day now" is a matter of pure and unadulterated faith.

Which is why I've added qualifiers to every post in the line of "in my opinion," and "I (personally) think." I wouldn't dream of denying that this is a belief. One which I believe to be a reasonable one (for reasons I have laid out), but a belief nonetheless.

QUOTE
The theory that cognition is noncomputable != supernatural.

If it can be sufficiently described, it can be simulated. If it cannot, what is the point of discussing it (beyond passing time)?

Posted by: Llywelyn Sep 10 2004, 09:57 PM
QUOTE(zaragosa @ Sep 10 2004, 07:56 AM)
Why does that matter? I don't think anyone is trying to say that any individual part of the human brain possesses intelligence (or understanding).


Searle already countered this objection when he claimed that the system could be internalized to the operator.

QUOTE
I wouldn't dream of denying that this is a belief. One which I believe to be a reasonable one (for reasons I have laid out), but a belief nonetheless.


The leader of the ALICE bot project, a finalist in the upcoming 2004 Loebner Prize and winner in 2000 and 2001 IIRC, has said he believes that "in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." This is not exactly a glowing endorsement of the "right around the corner" theory. IIRC he has also said that ALICE, while significantly more advanced, is not that fundamentally different from the original ELIZA.


QUOTE
If it can be sufficiently described, it can be simulated.


Can consciousness be "sufficiently described"?

It also can't necessarily be computationally simulated unless it meets certain criteria (such as being a discrete state machine). Hence my earlier point about the applicability of the Church-Turing thesis.

Posted by: kindfluffysteve Sep 11 2004, 01:11 AM
what is knowledge?

what is meaning?




what is the difference between Data and Information?
Data is useless 1's and 0's: unordered, perhaps in a stream, with patterns within but unresolved. raw stuff.

Information is organised data. A network of facts.

Organisation adds meaning because organisation transforms raw 1's and 0's into something more: more than the sum of its parts.

Posted by: zaragosa Sep 11 2004, 06:04 AM
QUOTE(Llywelyn @ Sep 10 2004, 11:57 PM)
Searle already countered this objection when he claimed that the system could be internalized to the operator.

At which point the operator, possessing the entire system, would be indistinguishable from a human, which was the goal.

QUOTE
Can consciousness be "sufficiently described"?

Until otherwise demonstrated, I don't see why not. I'm working on it ;)

Posted by: Sephiroth Sep 12 2004, 06:39 PM
Isn't there some kind of artificial intelligence in computer games already?

Posted by: zaragosa Sep 12 2004, 07:19 PM
In a limited way, just like in chess, yes.

Posted by: Sephiroth Sep 12 2004, 07:53 PM
Maybe we can build on that.

Posted by: Deus Ex Machina Sep 12 2004, 09:31 PM
QUOTE(Sephiroth @ Sep 12 2004, 01:53 PM)
Maybe we can build on that.

IMO, the way chess (et al.) AI works is much too simplistic to be used as a starting point for real AI. IIRC, chess AI is just a combination of heuristics and look-ahead stuff.
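
The heuristic half is often little more than a weighted material count, something like this sketch (the piece weights are the usual textbook values), which then gets plugged into a look-ahead search like the alpha-beta sketch earlier in the thread:

CODE
# score a position by counting material; positive favors White (uppercase)
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(board):
    score = 0
    for square in board:  # board as a flat string of piece letters, "." for empty
        value = PIECE_VALUES.get(square.upper(), 0)
        score += value if square.isupper() else -value
    return score

print(evaluate("RNBQKBNR" + "P" * 8 + "." * 32 + "p" * 8 + "rnbqkbnr"))  # 0: even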

Posted by: zaragosa Sep 12 2004, 09:35 PM
Well, humans use heuristics, too, we're just better at them. Computers get much of their playing strength from number crunching and encyclopaedic knowledge. For now, we haven't gotten computers to understand more abstract chess concepts like power bases and field stress...

Posted by: zaragosa Sep 13 2004, 07:12 PM
But then again, I used to teach chess, and I didn't get some kids to understand those either...

Posted by: libvertaruan Sep 15 2004, 06:49 PM
QUOTE(Sephiroth @ Sep 12 2004, 03:53 PM)
Maybe we can build on that.


No. That would be far more complicated.

http://www.paulgraham.com/progbot.html

It would be far easier and less complicated to use bottom-up programming and give it the ability to program itself, however that could be done. Do our experiences not program us/do we not use our experiences to program ourselves (unconsciously)?

Posted by: Deus Ex Machina Sep 18 2004, 04:55 AM
QUOTE(zaragosa @ Sep 12 2004, 03:35 PM)
Well, humans use heuristics, too, we're just better at them. Computers get much of their playing strength from number crunching and encyclopaedic knowledge. For now, we haven't gotten computers to understand more abstract chess concepts like power bases and field stress...

I was under the impression that that's what heuristics were: the ideas of power bases et al (whatever the heck they are) compressed/simplified into a few (or a lot of) rules that work fairly well under most situations.

Posted by: zaragosa Sep 18 2004, 06:27 PM
Yes, those are heuristics, and computers have a very hard time with them because it's very hard to formulate them analytically (often because we don't know what the 'simple rules' are).

Posted by: Forben Sep 22 2004, 11:09 AM
couldn't we set up a genetic algorithm so that instead of having a base formula, you have a limit on the max and min responses (I forget what the mathematical theory is called, but basically f = x or whatever, with a limit of + or - 7 or something like that), and technically have no formula to go from the question to the answer?

I guess a partial chaos-theory type thing might achieve parts of the individuality.

computers started out with an option of base 10 instead of base 2 computation. the problem with base 10 was that with the old technology of the time it was too hard to determine anything finer than whether the electricity was on or off. I think the standard base 2 would not allow for the non-'black and white' issues it would have to deal with.

society also dictates specific learned responses to certain stimuli...

Posted by: zaragosa Sep 22 2004, 02:44 PM
QUOTE(Forben @ Sep 22 2004, 01:09 PM)
couldn't we set up a genetic algorithm so that instead of having a base formula, you have a limit on the max and min responses (I forget what the mathematical theory is called, but basically f = x or whatever, with a limit of + or - 7 or something like that), and technically have no formula to go from the question to the answer?

A sine wave or Walsh wave could do that, but what would you do with such a function?

QUOTE
I guess a partial chaos-theory type thing might achieve parts of the individuality.

My guess, too. Lots of 'individuality' in computer-simulated responses can be generated with a small idiosyncratic bias in a random pattern added to whatever you're doing.
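
Concretely, something like this (the size of the nudge is arbitrary): two agents running identical logic give slightly different answers.

CODE
import random

class Agent:
    def __init__(self, seed):
        self.bias = random.Random(seed).uniform(-0.1, 0.1)  # fixed, personal nudge

    def respond(self, stimulus):
        return stimulus + self.bias  # same rule, individual flavor

a, b = Agent(seed=1), Agent(seed=2)
print(a.respond(0.5), b.respond(0.5))  # slightly different "personalities"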

QUOTE
society also dictates specific learned responses to certain stimuli...

Yup. And there's so much of that, in fact, that it's not feasible to hardcode a computer with all that information, so any AI we build will likely have to be educated.

Posted by: gnuneo Jun 1 2006, 11:15 PM
intelligence =/= consciousness

QUOTE
I must disagree. Intelligence cannot simply be the output. Otherwise, we could say that computers today are "intelligent". Same with calculators. We throw in some numbers, and they return answers. We could do the same with a mathematician working with pen and paper. Both will return an answer (presumably correct), but it's a far cry to say that calculators are intelligent.


but that is exactly what intelligence is; what they are not is *conscious*.

QUOTE(Llyw)
Why are they different? One is a model, one is an actual system. One is using a set of algorithms to anticipate and produce sounds in an attempt to mimic the game, the other actually involves a game being played.


QUOTE
Sure, computers today aren't intelligent by any practical definition of the word, but if intelligence is in more than the output, how would we tell today if a computer program, which seemed to display all the attributes of real intelligence (self-awareness, capable of learning, <insert other criteria here>) in its output, is really intelligent or not? Assuming we understood perfectly all that went on inside of its `brain', we wouldn't be able to compare it to our own (or any model of how intelligence should work on the lower level).


the problem of output vs. actual consciousness: we *cannot know* others' actual consciousness, we can only assume that others *are* conscious. This is a meta-problem of being, not just restricted to AC study. However, despite this necessary caveat, it also to some degree goes without saying that a program that merely responds 'correctly' is, with respect to AC, inferior to one that analyses with volition.

QUOTE(zara)
The output rendered by computers today isn't yet the same as that from beings we would normally classify as 'intelligent'.


actually, yes it is: check out 'IQ tests'. on many basic, supposedly IQ-measuring tests a computer will find the answer well before any human could (sudoku, logic problems, etc.).

QUOTE
what is knowledge?

what is meaning?




what is the difference between Data and Information?
Data is useless 1's and 0's: unordered, perhaps in a stream, with patterns within but unresolved. raw stuff.

Information is organised data. A network of facts.

Organisation adds meaning because organisation transforms raw 1's and 0's into something more: more than the sum of its parts.


indeed: it takes a consciousness to put meaning into data. current programs are merely data processing; they cannot put meaning (Quality? ;)) into items. Is that emotion?

QUOTE(zara)
Until otherwise demonstrated, I don't see why not. I'm working on it [consciousness] ;)


please define "consciousness".

Oh, such joy :D :D :D

