" width="8" height="8"/> Create a thinking machine, how to program it?
necrolyte | Post #1
OK, this is bordering on philosophy, so I might post something similar in the philosophy section, but this will deal more with the scientific side.
How would one program a computer to:
(1) be self-aware;
(2) come to a "best possible" solution when no perfect solution can be found;
(3) have a desire to learn;
(4) feel emotion;
(5) send its thoughts through logical loops which take previous memories and thoughts into account;
(6) formulate opinions based on facts?
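Of the six items, (2) is the one with the most established machinery behind it today: when no perfect solution can be found, a program can fall back on heuristic search and simply keep the best candidate discovered within a fixed budget. A minimal sketch of that idea (the function names and the toy objective are illustrative only, not anything proposed in this thread):

```python
import math
import random

def hill_climb(score, start, neighbour, steps=10_000):
    """Return the best candidate found within a fixed budget of random tweaks."""
    current = start
    for _ in range(steps):
        candidate = neighbour(current)
        if score(candidate) > score(current):   # keep only improvements
            current = candidate
    return current

# Toy objective: a bumpy function with many local optima and no obvious
# closed-form answer; the search settles for "best found", not "perfect".
score = lambda x: -(x - 3.2) ** 2 + 0.3 * math.sin(20 * x)
best = hill_climb(score, start=0.0,
                  neighbour=lambda x: x + random.uniform(-0.1, 0.1))
print(best)   # the best x found within the budget, not a guaranteed optimum
```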
zaragosa | Post #31
QUOTE(Llywelyn @ Sep 3 2004, 01:32 PM): "I can calculate how billiard balls are going to behave on a smooth table, I can display this output on the screen, but predicting it and displaying it is in no way equivalent to actually rolling the balls on the table."

But that's not what's going on. We have only the output to evaluate. To change the analogy, imagine that we are measuring some form of output of the game (say, the sound). Now suppose we have two boxes, one with a real game being played inside and one with a simulation running inside, and we listen to both. Why would one be qualitatively different from the other?

Edit: To anticipate a discussion that may follow: if you were to find out that a friend of yours was in fact (and had always been) a robot, would you consider yourself deceived for all the years you had known him/her?

This post has been edited by zaragosa: Sep 3 2004, 01:06 PM
Llywelyn | Post #32
QUOTE(zaragosa @ Sep 3 2004, 05:56 AM): "But that's not what's going on. We have only the output to evaluate. To change the analogy, imagine that we are measuring some form of output of the game (say, the sound). Now suppose we have two boxes, one with a real game being played inside and one with a simulation running inside, and we listen to both. Why would one be qualitatively different from the other?"

That is exactly what is going on. The fact that the process is black-boxed just means that it is even more difficult to know whether the system in question exists inside the set of circuits, and makes it even less likely.

Let's look at your example. From outside the box they may sound the same, but one is simply reproducing the sounds based on a programmed and highly algorithmic pattern. The other is producing them as a consequence of playing the game. In the first case the game itself doesn't have to be played anywhere; in the second it is being played in real time. That's a huge difference.

Why are they different? One is a model, one is an actual system. One is using a set of algorithms to anticipate and produce sounds in an attempt to mimic the game; the other actually involves a game being played.

Now, given a set of changing inputs and a set of outputs (rather than just analyzing a set of outputs), we can analyze it the same way we analyze a black-boxed circuit, and, as a matter of faith, I do not believe a computer can beat a set of human judges. Enter the Turing Test. Show me an AI which is close to passing an unrestricted Turing Test, since you seem to think we are "practically there" on creating a strong AI.

Last year, first place among the AIs in the Loebner Prize went to Jabberwock. Not even close:

You:...Where do you live?
Jabb:...In the Great Realm of Imagination, care to meet me there?
You:...Sure, I'll meet you at half a figment around the bend from the Disney ride.
Jabb:...You think you can meet me? Not that I think you can't, guest
Ferran | Post #33
You:...You're flawed.
Jabb:...Sure thing. Most of our suspicions of others are aroused by our knowledge of ourselves.

But I just felt I had to make that point strongly.
zaragosa | Post #34
QUOTE(Llywelyn @ Sep 3 2004, 03:55 PM): "That is exactly what is going on. The fact that the process is black-boxed just means that it is even more difficult to know whether the system in question exists inside the set of circuits, and makes it even less likely. (...) Why are they different? One is a model, one is an actual system. One is using a set of algorithms to anticipate and produce sounds in an attempt to mimic the game; the other actually involves a game being played."

You're missing the point. I'm saying that intelligence is not in the process, but in the output itself (which is of course identical if the model is good). Consider: if you were to find out that a friend of yours was in fact (and had always been) a robot, would you consider yourself deceived for all the years you had known him/her?
Nalvaros | Post #35
I must disagree. Intelligence cannot simply be the output. Otherwise, we could say that computers today are "intelligent". Same with calculators: we throw in some numbers, and they return answers. We could do the same with a mathematician working with pen and paper. Both will return an answer (presumably correct), but it's a far cry to say that calculators are intelligent.

This post has been edited by Nalvaros: Sep 5 2004, 07:40 AM
Llywelyn | Post #36
Zaragosa, you have claimed that we are "practically there."
Please show me where we have come close to having a computer "win" an unrestricted Turing Test such as the one used in the Loebner Prize.
Deus Ex Machina | Post #37
QUOTE(Nalvaros @ Sep 5 2004, 01:39 AM): "I must disagree. Intelligence cannot simply be the output. Otherwise, we could say that computers today are 'intelligent'. Same with calculators: we throw in some numbers, and they return answers. We could do the same with a mathematician working with pen and paper. Both will return an answer (presumably correct), but it's a far cry to say that calculators are intelligent."

Sure, computers today aren't intelligent by any practical definition of the word, but if intelligence is in more than the output, how would we tell today whether a computer program which seemed to display all the attributes of real intelligence (self-awareness, capability of learning, <insert other criteria here>) in its output is really intelligent or not? Assuming we understood perfectly all that went on inside its 'brain', we wouldn't be able to compare it to our own (or to any model of how intelligence should work at the lower level).
zaragosa | Post #38
zaragosa | Post #39
Llywelyn | Post #40
zaragosa | Post #41
I never implied that the evolution towards AI was a gradual process. The hardware being relevant only to the degree that it can make fast calculations, the last hurdle is software. So, in my view, what's left is a conceptual, not a technological feat. That's why I say I think we're practically there.
zkajan | Post #42
QUOTE(Nalvaros @ Sep 5 2004, 03:39 AM): "I must disagree. Intelligence cannot simply be the output. Otherwise, we could say that computers today are 'intelligent'. Same with calculators: we throw in some numbers, and they return answers. We could do the same with a mathematician working with pen and paper. Both will return an answer (presumably correct), but it's a far cry to say that calculators are intelligent."

I disagree; intelligence is output. Even with humans, all output is a result of input at some point. That doesn't mean stimulus -> immediate response. Hell, you could be walking down the street and remember something that happened when you were five years old, and this causes you to think about that experience and some others and produce an output, and this thinking may be intentional or even hard-wired (the primitive brain is involved). So in that sense intelligence is a "black box" type of scenario.
acow | Post #43
"Thinking" or "conscious"? Or are they the same?
'Cause then you get into the whole realm of what consciousness is, and to my knowledge, no one really has a good definition of that...
zaragosa | Post #44
Neither 'conscious' nor 'thinking' nor 'intelligence' has ever been satisfactorily defined, in my opinion. By diagnostically used definitions of intelligence (Wechsler tests, etc.), even the earliest computers with some elementary programming and a functioning interface have us beat hands down.
Llywelyn | Post #45
Familiar with Searle's Chinese Room?
Llywelyn | Post #46
For those who don't know, Baumgartner summarizes it:

QUOTE: "He imagines himself locked in a room, in which there are various slips of paper with doodles on them; a slot through which people can pass slips of paper to him and through which he can pass them out; and a book of rules telling him how to respond to the doodles, which are identified by their shape. One rule, for example, instructs him that when squiggle-squiggle is passed in to him, he should pass squoggle-squoggle out. So far as the person in the room is concerned, the doodles are meaningless. But unbeknownst to him, they are Chinese characters, and the people outside the room, being Chinese, interpret them as such. When the rules happen to be such that the questions are paired with what the Chinese people outside recognize as a sensible answer, they will interpret the Chinese characters as meaningful answers. But the person inside the room knows nothing of this. He is instantiating a computer program -- that is, he is performing purely formal manipulations of uninterpreted patterns; the program is all syntax and has no semantics."
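Computationally, the rulebook in that scenario is nothing more than a lookup table keyed on the shape of the input string. A minimal sketch of that kind of purely syntactic manipulation (the specific entries below are invented placeholders standing in for Searle's squiggle/squoggle pairings, not anything from Baumgartner's summary):

```python
# The "rulebook": replies are chosen purely by the shape of the incoming
# string. Nothing in this program represents what the symbols mean.
RULEBOOK = {
    "你好吗": "我很好，谢谢",        # opaque doodles to the operator,
    "你叫什么名字": "我叫王先生",     # sensible Chinese to the people outside
}

def operator(slip: str) -> str:
    # Purely formal manipulation: match the incoming doodle and hand back
    # the doodle the rulebook pairs with it.
    return RULEBOOK.get(slip, "请再说一遍")   # stock slip for unknown shapes

print(operator("你好吗"))   # looks like understanding from outside the room
```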
Llywelyn | Post #47
QUOTE(zaragosa @ Sep 5 2004, 11:59 AM): "I never implied that the evolution towards AI was a gradual process. The hardware being relevant only to the degree that it can make fast calculations, the last hurdle is software. So, in my view, what's left is a conceptual, not a technological feat. That's why I say I think we're practically there."

In other words, we need a conceptual leap that we haven't made yet and which may not even be makeable. That is not, in my book, "practically there" by any stretch of the imagination.
zaragosa | Post #48
QUOTE(Llywelyn @ Sep 10 2004, 11:15 AM): "Familiar with Searle's Chinese Room?"

Yes. I could reformulate it like this: as soon as we could program intelligence into a computer, we could 'program' it into a contraption with water running through tubes and valves. Surely that cannot be intelligence! If we are to take that as an argument, it is an emotional one at best.

QUOTE(Llywelyn @ Sep 10 2004, 11:31 AM): "In other words, we need a conceptual leap that we haven't made yet and which may not even be makeable. That is not, in my book, 'practically there' by any stretch of the imagination."

Conceptual leaps happen every day. Someone might have stumbled across AI a few months ago and might now be working on getting it published (or patented...). Since we are already where we need to be (and beyond) hardware-wise, and what is left is essentially sitting down and programming (and educating the thing), and with no reason to assume anything so supernatural that we could not reproduce it, I stand by 'practically there'. But you may call it what you like.
zaragosa | Post #49
"You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!"
- Von Neumann
Llywelyn | Post #50
QUOTE(zaragosa @ Sep 10 2004, 02:55 AM): "Yes. I could reformulate it like this: as soon as we could program intelligence into a computer, we could 'program' it into a contraption with water running through tubes and valves. Surely that cannot be intelligence!"

Nope, you are confusing the basic argument with a later extension which was meant to counteract a specific objection. Using that extension, Searle argues that the system could be internalized completely within the operator, and the operator could still pass the test with no understanding of Chinese. As Searle points out for that situation, the operator does not need to understand Chinese in order to give the illusion of understanding it based on a set of dictionary responses (this is the fundamental mechanism that modern AIs use in an effort to pass the unrestricted Turing Test, for the record). That is the foundation of the Chinese Room argument, and it holds water (if you will pardon the pun) quite nicely: the operator neither knows nor understands any Chinese, and will not learn Chinese simply by passing these symbols back and forth, but to the outside observer will appear fluent.

QUOTE: "Conceptual leaps happen every day."

Requiring one to get from point A to point B, without any evidence that we are approaching such a leap, is at best a matter of blind faith.

QUOTE: "Someone might have stumbled across AI a few months ago and might now be working on getting it published (or patented...)."

It is just a small leap to gravitons as well... yet we haven't found them. We might find them tomorrow, but we might never find them. They might not even exist. Saying that we have "practically found them" would be scientifically irresponsible, and saying that we will discover them "any day now" is a matter of pure and unadulterated faith.

QUOTE: "Since we are already where we need to be (and beyond) hardware-wise, and what is left is essentially sitting down and programming (and educating the thing), and with no reason to assume anything so supernatural that we could not reproduce it, I stand by 'practically there'. But you may call it what you like."

Quantum entanglement != supernatural. Hypercomputability != supernatural (Penrose's argument). The theory that cognition is noncomputable != supernatural. Simply put, the statement that the applicability of the Church-Turing Thesis has been overstated does not require appealing to anything vaguely supernatural.

This post has been edited by Llywelyn: Sep 10 2004, 11:00 AM
zaragosa | Post #51
QUOTE(Llywelyn @ Sep 10 2004, 12:58 PM): "That is the foundation of the Chinese Room argument, and it holds water (if you will pardon the pun) quite nicely: the operator neither knows nor understands any Chinese, and will not learn Chinese simply by passing these symbols back and forth (...)"

Why does that matter? I don't think anyone is trying to say that any individual part of the human brain possesses intelligence (or understanding).

QUOTE: "Saying that we have 'practically found them' would be scientifically irresponsible, and saying that we will discover them 'any day now' is a matter of pure and unadulterated faith."

Which is why I've added qualifiers along the lines of "in my opinion" and "I (personally) think" to every post. I wouldn't dream of denying that this is a belief. One which I believe to be a reasonable one (for reasons I have laid out), but a belief nonetheless.

QUOTE

If it can be sufficiently described, it can be simulated. If it cannot, what is the point of discussing it (beyond passing time)?

This post has been edited by zaragosa: Sep 10 2004, 02:58 PM
Llywelyn | Post #52
QUOTE(zaragosa @ Sep 10 2004, 07:56 AM): "Why does that matter? I don't think anyone is trying to say that any individual part of the human brain possesses intelligence (or understanding)."

Searle already countered this objection when he claimed that the system could be internalized to the operator.

QUOTE: "I wouldn't dream of denying that this is a belief. One which I believe to be a reasonable one (for reasons I have laid out), but a belief nonetheless."

The leader of the ALICE bot project, a finalist in the upcoming 2004 Loebner Prize and winner in, IIRC, 2000 and 2001, has said that he "believe[s] that in about fifty years' time it will be possible to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning." This is not exactly a glowing endorsement of the "right around the corner" theory. IIRC he has also said that ALICE, while significantly more advanced, is not that fundamentally different from the original ELIZA.

QUOTE: "If it can be sufficiently described, it can be simulated."

Can consciousness be "sufficiently described"? It also can't necessarily be computationally simulated unless it meets certain criteria (such as being a discrete state machine). Hence my earlier point about the applicability of the Church-Turing thesis.
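For context, the ELIZA mechanism referred to here is little more than keyword spotting plus canned templates that echo part of the user's input back. A cut-down sketch in that spirit (the patterns are invented for illustration, not taken from ELIZA or ALICE):

```python
import re

# Keyword rules in the spirit of ELIZA: spot a pattern, echo part of the
# input back inside a canned template. No model of meaning anywhere.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

def reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please go on."        # stock fallback when nothing matches

print(reply("I am worried about the Turing Test"))
# -> "How long have you been worried about the Turing Test?"
```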
kindfluffysteve | Post #53
What is knowledge?

What is meaning? What is the difference between data and information?

Data is useless 1s and 0s: unordered, perhaps in a stream, with patterns within it but unresolved. Raw stuff.

Information is organised data, a network of facts. Organisation adds meaning, because organisation transforms raw 1s and 0s into something more: more than the sum of its parts.
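One way to make that distinction concrete: the same bytes count as mere data until a structure is imposed on them. A small illustration (the two-integer record layout is invented for the example):

```python
import struct

# The same 8 bytes, first treated as raw data, then with a structure imposed.
raw = bytes([0x00, 0x00, 0x07, 0xD4, 0x00, 0x00, 0x00, 0x2A])

print(raw.hex())              # unorganised: just a stream of bits

# Impose an (invented) layout: two big-endian 32-bit integers read as
# "year" and "count". The organisation is what supplies the meaning.
year, count = struct.unpack(">II", raw)
print(year, count)            # 2004 42
```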
zaragosa | Post #54
QUOTE(Llywelyn @ Sep 10 2004, 11:57 PM): "Searle already countered this objection when he claimed that the system could be internalized to the operator."

At which point the operator, possessing the entire system, would be indistinguishable from a human, which was the goal.

QUOTE: "Can consciousness be 'sufficiently described'?"

Until otherwise demonstrated, I don't see why not. I'm working on it ;)
Sephiroth | Post #55
Isn't there some kind of artificial intelligence in computer games already, anyway?
zaragosa | Post #56
In a limited way, just like in chess, yes.
Sephiroth | Post #57
Maybe we can build on that.
Deus Ex Machina | Post #58
zaragosa | Post #59
Well, humans use heuristics too; we're just better at them. Computers get much of their playing strength from number crunching and encyclopaedic knowledge. For now, we haven't gotten computers to understand more abstract chess concepts like power bases and field stress...
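The "number crunching" here is, at bottom, fixed-depth minimax search over the game tree with a hand-written heuristic evaluation at the leaves. A stripped-down sketch (the game interface, legal_moves / apply / evaluate / is_over, is assumed for illustration, not any real engine's API):

```python
# Minimax with a fixed depth: brute-force lookahead plus a heuristic
# evaluation at the leaves; strength from crunching, not understanding.
def minimax(state, depth, maximising, game):
    if depth == 0 or game.is_over(state):
        return game.evaluate(state)          # heuristic score, e.g. material count
    scores = (
        minimax(game.apply(state, move), depth - 1, not maximising, game)
        for move in game.legal_moves(state)
    )
    return max(scores) if maximising else min(scores)

def best_move(state, depth, game):
    # Pick the move whose subtree scores best, assuming the side to move
    # is the maximising player.
    return max(
        game.legal_moves(state),
        key=lambda m: minimax(game.apply(state, m), depth - 1, False, game),
    )
```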
zaragosa | Post #60
But then again, I used to teach chess, and I never got some kids to understand those either...