" width="8" height="8"/> Create a thinking machine, how to program it?
necrolyte
post Sep 1 2004, 11:17 PM
Post #1





OK, this is bordering on philosophy, so I might post something similar in the philosophy section, but this will deal more with the scientific side.

How would one program a computer to:
(1) be self-aware
(2) come to a "best possible" solution when no perfect solution can be found
(3) have a desire to learn
(4) feel emotion
(5) send its thoughts through logical loops that consider previous memories and thoughts
(6) formulate opinions based on facts
Ferran
post Sep 1 2004, 11:34 PM
Post #2





(1) I would think that the easiest way to make a computer self-aware is to program it to create variables (output) and then act upon the variables it creates (input)? Though I could be way off.

(2) Go to Google and search for something. It'll show percentages next to the pages you find, indicating relevancy. To make a computer choose the "best possible" solution, have it choose the one closest to 100%, by using whatever method Google uses.

(3) Oo, that's tricky. You'd have to figure out how to make it desire something in the first place!

(4) Something to do with dealing with input?

(5) Have it programmed to save input and output in reference to each other, so that when it deals with a certain kind of input, its output can be determined by the closest input it has received before (rough sketch at the end of this post).

(6) I'm not gonna touch this one.

--- Of course, I'm no programmer, so I may be full of BS in regards to this... but most of it seems pretty simple, no?
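A rough sketch of the idea in (5), in Python. The word-overlap similarity and all the names here are made-up stand-ins, not anyone's actual design: store the (input, output) pairs the program has seen, and answer a new input by reusing the output of the closest stored input.

CODE
# Toy "memory" that stores (input, output) pairs and answers new inputs
# by reusing the output of the most similar stored input.
# The word-overlap similarity is just a stand-in; any distance would do.

def similarity(a, b):
    """Crude similarity: fraction of shared words between two strings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / len(wa | wb)

class Memory:
    def __init__(self):
        self.pairs = []          # list of (input, output) seen so far

    def learn(self, seen_input, produced_output):
        self.pairs.append((seen_input, produced_output))

    def respond(self, new_input):
        if not self.pairs:
            return None          # nothing remembered yet
        # pick the stored pair whose input is closest to the new one
        best_input, best_output = max(
            self.pairs, key=lambda p: similarity(p[0], new_input))
        return best_output

m = Memory()
m.learn("a cat walks in", "chase it")
m.learn("food is offered", "eat it")
print(m.respond("a small cat walks by"))   # -> "chase it"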
Sephiroth
post Sep 2 2004, 01:27 AM
Post #3





I'm a Computer Science major and this is way beyond my skills.
Telum
post Sep 2 2004, 01:28 AM
Post #4





A computer that runs on binary circuits can't be self-aware. You need more than a collection of yes/no's to become aware.
Sephiroth
post Sep 2 2004, 01:37 AM
Post #5





Computers don't have the speed needed to run such a program. It wouldn't be as fast as the human brain.

Just for a comparison, imagine two copies of a list of animals, one memorized by a person and one saved on a hard drive. In the time the computer finds one type of feline, the human would have found four or five. Computers are linear, while the brain contains many interconnected webs of information.

I don't think emotion would translate well to code.
zkajan
post Sep 2 2004, 02:17 AM
Post #6





I remember reading that animals make decisions based on "fuzzy logic", that is to say, things influence their decisions that shouldn't necessarily do so. For example, in politics people will choose a leader they think has nice hair, even though his hair won't have any effect on his policies or their effect on the people.
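For reference, fuzzy logic in the textbook sense is about degrees of truth rather than hard yes/no answers. A minimal sketch of that idea; the "tallness" thresholds below are invented purely for illustration:

CODE
# Minimal fuzzy-logic sketch: "tallness" as a degree between 0 and 1
# instead of a hard yes/no. Thresholds are arbitrary, just for illustration.

def tallness(height_cm):
    """Degree of membership in the fuzzy set 'tall' (0.0 to 1.0)."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / 30.0     # linear ramp in between

def fuzzy_and(a, b):
    return min(a, b)                    # one common choice of AND

def fuzzy_or(a, b):
    return max(a, b)

# A 175 cm person is "somewhat tall" rather than simply tall or not tall.
print(tallness(175))                              # 0.5
print(fuzzy_and(tallness(175), tallness(185)))    # 0.5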
necrolyte
post Sep 2 2004, 03:02 AM
Post #7





I use fuzzy logic when I'm trying to decide which kitten to buy... (OK, horrible joke, I know.)


The human mind is in essence a super-complex biological computer. Recreating it, even with code, must be possible, since all thoughts that pass through our mind are coded. Our mind simply uses more complex codes.

It also uses logical circuits and networks, as we're saying.

So Telum, couldn't you use basic code as the foundation for constructing more complex code? Say, the way basic binary code can serve as the foundation for other codes? Not being a computer expert, excuse my ignorance if I'm missing something.

Sephiroth: with computer technology advancing at the rapid rate it is, aren't technological limitations really only temporary?

This is all like evolution. Basic nervous systems cannot do much and are not self-aware. They only react in programmed ways, similar to how a computer acts. If you hit the H key, an H comes up in the word processor. If you stimulate a jellyfish's tentacle nerve, it retracts the tentacle to consume whatever the tentacle has caught.

Emotion is the last required brick, naturally. Being the most abstract, it would probably require us to more fully understand the methods we used to create its precursors... the self-awareness and the capacity to learn.

...Ferran, I'm thinking about your ideas now and I'm kind of tired so I'll respond later :)
Deus Ex Machina
post Sep 2 2004, 03:52 AM
Post #8





QUOTE(Telum @ Sep 1 2004, 07:28 PM)
A computer that runs on binary circuits can't be self-aware. You need more than a collection of yes/no's to become aware.
*


On the yes/no level, that may be true. However, consider that your brain is a similar collection of binary circuits: if they get input X, they fire; otherwise, they don't (well, that may be a tad simplified...). It's once you start taking meaning out of the circuits (e.g. a string of hex into ASCII) that self-awareness et al. arises. On the switches level, the machine is still being a machine. However, the interactions between the switches hold a separate level of data.
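To make the hex-into-ASCII point concrete, a tiny sketch: the same bytes read once as bare switch values and once as the text we layer on top of them (the string itself is arbitrary).

CODE
# The same raw bytes, read at two levels: as bare numbers (the "switches")
# and as ASCII text (the meaning we layer on top of them).

raw = bytes.fromhex("48656c6c6f")   # just five byte values
print(list(raw))                    # [72, 101, 108, 108, 111] -- the switches
print(raw.decode("ascii"))          # "Hello" -- the meaning we read into them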

To attempt to answer necrolyte's question, the key wouldn't be to necessarily code in the behaviors you mentioned, but to create a system of organization of data which allows them to naturally arise.

Anywho, I just finished a book (Gödel, Escher, Bach: An Eternal Golden Braid, by Douglas R. Hofstadter) in which the author talks about AI, self-awareness, etc. It's a good read, and old/famous enough that a decently sized library has a fairly good chance of carrying it.
libvertaruan
post Sep 2 2004, 04:05 AM
Post #9





QUOTE
To attempt to answer necrolyte's question, the key wouldn't be to necessarily code in the behaviors you mentioned, but to create a system of organization of data which allows them to naturally arise.


It's called bottom-up programming, and that is, I believe, how our brains are programmed to learn.
Russian
post Sep 2 2004, 04:07 AM
Post #10





QUOTE
How would one program a computer to:
(1) be self-aware


You narrow the scope. Do you want a computer to be 'human'? That's impossible.

How about an autopilot program? It can monitor its speed, its height, and the weather it's flying into, and change its operations accordingly. It's as 'self-aware' as anything can get. And such programs do exist. You program algorithms for every possible situation, and then you rigorously test it to make sure you haven't forgotten anything. Lots of man-hours and lots of money later, you have a program that's self-aware, but only within a specific scope. It can still make faulty decisions.
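Very roughly, such a program is a rule loop over sensor readings. A toy sketch of that shape, with all the targets, thresholds, and sensor names invented for illustration:

CODE
# Toy "autopilot" loop: read sensors, compare against targets, adjust.
# Everything here (targets, rules, sensor dict) is invented for illustration.

TARGET_ALTITUDE = 10000    # metres
TARGET_SPEED = 250         # m/s

def decide(sensors):
    """Return a list of adjustments based on pre-programmed rules."""
    actions = []
    if sensors["altitude"] < TARGET_ALTITUDE - 100:
        actions.append("pitch up")
    elif sensors["altitude"] > TARGET_ALTITUDE + 100:
        actions.append("pitch down")
    if sensors["speed"] < TARGET_SPEED - 10:
        actions.append("increase throttle")
    elif sensors["speed"] > TARGET_SPEED + 10:
        actions.append("decrease throttle")
    if sensors["storm_ahead"]:
        actions.append("request new heading")
    return actions or ["hold course"]

print(decide({"altitude": 9500, "speed": 262, "storm_ahead": True}))
# -> ['pitch up', 'decrease throttle', 'request new heading']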

QUOTE
(2) come to a "best possible" solution when no perfect solution is findable


We can do that. Simple, actually. A program matches its input variables against the required conditions for it to take action. If the variables don't match up, it takes another pre-programmed action. With programs practically anything is possible except independent thought; i.e., a computer program can't write another program, as of today.

QUOTE
3) has a desire to learn


Replace the word 'desire' with 'design' and you have a database management system.

QUOTE
(4) can feel emotion


Sure. Of course it's possible. But what's the point? There have actually been experiments on this: a robotic face was programmed to match the facial expressions of the people it was talking to.

QUOTE
(5) sends its thoughts through logical loops which consider previous memories and thoughts


Been and done. Very simple.

QUOTE
6) can formulate opinions based on facts


Replace the term with 'can make decisions based on evidence' and we have the autopilot model above.

But why would you want a computer with 'opinions' and 'emotions'? If you want a friend, go and look for one in the streets.
necrolyte
post Sep 2 2004, 04:23 AM
Post #11





I would want such a thing to be constructed to see if we can reproduce ourselves using pure technology devoid of biology. As an experiment if you will.
libvertaruan
post Sep 2 2004, 04:25 AM
Post #12





QUOTE
But why would you want a computer with 'opinions' and 'emotions'? If you want a friend, go and look for one in the streets.



Russian, you are a fucking idiot to not understand what this is about.
Deus Ex Machina
post Sep 2 2004, 05:50 AM
Post #13





QUOTE(Russian @ Sep 1 2004, 10:07 PM)
You narrow the scope. Do you want a computer to be 'human'? That's impossible.

How about an autopilot program? It can monitor its speed, its height, and the weather it's flying into, and change its operations accordingly. It's as 'self-aware' as anything can get. And such programs do exist. You program algorithms for every possible situation, and then you rigorously test it to make sure you haven't forgotten anything. Lots of man-hours and lots of money later, you have a program that's self-aware, but only within a specific scope. It can still make faulty decisions.


Why is it impossible for a computer system to mimic the human mind? It (the mind) certainly isn't made of base components much different from a computer's (neurons basically amounting to chemical switches). Regardless, I thought people still gave credit to the Church-Turing thesis.

A key part of how I define self-awareness is the ability both to 'step back' and look at what you're doing, and to change your behavior based on it. While a sophisticated program of the type you describe may be able to find patterns in what it's doing and optimize its performance accordingly, a human would likely be able to do such a thing up through higher and higher levels, ad infinitum (within the limits of memory and time). In addition, a self-aware program would likely be able to combine behaviors in order to react to new situations (e.g.: situation X is similar to this new thing I'm seeing, so let's see if what I do in situation X will be of any help; or alternatively, this new situation is like nothing I've seen before, but situation Y arose from somewhat similar conditions, so maybe my solution to situation Y will help. Et cetera.)


QUOTE
Replace the word 'desire' with 'design' and you have a database management system.


There is a distinct difference between actively seeking out information on a topic one had no prior knowledge of (me browsing through a library) and seeking out information based on written instructions (Google spidering the internet).

QUOTE
Sure. Of course it's possible. But what's the point? There have actually been experiments on this: a robotic face was programmed to match the facial expressions of the people it was talking to.


Having a robot mimic facial expressions is useless unless there is another level of meaning behind the emotions.

QUOTE
But why would you want a computer with 'opinions' and 'emotions'? If you want a friend, go and look for one in the streets.
*


Being able to develop a truly intelligent computer system would be a great way to study and understand how our minds work and how they evolved. Plus it would be pretty damn cool.
zaragosa
post Sep 2 2004, 09:56 AM
Post #14





I personally think we're practically there. There's nothing in the human hardware that we can't reproduce (at a million times the efficiency, I might add). The only thing that makes humans different is the enormous amount of ready background knowledge that we simply pick up as we go along (commonly called 'creativity' and 'inspiration' and such). The fuzzy logic (the illogical combination of information), once we figure out how humans use it, can easily be reproduced.
Llywelyn
post Sep 2 2004, 10:08 AM
Post #15





There is a difference between the model and the system.

We can make a system behave as if it is intelligent, vaguely, but is it actually intelligent or is it just mimicking intelligent behavior?

I will go out on a limb here and throw my meager vote behind Penrose in saying that intelligence is fundamentally non-computational in nature. Which means no collection of computational circuits will ever be "intelligent."

EDIT:
This is not to say, for the record, that intelligence will never be artificially generated, only that it will never be replicated inside of a Turing-based system. Quantum computers or future developments in physics may bring us there, but we aren't there with today's technology.

This post has been edited by Llywelyn: Sep 2 2004, 10:12 AM
Nalvaros
post Sep 2 2004, 11:04 AM
Post #16





You'd probably want to look up artificial intelligence, which I don't have any experience in.

However, based on what I know about programming, I can see no way any existing program could create an intelligent entity.

Right now, when we write a program, we are basically writing a set of instructions: do this when that happens. I don't believe a set of instructions is capable of becoming self-aware. Certainly we could conceivably write an incredibly complex program that had, say, a response to a million different permutations of a situation, and that, based on the response it makes and the resultant input it gets, compared with a "desired" input, might switch to different responses in the future. However, while such a program might mimic intelligence realistically, at the end of the day it is merely following instructions. It isn't making a choice.
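To picture the kind of "table of responses plus feedback" program described above, a toy version might look like this (the situations and responses are made up; the point is that nothing outside the table can ever happen):

CODE
# Toy version of the "table of responses plus feedback" idea: the program
# tries its current response for a situation, and if the outcome doesn't
# match what it "wants", it switches to the next canned response.
# It never invents a response that isn't already in its table.

RESPONSES = {
    "door closed": ["push", "pull", "knock"],
    "light off":   ["flip switch", "wait"],
}

current_choice = {situation: 0 for situation in RESPONSES}

def act(situation):
    options = RESPONSES[situation]
    return options[current_choice[situation] % len(options)]

def feedback(situation, outcome, desired):
    # if the outcome wasn't the desired one, move on to the next response
    if outcome != desired:
        current_choice[situation] += 1

print(act("door closed"))                        # "push"
feedback("door closed", "still closed", "open")
print(act("door closed"))                        # "pull"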
Russian
post Sep 2 2004, 04:01 PM
Post #17





What if its instructions are to make a choice?


Vis-à-vis Blue Junior?
Llywelyn
post Sep 2 2004, 04:05 PM
Post #18





QUOTE(Russian @ Sep 2 2004, 09:01 AM)
What if its instructions are to make a choice?
Vis-à-vis Blue Junior?
*



...and then what?

How does it make that choice?

Why?
Russian
post Sep 2 2004, 04:10 PM
Post #19





For it to win more than one chess game it has to make different choices. Otherwise it would be predictable and easily defeatable.


Don't know the mechanism behind it, though.


Llywelyn
post Sep 2 2004, 04:54 PM
Post #20





QUOTE(Russian @ Sep 2 2004, 09:10 AM)
For it to win more than one chess game it has to make different choices. Otherwise it would be predictable and easily defeatable.
Don't know the mechanism behind it, though.
*



The mechanisms that it uses (alpha-beta searches, opening books, killer-move tables, etc.) are very far removed from sentience, or from the set of techniques used by people.
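For anyone curious what an alpha-beta search actually is, the core fits in a few lines. A bare-bones sketch over a hand-built abstract game tree, with none of the chess-specific move generation or evaluation a real engine would need:

CODE
# Bare-bones alpha-beta search over an abstract game tree.
# A "node" here is just (value, children); a real chess engine would
# generate moves and evaluate positions instead.

def alphabeta(node, depth, alpha, beta, maximizing):
    value, children = node
    if depth == 0 or not children:
        return value                       # leaf: static evaluation
    if maximizing:
        best = float("-inf")
        for child in children:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:              # opponent will avoid this line
                break
        return best
    else:
        best = float("inf")
        for child in children:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

# Tiny hand-built tree: the maximizer's best guaranteed outcome is 3.
tree = (0, [(0, [(3, []), (5, [])]),
            (0, [(2, []), (9, [])])])
print(alphabeta(tree, 2, float("-inf"), float("inf"), True))   # 3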
Telum
post Sep 2 2004, 10:21 PM
Post #21





QUOTE(zaragosa @ Sep 2 2004, 05:56 AM)
I personally think we're practically there. There's nothing in the human hardware that we can't reproduce (at a million times the efficiency, I might add). The only thing that makes humans different is the enormous amount of ready background knowledge that we simply pick up as we go along (commonly called 'creativity' and 'inspiration' and such). The fuzzy logic (the illogical combination of information), once we figure out how humans use it, can easily be reproduced.
*




If we can reproduce everything with so much more efficiency, why aren't there artificial kidneys or livers on the market?
Deus Ex Machina
post Sep 2 2004, 11:33 PM
Post #22





QUOTE(Llywelyn @ Sep 2 2004, 04:08 AM)
There is a difference between the model and the system.
I will go out on a limb here and throw my meager vote behind Penrose in saying that intelligence is fundamentally non-computational in nature.  Which means no collection of computational circuits will ever be "intelligent."
*



Would it be possible for Jew to explain Penrose's argument for those of us not familiar with it? Google isn't being too helpful, and Wikipedia leaves it at
QUOTE
Some (including Roger Penrose) attack the applicability of the Church-Turing thesis. Others say the mind is not completely physical. Roger Penrose's argument rests on the conception of hypercomputation being possible in our universe. Quantum mechanics and newtonian mechanics do not allow hypercomputation but it is thought that some strange space times would. However there seems to be agreement that our universe is not sufficiently convoluted to allow such hypercomputation.


This post has been edited by Deus Ex Machina: Sep 2 2004, 11:34 PM
Llywelyn
post Sep 3 2004, 12:03 AM
Post #23





QUOTE(Deus Ex Machina @ Sep 2 2004, 04:33 PM)
Would it be possible for Jew to explain Penrose's argument for those of us not familiar with it? Google isn't being too helpful, and Wikipedia leaves it at
*



In a nutshell he claims that the brain is functionally noncomputational and that while a computational system may be able to mimic intelligence, it cannot actually be intelligent. His claim is that there cannot be strong AI in a computational system.

Now, a Quantum Computer gets past some of his objections and I am less certain whether a quantum algorithm could be intelligent when executed (and not just simulated), but that is a point that is separate from whether it can be done in a set of circuits :)
kindfluffysteve
post Sep 3 2004, 02:31 AM
Post #24





the best way to program it is to let it program itself.

use a genetic algorithm.

this route is what I think will lead to digital sentience.
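a bare-bones example of the loop a genetic algorithm runs (the bit-string target below is a toy stand-in for a real fitness measure; every number here is invented for illustration):

CODE
import random

# Bare-bones genetic algorithm: evolve random bit-strings toward a fixed
# target string. The target makes fitness trivial; the point is the loop
# itself -- score, select, cross over, mutate, repeat -- not the problem.

TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(genome):
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                  # a perfect genome evolved
    parents = population[:10]                  # the fittest third survive
    offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                 for _ in range(len(population) - len(parents))]
    population = parents + offspring

best = max(population, key=fitness)
print(generation, fitness(best), best)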
kindfluffysteve
post Sep 3 2004, 04:20 AM
Post #25





people can talk about the 1's and 0's as just being, fundamentally, 1's and 0's,

but this is just an unnecessary way to think about it.

to think in this way is to avoid the idea of something being more than the sum of its parts.

why can't a collection of 1's and 0's be more than the sum of its parts?

the difference between data and information/knowledge: data is just meaningless 1's and 0's. knowledge is a network of data.

data structures are things that really do exist - they mean something, they describe something, and yet they are strangely ethereal. an individual bit means nothing - but it's the organisation that matters.
zkajan
post Sep 3 2004, 04:28 AM
Post #26





QUOTE(kindfluffysteve @ Sep 2 2004, 10:31 PM)
the best way to program it is to let it program itself.

use a genetic algorithm.

this route is what I think will lead to digital sentience.
*


Yup, kfs has it. Babies are born with a bunch of hardwired instincts (something is wrong with the system: cry; if hungry and a tit is presented: suck; when feces accumulate in the colon: push them out; etc.) and a big blank slate that they fill in mostly during the first couple of years of their life, but learning never really stops.
Deus Ex Machina
post Sep 3 2004, 04:59 AM
Post #27





QUOTE(Llywelyn @ Sep 2 2004, 06:03 PM)
In a nutshell he claims that the brain is functionally noncomputational and that while a computational system may be able to mimic intelligence, it cannot actually be intelligent.  His claim is that there cannot be strong AI in a computational system.

Now, a Quantum Computer gets past some of his objections and I am less certain whether a quantum algorithm could be intelligent when executed (and not just simulated), but that is a point that is separate from whether it can be done in a set of circuits :)
*



That's certainly interesting. I can't say I agree with his conclusion, but I haven't seen his methods (I'm assuming that they rely on quantum effects and/or something similar, from how you phrased your post), and now I have a name to look into if I ever reach the bottom of my "to read" list.

Anyways, thanks.

On a side note, that Jew-you replacement thing is damned annoying.
zaragosa
post Sep 3 2004, 10:43 AM
Post #28





QUOTE(Llywelyn @ Sep 2 2004, 12:08 PM)
We can make a system behave as if it is intelligent, vaguely, but is it actually intelligent or is it just mimicking intelligent behavior?
*


What's the difference? Intelligence is, for all intents and purposes, a behaviour.

QUOTE(Telum @ Sep 3 2004, 12:21 AM)
If we can reproduce everything with so much more efficiency, why aren't there artificial kidneys or livers on the market?
*


I didn't mean efficiency size-wise. Very efficient artificial livers and kidneys exist, but they tend to be rather bulky.
Forben
post Sep 3 2004, 11:18 AM
Post #29





It needs to create its own variables.

A definition of its basic functions will need to be included - like, if it can move something that is a part of the robot, let it know that that piece can 'move'...

An understanding of some form of reflex/sense algorithm.

'Expansion' slots, so to speak - a way to expand its physical limitations, up to a limit.

A read-only 'location' of must-obey algorithms that require it to continue to, like, not kill us because we're just a parasite on the Earth or some such. This read-only part could be viewed as sort of the DNA-type stuff.

Beyond that would be a bit of the philosophical question of what emotion is or isn't, and whether our emotion is the kind of emotion that something else - something not geared toward the same style of response/reflex/soul/curiosity 'standards' - can have, as another 'species', so to speak. Yes, it could most likely figure out the right responses, but beyond that?
Llywelyn
post Sep 3 2004, 11:32 AM
Post #30





QUOTE(zaragosa @ Sep 3 2004, 03:43 AM)
What's the difference? Intelligence is, for all intents and purposes, a behaviour.
*



Simply: there is a difference between the model and the system. Confusing the two is one of the fundamental fallacies.

I can calculate how billiard balls are going to behave on a smooth table, and I can display this output on the screen, but predicting it and displaying it is in no way equivalent to actually rolling the balls on the table.

Being able to predict or anticipate what a conscious entity would say is an entirely different arena than actually being self-aware.