Artificial Intelligence Thread
Moderator: crazyankan
Artificial Intelligence Thread
This thread is on Gaia online, which is the most active forum I could find. I would be honored if you guys could visit it and comment some time.
http://www.gaiaonline.com/forum/extende ... .54699007/
Re: Artificial Intelligence Thread
That was the weirdest thing I've ever seen and I won't even look at it anymore. It is even way weirder than Ripper's site!
2Ripper: Take a look at the "Comments and suggestions" section.
- Sergio Nova
- Künstler
- Posts: 2890
- Joined: Fri Jul 06, 2007 3:08 pm
- Location: São Paulo or Valles Marineris
Re: Artificial Intelligence Thread
I talked to many people about this and my view is that robots can be called "humans" at some point in the future, and that humans could be called robots. In fact, I think that every living thing could be called a robot and vice versa. Also, currently, I believe that an average computer is already smarter than a normal person.
Int29AH is right. I have already wasted my time with stupid things, but this one takes the trophy.
Re: Artificial Intelligence Thread
Why is it stupid? Technically we are machines designed by nature or "God" with sentience.
Last I checked machines with sentience are called androids or robots.
Re: Artificial Intelligence Thread
How to put it more correctly... I don't know, I'll just say it as it is:
Who do you take us for?
2 Sergio:
What did you expect? That site's target audience is children; it's a social network for them.

PS.
I predict: Shit will pour now.

- Sergio Nova
- Künstler
- Posts: 2890
- Joined: Fri Jul 06, 2007 3:08 pm
- Location: São Paulo or Valles Marineris
Re: Artificial Intelligence Thread
Int 29Ah wrote: 2 Sergio:
What did you expect? That site's target audience is children; it's a social network for them.
Recently, I have been thinking about building a statue to celebrate human stupidity. I think I am going to use such a "philosophical" text as its motto.
On the other hand, the so-called artificial intelligence would not throw time out of the window like that.
Re: Artificial Intelligence Thread
Ugh. I'm saying that A.I. could be mistaken for sentience sooner or later - is that wrong? Should I have posted it on a more adult-oriented site?
I mistook you people for people who could say why, or correct me. Of course, I was probably wrong, but that's what I did.
As for the robots and humans, what in that sentence is stupid? A person is a "robot" made of organic materials, no? Maybe I'm wrong, but you could have enlightened me as to why.
And technically, people of this generation are too stupid to remember anything. And human stupidity isn't new; I believe there is already a monument to it with more mottos than anyone would care to remember - I think it's called the internet, or something.
Re: Artificial Intelligence Thread
2 Sergio:
If you've seen only the forum there, you haven't seen the weirdest thing ever - just browse that site, not the forum.
That statue of stupidity is on that site's main page; it's big and red and it shouts at you "PRESS ME". Take a look.
- Sergio Nova
- Künstler
- Posts: 2890
- Joined: Fri Jul 06, 2007 3:08 pm
- Location: São Paulo or Valles Marineris
Re: Artificial Intelligence Thread
All right, I apologize. I intended to make a joke, but obviously it was an unfortunate one.
Supposing it really is possible, one day, to create artificial intelligence even more intelligent than we humans ourselves, you must agree that we would have to be naïve, to say the least, to make it.
As to computers, no, they are not more intelligent than ordinary humans. They simply follow instructions. They are never confused, simply because they do not think. A thinking being has different things in mind when taking decisions. Computers do not.
Just remembering the film: the script was written by Kubrick, and it was supposed to be filmed by Kubrick only. I do like Spielberg, but in his own universe. Trying to film Kubrick, he made an infantile and absurd tale, not least because Kubrick was unique.
Re: Artificial Intelligence Thread
The stupid thing is that a group of people will actually try to make an A.I. as smart as us - but will they ever succeed?
The computer thing was hyperbole, so that wasn't too serious.
And yes, I do agree Gaia is aimed at kids. Some smart people do go there daily, though. Not enough to make it my first choice of place for a conversation, though.
Re: Artificial Intelligence Thread
I haven't seen the A.I. movie, but I'll say what I think about this topic.
Ok, let's start from the beginning.
Have you heard about the Turing test? If not, then you have no business discussing AI.
Ok, if an AI passes the test, that shows it can EMULATE a human well enough to be indistinguishable from an average human being. But that's only an emulation.
An example for you: Deep Blue beat Garry Kasparov in a chess match. Is it more intelligent than him? NO. It just calculates chess combinations faster; it DOESN'T think about WHY it calculates them.
Even if an AI is someday created, we WILL NEVER KNOW that it has been created, because it will be impossible to tell apart.
What I'm trying to say is: if an AI is ever created, it will be NOTHING LIKE HUMANS AT ALL. I mean that even if it has consciousness (which is also a question), it would be ridiculous for it to emulate a human.
It's like teaching a human to emulate a monkey.
Maybe I haven't put it clearly in this post, because I'm pissed off at the moment. If I have any additions or explanations, I'll post them later.
- Sergio Nova
- Künstler
- Posts: 2890
- Joined: Fri Jul 06, 2007 3:08 pm
- Location: São Paulo or Valles Marineris
Re: Artificial Intelligence Thread
Int 29Ah wrote: 2 Sergio:
If you've seen only the forum there, you haven't seen the weirdest thing ever - just browse that site, not the forum.
That statue of stupidity is on that site's main page; it's big and red and it shouts at you "PRESS ME". Take a look.
Well, the text written there says it all.
Re: Artificial Intelligence Thread
Well, this discussion has descended into a flood, as usual.
Now, I have finally cooled down enough to continue this conversation after being thoroughly pissed off by its start...
Also, I have to apologize for that attitude: I usually get completely pissed off when I'm asked about a problem I know a lot about. I have to explain my thoughts, but I just can't get them together when I'm in that mood.
So, I'll now explain the opinion I gave in that quick-tempered previous post.
What I was talking about was the Turing test - a famous thought experiment which can POTENTIALLY tell AI apart from human intelligence.
For those who don't know about it: 3 participants communicate anonymously, 2 of them human and one an AI.
One of the humans is a "supervisor".
The test is just two separate conversations - between the supervisor and the human, and between the supervisor and the AI.
The result is simple - if the supervisor can tell who is who (remember, they're anonymous) and gets it right, then the test is failed.
So, the AI's goal in this test is to make the supervisor think that it is the HUMAN and that the other participant is the AI - it has to fool the supervisor; that's the only way of winning this game.
However, no AI has ever won...
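Just to make the setup concrete, here is a toy sketch in Python (my own illustration, of course - the "machine" is a canned-response stub and the judge uses a naive rule, nothing like a real Turing test):
[code]
# Toy sketch of the imitation game: a judge questions two anonymous
# respondents (one human-scripted, one canned-response "machine") and
# then guesses which is which.  Illustration only, not a real Turing test.
import random

def human(question):
    # Stand-in for the human respondent.
    answers = {
        "What is 2 + 2?": "4, obviously.",
        "How do you feel today?": "A bit tired, I slept badly.",
    }
    return answers.get(question, "Hmm, let me think about that one...")

def machine(question):
    # Pure lookup table: no understanding, just canned responses.
    answers = {
        "What is 2 + 2?": "The answer is 4.",
        "How do you feel today?": "I feel fine, thank you for asking.",
    }
    return answers.get(question, "That is an interesting question.")

def imitation_game(questions):
    # Randomly hide who is who behind the anonymous labels A and B.
    respondents = [human, machine]
    random.shuffle(respondents)
    players = dict(zip("AB", respondents))
    transcript = {label: [fn(q) for q in questions] for label, fn in players.items()}
    # A very naive judge: whoever leans on the stock phrase is suspected
    # of being the machine.  The machine "passes" if the judge guesses wrong.
    guess = max(players, key=lambda l: sum("interesting question" in a
                                           for a in transcript[l]))
    return guess, players[guess] is machine

if __name__ == "__main__":
    questions = ["What is 2 + 2?", "How do you feel today?", "What is love?"]
    guess, judge_was_right = imitation_game(questions)
    print("Judge suspects", guess, "is the machine:",
          "correct." if judge_was_right else "fooled - the machine passed.")
[/code]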
So, let's imagine that in a distant future an AI has finally passed the test and is considered intelligent enough to interact with humans as an even match.
But the question is - does it have consciousness? That's what I meant when I gave the example of Garry Kasparov and Deep Blue in the previous post; this problem is called the "Chinese room".
I'll explain it further:
You don't know Chinese. Imagine that you're in a big room with no doors and windows, full of cards with Chinese characters on them. On the walls it is written how to make any CORRECT WORD OR SENTENCE out of those characters, so you can combine them by following the instructions, producing Chinese words or even sentences. But you still DON'T KNOW WHAT THEY MEAN!
You know that the construction you've just made is GRAMMATICALLY CORRECT, but you don't know its MEANING!
That's the key problem with consciousness: even if an AI passes the Turing test, it can only emulate humans, nothing more. It won't even ask itself "Why am I doing this? o_0" - most likely, it won't have consciousness.
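If it helps, here is the Chinese room reduced to a few lines of Python (my own toy illustration): the rule book maps incoming cards to well-formed replies, and nothing in the room has any idea what the symbols mean.
[code]
# Toy "Chinese room": the operator only matches symbols against a rule book.
# The replies are well-formed Chinese, but nothing here knows their meaning.
RULE_BOOK = {
    "你好": "你好！",            # see this card -> hand back that card
    "你好吗？": "我很好，谢谢。",
    "再见": "再见！",
}

def chinese_room(incoming_card: str) -> str:
    # Pure symbol shuffling: look the card up, return whatever the rules say.
    return RULE_BOOK.get(incoming_card, "对不起，我不明白。")  # default card

if __name__ == "__main__":
    for card in ["你好", "你好吗？", "天气怎么样？"]:
        print(card, "->", chinese_room(card))
[/code]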
I could go on about this for a long time, because I read a lot about this problem quite a while ago, but still.
I'll tell you one thing more.
EVERY computer/AI/anything that pretends to be an AI is built on Von Neumann logic or its modifications. It is built on the principles of a finite automaton, which describes an idealized machine that executes an ALGORITHM.
Everything above this level is just an EMULATION, a high-level ABSTRACTION, which is still carried out by an operation stack at the lowest level.
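Here is the idea in a few lines of Python (a deliberately tiny sketch of mine): a finite automaton mechanically executing one fixed algorithm - checking whether its input contains an even number of 1s. Everything higher up (languages, programs, "AI") is an abstraction built over machines like this:
[code]
# Minimal finite automaton: two states and a transition table, nothing else.
# It mechanically executes one fixed algorithm: "does the input string
# contain an even number of 1s?"
TRANSITIONS = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd", "0"): "odd",   ("odd", "1"): "even",
}

def run(tape: str, state: str = "even") -> bool:
    for symbol in tape:
        state = TRANSITIONS[(state, symbol)]
    return state == "even"

if __name__ == "__main__":
    for tape in ["1101", "1001", ""]:
        print(tape or "(empty)", "-> even number of 1s:", run(tape))
[/code]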
This is starting to get complicated for the reader... I'll try to simplify it.
A simple example: there is a thing called the Blue Brain Project, whose purpose is to simulate a human hippocampus (or even a whole brain - I don't remember) by the year 2020.
THE YEAR 2020! Even with the system's enormous performance (they are using a supercomputer for that), they have only managed to emulate a PART of A SNAIL BRAIN!!! Why? Because they're EMULATING IT on an unsuitable architecture.
Even taking into account Moore's law and the law of accelerating returns, which supposedly give us computers about 10 billion times faster by the year 2020, I really doubt they'll ever succeed.
I PREDICT: THEY WON'T. I could be wrong, but I'm still speculating on it...
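For what it's worth, here is the back-of-the-envelope arithmetic behind such projections (my own sketch; the doubling period is an assumption, and the law of accelerating returns assumes that period itself keeps shrinking, which is how the really huge multipliers are reached):
[code]
# Speedup you get from "performance doubles every T months" over a given span.
# All the numbers below are assumptions, not measurements.
def speedup(years: float, doubling_period_months: float) -> float:
    return 2 ** (years * 12 / doubling_period_months)

if __name__ == "__main__":
    for period in (24, 18, 12):
        print(f"2009 -> 2020, doubling every {period} months: "
              f"~{speedup(11, period):,.0f}x faster")
[/code]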
This could turn into a huge post, but it's just uninteresting for me to explain things everyone can freely read on Wikipedia. If you want to, you will read them yourself, like I did.
PS.
I'll just state an opinion on this problem that I agree with (you can take it as my opinion in the current topic). It's Raymond Kurzweil's opinion, which I find brilliant and the only reasonable one.
The only possible AI ever is a human with an uploaded consciousness. Nothing more.

Last edited by moooV on Wed Sep 30, 2009 10:17 am, edited 1 time in total.
- Burning Angel
- GIB
- Posts: 296
- Joined: Sun Aug 24, 2008 4:28 pm
Re: Artificial Intelligence Thread
But the Turing test seems to be more theoretical or philosophical than practical.
The machine that tries to pass for human can just be programmed with human responses and not have real intelligence, since the Turing test only examines whether or not the computer behaves like a human being, not if the computer behaves intelligently.
Even Alan Turing himself had a hard time defining "thinking" and applying it to computers.
Although you can argue that some human behavior is just the product of a natural stimulus-response program, and that humans are intellectually and psychologically conditioned (or programmed) to have certain responses to specific situations. This is called Human Psychology btw.
Re: Artificial Intelligence Thread
Do you know Pluto?
Really great manga about AI!
Re: Artificial Intelligence Thread
This debate is really interesting compared to the forum link previously mentioned!
I would just like to mention some ideas about A.I.
The term describes only a state of thinking of machines made by humans, which is why it is "Artificial". Humans can only, in a certain way, create artificial intelligence based on their own way of thinking. If you think about intelligence or the soul, it is only based on experience. From the beginning, a baby has a brain full of neurons not connected to each other. It will learn, think, and understand by acquiring experience that connects the neurons and creates a network.
This is much the same as programming a neural network to solve problems. The machine acquires experience in order to solve problems, from the simplest to the most complicated.
By increasing the amount of memory, meaning that you increase the number of "neural connections", you improve the "intelligence" of the machine - the number of problems that can be solved.
If you give the machine the ability to commit events to memory by itself, then you create a perfect A.I. This is not so simple because of the size of the memory required, but you can do it.
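To make this concrete, here is a minimal sketch in Python (my own toy illustration, not any particular project's code): a single artificial neuron whose connection weights are adjusted by experience until it solves a simple problem, the logical AND.
[code]
# A single artificial neuron "acquiring experience": the connection weights
# are nudged after every mistake until the AND problem is solved.
def train_and_gate(epochs=20, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
            error = target - output
            # Learning = adjusting the "neural connections" from experience.
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

if __name__ == "__main__":
    weights, bias = train_and_gate()
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        out = 1 if x1 * weights[0] + x2 * weights[1] + bias > 0 else 0
        print(f"{x1} AND {x2} -> {out}")
[/code]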
- Burning Angel
- GIB
- Posts: 296
- Joined: Sun Aug 24, 2008 4:28 pm
Re: Artificial Intelligence Thread
I think it's difficult to talk about "intelligence" since it is such an ambiguous term. There are dozens of possible definitions of "intelligence". Many individuals have given different definitions of what "intelligence" is: biologists, psychologists and even mathematicians such as Alfred Binet, Francis Galton, David Wechsler, Cyril Burt, Howard Gardner, Linda Gottfredson, Robert Sternberg, Marcus Hutter... The list goes on and on.
To even attempt to have this discussion, we would need to have a decent definition of "intelligence". My definition of "intelligence" is the ability of the mind/brain to learn, and to use the acquired information to reach a certain goal. It is the ability to learn about, learn from, understand, and interact with one's environment or situation.
After this, we would need to talk about a group of smaller but similar or related abilities, such as the capability to adapt to changes in the environment, the capability for reason and abstract thought, the capability for planning and problem-solving, the capability for communication and language, and the capability for creative and original thought.
We could even talk about the complexity of emotions, how they are produced, and how they are related to intelligence. Or we could talk about how emotions are a consequence of intelligence.
Here's a thought: can an intelligent A.I. have emotions?
Btw, if we were going to talk about souls, we would need to make a new thread about it. We would need to talk about all the philosophical implications involved with the existence or non-existence of souls. Not to mention the fact that there is no scientific proof to support the claim of the existence of a soul in the first place. We can talk about consciousness, but not souls, since there is no evidence to support their existence.
Re: Artificial Intelligence Thread
2 Burning Angel: I totally agree with you about the definition of intelligence.
But to answer the question of whether A.I. can have emotions - I think the answer is in the term itself: "artificial" is not emotional, and intelligence is not tied to emotion. Emotion is something non-rational, driven by the subconscious.
I think they are decoupled from each other because of the complexity of an emotion, which draws on our experience over the years and on our education, which can gather billions of events. If you think about it, the thing named "soul" is the same - just a different way for someone to answer a problem.
Maybe A.I. can finally have feelings, after a lot of education and experience!
Re: Artificial Intelligence Thread
Let's continue this undeservedly abandoned topic.
Burning Angel wrote: The machine that tries to pass for human can just be programmed with human responses and not have real intelligence, since the Turing test only examines whether or not the computer behaves like a human being, not if the computer behaves intelligently.
That's what I tried to say in my previous post, illustrated by the "Chinese room" problem.
Burning Angel wrote: Although you can argue that some human behavior is just the product of a natural stimulus-response program, and that humans are intellectually and psychologically conditioned (or programmed) to have certain responses to specific situations. This is called Human Psychology btw.
I'll just give you some food for thought:
There is a phenomenon called Mowgli Syndrome, named after Mowgli, the main character of Rudyard Kipling's "The Jungle Book".
This phenomenon concerns feral children who were raised by wild animals.
The point is - when such a child is discovered and captured, it still retains all the habits and INSTINCTS it gained living in the wild. They are completely UNEDUCATABLE, as far as I know (I'm a dilettante in such questions); the parts of their brain responsible for learning and mental development aren't even formed properly.
They are not a subject for psychology, because they don't even have a proper personality to be recognised as a person. Basically, they are animals who just look like humans.
What I'm trying to say is that the brain develops to meet the needs of its owner. If it is used a lot, it gets smarter, but if it's not used, it degrades [offtopic](just look at the cattle people in the streets, consumers)[/offtopic].
[just my theory]
There are bodybuilders - people who develop their muscles by giving them a maximum load. I think the brain can be developed in the same way - by giving it a really huge load, the same way bodybuilders do.
I experimented on myself all summer; however, that's another story...
[/just my theory]
Sam wrote: The term describes only a state of thinking of machines made by humans, which is why it is "Artificial". Humans can only, in a certain way, create artificial intelligence based on their own way of thinking. If you think about intelligence or the soul, it is only based on experience. From the beginning, a baby has a brain full of neurons not connected to each other. It will learn, think, and understand by acquiring experience that connects the neurons and creates a network.
That's what I wanted to say in my first serious post in this topic - we can judge a machine's intelligence only from OUR point of view, because we don't have an alternative to it (errgh, this applies to aliens too, not only machines).
My opinion is - even if humanity ever invents an AI, we just won't be AWARE of it, simply because it won't be intelligent from our standpoint.
Sam wrote: By increasing the amount of memory, meaning that you increase the number of "neural connections", you improve the "intelligence" of the machine - the number of problems that can be solved.
I disagree. This will only change the database size, nothing more. You can't expect some qualitative change in its core; it will just have a bigger base of experience.
What I mean - I'll give an example: there are IQ tests for humans (of course); they are meant to give a quantitative score for your intelligence. But they measure ONLY how FAST you think; they don't take your memory into account much. If you're mature enough to take them (I mean, you know the basics of math - 6th-7th grade), you can solve them.
Believe me, a fully educated adult has about the same chances as a 12-13-year-old kid. I can give myself as an example - I scored about 130 in middle school and 137 now as a grown person (the population average is 90-110). The deviation isn't big.
As for neural connections - you're half wrong. By increasing them we gain associativity, which is an advantage. But there is a single point of failure - these neurons are EMULATED (we're talking about an AI). I can even say that, most probably, they'd be built using the stochastic neuron model, which means that (I'm simplifying this a lot) every interconnect (the synapse weights) has to be calculated by the hardware. This will grow EXPONENTIALLY as the number of interconnects grows, so we will have to keep upgrading the hardware to keep up with the requirements.
Believe me, this won't go on for long - there will be a limit to growth.
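A rough illustration of the scaling problem (my own numbers, just to show the shape of the curve): with full connectivity, the number of synapse weights the hardware has to recalculate grows roughly with the square of the neuron count, which already runs away from the hardware very quickly, whatever the exact growth law.
[code]
# Rough illustration: in a fully connected layer of emulated neurons, every
# neuron has a weight to every other one, so the number of synapse weights
# to recalculate each step grows ~ n * (n - 1).
def synapse_count(neurons: int) -> int:
    return neurons * (neurons - 1)

if __name__ == "__main__":
    for n in (1_000, 10_000, 100_000, 1_000_000):
        print(f"{n:>9,} emulated neurons -> "
              f"{synapse_count(n):>19,} weights to update per step")
[/code]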
Burning Angel wrote: To even attempt to have this discussion, we would need to have a decent definition of "intelligence". My definition of "intelligence" is the ability of the mind/brain to learn, and to use the acquired information to reach a certain goal. It is the ability to learn about, learn from, understand, and interact with one's environment or situation.
I agree, but I'd like to add something to this definition - the mind has to:
1) Have self-awareness; it should explore not because circumstances force it to, but because it WANTS TO
2) Try to learn about/study ITSELF as well
3) Give some thought to why it's doing 1 and 2.
Burning Angel wrote: We would need to talk about all the philosophical implications involved with the existence or non-existence of souls.
Please, don't get me started on this. I can talk about philosophy for hours.
However, I'd like to add that I don't believe in them. They're just entities made up by humankind, like god.
God is not someone up high in the skies, not the creator; it's an IDEA, which is itself created by society. It's an IDEA of some higher entity, an abstraction.
Don't take this as an offence, people, it's just my opinion.
Last edited by moooV on Tue Oct 13, 2009 2:59 pm, edited 1 time in total.
- Burning Angel
- GIB
- Posts: 296
- Joined: Sun Aug 24, 2008 4:28 pm
Re: Artificial Intelligence Thread
[just my theory]
There are bodybuilders - people who develop their muscles by giving them a maximum load. I think the brain can be developed in the same way - by giving it a really huge load, the same way bodybuilders do.
I experimented on myself all summer; however, that's another story...
[/just my theory]
Obviously, the brain can be trained to be quicker and more efficient.
For example, I practice by playing chess, doing puzzles and sudokus, and, especially, training speed mathematics/mental calculation (the Trachtenberg system and Vedic mathematics). You can also play memory games to increase your memory (capacity and speed). Even reading can increase your mental capabilities.
For example, my natural IQ is around the 120s (depending on the test), but thanks to my "brain training" I now have it in the high 130s, and with a few weeks of intensive training I can temporarily increase it to the 150s. It's like physical training: if you're consistent, you get results, but if you abandon it, you lose capabilities.
Btw, I know what it means to train. I'm an amateur weightlifter/bodybuilder (I weigh 79 kilos, I can lift 160 kilos and do 25 pull-ups) and a martial artist. I've practiced boxing, judo/jujitsu, taekwondo, and wrestling. Now I practice Krav Maga and Sayoc Kali/Eskrima.
Sorry, I lost track of the discussion. We are talking about Artificial Intelligence. Let's take a look at some of its modern capabilities and limitations.
Problem solving and planning: Scientists have developed complex algorithms that imitate human reasoning to solve puzzles, play board games or make logical deductions (see the little sketch after this list). For example, computers can play chess and defeat Grand Masters, and calculators can calculate far faster than normal humans.
Memory: A computer has far more memory capacity than a person. It's possible to have an entire library on your hard drive. How many books can a person memorize word for word? For most people, that is physically impossible.
Perception: Computers are able to receive information from their sensors and analyze their surroundings.
Learning and Natural Language processing: Today computers can't learn on their own. They need someone to input information. They can't even read or understand human language at a self-sufficient level. For example, they can't go onto the internet and read a text. They need someone to input that text in binary code. The computer may not even know how to analyze that information correctly.
Creativity and abstract knowledge: Even though there have been many advances lately, computers are still far behind human standards. Besides, creativity is very philosophical and abstract in itself.
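Since the first item mentions game-playing algorithms, here is a tiny sketch of the idea behind them (a toy example of mine, nothing like Deep Blue's actual code): a minimax search that plays a simple take-away game perfectly, purely by calculating outcomes - the earlier point that the machine calculates faster but never knows why.
[code]
# Minimax on a tiny game: players alternately take 1-3 sticks from a pile,
# and whoever takes the last stick wins.  The program plays perfectly by
# brute-force calculation of outcomes; it never "knows" why its move is good.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(sticks):
    # Returns (score, move) for the player to move:
    # +1 = this player can force a win, -1 = they cannot.
    if sticks == 0:
        return (-1, None)  # the previous player took the last stick and won
    best = (-2, None)
    for take in (1, 2, 3):
        if take <= sticks:
            opponent_score, _ = best_move(sticks - take)
            score = -opponent_score  # good for the opponent is bad for us
            if score > best[0]:
                best = (score, take)
    return best

if __name__ == "__main__":
    for pile in (4, 7, 10, 21):
        score, move = best_move(pile)
        print(f"pile of {pile}: {'winning' if score > 0 else 'losing'} position, "
              f"best move = take {move}")
[/code]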
Re: Artificial Intelligence Thread
Burning Angel wrote: Memory: A computer has far more memory capacity than a person. It's possible to have an entire library on your hard drive. How many books can a person memorize word for word? For most people, that is physically impossible.
You're completely wrong. I'll explain:
I read some time ago that the size of human memory is estimated at more than 100 petabytes (1 PB = 1024 TB = 1,048,576 gigabytes), and I agree with that estimate.
You're judging it in the most obvious way - you're comparing it to the number of characters.
But human memory doesn't operate on characters! It operates on entities and the associations between them. Even when you're memorizing a text, you're building associative chains of how the characters are bound together in that text, you're remembering how it looks (just like a photograph of the book page [that's how I tend to see it]), you're even linking it to your past experience. Even the typeface matters a lot.
That's a giant amount of data. That's why you can't memorize a book character by character.
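Just to put the numbers side by side (a back-of-the-envelope of mine, taking the ~100 PB estimate above as an assumption): even a huge plain-text library is tiny next to it, which is exactly the point - what the brain stores is not the characters but everything associated with them.
[code]
# Back-of-the-envelope: a large plain-text library vs. the ~100 PB estimate
# of human memory quoted above.  All figures here are assumptions.
PETABYTE = 1024 ** 5          # 1 PB = 1024 TB = 1,048,576 GB, in bytes

books = 1_000_000             # a very large library
bytes_per_book = 1_000_000    # roughly 1 MB of plain text per book
library_bytes = books * bytes_per_book

human_memory_bytes = 100 * PETABYTE

print(f"library of {books:,} plain-text books: ~{library_bytes / PETABYTE:.4f} PB")
print(f"quoted human-memory estimate:          ~{human_memory_bytes / PETABYTE:.0f} PB")
print(f"the estimate is ~{human_memory_bytes / library_bytes:,.0f} times larger")
[/code]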
I'll explain, giving myself as an example once again:
I'm a programmer; I write code. I have everything set up the way I want in my development environment: custom syntax highlighting, a customized font and size, and window opacity. When I write something, I remember it all in the smallest detail for about a year.
BUT when I see even MY OWN CODE, which I wrote YESTERDAY, in a different highlighting/font/size/editor, I can't recognise it; I can't even tell what is written there. Even the desktop wallpaper and window opacity settings count.
I have a completely photographic memory; I tend to remember things as a movie. That's why I take my customized operating system with me on a flash drive whenever I go somewhere away from home, to make sure I see everything exactly the same as at home. Especially if I'm going to present my work. Some people find that strange about me, but no one has ever complained.
One more example - there are frequent moments in your life when you've had a great idea, something distracted you (e.g. a phone call), and... the idea is LOST completely. You will never remember it again. That's associativity's fault.
The point is - you perceive text as a photo or a movie (some people perceive it as sound), but never as a stream of characters.
Re: Artificial Intelligence Thread
Suddenly everyone has lost interest in this topic. It feels like I'm talking to a wall.

Re: Artificial Intelligence Thread
That's Gaia for you.
100 Petabytes. Interesting.
- Burning Angel
- GIB
- Posts: 296
- Joined: Sun Aug 24, 2008 4:28 pm
Re: Artificial Intelligence Thread
Sorry Int 29Ah, it's just that I have economics exams soon. Fair enough.
So when do you think AI will become sentient and self-aware?
- Sergio Nova
- Künstler
- Posts: 2890
- Joined: Fri Jul 06, 2007 3:08 pm
- Location: São Paulo or Valles Marineris
Re: Artificial Intelligence Thread
Int 29Ah wrote: Suddenly everyone has lost interest in this topic. It feels like I'm talking to a wall.
I was without internet, but now I am back (fear, people).
About this topic: it has become a subject for experts, and that is decidedly NOT my case. So I can just observe and learn.