Let's continue this unjustly abandoned topic.
Burning Angel wrote:The machine that tries to pass for human can just be programmed with human responses and not have a real intelligence, since the Turing test only examines whether or not the computer behaves like a human being, not if the computer behaves intelligently.
That's what I tried to say in my previous post, illustrated by the "Chinese room" thought experiment.
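To make the point concrete, here is a toy sketch of my own (not anyone's real chatbot): a program that "passes" for human on a few prompts purely by table lookup, exactly like the operator in the Chinese room. The prompts and replies are invented for illustration.

```python
# Toy illustration: canned human-written replies, zero understanding.
# The program behaves "like a human" on known prompts without any
# intelligence behind it - the Turing test can't see the difference.

CANNED_REPLIES = {
    "how are you?": "Not bad, a bit tired today. You?",
    "do you like music?": "Sure, mostly rock. What about you?",
}

def respond(prompt: str) -> str:
    # Normalize and look up; fall back to a vague human-sounding dodge.
    return CANNED_REPLIES.get(prompt.strip().lower(),
                              "Hmm, hard to say. What do you think?")

print(respond("How are you?"))
```

However long you make the table, it's still a lookup, not a mind.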
Burning Angel wrote:Although you can argue that some human behavior is just the product of a natural stimuli-response program, and that humans are intelectually and psychologically conditioned (or programmed) to have certain responses to specific situations. This is called Human Psychology btw.
I'll just give you some food for thought:
There is a phenomenon called Mowgli syndrome, named after Mowgli, the main hero of Rudyard Kipling's book "The Jungle Book".
It concerns feral children - children who were raised by wild animals.
The point is: when such a child is discovered and captured, it retains all the habits and INSTINCTS it gained in the wild. As far as I know (I'm a dilettante in such questions), these children are completely uneducable; the parts of their brain responsible for learning and mental development aren't even formed properly.
They are no subject for psychology, because they don't even have a proper personality to be recognised as a person. Basically, they are animals who just look like humans.
What I'm trying to say is that the brain develops to meet the needs of its holder. If it's used a lot, it gets smarter; if it's not used, it degrades [offtopic](just look at the herd-like consumers in the streets)[/offtopic].
[just my theory]
There are bodybuilders - people who develop their muscles by giving them a maximum load. I think the brain can be developed the same way - by giving it a huge load, just like bodybuilders do.
I experimented on myself all of this summer, but that's another story...
[/just my theory]
Sam wrote:The term describes only a state of thinking of machines made by humans, this is why it is "Artificial". Human can only, in a certain way, create artificial intelligence based on his own way of thinking. If you think about the intelligence or the soul, it is only based on experience. From the beginning, the baby has a brain full of neurons not connected to each other. He will learn, think, understand, by acquiring experience that will connect the neurons and create a network.
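Sam's picture of experience wiring up an initially unconnected brain can be sketched with a classic Hebbian learning rule - my own toy illustration, not anything from Sam's post, and real brains are of course far more complex. The learning rate and the tiny 2x2 network are assumed values.

```python
# Toy Hebbian sketch: "neurons that fire together wire together".
# Experience (repeated co-activation) literally builds the network,
# starting from all-zero connections like the newborn brain in the quote.

ETA = 0.1  # learning rate (assumed value for illustration)

def hebbian_step(weights, pre, post):
    """Strengthen each connection by how strongly its two neurons co-activate."""
    return [[w + ETA * x * y for w, x in zip(row, pre)]
            for row, y in zip(weights, post)]

w = [[0.0, 0.0], [0.0, 0.0]]            # no connections at all, at first
for _ in range(5):                      # five repeated "experiences"
    w = hebbian_step(w, pre=[1, 0], post=[1, 0])

print(w)  # only the co-active pair of neurons has grown a connection
```

The point of the sketch: the network's structure is not designed in, it emerges from whatever experience the system is fed.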
That's what I wanted to say in my first serious post in this topic - we can judge a machine's intelligence only from OUR point of view, because we don't have an alternative to it. (And this applies to aliens too, not only machines.)
My opinion: even if humanity ever invents AI, it just won't be AWARE of it, because the AI won't seem intelligent from our standpoint.
Sam wrote:By increasing the amount of memory, meaning that you increase the number of "neural connections", you improve the "intelligence" of the machine, the number of problems that can be resolved.
I disagree. This will only change the database size, nothing more. You can't expect qualitative changes in its core; it will just have a bigger experience base.
What I mean - I'll give an example. There are IQ tests for humans, which try to give a quantitative score to your intelligence. But they measure ONLY how FAST you think; they hardly take your memory into account. If you're mature enough to take them (I mean, you know the basics of math - 6th-7th grade), you can solve them.
Believe me, a fully educated adult has roughly the same chances as a 12-13-year-old kid. I can even give myself as an example: I scored about 130 in middle school and 137 as an adult (the population average is 90-110). The deviation isn't big.
As for neural connections - you're half wrong. By increasing them we gain associativity, which is an advantage. But there is a catch: these neurons are EMULATED (we're talking about an AI), and most probably they're built on a stochastic neuron model, which means (I'm simplifying a lot) that every interconnect (synapse weight) has to be computed by the hardware. In a densely connected network the number of synapse weights grows roughly with the SQUARE of the neuron count, so we'd have to keep upgrading the hardware to keep up with the requirements.
Believe me, this can't go on forever - there will be a limit to growth.
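Here's a back-of-envelope sketch of the scaling argument above - my own numbers, not a real benchmark, and assuming the worst case of all-to-all connectivity:

```python
# Why emulated networks hit a hardware wall: in a fully connected
# network, the synapse count (and so the per-step compute for updating
# every weight) grows roughly as the SQUARE of the neuron count.

def synapses(neurons: int) -> int:
    """All-to-all connections between neurons, excluding self-loops."""
    return neurons * (neurons - 1)

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} neurons -> {synapses(n):>15,} synapse weights per step")
```

Multiply the neuron count by 10 and the weight count grows by roughly 100 - that's the growth the hardware has to chase.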
Burning Angel wrote:To even attempt to have this discussion, we would need to have a decent definition of "intelligence". My definition of "intelligence" is the ability of the mind/brain to learn, and to use the acquired information to reach a certain goal. It is the ability to learn about, learn from, understand, and interact with one’s environment or situation
I agree, but I'd like to add something to this definition - the mind has to:
1) Have self-awareness: it should explore not because circumstances force it to, but because it WANTS to
2) Try to learn about and study ITSELF as well
3) Reflect on why it's doing 1 and 2.
Burning Angel wrote:We would need to talk about all the philosophical implications involved with the existance or non-existence of souls.
Please, don't get me started on this. I can talk about philosophy for hours.
However, I'd like to add that I don't believe in them. They're just entities made up by humankind, like god.
God is not someone up high in the skies, not the creator; it's an IDEA, itself created by society - the IDEA of some higher entity, an abstraction.
Don't take this as an offence, people - it's just my opinion.