Wednesday, August 22, 2007

A problem of definition

It is not unusual to find people arguing over something without first clearly defining the question. Take, for example, intelligence and consciousness. There are lots of discussions about the possibility or impossibility of creating artificial versions of these, without a common definition ever being established. Such an argument is almost a waste of time, except when it leads to a better understanding of the definitions themselves.

Another famous example is the question of the meaning of life. Define life first, and I think the question will be easier.

Yet another example is the Chinese Room by John Searle (it consists of a human who does not understand Chinese, who takes a question written in Chinese and uses a set of written rules to produce an answer in Chinese). The question is then: can this construction be considered to understand Chinese? This is a very hot discussion, but I see no attempt to first define what is meant by "understand Chinese".

I can't help thinking of the computer that produced the answer 42 because the question wasn't exact enough. While quite funny the first time I read it, people still ask questions that way.

Friday, August 3, 2007

Is Consciousness an Illusion?

This is not an original idea of mine; see for example The Grand Illusion: Why consciousness only exists when you look for it. Suppose it is an illusion, then what is it?

I believe consciousness is simply an analytical function in the brain. You can't say that something either has it or not; rather, that something has more or less of it. Most things we do today are done unconsciously. This means that we find the right response quickly, almost without realizing that we did. This is because the response has been learned by the brain. When we encounter something that is new to us, something interesting happens. Because of the fantastic human intellect, we are frequently able to work out a new response. The first time, it takes a while. But with practice, it will soon be learned and can be handled unconsciously. This gives humans a very strong competitive edge over other species.

A common example comes from sports. To be really good at the sport you are practicing, you have to learn it by heart. If you play table tennis and don't have the right reflexes, you will fail immediately; more training is needed.

Using this (vague) definition of consciousness, let's now see if animals, plants, or computer programs could be considered conscious. Most animals act immediately on reflex. When I throw a ball to my cat, it doesn't stop to think and compute where the ball will be. Instead, it dashes ahead, and usually catches the ball. The reason it succeeds is that it is very quick, not that it realizes where the ball will bounce. Sometimes the cat will lie down and watch what I am doing for a long time, but I have never seen its behaviour change because of it. I just think that the cat depends more on instincts and reflexes than on conscious analysis. Because of that, I believe the cat is much less conscious than a human being, but not at zero.

A plant doesn't seem to have any analytical power. It is entirely pre-programmed and, by my definition, completely unconscious. Still, it wouldn't surprise me if there are plants that are somehow able to learn and react through some chemical system, which would give them a consciousness greater than zero (but only just).

A computer program sometimes takes a lot of time to find answers, but in general it does not change its behaviour. It is, however, hard to say anything for sure about all computer programs. For example, there are fairly advanced AI functions that control monsters in computer games. These monsters react and behave differently depending on what the player does. While fairly complex, I would still consider this similar to pre-programmed instincts. I have not seen a game where the monsters come back the next day and adapt to your way of playing. I do expect this to happen soon, as it should certainly be technically possible. It is something that is interesting to the computer game industry, as players usually tire of playing only against AI-driven monsters. The usual solution is to allow players to play against other human players, who behave differently every time. To summarize, I would say that computer programs aren't conscious, but there will soon be programs that behave more and more consciously.
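To make the distinction concrete, here is a minimal sketch of what a monster that "comes back the next day and adapts to your way of playing" could look like. Everything here (the class name, the attack and defense labels) is my own illustration, not code from any real game:

```python
from collections import Counter

class AdaptiveMonster:
    """Toy monster AI that adapts between play sessions by remembering
    which attack the player used most often, instead of reacting only
    on fixed, pre-programmed 'instincts'."""

    def __init__(self):
        self.seen_attacks = Counter()  # persists across "days" of play

    def observe(self, player_attack):
        # Record the player's behaviour rather than only reacting to it.
        self.seen_attacks[player_attack] += 1

    def choose_defense(self):
        # Counter the player's historically favourite attack; a purely
        # instinct-driven monster would ignore this history entirely.
        if not self.seen_attacks:
            return "block"  # default reflex before any learning
        favourite, _ = self.seen_attacks.most_common(1)[0]
        return {"melee": "parry", "ranged": "take_cover",
                "magic": "ward"}.get(favourite, "block")

monster = AdaptiveMonster()
for attack in ["melee", "melee", "ranged"]:
    monster.observe(attack)
print(monster.choose_defense())  # the player favours melee, so: parry
```

The difference from an "instinct" is the persistent counter: the monster's response tomorrow depends on what the player did today.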

Monday, July 30, 2007

How intelligent is a general AI after the singularity?

An interesting question is how intelligent a general AI can become. That really depends on the definition of intelligence. One component that signifies intelligence is speed. If I were ten times as quick to learn new things and reach conclusions, most would consider me more intelligent. But is speed really all there is?

A popular comparison is between the human intellect and that of monkeys. There are problems that humans can easily solve, but that monkeys can't solve no matter how many of them work on it. So there is some major difference between the human intellect and the monkey intellect, which is not just a difference in speed.

Will we be able to construct, or at least bootstrap, a general AI that can optimize itself so that it becomes not only quicker but also gains that extra ability? I think it will be almost impossible to build this in from the beginning, as humans don't know what the difference is. The only way for the AI to find it would be to develop it during its own optimization.

Can we know if there is another level of intelligence that we don't know about?

Instead of looking at the difference between a monkey and a human, let's look at the difference between a genius and an average human. In this case, is the difference only a matter of speed? Most would say that the genius can come up with ideas and solutions that an average human wouldn't find, no matter how much time was allowed. I think a genius is characterized by two things:
  1. A capacity to understand certain areas more quickly than the average human.
  2. A capacity to find unexpected relations that the average human would not have thought about.

The first one is obviously about speed. The second is about looking for relations in areas that normally would not be considered worth looking at. It is a kind of trial-and-error process. I think this is also really just a matter of speed, where the average human knows that he or she doesn't have enough time to investigate unlikely possibilities.

Monday, July 23, 2007

Recipe to create a General AI in your PC

This is a recipe for creating a General Artificial Intelligence. Even though it looks simple, each step below is far from trivial.
Version: 1.1, July 25, 2007.

Learning
Create a piece of software that can learn, and then teach it something.

Answering
Change the software so that it can provide answers to questions or, more generally, produce responses to stimuli.

We now have a passive system that may impress people with its ability to understand questions and give answers. But it will not be considered a General AI, and even less conscious. And it will probably be slow.
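The Learning and Answering steps together can be sketched in a few lines. This is of course a caricature of real learning (a lookup table, not a general learner), and all the names are my own illustration:

```python
class Learner:
    """Minimal sketch of the 'Learning' and 'Answering' steps:
    a passive system that is taught associations and then answers
    questions. It only acts when asked, which is exactly why it
    would not be regarded as conscious."""

    def __init__(self):
        self.memory = {}

    def teach(self, stimulus, response):
        # Learning step: store an association.
        self.memory[stimulus] = response

    def answer(self, stimulus):
        # Answering step: produce a response to a stimulus, if learned.
        return self.memory.get(stimulus, "I don't know")

ai = Learner()
ai.teach("capital of France?", "Paris")
print(ai.answer("capital of France?"))  # Paris
print(ai.answer("meaning of life?"))    # I don't know
```

Note that the system is entirely reactive: nothing happens between questions, which is the gap the next step tries to close.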

Consciousness
How do we make the program conscious? This is a problem, as consciousness isn't well defined. One way to start is to add a goal (a motivation, a driving force) and allow the program to:

  1. Execute without having been asked to.
  2. Pose questions to itself on how to fulfill the stated goal.
  3. Act according to the answers.

This program has a higher probability of being regarded as conscious, but probably not sapient. The question is then to find a good goal.
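The three steps above can be sketched as a simple agent loop. This is only an illustration of the structure (run unprompted, question itself, act); the class and the list of candidate actions are invented for the example, and the answering part is a random stand-in for the Answering component built earlier:

```python
import random

class GoalDrivenAgent:
    """Toy sketch of the three steps: execute without being asked,
    pose questions to itself about its goal, and act on the answers."""

    def __init__(self, goal):
        self.goal = goal
        self.plan = []  # actions the agent has chosen on its own

    def self_question(self):
        # Step 2: pose a question to itself about fulfilling the goal.
        return f"What action would advance the goal '{self.goal}'?"

    def answer(self, question):
        # Stand-in for the Answering component from the earlier steps.
        return random.choice(["explore", "learn", "ask a human"])

    def step(self):
        # Step 1: execute without having been asked to.
        # Step 3: act according to the answer.
        action = self.answer(self.self_question())
        self.plan.append(action)
        return action

agent = GoalDrivenAgent("understand the environment")
for _ in range(3):
    agent.step()
print(agent.plan)  # three actions chosen without outside prompting
```

The loop itself is trivial; as the text says, the hard part is hiding inside `answer` and in choosing the goal.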

Human driving force
How about defining a goal similar to the human driving force? This has been developed over millions of years, and was originally, basically, to spread your genes. That in turn has led to derived goals, like behaving socially. It is questionable whether such a basic goal is desirable for our general AI. We don't want it to multiply at the expense of competitors (like humans).

A more interesting goal is something with which we can sympathize as well as utilize. The steps outlined above are far from simple. I have the feeling that defining a good goal can be the hardest of them all.

Environment

Enable the program to interact with the environment, analyze the result of that interaction, and then learn from it. That is the final step that will allow the program to really improve, and it is where we may hope to see development in areas that could not be predicted.