Wednesday, November 23, 2005

On Intelligence



A large part of the technological singularity theory concerns the idea that the total amount of intelligence, both human and machine, is increasing exponentially. At the heart of this idea is the understanding that as intelligence increases in power, so does its ability to imagine new possibilities and its power to realize those possibilities – including more intelligence for itself. But what is this elusive “intelligence” that is going to become so powerful? And just what does it mean to speak about levels of intellectual power? Furthermore, what is the difference between the intelligence of a bacterium, a man, and a super-human intelligence?

Well, to start, there is no definitive answer to these questions, but I think that I can get very close to the matter at hand. First off, in the most general sense, intelligence is manifested as a special kind of action. This action is teleological, or goal-directed, behavior. Anything that can be called intelligent must have the ability to initiate action based on a desired outcome. This much is true of anything from a microbe to a search engine to yourself.

Now all intelligent actions exist as a series of choices. Each choice is made from two or sometimes many alternatives, and the selection is based on a desired outcome. So at the very heart of the matter, intelligence is the ability to successfully make choices based on a desired outcome. Having laid this foundation of thought, let us move on now to see how it can be used to further understand the difficult questions I have asked.
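
To make this concrete, here is a minimal sketch (in Python, purely illustrative and not from the original post) of intelligence as choice guided by a desired outcome: the agent simply picks whichever available action it predicts will bring about the most desirable result. All names and values are invented for the example.

```python
# Illustrative sketch: choice as selection of the action whose predicted
# outcome best matches a desired outcome. All names here are hypothetical.

def choose_action(actions, predict_outcome, desirability):
    """Pick the action whose predicted outcome is most desirable."""
    return max(actions, key=lambda a: desirability(predict_outcome(a)))

# Toy usage: a creature deciding how to respond to a nearby object.
if __name__ == "__main__":
    actions = ["approach", "flee", "ignore"]
    outcomes = {"approach": "fed", "flee": "safe", "ignore": "hungry"}
    value = {"fed": 2, "safe": 1, "hungry": 0}

    best = choose_action(actions, outcomes.get, value.get)
    print(best)  # -> "approach"
```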

All organisms have intelligence, and all organisms evolved from a most basic form of intelligence. At the most basic level an organism must make choices and initiate actions that will allow it to stay alive long enough to reproduce. This means that the organism must be able to identify and avoid threats, get food, and, in the case of sexual reproduction, find a mate. These are the 'goals' that define its intelligence. In order for it to be successful at accomplishing its goals it must have a basic 'understanding' of how cause and effect work in its environment. The simpler the environment and the goals, the less intelligence is required. Present-day artificial intelligence is very similar to the kinds of intelligence displayed by primitive organisms: such systems can only operate in very simple environments, and they have very simple goals that define how they make decisions.
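
A simple reflex agent of the sort just described can be sketched in a few lines. This is only an illustration of the idea (the stimulus-response table is invented for the example), but it shows how 'intelligence' at this level reduces to fixed rules in service of survival goals.

```python
# Hypothetical reflex agent: fixed stimulus -> response rules, analogous to
# a primitive organism pursuing survival goals in a simple environment.

RULES = {
    "predator_nearby": "flee",               # avoid threats
    "food_detected":   "move_toward_food",   # get food
    "mate_detected":   "move_toward_mate",   # reproduce
}

def reflex_agent(stimulus: str) -> str:
    """Return the hard-wired response for a stimulus, or do nothing."""
    return RULES.get(stimulus, "do_nothing")

print(reflex_agent("predator_nearby"))  # -> "flee"
print(reflex_agent("sunlight"))         # -> "do_nothing"
```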

If we were to examine the levels of intelligence in the various organisms, we would find that those levels have also grown exponentially. For a very long time there wasn't much going on in this area, but then the doubling effect began to work its magic and intelligence began to grow incredibly fast. All of this growth can be attributed to the gradual growth in the difficulty of achieving the already established goals. For instance, if I am a predator and one of my goals is to find food, then I have to be intelligent enough to catch my prey. But if my prey has the goal of avoiding being eaten, then it needs to be smart enough to avoid being caught. May the smartest organism win. So you see what I mean: as intelligence increases, these basic goals become more difficult to achieve, and nature begins to aggressively select for intelligence. But what is happening on a fundamental level as these organisms gain in intelligence?

What is happening is a greater and greater ability to understand how cause and effect operate, so that actions may be initiated that produce the desired outcomes. This is normally accomplished through what is called instinct. The organism may not 'understand' in the way that a human does, but the system that defines the organism 'understands' what kinds of actions are desired in the appropriate circumstances.

So instinct is a kind of intelligence that is more rigid, and it is probably analogous to most of the intelligent algorithms that we can produce today. However, as we move up the chain of life to the higher mammals, and especially man, intelligence becomes much less rigid and much more powerful. This is because at this stage in the game intelligence no longer needs to wait for nature to select the appropriate instincts; rather, the model of cause and effect in the environment has become so sophisticated and accurate that the organism can begin to be inventive. This means that the organism can begin to 'understand' in the sense that we normally intend when using that term. An intelligence on this level understands how the world works so well that it can predict what the ramifications of its actions will be and can even think of new ways to accomplish its goals.
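
The contrast with the reflex agent sketched earlier can be made concrete with a model-based sketch: instead of hard-wired rules, the agent carries an explicit (here, entirely made-up) model of cause and effect and chooses among possible actions by predicting their ramifications before acting.

```python
# Hypothetical model-based agent: it predicts the consequence of each action
# using an internal cause-and-effect model, then picks the action whose
# predicted future state best satisfies its goal.

# Invented world model: (state, action) -> next state
MODEL = {
    ("hungry", "hunt"): "fed",
    ("hungry", "wait"): "starving",
    ("fed",    "hunt"): "tired",
    ("fed",    "wait"): "rested",
}

GOAL_VALUE = {"fed": 3, "rested": 2, "tired": 1, "starving": 0}

def plan(state: str, actions=("hunt", "wait")) -> str:
    """Choose the action whose predicted next state is most valuable."""
    return max(actions, key=lambda a: GOAL_VALUE[MODEL[(state, a)]])

print(plan("hungry"))  # -> "hunt"
print(plan("fed"))     # -> "wait"
```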

It is this power which, when fully realized, gives birth to technology. And it is for this reason that man is the technological animal. He is able to improve on his 'cause and effect model of the world' and then transmit that model to other humans through his correspondingly acquired ability of language. With this power man is able to continually improve on his model and transmit that knowledge from generation to generation. As this model (science and metaphysics) becomes more powerful and accurate, so does man's ability to accomplish his goals.

This is where things get interesting, because man no longer has only those original goals that were discussed above. Man now has the burden of responsibility. This is where the concept of value comes in. Man began with the original goals that made evolution possible to begin with, but somewhere along the way he also came into possession of values. In order for there to be ethical action, certain outcomes must be deemed of more value than others. Human beings have developed sophisticated cultures which deal with the issue of value. This occurred during the great axiological age. It was during this time that most of the great religions appeared, and also when the first systems of codified law appeared. The significance of this is that we have, as a species, created new goals that are still based in the old goals but which transcend them, in the sense that we now understand certain goals to be good and others to be evil. We cannot change what is good and evil; it is hard-wired into our brains. We could change that hard wiring itself, but based on what? The only goals we know are this good and this evil. On what premise would we change this basic part of our humanness? Would we claim that some other good was more good? This is not possible.

For this reason I have come to the conclusion that human-equivalent machine intelligence should not be treated as something other than man. It must have the same goals and values. If a machine with human level intelligence had precisely the same super-goals as human biological intelligence then it would be essentially human. However if these kinds of machines had values or goals that differed from man's goals then there would inevitably be conflict and the machines would win that conflict.

Therefore it is of utmost importance that we fully master how the brain works, and from whence human values come, before Strong AI arrives on the scene.

11 Comments:

That is a very, very interesting take on intelligence. I obviously don't see eye to eye with you on this. I reserve intelligence within the confines of that which makes man unique.

Note that we both recognize morals and values as part of intelligence. Also note that, in AI-themed movies such as I, Robot and The Terminator, the AI machines never stop to consider whether it's morally justifiable to "do away" with humans. They just interpret their respective goals in a purely logical fashion. So your point regarding the importance of mapping out and synthesizing human values prior to creating strong AI makes a lot of sense.

This idea is largely philosophical and not all that scientific. I wonder if those at the forefront of AI research have ever consulted those in the philosophy world. In fact, I wonder if anybody has tried a fresh-start approach to AI from the moral/ethical angle.

I just want to point out that I still consider language, reason, and morals to be components of the mind which exist in a supernatural sense and simply interact with our electrochemical brains. Thus, I don't believe these components of intelligence will ever be fully and accurately synthesized.

By Blogger nickster, at 23/11/05 08:59  

"On Intelligence" is an interesting title for this interesting and provocative post.
There is a book called On Intelligence, by Jeff Hawkins, which represents a significant move forward in the understanding of human intelligence.

If you go to the above link and read through the review and excerpts from the book, then go to OnIntelligence.org and see what Hawkins is doing with his hard earned insights and wealth, you will understand why so many people are excited about this new burst of energy and ideas into the machine intelligence field.

It was probably a coincidence that you titled this post "On Intelligence." It reminded me of the book, having read it recently.

By Blogger al fin, at 23/11/05 16:03  

Thanks for the link Al. I'll have to read that book, it looks like I would enjoy it. I didn't realize that you had a blog too. I just added a link to your site. Some very interesting stuff and a lot of links I'll have to explore.

By Blogger Micah J. Glasser, at 23/11/05 23:44  

It seems, in the first post above, that the poster is insinuating something like objective morality in his use of the terms Morals and Values. In fact, from both the human and AI perspective, the moral thing to do is that which maximizes your current goal system outcome. The goal system then must be very carefully constructed for any AI which even has the potential for takeoff. Unfortunately, though most religions would have us believe otherwise, the universe does not appear to have universal wrongs (evil) or universal rights (good). It then falls to each intelligent agent (be (s)he human, AI, IA, or ET) to create a goal system for themselves which maximizes not just the good of the individual, but the good of the entire sentient population. I'm doing a very bad job of stating the goals of the Singularity Institute (www.singinst.org), but to answer your question about philosophers and AI programmers talking, you may find it interesting and educational to read through the SL4 mailing list archives. (sl4.org)

By Blogger Ian Stuart, at 28/11/05 14:03  

Ian, you've hit the nail right on the head concerning the connection between all forms of intelligence, value, and the maximization of good. However, I wonder if you realize the subtle difficulty of this question, which is very philosophical and seems to be constantly overlooked. The problem is THE philosophical problem: What is the Good? You state, "It then falls to each intelligent agent (be (s)he human, AI, IA, or ET) to create a goal system for themselves which maximizes not just the good of the individual, but the good of the entire sentient population." This is certainly true. But it doesn't answer THE question. Just what is the good for an entire sentient population? Would it be a universal good that would be the same for all sentient beings? This is the same problem with the utilitarian calculus of J.S. Mill. Personally, I'm a Hegelian and I think the greatest good is that which maximizes the freedom of all beings. Anyway, this is a great discussion. I might have to write a separate post to deal with this subject at greater length. Thanks for provoking my thought on this.

By Blogger Micah J. Glasser, at 30/11/05 15:08  

As much as I typically love to write contrarian and inflammatory comments, I find that I am in complete agreement with you here. Unfortunately, defining exactly which utility function to maximize is non-obvious. Yudkowsky and the Singularity Institute are opting for collective volition, which appears to be synonymous with maximizing individual freedoms. I personally cannot see a better solution just yet. Of course, if the function were obvious, we would have begun implementing it a long time ago. I look forward to reading your post on the subject.

By Blogger Ian Stuart, at 30/11/05 15:56  

Greetings -

Why do you think it is that there is so little apparent evolution in the intelligence of members of the same species, relative to each other, such as ants, lions, fish, or even cats for that matter?

In view of the long periods of time that some of these creatures have existed in recognizable form, it is interesting that we do not seem to have evidence of much progression. One might have thought that the cats known to the Egyptians and their descendants in our homes would have experienced more natural selection for "brightness", particularly in view of the many different circumstances in which they have found themselves over the span of human recorded history.

Yet, despite thousands of generations since then, their behaviour is apparently identical.

Cheers,
Eric Lehner

By Blogger Eric Lehner, at 11/11/12 23:14  
