Thursday, December 08, 2005

An Evolutionary Strategy for Human Level AI

I've been struck by the idea that the best model for achieving human-level AGI (Artificial General Intelligence) should be based on biology, though not necessarily on reverse engineering the human brain. Here's what I have in mind.

Presently there is only one known example of AGI in the universe, and that is, of course, the mind of man. As it turns out, the human brain is also the most complex thing known in the universe. There are many key attributes of human intelligence that we either do not understand at all or understand only dimly, such as consciousness and natural language. Supposing that certain functions of the human brain that we currently do not fully understand are responsible for the capacity for natural language or consciousness, how can we hope to build the functional equivalents of these neurological structures without knowing how they work in human intelligence?

What I propose is that we build an artificial chain of life. We could pick about twenty different organisms to represent the chain of life from bacteria all the way up to man. The advantage of this artificial chain of life (ACL) is that it mirrors real evolution, in which the neurological substrate of each organism is retained and the modifications of higher-level organisms are merely built on top of the old structures.

This would have to be a massive research project. It would require that we map out the genome of each organism represented in the ACL. The reasoning is that if we start with the simplest organism, we should be able to figure out precisely which genes code for which structures. Then, every time we move up a rung on the ladder of the ACL, we already have the artificial genetic blueprint for the underlying structure of the artificial brain of the next organism.

So at every stage along the way we have a functional replication of the neurological structures of the lower artificial intelligence, the corresponding genome of that organism, and the genome of the next higher organism. It must then be ascertained which new genes in the higher organism code for new neurological structures. Because its genome will be very similar to that of the next lower organism, this should be fairly easy to figure out. And because we would already have a functional replication of the lower neurological structures, we could use that as a platform to functionally replicate the next higher ones.
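The genome-comparison step described above could be sketched as a simple set difference. This is only an illustrative toy, not real genomics: the gene names, the flat "set of gene identifiers" representation, and the neural/non-neural annotation table are all hypothetical assumptions for the sake of the sketch.

```python
# Toy sketch of the ACL genome-diffing step: find genes present in the
# higher organism but absent from the lower one, then keep only those
# annotated as coding for neurological structures. All names are made up.

def new_neural_genes(lower_genome, higher_genome, annotations):
    """Return the genes novel to the higher organism that are
    annotated as neural."""
    novel = set(higher_genome) - set(lower_genome)
    return {g for g in novel if annotations.get(g) == "neural"}

lower = {"geneA", "geneB", "geneC"}
higher = {"geneA", "geneB", "geneC", "geneD", "geneE"}
annotations = {"geneD": "neural", "geneE": "metabolic"}

print(new_neural_genes(lower, higher, annotations))  # {'geneD'}
```

In reality the hard part is the annotation step itself, which is exactly the knowledge the essay says each lower rung of the ACL would supply.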

At the lower end of the ACL each step up the ladder would be fairly simple, but as we progress, each stage of brain evolution becomes exponentially more complex. For instance, going from a lobster to a snake would be easy, but going from a chimp to a man is an enormous step. This works out perfectly, however. Say we spend one year developing each stage of artificial intelligence. Each year, going from one stage to the next gets exponentially more difficult, but each year our technology and science improve exponentially in correspondence. If this is an accurate picture, we would be able to functionally replicate the human brain in twenty years.
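The timing argument above can be made concrete with a toy calculation. Assume, purely for illustration, that the difficulty of each stage doubles while our effective capability also doubles each year; the two exponentials then cancel, and twenty stages take twenty years. The doubling rates are assumptions, not measured values.

```python
# Toy model of the essay's timing argument: exponentially growing
# difficulty offset by exponentially growing capability yields a
# constant time per stage. Rates are illustrative assumptions.

stages = 20
difficulty = 1.0   # arbitrary units of work needed for stage 1
capability = 1.0   # arbitrary units of work we can do per year

total_years = 0.0
for stage in range(1, stages + 1):
    total_years += difficulty / capability  # years spent on this stage
    difficulty *= 2  # the next stage is twice as hard
    capability *= 2  # but technology has also doubled

print(total_years)  # 20.0
```

Of course, if difficulty grows even slightly faster than capability, the later stages dominate and the schedule blows up, which is the fragile point of the argument.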

Another aspect of this is that it would allow our programming skills to grow naturally and exponentially with our knowledge of cognitive science. What I mean is that on the lower rungs of the ACL the programming is rather simple, and we currently have the skills to write these programs and the right hardware to run them. But as we move up the ACL we can build on these programming skills in a fluid way. I would propose that we concentrate on replicating the actual programming of the organism so that it behaves like a real one. The advantage of this would play out at the higher levels. Imagine an AI that was based on a chimp. We could build multiple robots with a functional replication of a chimp's neurological structures and a chimp's programming for social behavior. This would allow us to see how these AIs behave in the context of social intelligence and, presumably, to begin slowly implementing the kinds of structures present in human brains that are missing in chimp brains. In this way we could virtually watch the evolution of man unfold, and the end result would, at least in theory, be a very human-like AI.

So what do you think?


First, welcome to the SL4 list. It is good to see you posting.

Overall, I think that this is an interesting idea, though I do not know that it actually helps with the task of building a Friendly AI. I can see how it would help with the general task of building an AGI. Unfortunately, that is exactly what needs to be avoided until friendliness can be assured.

The danger lies in having a human brain, with all of its associated foibles, able to operate at computer speeds and with nearly limitless memory and data-processing capabilities. Humans have not proven to be inherently altruistic, and a mind with that sort of power, especially one raised in such an alien environment, could be truly frightening. Also, IBM and BMI have decided to cut out all of the lower creatures and just jump straight to a human brain, so by the time you get to the rabbit level, they should just about have a human.

By Blogger Ian Stuart, at 8/12/05 23:43  

Hey Ian,
It looks like a lot of good conversations are taking place on the SL4 mailing list. It should be a lot of fun.

It seems to me that the human brain, once functionally reproduced, could be easily modified both structurally and programmatically toward the desired state of benevolence. As I've argued in the past, I think it is imperative that the machine be accepted as human. Only a machine that identifies itself as part of human civilization can have benevolent super-goals. One of the things that set me thinking about this is my study of human evolution. One theory of the evolution of human intelligence is called the theory of social intelligence; it is also a theory of the evolutionary function of consciousness.

At any rate, I support basing AI on the human mind. Benevolence must be a result of enlightened consciousness. In other words, the machine must choose benevolent action because it believes that benevolent actions are ultimately the most rational actions. We know that there exist humans who have achieved this state of consciousness, but we have no idea how a mind based on some alien model would actually play out in the real world. Also, if we followed this evolutionary method, we would master the knowledge of the evolution of mind and would therefore be far better able to accomplish benevolent AI at the appropriate time (at the end of the ACL) than we would be in our current state of relative ignorance about cognitive science. In other words, let's master how the human brain works before we march off trying to build a new and improved version. If we don't, the consequence could be a Frankenstein.

By Blogger Micah J. Glasser, at 9/12/05 14:33  
