Written in a coffee shop on the corner of 1st and 71st Street in New York on a bright autumn day. Dispatched to silicon.com via a free wi-fi node.
For more than 20 years now I have worked on artificial life and intelligence systems. Much of the time, efforts in this area have been confounded by a lack of common definitions and descriptions, compounded by questionable performance measures.
In fact I think I can confidently state that we lack any meaningful description, definition, measure, quantification or understanding of intelligence, to the point where we are almost flying blind.
As our species is clearly intelligent, this lack of understanding presents something of a paradox, and a major barrier to scientific study.
In general, we can’t even converse productively about this topic without descending into comparisons with carbon life and intelligence, and worse, belief systems. It seems that most humans feel really threatened by machines that outperform or mentally challenge them. Witness the ‘hoopla’ surrounding Garry Kasparov, IBM’s Deep Blue and a game of chess.
Looking at this with the cold eye of reason, we ought not to be upset by the fact that machines beat us at anything. We should be asking how they did it, and how we might exploit that capability to the full.
The reality is our brains aren’t going to get a whole lot bigger, and we are not going to get any smarter, but the problems we face as a species will multiply – and we will need all the intellectual help we can get.
So now to business. One of my hobbies is the study of abstruse and difficult problems that present roadblocks to our continued progress, such as the quantification and understanding of AI. Recently I had a bit of a ‘huh’ (one of the most important expressions in science) moment when a mathematical analysis of a ‘minimally intelligent system’ produced the following:
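In simplified form, the relationship that emerged looks something like this (offered as a sketch of the general shape rather than the full expression):

I = K log(1 + ks·S·A·(1 + kp·P)·(1 + km·M))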
Where:
I = Intelligence (comparative ability to solve problems)
S = Sensor facility
A = Actuator or some output device
P = Processor power
M = Memory capacity
And:
K = A constant related to the system type
ks = A constant related to the system configuration
kp = A constant related to the processor
km = A constant related to the memory
If you are not a mathematician, scientist or engineer, don’t panic. Just focus on the implications detailed below.
If I am right in my analysis, the implications are profound, as this formula says:
- You can have intelligence without memory and processing power (in the discrete sense). All you need is a sensory and actuator system (S and A) that affords a reactive output from an input stimulus. Is this a reasonable outcome? I reckon. There are many examples in nature, such as slime moulds and jellyfish, and the same quality has been demonstrated many times in the field of robotics.
- More importantly, this formula says that intelligence grows as the logarithm of the sensor, actuator, processor, memory (SAPM) product. So provided the combined product is far greater than unity, if we increase processing power tenfold, then intelligence is bounded by a log(10) increase; if we increase memory 100-fold, it is bounded by a log(100) increase; and so on.
To make this more explicit, let’s assume a ‘base 10’ log system to simplify the enumeration. And let us say that the product of the SAPM terms increases from 1 to 100, then 1,000, then 10,000; the comparative intelligence would then increase as 2, 3 and 4.
And so, to increase a system’s intelligence tenfold, the SAPM product would have to be increased by a factor of 10,000,000,000.
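As a quick numerical sketch of this scaling, here is a short Python snippet. It uses the assumed form I = K log10(1 + ks·S·A·(1 + kp·P)·(1 + km·M)) with every constant set to 1; the function and the values fed into it are illustrative only, not part of the analysis itself:

```python
import math

# Assumed, illustrative form: I = K * log10(1 + ks*S*A*(1 + kp*P)*(1 + km*M))
def intelligence(S, A, P, M, K=1.0, ks=1.0, kp=1.0, km=1.0):
    return K * math.log10(1 + ks * S * A * (1 + kp * P) * (1 + km * M))

# Fold the whole SAPM product into the sensor term for simplicity.
for product in (100, 1_000, 10_000):
    print(product, round(intelligence(S=product, A=1, P=0, M=0), 2))
# -> roughly 2, 3 and 4: each tenfold jump in the product adds only 1 to I.

# A tenfold jump in intelligence (I = 10) needs the product to reach ~10**10.
print(round(intelligence(S=10**10, A=1, P=0, M=0), 2))
```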
An axiomatic condition that the formula satisfies is that if either the sensor (S) or the actuator (A) goes to zero, then so does the intelligence. Obviously, if there is no input, or no means to output, then to all intents and purposes the system is dead to the world. At best it would be some gibbering or twitching entity incapable of coherence.
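Under the same assumed form used in the sketch above, this boundary behaviour drops out directly; the constants and values here are again purely illustrative:

```python
import math

# Assumed, illustrative form: I = K * log10(1 + ks*S*A*(1 + kp*P)*(1 + km*M))
K = ks = kp = km = 1.0
S, A, P, M = 0, 5, 1e6, 1e9  # no sensory input, however much processing and memory
I = K * math.log10(1 + ks * S * A * (1 + kp * P) * (1 + km * M))
print(I)  # 0.0 -- the system is dead to the world
```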
This description of intelligence signifies quite a slowdown in the assumed rate of AI progress, and offsets the fears of the ‘singularity community’ somewhat. It also explains why all intelligence measures to date, based on IQ tests and/or neuron counts and interconnects, are out of kilter with our real-life experiences of intelligent systems.
Just a word of warning: The system model I used to derive the above was of the simplest and most fundamental kind. Even a modest increase in the number of elements, loops and nested processes quickly renders a full analysis impossible with the mathematical tools and abilities at our disposal. And I don’t see this situation improving anytime soon – if ever!
So it might just be that we have to build and evolve our AI systems much further before we have a tool set capable of proving, or disproving, the above formula and the conclusions for a more complex, or general, case.
What is going to be interesting is whether we will have to pose the question – or whether our systems will simply become curious enough to do so themselves.