
The Singularity: the rise of the machines?

The Singularity is the hypothetical point in time when machine intelligence exceeds human intelligence. Read about the science fiction-esque implications of this theory.

Moore's Law (i.e., the observation that the number of transistors that can be placed on an integrated circuit doubles approximately every two years) has been extended informally to technological progress in general. As a matter of fact, Moore's Law is expected to hold for at least five more years and perhaps much longer. The question becomes, though: What change will cause us to deviate from that trend?
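To make the doubling concrete, here is a quick back-of-the-envelope sketch in Python; the starting transistor count and time spans are hypothetical, chosen only to illustrate the arithmetic:

```python
# A rough illustration of Moore's Law: transistor counts doubling every two years.
# The starting count and time spans are made-up example values.

def projected_transistors(start_count: int, years: float, doubling_period: float = 2.0) -> int:
    """Project a transistor count forward, assuming one doubling per period."""
    return int(start_count * 2 ** (years / doubling_period))

start = 1_000_000_000  # assume a 1-billion-transistor chip today (illustrative)
for years in (2, 4, 10):
    print(f"After {years:2d} years: ~{projected_transistors(start, years):,} transistors")
```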

Some say it will be a dramatic slowdown in technological advancement. The human race will eventually hit a wall and be stuck at one technological level. This might happen centuries from now, or it may be next year, but we will come upon that point; our brains can only handle so much advancement and innovation. Entomologist and population biologist Paul Ehrlich gave a lecture on June 27, 2008, in San Francisco during which he stated that the human brain has not changed much in about 50,000 years. Information about cultural evolution such as this lends support to the wall theory.

But what if the opposite happened? What if that wall lay beyond the point where humans could create autonomous, thinking, self-advancing machines? Machines could reprogram their own source code and essentially learn freely without human intervention (think Data on Star Trek). This point in technology could trigger what has become known as The Singularity (or the Technological Singularity).

If it is possible for humans to reach this point in technological advancement, the machines could create new, better versions of themselves or simply rewrite their own source code, advancing their intelligence either way. Without the limitations of the human brain, there may be no cap on that intelligence at all.

There are science fiction-esque implications to The Singularity theory. If artificial intelligence (AI) exceeded human intelligence, what would stop the machines from taking over and potentially destroying humanity? There are several theories on this subject, from an AI box (the AI is kept constrained in a virtual world where it cannot affect the external world) to a friendly AI (which will likely be harder to create than an unfriendly AI, but which may keep unfriendly AIs from developing simply to maintain its own existence).

This turning point is likely to come eventually, even though none of us alive today may ever see it. Although transistor counts on ICs have indeed been doubling every two years or so since the late 1950s, that pace is not likely to continue indefinitely.

This basic overview of The Singularity is meant to be a jumping-off point to get Geekend readers talking about the theory, which has genetic, biological, and philosophical implications. For more details about The Singularity, take a look at these resources:

145 comments
ilovesards

This means, if we have a 1,000-gram brain, we have used just 1 gram of it, creating nuclear power. When will we use the full 100%? If you believe in the Bible, it will be very near: after Armageddon, the cleansing of this world by almighty God. During this time, man could fly faster than light, transform himself into light particles, and explore the 14-billion-light-year (that's its radius only) universe. If you believe in science, it could be the robots that take over, and it would take Matrix-style wars. Which one do you want?

JohnOfStony

I didn't vote because I don't consider any of the 3 options valid. Moore's Law cannot continue indefinitely, as we are already approaching atomic scale for individual components. However, just because we can't make components any smaller doesn't mean we'll hit a technological wall. Technology will continue to advance through more ingenious devices, not more of the same only smaller, the latter being an extremely limited view of the future. No mention has been made of organic electronic devices which grow and reproduce themselves. Just as 16th century scientists couldn't predict the future because they had no concept of electricity, today's scientists cannot predict the future because we have no idea what major breakthroughs may occur. To think we know it all today is just stupid.

darije.djokic

The laws of physics dictate that Moore's law cannot continue indefinitely: either the calculating elements (transistors) will reach a size below which they can no longer function as switches, or the speed of signal transfer in them will reach a limit. Optical computers will use light, whose top speed of 300,000 km/s is several orders of magnitude higher than what is in use today. But even with that, if the computer has to be big enough to perform a task of a certain complexity, its size will come into conflict with the required speed, and the device will slow down and eventually bog down. The same goes for coding: when the absolutely best program is created (irrelevant whether by man or machine), there will be nothing better. That will not happen any time soon, but we can surmise that at a certain point the machine might become better than man is today. There is nothing to worry about: by that time we will be able to artificially induce, at will, the evolution of humankind into a new species (let us call it Homo superior) with bigger brains giving severalfold higher intelligence, and bigger bodies to carry such a head. Of course the size of bodies and brains also has a limit; growth is not endless, since the laws of physics apply to biology as well. Whether the machines at the end of their evolution will be more intelligent than people at the end of theirs cannot be estimated. Even that is irrelevant: if they are, humankind will integrate itself with the machines, controlling them from inside before they could take over, becoming one new hybrid cyborg species that would look like anything but the Borg collective from ST.

jeasterlingtech

Not going to happen... mankind is in decline. A hundred years from now, the average Joe is not going to be able to pour beer out of a boot even if the instructions are written on the heel. The ten percent that advances is going to be smothered by the ninety in decline.

Dr_Zinj

"Much" is a relative term. No, not any gross physicial changes in the brain, but several significant changes in operation that effect social organizations. We've undergone a lot of genetic mutations in the past 50K years, and several of them right before the point we began organizing into cities, states, and nations. While that is merely a correlation at this point, there is suffient basis to theorize that it was a genetic mutation expressed in our mental capabilties that allowed us to move past the family/clan level of social organization without resorting to instantly killing each other. The scenario of AI machines taking over or humanity is actually growing less likely. Our advances in understanding and manipulating biology make it more likely that we will either begin redesigning and modifying ourselves; either at the cytoplasmic/genetic level, or at the gross physical level; or we'll make a mistake (either accidental or deliberate) that wipes us out, or worse, wipes out all other life on the planet.

Snak

First of all, the assertion that the human brain has not evolved in 50,000 years is most probably nonsense. The body has; only a few hundred years ago we were smaller than we are now. Medieval armour would not fit a modern man. To accept that the body has evolved but not the brain is a perverse sort of arrogance. (Evolution, contrary to common belief, does NOT take an age; it happens quite quickly. It HAS to, because the environmental changes that drive evolution can, and do, happen quickly.)

In order to determine whether it's possible to create an intelligent machine, one must first know what 'intelligence' is. Intelligence cannot exist without imagination, and imagination cannot exist without consciousness. Many scientists have debated consciousness, and many experiments on brain-damaged people have thrown up some interesting ideas. One such idea is that both a sense of 'self' and 'consciousness' are imaginary. The phenomenon of 'consciousness' may be an accidental by-product of the close proximity in the brain of, say, optical and audio circuitry. The fact that events we see and events we hear happen within us (in our heads) and are also connected in the real world gives us a fuller experience of them, and 'awareness' takes place in between the circuitry, as a by-product (this is why the so-called 'seat of consciousness' in the brain cannot be found). But no one yet knows for sure. Likewise, the illusory concept of 'self' is drummed into us from birth (YOU have made a mess. YOU are making a noise. Mummy loves YOU) and, without this, no real 'self' occurs; this is borne out by the evidence of wild-reared children (yes, it does happen) who have no such sense of 'self'.

We can imagine the world around us in terms of past, present and future; a necessary talent for hunting, foraging, planting etc. With imagination, with an 'awareness' of our environment, with a diet that allows us to spend time cogitating instead of constantly needing to forage, and with problems that need solutions, 'intelligence' evolves. We all have an innate understanding of higher mathematics, for example: we can correctly kick or throw a ball at a target without consciously calculating speed, trajectory etc. Dogs can catch balls, so they have it too... in fact any hunting animal can do this.

To confer 'intelligence' on a machine (and don't forget, in order to make a machine as complicated as a brain, we may have to resort to biological 'chips' rather than silicon/germanium ones), it first needs an awareness of itself. It needs imagination. Awareness for a machine may not need to be different from ours. A knowledge of what events make what noise, and why, coupled with the ability to see these happening and a recognition of the entity's position in space/time relative to the event, may well be enough. Imagination: machines already have this in the form of chess computers that calculate sequences of moves before electing to play the best one. You only have to marry imagination with awareness and you have a fledgling intelligence that would be limited only by the physical circuitry, which, in the case of a biological entity, can evolve.

The main question would then be: 'Are we in danger of being disposed of as inefficient, outdated, superseded?' Well, as with any offspring, that will depend entirely on its parents. Interestingly, experiments show that we might not actually have free will: that ALL our actions are instinctive, with the illusion of having made a decision provided by this concept of 'self' and 'awareness'.

If this is the case, then the intelligent robot is already here, and has been here for at least 2 million years...

misceng

Moore's Law does not apply to humans, unfortunately. Current observation of the education systems of the developed world seems to show that those coming out of education are less capable than their predecessors. We can therefore expect a slowdown of development, which will postpone the singularity indefinitely.

Ramon Somoza

The question is not correctly stated, in the sense that it's an either/or. There might be a different option: human beings might actually supplement their own intelligence with artificial means. For example, in the same way that we have artificial limbs, pacemakers, and other implants today, it may well be that we develop additional implants to increase our own intelligence. If so, then the potential of human intelligence might be unlimited.

MarkWAliasQ

Seems obvious to me that we will get to a point where we are adding our own upgrades.

AnsuGisalas

To me, your soul is a stain; to the stern with you!

wizard57m-cnet

In the last few years, the field of prosthetic devices has made tremendous strides in replacing lost limbs, notably hands and legs. I've read reports of a prosthetic hand that can be linked to the patient's nerves and actually controlled to the point of gripping a styrofoam coffee cup. The prosthetic consists of micro-machines, chips, and circuits, and as electronics continue to get smaller the levels of control improve. There has been talk of using nanobots to correct vascular blockage ("blocked arteries"), aid in cancer treatment, dissolve blood clots in deep vein thrombosis...not talk of "if" but "when" the electronics become small enough. Interesting, to say the least!

Slayer_

Why can't our brains be enhanced with a computer? Some extra processor, memory, maybe networking to a central database for information storage and retrieval. Need to know something new? Request info from the database; a few minutes later it's downloaded into your brain and you suddenly know it.

jkameleon

A 3 GHz wavelength is about 10 cm in vacuum, and a CPU chip is a couple of centimeters across. More than 50% of energy is spent on clock distribution. Consequently, the computers of today can't get much faster than this, yet today's silicon transistors could operate even in the terahertz range. The future development seems pretty logical: more parallel, possibly asynchronous processing, which will necessitate more innovation in the field of computer architecture and software, not just hardware. A break from the more than half-a-century-old von Neumann architecture is long overdue. I see no reason why a massively parallel machine, with each of its processing units 1000 times faster than the CPUs of today, couldn't surpass the human brain. What it would mean for humanity is impossible to tell. It's impossible to predict the actions of something smarter than we are. When men speak of the future, the gods laugh.
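The 10 cm figure comes straight from dividing the speed of light by the clock frequency; a quick sketch of that arithmetic (example frequencies only, and real on-chip signals travel slower than light in vacuum):

```python
# Distance an electromagnetic signal covers in vacuum during one clock cycle
# (its free-space wavelength). Frequencies are illustrative example values.

C = 299_792_458.0  # speed of light, m/s

def cm_per_cycle(frequency_hz: float) -> float:
    """Free-space distance travelled in one clock period, in centimetres."""
    return C / frequency_hz * 100

for ghz in (3, 10, 100):
    print(f"{ghz:>3} GHz -> ~{cm_per_cycle(ghz * 1e9):.1f} cm per cycle")
```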

HAL 9000

It may very well be the first, and we are not going to be here much longer as the atmosphere changes dramatically. ;) Col

AnsuGisalas

The Planck length should at the very least be the final cut-off for circuitry density... on the other hand, we'll break into viable quantum computing way before that limit, and then we have a whole new ballpark again.

clarelove

The increase in stature has more to do with nutrition than evolution. Chiefly intake of animal protein.

Sterling chip Camden

pacemakers, hearing aids, even glasses and contact lenses. The upgrades will become more plentiful, and more effective.

Nicholas.Newman@Skynet.be

Isn't this what we do with the web all the time? But we need to exercise judgement on what we retrieve; we need to choose. If this were done directly to the brain, we'd need to build in a monitoring function to assess the validity of what we retrieve. And just imagine the potential for hacking, spying, and directing brains!

wizard57m-cnet

At least tell us you've re-imaged your OS, and keep up to date with security patches!

AnsuGisalas

Ain't broken. What you describe would simply rob us of what makes us clever in the first place. We're superior to computers in thinking exactly because we make our own truths. They're superior in being abaci, let them have their thing :p ;)

santeewelding

You as tired of this shilt as I am tired? Like, in another thread, about somebody establishing relevance for us all.

AnsuGisalas

Glad you like it! I actually ripped it off from user anintruder, but I have a feeling that anintruder doesn't need google as a go-between.

wizard57m-cnet

Hehe, my wife claims I already suffer from those on occasion! On COLD nights, I jokingly comment, "Wow, this is a two-blonde night!" I receive the "non sequitur" glare and the "I don't think so" response!

santeewelding

Right here, with every word you write. Been that way forever.

santeewelding

There is only Google. At which time the name will be dropped, there being no need to distinguish it from other, there being no other.

Slayer_

But you would have access to a cloud based information network. Getting hacked would be a psychological thing...

jkameleon

The memristor means even more data will fit into memory sticks, smartphones, and such. Without the P/E cycle limitation of current flash technology, we'll see more laptops and other personal computers without a mechanical hard drive. Memristors and further miniaturization will not necessarily mean faster CPUs, however, because the limiting factor here is the speed of light. I doubt the x86 CPU clock will ever pass 10GHz, at least not by much. Smaller and simpler ARM processors, though, might get a bit faster. The CPU speed wall might actually be a good thing. The only thing it actually limits is ol' von Neumann's architecture. Methinks it's about time we invent something new.

NickNielsen

do that all by themselves, with no help from us. All we need do is point and laugh. :D

AnsuGisalas

The slightest nod to the topic at hand coupled with the appearance of reverence; it's the perfect varnish for the mother payload. Or is it a veneer... Maybe it's a varnished veneer. Or a tarnished Teniers. Never mind, so long as the suckers get caught off guard, red-handed, pants down and sins all hanging out!

NickNielsen

But lately I'm finding that both relevance and reverence are overrated.

AnsuGisalas

But in content, the irreverent is seldom irrelevant. We need all the help we can get in Ordo Iocularii :)

NickNielsen

But I'm still having trouble with the difference between irrelevant and irreverent. ;\

santeewelding

As I criss-crossed your post, whipsawn, you began to make sense.

wizard57m-cnet

Who poured the beer in the boot? Why? What brand of boot? Style: cowboy boot, climbing boot? I would assert that if someone were of the intelligence to pour beer in the boot to begin with, somebody close by will dare them to drink it...tinea pedis spores and all. Therewith, the pretense that someone would need to look for instructions on the heel would be negated. Conclusion: no relevance between beer in the boot and the Singularity can be inferred; non sequitur. The wrong liquid was used as the point of reference, and most of us know the liquid involved in that old saying. Which also cannot be linked to the Singularity, as it does not bear any relevance to machines, but may hold some relationship with the common sense of some percentage of the human population. The same ones that would waste good beer by pouring it in a boot to begin with.

wizard57m-cnet

Consider the way of the ant ( to borrow from one of your recent posts, Santee! ), they work as a group, signaling by way of chemicals, touch, sight to a point, each individual ant takes on a task, and the end results can be astounding...mounds, tunnels, even in somewhat solid materials. Google is in a way mimicking the ways of the lowly ant in regards to its distributed data centers. By themselves, pretty good, but when linked via electrical signals, the goals can be amazing...as well as frightening. Perhaps our first non-human intelligent creation will be a multitude of distributed datacenters made up of countless little servers, each performing a task, to the betterment of the collective "hive"?

wizard57m-cnet

Although many times we are "hunting" for the answer, and the object of a hunt is generally referred to as "prey". Which brings to mind an enjoyable book series, John Sandford's "Prey" books... but that's a different genre.

Sterling chip Camden

Google is great, Google is sage
Let us thank them for this page
By their site we must be fed
Give us this day our daily web
AMEN

charlvj

Chip, I've never really thought of it until you said that. Perhaps our brains are now geared more toward processing and reasoning than memorizing. We have so much data that requires processing every hour, as well as the reasoning and wisdom to determine how to use the results of that processing... It would be interesting to somehow artificially increase (help) the brain's memory while keeping the reasoning power. A book comes to mind that has often intrigued me: Gridlinked, by Neal Asher. He introduces a device that you "install" behind your ear that gives you a mental interface to the local Runcible (super-duper computer). You can query the computer by mere thoughts. For example, the main character -- an investigator -- walks down the street and sees a face that looks familiar, so he searches the police records for the face through the mental interface thingy, and by the time he passes the person he knows all about this person's criminal history and that he is currently wanted. Both cool and scary...

Sterling chip Camden

... we now very rarely memorize long epic poems. That was often their only way to preserve a story for generations, so they developed the memory to support it. Now, we can't even remember where we put our keys, and we ask Google about anything else. I'm afraid our brain outsourcing (or outboarding, as you said) may be irreversible short of a technological disaster.

Slayer_

This sounds like an improvement. How many times does someone have to speed in the winter and crash their car before they learn? Bet they'd learn damn fast if they were "forced" to ride along with every fatal accident ever recorded, forced to feel the pain of death over and over.

AnsuGisalas

is only developed by the work we have to do to get information and results. If it's fed in, frictionless, what's left? An abacus with a built-in abacus.

NickNielsen

[i]Clock could be distributed throughout entire chip on lower frequency, which could be multiplied locally by PLLs.[/i] This method of clock distribution has been used for decades in communications equipment of all types. PLL is used to stabilize frequencies, but is not the means of distribution. My first exposure to PLL was a tube oscillator that used a 1MHz crystal in a Colpitts configuration to generate a 3MHz master oscillator output at 1 x 10^-6 stability. My example with the atomic clock was for a secure (read: military) communications system that required frequency accuracy of 1 x 10^-12 to maintain synchronization over an RF connection.

jkameleon

PLL is a very useful concept which can be employed for many different things, not just stabilizing an oscillator's output frequency. Clock distribution is one of them http://en.wikipedia.org/wiki/Phase_locked_loop#Clock_distribution The other is frequency multiplication/division. The clock could be distributed throughout the entire chip at a lower frequency, which could be multiplied locally by PLLs. > In such a system, one or more master clocks (usually atomic) generate an output signal; the distributed clocks then use this signal as the standard for their outputs. Before I ventured into finance, I was writing programs for specialized wireless modems, among other things. I used a software-emulated PLL for synchronization of receiver and transmitter clocks. It worked like a charm; receiver and transmitter remained in sync minutes after the signal was lost. No atomic clock necessary; the usual quartz oscillator was enough.
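For readers curious what a software-emulated PLL looks like in principle, here is a minimal sketch; the frequencies, gains, and update rate are invented for illustration and are not from the modem firmware described above:

```python
# Minimal sketch of a software phase-locked loop pulling a local oscillator into
# lock with a reference clock. All values below are made-up illustration numbers.
import math

SAMPLE_RATE = 10_000   # loop update rate, Hz
REF_FREQ = 100.0       # reference clock, Hz
NOMINAL_FREQ = 98.0    # local oscillator's free-running frequency, Hz
KP, KI = 5.0, 0.01     # proportional / integral loop-filter gains (tuning values)

ref_phase = local_phase = integrator = 0.0
local_freq = NOMINAL_FREQ
dt = 1.0 / SAMPLE_RATE

for _ in range(20_000):  # simulate two seconds
    ref_phase = (ref_phase + 2 * math.pi * REF_FREQ * dt) % (2 * math.pi)
    local_phase = (local_phase + 2 * math.pi * local_freq * dt) % (2 * math.pi)

    # Phase detector: wrapped phase difference between reference and local clock.
    error = math.atan2(math.sin(ref_phase - local_phase),
                       math.cos(ref_phase - local_phase))

    # PI loop filter steers the local oscillator toward the reference frequency.
    integrator += KI * error
    local_freq = NOMINAL_FREQ + KP * error + integrator

print(f"local oscillator settled near {local_freq:.2f} Hz (reference: {REF_FREQ} Hz)")
```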

NickNielsen

Phase-locked loop is a method for stabilizing an oscillator's output frequency. It is internal to the oscillator. http://en.wikipedia.org/wiki/Phase_locked_loop The system he describes is known as distributed timing and has been used in secure communications systems for decades. In such a system, one or more master clocks (usually atomic) generate an output signal; the distributed clocks then use this signal as the standard for their outputs.

jkameleon

It's called Phase Locked Loop, or PLL. Another possibility is a standing wave with phase shifts taken into account. This probably (and hopefully) won't help speed up the von Neumann architecture much. Having the clock synchronized throughout the processor chip doesn't help if other signals travel slower than light. It could, however, alleviate some multicore synchronization problems. If I understand that quantum teleportation thingy TR was writing about the other day correctly, this could also be a possibility. True, information can't be transmitted faster than light, but a clock carries no information, so... It's a long shot, but at first glance it looks feasible.

Nicholas.Newman@Skynet.be

Pardon my ignorance, but instead of distributing clock signals from a single clock, has anybody tried distributing them over shorter distances from several synchronised clocks, with presumably "Ok, let's synchronise our watches" every now and again? Or is this what's done anyway?
