Peter Cochrane's Blog: Gesture control gives the mouse two fingers

Communicating by two, three or four simultaneous mechanisms is far better than just one...
Facial expressions and head movement are being used in machine interfaces. Photo: Nick Heath/silicon.com

Written at Schiphol Airport while waiting to board KL1515 to Norwich and dispatched to silicon.com via a company wi-fi hub in London three days later.

Mobile phone users often look strange wearing their Bluetooth headsets while talking into space and gesticulating at the same time. Well, it is about to get a whole lot stranger.

Throughout my life I have had to learn one machine interface protocol and language after another. Some still survive but many have died along with the technology that spawned them.

Everything from convoluted keyboard fingering combinations to the stilted patterns of speech necessary for the early recognition systems has been superseded.

Low-level machine code has now been sidelined by successive generations of higher-level languages.

And as a result, human productivity has escalated with the arrival of user-friendly, and increasingly intuitive, interfaces.

Everything is now so much slicker, smoother, more accurate and more efficient. But here comes the new boy on the block - gesture space. Everything from games machines to music players, mobiles, tablets and laptops seems to be heading for this humanistic mode of communicating.

Just when you might have thought interface technologies had reached a pinnacle, along comes something new, better and far more intuitive. Are there any snags? Yes. Lighting conditions, and the clutter of multiple limbs and digits in view, pose much the same problem for gesture recognition that background noise poses for speech recognition.

Do we need a fallback for these failure cases? I reckon so, which means the keyboard and mouse or pad will probably be around for some time yet. However, there is a new aspect and dimension here. Communicating by two, three or four simultaneous mechanisms is far better than just one.

When we speak and gesticulate at the same time there is a lot more information for a machine to work on. When you add facial expressions, head movement and gaze-awareness, it all becomes radically better. In fact, it becomes almost human.
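To make the idea concrete, here is a minimal sketch of how a system might weigh evidence from several channels at once and drop back to the keyboard and mouse when that evidence is weak. The recogniser names, commands and threshold are purely hypothetical illustrations, not any real product's API.

```python
# A minimal sketch of confidence-weighted multimodal fusion. All names
# (Hypothesis, fuse, the command strings) are placeholders for illustration.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    command: str       # e.g. "scroll_down", "select"
    confidence: float  # 0.0 .. 1.0, as reported by a recogniser

def fuse(hypotheses: list[Hypothesis], threshold: float = 0.5) -> str:
    """Combine evidence from several modalities; fall back if it is weak."""
    scores: dict[str, float] = {}
    for h in hypotheses:
        # Agreement between modalities accumulates evidence for a command.
        scores[h.command] = scores.get(h.command, 0.0) + h.confidence

    if not scores:
        return "fallback_to_keyboard_and_mouse"

    best_command, best_score = max(scores.items(), key=lambda kv: kv[1])
    # Normalise by the number of modalities so one noisy channel
    # (poor lighting, background noise) cannot dominate the decision.
    if best_score / len(hypotheses) < threshold:
        return "fallback_to_keyboard_and_mouse"
    return best_command

# Example: speech and gesture agree, gaze is uncertain.
print(fuse([
    Hypothesis("scroll_down", 0.8),  # speech recogniser
    Hypothesis("scroll_down", 0.7),  # gesture recogniser
    Hypothesis("select", 0.3),       # gaze tracker
]))  # -> "scroll_down"
```

The point of the sketch is simply that two or three agreeing channels can outvote one noisy one, while a low combined score hands control back to the tried-and-tested keyboard and mouse.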

About

Peter Cochrane is an engineer, scientist, entrepreneur, futurist and consultant. He is the former CTO and head of research at BT, with a career in telecoms and IT spanning more than 40 years.
