The year is 2025. You’re sitting in a surgery watching your doctor carefully insert her fingertips into black thimble-like actuators.
A screen in front of the doctor flashes with the image of a glistening tunnel of flesh and, as she huddles over the controls, you feel a stirring in your bowels.
The gelatinous mass you feel coming to life inside you found its way into your body 24 hours earlier, when you swallowed a pill that looked unremarkable, save for its bulk.
That pill was actually a package of edible electronics, a miniature robot that will allow the doctor to feel inside your body without making a single incision.
This is the coming world of augmented humans, where technology gifts people senses, skills, and strengths never before available.
The swallowable robot is only one scenario that researchers in Bristol in the west of England are working to make a reality, as part of research that seeks to use bots to enhance, rather than replace, people.
Other projects include work to allow surgeons to operate on people located miles away with superhuman precision, and managers to split their day between offices situated on opposite sides of the world.
The conversation about robots today so often revolves around fears of how they will replace us, rather than help us.
Yet as the research taking place at Bristol shows, robotics is “more about augmenting people than it is about making them obsolete,” says Professor Anthony Pipe, deputy director of Bristol Robotics Laboratory.
He sees this research as reflecting a future where robots and humans enjoy a more symbiotic relationship–where robots work alongside people, enhancing their capabilities.
“There are lots of areas where robots could help humans do things,” said Pipe. “That’s really one of the big new areas. So as opposed to replacing humans, helping humans will be a large area for growth.”
Pipe talks about “human-robot teams” working together. “We’re not saying the robot suddenly becomes a simulacrum of a human being–it’s still a robot doing the dumb things and being instructed by a human being–but it may be able to do more useful and skillful things than robots have been used to do so far.”
He is not alone in his assessment that robots will routinely collaborate with people. In the US, Professor Manuela Veloso of Carnegie Mellon University has built CoBots, wheeled bots that automatically escort people through the university building but ask people for help when needed–for instance, to call the elevator for them.
Just as bots can help people, so they will likely always need humans, Veloso said–whether it’s an automated car that needs a person to take the wheel during snowy weather or a robotic warehouse picker that can’t get a grip on a slippery object.
“Just as humans like you and I are not able to do everything and don’t know about everything, robots will always have limitations,” said Veloso. “The thing would be to continue developing algorithms in which the robots themselves are useful but capable of asking for help.”
The swallowable robot–called the MuBot–has been the focus of researcher Ben Winstone’s work at Bristol Robotics Laboratory.
In effect, the device would transplant the tips of the doctor’s fingers onto its exterior, so when the robot pushes against the inside of the intestinal tract, the doctor would feel the sensation as if his or her own fingers were pressing the flesh. Using this device, doctors of the future could feel for the telltale outline of tumors and other cancerous growths in patients.
“Medical practitioners have spent years developing a highly enhanced sense of touch to allow them to carefully palpate tissue and recognise suspect lumps and bumps,” said Winstone.
“If you could take their hands and put it inside the body without opening the body up, then they can start to feel around and have an idea what’s going on,” he said.
Allowing clinicians to feel at a distance required Winstone and his collaborators to build an electromechanical fingertip on the outside of the robot. Inside the bot is an array of pins that replicates the biological structures found on the internal surface of human skin; it is these structures that stimulate the receptors responsible for letting our fingertips feel. When we detect the shapes of objects, we use the Meissner’s corpuscle, a mechanoreceptor that sits close to the surface of the skin and measures how it deforms when pressed. Similarly, when we detect how rough a surface is we rely on the Pacinian corpuscle, which acts a bit like a microphone in sensing the vibrations upon touch.

When the soft-skinned bot presses against the intestinal wall, these pins are pushed inwards and vibrate in much the same way receptors do inside our fingertips.
Attaching sensors to each of these pins would require electronics that were too complex, power-hungry and delicate. So instead the bot relies on a camera that captures the pins’ stirrings and relays the footage to a computer that calculates what touching that gut wall would feel like based on the movement of the pins.
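To give a rough sense of how a camera-based tactile sensor of this kind could turn pin movements into a contact reading, here is a minimal Python sketch using OpenCV-style blob tracking. The blob parameters, file names, and the crude pressure estimate are illustrative assumptions, not details of Winstone’s device.

```python
import cv2
import numpy as np

def detect_pins(gray_frame):
    """Find pin tips, which show up as small dark blobs in the camera image."""
    params = cv2.SimpleBlobDetector_Params()
    params.filterByArea = True
    params.minArea = 5           # illustrative thresholds, tuned per sensor
    params.maxArea = 200
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(gray_frame)
    return np.array([kp.pt for kp in keypoints])   # (N, 2) pin centres in pixels

def pin_displacements(reference_pins, current_pins):
    """Match each detected pin to its nearest reference pin and return displacement vectors."""
    displacements = []
    for pin in current_pins:
        distances = np.linalg.norm(reference_pins - pin, axis=1)
        nearest = np.argmin(distances)
        displacements.append(pin - reference_pins[nearest])
    return np.array(displacements)

# Usage sketch: compare an unloaded reference frame with a frame captured while the
# soft skin presses against tissue; larger pin displacements suggest firmer contact.
reference = cv2.imread("reference_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
pressed = cv2.imread("contact_frame.png", cv2.IMREAD_GRAYSCALE)
ref_pins = detect_pins(reference)
disp = pin_displacements(ref_pins, detect_pins(pressed))
contact_strength = np.linalg.norm(disp, axis=1).mean()   # crude proxy for pressure
print(f"Mean pin displacement: {contact_strength:.2f} px")
```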
The bot isn’t static but remote-controlled. Using a live feed from the bot’s video camera, the clinician can guide the tiny craft through the patient’s gut, pressing up against areas of interest. As a way of moving the robot, Winstone is drawing on biology for inspiration and examining how worms propel themselves forward by flexing the muscles along the length of their body, something called peristaltic motion.
“We’re looking at using peristaltic locomotion because it complements a soft bodied robot that can comply with the twists, turns and contractions of the gut as it is moving along and it doesn’t obstruct,” he said.
The sensation of touch then needs to be transferred to the doctor. For this task, a wearable haptic ‘fingertip’ is used, again lined with pins. Using the data harvested from the bot, a computer arranges these pins into a 3D model of the intestinal wall. In this way, the doctor can feel what it would be like to explore the inside of the intestine with their fingers. Another benefit is that once the shape of the intestinal wall has been captured, the doctor or a colleague can rerun the recording and probe the intestine as many times as needed.
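As an illustration of that record-and-replay idea, the short sketch below treats each captured reading as a grid of local pin heights that can be stored and later pushed back out to a wearable pin array. The HapticFingertip class and its interface are hypothetical stand-ins for whatever hardware driver such a device would actually use.

```python
import numpy as np

class HapticFingertip:
    """Hypothetical driver for a wearable pin array (a stand-in for real hardware)."""
    def __init__(self, rows=8, cols=8):
        self.shape = (rows, cols)

    def set_pin_heights(self, heights):
        assert heights.shape == self.shape
        # A real driver would command the pin actuators here; we just print a summary.
        print(f"max pin height: {heights.max():.2f} mm")

# Contact maps (grids of local tissue depth) recorded as the bot explores the gut...
recording = [np.random.rand(8, 8) for _ in range(100)]   # placeholder data

# ...can then be replayed by the clinician, or a colleague, as many times as needed.
fingertip = HapticFingertip()
for contact_map in recording:
    fingertip.set_pin_heights(contact_map)
```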
“There’s no way to look around deep inside the body without opening people up, so it’s a really interesting and exciting opportunity to see what can be done,” said Winstone. “You could save money on medical procedures if you discover you don’t need to take it further because you know the situation is safe, or you could help people sooner and more effectively by identifying something more quickly.”
Winstone also believes that people would be more inclined to get symptoms checked if resolving them meant “swallowing a slightly large pill” instead of invasive surgery.
Of course, that “slightly large pill” is in reality a robot and the thought of stuffing a machine down your gullet is understandably alarming.
The difference is that this pill is what is referred to as a soft robot, said Winstone.
“There’s often the idea that robots are hard, tin men. There’s a whole field of robotics made out of metal but a new approach to robotics is being realised that uses smart soft materials for both sensing and actuating,” he said. “It is more natural and more suitable for interacting with living beings, so it is much safer.”
In the case of the robot pill, Winstone has been experimenting with encapsulating the camera and pin-based tactile sensor in gel surrounded by a rubber exterior.
Then there’s the thorny issue of how to power the bot. Winstone faced the problem that batteries were either too bulky to swallow or too feeble to last the required time. As a solution, he is looking into wirelessly transmitting power to the bot through the patient using magnetic resonance induction.
“I’m looking at magnetic resonance induction to power the robots inside the body. That means you’re not dependent on batteries and you have the opportunity to charge. Essentially it means you have power for as long as you need it.”
Winstone estimates that he’ll have a “relatively good proof of concept” by next year and that, if everything goes to plan, there could be a system that patients could use within about 10 years.
Your life as a bot
Science fiction is full of stories where people live vicariously, sitting in virtual reality pods from where they control robotic avatars that can perform seemingly impossible tasks safe in the knowledge that any damage–or even death–is virtual.
The system that Dr. Paul Bremner is making may be a long way from letting us live these second lives, but perhaps is a first step. At the lab in Bristol, Bremner is making a rig that allows someone to control a robot in a different room and maybe eventually, from a different city or even country.
It’s a system that works today. Visit Bremner at the lab and you can strap an Oculus Rift virtual reality headset to your face and look through the eyes of a robot.
The robot in question is an Aldebaran Robotics Nao, a not-especially-imposing android standing at just under two feet tall with a permanently surprised look on its face. While this robotic avatar may view the world from the perspective of a toddler, the system still offers an out-of-body experience. Turn your head and so does the Nao; lift your arm and, thanks to tracking by a Microsoft Kinect, so does the Nao.
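A stripped-down sketch of the kind of mapping involved is below, written against the NAOqi Python SDK’s ALMotion joint names. The headset-orientation values are placeholders, the robot address is hypothetical, and the clamping limits are illustrative rather than taken from Bremner’s rig.

```python
import math
from naoqi import ALProxy   # Aldebaran's Python SDK (a Python 2-era API)

NAO_IP = "192.168.1.10"      # hypothetical robot address
motion = ALProxy("ALMotion", NAO_IP, 9559)
motion.setStiffnesses("Head", 1.0)   # the neck joints need stiffness before they will move

def clamp(value, low, high):
    return max(low, min(high, value))

def mirror_head(headset_yaw_deg, headset_pitch_deg):
    """Send the operator's head orientation to the Nao's neck joints."""
    yaw = clamp(math.radians(headset_yaw_deg), -2.0, 2.0)        # HeadYaw limit is roughly +/-119 deg
    pitch = clamp(math.radians(headset_pitch_deg), -0.67, 0.51)  # HeadPitch limits in radians
    motion.setAngles(["HeadYaw", "HeadPitch"], [yaw, pitch], 0.2)  # 0.2 = fraction of max speed

# In a telepresence loop this would be called each time the headset reports a new pose:
mirror_head(headset_yaw_deg=15.0, headset_pitch_deg=-5.0)
```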
Gazing at the world through the bot’s eyes–actually two stereo 720p cameras–is at once peculiar and engaging, particularly turning your head to see yourself standing next to you.
One application Bremner can eventually see for the remote robotics technology is giving managers the ability to drop into offices situated hundreds or even thousands of miles apart–all without leaving their houses.
“That’s nominally one of the things that you want to be doing with it. Rather than having a Skype conversation, you have the conversation with the robot as your avatar,” said Bremner.
Of course, taking the boss seriously when he’s made of white plastic and only comes up to your knee would be tricky, but Nao’s limited stature shouldn’t be a lasting problem: Bremner should be able to take the system he develops for Nao and transfer it to a bigger, more relatable bot.
There’s a long way to go in getting a bot to capture the subtleties of body language–the narrowing of the eyes, the pursing of the lips, the opening of the palms. In contrast, Nao can open and close its hands and wears a single expression of open-mouthed wonder.
That’s why Bremner is looking at other robots such as the 3D-printed Poppy, which is twice the height of Nao, as well as the more expressive RoboThespian, whose facial features can be modified using back-projection. For more expressive gestures, Bremner is considering fitting a bot with custom hands that can make a greater range of shapes.
While robots with their stiff joints and fixed faces may lack the expressiveness of a human, their ability to gaze at someone and reproduce limited arm gestures is a step up from telepresence robots today, said Bremner.
“The issue, I think, with those systems is they’re basically just Skype on wheels,” he said.
Lots of the subtle cues of face-to-face interaction are lost as a result, Bremner added. You don’t know exactly who the person is focusing on and it’s harder to keep people’s attention when you’re a screen on a pole.
Bremner’s robots could also restore some of the feedback lost by not speaking to someone face-to-face, by superimposing messages on your vision that tell you how the system gauges the conversation is going.

“We could overlay that on your vision so you can have a better idea of how the interaction is going and what changes you need to make,” he said.
The bot could even go beyond reproducing your arm and head movements, exaggerating them to help you get on with the person you’re talking to.
“We want to be able to add some semi-autonomy to the robot control,” said Bremner, “so that if you’re interacting with someone who is themselves an extrovert, when you do a gesture, the robot does a large gesture.”
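The semi-autonomy Bremner describes amounts to scaling the operator’s gestures by a factor tied to how the listener comes across. The sketch below shows that idea in its simplest form; the extroversion score and the gain range are purely illustrative, not values from the Bristol system.

```python
def adapt_gesture(joint_angles, neutral_angles, listener_extroversion):
    """Scale a gesture's amplitude up for extroverted listeners, down for introverted ones.

    joint_angles / neutral_angles: lists of joint positions in radians.
    listener_extroversion: 0.0 (very introverted) to 1.0 (very extroverted),
    as estimated by whatever perception module the system uses (assumed here).
    """
    gain = 0.7 + 0.6 * listener_extroversion   # illustrative range: 0.7x to 1.3x amplitude
    return [neutral + gain * (angle - neutral)
            for angle, neutral in zip(joint_angles, neutral_angles)]

# A modest wave becomes a broader one when talking to an extrovert:
print(adapt_gesture([0.4, -0.2], [0.0, 0.0], listener_extroversion=0.9))
```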
Similarly, Bremner’s partners at Queen Mary University of London are studying how to tailor the robot’s gestures to suit the mood suggested by the speaker’s voice or to stress a particular point.
Bremner is particularly interested in how people’s reaction to someone changes when they are embodied by a robot, and whether people would still respect and listen to their boss as a bot.
“What effect does that have on people’s personality perception, group interaction, and that kind of stuff?” he said.
At present the system is far from any practical use. Bremner is currently using it to study how people react when interacting with someone’s robotic avatar, laying the groundwork for future interactions between remote-controlled and autonomous robots and humans.
Bremner said, “The idea is to gather a lot of information on how people behave when they are controlling the robot and how the interaction is successful, so we can build up this domain of knowledge for an expert system.”
A major technical hurdle facing Bremner is how to remove the wires between the bot and the PC that currently processes some of the images. Once that is achieved, the robot will be able to be mobile, rather than stationary.
Once that’s cracked, Bremner wants the robot to roll before it can walk: he expects the first mobile bot will be wheeled, with people controlling it using a Segway-like motion, leaning in the direction they want it to travel.
The remote robots will get their first real-world test soon, with Bremner planning to see how people react to the bot in team-building exercises, such as desert island survival tasks.
Getting participants shouldn’t be a problem. When Bremner has shown it off, the reaction has been one of excitement.
“Most people are really like, ‘Wow, this is something really exceptional’,” he said.
Robotic surgery
The versatility of the human hand is thought to have played a role in our rise to become the dominant species on Earth.
But hands have their limitations, particularly when attempting to carry out precision work such as laparoscopic surgery, where doctors operate using a few small incisions rather than a large open one.
This minimally invasive surgery causes less blood loss and residual pain in patients, and procedures that used to require weeks in hospital can now be recovered from far more quickly. However, such work requires not only a steady hand but also multiple tools and assistants.

Today robotic systems such as the da Vinci Surgical System give surgeons the ability to carry out such operations with improved precision and less bleeding. They do so by allowing the surgeon to remotely control robot hands capable of far more exact movements than humans.
Although such systems are now becoming commonplace when carrying out the delicate task of removing a prostate, for example, there is room for improvement in certain areas.
One such area is training. It typically takes surgeons about 2,000 hours to become proficient with da Vinci robots, according to doctoral researcher Antonia Tzemanaki, hours that can take a doctor between one and five years to accumulate.
To help such systems become more dextrous and intuitive to use, Tzemanaki and her colleagues at Bristol are developing a robotic system that, when compared to the “pliers” and “scissors” of the da Vinci machine, more closely mimics the movements of a human hand.
The team is building what looks like a robot claw with three digits. Each digit can hold one of 13 specialist instruments for different operations.
The surgeon puts their hand inside an exoskeleton with magnetic sensors that measure the hand’s position. The exoskeleton then relays the hand’s movements to the robotic hand and maps the movement of the surgeon’s fingers to each of the robotic digits with their attached instruments. If the surgeon moves their thumb, index finger, or middle finger then those movements will be reproduced by the robotic hand’s thumb, index or middle finger.
Because the instrument closely mimics the movement of the doctor’s hands, the team believes it will be more straightforward to learn to use.
For example, in an operation to remove a gall bladder, a surgeon would generally require an assistant to hold the gall bladder out of the way while the surgeon cuts tissue.
With the robotic hand, the surgeon can instead use an instrument attached to one of the digits to hold the gall bladder up, while using instruments attached to another robotic hand to apply traction and cut.
“The instrument shrinks down the surgeon’s hand and lets them operate with a superhuman level of precision,” said Tzemanaki. “Their movements will be miniaturised and filtered to make them more accurate. So there’s no tremor, there’s more precision and of course at the end of these digits there can be different instruments.”
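Motion scaling and tremor filtering of this kind generally come down to shrinking the surgeon’s displacements and smoothing out high-frequency jitter before they reach the instrument. The sketch below illustrates the principle with a fixed scaling factor and an exponential moving-average filter; both values are illustrative rather than drawn from Tzemanaki’s design.

```python
class MotionScaler:
    """Shrink and smooth a tracked fingertip position before sending it to the instrument."""
    def __init__(self, scale=0.2, smoothing=0.8):
        self.scale = scale            # 5:1 miniaturisation of the surgeon's movements
        self.smoothing = smoothing    # higher = stronger tremor suppression
        self._filtered = None

    def update(self, fingertip_position_mm):
        # An exponential moving average damps the small, fast oscillations of hand tremor.
        if self._filtered is None:
            self._filtered = list(fingertip_position_mm)
        else:
            self._filtered = [self.smoothing * prev + (1 - self.smoothing) * new
                              for prev, new in zip(self._filtered, fingertip_position_mm)]
        # Scale the smoothed motion down to instrument-sized displacements.
        return [self.scale * coord for coord in self._filtered]

scaler = MotionScaler()
for raw in ([10.0, 0.0, 0.0], [10.4, 0.1, -0.1], [9.8, -0.2, 0.05]):  # jittery hand samples (mm)
    print(scaler.update(raw))
```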
Each robot hand has three instruments, mimicking a partial human hand. The first two instruments–the forefinger and thumb–act as grippers. The remaining “middle finger” can house any number of tools, with a blade, hook, irrigation device, and coagulation device among the many options.

Together, the two robot hands give the surgeon up to six instruments to use simultaneously and allow the doctor to mix and match the instruments they need. For example, the index finger and thumb could act as needle holders on one hand and as forceps on the other, with each hand letting a single person operate three different instruments at the same time.
“Whatever the surgeon is doing is reproduced in the instrument but better,” said Tzemanaki.
By mirroring the pinches and grips a human hand is capable of, the system improves on the dexterity and usability of current state-of-the-art robots for carrying out laparoscopic operations, such as the da Vinci Surgical System, she said.
If such robotic systems become commonplace, then there will also be an opportunity to overlay information from medical scans onto the video feed showing the patient, giving the surgeon more information to aid them in carrying out the operation.
Looking further into the future, such a system could benefit from the research Winstone is doing to allow doctors to remotely experience a sense of touch.
“In my PhD we’re talking about kinematics, the designing of intelligent instruments,” said Tzemanaki. “But then the absolute next and crucial step is to give a sense of touch. [The point is to make it] as natural and as close as possible to the movement of the human hand.”
As with all medical projects, the raft of regulatory approvals needed before the technology could be released means it is likely many years from being made available, perhaps as many as 20, said Tzemanaki.
Better together
The way robotic technologies can and will augment human abilities is sometimes lost amid concerns people will be unable to compete in a world of smart machines.
And while the impact of fast-approaching automation, drones, and robots on industries such as haulage, delivery, and retail is yet to be felt, the projects at Bristol demonstrate ways that people and robots can achieve more by working together rather than in competition.
Professor Erik Brynjolfsson, the MIT economist warning societies to prepare for the upheavals automation will trigger in the job market, calls this symbiotic relationship between humans and computers “Racing with machines.”
A powerful demonstration of the concept could soon be realised by the robotic exoskeletons being made by the likes of Ekso Bionics and ReWalk. While two-legged robots currently struggle to stay upright by themselves, a human in a robotic exoskeleton promises to combine the strength of a machine with the balance of a person and may one day allow the injured and infirm to walk with ease and help construction workers and soldiers carry back-breaking loads.
In the immediate future, bots are likely to continue to suffer from significant limitations. Today, robots have difficulty with manual tasks that we find simple, such as picking items from cluttered warehouse shelves, and roboticists predict their creations will find such tasks difficult for years to come. But use a robotic system to enhance a person’s capabilities and let the human fill in the gaps in the bot’s skills, and the result could be something far greater than the sum of its parts.
For Dr Peter Ledochowitsch, a scientist at the Allen Institute for Brain Science, it is a reflection of how biological and synthetic systems complement each other’s strengths, as demonstrated by a remote-controlled cyborg beetle, which can fly far longer than any similar-sized electronic device.
By combining the strongest elements of natural and man-made creations, he believes we can continue to transcend the limits we face as humans.
“I adhere to the idea that as we make technological progress, people will find jobs that before were inconceivable,” said Ledochowitsch.
It’s a notion that the father of robotics, the late Joseph Engelberger, would have agreed with. The founder of one of the world’s first robot manufacturers, he was a staunch opponent of the idea that bots would diminish human existence by taking away jobs.
To the contrary, he saw robots as freeing people from humdrum busywork and empowering them to achieve more.
Robots don’t destroy employment, he told The New York Times, they “take away subhuman jobs which we assign to people” and in doing so give them the time and the tools to be better humans.
Cover image credit: iStockphoto/video-doctor