When it comes to securing IoT (Internet of Things) devices, one argument that seems to carry weight is that the digital brain commonly used in IoT devices, the Application-Specific Integrated Circuit (ASIC), arrives from the chip manufacturer with its functionality permanently etched in silicon. Security risks can be designed out for good.

Another integrated circuit, the Field Programmable Gate Array (FPGA), is gaining traction among IoT device manufacturers. As the name implies, the chip is reconfigurable, even at run time. FPGAs also offer improved performance characteristics. Those two advantages are beginning to outweigh the cost advantage afforded by ASICs.

How “being reconfigurable” affects security

The Georgia Tech Research Institute (GTRI) team of Ron J. Prado, Zane R. Franklin, Jay Danner, Naman Shah, and leader Dr. Lee W. Lerner investigated how “being reconfigurable” affects security. “Because FPGAs are programmable and they tightly couple software and hardware interfaces, there’s concern they may introduce a whole new class of vulnerabilities compared to other microelectronic devices,” writes Lerner in this Georgia Tech case study. “There are entirely new attack vectors to consider, ones that lie outside the traditional computer security mindset.”

Lerner adds, “There are many possibilities for how to make connections between logic elements. Unselected or unused resources can be used for nefarious things like implementing a Trojan function or creating an internal antenna.”

The logic elements Lerner refers to include Configurable Logic Blocks (CLBs), Input/Output Blocks (IOBs), Block RAMs (BRAMs), and Digital Clock Managers (DCMs). The FPGA diagram offers an idea of the interconnect matrix required to wire them together.
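To see why reconfigurability matters, consider the lookup table (LUT) at the heart of a CLB: its configuration bits literally are a truth table, so rewriting the bits rewrites the logic. The following is a minimal Python sketch of that idea; the `LUT` class and its values are illustrative, not any vendor's actual architecture.

```python
# Hypothetical sketch: modeling an FPGA lookup table (LUT) in Python.
# Real CLBs contain small LUTs (often 4- or 6-input) whose contents are
# set at configuration time; changing the bits changes the function.

class LUT:
    """A k-input lookup table: the config bits ARE the truth table."""
    def __init__(self, k, config_bits):
        assert len(config_bits) == 2 ** k
        self.k = k
        self.bits = config_bits

    def evaluate(self, *inputs):
        # The input values form an index into the truth table, LSB first.
        index = sum(bit << i for i, bit in enumerate(inputs))
        return self.bits[index]

# Configure a 2-input LUT as AND: output is 1 only for input (1, 1).
and_lut = LUT(2, [0, 0, 0, 1])
# The same fabric reconfigured as XOR, just by swapping the bits.
xor_lut = LUT(2, [0, 1, 1, 0])
```

The same flexibility cuts both ways: anyone who can rewrite the configuration bits can quietly change what the device computes.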

Possible avenues of attack

The first step for those wanting to compromise an FPGA-controlled device, according to the researchers, is to become familiar with the FPGA's design. Because attackers examine the design with different goals than the chip's developers, there is a good chance they will find something the developers overlooked.

“One potential avenue of attack involves ‘side-channels’ — physical properties of circuit operation that can be monitored externally,” adds the GTRI report. “A knowledgeable enemy could probe side-channels, such as electromagnetic fields or sounds emitted by a working device, and potentially gain enough information about its internal operations to crack even mathematically sound encryption methods used to protect the design.”

Once the device is understood, all that's left is to figure out an exploit. It could be as simple as a rogue employee or intruder walking up to an IoT device and reprogramming the gate array (Google Glass, for example, uses an FPGA to control its display). The big picture: being programmable gives attackers a way in. However, all is not gloom and doom.

The researchers’ solution and hacking challenge

The researchers have an answer — Trustworthy Autonomic Interface Guardian Architecture (TAIGA). In Dr. Lerner’s paper Trustworthy Embedded Computing for Cyber-Physical Control (PDF), he provides this overview:

“TAIGA ensures process stability — even if that requires overriding commands from the processor or supervisory nodes. It’s analogous to the autonomic nervous system of the body, which keeps your heart beating and your lungs respiring — the basic things that your body should do to be in a stable state, regardless of anything else that’s going on.”
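TAIGA itself is implemented in FPGA fabric, not software, but the autonomic idea Lerner describes can be sketched in a few lines of Python: a guardian sits between the (possibly compromised) controller and the actuators and overrides any command that would push the process outside a known-safe envelope. The names and limits below are purely illustrative assumptions.

```python
# Illustrative sketch only, not TAIGA's actual implementation: a guardian
# interposed between controller and actuator that overrides commands
# falling outside a known-safe operating envelope.

SAFE_MIN_SPEED, SAFE_MAX_SPEED = 0.0, 1.0   # hypothetical safe limits

def guardian(commanded_speed, fallback=0.0):
    """Pass safe commands through; replace unsafe ones with a fallback."""
    if SAFE_MIN_SPEED <= commanded_speed <= SAFE_MAX_SPEED:
        return commanded_speed
    return fallback  # override to keep the process in a stable state

# A hacked controller may request anything; the actuator only ever
# sees values inside the safe envelope.
actuator_inputs = [guardian(cmd) for cmd in [0.5, 5.0, -2.0, 0.9]]
```

The key design point, echoing the autonomic-nervous-system analogy, is that the guardian enforces stability regardless of what the processor or supervisory nodes ask for.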

The GTRI research team is working on a proof-of-concept challenge. They are installing TAIGA on a small robot running Linux. When finished, access to the robot will be given to interested parties at this website. Lerner exudes confidence, stating, “We provide formal assurances that TAIGA will prevent anyone from hacking critical control processes and causing the robot to perform actions deemed unsafe.”

Lerner wisely adds, “If someone figures out how to run the robot into a wall or damage its cargo, then obviously we’ll know we have more work to do.”

Note: TechRepublic and ZDNet are CBS Interactive properties.