I was recently discussing the hoopla around “the internet of things,” a technology whereby cheap, mobile, interconnected “systems on a chip” make it possible to embed what amounts to a full-blown computer in everything from shipping containers, to watches, to coffee makers. With all our devices connected to each other and to centralized services, suddenly all manner of neat things can happen. Your car can send you an email when it needs service, and your coffee maker can order your favorite espresso bean once its hopper nears empty.

All of this sounds wonderful and, according to technologists, is “just around the corner,” yet few recall we’ve been here before. A decade ago, the emphasis was not on hardware but on software that would allow an “internet of things” to spring up. Most of this push came in the late 1990s and was based around Sun’s (now Oracle’s) Java development environment. What the Java advocates quickly learned was that one piece of the puzzle, a highly connected and universal development environment, didn’t necessarily complete the puzzle. With the current emphasis on the “internet of things,” we have the software and hardware, yet very little clarity around how all these devices will communicate; who will manage, aggregate, and distribute the data; and whether people actually want their refrigerator communicating in the first place.

Black boxes

Another technology that’s never quite fulfilled its promise is “black box” computing, the idea that bits of computing functionality and business logic can be treated as black boxes that are easily interconnected and reordered to generate new business processes. Recently, this idea has seen a resurgence under names like middleware, software as a service (SaaS), cloud computing, and “delivery platforms.”

Like an internet of things, this is a conceptually desirable technology. Rather than dealing with complex systems and a rat’s nest of interfaces and interconnections, the ability to quickly interface disparate platforms is obviously compelling. The problem is that it is immensely difficult to implement. Everyone from IT leaders to junior developers knows that integration is generally one of the most time-consuming and error-prone parts of deploying a new system, and conceptually sound ideas like SaaS have met with more limited success when confronting reality.

To a lesser extent, cloud computing has suffered a similar fate. Cloud was supposed to take dysfunctional or commoditized elements of your IT operations and hand them to the “experts” in the cloud, who could provide the same service better and cheaper. Rather than a fail-proof system, many IT leaders have found vendors that suffer the same outages and security problems they faced in-house, but who are now separated from them by complex contracts and distant support teams.

How to react to these technologies

Working in IT is a constant battle between embracing new technologies that generate competitive advantage and maintaining a healthy dose of skepticism rather than jumping on every new trend. The interesting thing about most of these technologies is that they seem perennially “around the corner” because there’s legitimate demand for the concept. Just as I’ve long awaited my flying car, everything from cheap, interconnected devices to standardized technology delivery platforms is compelling. Before betting the future of your organization on one of these technologies, look for the usual success stories and try to get vendors to collaborate (including financially) on small test projects within your own organization. While we may not be parking our flying cars in the garage anytime soon, the perennial appearance of certain future technologies makes them worth watching, and occasionally implementing as part of your R&D and innovation efforts.