Compiled on the M6 driving to Liverpool and dispatched to silicon.com a couple of weeks later via a free wi-fi connection in my hotel.
I just cannot work any faster or smarter – I seem to have exhausted every degree of freedom, every element of efficiency improvement I can muster.
Yet if I am to have any hope of future advancement I need more intelligence in the network and in my machine. Analysing my workflow shows that ‘searching and finding’ constitutes the biggest area for efficiency improvement, followed by ‘anticipatory computing’.
What do I mean by ‘searching and finding’ slowing me down? Well, using a popular search engine to identify reference information on any topic relies more upon my abilities than those of the engine employed. Let’s take a look at what I received from some specific search terms:
Artificial Intelligence = 12,700,000 links (0.28 seconds)
Artificial life = 18,800,000 links (0.25 seconds)
Robots = 71,600,000 links (0.11 seconds)
Androids = 1,680,000 links (0.38 seconds)
On one level the quantity of information available is very impressive, but on another it’s absolutely useless! Let’s drill down further into just one category:
Robots Humanoid = 2,550,000 links (0.13 seconds)
Humanoid Robots = 4,680,000 links (0.36 seconds)
Still far too much information for me to process – let’s refine this again:
Robots Humanoid Cognitive = 153,000 links (0.26 seconds)
Humanoid Robots Cognitive = 56,900 links (0.29 seconds)
Cognitive Humanoid Robots = 57,300 links (0.28 seconds)
Cognitive Robots Humanoid = 153,000 links (0.30 seconds)
Humanoid Cognitive Robots = 84,600 links (0.23 seconds)
Robots Cognitive Humanoid = 151,000 links (0.28 seconds)
And of course I am being presented with the ‘top 10’ from each crop, which is not a lot of use when the population spans 57,000 – 153,000. Let’s just try one more cut on a couple of these:
Robots Humanoid Cognitive Vision = 898,000 links (0.37 seconds)
Humanoid Robots Cognitive Vision = 17,800 links (0.35 seconds)
At one end of the search field we have suddenly diverged (153,000 links has become 898,000), while at the other we have converged a fraction further (56,900 down to 17,800). By now you will have recognised the problem and can relate it to your own experiences.
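For the converging case the mechanism is easy to picture: a strict conjunctive (AND) search is just set intersection over an inverted index, so each extra term can only hold or shrink the result set. Below is a minimal sketch of that behaviour in Python; the index and document IDs are invented toy data, not any real engine.

```python
from functools import reduce

# hypothetical inverted index: term -> IDs of documents containing it
index = {
    "robots":    {1, 2, 3, 4, 5, 6, 7, 8},
    "humanoid":  {2, 3, 5, 7, 8},
    "cognitive": {3, 5, 8},
    "vision":    {5, 8, 9},
}

def result_count(*terms):
    """Count documents matching ALL of the given terms (AND search)."""
    postings = [index[t] for t in terms]
    return len(reduce(set.intersection, postings))

# each added term holds or shrinks the match count, never grows it
for query in [("robots",),
              ("robots", "humanoid"),
              ("robots", "humanoid", "cognitive"),
              ("robots", "humanoid", "cognitive", "vision")]:
    print(" + ".join(query), "=>", result_count(*query), "matches")
```

A pure AND search can only converge, so the diverging case above (153,000 links jumping to 898,000 when ‘vision’ was added) suggests the engine is doing something looser than strict conjunction, perhaps matching any of the terms or quietly expanding them. Either way, the burden of narrowing falls back on me.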
As the web has grown we have migrated from searching for a ‘needle in a haystack’ (easy) to looking for a ‘needle in a needle stack’ (hard). And this problem can only get worse as information and network activity expand.
Conventional search techniques cannot crack this problem, and we need to employ new thinking and new approaches. Moreover, we need to lessen the human workload and delegate more of it to our machines and the network.
How easy can it be, or how hard?
For sure there is no silver bullet, or even a single solution: we need multiple, simultaneous search techniques. As far as I can see the ideal would be to have my work – email, documents, downloads and reading – monitored by my machine. This should then be augmented by a networked search engine which adopts a parallel role by monitoring my search and download activity.
There is a great deal of information to be inferred, and many clues to be had, just by monitoring what I write, read, watch, listen to, communicate, save, delete and search.
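To make that concrete, here is a minimal sketch of the idea, assuming everything can be reduced to simple term-frequency profiles; the activity, search snippets and scores are all invented for illustration, not a description of any real product.

```python
import math
from collections import Counter

def profile(texts):
    """Aggregate a term-frequency profile from monitored text."""
    counts = Counter()
    for text in texts:
        counts.update(text.lower().split())
    return counts

def cosine(a, b):
    """Cosine similarity between two term-frequency profiles."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# hypothetical monitored activity: things I wrote, read, saved and downloaded
my_activity = [
    "cognitive humanoid robots research notes",
    "vision systems for humanoid robots",
    "email about a robot vision conference",
]
me = profile(my_activity)

# hypothetical raw search results, each summarised by its snippet
snippets = [
    "industrial robot arm catalogue",
    "cognitive vision for humanoid robots survey",
    "toy robots for children",
]

# re-rank results by similarity to my profile instead of raw engine order
scored = sorted(((cosine(me, profile([s])), s) for s in snippets), reverse=True)
for score, snippet in scored:
    print(round(score, 2), snippet)
```

The arithmetic is trivial; the point is where it runs – in the background, on my machine, continuously updated as I work.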
Better still, comparing groups of individuals with similar profiles would improve the focus even more. For us to leverage the next level of IT advantage, we need all this to happen in the background so we can be fed the right information at the right time and, in effect, have our every need anticipated.
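If profiles like the one sketched above existed for a whole population of users, finding my nearest neighbours becomes a simple ranking, and whatever they have found becomes candidate material to feed to me. Again, the users and weights below are invented purely for illustration.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency profiles (as before)."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# hypothetical interest profiles for a small population of users
profiles = {
    "me":    Counter({"robots": 5, "humanoid": 4, "cognitive": 3, "vision": 2}),
    "alice": Counter({"robots": 4, "humanoid": 3, "vision": 3, "sensors": 1}),
    "bob":   Counter({"gardening": 6, "recipes": 4, "robots": 1}),
}

# rank everyone else by similarity to me; an anticipatory agent would
# surface the closest neighbours' finds first
neighbours = sorted(((cosine(profiles["me"], p), name)
                     for name, p in profiles.items() if name != "me"),
                    reverse=True)
for score, name in neighbours:
    print(name, round(score, 2))
```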
At this point I can hear the old conservative minds screeching to a halt with all the familiar fears of sharing and being monitored, not to mention the loss of control and oversight. At the same time the young set are saying, ‘Yes, let’s share, let’s change, let’s get ahead!’
Where am I on this? I say: ‘Yes please, where do I sign?’
I need to go faster, do more, become more effective, get and stay ahead of the field. I just know I can do and achieve more, with a little bit of machine help.