How can you believe what you read online?
Written at home on a wet evening and dispatched to silicon.com via personal LAN.
Over the past few years I must have seen a dozen new search engines claiming to be the answer to a maiden's prayer.
Each has been based on single- or combined-parameter search, using techniques involving bots, crawlers, directories, indexing, inference, mathematics, nets, phrases, spiders, statistics, wikis, words and more.
And in each case the operational attributes have proved advantageous for some niche application or information space, but none could claim overall superiority.
Today, the top 10 most used search engines include Google, Yahoo!, MSN, Ask, AltaVista and Yandex. There are hundreds of lesser-known examples, plus amalgamated and federated meta-search engines - search engines of search engines.
So you might think we had the search topic cracked. Not so.
By and large, today's search engines are brilliant and useless at the same time. Finding 63,800,000 results is both confounding and worrying.
Have I missed something vital? Do I have the full picture and the latest, most relevant document? And, perhaps more importantly, is what I am reading true?
Bluntly, I have no idea, and neither do you. What we really need is a search-engine filter based on validity. In short, a truth engine.
What we want to know is: are these statistics accurate and up to date? Are these historical events in order? Do I have the best available data? Is that politician correct? Is he lying, or is he bending the truth to his own advantage? And so on.
Our entire history of growth and success has been founded on a high-wire act that teeters between misconception and deception. Truth is at once vital, valuable and tenuous. And until recently I feel it was more identifiable and easier to establish.
The explosion of information seems to have made us richer and poorer at the same time. And while the baby boomers tend to question the validity of things, probe and research, generations X and Y see no such requirement.
A quick visit to Urban Legends and other debunking sites such as Snopes.com and Truthorfiction.com reveals a wealth of examples and illustrates my point about validity and truth. Just search 'debunking' for much more. Searching in sequence gave:
- 'Truth' gave 3,060,000 results.
- 'Truth engine' gave 991,000 results.
- 'Truth engine development' gave 213,000 results.
- 'Truth engine research' gave 1,860,000 results.
The general irrelevance of these search returns offers me little reassurance that this mega problem is being investigated to the point where we might expect a commercial offering in the near future.
This is all a bit of a blunder. It threatens both the generation of content and the provision of a valued service. So, what's the answer? One thing is certain: it cannot be policed or stopped.
As far as I can see, the most formidable truth engine available to date is based on wiki technology. And while it is easy to see how we might automate the truth filtering of established, factually based data, it is not clear how we might go much further without cognitive computation complete with contextual awareness - that is, a machine intelligence to match, or exceed, our own.
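To make the easy half of that claim concrete, here is a minimal sketch of what automated truth filtering of established factual data might look like: score a claim by the level of agreement among independent sources, much as wiki-style systems lean on consensus. The function name and the majority-vote approach are my own illustrative assumptions, not a description of any existing engine.

```python
from collections import Counter

def consensus_score(reported_values):
    """Return the majority-reported value and the fraction of sources agreeing.

    A crude stand-in for 'truth filtering': a factual claim is treated as
    only as trustworthy as the independent agreement behind it.
    (Illustrative sketch - not how any real search engine works.)
    """
    if not reported_values:
        return None, 0.0
    counts = Counter(reported_values)
    value, votes = counts.most_common(1)[0]
    return value, votes / len(reported_values)

# Three sources report one date, a fourth dissents: 75 per cent consensus.
value, score = consensus_score(["1969", "1969", "1969", "1968"])
```

Of course, this only works where facts are discrete and sources are genuinely independent - which is precisely why going further demands the contextual awareness described above.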
When will this happen? In my view - soon. Web 2.0 is the starting point. Bandwidth, connectivity and sensors everywhere are the vital components - without which there will be no collective intelligence.
And the next big step is adaptability of software and, ideally but not essentially, hardware. These capabilities are being born of evolutionary developments in artificial life and artificial intelligence.
How soon is soon? I reckon the next 20 years will be really interesting, and we might just be able to find what we are looking for first time.