Briefly, it says that "in general" it's not possible to determine whether a given program will ever reach an end point, or halt. Any "analyser" would itself be a program that inspects the given code, returns a verdict on whether it's safe, and then stops; Turing's result shows that no such program can get the answer right for every possible input, so the principle applies directly to the problem at hand.
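To make the argument concrete, here's a minimal sketch (all names hypothetical, not from any real library) of the classic contradiction: if a perfect `halts` oracle existed, you could build a program that defeats it by doing the opposite of whatever the oracle predicts.

```python
def halts(program, data):
    """Hypothetical oracle: returns True if program(data) would halt.
    The contradiction below shows no correct version can be written,
    so this sketch just raises."""
    raise NotImplementedError("no such oracle can exist")

def paradox(program):
    # Ask the oracle about running `program` on its own source,
    # then do the opposite of its prediction.
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    return "halted"      # oracle said "loops" -> halt immediately

# Feeding paradox to itself is contradictory either way:
# - if halts(paradox, paradox) is True, paradox(paradox) loops forever;
# - if it's False, paradox(paradox) halts.
# Either answer the oracle gives is wrong, so it cannot exist.
```

The same reasoning carries over to any total "is this code safe?" analyser: wire the analyser's verdict into the code being analysed and you get the same contradiction.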
Note that this is the "general" case: plenty of code can be analysed, and it can be determined whether it will work as specified. The problem is that once you get past a certain level of complexity, you hit this issue. With an OS, an application and the net all involved, we're well past that point.
There's been a lot of work done on provably correct programming over the last 30 years, but it's still mainly an academic technique and is difficult to apply to large scale systems. One significant problem is that you have to accurately define your requirements before you can analyse your code, and writing the spec can be more complex than writing the code you're trying to test.
If you've used assertions in programming, you'll understand a little of how this works. Imagine having to create a set of assertions that will "completely" verify the correct operation of an entire OS.
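As a small illustration of that idea, here's a sketch of runtime assertions acting as partial correctness checks. The `transfer` function is a made-up example, not from the article; the assertions state pre-conditions and a post-condition (money is conserved), which is a tiny fragment of the kind of specification formal verification demands for a whole system.

```python
def transfer(balances, src, dst, amount):
    """Move `amount` between two accounts, asserting the invariants hold."""
    # Pre-conditions: the transfer must be sensible before we start.
    assert amount > 0, "amount must be positive"
    assert balances[src] >= amount, "insufficient funds"
    total_before = sum(balances.values())

    balances[src] -= amount
    balances[dst] += amount

    # Post-condition: no money created or destroyed.
    assert sum(balances.values()) == total_before, "money not conserved"
    return balances

accounts = {"a": 100, "b": 50}
transfer(accounts, "a", "b", 30)
print(accounts)  # {'a': 70, 'b': 80}
```

Each assertion only checks one run of one function; "completely" verifying an OS means writing and proving statements like these for every operation, in every possible interleaving, which is why the spec can dwarf the code.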
I don't pretend to be an expert on this stuff, but I've always been interested in the more esoteric/formal side of programming and computation. Any errors are mine!