The TSA's "behavior detection" techniques for finding terrorists fit in well with the rest of the TSA's policies, in that they provide an excellent example of security theater — policies enacted more for show than for any real effect.
The TSA is the agency responsible for making your life difficult while you travel. These are the guys that make you remove your shoes, throw away your shampoo, and miss your flight because they're busy confiscating your laptop and looking through the naked photos of your wife. It's the agency whose job it is to ensure your security while you fly — but it mostly just engages in poorly rehearsed security theater. The TSA is, for many, the poster child for why the EFF's Surveillance Self-Defense Project is a good idea.
You might have noticed that I'm a big believer in privacy. It's not just because of political leanings and a strong belief in the sanctity of individual rights. It's not just because I believe the most important part of the U.S. Constitution is the Bill of Rights. It's also because I believe privacy is security; without privacy, security is nothing more than an illusion. Privacy is, to a significant degree, exactly what security is meant to protect. Lose sight of that at your peril.
The TSA, along with many other government agencies in the last six or seven years, has repeatedly proven itself an egregious violator of privacy. Its policies not only violate privacy far too often, but are often actually counterproductive. The TSA's plans for the use of behavior detection techniques are no exception. I'll see if I can explain this briefly for you without eliciting laughter.
The "behavior detection" techniques of the TSA involve having "specially trained" personnel wandering around the airport terminals of the nation watching how people look and act. If that special training makes them think someone is planning a terrorist act (or other violation of the law, one assumes), the subject will probably look "anxious", or so the training claims. Someone marked as looking suspiciously anxious will then be approached and questioned. The specially trained agent will watch how the person reacts to questions, and determine whether the subject is behaving suspiciously enough to warrant detaining, further questioning, and ultimately bringing in law enforcement to arrest the person and investigate further.
The above-linked article about "behavior detection" has this to say about its success rate:
In that time, 43,000 of the millions of travelers watched by crowd-scanning behavior-detection screeners have appeared suspicious enough to warrant a closer look, the TSA says. The closer looks generated 3,100 calls from the TSA to police for further questioning.
The police arrested 278 of those people, none on terror charges. Among the charges described in TSA news releases about behavior-related arrests are immigration violations and possessing guns and illegal prescription drugs.
If you do the math, you'll discover that fewer than 0.65% of the people identified as potential terrorists were actually doing anything that led to an arrest. One might imagine that most of those were people who had bench warrants out because of unpaid speeding tickets, had a baggie of hydrocodone pills in a backpack, or were otherwise far from being grave dangers to society. Of those arrested, zero were terrorists, according to the article.
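If you want to check that math yourself, it takes only a few lines. This is a minimal sketch using just the three figures quoted from the article above:

```python
# Figures quoted from the article about behavior detection.
flagged = 43_000      # travelers flagged by behavior-detection screeners
police_calls = 3_100  # "closer looks" escalated to police
arrests = 278         # resulting arrests (none on terror charges)

arrest_rate = arrests / flagged
print(f"Arrests per flagged traveler: {arrest_rate:.4%}")   # under 0.65%
print(f"Arrests per police call: {arrests / police_calls:.1%}")
```

And remember that the arrest rate is the generous measure here; the rate of actual terrorists caught, the thing the program exists to do, is exactly zero.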
There's some question whether the TSA would achieve roughly the same results by picking out 43,000 people at random. In fact, that may effectively be what they're doing. There's not really any good evidence to suggest that these behavior detection agents of theirs are any more effective than picking every seventh person out of line at a security checkpoint to harass. There's talk of further developing the technique, to hone it for greater accuracy, and to automate some of it.
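The comparison to random selection is really a base-rate problem. As an illustration — every number below is a hypothetical assumption for the sketch, not a TSA figure — even an implausibly accurate screener flags almost nothing but innocent people when the thing being screened for is vanishingly rare:

```python
# Hypothetical base-rate illustration. None of these numbers come from
# the TSA or the article; they are assumptions chosen for the sketch.
travelers = 700_000_000     # assumed annual traveler count
terrorists = 10             # assumed terrorists among them
hit_rate = 0.99             # assumed: flags 99% of actual terrorists
false_positive_rate = 0.01  # assumed: flags 1% of innocent travelers

true_hits = terrorists * hit_rate
false_hits = (travelers - terrorists) * false_positive_rate
precision = true_hits / (true_hits + false_hits)
print(f"Flagged travelers who are terrorists: {precision:.6%}")
# Even at 99% accuracy, essentially everyone flagged is innocent.
```

With a base rate that low, a screener this good is still indistinguishable in practice from harassing people at random — and nobody has shown the TSA's technique is anywhere near this good.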
Here's the fun part: I think that "behavior detection" might eventually be a good idea for security purposes. People are walking around in public areas, displaying the equivalent of what poker players call "tells" for anyone to see. It's not a violation of security to look at the way a person acts and make guesses about whether or not that person is thinking about something in particular. I have no objection to trying to analyze expressions for signs that someone is doing, or planning on doing, something that could cause harm to others. In and of itself, the use of such a security technique is actually a good idea.
This marks, for a change, an instance of the TSA employing a technique that focuses on the real problem — terrorists. It's not focusing on a specific method for committing acts of terror that someone saw in a movie or that was actually used once in the past, and probably won't be used in the future because everybody's ready for it. It's not trying to randomly choose targets for body cavity searches, resulting in very angry senior citizens and traumatized eleven-year-olds. It's not focusing on racial profiling, either. It's an honest-to-goodness attempt to focus on determining what might make a terrorist uniquely recognizable.
Unfortunately, the TSA is doing it wrong. It's using a technique that hasn't been tested and developed to the point where anyone can say it's better than picking people at random, and it's using it in a way that can lead to invasions of privacy, such as conducting searches of persons and personal belongings without permission. Solve these two problems and I'll be behind the TSA all the way with its behavior detection program.
Until then, you can count me among its opponents.
If you find yourself seduced by security programs that sound good, but for which you have no hard evidence of effectiveness, or that are prone to false positives, consider the case of the TSA's behavior detection techniques. Are you, like the TSA, engaging in security theater, trying to make things seem secure without measurably improving security at all? If so, it's time to rethink what you're doing.