The enterprise architecture for law enforcement and security in the next decade will be focused on using technology to identify the bad guys and stop them in their tracks.
Computerworld (14 January 2008) reports that “Homeland Security is bankrolling futuristic technology to nab terrorists before they strike.”
Here’s the target architecture:
“The year is 2012 [probably a bit optimistic on the date]. As soon as you walk into the airport, the machines are watching. Are you a tourist—or a terrorist posing as one? As you answer a few questions at the security checkpoint, the systems begin sizing you up. An array of sensors—video, audio, laser, infrared—feeds a stream of real-time data about you to a computer that uses specially developed algorithms to spot suspicious people. The system interprets your gestures and facial expressions, analyzes your voice and virtually probes your body to determine your temperature, heart rate, respiration rate, and other physiological characteristics—all in an effort to determine whether you are trying to deceive. Fail the test, and you’ll be pulled aside for a more aggressive interrogation and searches.”
Last July, the Department of Homeland Security’s human factors division “asked researchers to develop technologies to support Project Hostile Intent, an initiative to build systems that automatically identify and analyze behaviors and physiological cues associated with deception.”
The intent is to use these screening technologies at airports and border crossings, and possibly in the private sector for building access control and candidate screening.
Sharla Rausch, director of DHS’s human factors division, says that in controlled lab settings, “accuracy rates are in the range of 78% to 81%.”
Where is the current research focused?
- Recognition of gestures and microfacial expressions
- Analysis of variations in speech (e.g., pitch, loudness)
- Measurement of physiological characteristics
The hope is that by combining all three modalities, “the overall predictive accuracy rate” will improve.
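To make the fusion idea concrete, here is a minimal sketch of combining the three modality scores into one overall suspicion score. The weights, scores, and threshold are illustrative assumptions for this example, not values from the DHS research.

```python
# Hypothetical late-fusion sketch for multimodal screening.
# Weights, scores, and the flag threshold are illustrative assumptions.

def fuse_scores(gesture: float, speech: float, physio: float,
                weights=(0.4, 0.3, 0.3)) -> float:
    """Combine per-modality suspicion scores (each in [0, 1])
    into one overall score via a weighted average."""
    return sum(w * s for w, s in zip(weights, (gesture, speech, physio)))

def flag_for_screening(score: float, threshold: float = 0.7) -> bool:
    """Flag a traveler for secondary screening if the fused score
    exceeds the threshold."""
    return score > threshold

overall = fuse_scores(0.9, 0.8, 0.6)
print(round(overall, 2))            # 0.78
print(flag_for_screening(overall))  # True
```

The intuition behind fusion is that a deceiver might control one channel (say, facial expression) but not all three at once, so a combined score should be harder to game than any single modality.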
What are some of the challenges with these technologies?
- Too many false positives at present
- Existing technologies, like the polygraph, have “long been questioned by scientists…and remain inadmissible in court.”
- Ability of algorithms to “correctly interpret” suspicious behavior or cues
- Profiling continues to draw objections on discriminatory grounds
- Privacy concerns about the personal data collected
- Testing is limited by security concerns in the field
- Deployment will be limited due to cost, leaving soft targets potentially at risk
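The false-positive challenge above is largely a base-rate effect: when actual hostile actors are vanishingly rare among travelers, even an 80%-accurate detector flags overwhelmingly innocent people. A quick sketch, using purely illustrative traveler counts and the lab accuracy figure cited above:

```python
# Base-rate sketch: why ~80% accuracy still yields mostly false alarms.
# Traveler counts and the threat rate are illustrative assumptions.

travelers = 1_000_000   # passengers screened
true_threats = 10       # assumed hostile actors among them
sensitivity = 0.80      # P(flagged | threat)
specificity = 0.80      # P(not flagged | innocent)

true_positives = true_threats * sensitivity                       # 8
false_positives = (travelers - true_threats) * (1 - specificity)  # 199,998
precision = true_positives / (true_positives + false_positives)

print(f"False alarms: {false_positives:,.0f}")
print(f"P(threat | flagged): {precision:.5f}")
```

Under these assumptions, roughly 200,000 innocent travelers are pulled aside to catch 8 of 10 actual threats, which is why raising specificity matters far more than the headline accuracy rate suggests.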
Will this Big Brother screening technology come to fruition?
Absolutely. The challenges with the technologies will be resolved, and profiling and privacy issues aside, these screening technologies will become essential to protecting ourselves.