If a human makes a mistake, it’s that individual’s mistake. But if a computer equipped with algorithmic surveillance and facial recognition generates a false positive, detaining the wrong airline passenger because he has dark skin, the political sparks fly.
Part of the problem is that smart CCTV systems identify everybody. They do not just scan a small random sample the way human security camera personnel do. They read and compare every single person’s biometrics against known FBI, Interpol, CIA, MI6, and other databases. The result is that everybody becomes a target for detention and arrest.
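The scale of this shift from sampling to screening everyone can be sketched with some back-of-the-envelope arithmetic. All of the numbers below are illustrative assumptions, not published figures from any real system:

```python
# Illustrative estimate: even a seemingly accurate matcher produces many
# false positives once it screens every passenger rather than a sample.
# All figures below are hypothetical assumptions for illustration.

passengers_per_day = 100_000   # travelers scanned at a large airport (assumed)
false_positive_rate = 0.001    # 0.1% of innocent travelers wrongly flagged (assumed)
real_watchlist_hits = 1        # genuine matches per day (assumed)

false_alarms = passengers_per_day * false_positive_rate
total_flags = false_alarms + real_watchlist_hits

# Of everyone flagged, what fraction is actually on a watchlist?
precision = real_watchlist_hits / total_flags

print(f"False alarms per day: {false_alarms:.0f}")
print(f"Chance a flagged person is a real match: {precision:.1%}")
```

Under these assumed numbers, roughly 100 innocent travelers are flagged every day, and a flagged person has only about a 1% chance of actually being on a watchlist. That is the base-rate problem in a nutshell: the rarer real matches are, the more a universal scan drowns them in false alarms.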
The political issues that arise from the use of smart CCTV systems get hotter when airport and government agency policies dictate that anyone flagged must be thoroughly questioned. Even if a human security officer can tell that the flagged person is not a match with a known terrorist or fugitive, in most countries and airports that person must still be subjected to a thorough screening.
This policy exists to shield the security or law enforcement agency from the responsibility of mistakenly releasing a real terrorist or fugitive.
So the brunt of this level of precision security falls on people of darker complexion. In predominantly Caucasian zones, people of darker complexion will disproportionately trigger false matches against international terrorist databases. Making matters worse, facial recognition systems perform poorly on low-light subjects (people of darker complexion) when fed footage from low-quality CCTV cameras with older CCD sensors.
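A simple calculation shows how unequal false-match rates concentrate the burden on one group. The group sizes and rates below are assumptions chosen only to illustrate the mechanism, not measurements from any deployed system:

```python
# Hypothetical illustration: a minority group with a higher false-match
# rate absorbs a disproportionate share of all false flags.
# Group sizes and rates are assumed for illustration only.

group_sizes = {"lighter-skinned": 90_000, "darker-skinned": 10_000}
false_match_rates = {"lighter-skinned": 0.0005, "darker-skinned": 0.0025}

false_flags = {g: group_sizes[g] * false_match_rates[g] for g in group_sizes}
total = sum(false_flags.values())

for group, n in false_flags.items():
    print(f"{group}: {n:.0f} false flags ({n / total:.0%} of all false flags)")
```

With these assumed rates, a group making up only 10% of travelers accounts for over a third of all false flags, and each individual member is five times as likely to be wrongly stopped as someone in the majority group.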
Is this a case of decreasing danger and risk for everybody by increasing facial-profiling risk for a disproportionate few?
How do you see this problem getting solved?