Data collection and privacy: a rant on facial recognition software

The first time I walked through the automated passport-scan and facial-recognition gates at an airport, I felt a bit uneasy.

Eric Schmidt said, "if you've done nothing wrong, you've got nothing to worry about." And if you know anything about privacy, you know that this "nothing to hide" argument is common but poor.

The best snarky counter I've heard: it's like saying the right to free speech is only important if you have something to say. The whole point of individual liberty (aka Western civilization!) is that the government's intrusions into your life need to be justified. 

The more sophisticated argument runs: the point is not that there's something to hide, but that the person has the power to hide something. The discovery of this ability - to shield the self from others - is the wellspring of individuality. 

Facial recognition software is rapidly being embedded into camera sensors everywhere. The price of surveillance, both the hardware and the software, keeps falling. Tracking is becoming ubiquitous and inevitable.

The only cogent regulation I can come up with, given how pervasive this tech will become and the power of the companies and governments deploying it, is to regulate its use cases: 

- You cannot use emotions derived from facial expression to deny insurance cover, but you can use them to prompt secondary screening at immigration. 

- You cannot use mobile phone patterns as probable cause, but you can use them to surveil someone who is already a suspect. 

- You cannot use browsing history to construct an ad-targeting profile, but you can use data from sites where the user is logged in. Etc.

This is certainly a partial answer. The better solution would be to stipulate that all devices and software must remain fully transparent to the end user whenever they communicate - extending the inspection principle of free software to proprietary code. It would be hard and costly, but it could work. 

The introduction of the GDPR shows how difficult this stuff is. While parts of the regulation are well-intentioned, in practice very little has changed: just more paperwork and a weak demonstration of the illusion of power in the EU. Oh yeah, and governments are largely exempt (wtf?). 

In the meantime, we've ratcheted our collective anxiety up a bit further. We've lost a bit more of what it means to be human. And we've inched closer to the dystopian totalitarian future that science fiction seems so hell-bent on prescribing for us.

Seriously, what was the last future-positive AI literature you've read? Star Trek? My point exactly.