This afternoon, while I ate lunch, I watched a new-to-me anime called Psycho-Pass. The TL;DR summary of the show: in the future, everyone is chipped and constantly monitored, and if their Criminal Coefficient climbs too high, they’re arrested for the good of society. It doesn’t matter whether they’ve committed a crime or not; if the potential that they will commit a crime exceeds the threshold set by the computer, they’re arrested, or killed if they resist arrest. Like many anime, it sounds like a dystopian future that could never happen. Except when I got back to my desk, I saw Bruce Schneier’s post, Surveillance by Algorithm. And once again what I thought was an impossible dystopian future seems like a probable dystopian present.
As Bruce points out, we already have Google and Amazon suggesting search results and purchases based on our prior behaviours online. With every search I make, they build up a more detailed and accurate profile of what I like, what I’ll buy and, by extension, what sort of person I am. They aren’t using people to do this; there’s an extensive and thoroughly thought-out algorithm that measures my every action to create a statistically accurate profile of my likes and dislikes, in order to offer up what I might like to buy next based on what I’ve purchased in the past. Or there would be, if I didn’t purposefully share an account with my wife in order to confuse the profiling software Amazon uses.
Google is a lot harder to fool, and they have access to a lot more of the data that reveals the true nature of who I am, what I’ve done and what I’m planning to do. They have every personal email, my calendar, my searches; in fact, about 90% of what I do online is either directly through Google or indexed by Google in some way. Even my own family and friends probably don’t have as accurate a picture of who I really am behind the mask as Google would, should they choose to create a psychological profile of me. You can cloud the judgement of people, since they apply their own filters that interfere with a valid assessment of others, but a well-written computer algorithm takes the biases of its numerous coders and tries to even them out, creating an evaluation that’s closer to reality than most people’s.
It wouldn’t take much for a government, the US, the UK or any other, to start pushing for an algorithm that evaluates the mental health and criminal index of every user on the planet and alerts the authorities when something bad is being planned. Another point Bruce makes is that this isn’t considered ‘collection’ by the NSA, since they wouldn’t necessarily have any of the data until an alert had been raised and a human began to review it. It would begin as something seemingly innocuous, probably along the lines of the logical fallacies that governments already use to justify ‘protection mechanisms’: “We just want to catch the paedophiles and terrorists; if you’re not a paedophile or terrorist, you have nothing to fear.” After all, these are the exact phrases that have been used numerous times to create any number of organizations and mechanisms, including the TSA and the NSA itself. And they’re all the more powerful because there is a strong core of truth to them.
But what they don’t address are several fatal flaws in any such system based on a behavioural algorithm. First of all, inclination, or even intent, doesn’t equal action. Our society long ago established that the thought of doing something isn’t the same as doing it, whether it’s well-intentioned or malign. If I mean to call my mother back in the US every Sunday, the thought doesn’t count unless I actually follow through and do so. And if I want to run over a cyclist who’s slowing down traffic, it really doesn’t matter unless I nudge the steering wheel to the left and hit them. Intent to commit a crime is not the same as the crime itself, until I start taking the steps necessary to carry it out, such as purchasing explosives or writing a plan to blow something up. If we ever start allowing algorithms to mark people as potential criminals and treat them as such before they’ve committed a crime, we’ll have lost something essential to the human condition.
A second problem is that the algorithms are going to be created by people. People who are fallible and biased. Even if the individual biases are compensated for, the biases of the culture are going to be evident in any tool that’s used to detect thought crimes. This might not seem like much of a problem if you’re an American who agrees with mainstream American values, but what if you’re not? What if you’re GLBT? What if you have an open relationship? Or like pain? What if there’s some aspect of your life that falls outside what the mainstream of our society considers acceptable? Almost everyone has some aspect of their life they keep private because it doesn’t meet societal norms on some level. It’s a natural part of being human and fallible. Additionally, actions and thoughts that are perfectly innocuous in the US can become serious crimes if you travel to the Middle East, Asia or Africa, and vice versa. Back to the issue of sexual orientation, we only have to look at the recent Olympics and the laws passed in Russia criminalizing ‘propaganda’ of non-heterosexual orientations. We have numerous examples of laws passed in the US only later to be judged unfair by more modern standards, with Prohibition being one of the most prominent. Using computer algorithms to uncover people’s hidden inclinations would have a disastrous effect on both individuals and society as a whole.
Finally, there are the twin ideas of false positives and false negatives. If you’ve ever run an IDS, WAF or any other type of detection and blocking mechanism, you’re intimately familiar with the concepts. A false positive is an alert that erroneously tags something as malicious when it’s not. It might be a coder using a string that happens to match one of your detection signatures, which your IDS flags as an attack. Or it might be a horror writer looking up some horrible technique that the bad guy in his latest novel is going to use to kill his victims. In either case, it’s relatively easy to identify a false positive, though a false positive from a behavioural algorithm has the potential to ruin a person’s life before everything is said and done.
Much more pernicious are false negatives. This is when your detection mechanism has failed to catch an indicator and therefore not alerted you. It’s much harder to find and understand false negatives because you don’t know if you’re failing to detect a legitimate attack or if there are simply no malicious attacks to catch. It’s hard enough to understand and detect false negatives when dealing with network traffic, but when you’re dealing with people who are consciously trying to avoid displaying any of the triggers that would raise alerts, false negatives become much harder to detect and the consequences become much greater. A large part of spycraft is avoiding any behaviour that will alert other spies to what you are; the same ideas apply to terrorists or criminals of any stripe with a certain level of intelligence. The most successful criminals are the ones who make every attempt to blend into society and appear to be just like every other successful businessman around them. The consequence of believing your algorithms have identified every potential terrorist is that you stop looking for the people who might be off the grid for whatever reason. You learn to rely too heavily on the algorithm to the exclusion of everything else, a consequence we’ve already seen.
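There’s a well-known bit of arithmetic behind why false positives are so damaging at this scale: when the thing you’re hunting for is rare, even a very accurate detector produces alerts that are overwhelmingly wrong. The numbers below are entirely hypothetical, picked only to make the arithmetic visible, but the shape of the result holds for any rare-event detector:

```python
# A toy illustration of the base-rate problem in behavioural detection.
# Every number here is a hypothetical assumption, not real data.

population = 300_000_000      # people being monitored
actual_bad_actors = 1_000     # genuine threats assumed to exist
sensitivity = 0.99            # fraction of real threats the algorithm catches
false_positive_rate = 0.001   # fraction of innocents wrongly flagged (0.1%)

true_alerts = actual_bad_actors * sensitivity
false_alerts = (population - actual_bad_actors) * false_positive_rate

# Precision: given an alert, how likely is it to be a real threat?
precision = true_alerts / (true_alerts + false_alerts)
print(f"Alerts raised: {true_alerts + false_alerts:,.0f}")
print(f"Chance an alert is a real threat: {precision:.2%}")
```

Even with a 99% catch rate and a 0.1% false positive rate, roughly three hundred thousand innocent people get flagged to find fewer than a thousand real threats; well under 1% of alerts point at an actual bad actor. Each of those flags is a human being whose life gets turned inside out by investigators.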
So much of what goes on in society is a pendulum that swings back and forth as we adjust to changes in our reality. Currently, we have a massive change in technologies that allow for surveillance far exceeding anything that’s ever been available in the past. The thought that it might swing to the point of having chips in every person’s head that tell the authorities when we start thinking thoughts that are a little too nasty is a far-fetched scenario, I’ll admit. But the thought that the NSA might have a secret data center in the desert that runs a complex algorithm on every packet and phone call made in the US and the world to detect potential terrorists or criminals isn’t. However well-intentioned the idea might be, the failings of the technology, the failings of the people implementing it and the impacts on basic human rights and freedoms aren’t just things that should be considered; they’re all issues facing us right now that must be discussed. I, for one, don’t want to live in a world of “thought police” and “Minority Report”, but that is where this slippery slope leads. Rather than our Oracle being a group of psychics, it might be a computer program written by … wait for it … Oracle. And if you’ve ever used Oracle software, that should scare you as much as anything else I’ve written.