Justice is blind, the saying goes, which means that a person’s particulars – their social status, race, gender, etc. – should have no bearing on fair judgement in any legal dispute. By this standard, we are all considered equal before the law. In A Theory of Justice, the philosopher John Rawls proposed the following thought experiment: what kind of society and government would you choose if you did not know beforehand the role you would play in that society? Imagining yourself an anonymous, “random” person — raceless, genderless, and classless — in other words “blinded” to your own particulars, what would you endorse as the fairest system for distributing assets, opportunities, and obligations among the citizenry?
And now imagine Justice wearing X-ray glasses! Artificial Intelligence has arrived, and it hoovers up personal data at an ever-increasing rate, data we are often unaware of even “emitting”: the rhythm of daily movement and sleep, vocal patterns, word-choice frequencies. Through these traces we have begun to disclose information that in many instances exceeds what we know about ourselves. To the holders of this information we are the furthest thing from anonymous. Now consider how this utter loss of anonymity might ultimately impact how society distributes assets and opportunity, and how it dictates each individual’s obligation to the state.
Machine learning and other AI algorithms are being used in a wide variety of contexts, from medical and legal decision-making to ad targeting, mortgage risk scoring, and sentencing recommendation systems. The data already collected amounts to a cyber-profile of each of us at a level of resolution inconceivable just a decade ago. And computers are, of course, inference engines: fed these high-resolution profiles, they spit out predictions about our choices and behaviors that we have no role in vetting. Based on this “number crunching” we may be thrown into categories we did not even know existed, with real consequences for the loans, treatments, and sentences we receive.
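To make that point concrete, here is a deliberately toy sketch, in Python, of how a statistical model might map behavioral signals to a category the person never chose and never sees. The feature names, weights, threshold, and labels are invented for illustration only; no real system is being described.

```python
# Toy illustration only: invented features, weights, and category labels.
# A real system would learn its parameters from large datasets; the point
# is that the mapping from behavioral "exhaust" to a label happens
# entirely without the person's input or review.

def predict_category(features):
    """Map a handful of behavioral signals to an opaque risk label."""
    # Hypothetical weights such a model might have learned.
    weights = {
        "sleep_deficit_hours": 0.4,    # hours of sleep below a typical baseline
        "late_night_activity": 0.9,    # frequency of 2 a.m. phone use
        "typing_speed_variance": 0.3,  # erratic typing rhythm
    }
    score = sum(weights[k] * features.get(k, 0.0) for k in weights)
    return "elevated-risk" if score > 0.5 else "baseline"

# A person's data trail, gathered without their awareness.
person = {"sleep_deficit_hours": 2.5, "late_night_activity": 1.0, "typing_speed_variance": 0.8}
print(predict_category(person))  # prints "elevated-risk"; the person never sees the score or the label
```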
And then there is the worry: will AI serve our best interests? The fictional HAL 9000 computer in Stanley Kubrick’s 2001: A Space Odyssey was designed to serve mankind but ran dramatically afoul of this credo. Should we begin now to consider the rules of engagement between ourselves and the super-intelligent machine agents we may reasonably anticipate? As AI algorithms grow ever more prevalent, we must think hard about the ethical, legal, and social implications of these technologies and their myriad current and potential future uses.