
How algorithmic decision-making can exacerbate discrimination and even undermine the rule of law.

AI has undeniable power to help humanity meet the challenges of the coming decade: it is driving more accurate diagnoses in healthcare and more efficient energy allocation in the fight against climate change. As AI becomes ever more integrated into the world’s economic and social fabric, it is vital that, while exploiting its many benefits, we remain mindful of the damage it can do. In particular, we ought to consider its effects on human rights, especially the rights to equality and non-discrimination. There is growing evidence that automated decision-making can perpetuate racial discrimination, and it poses a particular threat to the rule of law when such decisions reach the judicial realm. The current legal framework is insufficient, and greater regulation is required to protect these rights.


The digital revolution has handed enormous power to algorithms to make decisions about our lives, from our eligibility for jobs to how much we pay for insurance and what we see on our social media feeds. Given the personal significance of these decisions, it is worrying how little is known about how these algorithms work. This opacity is largely due to stringent trade secret laws and to the algorithms’ statistical and technical complexity. There is a significant fear that some of these algorithms replicate and perpetuate existing human biases in their decision-making, leading to discriminatory (and potentially unlawful) outcomes, and yet they remain almost entirely unaccountable. Furthermore, unlike human decision-makers, who can change their minds as social attitudes shift, AI systems will not correct their behaviour unless they are deliberately redesigned or retrained.


In Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, data scientist Cathy O’Neil examines the ways algorithms use ‘big data’ to make decisions on sensitive matters such as loan eligibility and health insurance premiums. O’Neil considers the case of ZestFinance, a financial services provider that employs data analytics to assess eligibility for payday loans and to set the rate of interest for individual clients. As part of the loan application, the company assesses candidates’ spelling, grammar, and word choice in order to estimate their level of education, a practice justified by the correlation between educational attainment and creditworthiness. This is controversial because it penalises, with higher rates of interest, non-native English speakers and those whose language proficiency is limited by a learning disability or by socioeconomic disadvantage.[i] Rather than looking to solid evidence, such as past credit history, average earnings or even age, ZestFinance relies on a dubious correlation that is likely to make credit less accessible to people with learning disabilities and to immigrant families.
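To make the mechanism concrete, the sketch below shows how a single proxy feature can shift the price of credit for two applicants with identical finances. It is purely illustrative: the feature names, weights and applicants are hypothetical, and ZestFinance’s actual model is a trade secret that has never been published.

```python
# Illustrative sketch of proxy discrimination in a credit-scoring model.
# All feature names, weights and applicants are hypothetical; the real
# ZestFinance model is undisclosed and is not reproduced here.

def credit_score(applicant: dict) -> float:
    """Toy linear score: higher means 'more creditworthy'."""
    weights = {
        "past_repayment_rate": 0.6,   # solid evidence of creditworthiness
        "income_thousands":    0.3,
        "spelling_error_rate": -0.5,  # the dubious proxy for education level
    }
    return sum(weights[f] * applicant[f] for f in weights)

def interest_rate(score: float) -> float:
    """Toy pricing rule: lower scores pay higher interest."""
    return max(0.05, 0.40 - 0.005 * score)

# Two applicants with identical financial histories; one is a non-native
# English speaker whose application contains more spelling errors.
native  = {"past_repayment_rate": 95, "income_thousands": 30, "spelling_error_rate": 1}
migrant = {"past_repayment_rate": 95, "income_thousands": 30, "spelling_error_rate": 12}

print(interest_rate(credit_score(native)))   # lower rate
print(interest_rate(credit_score(migrant)))  # higher rate, despite identical finances
```

The point is not the particular numbers but the design choice: once a proxy for education (and, indirectly, for disability or migration background) is weighted into the score, identical financial histories no longer receive identical terms.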


Similarly, Facebook’s advertising practices have been accused of being discriminatory. Facebook’s business model rests on collecting users’ data and selling advertisers the ability to target them; its financial success stems from precisely targeted adverts. Advertisers can select characteristics according to what they perceive to be their ‘target market’. Until recently, advertisers could select ‘racial affinity’ as a targeting characteristic on Facebook’s advertising platform, meaning that housing and job adverts could, if the advertiser so chose, be withheld from ethnic-minority users’ feeds. This manifestly unlawful practice has changed following numerous class action suits in the United States, yet the underlying problem remains. Targeted advertising shapes which employment and housing opportunities citizens are even exposed to, narrowing people’s prospects in a way that contravenes anti-discrimination legislation.

In recruitment, data has likewise been used to make decisions about candidates’ suitability. Ironically, these measures were introduced to overcome human bias and to make the job market more meritocratic. In practice they have not always done so, and there are serious concerns about the practice. Notably, Amazon wound down its AI recruitment tool amid fears that it was perpetuating sex discrimination. Once again, this was because the algorithm relied on existing data to make future decisions, meaning that data reflecting a predominantly male engineering workforce was privileged. As a result, the algorithm did not produce gender-neutral results and even (inadvertently) penalised the use of the word ‘women’ on CVs.[ii] As liberal societies seek to root out discrimination, there is evidence that, rather than promoting equality, algorithms are entrenching existing inequalities and must therefore be regulated.
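The underlying mechanism is easy to sketch: a model trained on historical hiring decisions learns whatever patterns those decisions contain, including biased ones. The toy example below, which assumes a scikit-learn setup and invents a handful of CV snippets and labels, is not Amazon’s system (which was never released); it simply shows how a word associated with rejected applications acquires a negative weight.

```python
# Illustrative sketch of how a model trained on biased historical hiring
# decisions learns that bias. The CVs and labels below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cvs = [
    "software engineer robotics club",          # historically hired
    "software engineer chess club",             # historically hired
    "software engineer women in tech society",  # historically rejected
    "software engineer women chess club",       # historically rejected
]
hired = [1, 1, 0, 0]  # labels reflect a historically male-dominated workforce

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight for the token "women": it is negative, i.e.
# the model has learned to penalise CVs that mention the word.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print(weights["women"])
```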


Algorithmic decision-making poses a direct threat to the rule of law when it is adopted within the judicial realm. Criminal justice systems sit at the heart of how societies administer justice and have the power to deprive citizens of their liberty when they are deemed to have fallen foul of the law. The most prominent use of AI here has been the COMPAS system in Florida and other U.S. states. The software uses data and statistical modelling to predict whether a defendant is likely to reoffend. Ostensibly, this is an effective way of using statistics to improve the efficiency of criminal justice and ensure public safety. Nonetheless, ProPublica’s research found that the system incorrectly categorised African-American defendants as ‘high risk’ twice as often as white defendants. This is outrageous given the extent to which the scores were used by courts to determine bail and sentencing.[iii] Such a system not only reinforces existing bias but also potentially undermines access to justice for citizens. It is worth reiterating that the workings of this algorithm are undisclosed due to trade secret laws, meaning it is impossible to ascertain the reasoning behind its outcomes; COMPAS’s algorithm should be open to public scrutiny. Equality before the law and the right to a fair trial are essential components of a functioning democracy that respects the rule of law. The idea that flawed and potentially biased data sets could be grounds for depriving citizens of their liberty strikes at the very tenets of the rule of law.
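ProPublica’s central finding, that the tool’s errors fell far more heavily on one group than another, is exactly the kind of disparity that an audit of outcomes can reveal even while the algorithm itself remains secret. The sketch below illustrates the idea on fabricated records; it is not ProPublica’s analysis or data, and the real figures are in the article cited in the notes.

```python
# Illustrative audit of false positive rates by group, in the spirit of
# ProPublica's COMPAS analysis. The records below are fabricated.
from collections import defaultdict

records = [
    # (group, labelled_high_risk_by_tool, actually_reoffended)
    ("A", True,  False), ("A", True,  False), ("A", True,  True), ("A", False, False),
    ("B", True,  False), ("B", False, False), ("B", False, False), ("B", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were labelled high risk."""
    did_not_reoffend = [r for r in rows if not r[2]]
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend)

by_group = defaultdict(list)
for row in records:
    by_group[row[0]].append(row)

for group, rows in by_group.items():
    print(group, round(false_positive_rate(rows), 2))
# A markedly higher rate for one group is a red flag that the tool's
# errors are not falling evenly across the population.
```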


To counter such bias, the law must regulate this practice in order to protect vulnerable groups. At present, the General Data Protection Regulation (GDPR) plays a pivotal role in shaping data practices across Europe. Article 22 provides some limited protection, stating that the ‘data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling’.[iv] However, the scope of this Article is too narrow to adequately protect vulnerable groups from discriminatory practices. Article 22 only covers decisions that are fully automated, leaving data free to be used in processes with minimal human input. Furthermore, the protection only applies where a decision made using personal data has legal (or similarly significant) effects on the data subject. In practice, this has not provided the rigorous protection required in the twenty-first century, and the GDPR’s focus on individual rights means that wider social concerns are often overlooked.


Given the potential for algorithms to perpetuate biases in processes ranging from recruitment to judicial decision-making, it is important to identify solutions that go beyond the limited protection offered by the GDPR. Clearly, a robust regulatory stance is needed, one which scrutinises data inputs, outputs, and trends. Pasquale believes that greater transparency will promote fairer outcomes, arguing that ‘secret algorithms’ process ‘inaccessible data’ and that opening up the architecture and inputs of algorithms will therefore lead to more equitable and legitimate results.[v] More convincingly, however, Chander has argued that transparency is insufficient, given rigorous trade secret laws and the general inaccessibility of the data, and that we should instead opt for what he dubs ‘algorithmic affirmative action’.[vi] He describes this as ‘a set of proactive practices that recognize deficiencies in the equality of opportunity and act in a multiplicity of ways to seek to correct for those deficiencies.’ More specifically, Chander recommends altering algorithm design choices to take discrimination into account. He draws on a U.S. Federal Trade Commission report describing how a company removed location data from its algorithm because of concerns about racial discrimination, since ‘different neighbourhoods can have different racial compositions.’ Equally, he suggests closer scrutiny by third parties, who could assess input and output data to ensure compliance with anti-discrimination legislation.
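What such ‘algorithmic affirmative action’ might look like in code can be sketched as two deliberate design choices: stripping out features known to proxy for protected characteristics before a model ever sees them, and auditing decision rates across groups before deployment. The feature names and the disparity threshold below are hypothetical illustrations, not prescriptions drawn from Chander or the FTC report.

```python
# Illustrative sketch of two 'algorithmic affirmative action' practices:
# (1) remove features that proxy for protected characteristics, and
# (2) audit approval rates across groups before deployment.
# Feature names and the disparity threshold are hypothetical.

PROXY_FEATURES = {"neighbourhood", "postcode"}  # can track racial composition

def strip_proxies(applicant: dict) -> dict:
    """Design choice: the model is never shown suspected proxy features."""
    return {k: v for k, v in applicant.items() if k not in PROXY_FEATURES}

def audit_approval_rates(decisions, max_disparity=0.8):
    """Third-party style check on a list of (group, approved) pairs.

    Flags any group whose approval rate falls below `max_disparity` times
    the highest group's rate (a 'four-fifths'-style rule of thumb).
    """
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    flagged = [g for g, r in rates.items() if best and r < max_disparity * best]
    return rates, flagged

rates, flagged = audit_approval_rates(
    [("A", True), ("A", True), ("A", False), ("B", True), ("B", False), ("B", False)]
)
print(rates)    # approx. A: 0.67, B: 0.33
print(flagged)  # ['B'] is a disparity large enough to warrant investigation
```

Neither step requires the algorithm’s owner to publish its inner workings, which is precisely why Chander sees this approach as more workable than transparency alone.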


In conclusion, it is clear that algorithmic decision-making has the potential to exacerbate discriminatory practices and, when utilised in judicial settings, even risks undermining the rule of law. Whilst the GDPR goes some way towards mitigating the damage in this field, it does not go far enough, and other measures, such as Chander’s ‘algorithmic affirmative action’, may be necessary.



A piece by Isaac Swirsky.


Notes

[i] O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. London: Penguin, p.155.

[ii] Vincent, J. (2018). Amazon Reportedly Scraps Internal AI Recruiting Tool That Was Biased against Women. [online] The Verge. Available at: https://www.theverge.com/2018/10/10/17958784/ai-recruiting-tool-bias-amazon-report [Accessed 20 Apr. 2019].

[iii] Angwin, J. (2016). Machine Bias: Risk Assessments in Criminal Sentencing. [online] ProPublica. Available at: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [Accessed 30 Jan. 2020].

[iv] Burgess, M. (2018). What is GDPR? The summary guide to GDPR compliance in the UK. [online] Wired.co.uk. Available at: http://www.wired.co.uk/article/what-is-gdpr-uk-eu-legislation-compliance-summary-fines-2018 [Accessed 30 Jan. 2020].

[v] Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press, p.218.

[vi] Chander, A. (2017). “The Racist Algorithm?”. Michigan Law Review, 115(6).
