The Citizen, the Tyrant, and the Tyranny of Patterns
Costica Dumbrava
I agree with Wessel Reijers that social scoring systems limit political freedom and instrumentalise citizenship to impose social control. While technologies have always been used for political ends, the latest technologies relying on big data and complex algorithms offer uniquely powerful and highly effective tools to surveil people, quash dissent, and reinforce authoritarian rule. What is new is the wide appeal of technologies as ‘fixes’ for pressing social and political issues. Building on their ‘success’ in commercial sectors such as banking and marketing, predictive algorithms and scoring systems are being enthusiastically adopted by government agencies throughout the world to help make decisions in areas such as criminal justice, welfare, and border control. The Chinese Social Credit scheme is nevertheless unique in its ambition to aggregate data from a wide variety of sources into a set of prescriptive algorithms for “good citizenship” backed by state coercion.
Good citizenship cannot be captured or fixed by an algorithm, because: (1) people genuinely disagree about what good citizenship is; (2) there are limits to how any conception of good citizenship can be enforced in states that uphold the rule of law; and (3) even the best scheme of algorithmic citizenship would fail to achieve its objectives, owing to the inherent weaknesses of applying algorithms to social affairs.
Algorithms vs Citizenship
Firstly, there are presumably as many conceptions of good citizenship as there are citizens. Some focus on rights, others on duties; some emphasise a shared history or a sense of belonging, while still others point to civility, solidarity, or sacrifice. Any attempt to design a citizenship algorithm will have to deal with this pluralism, either by settling on a minimalistic, generic version of citizenship (yielding a rather toothless algorithm) or by prioritising certain views over others (whether through democratic processes or not).
Secondly, it is one thing to figure out what good citizenship is and another to establish what citizens should be required to do, believe, or express in light of such a conception. Helping your neighbour instead of binging on alcohol might be the more celebrated behaviour in many conceptions of citizenship, but, unless there are explicit laws requiring or proscribing such behaviours, the state should not penalise people for failing to help their neighbours or for binge drinking. Using state coercion to enforce moral norms and social expectations, as the Chinese scheme seems to do, unwarrantedly transgresses the boundary between legality and morality. Chinese citizens’ scores are built by aggregating, and inferring from, a wide variety of data (such as online shopping data, use of services, administrative, financial and educational records, and social media activity) without any public explanation, open to contestation, of why this data is relevant or how the inferences work.
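To see the shape of the problem in the abstract, consider a deliberately crude, hypothetical sketch (the features and weights below are invented for illustration; nothing here is drawn from the actual scheme): heterogeneous records are collapsed into a single number through weights the scored person never sees and cannot contest.

```python
# A purely hypothetical citizen score: heterogeneous records collapsed
# into one number. The features and weights are invented for illustration;
# in an opaque system, the scored person sees neither of them.
HIDDEN_WEIGHTS = {
    "on_time_bill_payments": +2.0,
    "luxury_purchases": +0.5,
    "hours_of_video_games": -1.0,         # why relevant? no way to ask
    "critical_social_media_posts": -5.0,  # morality enforced as arithmetic
}

def citizen_score(records):
    """Weighted sum over whatever data happens to be available."""
    return sum(HIDDEN_WEIGHTS.get(key, 0.0) * value
               for key, value in records.items())

print(citizen_score({
    "on_time_bill_payments": 12,
    "hours_of_video_games": 20,
    "critical_social_media_posts": 1,
}))  # -> 12*2.0 - 20*1.0 - 1*5.0 = -1.0
```

Every modelling choice here, which features count and by how much, encodes a contestable moral judgement; yet none of it is exposed to contestation.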
Thirdly, even assuming that we could agree on a normatively acceptable scheme of algorithmic citizenship, its success would depend on its capacity to represent and make sense of complex social issues. This criticism concerns algorithms more broadly, but it is particularly relevant for the Chinese social scoring scheme given its lack of transparency and accountability. There is growing evidence of widespread bias and discrimination in algorithms applied in social contexts (e.g., policing, sentencing, hiring).[1] The bias is partly due to dirty data, a problem that is particularly acute in the case of ‘social’ data, which is often (self-)reported or recorded by people and thus inevitably contains errors, inaccuracies, inconsistencies, or simply untruthful information. Yet a bigger problem with algorithms is that they are bad at dealing with social and normative issues. While big-data algorithms can be useful in the physical sciences (as in, say, inferring the physical attributes of stars using data-driven models[2]), they are problematic when it comes to ‘understanding’ the social world, delivering social justice, and fostering good citizenship. Even if we fixed the many problems raised by applying algorithms to social affairs (dirty data, the end of privacy, black-box algorithms, unreasonable inferences, etc.), we would still be left with what I call the problem of the tyranny of patterns. This problem comes in two parts.
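Before turning to those two parts, the point about dirty data deserves a concrete illustration. Here is a deliberately simple, purely hypothetical simulation (all rates are invented; nothing is drawn from any real system): two groups behave identically, but one is watched more closely, and a score naively computed from the records then ‘discovers’ that the heavily watched group is riskier.

```python
import random

random.seed(0)

# Purely hypothetical rates: both groups offend equally often,
# but group B is observed twice as intensively as group A.
TRUE_OFFENCE_RATE = 0.10
RECORDING_RATE = {"A": 0.3, "B": 0.6}

def recorded_offence_rate(group, n_people=100_000):
    """Fraction of the group with a *recorded* offence in the database."""
    recorded = 0
    for _ in range(n_people):
        offended = random.random() < TRUE_OFFENCE_RATE
        observed = random.random() < RECORDING_RATE[group]
        if offended and observed:
            recorded += 1
    return recorded / n_people

# A naive risk score built from the records inherits the surveillance
# asymmetry: identical behaviour, yet group B looks twice as "risky".
for group in ("A", "B"):
    print(group, round(recorded_offence_rate(group), 3))
```

No downstream algorithmic sophistication can recover the true rates from records like these; in this sense the bias is carried by the data itself, not by the model.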
Tyrannies of the Past and the Similar
The first part is the tyranny of the past: algorithms assign risks based on past behaviours and social facts. This locks individuals into patterned categories that deny them the capacity to redeem themselves and to act freely. You may have a crime stricken from your criminal record, but you may never be able to escape previously recorded data about your social background, place of birth, and education. The second part of the problem is the tyranny of the similar. Algorithmic assessments of a person are based not only on data about that person’s past but also on data about the pasts of other individuals who are deemed similar, in some statistical way, to the person in question. One may therefore be treated as a risky or a trustworthy individual, and thus be barred from or granted access to certain resources and opportunities, because of who one’s fellow ‘similars’ are and how they behave. Algorithmic bias and unfairness are not just technical issues that could be fixed by more and better data or by sharper algorithms; they are a ‘mathematical certainty’,[3] given that algorithms mirror the real world, which is itself biased and unfair. It is thus highly misguided to entrust algorithms (and the people behind them) with the task of fixing citizenship and building more just societies.
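The two tyrannies can be shown in miniature. The sketch below is purely illustrative (the attributes, outcomes, and nearest-neighbour rule are all invented for the example, not taken from any actual scheme): a person’s score is simply the average recorded outcome of their statistically nearest ‘similars’, computed over fixed historical attributes.

```python
from math import dist

# Hypothetical profiles: (years_of_education, neighbourhood_index, past_flags)
# paired with a recorded outcome for each person already in the database
# (1 = flagged as untrustworthy, 0 = not flagged).
population = [
    ((12, 3, 1), 1),
    ((12, 3, 0), 1),
    ((16, 7, 0), 0),
    ((17, 8, 0), 0),
    ((11, 2, 2), 1),
]

def risk_score(person, k=3):
    """Average outcome of the k statistically nearest neighbours.

    Tyranny of the past: the inputs are fixed historical attributes,
    so present conduct cannot move the score.
    Tyranny of the similar: the score is entirely a function of how
    *other* people with similar records behaved.
    """
    neighbours = sorted(population, key=lambda rec: dist(person, rec[0]))[:k]
    return sum(outcome for _, outcome in neighbours) / k

# Someone whose recorded past resembles the first cluster is scored
# as high-risk regardless of anything they do today.
print(risk_score((12, 3, 1)))  # -> 1.0
```

Nothing the scored person does today enters the function, so no present conduct can redeem the score; and the score shifts whenever the recorded behaviour of their ‘similars’ shifts, even though the person themselves has done nothing at all.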
[1] O’Neil, C. (2017). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin Random House, London.
[2] Falk, D. (2019). How Artificial Intelligence Is Changing Science. Quanta Magazine, March 2019. Accessed at https://www.quantamagazine.org/how-artificial-intelligence-is-changing-science-20190311
[3] Fry, H. (2019). Hello World: How to Be Human in the Age of the Machine. Penguin Random House, London.