Queen's Policy Engagement

Why Artificial Intelligence Needs Democratic Governance

AI poses serious threats to democratic politics and institutions, as well as our capacity and right to engage freely in democratic practices, says Dr Birgit Schippers.


Popular depictions of artificial intelligence-based systems present a frightening and opaque Orwellian power that appears to dominate the lives of human beings – think Terminator or HAL 9000. In my view, the idea that AI will take over the world is misplaced. However, I do believe AI poses serious threats to democratic politics, democratic institutions, and our capacity and right to engage freely in democratic practices.

We must urgently address these threats by strengthening democratic control over those who design and develop AI, profit from it, and can use it to the detriment of democratic politics.


Artificial Intelligence and Bias

Recent concerns over AI have focused on its potential to compound existing racial and gender-based inequalities. For example, research by MIT scholar Joy Buolamwini and UCLA Professor Safiya Noble has demonstrated how algorithms trained on racially biased data sets discriminate against people of color, especially women of color.

The standard response to this concern is a call for diversity: we should use a diverse range of training data for algorithms, for example by including more dark-skinned and female faces when developing training data for face recognition technologies. We should also diversify the pool of AI developers beyond the predominantly pale and male staff.


AI’s Structural Threat to Democracies

The call to diversify the AI workforce has become shorthand for democratizing AI. Such focus is undoubtedly important and welcome: a diverse workforce brings a range of life experiences and perspectives to the workplace and workplace practices. But diversity does not tackle AI’s structural threat to our democracies. This threat runs deeper than the “add color, gender, and stir” approach.

Paul Nemitz, Principal Advisor at the European Commission, argues that AI technologies have the potential to distort the infrastructure of democratic deliberation and its context.

AI provides the tools that enable direct interference with democratic processes, for example by facilitating practices of misinformation and by microtargeting prospective voters through individually tailored political advertising. Such microtargeting exploits individual fears and vulnerabilities for political gain. It undermines the shared information basis of political communities, and destroys what philosopher Hannah Arendt called our “common world.”


Regulating Tech Companies

The actions of Facebook and Cambridge Analytica in the 2016 Brexit referendum and the U.S. presidential election of the same year have become textbook examples of digitally mediated interferences in the democratic process. Worryingly, recent evidence suggests that such practices continue.

These interferences benefit from the almost unfettered power of big tech companies, their respective models of surveillance capitalism, and their capacity to undermine democratic processes, practices, and institutions.

Regulating the tech giants and curtailing their influence on democratic politics is an urgent task. This task requires effective oversight over AI companies, conducted by AI-literate, democratic institutions, but also by other participants in the democratic process, including journalists, NGOs, academics, and the broader citizenry.


AI Concerns

The effective regulation of the tech giants must also tackle the wealth inequalities these companies create, whether in their local communities or globally.

Further, oversight must strengthen the employment rights of tech workers. These employees have been at the forefront of campaigns that monitor the practices of tech companies, and they have been instrumental in highlighting projects whose sole purpose seems to be enhancing state surveillance and targeting vulnerable people.

For example, Google employees called out their company's now-abandoned Dragonfly project and its since-discontinued participation in Project Maven, while Amazon staff asked their company to stop selling face recognition software to law enforcement agencies.

These examples highlight two additional concerns: the collaboration between private corporations and state agencies tasked with security, intelligence, and criminal justice responsibilities; and the design and development of AI-based technologies with a negative impact on human rights, civil liberties, and the capacity to participate in democratic politics.


AI as Technology of Control

Our rights-based democracies are vulnerable to the enormous surveillance capacity of AI-driven systems, which undermine our right to privacy and which interfere with the rights to freedom of expression and movement.

I do not deny that AI can be a force for good. But when used badly, wrongly, or with malicious intent, it becomes a technology of control. For example, the roll-out of face recognition technology in public spaces, or practices such as predictive policing, creates a chilling effect that undermines the political culture in which democratic politics thrives.

Data harvesting by private corporations and data sharing between those corporations and law enforcement agencies compound the threat to individual human rights, civil liberties, and the framework of democratic politics.


Artificial Intelligence Paradox

Artificial intelligence presents us with a paradox: the very technology that threatens democratic politics is here to stay, and we increasingly rely on AI systems in our everyday lives and in our practices as democratically engaged citizens, even in our criticism of AI.

We must urgently decide what AI-mediated democracies should look like. Can the AI genie released from the bottle of human creativity be democratized?

Democratic practices and values, including equality, participation, and accountability, must underpin the governance of AI, from the design and development stages to its application by users. AI needs democratic governance.


Article originally appeared on The Globe Post. 

The featured image has been used courtesy of a Creative Commons license. 


Posted by Dr Birgit Schippers

Dr Birgit Schippers is a Senior Lecturer in Politics at St Mary's University College Belfast. She is currently a Visiting Research Fellow at the Senator George J Mitchell Institute for Global Peace, Security and Justice at Queen's University Belfast. Her research sits at the intersection of critical political theory, international studies and global ethics.
