Professor Gina Neff provided evidence at a recent All-Party Parliamentary Group on Artificial Intelligence (APPG AI) session on disinformation, deep fakes, and safeguarding democratic processes.

The integrity of this year’s UK General Election and local elections is threatened by the rapid proliferation of disinformation and deepfakes. In response to these threats, APPG AI convened an evidence session (‘Citizen Participation in AI: Navigating Disinformation and Deep Fakes – Safeguarding Democratic Processes and Responsible AI Innovation’) on how synthetic deepfakes are generated and fraud is detected using AI, the impact of AI-generated content on democratic processes and societal well-being, and the policy and regulatory challenges arising from these issues.

On 26 March, Professor Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy, attended the APPG AI session at the UK House of Lords. The session was led by APPG AI Co-Chairs Stephen Metcalfe MP and Lord Clement-Jones CBE, with other expert evidence givers including Carl Miller, Research Director at the Centre for the Analysis of Social Media (CASM) at Demos; Aled Lloyd Owen, Global Policy Director at Onfido; Markus Anderljung, Head of Policy at the Centre for the Governance of AI; and Sophie Murphy Byrne, Senior Manager at Government Affairs (EU&UK) at Logically. Simon Horswell, Fraud Specialist Manager at Onfido, provided a live showcase of the company’s fraud lab, which serves to address the challenge of training machine learning models with sufficient data.

Prof Neff focused her evidence on three key areas: the nature of elections and safeguarding democracy, the inadequacy of fact-checking as a solution, and protecting people’s right to participate in the public sphere. She emphasised the following points:

Protecting democracy involves a combination of social and technological measures. Disinformation campaigns need to be countered by protecting civil liberties and human rights, ensuring access to free and independent media, strengthening the rule of law, accountability and transparency, and increasing public awareness and participation. Technological solutions alone will never suffice in safeguarding the democratic process.

The solution to safeguarding democracy cannot be fact-checking. Fact-checkers and journalists lack the tools and methodologies needed to counter the spread of disinformation through AI. The tools that do exist for detecting AI-generated photos and videos are often not sufficiently reliable or accurate. Moreover, much of virality is driven by emotion and humour rather than facts and evidence, making such content incredibly difficult for automated fact-checking systems to assess. The issue is further complicated by the circulation of content (such as deepfake audio) on messaging platforms like Telegram or WhatsApp, which are less researched and unmoderated.

Protecting people’s rights to participate in the public sphere is critical for maintaining a shared social reality. Disinformation campaigns often target marginalised groups, exacerbating societal divisions. Deepfakes in particular, which overwhelmingly target and harass women and people from minority groups, threaten to shut these voices out of the public sphere and conversation. This poses a significant threat to democracy.

As first steps to remedy the above problems, the Minderoo Centre for Technology and Democracy proposes three solutions:

  1. Code of Conduct: Establish guidelines for using generative AI in political campaigns and support independent journalism.
  2. Better Monitoring: Enable independent researchers to monitor online platforms’ health to identify threats and inform solutions. The EU AI Act will enable this and the Online Safety Act may eventually do the same. Further mechanisms for mandatory researcher access to data are crucial to inform society of threats and solutions that work to safeguard democracy.
  3. Better Tools and Platform Policies: Online platforms must design better tools and policies to prevent chronic abuse. They need to be encouraged to enhance safety-by-design approaches and upstream solutions, such as improved human content moderation, effective handling of user complaints and improved reporting mechanisms. Stronger guardrails will ensure generative AI is not used by malign individuals or organisations to design and spread disinformation campaigns.

Read the transcript of the discussion in the Parliamentary Brief.