
On 6 June, the Minderoo Centre for Technology and Democracy organised a workshop to hear from policymakers, industry, and civil society on the topic of AI-enabled intimate image abuse.

AI-enabled intimate image abuse is a global phenomenon, and the landscape is evolving rapidly as code-free technology becomes increasingly accessible.

AI-enabled intimate image abuse occurs when harm is caused by the creation and distribution of AI-generated non-consensual intimate images. The definition of ‘intimate image’ varies significantly across communities and does not refer only to sexual imagery.

Today, the average person can create such images using code-free tools, with no prior experience required. This amounts to a democratisation of the ability to cause harm: the technical barriers to entry are now low, and synthetically-altered images (“deepfakes”) are increasingly realistic.

The session explored what legislation and industry interventions are required to tackle the issue. This workshop followed a scoping session with key stakeholders from academia, industry, and civil society in April 2022.

Recommendations

The following recommendations were discussed:

- AI-enabled intimate image abuse should be included specifically in Schedule 7 of the Online Safety Bill. The Bill is currently the best opportunity to ensure that all forms of AI-enabled intimate image abuse are covered by regulation, whether by bringing existing criminal offences into Schedule 7 or by including other forms of abuse in the (as yet undefined) list of harms to adults or in codes of practice

- The definition of online harms in the Online Safety Bill should be more inclusive (i.e., include reference to gendered harms, in light of the disproportionate extent of online harms faced by women) and take into consideration the interrelation of harms

- Regulated services should be required to adopt systems and processes that pre-emptively detect and mitigate emerging harms

- The conceptualisation of AI-enabled intimate image abuse as an online harm should take into consideration different cultural contexts, especially how the abuse affects women

- Regulated services should be compelled to be transparent about their risk assessments

- AI-enabled intimate image abuse should be made a criminal offence, with the offence conceptualised from the viewpoint of the victim rather than defined by the perpetrator’s motivations

- The media literacy provisions in the Online Safety Bill should be strengthened to empower users to navigate their safety online

Read the full summary of the discussion.