On Thursday, Gina Neff spoke to ABC News (Australia) about the Scarlett Johansson chatbot case and OpenAI’s new Safety and Security Committee

This week OpenAI announced that it has begun training its next AI model and has launched an internal Safety and Security Committee to produce a set of recommendations on the company's projects and operations. The committee, chaired by CEO Sam Altman, was formed in response to recent high-profile resignations over safety and to controversy over the company's use of a ChatGPT voice, called 'Sky', that sounded similar to actress Scarlett Johansson.

On Thursday 30 May, our Executive Director Professor Gina Neff spoke to ABC News Australia about regulation and the problem with internal safety committees. She pointed out that such committees can only take us so far: the greater problems with AI, those concerning our planet and our democracies, are bigger than any one company and will not be tackled by companies alone.

Prof Neff emphasised that the Scarlett Johansson case is striking because it shows there are still open legal questions around AI innovation; young companies in particular need to work out how to make their technologies comply with existing regulations.

Prof Neff also addressed the viral AI-generated 'All Eyes on Rafah' image, suggesting that AI-generated content is not bad in and of itself; rather, challenges arise when such images are used to deceive, to scam, or to pull people into different political realities.

Watch the full interview on ABC News.