Panel discussion at AI Fringe
Sir Nigel Shadbolt in conversation with Chloe Smith MP at the AI Fringe | Credit: Gina Neff

What risks are most important to address at a meeting like the AI Safety Summit at Bletchley? In this latest AI Safety Summit diary, Gina Neff explores her conversations at Day 2 of the AI Fringe

My day started off early, with a briefing call with my team and BBC Radio 4 producers for Woman’s Hour, more on that later in the week…

I then headed over to the AI Fringe, where I did an interview with Alexander Norén, Senior tech and business correspondent at SVT (Swedish public service television) on the risks and opportunities of AI that we face now.

One wonderful thing about the AI Fringe event is the ability to see and talk to so many interesting people in one space. I had a great conversation between sessions with Max Low, a master's student at the Oxford Internet Institute, and Aisha Farooq, who works at Brunswick, a ‘critical issues’ consultancy. Later, I had another rewarding conversation with Adam Leon Smith of Dragonfly and Holly Porter of BCS. Adam is really passionate about standards and was on a panel on responsible AI ecosystems.

Gina Neff with Max Low, a master's student at OII, and Aisha Farooq, Brunswick | Credit: Gina Neff

In the afternoon, I was asked to do a TV interview for the Canadian Broadcasting Corporation. The interview was about getting a better understanding of what more needs to be done at the national/international level around AI development, and whether the AI Safety Summit happening in Bletchley is a good starting point.

I think that this summit helps to bring together research into AI and do important horizon scanning of the safety concerns at an international level. As the Prime Minister said last week at the Royal Society, we can’t do this alone. AI doesn’t have borders, and we need to engage all of the world’s leading AI powers to ensure safe and responsible AI development.

I met up with Ibrahim Habli, Professor of Safety-Critical Systems at the University of York. He, along with Ana MacIntosh, John McDermid and others, is leading a new Centre for Doctoral Training in AI Safety: a brilliant and timely programme to train the next generation of people to work in AI safety.

I then attended a panel that examined how AI safety will help realise the opportunities of AI for people and society in the UK and around the world, and why it’s so crucial to have those conversations now. This was Sir Nigel Shadbolt in conversation with Chloe Smith MP, moderated by Resham Kotecha, Global Head of Policy at the Open Data Institute.

Sir Nigel was as sharp as ever: “There is nothing at home in the circuits [in frontier models]”, he said. “They are not conscious and won’t become so.”

The panellists explained that AI is not a standalone risk: it is complex and cuts across public services. Sir Nigel emphasised that this complexity means we need to teach children a new kind of digital civics.

This was a good discussion pointing the AI Fringe audience to the Summit itself after two days of talking about expanding the canvas.

Gina Neff and Alondra Nelson | Credit Gina Neff

Then it was off to the afterparty. I made my way to the headquarters of Google DeepMind for speeches by James Manyika and Michelle Donelan. It was great to catch up with Chris Meserole, who has just left the Brookings Institution to run the newly announced Frontier Model Forum, an industry body to identify best practices for AI safety.

I also had great chats with Genevieve Bell, one of the most influential social scientists in Silicon Valley and recently named the next Vice-Chancellor and President of the Australian National University, and with Alondra Nelson, the former acting director of the US Office of Science and Technology Policy, one of the key forces behind the US approach to AI regulation and recently named to the UN High-level Advisory Body on AI.

With both of them in the room in Bletchley Park, sociotechnical approaches to really big problems will be well represented.


Recommendations for the week

Some sessions coming up this week that our team recommend include:

– AI + National Security Symposium (2 Nov) – The challenges and opportunities that AI brings to national security.

– Gender Equity and AI (2 Nov) – Examining gender disparities and the critical role of gender diversity in the field of AI.

Read all posts from my AI Safety Summit Diary:

Wednesday 25 October – AI Safety Summit Diary: the lead-up

Thursday 26 October – AI Safety Summit Diary: A Prime Ministerial visit

Monday 30 October – AI Safety Summit Diary: How do we ensure responsible AI?

Tuesday 31 October – AI Safety Summit Diary: The Summit approaches…

Wednesday 1 November – AI Safety Summit Diary: The here and now: The impact of AI on our lives

Thursday 2 November – AI Safety Summit Diary: What’s next after the AI Safety Summit?

Friday 3 November – AI Safety Summit Diary: How do we build a responsible and trustworthy international AI ecosystem?