Join Tim O’Reilly in conversation with John Naughton to discuss how to make AI more accountable.

17:00 BST

Tuesday 10 October

Cambridge, UK

Register now

An abundance of information produces a scarcity of attention, as Herbert Simon noted in 1970. He postulated that in the future (that is, our present) we would need machines to help us manage our attention. That insight is an essential foundation for understanding and managing today’s information technologies.

Given the role of machines in how humans process information today – what Simon called the “institutional context” in which human “bounded rationality” operates – it becomes clear that the disclosures we require of companies miss the mark.

Today’s corporate disclosures have been designed for markets in which all the levers of market power and misbehaviour are financial.

By contrast, we need to define what today’s big tech companies and the developers of frontier AI should be required to disclose about the operational metrics that they use to monetize and engage users as well as to manage disinformation, fairness, safety, and other possible harms.

Even when AI companies are doing their best to do everything right, we can’t ignore the market-shaping power of internet platform technologies.

Without sufficient information from which to form a baseline of AI and platform behaviour, we will have little ability to detect and correct problems when things begin to go off the rails.

About Tim O’Reilly

Tim O’Reilly has a history of convening conversations that reshape the computer industry. He has played a key role in framing and evangelizing terms such as “open source software”, “web 2.0”, “the Maker movement”, and “government as a platform”. He is the founder, CEO, and Chairman of O’Reilly Media, and a partner at the early-stage venture firm O’Reilly AlphaTech Ventures (OATV). He is also on the boards of Code for America, PeerJ, Civis Analytics, and PopVox. His book, WTF: What’s the Future and Why It’s Up to Us, explores what technology advances teach us about the future economy and government as its “platform.” He is a Visiting Professor of Practice at University College London’s Institute for Innovation and Public Purpose, where he is researching a new approach to regulating big technology platforms by limiting their ability to extract economic rents.

About John Naughton

John is co-founder of the Minderoo Centre for Technology and Democracy. By background a systems engineer, he is an academic and a newspaper columnist whose interests lie in the societal impact of digital technology. He is Emeritus Professor of the Public Understanding of Technology at the Open University, Director of the Wolfson Press Fellowship Programme and the Technology columnist of the Observer.

At CRASSH, he was co-director (with Sir Richard Evans and Professor David Runciman) of a five-year Leverhulme-funded research project on Conspiracy and Democracy (2013–2018), and with David Runciman he ran a two-year (2014–2016) research project on Technology and Democracy. He has written extensively on technology and its role in society, and is the author of a well-known history of the Internet, ‘A Brief History of the Future’ (Phoenix, 2000). His most recent book, ‘From Gutenberg to Zuckerberg: what you really need to know about the Internet’, is published by Quercus.

About the Minderoo Centre for Technology and Democracy

The Minderoo Centre for Technology and Democracy is an independent team of academic researchers at the University of Cambridge, who are radically rethinking the power relationships between digital technologies, society and our planet.

We are based in CRASSH (University of Cambridge Centre for Research in the Arts, Humanities and Social Sciences).