Terminator or tech hype? AI and the apocalypse

Battle of Ideas festival 2023, Saturday 28 October, Church House, London

An apocalyptic mood surrounds the latest advances in AI. Sci-fi and tech enthusiasts have long murmured about the ‘singularity’ – the point at which technology runs irreversibly away from us. Since the explosion in use of OpenAI’s ChatGPT, such digital doomsaying has gone mainstream, going way beyond the usual concerns about AI taking our jobs.

This year, a statement urging global leaders to take seriously the existential threat of AI garnered many high-profile signatories, including Jared Kaplan, Sam Harris, Demis Hassabis, Sam Altman and Bill Gates. Rishi Sunak is on record as having met with several of the key signatories to discuss the global threat. In 2015, when Elon Musk helped establish OpenAI, he declared he was motivated by fear that AI could become the ‘biggest existential threat’ to humanity.

Sam Altman, who became OpenAI’s CEO following Musk’s departure, is an avowed ‘prepper’ – one of many tech executives who have invested in underground bunkers and supplies, lest the worst should happen. Google CEO Sundar Pichai admits that concerns about AI ‘keep me up at night’, while his former colleague Geoffrey Hinton – a 75-year-old pioneer known as the ‘godfather’ of AI – quit his job at Google, saying that he now regrets his work and fears what he has created.

Are these apocalyptic fears of AI warranted? Or are they obscuring and stifling the true potential of this technology? The inscrutability of the way AI works – its ‘black box’ of algorithms – is now seen by many as Pandora’s box, dividing opinion around greater openness versus keeping the technology under wraps.

Who should have access to AI? Is it a liability if it falls into the hands of nefarious actors, or do we need greater transparency to ensure that the technology aligns with our human values and objectives? Do fears of an existential threat reflect the pessimism of our current moment? Or should we take seriously the warnings from those at the forefront of developing this technology?

Dr Norman Lewis
visiting research fellow, MCC Brussels; co-author, Big Potatoes: the London manifesto for innovation

Elizabeth Seger
AI governance and ethics researcher, Centre for the Governance of AI

Professor Ulrike Tillmann FRS
mathematician; director, Isaac Newton Institute for Mathematical Sciences; fellow, Alan Turing Institute

Sandy Starr
deputy director, Progress Educational Trust; author, AI: Separating Man from Machine