(Mis)information: the battle for free speech online
Battle of Ideas festival 2024, Sunday 20 October, Church House, London
ORIGINAL INTRODUCTION
There’s no doubt that the growth of social media, online self-publishing and, now, AI, poses challenges for public discussion. It can be hard to tell what is true or false when claptrap, unfounded assertions or falsified narratives can be spread with ease online – sometimes by tinfoil-bedecked conspiracy proponents and bad-faith actors, sometimes by well-meaning, if naïve, individuals.
The worry is that our digital culture fuels hate speech, online harms, fake news, identity-driven polarisation and even violence. Elon Musk, owner of X, the social-media company most associated with free speech, is attacked by Big Tech critics who accuse him of stirring up tensions and violence by retweeting the claims of questionable individuals. Others worry about harmful or illegal material circulated on encrypted messaging services such as WhatsApp and Telegram. In a year of global elections, hyper-realistic videos and audio recordings created by generative AI have fuelled fears of deepfake threats to democracy.
Free speech does not mean unbridled licence because, as free speech critics regularly assert, you cannot falsely shout ‘fire!’ in a crowded theatre. But working out where the boundaries of acceptability lie is seldom easy. Many might baulk at the arrest of Pavel Durov, the Russian billionaire founder of Telegram, but also question whether platforms accused of supporting child pornographers and people traffickers should operate unchecked by officials. Was the UK home secretary, Yvette Cooper, right to hold social-media companies responsible for ‘shocking misinformation’ that escalated the riots and ‘the deliberate organisation of violence’ on these platforms? And what should we make of Prime Minister Keir Starmer’s comments that ‘social media companies have a responsibility for ensuring that there is no safe place for hatred and illegality on their platforms’?
Others worry that democracy itself is under threat when the charge of spreading misinformation becomes a go-to justification for restrictions in public life. In Brazil, a notoriously activist judge supported by the Supreme Court has banned X/Twitter for 22 million users there, X’s fourth-largest market. In America, First Amendment free-speech protection proved no barrier to Facebook being strong-armed into taking down posts after the Biden administration accused it of spreading misinformation on Covid vaccines and facemasks.
Do the public need protection from untruths and, if so, what legal controls should there be? Is there a problem of political bias when it comes to assessing misinformation, with authorities picking and choosing which untruths they go after? How can we tackle the actual problems of misinformation, while resisting cancellations and bans? Or should we simply tolerate the free dissemination of all ideas, in order to avoid the development of a broader censorious culture?
SPEAKERS
Timandra Harkness
journalist, writer and broadcaster; author, Technology is Not the Problem and Big Data: does size matter?; presenter, Radio 4’s FutureProofing and How to Disagree
Fraser Myers
deputy editor, spiked; host, the spiked podcast
Nico Perrino
executive vice president, Foundation for Individual Rights and Expression (FIRE)
Eli Vieira
journalist; editor, Gazeta do Povo; writer, Twitter Files – Brazil
Dr Daniel Williams
lecturer in Philosophy, University of Sussex
CHAIR
Alastair Donald
co-convenor, Battle of Ideas festival; convenor, Living Freedom; author, Letter on Liberty: The Scottish Question