Oi, AI!
Artificial intelligence, perils and public opinion
Artificial Intelligence has seemingly gone mainstream overnight. It is proving to be the most extreme example of the ‘Accelerando’ — a period of rapid technological change foretold by sci-fi writers — with engineering and the art of the possible racing ahead of the due diligence of science and policymaking. Do people, and what they think and want, even get a look-in?
Two years ago, colleague Reema Patel made an impassioned plea to avoid leaving technology to the experts. She identified a deficit created by “lock[ing] out the intelligent and critical layperson from a legitimate public debate about the potential, risks and limitations of technologies”. Instead, technology was typically developed by ‘DAD’ — decide, announce, defend.
Things may move quickly but fundamentals can hold firm. In 2017 — a lifetime ago for technology — Luciano Floridi described an increasingly shared and ‘enveloped infosphere’ in which “we are more analogue guests than digital hosts”. He did, though, dispute the apocalyptic vision of AI, diagnosing the real problem not as HAL — the fictional, sinister AI character in Arthur C. Clarke’s 2001: A Space Odyssey — but as H.A.L. — humanity at large.
Is H.A.L. under serious threat? Bill Gates, Elon Musk and Stephen Hawking are among those who have previously warned of the existential risk posed by AI. They should know. But according to a recent study by a group of researchers including Philip Tetlock, domain experts tend to be gloomier about the future than superforecasters.
This gap persists on a range of topics but is largest for AI. As The Economist points out, who gets to set the frame about future threats matters because domain experts tend to dominate public conversations; the media gravitate towards experts, despite superforecasters having a better track record. Importantly, both groups ranked AI as the biggest worry when thinking about catastrophe or extinction, perhaps because it is newer and lacks the long history of threats such as nuclear annihilation.
AI is making the world nervous, adding fuel to the already blazing ‘twitchy twenties’. Ipsos has found that, on average across 31 countries, nearly as many people say that products and services using AI make them nervous (52%) as say they are excited about them (54%).
Among a range of AI-related measures, nervousness has increased the most since the previous survey, conducted 18 months earlier. And, despite the publicity about AI’s new use cases, the percentage of adults who say they know which types of products and services use AI remains relatively unchanged.
While there is optimism about what AI can offer for time management and entertainment, there is widespread concern about its negative impacts, particularly on employment. Across the countries Ipsos surveyed, an average of 57% of those in work expect AI to change the way they do their current job, and 36% expect it to replace their current job.
A tendency to look both ways on technology has been neatly described by Ipsos CEO Ben Page as cognitive polyphasia. According to Ipsos Global Trends, seven in ten people can’t imagine life without the internet, but a larger proportion — eight in ten — are resigned to losing some privacy because of what new technology can do. Globally, six in ten fear that technical progress is destroying our lives.
Trust and excitement about AI tend to be higher among younger generations, especially Gen Z, as well as among those with higher incomes or education levels — those perhaps more insulated from technological progress. But younger age groups, perhaps scarred by the impact of social media, are more likely than older groups to strongly agree that technological progress is destroying their lives.
In Britain, the proportion of people holding this sentiment increased from 31% to 58% between 2013 and 2022. People’s trust that companies using AI will protect their personal information ranges from 72% in Thailand to just 32% in France, Japan and the United States. Ipsos found a divide between generally AI-enthusiastic emerging markets and AI-wary high-income countries.
Culture matters, particularly as countries seek geopolitical advantage through technology. In March, Chancellor Jeremy Hunt pledged £1 billion over five years for AI and supercomputing. Rishi Sunak wants Britain to be a global AI hub, leading debates about legislation and regulation, as a way of unlocking growth and improving public services.
You might say people have already voted with their feet (or clicks?). ChatGPT became the fastest-growing app in history, hitting an estimated 100 million monthly active users by January 2023, just two months after launch (it took TikTok nine months and Instagram two-and-a-half years to reach similar numbers, although Meta’s Threads has since got there even quicker). However, this would be to confuse uptake with endorsement, consumerism with politics, what is happening with what should happen.
As Oliver Morton wrote, we need to build an understanding that technology “does not have its own agenda but serves the agenda of others”. “Others” means people. AI needs people, and we need it to keep needing us.