Artificial intelligence will threaten our democracy ahead of upcoming elections in the United Kingdom and the United States, according to one of the world’s leading computer scientists.
Speaking on Beth Rigby Interviews…, Dame Wendy Hall said AI’s ability to damage democracy should be a more immediate concern than any existential threat posed by the technology.
The UK’s AI skills champion told Beth: “Next year we will see a growth in disinformation, the deep fakes of this world, because AI makes it very easy to do that.
“You can just get the tools off the internet and it’s getting harder and harder to detect that a video, or a photo, or a piece of text has been faked.”
Dame Wendy sits on the government’s AI Council, an “independent expert committee” providing “advice to government and high-level leadership of the AI ecosystem”.
Dame Wendy is also Regius Professor of Computer Science at the University of Southampton, where AI is one of her specialties.
She added: “We’ve got two major elections coming up next year – the US, UK – and the EU have got elections as well.
“I see this as a threat to democracy. In the sense that we’ve got to help people understand where they’re getting the messages from.
“I think that’s more important than worrying about an existential threat in a hundred years’ time, but, I’m not saying the existential threat isn’t there.
“It is very unlikely at the moment, I think.
“Let’s say it’s unlikely at the moment, but it’s a possibility.
“So we have to prepare for the fact that we are keeping the AI under our control, so that we don’t become the slaves to that master, which is where the regulation comes in.”
Dame Wendy wants the upcoming global summit on AI – being held in the UK – to focus on deep fakes, in which AI inserts people into pictures and videos.
“We need people to quite quickly pull together the technology that’s used to detect fakes and to ensure that something is coming from a trusted source,” she said.
Despite warning of the threat to elections, Dame Wendy suggested AI could improve politicians’ decision-making by helping them condense information for briefings, as they “often talk about things they don’t know anything about”.
She said: “One thing you can do with a generative AI is give it a file and ask it to summarise it for you.”
Dame Wendy said the claim by Matt Clifford, another of the prime minister’s advisers, that artificial intelligence could have the power to be behind advances that “kill many humans” in just two years’ time was an “overreaction” to the risks AI poses.
Dame Wendy said: “We aren’t anywhere near that. The headline that said we’ve got two years to save the world was very misleading, and the quote was taken out of context.”
Again, she pointed out that “things could get out of control” in generations “down the line”.
“I’m glad we’re taking a lead that we need to think about global regulation in the area of AI in the same way as we think about climate science.
“We think about nuclear threats, and we think about chemical biological warfare. It is at that level that we have to discuss it.”
How would AI kill us? “By developing a drug that kills us all,” Dame Wendy reckons.
“AI technologies and AI machines can develop a new drug, and in the wrong hands, it could be sold as a great cure for such and such,” she said, noting the medical industry’s failure to catch the potential harms of drugs like thalidomide.
‘Freedom of speech at risk’
Dame Wendy also spoke about the difficulty with the government’s Online Safety Bill.
The bill, which the government says is intended to protect children and target those sharing illegal material, has been criticised because it would give the authorities access to private messages.
Technology companies have also said that some of the government’s demands for backdoors into people’s previously private messages would weaken the protections that keep those messages safe from bad actors.
Dame Wendy said: “We’ve got the Online Safety Bill, but actually the problem is this: Yes, we want to get child abuse offline.
“Yes, we want the big tech companies to be responsible for the really bad stuff that you can detect, and trace where it’s coming from and so on, and take offline.
“But we have freedom of speech at risk, and my choice is not necessarily your choice.
“It’s really difficult to say that the companies or governments decide what we can and can’t say online; that’s a very difficult area.”
The AI expert also spoke about Neuralink, the Elon Musk-owned company that aims to connect human brains to computers.
Dame Wendy said that it was “fantastic” for people with disabilities – but also highlighted it would mean a world “where computers can read your brainwaves”.
“I find that world a bit scary, I have to say. I want to make sure that we really understand what we’re doing with that technology.
“So I would like to think that this is a new technology that hasn’t yet emerged into the mainstream, but it’s definitely coming.”