
AI systems could 'turn against humans': Tech pioneer Yoshua Bengio warns of artificial intelligence risks



  • Artificial intelligence pioneer Yoshua Bengio has warned of the potential negative effects of the technology on society and called for more research and "guardrails" to develop AI safely.
  • There are arguments to suggest that the way AI machines are currently being trained "would lead to systems that turn against humans," Bengio said.
  • "We have agency. It's not too late to steer the evolution of societies and humanity in a positive and beneficial direction," he added.
Professor Yoshua Bengio, at the One Young World Summit in Montreal, Canada, on Friday, Sept. 20, 2024

Famed computer scientist Yoshua Bengio — an artificial intelligence pioneer — has warned of the nascent technology's potential negative effects on society and called for more research to mitigate its risks.

Bengio, a professor at the University of Montreal and head of the Montreal Institute for Learning Algorithms, has won multiple awards for his work in deep learning, a subset of AI that attempts to mimic the activity in the human brain to learn how to recognize complex patterns in data.

But he has concerns about the technology and warned that some people with "a lot of power" may even want to see humanity replaced by machines.

"It's really important to project ourselves into the future where we have machines that are as smart as us on many counts, and what would that mean for society," Bengio told CNBC's Tania Bryer at the One Young World Summit in Montreal.

Machines could soon have most of the cognitive abilities of humans, he said. Artificial general intelligence (AGI) refers to AI that aims to match or surpass human intellect.

"Intelligence gives power. So who's going to control that power?" he said. "Having systems that know more than most people can be dangerous in the wrong hands and create more instability at a geopolitical level, for example, or terrorism."

A limited number of organizations and governments will be able to afford to build powerful AI machines, according to Bengio, and the bigger the systems are, the smarter they become.

"These machines, you know, cost billions to be built and trained [and] very few organizations and very few countries will be able to do it. That's already the case," he said.

"There's going to be a concentration of power: economic power, which can be bad for markets; political power, which could be bad for democracy; and military power, which could be bad for the geopolitical stability of our planet. So, lots of open questions that we need to study with care and start mitigating as soon as we can."

Such outcomes are possible within decades, he said. "But if it's five years, we're not ready … because we don't have methods to make sure that these systems will not harm people or will not turn against people … We don't know how to do that," he added.

There are arguments to suggest that the way AI machines are currently being trained "would lead to systems that turn against humans," Bengio said.

"In addition, there are people who might want to abuse that power, and there are people who might be happy to see humanity replaced by machines. I mean, it's a fringe, but these people can have a lot of power, and they can do it unless we put the right guardrails right now," he said.

AI guidance and regulation

Bengio endorsed an open letter in June entitled "A right to warn about advanced artificial intelligence." It was signed by current and former employees of OpenAI — the company behind the viral AI chatbot ChatGPT.

The letter warned of "serious risks" posed by the advancement of AI and called for guidance from scientists, policymakers and the public in mitigating them. OpenAI has been subject to mounting safety concerns over the past few months, with its "AGI Readiness" team disbanded in October.

"The first thing governments need to do is have regulation that forces [companies] to register when they build these frontier systems that are like the biggest ones, that cost hundreds of millions of dollars to be trained," Bengio told CNBC. "Governments should know where they are, you know, the specifics of these systems."

Because AI is evolving so fast, governments must "be a bit creative" and craft legislation that can adapt to technological change, Bengio said.

Companies developing AI must also be liable for their actions, according to the computer scientist.

"Liability is also another tool that can force [companies] to behave well, because ... if it's about their money, the fear of being sued — that's going to push them towards doing things that protect the public. If they know that they can't be sued, because right now it's kind of a gray zone, then they will behave not necessarily well," he said. "[Companies] compete with each other, and, you know, they think that the first to arrive at AGI will dominate. So it's a race, and it's a dangerous race."

The process of legislating to make AI safe will be similar to the ways in which rules were developed for other technologies, such as planes or cars, Bengio said. "In order to enjoy the benefits of AI, we have to regulate. We have to put [in] guardrails. We have to have democratic oversight on how the technology is developed," he said.

Misinformation

The spread of misinformation, especially around elections, is a growing concern as AI develops. In October, OpenAI said it had disrupted "more than 20 operations and deceptive networks from around the world that attempted to use our models." These include social posts by fake accounts generated ahead of elections in the U.S. and Rwanda.

"One of the greatest short-term concerns, but one that's going to grow as we move forward toward more capable systems, is disinformation, misinformation, the ability of AI to influence politics and opinions," Bengio said. "As we move forward, we'll have machines that can generate more realistic images, more realistic-sounding imitations of voices, more realistic videos."

This influence might extend to interactions with chatbots, Bengio said, referring to a study by Italian and Swiss researchers showing that OpenAI's GPT-4 large language model can persuade people to change their minds better than a human. "This was just a scientific study, but you can imagine there are people reading this and wanting to do this to interfere with our democratic processes," he said.

The 'hardest question of all'

Bengio said the "hardest question of all" is: "If we create entities that are smarter than us and have their own goals, what does that mean for humanity? Are we in danger?"

"These are all very difficult and important questions, and we don't have all the answers. We need a lot more research and precaution to mitigate the potential risks," Bengio said.

He urged people to act. "We have agency. It's not too late to steer the evolution of societies and humanity in a positive and beneficial direction," he said. "But for that, we need enough people who understand both the advantages and the risks, and we need enough people to work on the solutions. And the solutions can be technological, they could be political ... policy, but we need enough effort in those directions right now," Bengio said. 

- CNBC's Hayden Field and Sam Shead contributed to this report.

Copyright CNBC