AI systems could "turn against humans": Tech pioneer Yoshua Bengio warns of artificial intelligence risks


Famed computer scientist Yoshua Bengio, an artificial intelligence pioneer, has warned of the nascent technology's potential negative effects on society and called for more research to mitigate its risks.
Bengio, a professor at the University of Montreal and head of the Montreal Institute for Learning Algorithms, has won multiple awards for his work in deep learning, a subset of AI that attempts to mimic the activity of the human brain to learn how to recognize complex patterns in data.
But he has concerns about the technology and warned that some people with "a lot of power" may even want to see humanity replaced by machines.
"It's really important to project ourselves into the future where we have machines that are as smart as us on many counts, and what would that mean for society," Bengio told CNBC's Tania Bryer at the One Young World Summit in Montreal, a gathering of young leaders addressing the challenges facing the world today.
Machines could soon have most of the cognitive abilities of humans, he said. Artificial general intelligence (AGI) is a type of AI technology that aims to equal or better human intellect.
"Intelligence gives power. So who's going to control that power?" he said. "Having systems that know more than most people can be dangerous in the wrong hands and create more instability at a geopolitical level, for example, or terrorism."
A limited number of organizations and governments will be able to afford to build powerful AI machines, according to Bengio, and the bigger the systems are, the smarter they become.
"These machines, you know, cost billions to be built and trained [and] very few organizations and very few countries will be able to do it. That's already the case," he said.
"There's going to be a concentration of power: economic power, which can be bad for markets; political power, which could be bad for democracy; and military power, which could be bad for the geopolitical stability of our planet. So, lots of open questions that we need to study with care and start mitigating as soon as we can."
Such outcomes are possible within decades, he said. "But if it's five years, we're not ready … because we don't have methods to make sure that these systems will not harm people or will not turn against people … We don't know how to do that," he added.
There are arguments to suggest that the way AI machines are currently being trained "would lead to systems that turn against humans," Bengio said.
"In addition, there are people who might want to abuse that power, and there are people who might be happy to see humanity replaced by machines. I mean, it's a fringe, but these people can have a lot of power, and they can do it unless we put the right guardrails right now," he said.
AI guidance and regulation
Bengio endorsed an open letter in June entitled "A right to warn about advanced artificial intelligence." It was signed by current and former employees of OpenAI, the company behind the viral AI chatbot ChatGPT.
The letter warned of "serious risks" from the advancement of AI and called for guidance from scientists, policymakers and the public in mitigating them. OpenAI has been subject to mounting safety concerns over the past few months, with its "AGI Readiness" team disbanded in October.
"The first thing governments need to do is have regulation that forces [companies] to register when they build these frontier systems that are like the biggest ones, that cost hundreds of millions of dollars to be trained," Bengio told CNBC. "Governments should know where they are, you know, the specifics of these systems."
As AI is evolving so fast, governments must "be a bit creative" and make legislation that can adapt to technology changes, Bengio said.
Companies developing AI must also be liable for their actions, according to the computer scientist.
"Liability is also another tool that can force [companies] to behave well, because ... if it's about their money, the fear of being sued, that's going to push them towards doing things that protect the public. If they know that they can't be sued, because right now it's kind of a gray zone, then they will not necessarily behave well," he said. "[Companies] compete with each other, and, you know, they think that the first to arrive at AGI will dominate. So it's a race, and it's a dangerous race."
The process of legislating to make AI safe will be similar to the ways in which rules were developed for other technologies, such as planes or cars, Bengio said. "In order to enjoy the benefits of AI, we have to regulate. We have to put [in] guardrails. We have to have democratic oversight on how the technology is developed," he said.
Misinformation
The spread of misinformation, especially around elections, is a growing concern as AI develops. In October, OpenAI said it had disrupted "more than 20 operations and deceptive networks from around the world that attempted to use our models." These include social posts by fake accounts generated ahead of elections in the U.S. and Rwanda.
"One of the greatest short-term concerns, but one that's going to grow as we move forward toward more capable systems, is disinformation, misinformation, the ability of AI to influence politics and opinions," Bengio said. "As we move forward, we'll have machines that can generate more realistic images, more realistic-sounding imitations of voices, more realistic videos," he said.
This influence might extend to interactions with chatbots, Bengio said, referring to a study by Italian and Swiss researchers showing that OpenAI's GPT-4 large language model can persuade people to change their minds better than a human can. "This was just a scientific study, but you can imagine there are people reading this and wanting to do this to interfere with our democratic processes," he said.
The "hardest question of all"
Bengio said the "hardest question of all" is: "If we create entities that are smarter than us and have their own goals, what does that mean for humanity? Are we in danger?"
"These are all very difficult and important questions, and we don't have all the answers. We need a lot more research and precaution to mitigate the potential risks," Bengio said.
He urged people to act. "We have agency. It's not too late to steer the evolution of societies and humanity in a positive and beneficial direction," he said. "But for that, we need enough people who understand both the advantages and the risks, and we need enough people to work on the solutions. And the solutions can be technological, they could be political ... policy, but we need enough effort in those directions right now," Bengio said.
- CNBC's Hayden Field and Sam Shead contributed to this report.