A Canadian godfather of AI calls for a ‘pause’ on the technology he helped create

Powerful AI tools can have negative uses, and the world isn’t ready, Yoshua Bengio warns

Author of the article: Marisa Coulton

Published Mar 30, 2023  •  Last updated 4 days ago  •  6 minute read




Yoshua Bengio in his office at Université de Montréal, in 2019. PHOTO BY JOHN MAHONEY/MONTREAL GAZETTE


Yoshua Bengio, the Canadian professor who is considered one of the godfathers of artificial intelligence, said the technology he helped create should be paused before AI turns into something too big to control.

“I’m concerned that powerful (AI) tools can also have negative uses, and that society is not ready to deal with that,” Bengio, who teaches at Université de Montréal, said at a press conference on March 29. “That’s why we’re saying, ‘Slow down.’ Let’s make sure that we develop better guardrails.”

Bengio assembled reporters in his hometown after he and more than 1,100 executives, thinkers and researchers released an open letter that argues that society needs to hit the brakes on the development of large AI systems. Among the other signatories were Tesla Inc. co-founder and chief executive Elon Musk; Steve Wozniak, the co-founder of Apple Inc.; and Yuval Noah Harari, author of the international bestseller Sapiens: A Brief History of Humankind.

“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” asked the signatories of the letter, published by the Future of Life Institute. “Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”

The letter — and the desire of signatories such as Bengio to amplify their concerns — is the latest demonstration of alarm related to the release of OpenAI LLC’s ChatGPT, a chatbot that synthesizes material from the internet so convincingly that it is often impossible to tell its output was written by a machine. Goldman Sachs Group Inc., the investment bank, released a report March 27 that said such “generative” AI systems could spark a productivity boom that would boost global economic growth, while also bringing “significant disruption” to the labour market by affecting the equivalent of 300 million jobs.

It’s not only the technology’s potential that is causing unease, but the speed at which it is advancing. OpenAI introduced ChatGPT in November and has already released an even more powerful model, GPT-4. Bengio, Musk and their co-signatories said they worry about an arms race, as Alphabet Inc. and the other big technology companies rally to keep pace with OpenAI, a San Francisco-based company backed by Microsoft Corp.

“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” the letter said.

The writers called on “all AI labs” to pause the training of AI systems more powerful than GPT-4 for six months, and resume only once “we are confident that their effects will be positive and their risks will be manageable.”

Sounding the alarm

Bengio is one of the reasons some say Canada has the potential to be a leader in AI. In 2018, he received the Turing Award, perhaps the most prestigious honour in computing, along with the University of Toronto’s Geoffrey Hinton and French computer scientist Yann LeCun for their pioneering work on neural networks and machine learning. Bengio helped create Mila, the Montreal-based AI research institute, and was one of the drivers behind an effort that resulted in the Montreal Declaration for a Responsible Development of Artificial Intelligence in 2017.

The letter and press conference marked the second time in the span of a month that Bengio has sounded an alarm over what AI has become.

On March 20, Mila published a book, Missing Links in AI Governance, in collaboration with the United Nations Educational, Scientific, and Cultural Organization, or UNESCO. The compilation features 18 articles on AI written by some of the most cited experts on the topic, including Bengio; Andrew Ng, co-founder and former head of Google Brain, the company’s artificial intelligence research team; Erik Brynjolfsson, director of the Digital Economy Lab at the Stanford Institute for Human-Centered AI; and Joy Buolamwini, founder of the Algorithmic Justice League, an organization that works to combat bias in software.

Mila is not the only organization calling for the regulation of AI. Even the United States Chamber of Commerce, the country’s largest lobbying group, has called for a “regulatory framework” for AI. “If appropriate and reasonable protections are not put in place, AI could adversely affect privacy and personal liberties or promote bias,” the Chamber said in a report.

Though there still is a way to go before AI reaches human-level intelligence, Bengio said at the March 29 press conference that it is already powerful enough to make a human interlocutor believe they are speaking with another human.

AI thinkers are raising the alarm now, before the technology gets even more powerful, because enacting change will require substantial lead time, Bengio said. Societies don’t change quickly, governments don’t pass laws quickly, and companies don’t change their ways quickly, he said.

Good, bad or neutral?

One of AI’s key dangers stems from its source dataset.

“Who has that data?” Benjamin Prud’homme, executive director of AI for Humanity at Mila, said earlier this month. “Who does that data represent? (Does it represent) minorities or does it represent only majorities? What language is that data in? Is it only in English or are we looking at other languages?”

The technology could be used to “tackle climate change and biodiversity survival, to uphold human rights, (and) for health applications,” but it’s not, Prud’homme said. Instead, AI threatens to deepen the prejudice and racism already present in society by basing its output on non-inclusive datasets.

AI is already being optimized by a handful of large corporations for profit rather than broader social good, he said, adding, “If we don’t very carefully craft inclusive processes for AI systems —representation of women, Indigenous Peoples, racialized communities, etc. — it’s a significant risk to deepen inequality.”

Missing Links’ overarching recommendation is that governments should step in to incentivize “socially beneficial applications” for AI, such as using it to discover new drugs for illnesses, which would lower research and development costs.

Governments can and should be involved in AI, Prud’homme said, adding that the “self-regulation of companies is not going to suffice; we need public authorities, democratic authorities to come up with regulation and protection of human rights and democratic principles.”

“Canada has quite a leadership position on this,” said Prud’homme, who in 2019 was political adviser to Chrystia Freeland, the deputy prime minister and finance minister.

Canada became the first country to release a national AI strategy in 2017 with its $125-million Pan-Canadian Artificial Intelligence Strategy. Canada’s Artificial Intelligence and Data Act, if adopted in the coming months, will be the first of its kind in the world, said Prud’homme.

The Act sets out measures to regulate trade of AI systems and establishes common requirements for the “design, development, and use” of AI in order to reduce harm and biased output.

‘Great potential’

The book argues for a shift in understanding: AI is not a “neutral” technology, Prud’homme said, but a source of power. “Are there many companies that are operating on an even playing field or is it only a few companies that have so much power that there is a bit of a concentration that we should be worried about?” he asked.

AI developers should work with governments to create “safety protocols” which include “regulatory authorities,” oversight and tracking systems, watermarking systems to distinguish “real from synthetic” and public funding for AI safety research, among others, the letter said. Until these are developed, governments should institute a moratorium on AI development.

“I think that AI has great potential, but it also poses very significant risks. We don’t need to panic; we need to take those risks seriously and we need to regulate them,” said Prud’homme.

Society needs to slow down, the letter reads, and take the time to enjoy a long “AI summer” rather than “rush unprepared into a fall.”

• Email: mcoulton@postmedia.com | Twitter: marisacoulton

Source: Financial Post