Artificial intelligence (AI) poses no immediate threat to humanity’s existence, Microsoft’s president says, but he believes governments and companies need to act faster to address the technology’s risks by introducing what he calls “safety brakes.”
“We don’t see any danger in the coming years, in the next decade, of artificial intelligence posing an existential threat to humanity, but… let’s solve this problem before it happens,” says Brad Smith in an interview with La Presse Canadienne.
Mr. Smith, a Microsoft stalwart who joined the company in 1993, stresses that it is important to get to grips with the problems posed by technology so that the world doesn’t “constantly worry and talk about it.”
He believes the solution to potential problems lies in “safety brakes” that could work like the emergency mechanisms built into elevators, school buses and high-speed trains.
These brakes, he says, should be built into high-risk artificial intelligence systems that control critical infrastructure such as power grids, water networks and transport.
“Let’s learn from art,” says Mr. Smith.
“All films in which technology is an existential threat end the same way: man shuts down technology. So we have to provide a switch, a safety brake and ensure that the technology remains under human control. Let’s embrace this and do it now.”
Mr. Smith’s comments come as a race to harness and innovate with AI has taken hold across the tech sector and beyond following the release of ChatGPT, a conversational bot designed to generate human-like responses to text prompts.
Microsoft has invested billions in San Francisco-based ChatGPT creator OpenAI and also has its own AI-based technology, Copilot, which helps users draft content, suggests different ways to word a text they’re writing, and helps them create PowerPoint presentations from Word documents.
However, many are concerned about the pace of progress in AI. For example, Geoffrey Hinton, a British-Canadian deep learning pioneer often considered the “godfather of AI,” said he believes the technology could lead to bias and discrimination, unemployment, echo chambers, fake news, combat robots and other risks.
Canadian leadership
Several governments, including Canada’s, have begun developing safeguards around AI.
In a 48-page report released by Microsoft on Wednesday, Smith said his company supports Canada’s efforts to regulate AI.
Those efforts include a voluntary code of conduct released in September, whose signatories – including Cohere, OpenText, BlackBerry and Telus – promise to assess and mitigate the risks of their AI-based systems, monitor them for incidents and respond to any problems that arise.
Although the code has critics – such as Shopify founder Tobi Lütke, who sees it as an example of the country using too many “referees” when it needs more “builders” – Mr. Smith noted in the report that by developing the code, Canada has “demonstrated early leadership” and is helping the entire world work toward a common set of shared principles.
The voluntary code is expected to be followed by Canada’s upcoming Artificial Intelligence and Data Act, which would create new criminal provisions to ban the use of AI that could cause serious harm.
The bill, known as Bill C-27, passed first and second readings but is still being considered in committee. Ottawa has said it will not come into force until 2025 at the earliest.
When asked why he thinks governments need to move faster on AI, Smith says the world has had an “extraordinary year” since the launch of ChatGPT.
“When we say we should go faster, it’s honestly not a criticism,” he says.
“It’s about recognizing the current reality, where innovation is moving faster than most people expected.”
However, he sees Canada as one of the countries best prepared to keep pace with AI, with universities that have long focused on the technology and cities like Montreal, Toronto and Vancouver serving as hotbeds of innovation in the field.
“If there’s any government that I think can draw on a tradition to pass something like this, I think it’s Canada. I hope it will be the first,” says Mr. Smith.
“It won’t be the last if it’s the first.”
A necessary international approach
However, while Canada’s AI law is under “careful review,” Mr. Smith said Canada should consider what additional protections it can introduce in the meantime.
For example, he suggests that when governments purchase high-risk AI systems, vendors seeking contracts could be required to undergo third-party audits confirming that their systems meet relevant international standards related to AI.
In the report, Mr. Smith also supports an approach to AI that is “developed and used across borders” and that “ensures that an AI system certified as safe in one jurisdiction can also be certified as safe in another.”
He compared this approach to that of the International Civil Aviation Organization, which uses uniform standards to ensure that an aircraft does not need to be modified mid-flight between Brussels and New York to meet each country’s different requirements.
An international code would help AI developers confirm the security of their systems and improve compliance worldwide, since they could rely on internationally recognized standards.
“The voluntary code model provides an opportunity for Canada, the European Union, the United States, the other G7 members, as well as India, Brazil and Indonesia, to move forward together based on a set of shared values and principles,” he said in the report.
“When we can work with others on a voluntary basis, we can all move forward faster, with greater vigilance and focus. This is good news not just for the tech world, but for the entire world.”