Should AI development require a license? ChatGPT's creator says yes, but some experts disagree


OpenAI CEO Sam Altman proposed to Congress this week that lawmakers should require companies to obtain a federal license before developing advanced artificial intelligence technologies like his organization’s ChatGPT.

“We believe government intervention will be critical to mitigating the risks of increasingly powerful models,” Altman said in testimony before the Senate Judiciary Subcommittee on Privacy, Technology and Law.

Sam Altman, CEO and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing on Tuesday, May 16, 2023 in Washington, DC, USA. Congress debates the potential and pitfalls of artificial intelligence as a product… (Photographer: Eric Lee/Bloomberg via Getty Images / Getty Images)

“For example,” Altman continued, “the US government might consider a combination of licensing and testing requirements for the development and release of AI models above a threshold of capabilities.”

Senators on both sides of the aisle welcomed the idea as lawmakers scramble to regulate the rapidly evolving sector of the tech industry, lamenting during the hearing that Congress had failed to do more to rein in the internet and social media when those technologies debuted to the public.

TIKTOK CLAIMS MONTANA BAN VIOLATES FIRST AMENDMENT, ENCOURAGES USERS TO KEEP USING APP DESPITE BAN

However, experts say that rushing into drastic government restrictions on advanced AI would be a mistake, if such restrictions are even feasible.

As news of Altman’s licensing proposal spread, critics on social media warned that the move would benefit well-funded, early-moving AI players like OpenAI, Google, and Microsoft, leading to regulatory capture that would stifle competition from potential rivals.

Sam Altman, CEO of OpenAI, the nonprofit whose subsidiary developed ChatGPT, called on Congress to introduce a federal licensing requirement for companies looking to develop advanced AI like his organization’s large language models (LLMs). Skeptics fear that such a rule… (Jonathan Raa/NurPhoto via Getty Images / Getty Images)

OpenAI is a nonprofit organization, and Altman reportedly does not own shares in its capped-profit subsidiary, but ChatGPT’s dominance has led critics to question whether Altman was looking for a way to shut the door on upstart rivals. Similar skepticism greeted social media companies like Facebook owner Meta and Twitter’s former leadership when they openly called for government regulation.

WHAT ARE THE BIGGEST NAMES IN TECHNOLOGY DOING RELATED TO AI?

Patrick Hedger, executive director of the Taxpayers Protection Alliance, told FOX Business: “Some will say that’s not his motive because Altman doesn’t have a stake in the company, and while I think there are some noble intentions behind what he’s trying to do, there are many others who have a vested interest in both OpenAI and their own AI products and are looking for a regulatory moat.”

Responding to Altman, Hedger tweeted that the proposal amounted to yet another misguided occupational licensing scheme.

“The regulation should take effect above a capability threshold,” Altman later clarified. “AGI [artificial general intelligence] safety is really important, and frontier models should be regulated.” He added: “Regulatory capture is bad, and we shouldn’t mess with models below the threshold. Open source models and small startups are obviously important.”

(L-R) Christina Montgomery, Chief Privacy and Trust Officer at IBM; New York University Professor Emeritus Gary Marcus; and OpenAI CEO Samuel Altman are sworn in during a Senate Judiciary Subcommittee on Privacy, Technology and Law hearing on… (Photo by ANDREW CABALLERO-REYNOLDS/AFP via Getty Images / Getty Images)

Hedger pointed out that an IBM executive also testified at the Senate hearing, calling for regulations on AI.

“They are a company of a size and scale that can comply with these types of regulations,” Hedger said. “Even if Altman’s motives are purely noble in nature, they are being exploited by many companies that are able to comply with an AI licensing regime, choking off the decentralized innovation that such a regime would halt.”

Hedger says the potential harms of AI are overstated and sees potential for the technology to streamline a number of existing technologies.

“A lot of people are trying to compare this to the early days of the internet, and Congress feels like they weren’t ahead of the internet,” he said. “But that’s exactly why the internet has been a success story: it wasn’t stifled by regulation from the start, and I would hate to see AI stifled that way.”

DEMOCRATIC SENATOR PROPOSES NEW FEDERAL AGENCY TO REGULATE AI

Meanwhile, some pundits wonder whether Congress, or anyone else, is capable of containing advanced AI at this stage of its explosive growth, even with the right expertise.


Alex Harmsen is a technology entrepreneur who has worked in AI software for several years, from autonomous vehicle technology to robotics to his ChatGPT-based investment tool, PortfolioPilot.

Harmsen’s AI-powered investing guide is one of the few with a verified ChatGPT plugin, and he has built many of his own AI models over the past 10-15 years. “Even so,” he says, “I feel like this is happening faster than I can adapt to it.” He says he cannot imagine how the rapid development must feel to a layperson.

“I think if it’s not regulated, it’s going to outrun us, and it’s going to happen a lot faster than we as humanity can control,” he explained. “I also think if it is regulated, it will also happen faster than we can control.”

GET FOX BUSINESS ON THE GO BY CLICKING HERE

Harmsen told FOX Business he doesn’t have much faith in governments crafting proper regulations around AI, let alone licensing requirements, because it would take years for AI experts with in-depth knowledge to build a framework (though he expects the technology to become “infinitely more powerful” within a few months), and enforcement would be “enormously difficult.”

“In reality, the problem will not come from OpenAI or Google, but from the massive numbers of open-source models that can be accessed and run anywhere, on any cloud, on anyone’s server, and can be used for everything from political alignment to spam to fraud and disinformation,” Harmsen said. “I don’t think we can put this genie back in the bottle.”