
FTC investigates OpenAI for data leaks and ChatGPT inaccuracy


The Federal Trade Commission has opened an expansive investigation into OpenAI, probing whether the maker of the popular ChatGPT bot violated consumer protection laws by putting personal reputations and data at risk.

The agency this week sent the San Francisco-based company a 20-page demand for records about how it manages risks associated with its AI models, according to a document reviewed by The Washington Post. The salvo represents the strongest regulatory threat yet to OpenAI’s business in the United States as the company embarks on a global charm offensive to shape the future of artificial intelligence policy.

Analysts have dubbed OpenAI’s ChatGPT the fastest-growing consumer app of all time, and its early success set off an arms race among Silicon Valley companies to roll out competing chatbots. The company’s chief executive, Sam Altman, has emerged as an influential figure in the debate over AI regulation, testifying on Capitol Hill, dining with lawmakers and meeting with President Biden and Vice President Harris.

Big Tech took a cautious approach to AI. Then came ChatGPT.

But now the company faces a new test in Washington, where the FTC has repeatedly warned that existing consumer protection laws apply to AI, even as the administration and Congress struggle to draft new regulations. Senate Majority Leader Charles E. Schumer (D-N.Y.) has predicted that new AI legislation is months away.

The FTC’s demands on OpenAI are the first indication of how it intends to enforce those warnings. If the FTC finds that a company has violated consumer protection laws, it can levy fines or place the business under a consent decree, which can dictate how the company handles data. The FTC has emerged as the federal government’s top watchdog of Silicon Valley, slapping large fines on Meta, Amazon and Twitter for alleged violations of consumer protection laws.

The FTC asked OpenAI to provide detailed descriptions of all complaints it has received about its products making “false, misleading, disparaging, or harmful” statements about people. The FTC is investigating whether the company engaged in unfair or deceptive practices that resulted in “reputational harm” to consumers, according to the document.

The FTC also asked the company to provide records related to a security incident the company disclosed in March, when a bug in its systems allowed some users to see payment-related information, as well as some data from other users’ chat histories. The FTC is probing whether the company’s data security practices violate consumer protection laws. OpenAI said in a blog post that the number of users whose data was revealed to someone else was “extremely low.”

OpenAI and the FTC did not immediately respond to requests for comment sent Thursday morning.

News of the investigation comes as FTC Chair Lina Khan is set to face a contentious hearing before the House Judiciary Committee on Thursday, where Republican lawmakers are expected to scrutinize her enforcement record and accuse her of mismanaging the agency. Khan’s ambitious plans to rein in Silicon Valley have suffered significant losses in court. On Tuesday, a federal judge rejected the FTC’s attempt to block Microsoft’s $69 billion deal to buy the video game company Activision.

The agency has repeatedly warned, in speeches, blog posts, op-eds and news conferences, that action on AI is coming. In a speech at Harvard Law School in April, Samuel Levine, director of the agency’s Bureau of Consumer Protection, said the agency was prepared to be “nimble” in getting ahead of emerging threats.

“The FTC welcomes innovation, but being innovative doesn’t give you license for recklessness,” Levine said. “We stand ready to use all our tools, including enforcement, to combat harmful practices in this area.”

The FTC has also published several colorful blog posts about its approach to regulating AI, at times invoking popular sci-fi movies to warn the industry against breaking the law. The agency has warned of scams that use generative AI to manipulate potential victims and of companies falsely overstating the capabilities of their AI products. Khan also joined officials from the Biden administration at a news conference in April about the danger of AI discrimination.

“There is no AI exemption from applicable laws,” Khan said at the event.

The information the FTC is seeking from OpenAI includes research, testing, or surveys that assess how well consumers understand “the accuracy or reliability of outputs” generated by its AI tools. The agency made extensive demands for records of the ways OpenAI’s products could generate disparaging statements, and asked the company to provide records of the complaints people have sent about its chatbot making false statements.

ChatGPT fabricated a sexual harassment scandal and named a real law professor as the accused

The agency is focusing on such fabrications after numerous high-profile reports of the chatbot producing false information that could damage people’s reputations. Mark Walters, a radio talk show host in Georgia, sued OpenAI for defamation, alleging that the chatbot made up legal claims against him. The lawsuit alleges that ChatGPT falsely claimed that Walters, the host of Armed American Radio, was accused of defrauding and embezzling funds from the Second Amendment Foundation. The chatbot produced the claims in response to a question about a lawsuit involving the foundation that Walters is not a party to, according to the complaint.

ChatGPT also claimed that a lawyer had made sexually suggestive comments and attempted to touch a student on a class trip to Alaska, citing an article that it said had appeared in The Washington Post. But no such article existed, the class trip never took place, and the lawyer said he had never been accused of harassing a student, The Post previously reported.

In its request, the FTC also asked the company to provide extensive details about its products and the way it advertises them. It also demanded details about the policies and procedures OpenAI follows before releasing a new product to the public, including a list of instances when OpenAI held back a large language model because of safety risks.

The agency also requested a detailed description of the data OpenAI uses to train its products, which mimic human language by ingesting text mostly scraped from Wikipedia, Scribd and other sites across the open web. The agency also asked OpenAI to describe how it refines its models to address their tendency to “hallucinate,” making up answers when the models don’t know the answer to a question.

OpenAI is also required to provide details of how many people were affected by the March security incident and any steps taken to respond.

The FTC’s request for records, known as a civil investigative demand, focuses largely on potential consumer protection violations, but it also asks OpenAI to provide some details about how it licenses its models to other companies.

Europe is advancing on AI regulation, challenging the might of the tech giants

The United States has lagged behind other governments in drafting AI laws and regulating the privacy risks associated with the technology. Countries within the European Union have taken steps to restrict chatbots from U.S. companies under the bloc’s privacy law, the General Data Protection Regulation. Italy temporarily blocked ChatGPT from operating there over privacy concerns, and Google had to delay the launch of its Bard chatbot after the Irish Data Protection Commission requested privacy assessments. The European Union is also expected to pass AI legislation by the end of the year.

Washington has a lot of catching up to do. On Tuesday, Schumer hosted an all-senators briefing with Pentagon and intelligence officials about the national security risks of artificial intelligence as he works with a bipartisan group of senators to craft new AI legislation. Schumer told reporters after the briefing that it would be “very difficult” to regulate AI as lawmakers try to balance the need for innovation with ensuring appropriate safeguards on the technology.

On Wednesday, Vice President Harris hosted a group of consumer and civil rights advocates at the White House for a discussion on the security risks of AI.

“It is a false choice to say that we can either advance innovation or protect consumers,” Harris said. “We can do both.”

Will Oremus contributed to this report.
