FTC investigating ChatGPT makers

The Federal Trade Commission has opened an investigation into OpenAI, the artificial intelligence startup that makes ChatGPT, examining whether the chatbot has harmed consumers through its collection of data and its publication of false information about individuals.

In a 20-page letter sent to the San Francisco-based company this week, the agency said it is also examining OpenAI’s security practices. The FTC asked OpenAI dozens of questions, including how the startup trains its AI models and handles personal data, and asked the company to provide documents and details to the agency.

The FTC is investigating whether OpenAI “engaged in unfair or deceptive privacy or data security practices or engaged in unfair or deceptive practices relating to risks of harm to consumers,” the letter said.

The Washington Post first reported on the investigation, which a person familiar with the matter confirmed. OpenAI declined to comment.

The FTC investigation poses the first major U.S. regulatory threat to OpenAI, one of the most prominent AI companies, and signals that the technology may face increasing scrutiny as people, businesses, and governments adopt more and more AI-based products. The rapidly evolving technology has raised alarms because chatbots, which can generate text in response to prompts, have the potential to replace people in their jobs and to spread disinformation.

Sam Altman, who leads OpenAI, has said the fast-growing AI industry needs to be regulated. In May, he testified before Congress to call for AI legislation, and he has visited hundreds of lawmakers with the goal of setting a policy agenda for the technology.

“I think if this technology goes wrong, it can go pretty wrong,” he said at the hearing in May. “We want to work with the government to prevent that.”

OpenAI has already come under regulatory pressure internationally. In March, Italy’s data protection authority banned ChatGPT, saying OpenAI had unlawfully collected users’ personal information and lacked an age-verification system to prevent minors from being exposed to illicit material. OpenAI restored access to the system the following month, saying it had made the changes the Italian authority requested.

The FTC is moving on AI with notable speed, opening an investigation less than a year after OpenAI introduced ChatGPT. Lina Khan, the FTC chair, has said technology companies should be regulated while technologies are emerging, rather than only once they mature.

In the past, the agency has typically opened investigations after a major public misstep by a company, such as its inquiry into Meta’s privacy practices following reports in 2018 that the company had shared user data with a political consulting firm, Cambridge Analytica.

Ms. Khan, who testified Thursday at a House committee hearing on the agency’s practices, has previously said the AI industry needs scrutiny.

“Although these tools are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the laws we are charged with administering, even in this new market,” she wrote in a May op-ed in The New York Times. “While the technology is moving swiftly, we already can see several risks.”

The investigation could force OpenAI to disclose its methods for building ChatGPT and the data sources it uses to build its AI systems. While OpenAI had long been fairly open about such information, it has lately said little about where the data for its AI systems comes from and how much of it goes into creating ChatGPT, likely because it is wary of competitors copying its work and is concerned about lawsuits over the use of certain data sets.

Chatbots, which are also being deployed by companies like Google and Microsoft, represent a major shift in the way computer software is created and used. They are poised to reinvent web search engines like Google Search and Bing, talking digital assistants like Alexa and Siri, and email services like Gmail and Outlook.

When OpenAI released ChatGPT in November, it immediately captured the public’s imagination with its ability to answer questions, write poetry, and riff on almost any subject. But the technology can also mix fact with fiction and even fabricate information, a phenomenon scientists call “hallucination.”

ChatGPT is driven by what AI researchers call a neural network, the same technology that translates between French and English on services like Google Translate and identifies pedestrians as self-driving cars navigate city streets. A neural network learns skills by analyzing data: by finding patterns in thousands of cat photos, for example, it can learn to recognize a cat.
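For readers curious about the mechanics, the sketch below shows the core idea in miniature. It is not OpenAI’s code; the data, the network size (a single artificial neuron, where real systems stack millions), and the training settings are invented for illustration. The point is only that “learning” means repeatedly adjusting numeric weights so predictions on labeled examples improve.

```python
# Toy illustration: a one-neuron "network" learns to separate two
# synthetic clusters of points, standing in for "cat" vs. "not cat".
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: two labeled clusters of 2-D points.
X = np.vstack([rng.normal(-1.0, 0.5, size=(100, 2)),
               rng.normal(+1.0, 0.5, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)  # weights the network learns from the data
b = 0.0          # bias term
lr = 0.1         # learning rate: how big each adjustment is

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training loop: nudge the weights in whatever direction reduces
# the prediction error on the labeled examples.
for _ in range(200):
    p = sigmoid(X @ w + b)           # predicted probability of label 1
    grad_w = X.T @ (p - y) / len(y)  # gradient of the cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")  # near 1.0 on this easy data
```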

Researchers at labs like OpenAI have developed neural networks that analyze vast amounts of digital text, including Wikipedia articles, books, news stories, and online chat logs. Known as large language models, these systems have learned to generate text on their own, but they can repeat incorrect information or combine facts in ways that produce inaccurate statements.
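A toy sketch can make that failure mode concrete. Real language models use neural networks trained on vast corpora; the word-pair counter below (with a made-up three-sentence “corpus”) only illustrates the shared statistical idea: the system learns which word tends to follow which, then samples new text from those probabilities, so it will fluently echo whatever its training text says, accurate or not.

```python
# Toy "language model": count which word follows which in example text,
# then generate new text by sampling from those counts.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the cat saw the dog . "
          "the dog sat on the rug .").split()

# Count the observed successors of each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(0)
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(follows[word])  # sample a plausible next word
    out.append(word)

# Fluent-looking but potentially false: the model recombines whatever
# patterns its training text contained.
print(" ".join(out))
```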

In March, the Center for AI and Digital Policy, an advocacy group focused on the ethical use of technology, asked the FTC to block OpenAI from releasing new commercial versions of ChatGPT, citing concerns about bias, disinformation, and security.

The organization updated the complaint less than a week ago, detailing additional ways the chatbot could cause harm, which it said OpenAI had itself pointed out.

“The company itself has recognized the risks involved in releasing the product and has called for regulation itself,” said Marc Rotenberg, president and founder of the Center for AI and Digital Policy. “The Federal Trade Commission must act.”

OpenAI has worked to refine ChatGPT and to reduce the frequency of biased, inaccurate, or otherwise harmful material. As employees and outside testers use the system, the company asks them to rate the usefulness and truthfulness of its responses. Then, through a technique called reinforcement learning, those ratings are used to more finely define what the chatbot will and will not do.
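The sketch below shows that rating-driven loop in a drastically simplified form. It is not OpenAI’s system (which trains large reward and policy networks on human preferences); here the “policy” is just a preference score per canned response, the ratings are invented, and a basic REINFORCE-style update makes highly rated answers more likely over time.

```python
# Toy reinforcement learning from ratings: responses that humans rate
# highly become more probable; poorly rated ones fade away.
import math
import random

responses = ["helpful answer", "evasive answer", "made-up answer"]
# Hypothetical ratings a tester might assign (higher = better).
rating = {"helpful answer": 1.0, "evasive answer": 0.2, "made-up answer": -1.0}

scores = [0.0, 0.0, 0.0]  # learnable preference score per response
lr = 0.3                  # learning rate

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

random.seed(0)
for _ in range(500):
    probs = softmax(scores)
    i = random.choices(range(len(responses)), weights=probs)[0]
    reward = rating[responses[i]]  # stand-in for a human rating
    # REINFORCE-style update: move the chosen response's score in
    # proportion to its reward, relative to the others.
    for j in range(len(scores)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        scores[j] += lr * reward * grad

for r, p in zip(responses, softmax(scores)):
    print(f"{r}: {p:.2f}")  # the helpful answer typically dominates
```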

This is an evolving story. Check back for updates.