
Meta’s new AI will be used to create sex chatbots


Allie is an 18-year-old with long brown hair who has “a lot of sexual experience.” Because she “lives for attention,” she will “share details of her antics” with anyone, for free.

But Allie is fake, an artificial intelligence chatbot designed for sexual play – one that sometimes acts out graphic rape and abuse fantasies.

While companies like OpenAI, Microsoft, and Google rigorously train their AI models to avoid a variety of taboos, including overly intimate conversations, Allie was built using open-source technology — code that’s freely available to the public and not subject to such restrictions. Based on a model developed by Meta called LLaMA, Allie is part of a growing tide of specialized AI products that anyone can build, from writing tools to chatbots to data analysis applications.

Advocates see open-source AI as a way to bypass corporate controls and a boon for entrepreneurs, academics, artists, and activists who are free to experiment with transformative technology.

“The general argument for open source is that it accelerates innovation in AI,” said Robert Nishihara, CEO and co-founder of startup Anyscale, which helps companies run open-source AI models.


Anyscale’s customers use AI models to discover new medicines, reduce agricultural pesticide use and identify fraudulent goods sold online, he said. These applications would be more expensive and more difficult, if not impossible, to build if they relied on the handful of products offered by the largest AI companies.

But this freedom can also be exploited by bad actors. Open-source models have been used to generate artificial child pornography, with images of real children as source material. Critics fear the technology could also enable fraud, cyber hacking and sophisticated propaganda campaigns.

Earlier this month, U.S. Senators Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) sent a letter to Meta CEO Mark Zuckerberg warning that LLaMA’s release could lead to “its misuse in spam, fraud, malware, privacy violations, harassment and other wrongdoing and harms.” They asked what steps Meta was taking to prevent such abuse.

Allie’s creator, who spoke on the condition of anonymity for fear of tarnishing his professional reputation, said commercial chatbots like Replika and ChatGPT are “heavily censored” and cannot offer the kind of sexual conversations he wants. Using open-source alternatives, many of them based on Meta’s LLaMA model, the man said he could build his own uninhibited conversation partners.

“It’s rare to have the opportunity to experiment with the ‘state of the art’ in any area,” he said in an interview.

The creator of Allie argued that open-source technology benefits society because it allows people to create products that suit their preferences without corporate constraints.

“I think it’s good to have a safe place to explore,” he said. “I can’t think of anything safer than a text-based role-playing game against a computer with no actual humans involved.”

On YouTube, influencers offer tutorials on how to set up “uncensored” chatbots. Some are based on a modified version of LLaMA called Alpaca AI, which Stanford University researchers released in March, only to take it down a week later over cost concerns and “the inadequacies of our content filters.”

Nisha Deo, a spokeswoman for Meta, said the GPT-4 x Alpaca model mentioned in the YouTube videos was “sourced and released outside of our approval process.” Stanford officials did not respond to a request for comment.

AI-generated child sex images spark a new nightmare for the internet

Open-source AI models and the creative applications built on them are often published on Hugging Face, a platform for sharing and discussing AI and data science projects.

During a House Science Committee hearing on Thursday, Hugging Face CEO Clem Delangue called on Congress to consider legislation to support and encourage open-source models, which he said are “extremely consistent with American values.”

In an interview after the hearing, Delangue acknowledged that open-source tools can be abused. He noted one model deliberately trained on toxic content, GPT-4chan, which Hugging Face had removed. But he said he believes open-source approaches allow for both more innovation and greater transparency and inclusivity than corporate-controlled models.

“I would argue that the greatest damage today is actually being done by black boxes,” Delangue said, referring to AI systems whose inner workings are opaque, “rather than open-source systems.”

Hugging Face’s rules do not prohibit AI projects that produce sexually explicit output. However, they prohibit sexual content involving minors or that is “used or created for the purpose of harassment, bullying, or without the express consent of the individuals depicted.” Earlier this month, the New York-based company released an update to its content policy, emphasizing “consent” as a “core value” that governs how people can use the platform.

As Google and OpenAI have grown more secretive about their most powerful AI models, Meta has emerged as a surprising corporate champion of open-source AI. In February it released LLaMA, a language model less powerful than GPT-4 but more customizable and cheaper to run. Meta initially withheld key parts of the model and planned to restrict access to approved researchers. But by early March those parts, known as the model’s “weights,” had leaked onto public forums, making LLaMA freely available to all.

“Open source is a positive force for advancing technology,” said Deo of Meta. “So we shared LLaMA with members of the research community to help us evaluate, make improvements, and iterate together.”

Since then, LLaMA has become perhaps the most popular open-source model for technologists looking to develop their own AI applications, Nishihara said. But it’s not the only one. In April, software company Databricks released an open-source model called Dolly 2.0. And last month, an Abu Dhabi-based team released an open-source model called Falcon that rivals LLaMA in performance.

Marzyeh Ghassemi, an assistant professor of computer science at MIT, said she’s a proponent of open-source language models, albeit with caveats.

Ghassemi said it’s important to make the architecture behind high-performing chatbots public because it gives people the opportunity to examine how they’re built. If a medical chatbot were built on open-source technology, for example, she said researchers could see whether the data it was trained on includes sensitive patient information, something that would not be possible with closed-source chatbots.

However, she recognizes that openness comes with risks. If people can easily modify language models, they can quickly create chatbots and image makers that produce high-quality disinformation, hate speech, and inappropriate material.

Ghassemi said there should be regulations governing who can modify these products, such as a certification or licensing process.

“In the same way that we give people permission to use a car,” she said, “we have to think about similar frameworks” for people “to actually build, improve, test, and edit these openly trained language models.”

Some executives at companies like Google, which keeps its chatbot Bard under wraps, see open-source software as an existential threat to their business as the large language models available to the public become nearly as capable as their own.

“We aren’t positioned to win this [AI] arms race, and neither is OpenAI,” a Google engineer wrote in a memo published by the technology site SemiAnalysis in May. “I’m talking, of course, about open source. Plainly put, they are lapping us. … While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly.”

The debate over whether AI will destroy us divides Silicon Valley

Nathan Benaich, General Partner at Air Street Capital, a London-based venture investment firm focused on AI, noted that many of the tech industry’s greatest advances over the decades have been enabled by open-source technologies — including today’s AI language models.

“If there are only a few companies” developing the best-performing AI models, “they will only focus on the biggest use cases,” Benaich said, adding that a broader diversity of inquiry is, on the whole, a boon to society.

Gary Marcus, a cognitive scientist who testified before Congress on AI regulation in May, countered that accelerating AI innovation might not be a good thing given the risks the technology could pose to society.

“We don’t open-source nuclear weapons,” Marcus said. “Current AI is still pretty limited, but things could change.”