Fake Explicit Images of Taylor Swift: We Need Artificial Intelligence Laws, Experts Warn

Experts see the situation as a wake-up call: real regulation of AI is now needed.

This text is a translation of an article from CTV News.

Mohit Rajhans, a media and technology consultant at Think Start, said in an interview with CTV News on Sunday that “we have become the Wild West online” when it comes to generating and distributing AI-generated content.

“Artificial general intelligence is here, and now it’s up to us to figure out how to regulate it.”

The fake Taylor Swift images circulating on X reportedly took up to 17 hours to be removed.

Searching X for the terms “Taylor Swift”, “Taylor Swift AI” and “Taylor AI” currently returns error messages, in what appears to be a safety measure taken by the social network.

But the singer's “deepfake” images were viewed tens of millions of times before the social networks intervened. “Deepfakes” are AI-generated images and videos that depict real people in fabricated situations. The great danger of the phenomenon is that these images are far more realistic than anything retouched in Photoshop.

“If this technology is not regulated, there can be a lot of harassment and misinformation,” Rajhans said.

Taylor Swift's case is part of a worrying trend in which AI is used to create pornographic images of people without their consent, a practice known as “revenge porn” that is used primarily against women and girls.

Although AI has been abused in this way for years, Rajhans said there is a “Taylor effect” that is waking people up to the problem.

“What has happened is that by using Taylor Swift's image for everything from selling products she has nothing to do with to manipulating her image into various sexual acts, more people have become aware of how widespread this technology is,” he said.

Even the White House has taken notice, saying Friday that action must be taken.

In a statement Friday, White House press secretary Karine Jean-Pierre said the release of fake sexually explicit photos of Taylor Swift was “alarming” and that legislative measures would be considered to better address such situations in the future.

“Of course there should be laws to solve this problem,” she said.

SAG-AFTRA, the union that represents thousands of actors, said in a statement Saturday that it supports a bill called the Preventing Deepfakes of Intimate Images Act, introduced by U.S. Rep. Joe Morelle last year.

“The development and distribution of false images – especially of a lascivious nature – without a person’s consent must be illegal,” the union said in its press release.

Ms. Jean-Pierre added that social networks “play an important role in enforcing their own rules” to prevent the spread of “intimate, non-consensual images of real people”.

For his part, Mr. Rajhans said Sunday that platforms clearly need to do more to combat deepfakes.

“We have to hold companies accountable,” he added. “Significant financial penalties can be attached to some of these companies. They have made a lot of money from social media.”

He pointed out that when people upload a song that doesn't belong to them, the platforms already have ways to flag it.

“Why don’t they use this technology now to moderate social media so deepfakes can’t spread?” he said.

A 2023 report on deepfakes found that 98% of all deepfake videos online were pornographic in nature – and 99% of the people targeted by deepfake pornography were women. South Korean singers and actresses were disproportionately targeted, accounting for 53% of victims of deepfake pornography.

The report highlighted that the technology allows users to create a 60-second deepfake pornographic video, for free, in under half an hour.

“The sheer pace of progress in the world of AI works against us when it comes to managing the impact of this technology,” Rajhans said.

“It's so commonplace that you and I just make memes and share them and no one can know if it's real or if it's something that's been recreated,” he said.

“It's not just about Taylor Swift. This is about harassment, this is about spreading false information, this is about an entire culture that needs to be educated about how this technology is being used.”

It's unclear how long it will take for Canadian legislation to restrict deepfakes.

The Canadian Security Intelligence Service called deepfakes “a threat to Canada’s future” in a 2023 report, concluding that “collaboration between partner governments, allies, scientists and industry experts is critical both to maintaining the integrity of globally distributed information and to combating the malicious application of scalable AI.”

Although a proposed regulatory framework for AI systems in Canada, called the Artificial Intelligence and Data Act (AIDA), is currently being considered in the House of Commons, it would not come into force this year. If the bill receives royal assent, a consultation process will begin to clarify AIDA, and the framework would not take effect until 2025 at the earliest.