According to a study by Home Security Heroes, 98% of deepfakes are used in pornographic content

Deepfakes were perhaps the first example of the true capabilities of AI, and they have only gotten better over the years. While the first deepfakes to go viral were relatively harmless, like the many videos in which people impersonated Tom Cruise, it didn’t take long for the technology’s more sinister aspects to come to light.

The technology's usefulness for disinformation, political or otherwise, has already caused controversy. However, the real danger of deepfakes lies elsewhere. Home Security Heroes' State of Deepfakes 2023 study found that 98% of all deepfakes fall into a single category: explicit content.

It is not surprising that adult content producers would adopt technology capable of convincingly imitating a person's appearance, but the fact that the vast majority of deepfakes are used for this purpose is cause for concern. 98% of the people whose images are used in explicit deepfakes are women, while 48% of viewers are men. More importantly, 74% of men said they see nothing wrong with explicit deepfakes, or at least see no reason to feel guilty about them.

Another alarming statistic from the report: the number of deepfakes online increased by 550% between 2019 and 2023, with all but a few of them used in the adult content industry. Websites specializing in explicit deepfakes have drawn nearly 5 million unique visitors since the start of the year, and these also included seven of the 10 most popular adult video streaming sites.

If it were simply a matter of replacing adult actresses with AI, the debate would be different. In reality, however, most explicit deepfakes are made without the consent of the person depicted. Countless women have been blackmailed with deepfakes showing them in compromising scenarios, and many of these cases can be viewed as a form of assault. Even minors are not safe from deepfakes, as a current case in Spain involving 30 girls between the ages of 12 and 14 shows.

With women being the main victims and nearly three in four men failing to recognize how unethical these deepfakes are, it is more important than ever for governments to enact regulations that mitigate the significant psychological harm they can cause. Unfortunately, almost no state has passed a law on deepfakes. Even fewer have made it a crime to create an explicit deepfake without consent, forcing many victims to pursue their cases in civil court.

Source: Home Security Heroes report

And you?

What is your opinion on this topic?

Do you think the results of this Home Security Heroes study are credible or relevant?

See also

NSA, FBI and CISA release cybersecurity fact sheet on deepfake threats

According to a study by Integrity360, 68% of IT decision makers are concerned about the increase in deepfakes and 59% believe that AI is increasing the number of cyberattacks

According to the World Economic Forum, the amount of deepfake content online is increasing by 900% annually