How pedophiles sell child abuse images created with artificial intelligence

Época NEGÓCIOS

Pedophiles are using artificial intelligence (AI) technology to create and sell realistic child sexual abuse material, a BBC investigation has found.

Some are accessing the images by paying subscriptions to accounts on mainstream content-sharing sites such as Patreon.


Patreon states that it has a “zero tolerance” policy towards these types of images on its website.

Britain’s National Police Chiefs’ Council (NPCC) said it was “outrageous” that some platforms were making “huge profits” without taking “moral responsibility” for their content.

GCHQ, the UK government’s intelligence, security and cyber agency, responded to the report by saying: “Child abusers use every technology and some believe the future of child abuse material lies in AI-generated content.”

The creators of the abuse images use AI software called Stable Diffusion, which generates images for use in art or graphic design.

AI enables computers to perform tasks that normally require human intelligence.

With Stable Diffusion software, users can describe in words the images they want and the program then creates the image.

However, the BBC has found that the program is being used to create realistic images of child sexual abuse, including the rape of babies and children.

UK police online child abuse teams say they are already encountering this type of content in their investigations.

Journalist Octavia Sheepshanks says the internet has been flooded with AI-generated images. Photo: Via BBC

Researcher and freelance journalist Octavia Sheepshanks has been investigating this issue for several months. She contacted the BBC through the children’s charity NSPCC to disclose her findings.

“Ever since AI-generated images became possible, the internet has been flooded with them. It doesn’t just affect very young girls; [pedophiles] talk about little kids,” she says.

Under UK law, a computer-generated “pseudo-image” depicting child sexual abuse is treated as a real image. Possession, posting or transmission of this type of content is illegal in the UK.

NPCC Child Protection Officer Ian Critchley says it is wrong to argue that no one is harmed by such “synthetic” images simply because they do not depict real children.

He warns that a pedophile “can move down the offense scale from thought to synthetic image and then actual abuse of a child”.

According to the BBC investigation, the abuse images are shared in a three-step process:

  • Pedophiles create images with AI software;
  • They promote the images on platforms such as the Japanese photo-sharing site Pixiv;
  • Those accounts carry links directing customers to more explicit images, which they can pay to view on sites like Patreon.

Some of the image creators post on a popular Japanese social media platform called Pixiv, which is mostly used by artists sharing manga and anime.

However, because the site is hosted in Japan, where sharing drawings of sexualized children is not illegal, artists use the platform to publicize their work in groups and via hashtags which index subjects by keywords.

A spokesman for Pixiv said the company is prioritizing this issue, and that in May it banned all photorealistic depictions of sexual content involving minors.

The company says it has strengthened its monitoring systems and is committing significant resources to counteracting issues related to AI developments.

Sheepshanks told the BBC that her research suggests that users appear to be producing child abuse images on an industrial scale.

“The volume is huge. People [the creators] aim to produce at least 1,000 images a month,” she says.

User comments on individual images on Pixiv make it clear that they have a sexual interest in children, with some users even offering to provide non-AI-generated abuse images and videos.

Sheepshanks has been monitoring a few groups on the platform.

“Within these 100member groups, people are sharing things like, ‘Look, here’s a link to real things,'” she says.

Different price levels

Many of the Pixiv accounts have links in their bios to so-called “uncensored content” on the US content-sharing site Patreon.

Patreon is valued at around $4 billion and says it has more than 250,000 creators, most of them legitimate accounts belonging to well-known celebrities, journalists and writers.

Fans can support creators by subscribing monthly to get access to blogs, podcasts, videos and images, starting at $3.85 (approx. R$18) per month.

However, our investigation found that Patreon accounts sell obscene photorealistic AI-generated images of children, at varying prices depending on the type of material requested.

One user wrote on his account, “I train my girls on the PC,” adding that they showed “submission.” Another user offered “exclusive uncensored art” for $8.30 a month.

The BBC sent Patreon one example, which the platform confirmed was “semi-realistic” and in violation of its policies. It said the account was immediately removed.

Patreon said it has a “zero tolerance” policy, emphasizing, “Creators may not fund content that deals with sexual topics involving minors.”

The company said the rise of AI-generated malicious content online is “real and disturbing,” adding that it has identified and removed “increasing amounts” of such content.

“We have already banned AI-generated synthetic child exploitation material,” the platform said, adding that it has dedicated teams, technologies and partnerships to “keep youth safe.”

NPCC’s Ian Critchley is concerned the tide of realistic AI or “synthetic” images could slow the identification of real victims of abuse. Photo: Via BBC

The AI image generator Stable Diffusion began as a global collaboration between scientists and several companies, led by the British firm Stability AI.

Several versions have been released, with limitations in the code that control the type of content that can be created.

But last year, an earlier “open-source” version was released to the public, allowing users to remove all filters and use the software to create any image, including illegal ones.

Stability AI told the BBC that it “prohibits any misuse for illegal or immoral purposes on our platforms and our policies make it clear that this includes child sexual abuse material”.

“We strongly support law enforcement efforts against those who misuse our products for illegal or nefarious purposes,” says Stability AI.

As artificial intelligence advances at a rapid pace, questions are being raised about the future risks it could pose to people’s privacy, human rights and security.

A GCHQ Head of Mission on Child Sexual Abuse [full name withheld for security reasons] told the BBC: “GCHQ supports law enforcement to stay ahead of emerging threats such as AI-generated content and ensure there is no safe space for offenders.”

The NPCC’s Ian Critchley says he’s concerned the flood of realistic AI, or “synthetic” imagery, could slow down the process of identifying real abuse victims.

He explains: “It places an additional demand on policing to identify where in the world a real child is being abused, rather than an artificial or synthetic child.”

Critchley believes this is a pivotal moment for society to define the future of the Internet.