SAG-AFTRA leader: Actors should get the same AI protections as studios

Duncan Crabtree-Ireland, national executive director of SAG-AFTRA.
Getty Images

Consent, credit and compensation.

Those were the conditions that SAG-AFTRA national executive director Duncan Crabtree-Ireland and John August, a member of the Writers Guild of America West negotiating committee, demanded before the work, likenesses and brands of guild members can be used to train artificial intelligence systems. At a hearing before the Federal Trade Commission on Wednesday, they joined representatives from other groups – including authors, voice actors and musicians – to warn against the advance of generative AI into the media and entertainment industry, which they say is undermining their work and increasing the risk of fraud.

The rise of AI tools has alarmed creators, who have pushed policymakers to introduce safeguards around the technology’s use. In the absence of regulation, the WGA has reached a deal with studios and streamers that gives members some protection over credit and how the technology is used. SAG-AFTRA has pushed for similar terms in its negotiations.

Crabtree-Ireland, who left the hearing early to return to negotiations with studios, said that human-created content from actors, such as their likenesses, voices and performances, “reflects real and substantial work” and is intellectual property that deserves legal protection. He highlighted a “double standard” in how studios and other companies looking to deploy the technology approach AI.

“If a person decides to infringe the copyrighted content of one of these companies and distribute it without paying for the licensing rights,” Crabtree-Ireland said, “that person would face significant financial and legal consequences.”

He added: “Why isn’t the opposite true? Shouldn’t the people whose intellectual property was used to train the AI algorithm be at least equally protected?”

Copyright law does not extend to the faces of actors or the voices of singers, but some states – such as California, New York and Florida – have laws that protect against, among other things, the unauthorized commercial use of a person’s name, likeness and persona. These laws are intended to give people the exclusive right to profit from their identity. Music publishers are currently pushing for a federal right-of-publicity law to combat vocal imitation in AI tracks, which would likely help actors and other creators as well.

As part of their tentative deal with the Alliance of Motion Picture and Television Producers, the writers secured terms ensuring that the use of generative AI tools will not affect their credit or compensation and must be disclosed by studios. August pointed to these provisions of the agreement and compared writers and other artists to small businesses, “each of which competes in the market for the sale of their works.” To succeed, authors develop unique styles and brands, which are essentially stolen by AI companies that indiscriminately scour the internet for material to train AI systems, he explained.

“This is theft, not fair use,” August said, referring to the legal doctrine under which copyrighted works can be used to create new works as long as the use is transformative. “Our work, which is protected by copyright and our own contractual rights, is used entirely without our permission, without attribution or compensation.”

For writers and authors, it’s not just about their scripts or books being copied and fed into the so-called large language models that power human-like AI bots capable of producing pitches and loglines in seconds. It’s also about AI companies profiting from their work by creating infringing material, which the FTC says could amount to an unfair method of competition. August said bad actors “use stolen goods to undercut a seller’s prices,” such as AI-generated knockoffs of popular novels sold on Amazon.

This remains a point of contention for writers, he continued, since the WGA deal only covers their work for studios, while “the majority of the actual work on AI is done by companies like Google, Facebook and OpenAI,” which have no contractual relationship with the guild. August emphasized: “Public policy will play a crucial role in protecting our members.”

At the hearing, many of the concerns raised by the WGA were echoed by the Authors Guild’s political director, Umair Kazi. Most of his comments focused on AI companies’ use of members’ works as training data, which enables the production of competing derivative works.

“It is inherently unfair to use copyrighted works to create highly profitable technology that is also capable of producing competing derivative works without the consent, compensation or acknowledgment of the creator,” Kazi said. “There is a serious risk of market dilution from machine-generated books and other works that can be mass-produced inexpensively, and will inevitably reduce the economic and artistic value of human-created works.”

For example, generative AI is already being used to imitate popular authors to create inferior e-books. Kazi continued, “Earlier this year, AI-generated books dominated Amazon’s bestseller list in the young adult fiction category.”

Last month, the Authors Guild – led by prominent authors such as George R.R. Martin, Jonathan Franzen and John Grisham – entered the legal battle against OpenAI. The group, which has more than 13,000 members, likely represents the strongest opponent suing the company, in a case that could result in hundreds of millions of dollars in damages and an order requiring the company to destroy systems that were trained on infringing copyrighted works.

A potential licensing market is a key factor in whether AI companies can benefit from fair use protection in lawsuits accusing them of copyright infringement. AI companies will likely have to contend with the Supreme Court’s recent decision in Andy Warhol Foundation for the Visual Arts v. Goldsmith, which effectively limited the scope of fair use. In that case, the majority emphasized that an analysis of whether an allegedly infringing work has been sufficiently transformed must be balanced against the “commercial nature of the use.” If authors can demonstrate that OpenAI’s exploitation of their novels undermines their economic prospects of profiting from their works – for example, by interfering with potential licensing agreements the company could have entered into instead – then a finding of fair use is unlikely, according to legal experts consulted by THR.

An opt-in system should be mandatory in any potential licensing scheme, August said. This means creators would have to affirmatively agree to their work being included in training data, rather than being forced to opt out to have it excluded.

Several speakers also alerted the FTC to the increase in fraud using AI tools. Tim Friedlander, president of the National Association of Voice Actors, pointed to deepfake ads featuring Tom Hanks and MrBeast. “Currently, only three seconds of source audio is required to create a realistic voice clone, and synthetic content can be used to deceive consumers into believing that trustworthy voices are communicating with them,” he said.

Just this week, Hanks and MrBeast took to social media to warn fans that companies are stealing their likenesses without consent to create AI versions of themselves for commercial purposes.

In one of the more disturbing claims, Sara Ziff, founder of the Model Alliance, said that modeling agencies are using AI deepfakes instead of hiring real models to achieve diversity goals.

“A digital model created in 2017 by the world’s first fully digital modeling agency using AI has been the face of high-end brands such as BMW and Louis Vuitton,” said Ziff, who noted that Levi’s announced this year that it would use AI-generated models to increase the appearance of diversity. “Critics have called this a form of digital blackface.”

The scope of potentially deceptive business practices that concern the FTC also includes steps some companies may take to undermine workers. Commissioner Alvaro Bedoya pointed to studios requiring background actors to have their likenesses scanned for future use.