Artificial intelligence could produce soap operas to rival the BBC

Artificial intelligence could produce soap operas to rival the BBC in just FIVE YEARS, the director of Slow Horses reveals – as he tells MPs that British stories can be “enabled” by technology

A director of hit Apple TV series Slow Horses has told MPs that artificial intelligence could create soaps to rival the BBC in less than five years.

James Hawes spoke to politicians as part of the UK Culture, Media and Sport Committee's inquiry into British film and high-end television and the impact of technology on the industry.

During the inquiry, James shared the findings of a forum held following the news that the BBC's long-running soap Doctors would be axed at the end of the year.

The soap had struggled with declining ratings after an attempt to move it to a primetime slot failed to attract new audiences.

James, the deputy chairman of Directors UK, told MPs: “One of the members there started talking about AI, and that got me looking at how long it would take for a show like Doctors to be completely powered by generative AI. I did a survey with various VFX people…”

A director of Slow Horses on Apple TV has told MPs that artificial intelligence could create soaps to rival the BBC in less than five years (pictured: EastEnders' Jacqueline Jossa).

“I then spoke with some members of the legal team that advised SAG-AFTRA and the Writers Guild of America last summer, before I came here.”

“And the best estimate is that within three to five years, someone will be able to say, ‘Create a scene in an emergency room where a doctor comes in; he's having an affair with a woman, so they're flirting, and someone else dies on the table,’ and it will start creating it, and you will build these scenes, and it will be generative AI.”

“It may not be as polished as we were used to, but that’s how close we are, and I find that hard to believe with all the creatives involved.”

“I think the genie is out of the bottle; I think we have to live with it. I think it's incredibly helpful, too.”

“I think there are all parts of storytelling, and British storytelling, that can be enabled by this, but we have to protect the rights holders.”

James also told MPs that Slow Horses, a series he directed, was rejected by several British broadcasters before being picked up by Apple TV+.

“When Apple actually picked it up,” he said, “they wondered whether it was just too quirky and too British and whether it would catch on, even though of course we're known for the spy genre.”

“Gary Oldman's casting and the show's subsequent success show that even ‘quirky Brits’ can travel, and it is now the longest-running returning series on Apple.”

James Hawes spoke to politicians as part of the UK Culture, Media and Sport Committee's inquiry into British film and high-end television and the impact of technology on the industry

James also told MPs that Slow Horses, a series he directed, was rejected by several British broadcasters before being picked up by Apple TV+

“It showed that we can think beyond British parochialism or transform smaller British stories into ones that have an outsider perspective and universal themes.”

“I think that's really important. We need to be aware of the balance, the critical balance between the benefits of foreign investment and the existence of our own domestic industry.”

Developments in artificial intelligence, including the rise of ChatGPT, have threatened several industries in recent years.

Earlier this week, a “terrifying” new tool, Sora, capable of creating hyper-realistic videos from text, sparked warnings from experts.

Sora, unveiled by OpenAI on Thursday, shows striking examples such as drone footage of Tokyo in the snow, waves crashing against the cliffs of Big Sur, and a grandmother enjoying a birthday party.

Experts warn that the new artificial intelligence tool could wipe out entire industries such as film production and lead to a surge in deep fake videos ahead of the crucial US presidential election.

“Generative AI tools are evolving so quickly, and we have social media, which is an Achilles heel in our democracy, and it couldn’t have happened at a worse time,” Oren Etzioni, founder of TruMedia.org, told CBS.

“As we try to sort this out, we face one of the most consequential elections in history,” he added.

The quality of AI-generated images, audio and video content has risen rapidly over the past year, with companies such as OpenAI, Google, Meta and Stability AI racing to develop more advanced and accessible tools.

“Sora is capable of generating complex scenes with multiple characters, specific types of movement, and precise details of the subject and background,” OpenAI explains on its website.

“The model understands not only what the user asked for in the prompt, but also how these things exist in the physical world.”

The tool is currently being tested and evaluated for potential security risks. A date for a public release has not yet been set.

The company has revealed examples that are unlikely to be offensive, but experts warn that the new technology could spark a new wave of extremely lifelike deepfakes.

“We're trying to build this plane as we fly it, and it's going to land no later than November, and we don't have the Federal Aviation Administration, we don't have the history, and we don't have the tools to do this,” Etzioni warned.

Sora “will make it even easier for malicious actors to create high-quality video deepfakes, and give them more flexibility to create videos that could be used for offensive purposes,” Dr Andrew Newell, chief scientific officer at identity verification company iProov, told CBS.

“Anchors or people who create short videos for video games, education or advertising will be the most affected,” Newell warned.

Deep fake videos, including those of a sexual nature, are increasingly becoming a problem for both private individuals and those with public profiles.

“Look where we just got to in a year of image generation. Where will we be in a year?” Michael Gracey, a film director and visual effects expert, told the Washington Post.

Earlier this week, a “terrifying” new tool, Sora, capable of creating hyper-realistic videos from text, sparked warnings from experts

Another AI-generated video of Tokyo in the snow has shocked experts with its realism

“We will take several important security measures before making Sora available in OpenAI’s products,” the company wrote.

“We are working with red teamers – subject matter experts in areas such as misinformation, hateful content and bias – who will adversarially test the model.”

Adding: “We are also developing tools to detect misleading content, such as a detection classifier that can detect when a video was created by Sora.”

Deep fake images gained additional attention earlier this year when AI generated sexual images of Taylor Swift that were shared on social media.

The images originated on the website Celeb Jihad and showed Swift engaging in a series of sexual acts while wearing Kansas City Chiefs memorabilia and in the stadium.

The star was “furious” and considered legal action.

President Joe Biden has also spoken about the use of AI and revealed that he has fallen for deepfakes of his own voice.

“It’s already happening. AI devices are being used to deceive people. Deepfakes use AI-generated audio and video to smear reputations, spread fake news and commit fraud,” Biden said.

“AI allows fraudsters to clone your voice from a three-second recording. I watched one of mine a couple of times, and I said, ‘When the hell did I say that?’” Biden told a crowd of officials.

He went on to speak about the technology's ability to deceive people through fraud, while IT experts also warn about the potential misuse of AI in the political sphere.

On Friday, several major technology companies signed a pact to take “appropriate precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok have vowed to take preventative measures.

“Everyone is aware that no technology company, no government, no civil society organization has the ability to deal alone with the emergence of this technology and its possible nefarious uses,” said Nick Clegg, president of global affairs at Meta.