A law professor has been falsely accused of sexually harassing a student in defamatory misinformation shared by ChatGPT, it has been claimed.
US criminal defense attorney Jonathan Turley has sparked fears about the dangers of artificial intelligence (AI) after he was falsely accused of unwanted sexual behavior on a trip to Alaska that he never took.
To reach this conclusion, ChatGPT reportedly relied on a cited Washington Post article that was never written and quoted a statement that the newspaper never published.
The chatbot also claimed that the “incident” took place while the professor was working on a faculty where he had never been employed.
In a tweet, the George Washington University professor said: “Yesterday, President Joe Biden stated that ‘it remains to be seen’ whether artificial intelligence (AI) is ‘dangerous.’ I would have a different opinion…
Professor Jonathan Turley has been falsely accused of sexual harassment by AI-powered ChatGPT
“I learned that ChatGPT falsely reported a sexual harassment allegation that was never made against me, on a trip that never occurred, while I was on a faculty where I never taught.
ChatGPT quick facts – what you need to know
- It’s a chatbot built on a large language model that can output human-like text and understand complex queries
- It launched on November 30, 2022
- By January 2023, it had 100 million users – growing faster than TikTok or Instagram
- The company behind it is OpenAI
- OpenAI secured a $10 billion investment from Microsoft
- Other “Big Tech” companies have rival chatbots of their own, such as Google’s Bard
‘ChatGPT relied on a cited Post article that was never written and quoted a statement that was never made by the newspaper.’
Professor Turley discovered the allegations against him after receiving an email from another professor.
UCLA professor Eugene Volokh asked ChatGPT to find “five examples” where “sexual harassment by professors” was a “problem in American law schools.”
In an article for USA Today, Professor Turley wrote that he was listed among the accused.
The bot reportedly wrote: “The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska (Washington Post, March 21, 2018).”
This is said to have happened while Professor Turley was employed at Georgetown University Law Center – a place where he has never worked.
“It wasn’t just a surprise to UCLA professor Eugene Volokh, who conducted the research. It came as a surprise to me, as I have never traveled to Alaska with students, the Post has never published such an article and I have never been accused of sexual harassment or assault by anyone,” he wrote for USA Today.
The AI bot cited a non-existent Washington Post article to back up its bogus claims
The false claims were investigated by the Washington Post, which found that Microsoft’s Bing chatbot – which runs on GPT-4 – had repeated the same claims about Turley.
This repeated smear appears to have followed press coverage highlighting ChatGPT’s initial error, showing how easily misinformation can spread.
After the incident, Microsoft’s senior communications director, Katy Asher, told the publication that the company was taking steps to ensure its platform was accurate.
She said: “We have developed a safety system including content filtering, operational monitoring and abuse detection to provide a safe search experience for our users.”
Professor Turley responded on his blog, writing: “You can be defamed by AI, and these companies merely shrug that they try to be accurate.
“Meanwhile, their false accounts spread across the internet. By the time you learn of a false story, the trail is often cold on its origins with an AI system.
“You are left with no clear avenue or author from whom to demand redress. You are left with the same question as Reagan’s Labor Secretary, Ray Donovan, who asked, ‘Where do I go to get my reputation back?’”
Web has reached out to both OpenAI and Microsoft for comment.
Professor Turley’s experience follows previous concerns that ChatGPT has not consistently provided accurate information.
Professor Turley’s experience comes amid fears that misinformation is spreading online
Researchers found that ChatGPT used fake journal articles and fabricated health data to support claims about cancer.
The platform also failed to return results as “comprehensive” as a Google search, it was claimed, and answered one out of ten breast cancer screening questions incorrectly.
Jake Moore, global cybersecurity advisor at ESET, warned that ChatGPT users should not take everything they read on it as “gospel”, to avoid the dangerous spread of misinformation.
He told Web: “AI-powered chatbots are designed to rewrite the data fed into their algorithms, but if that data is wrong or taken out of context, there’s a chance the output will incorrectly reflect what they were taught.
“The pool of data it has learned from is based on datasets such as Wikipedia and Reddit, which by their nature cannot be taken as gospel.
“The problem with ChatGPT is that it cannot verify the data, which could contain misinformation or even biased information. It’s even worse when the AI makes assumptions or fabricates data. In theory, the ‘intelligent’ part of the AI should handle this autonomously and produce reliable outputs. When the output is harmful, as it is in this case, that could be its downfall.”
These fears also come at a time when researchers suspect ChatGPT could corrupt people’s moral judgment and prove dangerous for “naïve” users.
Others have shared how software designed to speak like a human can show signs of jealousy – even prompting people to leave their marriages.
Mr. Moore continued: “We are moving into a time when we need to verify information more than ever before, yet we are still only on version 4 of ChatGPT, and its competitors are on even earlier versions.
“So it’s important that people do their own due diligence on what they read before jumping to conclusions.”
Will ChatGPT replace Google?
Gmail creator Paul Buchheit has predicted that “AI will eliminate the search engine results page” and cause Google “total disruption”.
A report from The New York Times also said that Google executives had declared a “code red” inside the company amid mounting pressure from ChatGPT.
One of the main ways Google makes money is by having advertisers pay to place their links next to the results of a search query, in the hope that users will click on them.
The fluidity and coherence of the results now being generated have those in Silicon Valley questioning the future of Google’s monopoly.
“The way I imagine this happening is that the URL/search bar of the [Google] browser gets replaced with AI that autocompletes my thought/question as I type it while also providing the best answer (which may be a link to a website or product),” said Buchheit.
“The old search engine backend will be used by the AI to gather relevant information and links, which will then be summarized for the user,” Buchheit explained.
“It’s like asking a professional human researcher to do the work, except the AI instantly does what would take a human many minutes.”
While some believe ChatGPT will replace Google, the AI disagrees.
“As an AI language model, I don’t have the ability to take over any company or organization, including Google,” the chatbot replied.
“My goal is to help and provide helpful answers to users who interact with me.
“Google is a multinational technology company with a strong market position and a wide range of products and services, so it is highly unlikely that any single entity, including an AI language model like me, could take over Google.
“In addition, I believe that companies like Google and AI language models like me can work together to offer even better solutions and services to users around the world.”