Furman University student caught using ChatGPT to write philosophy essay

A South Carolina college philosophy professor warns that educators should expect a flood of cheating enabled by ChatGPT, an artificial-intelligence chatbot from OpenAI, after he caught one of his students using it to write an essay.

Darren Hick, a philosophy professor at Furman University in Greenville, South Carolina, wrote a lengthy Facebook post this month describing problems with the advanced chatbot and the “first plagiarist” he caught on a recent 500-word assignment on Hume and the paradox of horror.

ChatGPT, trained on a huge sample of text from the web, can parse natural-language prompts, hold conversations, and generate detailed text that many describe as strikingly human-like.

“ChatGPT responds within seconds with a response that looks like it was written by a human, and by a human with a good sense of grammar and an understanding of how essays should be structured,” Hick wrote.

“The first clue that I was looking at AI was that, despite the essay’s syntactic coherence, it didn’t make sense.”

Hick noticed a number of other warning signs.

“It said some true things about Hume, and it knew what the paradox of horror is, but after that it was just bull,” he wrote. “ChatGPT is also clumsy when quoting, another flag.”

Hick explained that AI would be a “game changer” for introductory courses.

“Although every time you prompt ChatGPT there’s at least a slightly different response, I’ve noticed some consistencies in how essays are structured,” he wrote. “That will be enough for me to hoist more flags in the future. But again, ChatGPT is still learning, so it may get better.”

“Expect a flood, folks, not a trickle,” Hick warned. “I expect to implement a policy that says if I believe material submitted by a student was produced by AI, I will throw it out and give the student an impromptu oral exam on the same material. Until my school develops a standard for dealing with such things, that’s the only way I can think of.”

A number of teachers and professors have warned about the capabilities of AI chatbots in recent weeks.

Kevin Bryan, an associate professor of strategic management at the University of Toronto who ran an AI-based entrepreneurship program and follows the industry closely, said he was “shocked” by ChatGPT’s capabilities after testing it by having the AI write answers to a number of exam questions.

“You can’t give take-home exams/homework anymore,” Bryan said at the start of a Twitter thread detailing the AI’s capabilities.

However, not everyone is ready to hold a funeral for student essays.

In Plagiarism Today, Jonathan Bailey argued that the college essay, which has been declining in popularity for years, is not actually dead.

“Despite the challenges, there are still times when an essay is a suitable assessment tool. Even if it is no longer the default or the gold standard, the essay is likely to remain a tool that instructors use to assess students’ understanding of the material,” Bailey wrote.

“AI will not be the death of the essay, but it may transform it. It may change the prompts used, the work being graded, and the overall approach to the concept.”

For its part, OpenAI released a statement: “The dialog format allows ChatGPT to answer follow-up questions, admit its mistakes, challenge false premises, and deny inappropriate requests.”

What is OpenAI’s ChatGPT chatbot and what is it used for?

OpenAI states that its ChatGPT model, trained using a machine-learning technique called Reinforcement Learning from Human Feedback (RLHF), can simulate dialogue, answer follow-up questions, admit mistakes, challenge false premises, and reject inappropriate requests.
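
As rough background on that technique: RLHF typically begins by fitting a reward model to human preference comparisons between candidate responses, and the chat model is then fine-tuned to score well under that reward. Below is a minimal toy sketch of the preference objective only, with a made-up stand-in scoring function so it runs anywhere; it is not OpenAI’s implementation.

```python
import math

# Hypothetical stand-in for a learned reward model. In real RLHF this is
# a neural network scoring whole responses; here it just rewards longer
# answers so the example runs with no ML dependencies.
def reward(response: str) -> float:
    return 0.1 * len(response.split())

def preference_loss(chosen: str, rejected: str) -> float:
    """Pairwise ranking loss commonly used to fit RLHF reward models:
    -log sigmoid(r_chosen - r_rejected). It shrinks as the model learns
    to score the human-preferred response above the rejected one."""
    margin = reward(chosen) - reward(rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A human labeler preferred the first answer over the second.
chosen = "The paradox of horror asks why we enjoy art that frightens us."
rejected = "Hume wrote things."
print(f"loss = {preference_loss(chosen, rejected):.4f}")
```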

Initial development involved human AI trainers providing the model with conversations in which they played both sides: the user and an AI assistant. The version of the bot available for public testing tries to understand users’ questions and responds with detailed answers that resemble human-written text in a conversational format.
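
To picture that two-sided setup: demonstration conversations of this kind are commonly stored as role-tagged message lists. The structure below is purely illustrative; the role names and fields are assumptions, not OpenAI’s internal data format.

```python
# Illustrative shape of a demonstration dialog where one human trainer
# wrote both sides; the role labels and fields are assumed, not official.
demonstration = [
    {"role": "user",      "content": "What is Hume's paradox of horror?"},
    {"role": "assistant", "content": "It asks why audiences take pleasure "
                                     "in art designed to provoke fear."},
    {"role": "user",      "content": "Can you give an everyday example?"},
    {"role": "assistant", "content": "Horror films: viewers pay to be "
                                     "frightened, and enjoy it."},
]

for turn in demonstration:
    print(f'{turn["role"]:>9}: {turn["content"]}')
```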

A tool like ChatGPT could be used in real-world applications like digital marketing, creating online content, responding to customer service requests, or as some users have discovered, even debugging code.
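
As a concrete sketch of how a developer might wire such a tool into one of those applications, the snippet below uses the chat endpoint OpenAI later added to its Python library (the pre-1.0 openai package). The API key and model name are placeholders, and treating this call shape as representative is an assumption; nothing here comes from the story above.

```python
import openai  # pip install "openai<1"; a valid API key is assumed

openai.api_key = "sk-..."  # placeholder; supply your own key

# Ask the model to debug a small snippet, one of the real-world uses
# mentioned above. The message list carries multi-turn context, which is
# what lets the bot handle follow-up questions.
messages = [
    {
        "role": "user",
        "content": "Why does this Python line fail?\n"
                   "for i in range(3) print(i)",
    },
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model name; use any chat model you have
    messages=messages,
)
print(response["choices"][0]["message"]["content"])
```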

The bot can answer a wide variety of questions while imitating a human conversational style.

As with many AI-driven innovations, ChatGPT is not without concerns. OpenAI has acknowledged the tool’s tendency to respond with “plausible-sounding but incorrect or nonsensical answers,” a problem it considers challenging to fix.

AI technology can also perpetuate societal biases around race, gender, and culture. Tech giants including Alphabet Inc.’s Google and Amazon.com have previously acknowledged that some of their experiments with AI were “ethically sensitive” and had limitations. At several companies, humans had to step in and fix the havoc their AI systems had created.

Despite these concerns, AI research remains attractive to investors. Venture-capital investment in companies developing and operating AI surged to nearly $13 billion last year, and up to $6 billion had been invested through October of this year, according to data from PitchBook, a Seattle-based company that tracks investment funds.