The ChatGPT creator confirms a bug that allowed some users to snoop on the chat histories of others

The creator of ChatGPT has confirmed that a bug in the system has allowed some users to snoop on other people’s chat histories.

OpenAI CEO Sam Altman confirmed last night that the company has a “significant issue” threatening the privacy of conversations on its platform.

The revelations came after several social media users shared ChatGPT conversations online that they had not participated in.

As a result, users were blocked from viewing chat history between 8:00 and 17:00 (GMT) yesterday.

Mr. Altman said: “We had a significant issue in ChatGPT due to a bug in an open source library, for which a fix has now been released and which we have just finished validating. A small percentage of users were able to see the titles of other users’ conversation history.”

On Monday, it was confirmed that a “small percentage” of ChatGPT users were able to see other people’s chat history

ChatGPT quick facts – what you need to know

  • It’s a chatbot built on a large language model that can output human-like text and understand complex queries
  • It launched on November 30, 2022
  • By January 2023, it had 100 million users — faster than TikTok or Instagram
  • The company behind it is OpenAI
  • OpenAI secured a $10 billion investment from Microsoft
  • Other “big tech” companies like Google have their own competitors like Google’s Bard

OpenAI, the company behind ChatGPT, was founded in San Francisco in 2015 by a group of investors and researchers, including current CEO Sam Altman.

It’s a large language model trained on a huge amount of text data so that it can generate responses to a given prompt.

People all over the world have used the platform to write human-like poems, lyrics and various other written works.

However, a “small percentage” of users this week were able to see chat titles in their own conversation history that didn’t belong to them.

On Monday, one person on Twitter warned others to be “careful” with the chatbot that had shown them other people’s talking points.

A screenshot of their chat list showed a range of titles, including “Girl Chases Butterflies,” “Books on Human Behavior,” and “Boy Survives Solo Adventure,” though it was unclear which of these were not theirs.

They said: “If you use #ChatGPT, be careful! There is a risk that your chats will be shared with other users!

“Today I was presented with another user’s chat history. I couldn’t see any content, but I could see the titles of their recent chats.”

Sam Altman, CEO of OpenAI, confirmed that ChatGPT had a “significant” issue yesterday

Users were blocked from viewing chat history between 8:00 and 17:00 (GMT) yesterday

One person on Twitter warned others to be “careful” with the chatbot showing them other people’s topics of conversation

Tips to keep chatbots at bay:

Don’t get personal: AI chatbots are designed to learn from every conversation, honing their skills in “human” interaction, but also creating a more accurate profile of you that can be saved. If you’re concerned about how your information might be used, avoid giving them personal information.

Phishing galore: Artificial intelligence gives online scammers new opportunities, and you can expect a rise in phishing attacks as hackers use bots to craft increasingly realistic scams. Traditional phishing giveaways such as bad spelling or grammar in an email may disappear, so check the sender’s address and look for inconsistencies in links or domain names instead.

Use an antivirus: Hackers have successfully manipulated chatbots to create simple malware, so it pays to have a tool like Threat Protection, which can warn you about suspicious files and protect you if you download them.

(Source: NordVPN)

During the incident, the user added that they encountered numerous network connection errors, as well as errors saying their chat history could not be loaded.

According to the BBC, another user said they could see conversations written in Mandarin, as well as one titled “Chinese Socialism Development.”

Although ChatGPT’s chat history feature was temporarily disabled while the problem was fixed, some warn that the exposure of user data could lead to wider problems.

Marijus Briedis, Cybersecurity Expert at NordVPN said: “This is a monumental own goal by OpenAI. Rising interest in artificial intelligence chatbots like ChatGPT and Google’s Bard has enticed millions to try the technology. However, this mistake will make many take a step back.

“If we are going to increasingly rely on AI-powered tools in the future, trust in them is key. The integrity of the information we share with chatbots and other devices is key to that trust and should never be compromised.

“Our recent research on the dark web revealed that cybercriminals have been circling ChatGPT for months to see how it can be used to improve scams like phishing and malware. News of the security breach will encourage them that user data is another way to benefit from this revolutionary technology.”

Other companies have also raised concerns about the online language model.

Last month, JP Morgan Chase joined the likes of Amazon and Accenture in restricting use of the AI chatbot ChatGPT among the company’s roughly 250,000 employees over privacy concerns.

One of the biggest shared concerns was that data could be used by ChatGPT’s developers to improve algorithms, or that engineers could access sensitive information.

ChatGPT’s privacy policy states that it may use personal information related to “use of the Services” to “develop new programs and services.”

However, the policy also states that this personal data may be anonymized or aggregated before any analysis takes place.

What is OpenAI’s ChatGPT chatbot and what is it used for?

OpenAI states that their ChatGPT model, trained using a machine learning technique called Reinforcement Learning from Human Feedback (RLHF), can simulate dialogues, answer follow-up questions, admit mistakes, challenge false premises, and reject inappropriate requests.

Initial development involved human AI trainers providing the model with conversations in which they played both sides – the user and an AI assistant. The version of the bot available for public testing tries to understand the questions asked by users and responds with detailed answers similar to human-written text in a conversational format.

A tool like ChatGPT could be used in real-world applications like digital marketing, creating online content, responding to customer service requests, or as some users have discovered, even debugging code.

The bot can answer a variety of questions while imitating the human speaking style.

As with many AI-driven innovations, ChatGPT does not come without concerns. OpenAI has recognized the tool’s tendency to respond with “plausible-sounding but wrong or nonsensical answers,” a problem it finds difficult to fix.

AI technology can also perpetuate societal biases around race, gender and culture. Tech giants such as Alphabet Inc.’s Google and Amazon.com have previously acknowledged that some of their AI experiments were “ethically sensitive” and had limitations. Several companies have had to have humans step in and correct problems caused by the technology.

Despite these concerns, AI research remains attractive to investors. Venture capital investment in AI development and operations companies surged to nearly $13 billion last year, with a further $6 billion invested through October this year, according to data from PitchBook, a Seattle-based fund-tracking company.