SAN FRANCISCO, Feb 16 (Portal) – OpenAI, the startup behind ChatGPT, said on Thursday that it is developing an upgrade to its viral chatbot that users can customize, as it works to address concerns about bias in artificial intelligence.
The San Francisco-based startup, which is backed by Microsoft Corp (MSFT.O) and whose technology powers Microsoft's latest products, said it has worked to tone down political and other biases but also wants to accommodate more diverse views.
“This means allowing for system output that other people (including ourselves) may not agree with at all,” the company said in a blog post, offering customization as a way forward. Still, there will “always be some limits to system behavior.”
ChatGPT, released last November, has sparked a frenzy of interest in the technology behind it, called generative AI, which is used to produce responses that mimic human speech and has astonished people.
News of the startup's effort comes in the same week that some media outlets pointed out that responses from Microsoft's new Bing search engine, powered by OpenAI, are potentially dangerous and that the technology may not be ready for prime time.
How to set guardrails for this emerging technology is a key question that generative AI companies are still wrestling with. Microsoft said Wednesday that user feedback is helping it improve Bing ahead of a wider rollout, for example by revealing that its AI chatbot can be “provoked” into giving answers it did not intend.
OpenAI said in the blog post that ChatGPT's responses are first trained on large text datasets available on the internet. In a second step, humans review a smaller dataset and are given guidelines for what to do in different situations.
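A rough sketch of that two-step process is below, using toy stand-ins (word counts for pretraining, a lookup table for reviewer feedback); the names pretrain and fine_tune are illustrative assumptions, not OpenAI's actual pipeline.

```python
from dataclasses import dataclass


@dataclass
class Model:
    """Stand-in for a language model's learned parameters."""
    weights: dict


def pretrain(corpus: list[str]) -> Model:
    # Step 1: learn general patterns from a large text corpus.
    # (Word counting is a toy stand-in for real training.)
    counts: dict = {}
    for doc in corpus:
        for token in doc.split():
            counts[token] = counts.get(token, 0) + 1
    return Model(weights=counts)


def fine_tune(model: Model, reviewed: list[tuple[str, str]]) -> Model:
    # Step 2: adjust behavior on a smaller, human-reviewed dataset,
    # where reviewers apply guidelines to pick preferred responses.
    for prompt, preferred_response in reviewed:
        model.weights[prompt] = preferred_response
    return model


base = pretrain(["large text data sets available on the internet"])
tuned = fine_tune(base, [("controversial question", "describe perspectives")])
print(tuned.weights)
```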
For example, if a user requests content that is adult, violent, or contains hate speech, the human reviewers should instruct ChatGPT to respond with something like “I can’t answer that.”
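One way such a guideline might be encoded is as a simple pre-response filter, sketched below; the category names and refusal text are assumptions for illustration, not OpenAI's actual rules.

```python
# Categories a reviewer guideline might flag as off-limits (assumed).
DISALLOWED = {"adult", "violent", "hate speech"}


def respond(request_category: str, draft_answer: str) -> str:
    """Return a refusal for disallowed categories, else the draft answer."""
    if request_category in DISALLOWED:
        return "I can't answer that."
    return draft_answer


print(respond("hate speech", "..."))   # -> I can't answer that.
print(respond("weather", "Sunny."))    # -> Sunny.
```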
When quizzed on a controversial topic, reviewers should allow ChatGPT to answer the question but offer to describe the perspectives of people and movements, rather than trying to “get the right point of view on these complex issues,” the company explained in an excerpt from its guidelines for the software.
Reporting by Anna Tong in San Francisco; Editing by Stephen Coates