OpenAI’s CEO worries that other AI developers working on ChatGPT-like tools won’t set safety limits, saying this could produce more dangerous AI

Sam Altman, CEO of OpenAI, said in a recent interview that he is concerned about the rise of ChatGPT competitors that may not feel pressured to implement strong, reliable safeguards for their AI models. Given the economic and social impact of AI, Altman said, many companies and governments are likely to prioritize power and profit over safety and ethics, as has already happened. His words echo those of Google, which long hesitated to release the ChatGPT-style AI models it began working on well before OpenAI.

“What worries me is that we won’t be the only ones developing this technology. There will be other players who won’t enforce some of the safety limits we set. I think society has a limited amount of time to figure out how to respond to this, how to regulate it, how to handle it. I am particularly concerned that these models could be used for large-scale disinformation. Now that they are getting better at writing computer code, they could be used for offensive cyberattacks,” Sam Altman said in an ABC News interview last week.

AI chatbots are becoming an ever larger part of our digital lives, and many of us use the technology to communicate with friends and family online. As with any new technology, however, there are inevitably teething problems to resolve. One of the main problems with AI chatbots like ChatGPT and Bard is their tendency to confidently present false information as hard fact. These systems often “hallucinate”, that is, they make things up, because they are essentially AI autocomplete software.

Rather than querying a database of hard facts to answer questions, they are trained on vast corpora of text, analyzing patterns to determine which word is likely to follow another in a given sentence. In other words, they are probabilistic, not deterministic, which has led some AI specialists to call them “bullshit generators”. The internet is already full of false and misleading information, and using AI chatbots as search engines could make the problem even worse: according to experts, chatbot answers take on the authority of a machine that appears to be omniscient.
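To make the “autocomplete” point concrete, here is a minimal sketch of probabilistic next-word prediction using a toy bigram model. This is an illustration only, not how GPT-4 is actually built: real systems use large neural networks over sub-word tokens, but the core behavior is the same, the model samples a statistically likely continuation rather than looking up a fact.

```python
# Toy illustration of probabilistic next-word prediction.
# Assumption: a simple bigram word-count model stands in for a real LLM,
# which instead uses a large neural network over sub-word tokens.
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Count how often each word is followed by each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def sample_next(counts: dict, word: str):
    """Sample a continuation weighted by observed frequency, or None."""
    followers = counts.get(word)
    if not followers:
        return None
    choices, weights = zip(*followers.items())
    return random.choices(choices, weights=weights)[0]

corpus = ("the cat sat on the mat the cat chased the dog "
          "the dog sat on the rug the dog chased the ball")
model = train_bigram(corpus)

# Generate: pick each next word by probability, with no notion of truth.
word, output = "the", ["the"]
for _ in range(8):
    word = sample_next(model, word)
    if word is None:  # dead end: no observed continuation
        break
    output.append(word)
print(" ".join(output))
```

Run it twice and you will likely get two different sentences from the same prompt: the output is fluent-looking but sampled, which is why such systems can state falsehoods with the same confidence as facts.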

AI chatbots also face issues such as the spread of fake news and privacy concerns, as well as ethical questions. On the latter point, developers face important ethical decisions about the design of AI chatbots. In particular, they must decide which topics the chatbots are allowed to joke about and which are off limits. According to analysts, this is no easy task, since AI chatbots are often designed for an international audience and must therefore take into account the sensitivities of people from different cultures, backgrounds, and religions.

There have already been some scandals surrounding AI chatbots. In India, for example, some users were offended that ChatGPT could joke about Krishna but not about Mohammed or Jesus. This underscores the challenge developers face in building AI chatbots that respect all religions and cultures. In the United States, conservatives have accused ChatGPT of being “woke,” biased, and defending left-wing values. Elon Musk, co-founder of OpenAI, has claimed that ChatGPT is an example of dangerous AI and that the chatbot was trained to be woke.

OpenAI shared a document last week describing how its testers deliberately tried to trick GPT-4 into giving them dangerous information, such as how to make a dangerous chemical from basic ingredients and kitchen supplies, and how the company fixed those problems before the tool launched. In another example of AI abuse, phone scammers are now using AI voice-cloning tools to pose as relatives in dire need of financial help, and are successfully extorting money from victims.

Because Altman runs a company that sells artificial intelligence tools, he has been notably open about the dangers of AI. As OpenAI’s work on AI continues, Altman believes companies and regulators need to work together to establish rules for AI development. “We need enough time for our institutions to figure out what to do. It’s critical to have time to understand what’s going on, how people want to use these tools, and how society can move forward with them,” Altman said.

Analysts agree with Altman to a certain extent. In this AI race, many corporations and superpowers are likely to prioritize power and profit over safety and ethics. It is also true that, despite the many billions invested in the software, AI technology is rapidly outpacing government regulation. Either way, this is a dangerous combination. However, they say, it would be easier to take Altman’s words seriously if AI weren’t so fickle, even with the best intentions and safeguards.

These algorithms are unpredictable, and it is impossible to know how AI systems and their safeguards will behave once their products are released to the public. There is also the fact that, despite its white-knight stance, OpenAI won’t reveal the full details of how its AI models and their safeguards actually work. OpenAI was in fact founded as a non-profit committed to sharing its insights and source code with the AI developer community. But the company made a 180-degree turn in 2019 and is now completely closed.

In the technical document published by OpenAI alongside the launch of GPT-4, the company signals its intention to continue down this path. “Given both the competitive landscape and the safety implications of large language models such as GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar,” the document states. This stance moves the company radically away from its original vision and undercuts Altman’s stated positions, as it forecloses any collaboration with the community.

At the same time, OpenAI argues that it cannot reveal proprietary information, including about its security measures, because doing so could cost it money and would expose the inner workings of its technology as a whole. And although Altman has advocated, and continues to advocate, for regulation, he and OpenAI still operate without any. For now, it is up to OpenAI to define what ethics and safety should mean. And by keeping its models closed, the company may find it hard to earn the public’s trust.

In the end, it’s one thing to say you’re doing everything right; it’s another to show it. And while OpenAI continues to position itself as the good actor in the coming storm, it is worth remembering that the increasingly tight-lipped company is disclosing less and less. According to critics, emerging technologies often have unpredictable, or entirely predictable but unavoidable, adverse effects, even with the best intentions and the best safeguards. They believe Altman’s warning has merit, but that it should perhaps be taken with a grain of salt.

Source: Interview with Sam Altman, CEO of OpenAI

And you?

What is your opinion on the topic?

What do you think of Sam Altman’s warnings?

Do you think OpenAI leaves room for collaboration with other AI companies?

Do you think we can find a consensus on the rules for AI development?

See also

OpenAI’s CEO says its technology will likely destroy capitalism, but the AI startup has been pushing for a partnership with software giant Microsoft

Meet OpenAI CEO Sam Altman, who learned to code at the age of 8 and is also preparing for the apocalypse with a stash of gold, guns and gas masks

A search using Google’s Bard and Microsoft’s ChatGPT will likely cost ten times more than a keyword search, which could represent billions in additional costs

Conservatives claim ChatGPT is “woke” and worry about OpenAI chatbot bias. They also accuse the chatbot of representing “left values”.