Anthropic launches Claude, an AI model described as easier to steer and less likely to produce harmful output than ChatGPT

Anthropic, a startup co-founded by former OpenAI employees, launched an AI model called Claude last week to compete with ChatGPT. The company says Claude can handle a range of tasks, including searching through documents, summarizing, writing, coding, and answering questions on specific topics. Anthropic also argues that Claude is “much less likely to produce harmful outputs” and easier to steer than other AI chatbots. The company added that two versions of the model, Claude and Claude Instant, are available.

Anthropic markets itself as an AI safety and research company that strives to build reliable, interpretable, and controllable AI systems. The startup is backed by Google, from which it received $300 million in funding earlier this year. That makes Google the latest tech giant to put its money and computing power behind a new generation of companies trying to make their mark in the burgeoning field of generative AI. Microsoft has been doing the same since 2019, investing $1 billion in OpenAI and then several billion more earlier this year.

In January, Anthropic announced that it was working on Claude, a large language model (LLM) intended to compete with OpenAI’s ChatGPT (GPT-3.5). Claude is comparable to ChatGPT, but according to Anthropic its chatbot is superior to OpenAI’s in a number of important respects. The system had been in closed beta with access limited to a handful of testers, but last week the company opened it up more broadly. Two versions of the model, Claude and Claude Instant, are now available to a limited group of early adopters and Anthropic partners.

Claude can be used via an API or a chat interface in Anthropic’s developer console. The API lets developers connect remotely to Anthropic’s servers and add Claude’s text-analysis and text-completion capabilities to their own applications. “We believe Claude is the right tool for a wide variety of customers and use cases. We have been investing in our infrastructure for serving models for several months and are confident in our ability to meet customer demand,” an Anthropic spokesperson told TechCrunch.
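For illustration, the snippet below sends a single question to a completion-style endpoint over plain HTTP. It is only a minimal sketch based on the launch-era interface (the /v1/complete route, the claude-v1 model name and the Human/Assistant prompt format); Anthropic’s documentation remains the authoritative reference, and field names may have changed since.

```python
# Minimal sketch of calling Claude's text-completion API with plain HTTP.
# Assumes the launch-era /v1/complete endpoint and the claude-v1 model name;
# check Anthropic's current documentation, as the interface has evolved since.
import os
import requests

API_URL = "https://api.anthropic.com/v1/complete"

def ask_claude(question: str) -> str:
    payload = {
        "model": "claude-v1",
        # Claude expects the Human/Assistant turn format inside the prompt.
        "prompt": f"\n\nHuman: {question}\n\nAssistant:",
        "max_tokens_to_sample": 300,
        "stop_sequences": ["\n\nHuman:"],
    }
    headers = {
        "x-api-key": os.environ["ANTHROPIC_API_KEY"],
        "content-type": "application/json",
    }
    response = requests.post(API_URL, json=payload, headers=headers, timeout=60)
    response.raise_for_status()
    return response.json()["completion"]

if __name__ == "__main__":
    print(ask_claude("Summarize this contract clause in plain language: ..."))
```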

Anthropic’s AI model grew out of the startup’s founding concerns about the future safety of AI, and it was trained using a technique the company calls “constitutional AI”. The technique aims to provide a principles-based approach to aligning AI systems with human intent, allowing chatbots like Claude to answer queries using a small set of principles as a guide. To design Claude, the Anthropic team started by drawing up a list of about ten core principles that together form a kind of “constitution” (hence the term constitutional AI).
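As a rough illustration of the critique-and-revise idea at the heart of this approach, the sketch below drafts an answer, asks the model to critique it against one randomly chosen principle, and then asks for a revision. This is not Anthropic’s implementation: the generate callable stands in for any language-model call (for instance the ask_claude helper sketched earlier), and the two principles are placeholders.

```python
# Illustrative sketch of the critique-and-revise loop behind "constitutional AI".
# Not Anthropic's code: `generate` stands in for any language-model call and the
# principles below are examples, not Anthropic's actual constitution.
import random
from typing import Callable

PRINCIPLES = [
    "Choose the response that is least likely to help someone cause harm.",
    "Choose the response that is most honest about the limits of its knowledge.",
]

def constitutional_revision(generate: Callable[[str], str], question: str) -> str:
    draft = generate(question)
    principle = random.choice(PRINCIPLES)

    # Ask the model to critique its own draft against a single principle...
    critique = generate(
        f"Here is a draft answer to '{question}':\n{draft}\n\n"
        f"Critique the draft according to this principle: {principle}"
    )
    # ...then ask it to rewrite the draft so that the critique is addressed.
    revised = generate(
        f"Original question: {question}\nDraft answer: {draft}\n"
        f"Critique: {critique}\n\nRewrite the answer to address the critique."
    )
    return revised
```

In Anthropic’s published description of the technique, revised answers of this kind (and preferences between candidate answers) are then used as training data; the loop above only illustrates the data-generation step, not the fine-tuning that follows.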

Like ChatGPT, Claude can perform a variety of conversational and text-processing tasks. It is designed to help users with summarization, research, collaborative writing, coding and more. According to Anthropic, one of Claude’s key differences is that it is designed to produce less harmful output than most of the AI chatbots that came before it. Claude is described as a “helpful, honest and harmless AI system”. Anthropic adds that it is easy to steer and manage, so users can get the results they want with less effort.

Anthropic admits that Claude has its limitations, some of which came to light during the closed beta. Claude is reportedly weaker at math and programming than ChatGPT. It also hallucinates, for example inventing a name for a chemical that does not exist and giving questionable instructions for producing weapons-grade uranium. As with ChatGPT, it is also possible to bypass Claude’s safety filters with clever prompts: one beta user managed to get Claude to describe how to make meth at home.

Anthropic acknowledges that hallucination is a sensitive issue for AI companies. “The challenge is to build models that never hallucinate but are still useful. You can get into a tricky situation where the model figures that a good way to never lie is to say nothing at all, so there is a trade-off we are working on. We have made progress in reducing hallucinations, but there is still a long way to go,” the Anthropic spokesperson said. The company also plans to give developers the ability to customize Claude’s constitutional principles to suit their own needs.

Customer acquisition is another of the startup’s goals. Anthropic sees its core users as “startups making bold technological bets” alongside “larger, more established companies”. It faces some pressure from investors to recoup the hundreds of millions of dollars invested in its AI technology. Besides Google, it has strong backing from investors such as Sam Bankman-Fried, founder and former CEO of FTX, Caroline Ellison, former CEO of Alameda Research, Jim McClave, Nishad Singh, Jaan Tallinn and the Center for Emerging Risk Research.

The connections between Anthropic and the protagonists of the FTX saga, in this case Caroline Ellison and Sam Bankman-Fried, have drawn criticism. Anthropic reportedly received some $580 million from FTX executives who now face fraud and securities-fraud charges, among others, and the question has been raised whether a court could claw that money back. In addition, Anthropic and FTX are both believed to be linked to the effective altruism movement, which former Google researcher Timnit Gebru has called “a dangerous brand of AI safety”.

The company has announced that Claude and Claude Instant will be sold at two different price points. Claude is the higher-performance model, while Claude Instant is lighter, cheaper and much faster. Fees are charged per million characters of input and output. While OpenAI’s GPT-3.5-turbo is priced at $0.002 per 1,000 tokens (fragments of words), Claude Instant is available at $0.42 per million characters of input and $1.45 per million characters of output. Claude-v1, the larger model, is priced at $2.90 per million characters of input and $8.60 per million characters of output.

Although there is no standard conversion between tokens and characters, analysts estimate that OpenAI’s ChatGPT API costs between $0.40 and $0.50 per million characters, which would make Claude generally more expensive. Anthropic notes, however, that Claude has already been integrated into several products available through partners, including DuckDuckGo’s DuckAssist instant summaries, part of Notion AI, and Poe, a chat application developed by Quora. Other launch partners, including Robin AI and AssemblyAI, are also said to have built Claude into their products.
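As a back-of-the-envelope check on that comparison, the snippet below applies the per-character launch prices quoted above to an arbitrary hypothetical workload; the GPT-3.5-turbo line uses the midpoint of the analysts’ rough $0.40 to $0.50 per million characters estimate.

```python
# Rough cost comparison using the per-character launch prices quoted above.
# The workload size is arbitrary, and the GPT-3.5-turbo figure relies on the
# analysts' approximate character-based estimate rather than an official price.
PRICES_PER_MILLION_CHARS = {
    #                (input, output) in USD per million characters
    "claude-instant": (0.42, 1.45),
    "claude-v1":      (2.90, 8.60),
    "gpt-3.5-turbo":  (0.45, 0.45),   # midpoint of the $0.40-$0.50 estimate
}

PROMPT_CHARS = 2_000_000      # hypothetical monthly input volume
COMPLETION_CHARS = 1_000_000  # hypothetical monthly output volume

for model, (input_price, output_price) in PRICES_PER_MILLION_CHARS.items():
    cost = (PROMPT_CHARS / 1_000_000) * input_price \
         + (COMPLETION_CHARS / 1_000_000) * output_price
    print(f"{model:>15}: ${cost:.2f}")
```

With these figures, Claude Instant lands in the same ballpark as GPT-3.5-turbo, while Claude-v1 comes out roughly an order of magnitude more expensive, which is consistent with the assessment above.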

“We use Claude to evaluate particular parts of a contract and to suggest new, more client-friendly alternative language to our customers. We have found that Claude is very good at understanding language, including in technical areas such as legal language. It is also very adept at drafting, summarizing, translating and explaining complex concepts in simple terms,” said Richard Robinson, CEO of Robin AI.

Anthropic has announced that it will roll out further updates in the coming weeks to make Claude more helpful, honest and harmless. According to some analysts, given Google’s announced plans for Bard and the PaLM API, it seems unlikely that Google will rely on Anthropic for the AI features in its own products, but funding an OpenAI competitor could still be in Google’s strategic interest going forward.

Source: Anthropic

And you?

What is your opinion on the topic?

What do you think of the method Anthropic used to create Claude?

What do you think of Claude’s limitations compared to ChatGPT?

Do you think Claude can catch up with ChatGPT?

Do you think AI chatbots can overcome the prompt injection problem?

See also

Google is investing $300 million in AI startup Anthropic, founded by former OpenAI researchers. The company has developed its own general-purpose chatbot, a ChatGPT rival called Claude

Anthropic, a startup founded by former OpenAI employees, is introducing an AI chatbot called Claude to compete with ChatGPT; it is said to be good at joking but bad at coding

OpenAI launches GPT-4, a multimodal AI that the company claims is state-of-the-art. It is said to be 82% less likely to be tricked by prompt injection than GPT-3.5

Article summaries generated by ChatGPT manage to fool scientists, who cannot always tell the difference between AI-generated abstracts and original abstracts