Microsoft has removed the waitlist for its AI-powered Bing Chat feature. Anyone can now access the GPT-4-based chatbot, which is unlocked once users sign in to the new Bing. “We’re happy to confirm that the new Bing, which we’ve customized for search, runs on GPT-4. If you’ve used the new Bing preview at any point in the last five weeks, you’ve already experienced an early version of this powerful model. As OpenAI releases updates to GPT-4 and beyond, Bing will benefit from those improvements,” says the Microsoft Bing team.
Microsoft Bing is a search engine developed by Microsoft. It was opened to the public on June 3, 2009. Its unveiling that year signaled a change in Microsoft’s business strategy, separating its search engine from its Windows Live suite of applications. Microsoft recently announced that the new Bing AI chatbot will be integrated into the stable version of its Edge web browser.
The feature reportedly first rolled out in February 2023 as a limited preview rather than a public release. Now, by going to Bing’s new online portal, clicking “Join the Waiting List”, and signing in with a Microsoft account, users can instantly access the GPT-4-powered chatbot.
Although the new Bing is now available to everyone, it is still in preview, and users need Microsoft Edge to try it. By keeping the chatbot exclusive to Edge, Microsoft is also promoting its own browser, which recently passed 100 million daily active users. OpenAI’s GPT-4 is available separately through ChatGPT Plus (the paid version of ChatGPT). It is the fourth model in the GPT series.
ChatGPT, based on GPT-3
GPT-3 is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series developed by OpenAI, a San Francisco-based artificial intelligence research laboratory consisting of the for-profit OpenAI LP and its parent company, the non-profit OpenAI Inc.
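To make “autoregressive” concrete, here is a minimal sketch of greedy next-token generation in Python, using the small open GPT-2 model as a stand-in since GPT-3’s weights are not public: the model predicts one token at a time and feeds its own output back in as input.

# Minimal sketch of autoregressive generation: the model predicts one token
# at a time and appends its own output to the input before predicting again.
# GPT-2 is used as a small open stand-in; GPT-3's weights are not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer.encode("The new Bing runs on", return_tensors="pt")
with torch.no_grad():
    for _ in range(20):                      # generate 20 tokens
        logits = model(ids).logits           # scores over the vocabulary
        next_id = logits[0, -1].argmax()     # greedy: most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # append, repeat

print(tokenizer.decode(ids[0]))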
ChatGPT, which is based on GPT-3, has already proved both exciting and controversial: many people have expressed concerns about how the text-based tool might be used and how it might evolve. As video and other media are added to the mix, these concerns will only grow, with the ease of creating fake videos a particular worry.
GPT-4 is multimodal, like the smaller Kosmos-1
The OpenAI startup announced Tuesday that it had begun releasing a powerful artificial intelligence model called GPT-4, paving the way for the spread of human-like technology and increased competition between its backer Microsoft and Alphabet’s Google. OpenAI, which created the sensational chatbot ChatGPT, said in a blog post that its latest technology is “multimodal”, meaning both image and text prompts can drive it to generate content.
Microsoft has released a research paper, Language Is Not All You Need: Aligning Perception with Language Models, which introduces a multimodal large language model (MLLM) called Kosmos-1. The paper emphasizes that integrating language, action, multimodal perception, and world modeling is a key step toward artificial general intelligence, and it evaluates Kosmos-1 in a variety of settings.
Large language models (LLMs) have successfully served as a general-purpose interface for various natural language tasks [BMR+20]: an LLM-based interface can be adapted to any task whose input and output can be converted to text. For summarization, for example, the input is a document and the output is its summary, so researchers can feed the document into the language model and read off the generated summary.
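As a minimal sketch of this text-in, text-out pattern (an illustration of the interface idea using an off-the-shelf open-source summarizer via Hugging Face’s pipeline API, not the model described in the paper):

# Text-in, text-out: summarization framed as plain text generation.
# The model here is a generic open-source summarizer used for illustration.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

document = (
    "Large language models can serve as a universal interface: any task "
    "whose input and output can be rendered as text, such as summarization, "
    "translation, or question answering, fits the same prompt-and-generate "
    "pattern, with no task-specific architecture required."
)
summary = summarizer(document, max_length=40, min_length=10)
print(summary[0]["summary_text"])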
KOSMOS-1 is a multimodal large language model (MLLM) capable of perceiving general modalities, following instructions (i.e., zero-shot learning), and learning in context (i.e., few-shot learning). The goal is to align perception with LLMs so that models can see and talk. More specifically, the authors follow METALM [HSD+22] to train the KOSMOS-1 model from scratch.
The model shows promising results across a range of tasks, including OCR-free NLP, visual question answering, and other perception-language and vision tasks. The Microsoft research team also applied the model to a dataset derived from Raven’s Progressive Matrices IQ tests to analyze and diagnose MLLMs’ nonverbal reasoning abilities. “The limits of my language mean the limits of my world,” Ludwig Wittgenstein.
OpenAI co-founder Sam Altman has said GPT-4 will require much more processing power than its predecessor. OpenAI is expected to apply ideas about compute-optimal training in GPT-4, although to what extent cannot be predicted, since its compute budget is unknown. Altman’s statements do suggest that OpenAI is focusing on optimizing variables other than model size: finding the best set of hyperparameters, the optimal model size, and the right number of parameters could yield striking improvements across all benchmarks.
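For a sense of what compute-optimal training means, here is a small worked sketch based on the rule of thumb from DeepMind’s Chinchilla paper (Hoffmann et al., 2022), which is public prior work rather than OpenAI’s undisclosed recipe: training compute is roughly C ≈ 6·N·D FLOPs, and for a fixed budget the loss is minimized when the number of training tokens D is about 20 times the parameter count N.

# Compute-optimal sizing per the Chinchilla rule of thumb (Hoffmann et al.,
# 2022), shown as an illustration; OpenAI has not disclosed GPT-4's recipe.
# Training compute C ~= 6 * N * D FLOPs, with the optimum near D = 20 * N,
# so C = 120 * N**2 and N = sqrt(C / 120).
def chinchilla_optimal(compute_flops: float) -> tuple[float, float]:
    n = (compute_flops / (6 * 20)) ** 0.5  # parameter count N
    d = 20 * n                             # training tokens D
    return n, d

n, d = chinchilla_optimal(1e23)            # a hypothetical compute budget
print(f"~{n:.1e} parameters trained on ~{d:.1e} tokens")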
According to analysts, existing performance predictions for language models break down once these optimization approaches are combined in a single model. Altman also said people would not believe how much better models can get without necessarily being bigger, which could indicate that pure scaling efforts are on hold for now.
GPT-4 brings a text input feature and introduces a visual element
With GPT-4, the text input feature is available to ChatGPT Plus subscribers and, via a waitlist, to software developers, while the image input feature remains a research preview. This much-anticipated launch shows both that office workers can turn to ever-better AI for new tasks and that technology companies are racing to capitalize on these advances.
“We spent six months making GPT-4 safer and better aligned. According to our internal evaluations, GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5,” says OpenAI. The company used feedback from ChatGPT users to improve GPT-4’s behavior, and more than 50 experts were recruited to provide early feedback in areas such as AI safety and security.
In its latest version, OpenAI introduces a visual element that allows images to be used in queries. The model can read an image, understand its context, and provide an answer on the fly.
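To illustrate, here is a minimal sketch of what an image-plus-text query can look like in the OpenAI Python SDK’s chat-completions format; image input was still a research preview at launch, so the model name below is a placeholder assumption rather than a confirmed endpoint.

# Sketch of an image-plus-text query in the OpenAI Python SDK's
# chat-completions format. Image input was only a research preview when
# GPT-4 launched, so the model name here is a placeholder assumption.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # placeholder; availability varies
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is unusual about this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)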
“You can ask it to explain the plot of Cinderella in a single sentence where each word must begin with the next letter of the alphabet, from A to Z, without repeating any letter,” says OpenAI. Caitlin Roulston, Director of Communications at Microsoft, said: “During this preview period, we are running various tests that could speed up access to the new Bing for some users. We remain in preview, and you can sign up at Bing.com.”
Microsoft’s waitlist change comes just a day after the company confirmed that its Bing AI chatbot works with GPT-4, OpenAI’s next-generation AI language model.
The removal of the waitlist also comes a day before Microsoft hosts an event showcasing AI additions to its Office productivity software. Microsoft’s ChatGPT-like AI works in Office apps such as Teams, Word, and Outlook, and the company has also added its Bing AI chatbot to a new sidebar in the Microsoft Edge browser.
Microsoft first announced the new Bing AI last month and opened a waitlist the same day. The company gradually worked through the waitlist while limiting the number of questions users could ask per session and per day, restrictions introduced to keep the chatbot from drifting into unstable behavior. Bing Chat users can now ask 15 questions per session, up to a maximum of 150 per day (see the sketch below).
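As a back-of-the-envelope illustration of those limits (a hypothetical client-side model only, since Microsoft enforces the quotas server-side), the arithmetic works out to at least ten full sessions before hitting the daily cap:

# Hypothetical client-side model of Bing Chat's stated limits: 15 questions
# per session, 150 per day. Microsoft enforces these server-side; this only
# illustrates the arithmetic (150 / 15 = at least 10 sessions per day).
SESSION_LIMIT, DAILY_LIMIT = 15, 150

class ChatQuota:
    def __init__(self) -> None:
        self.session_count = 0
        self.daily_count = 0

    def ask(self) -> str:
        if self.daily_count >= DAILY_LIMIT:
            return "daily limit reached"
        if self.session_count >= SESSION_LIMIT:
            return "session limit reached: start a new topic"
        self.session_count += 1
        self.daily_count += 1
        return "ok"

    def new_session(self) -> None:
        self.session_count = 0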
Source: Microsoft
What is your opinion on the topic?
Do you think AI can help Microsoft search engine Bing compete with Google’s engine?
See also:
Bing, Microsoft’s search engine, plans artificial-intelligence-powered ads that would allow paid links in responses to search queries
AI-powered search engine Bing is now available on mobile and Skype, and will soon support voice input, Microsoft says