Hi everyone,
I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias. To be clear, that's completely unacceptable and we got it wrong.
Our teams have been working around the clock to address these issues. We're already seeing substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry's development, but we know the bar is high for us and we will keep at it for however long it takes. And we'll review what happened and make sure we fix it at scale.
Our mission to organize the world's information and make it universally accessible and useful is sacrosanct. We've always sought to give users helpful, accurate, and unbiased information in our products. That's why people trust them. This has to be our approach for all our products, including our emerging AI products.
We'll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evaluations and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.
Even as we learn from what went wrong here, we should also build on the product and technical announcements we've made in AI over the past several weeks. That includes some foundational advances in our underlying models, such as our breakthrough 1 million long-context window and our open models, both of which have been well received.
We know what it takes to create great products that are used and loved by billions of people and businesses, and our infrastructure and research expertise give us an incredible springboard for the AI wave. Let's focus on what matters most: building helpful products that are deserving of our users' trust.