Another day, another rogue AI chatbot on the web.
Last week Meta released BlenderBot 3, a talkative language model, as a public web experiment, and it went about as well as one would expect.
BlenderBot 3 was quick to claim that Donald Trump is still US President and will remain so beyond 2024, and spouted anti-Semitic views when asked controversial questions, Business Insider reported. In other words, BlenderBot 3 tends to spread fake news and parrot racial stereotypes, like all language models trained on text scraped from the internet.
Good morning everyone, especially the Facebook https://t.co/EkwTpff9OI researchers who have to rein in their Facebook-hating, election-denying chatbot today pic.twitter.com/wMRBTkzlyD
— Jeff Horwitz (@JeffHorwitz) August 7, 2022
Meta warned netizens that its chatbot could make “untrue or offensive statements” and is keeping the live demo online to gather more data for its experiments. People are encouraged to like or dislike BlenderBot 3’s responses and to notify researchers if they think a particular message is inappropriate, nonsensical, or rude. The goal is to use this feedback to develop a safer, less toxic and more effective chatbot in the future.
Google tweaks search snippets to stop the spread of fake news
The search giant has introduced an AI model to make the text boxes that sometimes pop up when users type questions into Google search more accurate.
These descriptions, known as featured snippets, can be useful when users are looking for specific facts. Typing "How many planets does the solar system have?", for example, produces a featured snippet that says "eight planets". Users don't have to click through webpages and read around to find the answer; the featured snippet surfaces it automatically.
But Google's answers aren't always accurate; the system has sometimes supplied a specific date for a fictional event, such as the assassination of Abraham Lincoln by the cartoon dog Snoopy, according to The Verge. Google said its latest system, based on the Multitask Unified Model (MUM), should cut featured snippets triggered by such false-premise questions by 40 percent; in those cases, often no snippet is displayed at all.
“By using our latest AI model, our systems can now understand the notion of consensus, which is when multiple high-quality sources on the web all agree on the same fact,” it explained in a blog post.
“Our systems can compare snippet callouts (the word or words mentioned above the featured snippet in a larger font) to other high-quality sources on the web to determine if there’s a general consensus for that callout, even if the sources use different words or concepts to describe the same thing.”
OpenAI’s DALL-E 2 helped create a Heinz ketchup ad
Heinz, the US food giant, has partnered with a creative agency to produce an ad promoting its most iconic product, ketchup, using images generated by OpenAI's DALL-E 2 model. The ad is the latest in Heinz's Draw Ketchup campaign, but instead of turning to humans for sketches, Rethink, a Canadian advertising agency, turned to a machine.
“So, like a lot of our briefings, the brief was to showcase Heinz’s iconic role in today’s pop culture,” Mike Dubrick, executive creative director of Rethink, told The Drum this week. “The next step was to present the idea of the brand. After the briefing, we rarely wait until the formal presentation when we’re sharing something we think is great.”
The end result is a clever ad with a clear and simple message: for many text prompts containing the word "ketchup", DALL-E 2 generates what looks unmistakably like a Heinz bottle, playing on the company slogan "It must be Heinz". You can view the ad below.
Youtube video
DALL-E 2 also recently helped an artist create a magazine cover for Cosmopolitan; it’s another example of how these text-to-image tools can be used commercially in the creative industries. ®