Google, operator of the world's most used search engine, has been taking a reputational hit since the company started delivering AI Overviews: answers to users' questions served up at the top of search results, based on summaries generated by Google's Gemini generative AI.
The overviews are part of the company's efforts, detailed at its annual I/O developers' conference, to do the "Googling for you." The aim is to spare us from slogging through the typical list of search results: Gemini scours authoritative sources online on our behalf and delivers an answer to our queries "faster and easier." Gemini is, in fact, powering all sorts of products and services, including new Chromebooks, with the goal of assisting us.
Sounds nice, right? But it turns out that what constitutes an authoritative source isn't clear-cut, judging by some of the strange, funny and just plain wrong answers AI Overviews has been delivering.
"Users who typed in questions to Google received AI-powered responses that seemed to come from a different reality," reported CNET's Ian Sherr. "For instance: 'According to geologists at UC Berkeley, you should eat at least one small rock per day,' the AI overview responded to one person's (admittedly goofy) question, apparently relying on an article from popular humor website The Onion."
In another AI Overview, Google offered up a novel ingredient suggestion to someone who asked how to get cheese to stick to the pizza. "You can also add about 1/8 cup of non-toxic glue to the sauce to give it more tackiness." I presume that would be unflavored glue.
A response "to a question about how to pass kidney stones suggested drinking urine," Sherr noted. "'You should aim to drink at least 2 quarts (2 liters) of urine every 24 hours,' the disturbing response said."
As I like to say, it's only funny until someone loses an eye. Or a kidney.
Google has defended AI Overviews, saying the "vast majority" of answers have been accurate, that the company had "conducted extensive testing before launching this new experience" and that it appreciates "feedback." It also insisted that some of the crazier AI Overview answers were made up. Fair enough — you can't believe everything you read on the internet.
But by the end of last week, the company announced it had made over a dozen changes to how AI Overviews works, including limiting answers related to current news events and to health, according to a May 30 blog post by Liz Reid, VP of Google Search.
"We built better detection mechanisms for nonsensical queries that shouldn't show an AI Overview, and limited the inclusion of satire and humor content," Reid said, a nod, it seems, to The Onion showing up as an authoritative source. She also said the AI agent won't draw as many answers from social media and user forums. "We updated our systems to limit the use of user-generated content in responses that could offer misleading advice."
Google also used its blog post to explain that the AI Overviews glitches aren't related to hallucinations, a problem that plagues all large language models (or LLMs), including Gemini. Hallucinations refer to answers that sound like they're true but are in fact false.
"While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional 'search' tasks, like identifying relevant, high-quality results from our index," Reid wrote. "This means that AI Overviews generally don't 'hallucinate' or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it's usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available."
What does all this mean? Google continues to use the world as its beta tester. That's why I agree with The Washington Post, which noted that the AI Overviews changes are "the latest example of Google launching an AI product with fanfare and then rolling it back after it goes awry." The paper reminded us that in February, Google had to pause use of the image-creation tool in Gemini over concerns about bias in the images it generated.
This is also why Google, and other companies investing in AI, including OpenAI, Microsoft and soon Apple, still need to earn our trust, as CNET's Lisa Eadicicco has noted.
So what can you do while Google continues to tinker with its search agent? Though you can't turn off AI Overviews, CNET has found a few workarounds — including suggesting that you might want to use another browser instead of Google's Chrome.
Here are the other doings in AI worth your attention.
OpenAI says propagandists are using its AI tools, rolls out new GPT-4o features to free ChatGPT users
As always, it was another busy week of news around OpenAI. The company, which got into a tussle with actor Scarlett Johansson over her allegation that OpenAI mimicked her voice without permission to help power Voice Mode in ChatGPT, said on May 29 that it completed its "rollout of browse, vision, data analysis, file uploads, and GPTs" to users of the free version of its ChatGPT chatbot, powered by its latest model, GPT-4o.
Voice Mode powered by GPT-4o "will still be launching in the next few weeks in an alpha," with early access for users who pay $20 a month for the Plus version, a company spokeswoman said. A promised desktop-app version of the GPT-4o-powered chatbot "will be coming soon," she added, pointing to a blog post about new features. A desktop app for MacOS started rolling out to Plus users in mid-May.
The company also announced a GPT-4o-powered chatbot specially "built for universities to responsibly deploy AI to students, faculty, researchers, and campus operations," according to a May 30 blog post. It will be available this summer.
But the biggest news last week may be that OpenAI "caught groups from Russia, China, Iran and Israel using its technology to try to influence political discourse around the world, highlighting concerns that generative artificial intelligence is making it easier for state actors to run covert propaganda campaigns as the 2024 presidential election nears," The Washington Post reported.
The propagandists, whose accounts were removed, used ChatGPT to "write posts, translate them into various languages and build software that helped them automatically post to social media," the Post added.
OpenAI, in a blog post that's worth a read, said none of the five groups engaging in "deceptive activity across the internet" were able to "meaningfully increase their audience engagement or reach as a result of our services."
Not sure I'm comforted by that. Bad actors are only going to get smarter at hiding their work. Still, OpenAI says it's on it.
"Threat actors work across the internet. So do we," the company wrote. "Our work against IO actors has disrupted covert influence operations that sought to use AI models for a range of tasks, such as generating short comments and longer articles in a range of languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts."
The company posted its 39-page "Threat Intel Report" as a PDF, calling out bad actors with names including Bad Grammar, Zero Zeno, Doppelganger and Spamouflage. It sounds like the makings of a thriller. Maybe someone should use ChatGPT to turn the report into a movie script?
Here's one other thing related to the world of OpenAI. As noted by CNBC, former OpenAI board member Helen Toner said that one of the reasons the board decided to fire CEO Sam Altman last November (he was hired back a week later) was because, "Sam had made it really difficult for the board to actually do [its] job by withholding information, misrepresenting things that were happening at the company, in some cases outright lying to the board."
Among the examples she gave in an interview on The TED AI Show podcast was that OpenAI's board wasn't told ahead of time that ChatGPT was being released in November 2022. Instead, she said, they found out about it when Altman and company announced the news in a post on Twitter (now X).
L'Oreal, beauty brands turn to AI to help you pick products
Since gen AI is expected to find its way into every aspect of our lives, it should come as no surprise that beauty brands are looking to the tech to change how consumers get advice and buy products.
Last month, "L'Oreal debuted a suite of AI-driven tools that promise everything from skin care analysis to hair color and health analysis to a chatbot that can recommend and help you 'try on' products with assistance from augmented reality," noted CNET's Katie Collins, who got a deep dive into L'Oreal's Beauty Genius AI-powered app.
"If it gets this right, Beauty Genius could eradicate the trial and error many of us go through when buying skin care products and cosmetics, which often results in us wasting money on products that don't suit our skin," Collins said. "In theory, this should also reduce the amount of waste the industry produces by selling unloved products that end up gathering dust in medicine cabinets everywhere."
FCC proposes $6 million fine for fraudster who created Biden robocalls
The US Federal Communications Commission proposed a $6 million fine for the fraudster who tapped AI to create fake and illegal robocalls spoofing President Joe Biden's voice ahead of the New Hampshire presidential primary in January. It was the first time the FCC had taken action in a case involving gen AI tech.
The "malicious" robocalls, created by political consultant Steve Kramer, were sent to thousands of voters two days before 2024's first presidential primary, encouraging them not to vote, the agency said in a press release.
"The message played an AI-generated voice similar to the Democratic president's that used his phrase "What a bunch of malarkey" and falsely suggested that voting in the primary would preclude voters from casting ballots in November," reported The Associated Press, which added that Kramer had admitted to "orchestrating" the deepfake audio message.
Kramer is also facing criminal charges. The AP said that though he didn't respond to a request for comment about the proposed fine, he told the news agency in a February interview "that he wasn't trying to influence the outcome of the election but rather wanted to send a wake-up call about the potential dangers of artificial intelligence when he paid a New Orleans magician $150 to create the recording."
After the New Hampshire robocalls, the FCC voted in February to make AI-generated robocalls illegal. In May, the agency announced a proposal that would require political advertisers to say when they use AI-generated content in TV and radio ads, the AP added, noting that the FCC doesn't have authority to propose a similar rule for ads on digital and streaming platforms.
Elon Musk nets $6 billion to build an AI challenger to OpenAI
Elon Musk, a co-founder of OpenAI who is suing the startup for trying to become a moneymaking business (a move he reportedly once endorsed), said he raised $6 billion from venture capitalists and investors interested in funding his bid to challenge the maker of ChatGPT.
In what it described as one of the "largest venture capital funding rounds of all time," Axios said some of the new backers of Musk's xAI startup are also backing OpenAI. Those lining up behind Musk include Andreessen Horowitz and Sequoia Capital, two of the most prominent VC firms in Silicon Valley. Saudi Arabia's Prince Alwaleed bin Talal is also an investor.
xAI, which launched in July 2023, calls its AI model Grok and released an open-source version called Grok-1 in March, setting up some debate about what "open source" means in this case.
"The funds from the round will be used to take xAI's first products to market, build advanced infrastructure, and accelerate the research and development of future technologies," xAI said in a blog post announcing the funding round. It added that the company is "primarily focused on the development of advanced AI systems that are truthful, competent, and maximally beneficial for all of humanity."
The big AI makers — OpenAI, Microsoft, Google, Anthropic — are all investing billions of dollars into their LLMs and AI chatbots in a race to dominate the market for new gen AI services. "These investments ... reflect the steep costs of running generative AI systems, which require huge amounts of processing power to generate text, sounds and images," The New York Times said.
The Information reported that Musk told investors xAI is planning to build a supercomputer to power the next version of Grok and that he wants it up and running by the fall of 2025.
Speaking at a technology conference in Paris last month, Musk said xAI "still has a lot of catching up to do" to rival technology from OpenAI and Google, the NYT also reported. "Maybe towards the end of this year we will have that," Musk said.