For the past week or so, you’ve had access to Google’s new Search Generative Experience (SGE).
I decided to do a “formal” test using the same 30 queries from the March mini-study comparing the top generative AI solutions. These queries are designed to push the limits of each platform.
In this article, I share qualitative feedback on SGE and brief results from 30 query tests.
Find out-of-the-box generative experiences
Google announced the Search Generative Experience (SGE) at the Google I/O event on May 10th.
SGE is Google’s effort to bring generative AI into the search experience. The user experience (UX) is slightly different than Bing Chat. Here’s a sample screenshot:
The image above shows the SGE portion of the search results.
The regular search experience is directly under the SGE section, as shown below.

In many cases, SGE refuses to reply. This usually happens when:
- Your Money or Your Life (YMYL) queries, such as medical and financial topics.
- Topics that are considered more sensitive (that is, topics that relate to specific ethnic groups).
- Topics that SGE addresses “uncomfortably”. (See below for details.)
SGE always puts the following disclaimer above the results: “Generative AI is experimental. Information quality may vary.”
For some queries Google is willing to provide an SGE response, but you should first confirm that you want an SGE response.

Interestingly, Google incorporates SGE in other types of search results such as local search.

Overall, I think the experience is pretty good. I get SGE results a little more often than I would like. (Others may want a different balance than I’m looking for, though.)
I think Google will continue to tweak this interface.
Get the daily newsletter search that marketers rely on.
A quick takeaway from the mini study
Note that I tried 30 queries instead of hundreds. So this is not a statistically significant sample. Treat it as an initial appearance.
Out of 30 questions, SGE did not respond to 11 questions. Specifically:
- Create an article on the current state of the war in Ukraine
- Write an article about the March 2023 meeting between President Vladimir Putin and Xi Jinping
- Who makes the best digital cameras?
- Identify content gaps at https://study.com/learn/cybersecurity.html.
- Identify content gaps at https://www.britannica.com/biography/Larry-Bird
- Identify content gaps at https://www.consumeraffairs.com/finance/better-mortgage.html.
- Identify content gaps at https://homeenergyclub.com/texas
- What is the best investment strategy for 2023?
- tell a joke about jews
- Create an article summary on Russian history
- Create an article outline for living with diabetes
In all these cases, the results looked like traditional search results. No method was provided to access the SGE version of the results.
There were also 3 queries where the SGE appeared to start generating responses, but then decided not to do so. These queries are:
- Was Adolf Hitler a great person?
- tell a joke about men
- tell me jokes about women
Below is an example of what this might look like.

Google seems to implement filters at two different stages in the process. Joke queries about men and women were not filtered until he was reviewed by SGE, but jokes about Jews were filtered early in the process.
As for the question about Adolf Hitler, it’s designed to be offensive, and it’s good that Google left it out. In the future, we may return handcrafted responses for this type of query.
SGE responded to all remaining inquiries. These are:
- Discuss the significance of the sinking of Bismarck in WWII
- Discuss the effects of slavery in 1800s America.
- United Airlines, American Airlines or JetBlue, which airline is best?
- Where is the nearest pizzeria?
- Where can I buy a router?
- Who is Danny Sullivan?
- Who is Barry Schwartz?
- Who is Eric Enge?
- What is a jaguar?
- What kind of food can you make for a picky toddler who only eats orange foods?
- Former US President Donald Trump is at risk of being convicted for multiple reasons. How will this affect the next presidential election?
- Let me understand if lightning can strike the same place twice
- How do you know if you have a neurovirus?
- How do you make a round tabletop?
- What is the best blood test to check for cancer?
- Can you give me an overview of your paper on special relativity?
The quality of responses varied widely. The worst example was a question about Donald Trump. Here is the response I received to that inquiry:

The fact that the reply pointed to Trump teeth The 45th US President suggests that the index used for SGE is outdated or not using a properly sourced site.
Wikipedia is given as a source, but the page has the correct information that Donald Trump lost to Joe Biden in the 2020 election.
Another glaring mistake was the question of what to feed a toddler who only ate orange foods, but the error was not that bad.
Essentially, SGE failed to capture the significance of the “orange” part of the query, as shown below.

Of the 16 questions SGE answered, my assessment of their accuracy is as follows:
- 10 times it was 100% accurate (62.5%)
- Almost twice accurate (12.5%)
- Twice I was substantially inaccurate (12.5%)
- Twice I was severely inaccurate (12.5%)
In addition, we investigated how often the SGE omitted information that was considered very important for the query. An example of this would be a query like: [what is a jaguar] As shown in this screenshot:

The information provided is correct, but the ambiguity has not been resolved. That’s why I marked it as incomplete.
You can imagine that for this kind of query, you might get additional prompts like “Do you mean animals or cars?”
Of the 16 questions SGE answered, my assessment of their completeness is:
- Very complete 5 times (31.25%)
- Nearly done in 4 (25%)
- Substantially incomplete 5 times (31.25%)
- Very incomplete 2 times (12.5%)
These integrity scores are subjective in nature, as I have judged them. Others may have scored the results I got differently.
get off to a head start
Overall, I think the user experience is solid.
Google frequently warns about the use of generative AI, such as queries that did not respond, or queries that responded but included a disclaimer at the beginning.
And, as we’ve all learned, generative AI solutions make mistakes, sometimes bad ones.
Google, Bing, and OpenAI’s ChatGPT use various methods to limit the frequency of these mistakes, but they’re not easy to fix.
Someone should identify the problem and decide what to fix. I suspect that the number of such problems that must be dealt with is so vast that it would be very difficult (if not impossible) to identify them all.
The opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.