If it ain’t broke, don’t fix it – that’s a good motto to live by. Even more so if you own one of the most valuable pieces of technology ever created: Google Search.
Key Takeaways: Search Generative Experience & The Death of Search
- Google’s SGE may not improve search results and could damage its reputation.
- Google missed a chance to highlight ChatGPT’s flaws and its own strength in reliable information.
- SGE is invasive compared to Bing and Brave’s AI features.
- AI trained on copyrighted content raises intellectual property concerns.
- AI outputs can compete with original articles, potentially violating copyright law.
- Overreliance on AI could lead to the decline of real journalism and information diversity.
SGE is kind of a big deal – both for Google as a company and a brand, and for people like you and me, its “users”. Search is changing rapidly, and change is always messy, but Google’s current SERPs are in the worst condition most of us have ever seen.
Is Search Generative Experience going to make anything better? Probably not. But we’ll get to that in a minute, covering all the current issues with it, the fact that most LLMs are fundamentally flawed when it comes to reliable information delivery, and what it all means for the web at large.
I’m talking bad information, lawsuits, IP and copyright infringements, and thousands of brands and businesses going out of business. And, perhaps the worst bit: a lower-quality search experience.
OpenAI’s Role in The Creation of SGE
For the last 12 months or so, Google’s SERPs have been terrible. I’ve written about the sorry state of Google’s search at length, so I won’t rehash that here.
The rise of AI has been a huge challenge for Google. It has affected Google’s indexing, hampered its ability to crawl the web, and forced the company into issuing update after update in a bid to remove obviously spammy pages from its SERPs.
Has any of it worked? Well, Forbes is still ranking for everything under the sun, parasite SEO is alive and well as of April 2024, and thousands of honest, trustworthy publishers have been punished and/or removed from the SERPs.
Google did get rid of plenty of pure AI sites – a good thing, because it was getting crazy. But that won’t stop this kind of behaviour, not when a 2,000-page website can be thrown up in a matter of hours.
Who’s to blame? Google? Of course; it is the only one with its finger on the button. When it comes to its index, the buck stops with Google.
But then there’s OpenAI, the company that effectively started the AI hype train. Without OpenAI bursting onto the scene, SGE might never have happened.
You see, Google, rightly or wrongly, assumed that ChatGPT, following its launch and massive adoption rate, represented something of an existential threat to its business. And it was this perception that got us to where we are now: staring down the prospect of Google SGE.
ChatGPT’s Limitations, Google’s Missed Opportunity
Personally, I don’t think ChatGPT is as big a threat as Google initially thought. It’s inaccurate and terrible for reliable information – great at tasks, not so much facts, as millions of students, bloggers, and internet readers have discovered over the past 18 months or so.
An OpenAI search engine that leverages ChatGPT could very well be a threat to Google, and one is apparently in the works, but Google is still Google and will remain the dominant search engine for many years to come. People don’t just switch out habitual behaviour like “Googling stuff”; it takes time.
Which leads me to my next point…
Where the hell were Google’s PR team when ChatGPT launched?
Yes, OpenAI’s chatbot was impressive when it launched, but it wasn’t accurate, it wasn’t trustworthy, and it could not be relied on to give the best possible information to its users. That remains as true today as it was when OpenAI first opened ChatGPT up to consumers.
Reliability, trustworthy information retrieval and dissemination. That’s Google’s job. It’s something it has built its brand around over the course of the last two decades.
More than Wikipedia, more than anything else, Google – for all of its foibles and missteps – is where the overwhelming majority of the world’s internet users get their information, handling over 90% of all searches.
Rather than scrambling to make its own version of ChatGPT, Google would have been wiser to spend a few hundred million on adverts highlighting ChatGPT’s shortcomings, focusing on the fact that it is NOT reliable for information retrieval and, generally speaking, cannot be trusted as a proper source of information.
A great example of why SGE is dumb. Because AI has no ability to understand the fact that THIS WAS A PLAY ON WORDS pic.twitter.com/vFuOsXIsTE
— Lily Ray 😏 (@lilyraynyc) April 2, 2024
And to make matters worse, SGE is so heavy-handed, so invasive – it will soon appear for all searches and queries. Bing’s implementation of generative AI is much less invasive. Ditto Brave browser’s Leo AI.
They’re there if YOU want them, but they don’t take over the entire screen by default.
SGE’s Fallout & Potential Consequences
But rather than doing this – rather than focussing on why its search and its algorithms are safer and better for information retrieval than ChatGPT will ever be – Google decided to try and copy it with its own LLM, which, of course, suffers from much the same problems as OpenAI’s: it hallucinates, it gets things wrong, and it likes to serve up spam.
So the obvious question here is this: will SGE end up on Google’s never-ending heap of failed projects, or will Google throw caution to the wind, along with its hard-earned reputation, and press on with a product that I’d argue 90% of its current users don’t really want or need?
Sadly, it would appear that Google is doing the latter. Although you can turn off generative AI in Google Search – so that’s something.
And make no mistake: the fallout from this is going to be enormous.
Generative AI models are often trained on copyright-protected content from news publishers without permission or compensation. This unauthorized use harms publishers by reducing revenues, tarnishing brands, and undermining relationships with readers.
News Media Alliance
And if that wasn’t enough to worry you, think about this: AI is a black box. It cannot really be controlled – not in the way it needs to be for the level of usage Google has in mind.
Amit Singhal, who was Head of Search at Google until 2016, always emphasized that Google would not use artificial intelligence for ranking search results. The reason he gave was that AI algorithms work like a black box and that it is infeasible to improve them incrementally.
Then in 2016, John Giannandrea, an AI expert, took over. Since then, Google has increasingly relied on AI algorithms, which seem to work well enough for mainstream search queries. For highly specific search queries made by power users, however, these algorithms often fail to deliver useful results. My guess is that it is technically very difficult to adapt these new AI algorithms so that they also work well for that type of search query.
While the old guard in Google’s leadership had a genuine interest in developing a technically superior product, the current leaders are primarily concerned with making money. A well-functioning ranking algorithm is only one small part of the whole. As long as the search engine works well enough for the (money-making) mainstream searches, no one in Google’s leadership perceives a problem.
Naturally, this would be a good time for a competitor to capture market share. Problem is, the infrastructure behind a search engine like Google is gigantic. A competitor would first have to cover all of the basic features that Google users are used to before they would be able to compete on better ranking algorithms.
Hacker News
The room for error is massive, and the resulting fallout – from PR disasters to lawsuits – has the potential not only to ruin Google’s reputation for being, well… Google, but also to leave it embroiled in a never-ending slew of legal battles with publishers and content creators.
Generative AI & Copyright Law – It’s About To Get Messy…
A generative AI model or LLM is only as good as its training data, and Google and OpenAI have trained their models on the internet, using data and publications’ content (meaning copyrighted intellectual property) without consent, attribution or remuneration.
And it doesn’t matter if you spend $500,000 or $500 a month on unique, human-written content. With AI models scraping the internet, all media companies – from large-scale publishers to bedroom bloggers – are in the same boat.
Many SEO experts have long speculated that this is why Google’s most recent slew of algorithm updates has pushed publishers towards long-form, detailed coverage of topics grounded in first-hand experience – this type of content is better for training AI models.
Less than 65% of searches result in clicking through to the underlying source. That percentage is only going to increase with narrative search results.
Indeed, marketing experts expect click-through rates for generative search responses to be even lower than already declining rates for organic results.
Particularly for informational searches, Google will aggregate (or flat-out plagiarize) from the search results and give users much of what they’re looking for. Users may find all the information they need directly on the search page, so there’s no need to click on the source website.
NMA Report
This is causing a lot of issues and concerns over AI’s place in our society. If Big Tech firms can just steal another business’s intellectual property and pass it off as their own, the long-term effects are obviously going to be enormous and cannot be overstated.
Unchained from constraints to serve as no more than an electronic reference or bridge to a primary source, narrative search results can provide users with sufficient content (full key portions and highlights of expressive content) that substitutes for any need to read the original. As a recent New Yorker article explains, the goal of large language models, like OpenAI’s ChatGPT and Google’s Bard, is to ingest the Web so comprehensively that it might as well not exist.
News Media Alliance
The News Media Alliance argues that the unauthorized ingestion of publisher content into commercial AI training likely exceeds fair use and violates publishers’ exclusive rights under copyright law. The copying is not transformative: it exploits the expressive content, occurs on a massive scale, and produces outputs that can directly compete with and substitute for the original articles.
In other words, AI models like ChatGPT and Google’s SGE are nothing more than fancy plagiarism machines – albeit ones that are now increasingly trained to push the corporate ethics of billion-dollar companies like Google.
Where are the checks and balances? What happens when all the information you get is not from independent sources but from Google’s machine brain? How is that a good thing? It’s about as close to a dystopian nightmare as I can think of.
But here’s the real kicker: what happens when all the content sites go out of business and all you’re left with is TikTok influencers, generic cookie-cutter AI answers, and Quora feeds?
Real journalism cannot survive something like this. More and more publishers will go behind paywalls and smaller publications will simply cease to exist.
And then who will the AI models steal from? Talk about a digital ouroboros.