
ChatGPT, Sora & The Race To The Bottom

AI is making life easier for content creators, but at what cost? How long before 100% human-created content becomes a truly unique selling point for brands? 


AI is useful. OpenAI’s ChatGPT and other platforms like Claude have made creating written content, previously a long-winded and relatively costly process, nothing more than a click of a button. Anyone can do it. All you need is an OpenAI account, a PC, and the ability to write a basic prompt. 

And now, with things like Sora, video content will be getting the same treatment. Pretty soon, YouTube and other video platforms will be inundated with AI videos, clogging up your feed with useless nonsense. 

Hollywood is hell-bent on making AI the driving force of its creative output, much to the chagrin of its actors, writers, and the people who actually care about film. Sounds insidious, doesn't it? That's because it is – AI is to creativity what porn is to sex. 

And no one seems to know what to do about this. 

OpenAI isn’t going anywhere. Google is spending hundreds of billions on AI over the course of the next 10 years, and even Apple is getting in on the act. Pretty soon, AI will be in your phone, in your smart speaker, and providing you with 90% of your information. 

And this is a bit of an odd spot to find ourselves in. It almost feels like we’ve sold our soul to something we don’t yet fully understand.

Marketers Love It, Readers Most Likely Hate It

AI has made marketing orders of magnitude easier. A spammer can now throw up a 1,000-page website with the click of a button. You can flood social networks like Facebook, LinkedIn, and Reddit with AI content. Even YouTube has its fair share of 100% AI-generated content. 

And you know what? It all sucks. I'm no luddite, and I don't have a problem with AI being used as a tool, but when it replaces a human being's creative process – stops them thinking about what they're writing or shooting or conveying – you're in a bad place. 

This is the place where art goes to die. 

It's bad enough that influencers (paid shills who'll promote anything for money) clog up 90% of internet traffic, but now we have a situation where Google's reactive algorithm can no longer tell the difference between good content and AI content spun up for the sole purpose of making money. 

This is why Google’s SERP has been terrible for the past 12 months. 

Google's bots and crawlers are being inundated with trillions more requests than they were receiving 18 months ago, and the reason? AI. 

And then, in a bid to counteract this, Google issues algorithm updates and thousands of legit sites get punished – many of which went out of business in 2023 – while AI content farms continue to gain traction in a never-ending game of digital whack-a-mole. 

To make matters worse, AI models like GPT-4 are getting better and better, making purely AI-generated content harder and harder to spot. But being convincing is one thing; being accurate is another thing entirely. 

No matter how impressive an AI's ability to write is, the fact remains that these Large Language Models (LLMs) make stuff up – they hallucinate. They also cannot create; they can only copy and assimilate. 

In this respect, they’re basically just fancy content spinners, taking already published works – either a book or a website the LLM was trained on – and then piecing together words and sentences in a logical order. This is why the New York Times is suing OpenAI. 

The Times is the first major American media organization to sue the companies, the creators of ChatGPT and other popular A.I. platforms, over copyright issues associated with its written works. The lawsuit, filed in Federal District Court in Manhattan, contends that millions of articles published by The Times were used to train automated chatbots that now compete with the news outlet as a source of reliable information.

The suit does not include an exact monetary demand. But it says the defendants should be held responsible for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of The Times’s uniquely valuable works.” It also calls for the companies to destroy any chatbot models and training data that use copyrighted material from The Times.

I suspect there'll be more and more lawsuits too because, when it really comes down to it, AI in its current form isn't entirely ethical. If the model is trained on someone else's hard work and investment, why should OpenAI get to benefit from it without so much as a citation? 

There Is No Solution To AI Spam

If you like human-written content that is nuanced, well thought out, thought-provoking, and written from the perspective of an actual person, the internet you know and love is going away. Pretty soon, AI will creep its way into every facet of your life, from your Google searches to your phone. 

You won’t be able to escape it and, I suspect, more and more people will revert to things like Feedly, Medium, Reddit, and newsletters from sites and content creators they like and/or respect. 

Google's SERP is increasingly becoming worthless. Parasite SEO and AI content have completely compromised it. I never thought I'd switch to Bing, but here I am in 2024, using it more and more after failed searches in Google. 

Rather than doubling down on methods to counteract this wave of spam, Google is now actively investing in technology – and a user experience – that is based on EXACTLY the same thing: content spinning, content theft, and plagiarism. 


Google is now scraping the entire web and offering up a "snippet" answer to questions and queries typed into its search engine – it's called Google SGE (Search Generative Experience). Keep in mind that this kind of behaviour, if it were done by a publisher with a website, would get that publisher banned from Google's index. 

Google says you cannot steal other people's content and pass it off as your own. That's Google guidelines 101. And yet here we are with Google SGE, Google's very own plagiarism machine, doing just that. 

Google is not the company it once was; it is now living in fear, having its own existential crisis, as future behemoths like OpenAI begin to step on its toes. 

But whereas Google has always traditionally been a curator of information, a middle-man if you will, OpenAI – and generative AI tech in general – is the opposite: it is a content creator; it makes things for people to consume. 

And regardless of what anyone thinks about AI, whether it is a force for good or bad or just something else entirely, I really do not want to live in a world where 90% of the information I consume is written by a freaking machine. 

I've even had to return books on Kindle because they were obviously just puked out of GPT-3 with zero human editing.

That’s the end of art, the end of human creativity, and probably the start of some kind of new dark age, where a select few humans scramble around online, hungry for something written and created by an actual human being. 

Richard Goodwin

Richard Goodwin is a leading UK technology journalist with a focus on consumer tech trends and data security. Renowned for his insightful analysis, Richard has contributed to Sky News, BBC Radio 4, BBC Radio 2, and CNBC, making complex tech issues accessible to a broad audience.
