Ex-Google CEO Eric Schmidt is a bit concerned about AI. And if he is, we probably should be too…

Google’s bet the house on AI, bringing it to Search, Android, Workspace and pretty much every other vertical it operates in. Apple will do more or less the same in 2024. And OpenAI and Microsoft are all-in on AI too. 

What could possibly go wrong? According to ex-Google CEO Eric Schmidt, quite a lot – and he’s not alone in his fears about freeing the AI genie from its bottle. 

Schmidt sat down for an interview recently and was pressed on AI, Google’s role in its propagation, and how he sees things playing out over the next few years. 

Here’s everything you need to know about Schmidt’s predictions, fears, and potential solutions for the dawning AI age. 

🚀 Rapid AI Advancements

  • AI technology is evolving at breakneck speed, outpacing human adaptability.
  • “It’s important for everybody to understand how fast this is going to change,” Schmidt warns.

📉 Decreasing Costs, Increasing Quality

  • Training AI models is becoming cheaper and more efficient, leading to better and more accurate outputs.

⚖️ Profound Negatives

  • Issues like algorithmic bias, attribution, and copyright disputes remain unsolved.
  • Legal, ethical, and cultural questions are expected to arise across various fields.

🔒 Extreme Risks

  • AI could enable massive loss of life if companies like OpenAI, Google, Microsoft, and Anthropic aren’t properly regulated.
  • Financial incentives of AI firms may not align with human values.

🌍 Need for Global Regulations

  • Governments must lead with regulations, though it’s challenging to foresee and prevent every potential misuse of AI.
  • Open-source models and differing national commitments complicate regulation efforts.

🛡️ National Security Concerns

  • Positive impacts on U.S. national security will be slow due to government procurement processes.
  • Future security issues will depend on rapid innovation.

⚔️ AI in Warfare

  • AI could make military decisions faster, potentially reducing human oversight.
  • Current laws require human control, but future scenarios might involve automated targeting.

🇨🇳 China’s AI Ambitions

  • China, despite starting late, is advancing rapidly in the AI race.
  • Challenges include finding sufficient Chinese language data and restrictions on advanced chips.
  • Schmidt predicts China will eventually lead the AI race.

🌀 Proliferation of AI

  • As training costs decrease, more nations and groups will develop AI, potentially with harmful intentions.
  • AI-powered psychological and information warfare could disrupt future elections.

🛠️ Potential Solutions

  • Control open-source AI use and improve regulations on social media companies to curb manipulation.
  • Encourage innovation in detecting false AI-generated content.
  • The cyber marketplace will eventually adapt, creating economies around detecting and countering AI misuse.

Bottom Line?

Eric Schmidt’s insights highlight the urgent need for regulation, ethical considerations, and innovation to navigate the rapid advancements and potential dangers of AI technology.

Innovation is one thing. But the world – at large – is not ready for the kinds of shakeups that AI is bringing. Just look at what’s happened to Google’s Search product – it’s gone to the dogs. 

Thousands of businesses have been dropped from Google’s index in its Helpful Content Update (HCU), and thousands more jobs will disappear from the market as more and more firms adopt AI models to replace staff. 

And then things will just continue to snowball. AI isn’t stopping. Big Tech firms need to ensure their shareholders get growth (by any means necessary). It doesn’t matter if the world isn’t ready, AI – in one form or another – is coming and there’s nothing anyone can do to stop it. 

Even the world’s governments – you know, the leaders of our countries – are having a hard time keeping Big Tech in check. And if Big Tech doesn’t care what governments think, you can bet it doesn’t really care about your job, feelings, or livelihood either. 
