UK and European iPhone users will miss out on several new features in iOS 18, thanks to EU regulation
TL;DR
No Apple Intelligence For UK / EU iPhone Users
- Apple’s AI technology is designed for high personalization and secure processing, either on-device or in Apple’s cloud servers. The company is collaborating with OpenAI for certain aspects while emphasizing security.
- Critics, including Elon Musk, question Apple’s security claims given OpenAI’s business model and approach to privacy.
- Apple argues that the EU’s Digital Markets Act (DMA) data-sharing requirements would compromise their products’ integrity and user privacy.
- As a result, Apple is withholding three features from EU users: iPhone Mirroring, SharePlay Screen Sharing enhancements, and the Apple Intelligence suite.
- This decision creates a disparity in iPhone capabilities between EU countries and the rest of the world.
- The UK’s situation is uncertain due to its recent Digital Markets, Competition and Consumers Bill, which resembles the EU’s DMA. Apple is still assessing how this might affect feature rollout in the UK.
Apple’s first big roll of the dice with AI is called Apple Intelligence, bringing the company’s software and hardware back in line with its peers – all of whom are now deeply invested in AI.
Apple’s upcoming AI technology is designed to be highly personalized and processed either on-device or in Apple’s secure cloud servers. The company is working with OpenAI for “aspects” of the experience but, again, Apple says security is of the utmost importance.
Critics – including Elon Musk – argue that Apple cannot credibly make these guarantees given the nature of OpenAI’s business and its approach to privacy and security.
DMA “Compromises Integrity” of Apple Products
Apple believes that adhering to the DMA’s data-sharing requirements would compromise the integrity of its products and put user privacy at risk.
As a result, Apple has decided to withhold three key features from EU users: iPhone Mirroring, enhancements to SharePlay Screen Sharing, and the new Apple Intelligence suite.
This means that iPhones sold in EU countries may have different capabilities compared to those sold in other parts of the world.
The situation in the UK is less clear. While no longer part of the EU, the UK recently passed its own Digital Markets, Competition and Consumers Bill, which shares many similarities with the EU’s DMA.
This new law could potentially affect the rollout of Apple’s new features in the UK as well, though Apple is still evaluating the situation.
The Problem With Current AI Models
LLMs like OpenAI’s ChatGPT and Google’s Gemini have been courting controversy since they first hit the market – and rightly so.
They are all built and trained on third-party intellectual property without attribution, consent, or remuneration for the owners and creators of said content.
This is a problem because, as many have noted on X, it is effectively stealing content from creators and leaving them out of pocket – and that applies to small blogs and mega-corporations like the New York Times alike.
And for many – myself included, as someone who works as a content creator and publisher – this is the 800-pound gorilla in the room right now. It isn’t ethically right to openly steal content or IP, be it music, the written word, or anything else, and then attempt to pass it off as something unique.
Indeed, major record labels in the US are currently filing lawsuits against AI music companies and pushing for a trial by jury – so no out-of-court settlements. Sounds like they’re aiming to make an example of these companies, right?
The result of these lawsuits, if the judge rules in favor of the record labels, could have significant implications for firms like Google, OpenAI, and Meta.
What Most People Don’t Know About AI
The AI models we have now – ChatGPT, Claude, Gemini – are NOT artificial intelligence in the way most people think. They’re about as far from being sentient or thinking for themselves as your underpants. They’re complex, sure, but they’re just word and syntax spinners, nothing more – that’s why they need to be trained on data.
It is also why they cannot create; they can only replicate (and spin) what they already know. This applies to the written word, code, music, and even images – none of it is original, it’s all spun content.
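To make the “spinner” point concrete, here’s a toy sketch in Python – purely illustrative, and not how any production model is actually built. It’s a crude bigram model that learns which word follows which in its training text, then “generates” new text by replaying those patterns. Real LLMs are neural networks trained on vastly more data, but the underlying principle – predict the next token from patterns observed in training material – is the same, which is exactly why the source of that training material matters.

```python
import random
from collections import defaultdict

# Toy "word spinner": a bigram model that learns which word tends to
# follow which in its training text, then generates by replaying those
# patterns. Purely illustrative -- real LLMs are neural networks trained
# on enormous corpora -- but the core idea of predicting the next token
# from patterns seen in training data is the same.

TRAINING_TEXT = (
    "apple is building apple intelligence and apple is testing "
    "new features while regulators are testing new rules"
)

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def spin(model, start_word, max_words=12):
    """Generate text by repeatedly picking an observed follower word."""
    word = start_word
    output = [word]
    for _ in range(max_words):
        followers = model.get(word)
        if not followers:  # no observed continuation, stop here
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

model = train_bigrams(TRAINING_TEXT)
print(spin(model, "apple"))
# Every word in the output appeared in the training text; the model can
# only recombine what it has already seen.
```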
And where do these AI companies get their training data? You guessed it: the internet, from blogs and websites. This is why Reddit recently sold its soul to Google for cash – Google wants to train its AI models on Reddit’s myriad subreddits. Again, without consent from the users.
Do Google, OpenAI, Anthropic, or Meta have the right to use third-party content, without consent, to train their AI models? This is a huge legal debate that is now happening. The fact there is even a “debate” about it tells you everything you need to know about Big Tech’s influence on society at large.
Big Tech will make billions from AI, but the creators that made the content it is trained on? They’ll be picking up income support cheques this time next year, missing mortgage payments, closing businesses, and being forced to find work someplace else.