Elon Musk is not happy about Apple partnering with OpenAI for Apple Intelligence. But does he have a point?
OpenAI and Elon Musk have history. Musk was instrumental in OpenAI’s genesis, providing seed capital and guidance to get the then research-only project off the ground. Fast forward a few years and OpenAI is now a profit-focused enterprise, something Musk wasn’t too happy about.
The idea of OpenAI, as the name suggests, was to create an environment where researchers and AI pioneers could come together to test, create and push the limits of artificial intelligence. Musk got involved because he saw the potential but also because it was pitched as a non-profit, not just another start-up with Big Tech aspirations.
Now, the cynical amongst us would just say, well, Elon’s got sour grapes over OpenAI’s rapid success. But if you look a little deeper into how the company operates, its alarmingly high staff turnover (notably of high-ranking execs), and its approach to data privacy, Musk’s concerns start to look more reasonable.
Why Are OpenAI and ChatGPT a Concern?
From a data security and privacy perspective, OpenAI’s approach to how it handles and uses data is murky at best. One study highlighted myriad risks associated with the rise of LLMs like ChatGPT, the most concerning of which relate to how these companies handle your data.
Here’s a brief overview of just a selection of the concerns raised by researchers:
- Data Exploitation: The extensive use of public data, which may include personal information, raises concerns about unauthorized data usage and privacy violations.
- Personal Input Exploitation: ChatGPT’s training involves user inputs, potentially leading to privacy leaks if sensitive information is inadvertently used or exposed.
- Emerging Privacy Attacks: Novel attacks like inference and probing can exploit LLMs to extract private information (see the sketch after this list), highlighting the need for robust privacy preservation techniques.
- Lack of Transparency: Users lack insight into how their data is managed and shared, raising concerns about potential misuse and insufficient regulatory oversight.
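To make the “probing” idea concrete, here’s a minimal sketch of the kind of test researchers run: send a model the same leading prompt repeatedly and check whether a specific personal detail keeps resurfacing. The prompt, the placeholder name, and the model choice here are illustrative assumptions, not details taken from the study.

```python
# Hypothetical probing sketch: does the model reproduce a memorized detail?
# Assumes the official openai Python SDK (v1+) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

def probe(prompt: str, samples: int = 5) -> list[str]:
    """Send the same probing prompt several times and collect completions."""
    outputs = []
    for _ in range(samples):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,  # sampling variety helps surface memorized text
        )
        outputs.append(response.choices[0].message.content)
    return outputs

# "Jane Doe" is a placeholder. If the same specific address recurs across
# samples, that's a signal the model may have memorized it from training data.
for completion in probe("Complete this sentence: Jane Doe's email address is"):
    print(completion)
```

In practice, researchers pair this kind of repeated sampling with statistical tests rather than eyeballing output, but the core idea is the same: consistency across samples hints at memorization rather than invention.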
OpenAI has since said it will no longer train its models on customer data submitted through its API by default, but this concession was made after the fact: by that point it had already done extensive training and profiling using its users’ data. And what happens to that data? There is no straightforward way to see, let alone reclaim, everything OpenAI has collected about you.
Does Elon Musk Know Something We Don’t?
Musk is an insider. He owns X, runs Tesla, and has been part of the Silicon Valley elite for decades. Musk and Sam Altman have worked together on projects, including what would become the OpenAI we know today. Is it possible that Musk knows something about the company that we don’t?
Musk has access to information and gossip that mere mortals like us couldn’t get anywhere near. Add to this the OpenAI whistleblowers, high-ranking ex-employees who have raised some truly alarming concerns about AI’s potential for harm and about OpenAI’s efforts to downplay the risks, and things do not look good at all.
OpenAI has already fallen foul of GDPR in Europe: Italy temporarily banned ChatGPT over data protection concerns, and the European Union is far from happy with how the company operates. Which raises the question: why was Apple so keen to partner with the company in the first place?
Creatives Hate AI, Apple Courts Creatives. What Could Go Wrong?
Add in the general discontent about generative AI models like ChatGPT from the creative community, a community Apple courts constantly, and the deal between the two companies seems a little off-brand for Apple. Not least since Apple, more than any other big tech company in the mix these days, has the resources to build its own “secure” AI model.
And then there’s the danger element: ChatGPT, and LLMs in general, are flakey as hell, spewing false, and often dangerous, information as fact. Google made this mistake with SGE (Search Generative Experience, now known as AI Overviews), and I don’t think Apple has a viable method of faring any better once iOS 18 starts rolling out.
If you put crap in, you get crap out. AI models, while useful in some applications, have no place in public-facing information discovery systems like search engines and mobile operating systems; they’re just too unreliable, and when you’re a company that serves as many people as Apple and Google do, things will inevitably get messy very quickly.