Opinion: Apple’s Strongest Card Is Privacy And Security; Is It Enough To Win The AI Game?
People all over the world were eager to see what Apple would show at its annual Worldwide Developers Conference (WWDC) a few days ago. After OpenAI's new GPT-4o and Google's Gemini AI updates, expectations were high.
Apple had already hinted that building an AI model capable of competing in the AI assistant race was beyond its reach. Then the rumors that Apple would partner with OpenAI turned out to be true.
Siri is not as smart as ChatGPT, but combining forces—even if this made Elon Musk rage—is a big move to help keep Siri afloat.
However, giving Siri access to ChatGPT (with the user's consent) is somewhat disappointing coming from a company known for technological innovation. The move also lacks the "magic" needed to ignite excitement and make Siri stand out as one of the best AI assistants of the moment.
Considering Apple reportedly isn't paying for the GPT-4o service, I find myself wondering: in a business partnership, information would have to flow in both directions, right? What benefits, other than exposure and mass distribution, is OpenAI getting from this deal?
Maybe Musk was onto something when he shouted—or at least I read it that way—on X: “If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation.”
The general fear of data exposure and control among AI-wary users is one of the reasons why Apple is trying so hard to convince everyone that its core values are the same as its most powerful card in the AI game: safety and privacy. During the WWDC, all Apple experts emphasized safety measures and repeated that privacy is among their major concerns.
The new Apple Intelligence will be aware of every single move made by users “without collecting personal data.” At least that’s how Apple explained the advantages of the new AI features at the WWDC, but to me, it sounds like a contradiction. Is this a wise strategy? Or is it just a desperate attempt to gain trust at a time when this technology is advancing so fast there’s little regulation in place, and it’s nearly impossible to keep up?
Can Apple truly guarantee privacy and safety, or is it just a business strategy to gain lost ground?
Apple, OpenAI, People, And Privacy
Just a few days ago, 7 former OpenAI employees, 4 current ones, and 2 Google DeepMind workers published a public warning expressing their concerns over the advancement of AI in terms of safety and security risks, including spreading misinformation, worsening social inequalities, and gaining uncontrollable autonomy that could result in human extinction.
OpenAI has already seen a string of safety-related departures, and its board even briefly ousted CEO Sam Altman. After resigning, former OpenAI researcher Jan Leike posted on X that "safety culture and processes have taken a back seat to shiny products."
Even if the new alliance goes well, Apple hasn't proven bulletproof against data breaches, either. Despite its strong security reputation, the company has suffered several incidents, such as the zero-click iOS exploits of 2021 and 2023. And now, the threat actor IntelBroker is claiming to have access to three of Apple's internal tools.
Apple's "privacy" card in the AI game could weaken even further when we consider that users haven't been prioritizing safety and security either. According to a recent Pew Research Center study, 70% of Americans don't trust AI companies with their data, yet 6 out of 10 skip reading privacy policies.
This suggests that many users care about privacy but aren’t taking the necessary steps to learn about or advocate for what they deem acceptable in this sense.
While partnering with OpenAI might boost Siri's capabilities, it also places Apple at the crossroads of innovation and ethical responsibility. It will be a delicate balance between enhancing the user experience and safeguarding the privacy and trust that form its brand identity.
It seems like a Sisyphean task to continue advocating user privacy in an AI ecosystem that is increasingly complex and challenging to secure.