Apple put privacy at the forefront of its AI announcements during its Worldwide Developers Conference.
“With Apple Intelligence, powerful intelligence goes hand in hand with powerful privacy,” said Apple software executive Craig Federighi on Monday during the WWDC keynote address. He said that any given user “should not have to hand over all the details of [their] life to be warehoused and analyzed in someone’s AI cloud.”
The company appears to be tackling privacy with more vigor than its rivals. AI assistants from Meta, Google, OpenAI, and Microsoft have all drawn criticism for scooping up users’ data. Microsoft’s new AI feature, Recall, is being investigated by European regulators worried that the company is invading consumers’ privacy. Even Google itself has said its AI bot Gemini shouldn’t be trusted with your secrets.
Unlike other AI software, Apple said, its AI tools will in most cases run on-device, powered by the hardware inside its products. When a request on an iPhone, iPad, or Mac demands more computational power than the device can supply, Apple will use a new process it calls Private Cloud Compute. That means the company will sometimes send users’ data to remote servers built on Apple’s own hardware, but it says that data will be encrypted and then deleted once it’s no longer needed.
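In rough terms, the on-device-first flow Apple described might look like the following Swift sketch. Everything here is a hypothetical illustration under the assumptions in the paragraph above; the types, the complexity threshold, and the backends are stand-ins, not Apple’s actual APIs:

```swift
import Foundation

// Hypothetical sketch of the on-device-first routing described above.
// `DeviceModel`, `PrivateCloudSession`, and the complexity threshold are
// illustrative stand-ins, not Apple APIs.

enum AIRequestError: Error { case tooComplexForDevice }

struct AIRequest {
    let prompt: String
    let complexity: Int // rough proxy for required compute
}

protocol ModelBackend {
    func respond(to request: AIRequest) throws -> String
}

// Runs entirely on the device's own hardware.
struct DeviceModel: ModelBackend {
    let maxComplexity = 10
    func respond(to request: AIRequest) throws -> String {
        guard request.complexity <= maxComplexity else {
            throw AIRequestError.tooComplexForDevice
        }
        return "on-device answer for: \(request.prompt)"
    }
}

// Stand-in for Private Cloud Compute: on real hardware the payload would
// be encrypted in transit and the server-side copy deleted after use.
struct PrivateCloudSession: ModelBackend {
    func respond(to request: AIRequest) throws -> String {
        let payload = Data(request.prompt.utf8)
        return "cloud answer (\(payload.count) bytes, encrypted in transit)"
    }
}

// Prefer on-device inference; fall back to the private cloud only when
// the request exceeds local capability.
func handle(_ request: AIRequest) -> String {
    if let answer = try? DeviceModel().respond(to: request) {
        return answer
    }
    return (try? PrivateCloudSession().respond(to: request)) ?? "unavailable"
}

print(handle(AIRequest(prompt: "Summarize my notes", complexity: 3)))
print(handle(AIRequest(prompt: "Draft a long report", complexity: 42)))
```

The notable design choice, as Apple frames it, is that the cloud path is a fallback rather than the default: nothing leaves the device unless the device can’t do the work itself.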
“Your data is never stored or made accessible to Apple. It’s used exclusively to fulfill your request,” said Federighi. Even when users opt to use ChatGPT through Siri on Apple devices, the company said it took steps to protect users’ privacy by obscuring IP addresses and securing a concession from OpenAI: The chatbot maker won’t store Siri users’ requests.
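One simple way that kind of IP-obscuring relay could work in practice is sketched below; this is a hypothetical illustration only, and `forwardToChatbot`, the header names, and the scrubbing logic are assumptions, not OpenAI or Apple APIs:

```swift
import Foundation

// Illustrative sketch of an anonymizing relay like the one described
// above: identifying metadata is stripped before a request reaches a
// third-party chatbot. All names here are hypothetical.

struct OutboundRequest {
    var body: String
    var headers: [String: String]
}

func anonymize(_ request: OutboundRequest) -> OutboundRequest {
    var scrubbed = request
    // Drop fields that could identify the user, device, or network origin.
    for key in ["X-Forwarded-For", "Cookie", "User-Agent"] {
        scrubbed.headers.removeValue(forKey: key)
    }
    return scrubbed
}

func forwardToChatbot(_ request: OutboundRequest) -> String {
    // Per the reported agreement, the provider would answer the request
    // without retaining it.
    return "response to: \(request.body)"
}

let original = OutboundRequest(
    body: "Suggest a dinner recipe",
    headers: ["User-Agent": "iPhone16,2", "Cookie": "session=abc123"]
)
print(forwardToChatbot(anonymize(original)))
```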
Neil Elan, senior counsel at Stubbs Alderton & Markiles LLP, said Apple’s new privacy standards “could establish a standard that other AI operators would then need to be held to.” Because of its dominant market position (Apple is the most valuable company in the world), the tech giant could “cause a lot of other AI providers and operators to increase their security processes and protections and adopt similar safeguards.”
For now, market pressure is effectively the only force pushing AI companies to be more accountable to their users on safety and security, because there are no federal regulations governing AI. Elan said that’s just a matter of time.
“It’s going to make its way into federal legislation. That is a fact,” he said.