How Apple plans to protect your AI data


It’s no secret that Apple is developing artificial intelligence features that will be rolled out with iOS 18 and macOS 15. When you update your iPhone, iPad, and Mac later this year, you might find a more natural-sounding Siri, or the ability to generate emojis based on what you're talking about in Messages. Cool, but how will Apple protect your data when AI handles all these nifty new features?

While reports suggest Apple will run many of these features on-device, at least on its newer products, rumors also indicate the company plans to offload much of the processing to the cloud. That wouldn't be unusual in the industry: most AI processing currently happens in the cloud, simply because it's so computationally intensive. That's also why companies keep boosting the capabilities of NPUs, or neural processing units, the specialized processors designed to handle artificial intelligence tasks. Apple has been shipping NPUs for years, but earlier this year it hyped up the new M4 chip's powerful NPU, while Microsoft launched a new AI PC standard with its Copilot+ PC lineup.

Running AI on devices is safer

Of course, it may not matter to you whether an AI feature runs on your phone or in the cloud, as long as the feature works properly. The catch is that running these features on the device is inherently more secure. By pushing processing to the cloud, companies risk exposing user data to anyone with access to their servers, especially if the service doing the processing needs to decrypt that data first. Those with access include employees of the companies involved, but also bad actors who might break into the company's cloud servers and steal whatever customer information they can find.
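To make that risk concrete, here's a minimal Swift sketch using Apple's CryptoKit. It's an illustration under assumptions, not any real service's code: the shared symmetric key stands in for the transport encryption (like TLS) that protects data on its way to the cloud. The point is that the server still has to decrypt the prompt before it can do anything with it.

```swift
import CryptoKit
import Foundation

// Illustration only: "encrypted in transit" still means the provider
// sees plaintext. The shared key below stands in for TLS.
let transportKey = SymmetricKey(size: .bits256)

// On your device: the prompt is encrypted before it leaves.
let prompt = Data("a question about my medical records".utf8)
let sealedInTransit = try ChaChaPoly.seal(prompt, using: transportKey)

// On the provider's server: to run the AI model at all, the service
// must decrypt the prompt. Anyone with access to this process, its
// memory, or its logs can now read your data.
let plaintextOnServer = try ChaChaPoly.open(sealedInTransit, using: transportKey)
print(String(decoding: plaintextOnServer, as: UTF8.self))
```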

This is already an issue with services like ChatGPT, and it's why I recommend not sharing personal information with most cloud-based AI services: your conversations are not private, and they're all fed to these servers, both for storage and for training artificial intelligence models. Companies like Apple that are invested in user privacy prefer on-device solutions whenever possible, since keeping user data isolated on your phone, tablet, or computer demonstrably prevents others from stealing it.

How Apple will use the Secure Enclave to protect AI data

While newer Apple hardware should be powerful enough to run the AI features the company is developing, for older devices, or for features that are simply too power-hungry, Apple may be forced to turn to cloud-based servers to run them at all. However, if The Information's report, cited by Android Authority, is accurate, the company may have found a solution: the Secure Enclave.

The Secure Enclave is built into most Apple products in use today. It's a dedicated subsystem on the SoC (system on a chip), isolated from the main processor, and its job is to store your most sensitive information, such as encryption keys and biometric data. That way, even if the main processor is compromised, the Secure Enclave ensures bad actors can't access its data.
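You can see that isolation in the tools Apple already gives developers. Here's a minimal Swift sketch using CryptoKit's SecureEnclave API: the app asks the enclave to create a signing key, and the private key never leaves the enclave; the app only ever handles an opaque reference to it.

```swift
import CryptoKit
import Foundation

// Bail out on hardware without a Secure Enclave (e.g. older Macs).
guard SecureEnclave.isAvailable else {
    fatalError("No Secure Enclave on this device")
}

// Ask the enclave to generate a P-256 signing key. The private key is
// created and stored inside the enclave; it is never exported.
let privateKey = try SecureEnclave.P256.Signing.PrivateKey()

// Signing happens inside the enclave, too: the app hands data in and
// gets a signature back, without ever touching the key material.
let message = Data("prove it's really this device".utf8)
let signature = try privateKey.signature(for: message)

// The public key can be shared freely and used to verify as usual.
print(privateKey.publicKey.isValidSignature(signature, for: message)) // true
```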

According to The Information, Apple is developing a cloud AI solution that will send all AI user data to the Secure Enclave of M2 Ultra and M4 Macs running in its server farms. There, those server Macs can process the request while keeping it encrypted, then send the results back to the user. In theory, this process keeps user data safe while still letting older devices access Apple's latest artificial intelligence features.
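The report doesn't describe the actual protocol, so the following Swift sketch is purely conceptual, not Apple's implementation; the server enclave's key pair is a hypothetical stand-in for a key the server-side Secure Enclave would publish. The idea it illustrates: the device derives a shared key via key agreement and encrypts the request under it, so everything outside the enclave, including the rest of Apple's server stack, only ever sees ciphertext.

```swift
import CryptoKit
import Foundation

// Hypothetical flow, not Apple's actual protocol. In a real deployment
// the server enclave would publish its public key; here we generate one
// locally just so the sketch runs end to end.
let serverEnclaveKey = P256.KeyAgreement.PrivateKey()   // lives "inside" the enclave
let serverEnclavePublicKey = serverEnclaveKey.publicKey // what the device sees

// 1. Device: derive a shared symmetric key (ECDH + HKDF) that only the
//    server's enclave can also derive, then encrypt the AI request.
let deviceKey = P256.KeyAgreement.PrivateKey()
let deviceSecret = try deviceKey.sharedSecretFromKeyAgreement(with: serverEnclavePublicKey)
let requestKey = deviceSecret.hkdfDerivedSymmetricKey(
    using: SHA256.self, salt: Data(),
    sharedInfo: Data("ai-request".utf8), outputByteCount: 32)
let sealedRequest = try ChaChaPoly.seal(Data("summarize my notes".utf8), using: requestKey)

// 2. Server enclave: derives the same key from the device's public key.
//    Decryption, inference, and re-encryption all happen in here; the
//    rest of the server only ever handles sealedRequest's ciphertext.
let enclaveSecret = try serverEnclaveKey.sharedSecretFromKeyAgreement(with: deviceKey.publicKey)
let enclaveKey = enclaveSecret.hkdfDerivedSymmetricKey(
    using: SHA256.self, salt: Data(),
    sharedInfo: Data("ai-request".utf8), outputByteCount: 32)
let request = try ChaChaPoly.open(sealedRequest, using: enclaveKey)
print(String(decoding: request, as: UTF8.self)) // "summarize my notes"
```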

We won't know for sure whether this is Apple's plan until the company reveals what, if anything, it's working on at WWDC. And if Apple stays quiet about how it protects AI user data, we may never know for certain. But given that Apple bills itself as a company that cares about user privacy, this approach (or any method that keeps cloud-based data encrypted end to end) would make a lot of sense.