Apple Intelligence privacy sets a new standard, but it’s not perfect


Apple Intelligence privacy is stronger than that of any other AI company, but even its security protections aren’t perfect once ChatGPT gets involved.

That’s the argument made by the security chief at Inrupt, the privacy-focused company co-founded by the inventor of the world wide web, Tim Berners-Lee …

Apple Intelligence privacy protections

We’ve previously described Apple’s three-stage hierarchy for AI features (sketched in code below the list):

  1. As much processing as possible is done on-device, with no data sent to servers
  2. If external processing power is needed, Apple’s own servers are the next resort
  3. If they can’t help, users are asked for permission to use ChatGPT
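
Purely as an illustration, here’s how that fallback logic might look written out in Swift. Everything below is a hypothetical sketch: the `Route` and `AIRequest` types, the `routeRequest` function, and the capability checks are stand-ins invented for this example, not Apple’s actual APIs.

```swift
// Hypothetical sketch of the three-stage routing, not Apple's actual code.
enum Route {
    case onDevice             // Stage 1: nothing leaves the phone
    case privateCloudCompute  // Stage 2: Apple's own hardened AI servers
    case chatGPT              // Stage 3: third party, opt-in per request
}

struct AIRequest {
    let prompt: String
    let fitsOnDevice: Bool      // can the local model handle this?
    let fitsPrivateCloud: Bool  // can Private Cloud Compute handle this?
}

// Returns nil when the user declines the ChatGPT handoff.
func routeRequest(_ request: AIRequest,
                  userApprovedChatGPT: () -> Bool) -> Route? {
    if request.fitsOnDevice { return .onDevice }
    if request.fitsPrivateCloud { return .privateCloudCompute }
    return userApprovedChatGPT() ? .chatGPT : nil
}

// Example: a request too heavy for stages 1 and 2, with the user consenting.
let route = routeRequest(
    AIRequest(prompt: "Plan a week in Lisbon",
              fitsOnDevice: false,
              fitsPrivateCloud: false)
) { true }
print(route == .chatGPT)  // true, but only after explicit permission
```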

We’ve also explained the “extraordinary step” taken by Apple to protect the privacy of customers when tasks are handled by the company’s own AI servers, Private Cloud Compute.

Additionally, when requests are passed to ChatGPT, further protections apply. Apple anonymizes all ChatGPT handoffs, so OpenAI’s servers have no idea who has made a particular request, or who is getting the response.
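
As a rough mental model only (the types below are invented for illustration, and this is not Apple’s implementation), you can think of the handoff as forwarding the query text while everything that ties it to an account, device, or IP address stays behind:

```swift
// Illustrative only: what an anonymized handoff conceptually strips away.
struct IdentifiedRequest {
    let query: String
    let appleID: String        // never forwarded to OpenAI
    let deviceID: String       // never forwarded to OpenAI
    let userIPAddress: String  // obscured before the request goes out
}

struct AnonymizedHandoff {
    let query: String          // the only field that crosses over
}

func anonymize(_ request: IdentifiedRequest) -> AnonymizedHandoff {
    AnonymizedHandoff(query: request.query)
}

let handoff = anonymize(IdentifiedRequest(
    query: "Best cafés in Lisbon?",
    appleID: "user@example.com",
    deviceID: "ABC-123",
    userIPAddress: "203.0.113.7"
))
print(handoff.query)  // "Best cafés in Lisbon?" and nothing else
```

The catch, as Schneier notes below, is that the query string itself can still carry identifying detail, and no amount of metadata stripping removes that.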

Apple’s agreement with OpenAI also ensures that data from these sessions will not be used as training material for ChatGPT models.

But Inrupt says there are still privacy risks

Tim Berners-Lee is credited with inventing the world wide web. In 2018, he turned his attention to protecting the privacy of internet users, co-founding a company called Inrupt to give users complete control over the use of their personal data. Progress since then has unfortunately been slow.

Inrupt’s chief of security architecture, Bruce Schneier, says that Apple Intelligence privacy protections are impressive, but not perfect.

Apple has put together a “pretty impressive privacy system” for its AI, says Schneier. “Their goal is for AI use—even in their cloud—to be no less secure than the phone’s security. There are a lot of moving parts to it, but I think they’ve done pretty well” […]

For some data, “you’re stuck with OpenAI’s rules,” says Schneier. “Apple strips identifying information when sending OpenAI the queries, but there’s a lot of identifying information in many queries.”

Essentially, the more personal an AI query is, the greater the risk that its content could identify you.

For example, if you were to ask ChatGPT to help you plan an itinerary for a city break, the fact that you are in a particular city on a particular weekend, and have listed specific interests, could well be enough to identify you to someone who knows you.
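
To put toy numbers on that intuition, here’s a quick back-of-the-envelope calculation in Swift. The percentages are entirely made up, but they show how fast combining a few innocuous attributes collapses the pool of people a query could plausibly belong to:

```swift
// Made-up numbers: how an anonymity set shrinks as attributes combine.
let cityPopulation = 1_000_000.0
let visitingThatWeekend = 0.02    // assume 2% are in town that weekend
let sharesYourInterests = 0.001   // assume 0.1% list those specific interests

let candidates = cityPopulation * visitingThatWeekend * sharesYourInterests
print("Roughly \(Int(candidates)) people match")  // Roughly 20 people match
```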

9to5Mac’s Take

No privacy-protection system is perfect, and it’s certainly true that the right combination of data in a ChatGPT session could identify you to someone who already knows a fair bit about you. Taking a somewhat cautious approach to the personal information you share in AI sessions is sensible.

That said, it should be noted that Inrupt has a vested interest in being less than totally convinced by anyone’s privacy precautions, because the company is pushing a particular model of its own.

The fact is that Apple is offering a higher standard of privacy protection than anyone else out there, and I’d expect that to remain the case as new ideas emerge.
