Apple’s iPhone computer vision has the potential to preserve privacy but also break it completely


    Opinion For a company built around helping people communicate, Apple sure has problems talking to folk. It pole-vaulted itself feet first into the minefield of Child Sexual Abuse Material (CSAM), saying that it was going to be checking everybody’s images whether they liked it or not.

    Then, when people took this to mean that it would be checking everybody’s images whether they liked it or not, Apple explained that it wasn’t like that at all and how could you think it?

    To be clear, it is and it isn’t like that at all. Apple won’t be scanning all your photographs, but an AI on your phone will scan all those you have sent, and will send, to iCloud Photos. It will generate fingerprints that purportedly identify pictures, even highly modified ones, and these will be checked against fingerprints of known CSAM. Too many matches – there’s a threshold – and Apple’s systems will let Apple staff investigate. They won’t get the pictures, but rather a voucher containing a version of the picture. But that’s not the picture, OK? If it all looks too dodgy, Apple will inform the authorities, or dob you in to the feds, if you prefer.
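    Strip away the cryptography and the safety-voucher plumbing – and Apple’s actual NeuralHash, which this is not – and the decision logic amounts to something like the following Python sketch. The toy hash, the distance cut-off and the threshold are all illustrative stand-ins, not Apple’s numbers:

        from PIL import Image

        THRESHOLD = 30             # illustrative: matches needed before human review
        KNOWN_CSAM_HASHES = set()  # fingerprints supplied by child-safety bodies

        def toy_fingerprint(path):
            """A crude 64-bit average hash - a stand-in for a real perceptual hash."""
            pixels = list(Image.open(path).convert("L").resize((8, 8)).getdata())
            average = sum(pixels) / len(pixels)
            bits = 0
            for i, value in enumerate(pixels):
                if value > average:
                    bits |= 1 << i
            return bits

        def hamming(a, b):
            return bin(a ^ b).count("1")

        def matches_blocklist(fingerprint, max_distance=4):
            # Perceptual fingerprints of "highly modified" copies land near the
            # original, so compare by Hamming distance rather than exact equality.
            return any(hamming(fingerprint, known) <= max_distance
                       for known in KNOWN_CSAM_HASHES)

        def review_needed(photo_paths):
            # Nothing is surfaced until the count of matches crosses the threshold.
            hits = sum(matches_blocklist(toy_fingerprint(p)) for p in photo_paths)
            return hits >= THRESHOLD

    Apple’s published design wraps each comparison in cryptography so that individual hits stay invisible below the threshold, but the shape of the decision is the same.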

    And it’s true that, if everything is done exactly as Apple says – if the AI on the phone, the auto-threshold detection stuff in the cloud, and the various policies, protocols and limits are all observed – you won’t get collared unless you are both very naughty and rather thick (because if you don’t send your images to iCloud Photos, your AI won’t peek).

    The technology itself is agnostic – it can be used to scan for any class of image – and completely opaque to the user, but it will be used responsibly. Well, OK then.

    Let’s put that to one side for now, and celebrate some much more jolly Apple technology. In a blog post “Recognizing People in Photos Through Private On-Device Machine Learning” last month, Apple plumped itself up and strutted its funky stuff on how good its new person recognition process is. Obscured, oddly lit, accessorised, madly angled and other bizarrely presented faces are no problemo, squire.

    By dint of extreme cleverness and lots of on-chip AI, Apple says it can efficiently recognise everyone in a gallery of photos. It even has a Hawking-grade equation, just to show how serious it is, as proof that “finally, we rescale the obtained features by s and use it as logit to compute the softmax cross-entropy loss based on the equation below.” Go, look. It’s awfully science-y.
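    For those who don’t speak equation: what’s quoted appears to be a scaled softmax cross-entropy loss of roughly this shape (reconstructed from the quote, with our own symbols rather than Apple’s):

        \[
        \mathcal{L} \;=\; -\log \frac{e^{\,s\,z_y}}{\sum_{j=1}^{C} e^{\,s\,z_j}}
        \]

    where the z_j are the rescaled features used as logits for each of the C known people, y indexes the person actually in the picture, and s is the scale factor the quote mentions. The heavy lifting is in the embedding network that produces those features from obscured, oddly lit faces, not in the loss itself.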

    The post runs to 3,500 words and is a complex, very detailed paper on computer vision – one of the two tags Apple has given it. The other tag, Privacy, can be entirely summarised in six words: it’s on-device, therefore it’s private. No equation.

    That would be more comforting if Apple hadn’t explained, days later, how on-device analysis is going to be a key component in informing law enforcement agencies about things they disapprove of. Put the two together, and there’s a whole new and much darker angle to the fact, sold as a major consumer benefit, that Apple has been cramming in as much AI as it can so it can look at pictures as you take them and after you’ve stored them.

    We’ve all been worried about how mobile phones are stuffed with sensors that can watch what we watch, hear what we hear, track where we go and note what we do. The evolving world of personal data privacy is built around keeping those observations out of the vast vaults of big data, where they become grist to the mill of manipulating our digital personas.

    But what happens if the phone itself grinds that corn? It may never share a single photograph without your permission, but what if it can look at that photograph and generate precise metadata about what, who, how, when, and where it depicts?
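    In concrete terms, the worry is a record like the one below, assembled entirely on the handset. This is a minimal Python sketch, not anything Apple ships: the classify, identify_faces and read_location callables are hypothetical stand-ins for on-device models and sensor plumbing, with Pillow used only to read the file and its EXIF data.

        from dataclasses import dataclass
        from PIL import Image

        @dataclass
        class PhotoProfile:
            what: list[str]          # scene and object labels from an on-device model
            who: list[str]           # names matched against the on-device face gallery
            when: str | None         # capture time pulled from EXIF
            where: tuple | None      # location, from EXIF GPS or the sensor stack

        def profile_photo(path, classify, identify_faces, read_location):
            # The photograph itself never has to leave the phone;
            # this compact, precise description of it easily could.
            img = Image.open(path)
            exif = img.getexif()
            return PhotoProfile(
                what=classify(img),
                who=identify_faces(img),
                when=exif.get(306),          # 306 is the EXIF DateTime tag
                where=read_location(exif),
            )

    Nothing in that record is a photograph, which is exactly how it slips past rules written to control the sharing of photographs.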

    This is an aspect of edge computing that is ahead of the regulators, even those in the EU who want to control things like facial recognition heavily. By the time any such regulation is produced, countless millions of devices will be using on-device recognition to provide ostensibly safe, private, friendly services that make taking and keeping photographs so much more convenient and fun.

    It’s going to be very hard to turn that off, and very easy to argue for exemptions that weaken the regs to the point of pointlessness. Especially if the police and security services lobby hard as well, which they will as soon as they realise that this defeats end-to-end encryption without even touching end-to-end encryption.

    So yes, Apple’s anti-CSAM model is capable of being used without impacting the privacy of the innocent, if it is run exactly as version 1.0 is described. It is also capable of working with the advances elsewhere in technology to break that privacy utterly, without setting off the tripwires of personal protection we’re putting in place right now.

    We’d know better if Apple were to put out a 3,500-word paper discussing these issues, its capabilities, intentions and safeguards, and the metrics by which such a major step forward in privacy intervention as the new CSAM detection can be judged a success.

    How many people will it catch? How many people could it endanger if different aspects of it were tweaked or subverted by state mandate or systematic failure? Talk to us, Apple.

    Until we get discussions of privacy, AI and ecosystem design in the same depth that Apple is happy to go into about how good its camera app is, we can’t tell. And we shouldn’t trust it – or anyone in this game – until we do. ®
