Apple to check iCloud photo uploads for child abuse images


    The Apple logo is seen at an Apple Store, as Apple’s new 5G iPhone 12 went on sale in Brooklyn, New York, U.S. October 23, 2020. REUTERS/Brendan McDermid

    Aug 5 (Reuters) – Apple Inc (AAPL.O) on Thursday said it will implement a system that checks photos on iPhones in the United States for matches with known images of child sexual abuse before they are uploaded to its iCloud storage services.

    If enough child abuse image uploads are detected, Apple will initiate a human review and report the user to law enforcement officials, the company said. Apple said the system is designed to reduce false positives to one in one trillion.

    With the new system, Apple is trying to address two imperatives: Requests from law enforcement to help stem child sexual abuse, and the privacy and security practices that the company has made a core tenet of its brand. Other companies such as Facebook Inc (FB.O) use similar technology to detect and report child sexual abuse.

    Here is how Apple’s system works. Law enforcement officials maintain a database of known child sexual abuse images and translate those images into “hashes” – numerical codes that positively identify the image but cannot be used to reconstruct them.
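A minimal sketch of the hash-and-match idea described above, assuming a simple SHA-256 digest as a stand-in (Apple's actual system uses a perceptual hash, NeuralHash, and the hash values and database here are hypothetical):

```python
import hashlib

def image_hash(data: bytes) -> str:
    """One-way hash: it identifies the image bytes but cannot be used
    to reconstruct them. SHA-256 is only an illustration here; the real
    system uses a perceptual hash (NeuralHash)."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of hashes of known images.
KNOWN_HASHES = {image_hash(b"example-known-image")}

def matches_known(data: bytes) -> bool:
    """Check whether an image's hash appears in the known-hash database."""
    return image_hash(data) in KNOWN_HASHES
```

Note that the database stores only the numeric codes, never the images themselves, which is why a match can identify a known image without the database being able to reproduce it.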

    Apple has made its own implementation of that database using a technology called “NeuralHash” that is designed to also catch edited but visually similar versions of the original images. That database will be stored on iPhones.

    When a user uploads an image to Apple’s iCloud storage service, the iPhone will create a hash of the image to be uploaded and compare it against the database. Photos stored only on the phone are not checked, Apple said.
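The upload-time check and the human-review threshold mentioned earlier can be sketched as follows. This is an illustrative assumption-laden model, not Apple's implementation: SHA-256 stands in for NeuralHash, and the `REVIEW_THRESHOLD` value is hypothetical, since Apple has not published the actual number of matches required.

```python
import hashlib

# Stand-in for the on-device database of known-image hashes.
KNOWN_HASHES = {hashlib.sha256(b"known-image").hexdigest()}

# Hypothetical threshold; Apple has not disclosed the real value.
REVIEW_THRESHOLD = 3

def check_uploads(photos: list[bytes]) -> bool:
    """Hash each photo queued for iCloud upload on-device, count matches
    against the known-hash database, and flag the account for human
    review once the match count reaches the threshold. Photos not queued
    for upload are never hashed."""
    matches = sum(
        1 for p in photos
        if hashlib.sha256(p).hexdigest() in KNOWN_HASHES
    )
    return matches >= REVIEW_THRESHOLD
```

Requiring several matches before review, rather than flagging on a single hit, is what lets the system keep its false-positive rate low.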

    The Financial Times earlier reported some aspects of the program.

    One key aspect of the system is that Apple checks photos stored on phones before they are uploaded, rather than checking the photos after they arrive on the company’s servers.

    On Twitter, some privacy and security experts expressed concerns that the system could eventually be expanded to scan phones more generally for prohibited content or political speech.

    “Regardless of what Apple’s long term plans are, they’ve sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content,” Matthew Green, a security researcher at Johns Hopkins University, wrote in response to the earlier reports. “Whether they turn out to be right or wrong on that point hardly matters. This will break the dam — governments will demand it from everyone.”

    Reporting by Stephen Nellis in San Francisco
    Editing by Marguerita Choy



