
Child safety features built to withstand snooping, says Apple

NEW DELHI: Tech giant Apple’s new child sexual abuse material (CSAM) detection feature, announced on August 6, will have safeguards against governments trying to manipulate it. Specifically, the company said that the system will flag images only if they appear in at least two global CSAM databases. This is meant to prevent any single government or law enforcement agency from manipulating CSAM databases to surveil users.

Messaging giant WhatsApp’s chief executive, Will Cathcart, had raised concerns about governments manipulating the feature. In a series of tweets on August 7, Cathcart said that the system could “very easily” allow the company to “scan private content for anything they or a government decides it wants to control”, pointing out that different countries will have different definitions of what is acceptable.

“Apple generates the on-device perceptual CSAM hash database through an intersection of hashes provided by at least two child safety organizations operating in separate sovereign jurisdictions—that is, not under the control of the same government,” the company said in a new technical paper, titled ‘Security Threat Model Review of Apple’s Child Safety Features’, released last night. “Any perceptual hashes appearing in only one participating child safety organization’s database, or only in databases from multiple agencies in a single sovereign jurisdiction, are discarded by this process, and not included in the encrypted CSAM database that Apple includes in the operating system,” the paper adds.
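The vetting rule the paper describes amounts to a set intersection across providers. The sketch below illustrates that logic under stated assumptions; the function names, data structures and example data are hypothetical, not Apple’s actual code.

```python
# A minimal sketch of the intersection rule described in the paper: a hash is
# retained only if it is vouched for by child safety organizations in at
# least two distinct sovereign jurisdictions. Names and data are illustrative
# assumptions, not Apple's implementation.

def build_on_device_database(provider_databases):
    """provider_databases: list of (jurisdiction, set_of_hashes) pairs."""
    jurisdictions_for_hash = {}
    for jurisdiction, hashes in provider_databases:
        for h in hashes:
            jurisdictions_for_hash.setdefault(h, set()).add(jurisdiction)
    # Keep only hashes contributed from two or more distinct jurisdictions.
    return {h for h, js in jurisdictions_for_hash.items() if len(js) >= 2}

dbs = [
    ("US", {"hash_a", "hash_b"}),
    ("EU", {"hash_a", "hash_c"}),
    ("US", {"hash_b"}),  # a second organization, but in the same jurisdiction
]
print(build_on_device_database(dbs))  # {'hash_a'}: hash_b and hash_c drop out
```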

The iPhone maker, on August 6, announced two new automated features to improve child safety on its devices. The first notifies parents when children with supervised accounts send or receive sexually explicit images in Apple’s Messages app. The second is CSAM detection software that matches images being uploaded to Apple’s cloud service, iCloud, against databases of known CSAM compiled by child safety organizations, flagging matching accounts for review.
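The matching step itself can be pictured as a lookup of each image’s perceptual hash in the vetted database. In Apple’s published design, that comparison is wrapped in cryptography (private set intersection), so neither the device nor the server sees a plain match result; the hypothetical sketch below shows only the logical matching rule, with perceptual_hash standing in for Apple’s NeuralHash function.

```python
# Illustrative matching rule only: perceptual_hash is a hypothetical stand-in
# for Apple's NeuralHash, and in the real protocol the comparison happens
# under private set intersection, so no party sees a plain match result.

def flag_uploads(images, on_device_db, perceptual_hash):
    """Return the images whose perceptual hash matches a known-CSAM entry."""
    return [img for img in images if perceptual_hash(img) in on_device_db]
```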

In the technical paper, the company also noted that notifications are never sent to law enforcement directly. If the system flags an image as a match, the case goes to Apple’s human reviewers, who verify the alert and pass it on to the appropriate child safety agencies in the concerned jurisdiction. The paper isn’t the only defense Apple has offered for its new systems, which have drawn criticism from multiple quarters.

“If and only if you meet a threshold of something on the order of 30 known child pornographic images matching, only then does Apple know anything about your account and know anything about those images, and at that point, only knows about those images, not about any of your other images,” Craig Federighi, Apple’s senior vice president of software engineering, told The Wall Street Journal in an interview yesterday. “This isn’t doing some analysis for, ‘Did you have a picture of your child in the bathtub?’ Or, for that matter, ‘Did you have a picture of some pornography of any other sort?’ This is literally only matching on the exact fingerprints of specific known child pornographic images,” he added.
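Federighi’s point reduces to a simple threshold rule: below roughly 30 matches, Apple learns nothing about an account. In the actual system that guarantee is enforced cryptographically through threshold secret sharing rather than a plain counter; the sketch below only illustrates the decision rule, and the constant and names are assumptions drawn from the interview.

```python
# Hypothetical illustration of the threshold Federighi describes; the real
# system enforces this cryptographically via threshold secret sharing, so
# Apple cannot decrypt anything below the threshold.

MATCH_THRESHOLD = 30  # "something on the order of 30", per the interview

def account_reviewable(match_count: int) -> bool:
    # Below the threshold, Apple learns nothing about the account; at or
    # above it, only the matching images (not the rest of the photo library)
    # become visible to human reviewers.
    return match_count >= MATCH_THRESHOLD

print(account_reviewable(29))  # False: account remains opaque to Apple
print(account_reviewable(30))  # True: matching images can be reviewed
```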

The company’s new child safety features have received flak from privacy bodies like the Electronic Frontier Foundation (EFF), which called the system a backdoor into Apple’s devices, something law enforcement and governments have long wanted. Whistleblower Edward Snowden also opposed the feature, as did academics, politicians and even many of Apple’s own employees.

At the same time, the feature has also been applauded by some, like US Senator Richard Blumenthal and UK Health Secretary Sajid Javid. An open letter from the Five Eyes countries, India and Japan in October 2020 had asked tech companies to find ways around end-to-end encryption, a technology that prevents anyone other than the sender and intended recipient from viewing message content.

“In light of these threats, there is increasing consensus across governments and international institutions that action must be taken: while encryption is vital and privacy and cybersecurity must be protected, that should not come at the expense of wholly precluding law enforcement, and the tech industry itself, from being able to act against the most serious illegal content and activity online,” the letter said, citing terrorism and child sexual abuse as key areas of concern.
