Report calls for web pre-screening to end UK’s child abuse ‘explosion’

Credit to Author: Lisa Vaas | Date: Mon, 16 Mar 2020 11:57:40 +0000

A UK inquiry into child sexual abuse facilitated by the internet has recommended that the government require apps to pre-screen images before publishing them, in order to tackle “an explosion” in images of child sex abuse.

The No. 1 recommendation from the Independent Inquiry into Child Sexual Abuse (IICSA) report, which was published on Thursday:

The government should require industry to pre-screen material before it is uploaded to the internet to prevent access to known indecent images of children.

While most apps and platforms that aren’t aimed specifically at children require users to be at least 13, lackluster age verification is also undermining children’s safety online, the inquiry says. Hence, recommendation No. 3:

The government should introduce legislation requiring providers of online services and social media platforms to implement more stringent age verification techniques on all relevant devices.

The report contained grim statistics. The inquiry found that there are multiple millions of indecent images of kids in circulation worldwide, with some of them reaching “unprecedented levels of depravity.”

The imagery isn’t only “depraved”; it’s also easy to get to, the inquiry said, citing research from the National Crime Agency (NCA) which found that child exploitation images can be reached within three clicks on mainstream search engines. According to the report, the UK is the world’s third-largest consumer of live-streamed abuse.

The report describes one such case: that of siblings who were groomed online by a 57-year-old man who posed as a 22-year-old woman. He talked the two into performing sexual acts in front of a webcam and threatened to share graphic images of them online if they didn’t.

How do we stem the tide?

The NCA has previously proposed that internet companies scan images against its hash database before they’re uploaded. If content is identified as a known indecent image, the upload can then be blocked.
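
Conceptually, that pre-upload check amounts to looking up a fingerprint of each image in a database of fingerprints of known material. The Python sketch below is only an illustration of that flow: real systems use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding, whereas this example uses an exact SHA-256 digest, and the hash set is a hypothetical stand-in for the NCA’s database.

```python
import hashlib

# Hypothetical stand-in for a database of fingerprints of known indecent images.
# A real deployment would use perceptual hashes (e.g. PhotoDNA), not SHA-256.
KNOWN_IMAGE_HASHES = set()  # populated from the hash database

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of the image (exact digest here, purely for illustration)."""
    return hashlib.sha256(image_bytes).hexdigest()

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload may proceed, False if it matches known material."""
    if fingerprint(image_bytes) in KNOWN_IMAGE_HASHES:
        # In practice a match would also be reported, not just blocked.
        return False
    return True
```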

Apple, Facebook, Google, Dropbox and Microsoft, among others, automatically scan images (and sometimes video) uploaded to their servers. The NCA says that, as it understands it, they only screen content after it’s been published, thereby enabling abusive images to proliferate.

The thinking: why not stop the images dead in their tracks before the offense occurs?

One reason: it can’t be done without disabling end-to-end encryption in WhatsApp and other privacy-minded services and apps, according to Matthew Green, a cryptographer and professor at Johns Hopkins University. Green explains that the most famous scanning technology is based on PhotoDNA: an algorithm developed by Microsoft Research and Dr. Hany Farid.

PhotoDNA and the machine-learning tool that Google released for free to address the problem have one thing in common, Green says:

They only work if providers […] have access to the plaintext of the images for scanning, typically at the platform’s servers. End-to-end encrypted [E2E] messaging throws a monkey wrench into these systems. If the provider can’t read the image file, then none of these systems will work.

Green says that some experts have proposed a way around the problem: providers can push the image scanning from the servers out to the client devices – i.e., your phone, which already has the cleartext data.

The client device can then perform the scan, and report only images that get flagged as CSAI [child sexual abuse imagery]. This approach removes the need for servers to see most of your data, at the cost of enlisting every client device into a distributed surveillance network.
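
As a rough sketch of what that client-side approach might look like (the function names and the reporting channel below are hypothetical, not any real messaging app’s API), the match is checked on the device before the image is end-to-end encrypted, so the provider only ever sees ciphertext or a report about flagged content:

```python
import hashlib
from typing import Callable, Set

# Hypothetical set of flagged fingerprints pushed to the device by the provider.
FLAGGED_HASHES: Set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    # Placeholder: a real system would use a perceptual hash that tolerates
    # resizing and re-encoding, not an exact digest like SHA-256.
    return hashlib.sha256(image_bytes).hexdigest()

def send_image(image_bytes: bytes,
               e2e_encrypt: Callable[[bytes], bytes],
               transmit: Callable[[bytes], None],
               report_match: Callable[[str], None]) -> None:
    """Scan on the device, then encrypt and send only if the image isn't flagged."""
    digest = fingerprint(image_bytes)
    if digest in FLAGGED_HASHES:
        report_match(digest)  # only flagged content is ever reported
        return
    transmit(e2e_encrypt(image_bytes))  # the provider sees only ciphertext
```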

The problem with that approach? The details of the scanning algorithms are private. Green suspects this could be because the algorithms are “very fragile” – knowledge of how they work could let criminals bypass scanning if it fell into the wrong hands:

Presumably, the concern is that criminals who gain free access to these algorithms and databases might be able to subtly modify their CSAI content so that it looks the same to humans but no longer triggers detection algorithms. Alternatively, some criminals might just use this access to avoid transmitting flagged content altogether.

Cryptographers are working on this problem, but “the devil is in the [performance] details,” Green says.

Does that mean that the fight against CSAI can’t be won without forfeiting E2E encryption? As it is, the inquiry wants fast action, suggesting that some of its recommended steps be taken before the end of September – likely not enough time for cryptographers to figure out how to effectively pre-screen imagery before it’s published, that is, before it slips behind the privacy shroud of encryption.

The inquiry’s report is only the latest in a string of scathing assessments of social media’s role in the spread of abuse imagery. According to the report, social media companies appear motivated to “avoid reputational damage” rather than to prioritize the protection of victims.

Prof Alexis Jay, the chair of the inquiry:

The serious threat of child sexual abuse facilitated by the internet is an urgent problem which cannot be overstated. Despite industry advances in technology to detect and combat online facilitated abuse, the risk of immeasurable harm to children and their families shows no sign of diminishing.

Internet companies, law enforcement and government [should] implement vital measures to prioritise the protection of children and prevent abuse facilitated online.

The UK and the US are on parallel paths to battle internet-facilitated child sexual abuse, though, at least in the US, privacy advocates view recent political moves as ill-disguised attacks on encryption and privacy. The EARN IT Act is a case in point: now making its way through Congress, the bill was introduced by legislators who’ve used the specter of online child exploitation to argue for weakening encryption.

One of the problems of the EARN IT bill: the proposed legislation “offers no meaningful solutions” to the problem of child exploitation, as the Electronic Frontier Foundation (EFF) says:

It doesn’t help organizations that support victims. It doesn’t equip law enforcement agencies with resources to investigate claims of child exploitation or training in how to use online platforms to catch perpetrators. Rather, the bill’s authors have shrewdly used defending children as the pretense for an attack on our free speech and security online.

You can’t directly compare British and US legal rights. But at least in the US, legal analysts say that the EARN IT Act, which would compel internet companies to follow “best practices” or lose their Section 230 protections against being sued over content their users publish, would violate the First and Fourth Amendments, which respectively protect free speech and guard against unreasonable searches.

Private companies like Facebook can voluntarily scan for violative content because they’re not state actors. If they’re forced to screen, they become state actors, and then (generally; case law differs) they legally need to secure warrants before searching for digital evidence.

Thus, as argued by Riana Pfefferkorn, Associate Director of Surveillance and Cybersecurity at the Center for Internet and Society at Stanford Law School, forced scanning could, ironically, lead courts to suppress evidence of the very child sexual exploitation crimes the bill targets.

How would it work in the UK? I’m not a lawyer, but if you’re familiar with British law, please do add your thoughts to the comments section.

Naked Security’s Mark Stockley saw another wrinkle in the inquiry’s recommendations about pre-screening content: it reminded him of Article 13 of the European Copyright Directive, also known as the Meme Killer. It’s yet another legal directive that critics say takes an “unprecedented step towards the transformation of the internet, from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users.”

The directive will force for-profit platforms like YouTube, Tumblr, and Twitter to proactively scan user-uploaded content for material that infringes copyright… scanning that has proved error-prone and prohibitively expensive for smaller platforms. It makes no exceptions, even for services run by individuals, small companies or non-profits.

EU member states have until 7 June 2021 to implement the new reforms, but the UK will have left the EU by then. As the BBC reported in January, Universities and Science Minister Chris Skidmore has said that the UK won’t implement the EU Copyright Directive after the country leaves the EU.

How about the inquiry’s call for web pre-screening? Will it make it into law?

If it does, we’ll let you know.

