EU Plan to Force Messaging Apps to Search for CSAM Risks Millions of False Positives, Experts Warn

Hundreds of security and privacy experts warned in an open letter on Thursday that a controversial push by EU lawmakers to legally require messaging platforms to scan citizens’ private communications for child sexual abuse material (CSAM) could lead to millions of false alarms per day.

Concern over the EU proposal has grown since the Commission put the CSAM scanning plan forward two years ago; independent experts, lawmakers across the European Parliament and even the bloc's own data protection supervisor are among those who have raised the alarm.

The EU proposal would not only require messaging platforms served with a CSAM detection order to scan for known CSAM; they would also have to use unspecified detection technologies to identify unknown CSAM and spot grooming activity as it takes place, a requirement that has drawn accusations that lawmakers are indulging in magical, technosolutionist thinking.

Critics argue the proposal demands the technically impossible and will not achieve its stated goal of protecting children from abuse. Instead, they say, it will have a devastating impact on internet security and web users' privacy by forcing platforms to comprehensively monitor all of their users and deploy risky, unproven technologies such as client-side scanning.

Experts say no technology exists that can achieve what the law demands without causing far more harm than good. Yet the EU is pressing ahead regardless.

The latest open letter responds to amendments to the draft CSAM scanning regulation recently proposed by the European Council, which the signatories say fail to fix fundamental flaws in the plan.

The letter's signatories, 270 at the time of writing, include hundreds of academics, among them well-known security experts such as Harvard Kennedy School professor Bruce Schneier and Dr. Matthew D. Green of Johns Hopkins University, along with a handful of researchers working for technology companies such as IBM, Intel and Microsoft.

A previous open letter, signed by 465 academics last July, warned that the detection technologies the legislative proposal relies on and would force platforms to adopt are “deeply flawed and vulnerable to attack”, and would significantly weaken the vital protections provided by end-to-end encrypted (E2EE) communications.

Little response to counter-proposals

Last fall, MEPs united behind a substantially revised approach: one that would limit scanning to individuals and groups already suspected of child sexual abuse; restrict it to known and unknown CSAM, dropping the requirement to scan for grooming; and remove risks to E2EE by confining scanning to platforms that are not end-to-end encrypted. But the European Council, the other co-legislative body involved in EU lawmaking, has yet to take a position, and where it lands will influence the final shape of the law.

The latest amendment on the table was tabled in March by the Belgian Council presidency, which leads discussions on behalf of EU member state governments. But in the open letter the experts warn that this proposal still fails to address the fundamental flaws of the Commission's approach, arguing that the revisions would still create “unprecedented opportunities to monitor and control Internet users” and imperil “… a secure digital future for our society and can have enormous consequences for democratic processes in Europe and beyond.”

The tweaks under discussion in the amended Council proposal include suggestions that detection orders could be made more targeted by applying risk categorization and risk mitigation measures, and that cybersecurity and encryption could be protected by ensuring platforms are not obliged to create access to decrypted data and by having detection technologies vetted. But the 270 experts see all this as fiddling at the edge of a security and privacy disaster.

From a “technical perspective, to be effective, this new proposal will also completely undermine communications and systems security,” they warn. Nor does relying on “flawed detection technology” to determine cases of interest, so that more targeted detection orders can be sent, reduce the risk of the law ushering in a dystopian era of “massive surveillance” of web users' messages, in their analysis.

The letter also addresses a Council proposal to limit the risk of false positives by defining a “person of interest” as a user who has already shared CSAM or attempted to groom a child, a determination that would be made via automated assessment: say, waiting for one hit for known CSAM, or two hits for unknown CSAM or grooming activity, before a user is officially flagged as a suspect and reported to the EU centre that would handle CSAM reports.
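
To make the mechanics concrete, here is a minimal sketch in Python of the threshold rule as described above. The class and category names are hypothetical, as is the choice to let unknown-CSAM and grooming hits count toward the same threshold; the draft text reported on here specifies none of this.

```python
# Minimal sketch (hypothetical names) of the threshold rule described in the
# Council proposal: a user becomes a "person of interest" after one automated
# hit for known CSAM, or two hits for unknown CSAM/grooming. Whether unknown
# CSAM and grooming hits combine toward one threshold is an assumption here.

from collections import Counter

KNOWN_THRESHOLD = 1    # one hit on known material suffices
UNKNOWN_THRESHOLD = 2  # two hits on unknown material/grooming required

class FlaggingPolicy:
    def __init__(self) -> None:
        # per-user counts of automated detector hits, keyed by category
        self.hits: dict[str, Counter] = {}

    def record_hit(self, user_id: str, category: str) -> bool:
        """Record one detector hit; return True if the user would now be
        flagged as a suspect and reported under this rule."""
        counts = self.hits.setdefault(user_id, Counter())
        counts[category] += 1
        return (counts["known_csam"] >= KNOWN_THRESHOLD
                or counts["unknown_csam"] + counts["grooming"] >= UNKNOWN_THRESHOLD)

policy = FlaggingPolicy()
print(policy.record_hit("alice", "unknown_csam"))  # False: one hit, below threshold
print(policy.record_hit("alice", "grooming"))      # True: second hit crosses it
```

The point of such a rule is to trade detection speed for precision: a single noisy detector hit no longer triggers a report on its own. The experts' objection, detailed below, is that at platform scale even this buffering leaves an enormous residue of false alarms.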

Billions of users, millions of false positives

The experts warn that this approach is still likely to result in a large number of false positives.

“It is highly unlikely that the number of false alarms due to detection errors will be significantly reduced unless the number of repetitions is so large that detection ceases to be effective. Given the large volume of messages sent on these platforms (on the order of billions), one can expect a very large number of false positives (on the order of millions),” they write, pointing out that the platforms likely to be slapped with a detection order can have millions, or even billions, of users, as with Meta-owned WhatsApp.

“Given that there is no public information on the performance of the detectors that could be used in practice, let us imagine we had a detector for CSAM and grooming, as specified in the proposal, with a false positive rate of just 0.1%,” they write (i.e., one in a thousand times a non-CSAM item is misclassified as CSAM), a rate much lower than any currently known detector achieves.

“Given that WhatsApp users send 140 billion messages per day, even if only one in a hundred messages were tested by such detectors, there would be 1.4 million false positives every single day. To get the false positives down to the hundreds, statistically one would have to identify at least five repetitions using different, statistically independent images or detectors. And this is only for WhatsApp; if we consider other messaging platforms, including email, the number of required repetitions would grow significantly, to the point of not effectively reducing the CSAM sharing capabilities.”
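
The letter's headline arithmetic is easy to verify. Below is a quick back-of-the-envelope check in Python; the message volume, sampling rate and false positive rate are the hypothetical figures from the quote above, not measurements of any real detector.

```python
# Back-of-the-envelope check of the letter's false-positive arithmetic.
# All three inputs come from the scenario quoted above; none of them is
# official data about any real deployed detector.

MESSAGES_PER_DAY = 140_000_000_000  # WhatsApp daily message volume, per the letter
SAMPLING_RATE = 1 / 100             # only one message in a hundred is tested
FALSE_POSITIVE_RATE = 0.001         # hypothetical 0.1% detector error rate

messages_tested = MESSAGES_PER_DAY * SAMPLING_RATE
false_positives = messages_tested * FALSE_POSITIVE_RATE

print(f"messages tested per day: {messages_tested:,.0f}")  # 1,400,000,000
print(f"false positives per day: {false_positives:,.0f}")  # 1,400,000
```

Even under these deliberately generous assumptions (a detector far more accurate than any known today, inspecting only 1% of traffic), a single platform yields 1.4 million false alarms a day. That is the baseline the letter says only many statistically independent repeated detections could whittle down, at the cost of making detection ineffective.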

Another Council proposal, to limit detection orders to messaging apps deemed “high risk”, is a useless revision in the signatories' view, since the measure is still likely to “indiscriminately affect large numbers of people.” Here they point out that exchanging CSAM requires only standard features such as image sharing and text chat, features that are widely supported by many service providers, meaning a high-risk categorization will “undoubtedly impact many services.”

They also point out that adoption of E2EE is growing, which they suggest raises the likelihood that services deploying it will be deemed high risk. “This number could increase further with the interoperability requirements introduced by the Digital Markets Act, which will result in messages being transferred between low-risk and high-risk services. This could mean that almost all services could be classified as high risk,” they argue. (Note: Messaging interoperability is a core plank of the EU’s DMA.)

A back door for the back door

When it comes to protecting encryption, the letter reiterates the message that security and privacy experts have been telling lawmakers for years: “Detection in end-to-end encrypted services, by definition, undermines encryption protections.”

“One of the goals of the new proposal is to ‘protect cybersecurity and encrypted data, while keeping services using end-to-end encryption within the scope of detection orders’. As we have explained, this is a contradiction in terms,” they emphasize. “The protection provided by end-to-end encryption means that no one other than the intended recipient of a communication should be able to learn any information about the content of that communication. Enabling detection capabilities, whether for encrypted data or for data before it is encrypted, violates the very definition of confidentiality provided by end-to-end encryption.”

In recent weeks, police chiefs across Europe have penned their own joint statement raising concerns about the expansion of E2EE and calling on platforms to design their security systems in such a way that they can still identify illegal activity and report message content to law enforcement.

The intervention is widely seen as an attempt to pressure lawmakers to pass laws like the CSAM scanning regulation.

Police chiefs deny they are calling for an encryption backdoor, but they have not explained exactly which technical solutions they expect platforms to adopt in order to provide the “lawful access” they seek. Squaring that circle puts a very awkwardly shaped ball back in lawmakers’ court.

If the EU stays on its current course (assuming the Council does not adopt the change of direction MEPs have demanded), the consequences will be “catastrophic”, the letter’s signatories warn. “It sets a precedent for filtering the Internet and prevents people from using some of the few tools available to protect their right to privacy in the digital space; it will have a chilling effect, particularly on teenagers who rely heavily on online services for their interactions. It will change how digital services are used around the world, and is likely to negatively affect democracies across the globe.”

An EU source close to the Council was unable to provide insight into current discussions between member states, but confirmed that a working party meeting on May 8 would discuss the proposed regulation for combating child sexual abuse.
