Why Britain's new deal with Silicon Valley for stopping child abuse still has one big hole in it – Telegraph.co.uk

On Thursday, 25 of the world's biggest social media companies signed up to a new voluntary code of conduct on fighting child abuse, jointly negotiated with Britain, the US, Canada, Australia and New Zealand.

The code prompted a rare outbreak of unity between Silicon Valley and world governments. Priti Patel, the home secretary, hailed it as a "landmark collaboration" that provided a "blueprint" for tougher action.

Sir Nick Clegg, these days Facebook's vice president of global affairs, said: "These are horrible crimes that no one at Facebook takes lightly. We have a responsibility to keep children safe... these principles have our full support."

But for some of those companies there is an elephant in the room, something which prevents them from going as far as Western intelligence agencies would like.

That something is end-to-end encryption, a nigh-impregnable security measure which renders private messages invisible both to the companies and to prying governments.

Acting US secretary of homeland security Chad Wolf said as much in the press conference announcing the code, adding a note of rancour to the proceedings. While praising the new agreement, he warned of the danger of tech giants "going dark on online child sexual exploitation investigations", saying:

"We recognise [that] encryption is an essential cybersecurity tool in the hands of the right people, but like any tool it can be abused. Warrant-proof encryption can be used by criminals and child abusers to remain hidden from law enforcement.

"Should certain platforms go dark, our investigatory capabilities and lawful access will be significantly affected, especially when it comes to our ongoing fight against online child sexual exploitation. If platforms deploy warrant-proof encryption our leads would fall dramatically overnight."

Wolf's words would in theory apply to Snapchat, which encrypts private messages between pairs of users, and permanently deletes them from its servers after they have been opened by all the recipients. A Telegraph investigation found that the app's ephemeral nature was being exploited by child sex traffickers.

Wolf's words would also apply to Apple, which encrypts all messages between iPhone users as a matter of course. In future, they could even apply to Twitter, which has reportedly tested encrypted private messaging.

His main target, however, could only have been Facebook. The world's biggest social media company is forging ahead with plans to encrypt the private messaging functions of Messenger, Instagram and WhatsApp, providing an unprecedented increase in privacy for almost three billion people across the world, the total number who use at least one of Facebook's services at least once per month.

But those plans have kicked off a new battle between politicians and the Valley, similar to the one that the tech industry won back in 2014-16. Back then, the battleground was terrorism; today, it is child protection.

Western intelligence agencies have long argued that they need so-called "backdoors" (although they would not use the word) into encrypted messaging, urging tech firms to add "virtual crocodile clips" and "virtual keys" strictly for the use of legitimate spooks.

Tech firms and privacy experts insist that any such backdoor could also be accessible to criminals and foreign governments, just as a secret Windows loophole first discovered by US spies was eventually found and exploited by the WannaCry cyber-attackers in 2017.

Today Facebook has become one of the world's biggest sources of child abuse reports, making up 90pc of the total submitted to the US National Center for Missing and Exploited Children (NCMEC). This is likely to be because of its sheer scale rather than any particular moral turpitude on the part of its users; even so, NCMEC claims that 71pc of those reports would have been lost under encryption.

In response to questions from The Telegraph, neither Twitter nor Snapchat would commit to ensuring that future encryption methods allow special access to governments. A spokesman for Apple told the Wall Street Journal that it endorsed the code.

Jennifer Stout, Snapchat's vice president of global public policy, said the company was "deeply committed" to stopping the "global threat" of child abuse, and that it welcomed the principles. She declined to commit to heeding Chad Wolf's demands.

Del Harvey, Twitter's vice president of trust and safety, said the company has a "zero-tolerance" stance on child abuse and called the principles "a valuable step in driving collective action". Twitter too declined to commit to making any future encryption crackable.

Facebook, meanwhile, stuck to its guns, saying: "We believe companies and governments can work together to keep children safe online while still protecting people's privacy and continuing to secure their messages with encryption."

A spokeswoman added that the company believes that the privacy benefits of encryption for billions of people will ultimately outweigh the drawbacks for law enforcement.

Facebook also argues that it can still catch online child abusers even in an encrypted world, and says that it is developing new methods to help it do so. It already removes about 250,000 WhatsApp accounts every month on suspicion of sharing child abuse images, even though that service is wholly encrypted, relying on other information such as messaging patterns and reports from other users.

Dig more deeply into Facebook's plans, however, and it isn't clear how the company can avoid a massive loss of intelligence.

Facebook is understandably wary of saying exactly how it spots child abuse, since it does not want to provide abusers with a how-to manual. A spokeswoman said that the company can match the behaviour patterns of previous accounts it has removed to new ones that have not yet trespassed so blatantly.

When asked to give examples of how this might be done, the spokeswoman said that Facebook could spot patterns characteristic of other types of bad behaviour, such as spam and fake accounts, and that the sharing of child abuse images can often be part of these.

But that is not the same as being able to spot patterns specific to people who actually do share child abuse material, or to people who prey on individual children on Facebook's services.

The spokeswoman also described how information from other, more public channels, such as the main Facebook app and Instagram, could be used to identify potential criminals.

In theory, this could become far easier in the future: Facebook's plans for encryption involve merging the private messaging functions of all its major apps into one service, which could mean merging people's accounts as well.

But the company is extremely cagey about whether this will happen. In Europe, it has faced legal challenges over the porting of data from WhatsApp to Facebook. The spokeswoman declined to say whether or to what extent accounts would be linked under the new plans.

These will not be the only methods Facebook relies upon. The spokeswoman made clear that Facebook is still on the hunt for other patterns that could help it identify child abusers in future. Facebook will also still have access to user reports, meaning would-be predators could be dobbed in by chat partners, attempted victims or undercover detectives. The company might yet be able to make such reporting easier or more powerful.

Nevertheless, Facebook is currently offering few answers, at least in public, for how it would catch a more security-conscious child abuser. That suggests police forces around the world may need to prepare for quite a loss of visibility when its private channels finally do "go dark".

