WhatsApp end-to-end encrypted messages aren't that private after all – Ars Technica

The security of Facebook's popular messaging app leaves several rather important devils in its details.

Yesterday, independent newsroom ProPublica published a detailed piece examining the popular WhatsApp messaging platform's privacy claims. The service famously offers "end-to-end encryption," which most users interpret as meaning that Facebook, WhatsApp's owner since 2014, can neither read messages itself nor forward them to law enforcement.

This claim is contradicted by the simple fact that Facebook employs about 1,000 WhatsApp moderators whose entire job is, you guessed it, reviewing WhatsApp messages that have been flagged as "improper."

The loophole in WhatsApp's end-to-end encryption is simple: The recipient of any WhatsApp message can flag it. Once flagged, the message is copied on the recipient's device and sent as a separate message to Facebook for review.

Messages are typically flagged, and reviewed, for the same reasons they would be on Facebook itself, including claims of fraud, spam, child porn, and other illegal activities. When a message recipient flags a WhatsApp message for review, that message is batched with the four most recent prior messages in that thread and then sent on to WhatsApp's review system as attachments to a ticket.
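The mechanics of that flagging step can be sketched in a few lines. This is an illustrative reconstruction based on ProPublica's description, not WhatsApp's actual code; the `Message`, `ReportTicket`, and `build_report` names are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

@dataclass
class ReportTicket:
    """A review ticket as ProPublica describes it: the flagged message
    plus up to four preceding messages from the same thread."""
    flagged: Message
    context: list  # up to four prior messages, attached in plaintext

def build_report(thread: list, flagged_index: int) -> ReportTicket:
    # The recipient's device already decrypted these messages, so
    # flagging simply re-sends plaintext copies to the platform;
    # the encryption in transit is never broken.
    start = max(0, flagged_index - 4)
    return ReportTicket(
        flagged=thread[flagged_index],
        context=thread[start:flagged_index],
    )
```

The key point the sketch makes explicit: the report is assembled on the endpoint, from data the endpoint can already read.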

Although nothing indicates that Facebook currently collects user messages without manual intervention by the recipient, it's worth pointing out that there is no technical reason it could not do so. The security of "end-to-end" encryption depends on the endpoints themselves, and in the case of a mobile messaging application, that includes the application and its users.

An "end-to-end" encrypted messaging platform could choose to, for example, perform automated AI-based content scanning of all messages on a device, then forward automatically flagged messages to the platform's cloud for further action. Ultimately, privacy-focused users must rely on policies and platform trust as heavily as they do on technological bullet points.
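That hypothetical scenario is simple enough to express directly. The sketch below is purely illustrative; no platform is confirmed to run it, and `classifier` and `upload` stand in for whatever local model and reporting channel such a platform would use:

```python
def scan_and_forward(messages, classifier, upload):
    """Hypothetical client-side scanning: run a local classifier over
    already-decrypted messages and upload any it flags. The transport
    encryption is untouched; the endpoint itself does the leaking."""
    flagged = [m for m in messages if classifier(m)]
    for m in flagged:
        upload(m)  # plaintext leaves the device despite "E2E" encryption
    return flagged
```

The point is architectural: once code on the endpoint can read plaintext, "end-to-end encrypted" constrains only what happens in transit.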

Once a review ticket arrives in WhatsApp's system, it is fed automatically into a "reactive" queue for human contract workers to assess. AI algorithms also feed the ticket into "proactive" queues that process unencrypted metadata, including names and profile images of the user's groups, phone number, device fingerprinting, related Facebook and Instagram accounts, and more.

Human WhatsApp reviewers process both types of queue, reactive and proactive, for reported and/or suspected policy violations. The reviewers have only three options for a ticket: ignore it, place the user account on "watch," or ban the user account entirely. (According to ProPublica, Facebook uses the limited set of actions as justification for saying that reviewers do not "moderate content" on the platform.)

Although WhatsApp's moderators (pardon us, reviewers) have fewer options than their counterparts at Facebook or Instagram do, they face similar challenges and have similar hindrances. Accenture, the company that Facebook contracts with for moderation and review, hires workers who speak a variety of languages, but not all languages. When messages arrive in a language moderators are not conversant in, they must rely on Facebook's automatic language-translation tools.

"In the three years I've been there, it's always been horrible," one moderator told ProPublica. Facebook's translation tool offers little to no guidance on either slang or local context, which is no surprise given that the tool frequently has difficulty even identifying the source language. A shaving company selling straight razors may be misflagged for "selling weapons," while a bra manufacturer could get knocked as a "sexually oriented business."

WhatsApp's moderation standards can be as confusing as its automated translation tools. For example, decisions about child pornography may require comparing hip bones and pubic hair on a naked person to a medical index chart, while decisions about political violence might require guessing whether an apparently severed head in a video is real or fake.

Unsurprisingly, some WhatsApp users also use the flagging system itself to attack other users. One moderator told ProPublica that "we had a couple of months where AI was banning groups left and right" because users in Brazil and Mexico would change the name of a messaging group to something problematic and then report the message. "At the worst of it," recalled the moderator, "we were probably getting tens of thousands of those. They figured out some words that the algorithm did not like."

Although WhatsApp's "end-to-end" encryption of message contents can only be subverted by the sender or recipient devices themselves, a wealth of metadata associated with those messages is visible to Facebook, and to law enforcement authorities or others that Facebook decides to share it with, with no such caveat.

ProPublica found more than a dozen instances of the Department of Justice seeking WhatsApp metadata since 2017. These requests are known as "pen register orders," terminology dating from requests for connection metadata on landline telephone accounts. ProPublica correctly points out that this is an unknown fraction of the total requests in that time period, as many such orders, and their results, are sealed by the courts.

Since the pen orders and their results are frequently sealed, it's also difficult to say exactly what metadata the company has turned over. Facebook refers to this data as "Prospective Message Pairs" (PMPs), nomenclature given to ProPublica anonymously, which we were able to confirm in the announcement of a January 2020 course offered to Brazilian department of justice employees.

Although we don't know exactly what metadata is present in these PMPs, we do know it's highly valuable to law enforcement. In one particularly high-profile 2018 case, whistleblower and former Treasury Department official Natalie Edwards was convicted of leaking confidential banking reports to BuzzFeed via WhatsApp, which she incorrectly believed to be "secure."

FBI Special Agent Emily Eckstut was able to detail that Edwards exchanged "approximately 70 messages" with a BuzzFeed reporter "between 12:33 am and 12:54 am" the day after the article was published; the data helped secure a conviction and a six-month prison sentence for conspiracy.
