Studying content moderation, disinformation, and deep fakes. Astroturfing, bias, and rumor control. – The CyberWire

At a glance.

Harvard University's Nieman Journalism Lab is assembling a guide to the different ways in which online platforms and services are responding to disinformation and misinformation. Produced by the Partnership on AI, the report seeks to catalogue "identifiable patterns in the variety of tactics used to classify and act on information credibility across platforms." The Partnership on AI's members include "Amazon, Facebook, Google, DeepMind, Microsoft, IBM, and Apple, as well as nonprofits like the ACLU and First Draft, news organizations like The New York Times, and academic and research institutes like Harvard's Berkman Klein Center and Data & Society."

The Lab's first post on the topic "look[s] at several interventions: labeling, downranking, removal, and other external approaches. Within these interventions, we will look at patterns in what type of content they're applied to and where the information for the intervention is coming from. Finally, we will turn to platform policies and transparency reports to understand what information is available about the impact of these interventions and the motivations behind them."
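The taxonomy the post lays out (the intervention applied, the kind of content it targets, and where the signal behind it comes from) maps naturally onto a small data model. Below is a minimal, hypothetical sketch in Python; the enum values and record fields are illustrative assumptions, not the Partnership on AI's actual schema.

```python
# A hypothetical sketch of the intervention taxonomy described above.
# The enum values and record fields are illustrative assumptions, not the
# Partnership on AI's actual schema.
from dataclasses import dataclass
from enum import Enum, auto


class Intervention(Enum):
    LABELING = auto()      # contextual labels or warnings attached to content
    DOWNRANKING = auto()   # reduced distribution in feeds and search
    REMOVAL = auto()       # content taken down entirely
    EXTERNAL = auto()      # measures outside the platform, e.g. fact-check partners


class SignalSource(Enum):
    PLATFORM_REVIEW = auto()   # the platform's own moderators or classifiers
    THIRD_PARTY = auto()       # external fact-checkers or researchers
    USER_REPORT = auto()       # flags from the platform's users


@dataclass
class ModerationAction:
    """One recorded intervention against a piece of content."""
    content_type: str            # e.g. "image", "video", "text post"
    intervention: Intervention
    source: SignalSource
    rationale: str               # the stated policy basis for the action


# Example: an image labeled on the strength of a third-party fact check.
action = ModerationAction(
    content_type="image",
    intervention=Intervention.LABELING,
    source=SignalSource.THIRD_PARTY,
    rationale="missing context",
)
print(action)
```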

The University of Pittsburgh has announced the formation of the Pitt Disinformation Lab, a unit of the university's Institute for Cyber Law, Policy, and Security. The Lab aspires to go "beyond passive detection and reporting to create a new, community-centered system of malign influence warning, understanding, and response."

The Wall Street Journal reports that Facebook and Michigan State University are conducting research they believe could yield tools able to reliably recognize deepfake images.
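For a sense of the general shape such a tool would take, here is a minimal, hypothetical sketch of a binary deepfake-image classifier in Python with PyTorch: a pretrained CNN backbone with a two-class (real vs. fake) head. It is not the Facebook/Michigan State method, which the Journal describes only in outline, and the replaced head would need to be fine-tuned on labeled real and fake images before its scores meant anything.

```python
# A hypothetical, generic deepfake-image classifier: a pretrained CNN with a
# two-class (real vs. fake) head. This is NOT the Facebook/Michigan State
# research method; it only illustrates the general shape of such a tool.
# The replaced head is untrained here, so its output is meaningless until the
# model is fine-tuned on labeled real and fake images.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two outputs: real, fake
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(path: str) -> float:
    """Return the model's estimated probability that the image at `path` is a deepfake."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(image)
    return torch.softmax(logits, dim=1)[0, 1].item()

# Example usage (the file path is a placeholder):
# print(fake_probability("suspect_image.jpg"))
```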

The Guardian has an account of an alleged attempt by a Republican-aligned US marketing firm, Rally Forge, to divide the opposing Democrats' vote with social media posts that appeared to be in the interests of progressive (but not necessarily Democratic) candidates. The posts used images associated with progressive politics (red rose emojis, pictures of Senator Sanders and Representative Ocasio-Cortez) and various progressive memes against the corporate, two-party oligarchy and the corporate, capitalist wage system. Some Green Party candidates were endorsed by name, which triggered a Federal Election Commission inquiry into the posts' funding. The apparent goal was to shave the Democrats' share of the progressive vote by diverting it to third parties and candidates farther to the political left. Facebook permitted some of the posts but took down others as violating policies against inauthenticity, and it restricted Rally Forge's access to the platform. The Federal Election Commission's inquiry, for its part, ended without enforcement action. The Guardian regards both responses as inadequate to the astroturfing imposture and finds the incident troubling with respect to the FEC's ability to police political campaigning.

Other criticisms, these from the other end of the political spectrum, question, in effect, whether "non-partisan" should be read as "viewpoint-neutral," "unbiased," "agenda-free," or some other, similar bona fides. The conservative American Principles Project (APP) has complained to the Aspen Institute that the Institute's Commission on Information Disorder, begun in January with the mission of examining an American "public information crisis," is likely to produce a progressive hit piece in the guise of prescribing remedies for disinformation. The Commission, the APP argues, is "elitist" and only nominally impartial, counting a single Republican among its members. Two Commissioners are singled out for mention in dispatches: Kathryn Murdoch, said to be a donor to PACRONYM, a progressive PAC itself accused of astroturfing during the 2020 election, and Prince Harry, who in an earlier Aspen session described the First Amendment to the US Constitution as "bonkers," which sounds as if he doesn't entirely approve of it.

These complaints suggest, again, the inherent difficulty of deploying content moderation that most parties will find credible. OODA Loop has an essay on Chinese government trolling that suggests an alternative to content moderation: live with the disinformation. To censor, as Beijing does, is to cede to government what is rightfully the citizen's responsibility: distinguishing fact from opinion, truth from lies.

CISA, the US Cybersecurity and Infrastructure Security Agency, has taken up the old governmental function of rumor control (which is not censorship, but rather the classically liberal remedy of "more speech," albeit generously funded more speech) and published a graphic novel, Bug Bytes, that focuses on pandemic misinformation. It will be interesting to see how the novel's effectiveness is measured, or at least assessed.

(To return to the Duke of Sussex for a moment, those curious as to what the First Amendment to the United States Constitution actually says that might have prompted him to judge it "bonkers" will find the text gratifyingly brief. We quote it in full: "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances." That's crazy talk. Or maybe the bonkers stuff is in the penumbra or someplace like that; it's always in the last place you look, isn't it?)
