Professor’s two NSF grants aim to better sort social media content, identify online trolls | Binghamton News – Binghamton University

The discussions happening on social media, both healthy and unhealthy, drive a lot of the public discourse and news coverage in our 21st-century world. Some people use platforms such as Facebook, Twitter and Reddit to make positive connections, but others prefer to sow misinformation and hate.


Given the popularity of those platforms and similar ones, which see millions of posts each day, it can be difficult for researchers to wrap their heads around what is being shared and how it affects our opinions on political and social topics.

Assistant Professor Jeremy Blackburn, a faculty member in the Department of Computer Science at Binghamton University's Thomas J. Watson College of Engineering and Applied Science since 2019, is devising ways to make online content easier to gather and sort, particularly from emerging social media platforms.

Blackburn recently received a five-year, $517,484 National Science Foundation CAREER Award for his project "Towards a Data-Driven Understanding of Online Sentiment." The CAREER Award supports early-career faculty who have the potential to serve as academic role models in research and education.

The project includes four objectives.

"A big focus is on images," Blackburn said. "Can we infer the sentiment or the underlying meaning of an image? Images are used almost as much as text on the internet, and it's hard to figure out what people are talking about if you can't understand the visual language they're using."

Current algorithms classify the sentiment of an image by assessing it and assigning it an independent score, he said. For instance, one tweet may get a 0.4 on a predetermined happiness scale, while another may get a 0.5, but what does that incremental difference mean for humans?

Instead, by showing two pieces of content and asking which is more positive, Blackburn hopes to get a better gauge of the emotion behind it. Complicating that endeavor, however, is knowing how images become memes among certain subsets of online commenters.
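The pairwise framing ("which of these two is more positive?") can be turned into a ranking with a standard statistical tool such as the Bradley-Terry model. The sketch below is an illustration of that general technique, not the project's actual method; the meme names and judgments are invented.

```python
# Minimal Bradley-Terry fit: turn pairwise "A is more positive than B"
# judgments into per-item strength scores. Pure stdlib; toy data only.
from collections import defaultdict

def bradley_terry(comparisons, iters=100):
    """comparisons: list of (winner, loser) pairs from pairwise judgments.
    Returns a normalized strength per item; higher = judged more positive."""
    items = {i for pair in comparisons for i in pair}
    strength = {i: 1.0 for i in items}
    wins = defaultdict(int)
    for winner, _ in comparisons:
        wins[winner] += 1
    for _ in range(iters):
        new = {}
        for i in items:
            # Sum of 1/(s_i + s_j) over every comparison item i took part in.
            denom = 0.0
            for w, l in comparisons:
                if i in (w, l):
                    other = l if i == w else w
                    denom += 1.0 / (strength[i] + strength[other])
            new[i] = wins[i] / denom if denom else strength[i]
        total = sum(new.values())
        strength = {i: s / total for i, s in new.items()}
    return strength

# Hypothetical judgments: meme_a beat meme_b twice, etc.
judgments = [("meme_a", "meme_b"), ("meme_a", "meme_c"),
             ("meme_b", "meme_c"), ("meme_a", "meme_b")]
scores = bradley_terry(judgments)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # → ['meme_a', 'meme_b', 'meme_c']
```

Unlike an absolute 0.4-versus-0.5 score, the strengths here are only meaningful relative to each other, which matches how humans actually judge content.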

"We're not interested in just saying what's in the image; we're interested in saying how it's being used," he said. "We're going from the adage of 'a picture is worth 1,000 words' and treating it as a piece of vocabulary. We have ways that can capture the look of it, but we're also going to treat it like a word, as we do in a language model, and place it where it was used."

"For instance, if you tweet a picture, you may also include some words, and if we have enough of those samples, we can now figure out that someone is upset or sad or whatever the underlying meaning is. We can translate it into regular words."
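One simple way to read "translate it into regular words" is through co-occurrence: place each image identifier in the token stream where it was used, then describe the image by the words that appear alongside it. This toy sketch uses invented posts and an invented `IMG:` naming convention; a real system would learn embeddings in a language model rather than count raw co-occurrences.

```python
# Treat an image like a word: collect the words each image co-occurs with,
# then "translate" the image into its most strongly associated words.
from collections import Counter, defaultdict

def cooccurrence(posts):
    """Map each image token to a Counter of the words used in the same post."""
    ctx = defaultdict(Counter)
    for tokens in posts:
        images = [t for t in tokens if t.startswith("IMG:")]
        words = [t for t in tokens if not t.startswith("IMG:")]
        for img in images:
            ctx[img].update(words)
    return ctx

def translate(ctx, image, k=2):
    """Return the k words most strongly associated with the image."""
    return [w for w, _ in ctx[image].most_common(k)]

# Hypothetical posts: the same meme keeps appearing next to angry language.
posts = [
    ["so", "angry", "about", "this", "IMG:meme42"],
    ["IMG:meme42", "angry", "again"],
    ["happy", "day", "IMG:sunrise"],
]
ctx = cooccurrence(posts)
print(translate(ctx, "IMG:meme42", k=1))  # → ['angry']
```

Even this crude version captures the core idea: the meaning of the image is inferred from where it is used, not from what it depicts.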

Although the development of this new technology to monitor online sentiment could have many uses, in fields such as politics and business, Blackburn has a specific goal he hopes to achieve.

"We could better understand violent content or hate speech online that is very coded, or we could identify misinformation, so that people can't hide this type of behavior by using just images," he said. "That's my personal passion and the reason why I'm developing it."

Another recently awarded NSF project takes aim at better detecting so-called troll accounts that disseminate false information as part of larger influence campaigns on social media.

The two-year, $220,000 grant, a collaboration with Assistant Professor Gianluca Stringhini from Boston University, will collect information about the troll accounts identified by Twitter and Reddit as belonging to disinformation campaigns spearheaded by countries that are U.S. adversaries.

These malicious users are different from bot accounts that automatically post the same message in multiple places. They are coordinated to interact with each other and take multiple sides of the same argument just to sow discord among anyone watching.

One example, Blackburn said, is two troll accounts arguing about Black Lives Matter versus All Lives Matter, not as a matter of principle but merely to spark drama among other users.

"Over time, the same troll account may take different positions on the same issue, because ultimately they don't have a particular opinion; they just want to cause trouble," he said. "They have to convince people to become engaged."

The data collected for this project will be used to train machine-learning algorithms to identify troll accounts by codifying patterns of interactions that are uncommon in real accounts. Social media platforms then would be able to shut down the trolling without needing someone to moderate every questionable post.
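One interaction pattern described above, the same account arguing both sides of the same issue, can be codified as a simple feature. The accounts, topics, and stance labels below are invented, and a real system would learn many such features from labeled Twitter and Reddit troll data rather than apply a single hand-set threshold.

```python
# Flag accounts that take both sides of the same topic, a pattern that is
# uncommon in genuine accounts. Toy data; one feature of many in practice.
from collections import defaultdict

def stance_flip_scores(posts):
    """posts: list of (account, topic, stance) with stance in {+1, -1}.
    Returns, per account, how many topics it argued from both sides."""
    stances = defaultdict(lambda: defaultdict(set))
    for account, topic, stance in posts:
        stances[account][topic].add(stance)
    return {acct: sum(1 for taken in topics.values() if len(taken) == 2)
            for acct, topics in stances.items()}

def flag_suspects(posts, threshold=1):
    """Return accounts whose flip count meets the (hand-set) threshold."""
    scores = stance_flip_scores(posts)
    return sorted(a for a, n in scores.items() if n >= threshold)

posts = [
    ("troll_1", "blm", +1), ("troll_1", "blm", -1),  # both sides of one issue
    ("user_a", "blm", +1), ("user_a", "vax", +1),    # consistent positions
]
print(flag_suspects(posts))  # → ['troll_1']
```

A trained classifier would weigh dozens of such interaction features at once, which is what lets platforms act on coordinated campaigns without moderating every individual post.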

"Towards a Data-Driven Understanding of Online Sentiment" is NSF award #2046590. "Detecting Accounts Involved in Influence Campaigns on Social Media" is NSF award #2114411.

