Twitter Outlines Evolving Approach to Algorithms as Part of New ‘Responsible Machine Learning Initiative’

It's amazing how commonplace the term 'algorithm' has now become, with machine learning-driven, algorithmically defined systems now used to filter the information we see with increasing efficiency, in order to keep us engaged, keep us clicking, and keep us scrolling through our social media feeds for hours on end.

But algorithms have also become a source of rising concern in recent times, with the goals of the platforms feeding us such information often at odds with broader societal aims of increased connection and community. Indeed, various studies have found that the content which sparks the most engagement online is that which triggers a strong emotional response, with anger being a particularly powerful driver. Given this, algorithms, whether intentionally or not, are effectively built to fuel division as a byproduct of the more practical business aim of maximizing engagement.

Sure, partisan news coverage also plays a part, as does existing bias and division. But algorithms have arguably incentivized these dynamics to such a degree that they now largely define, or at least influence, everything that we see.

If it feels like the world is more divided than ever, that's probably because it is, and it's likely due to the algorithms which, in effect, keep us angry all of the time.

Every platform is examining this, and the impacts of its algorithms in various respects. Today, Twitter has outlined its latest algorithmic research effort, the 'Responsible Machine Learning Initiative', which will monitor the impacts of algorithmic shifts with a view to removing negative elements, including bias, from how it applies machine learning systems.

As explained by Twitter:

"When Twitter uses ML, it can impact hundreds of millions of Tweets per day and sometimes, the way a system was designed to help could start to behave differently than was intended. These subtle shifts can then start to impact the people using Twitter and we want to make sure were studying those changes and using them to build a better product."

The project will address four key pillars: taking responsibility for its algorithmic decisions, equity and fairness of outcomes, transparency about its decisions and how it arrived at them, and enabling agency and algorithmic choice.

The broader view is that by analyzing these elements, Twitter will be able to maximize engagement, in line with its ambitious growth targets, while also accounting for, and minimizing, potential societal harms. That may lead to difficult conflicts between the two aims - but Twitter's hoping that by instituting more specific guidance on how it applies machine learning, it can build a more beneficial, inclusive platform through its increased learning and development.

"The META team works to study how our systems work and uses those findings to improve the experience people have on Twitter. This may result in changing our product, such as removing an algorithm and giving people more control over the images they Tweet, or in new standards into how we design and build policies when they have an outsized impact on one particular community."

The project will also include Twitter's ambitious 'BlueSky' initiative, which essentially aims to enable users to define their own algorithms at some point, as opposed to being guided by an overarching set of platform-wide rules.

"Were also building explainable ML solutions so you can better understand our algorithms, what informs them, and how they impact what you see on Twitter. Similarly, algorithmic choice will allow people to have more input and control in shaping what they want Twitter to be for them. Were currently in the early stages of exploring this and will share more soon."

That's a far broader-reaching project, with complexities that could make it impractical for day-to-day use by regular people. But the idea is that by exploring these specific elements, Twitter will be able to make more informed, intelligent, and fair choices about how it applies its machine-defined rules and systems.

It's good to see Twitter taking this on, despite the challenges it will face, and hopefully it will help the platform weed out some of the more concerning algorithmic elements, and create a better, more inclusive, less divisive system.

But I have my doubts.

The desires of idealists will almost always conflict with the demands of shareholders, and it seems likely that, at some stage, such investigations will lead to difficult choices that can only go one way or the other. Still, that's on a wider scale - maybe, by addressing at least some of these aspects, Twitter can build a better system, even if it's not perfect.

At the least, it will provide more insight into the effects of algorithms, and what that means for social platforms in general.
