Understanding the ideological divide online

Algorithms used by social media companies to draw in and retain users have been under scrutiny for several years now. Spanning multiple election cycles and a once-in-a-century pandemic, the debate centres on a simple question: Is social media dividing people? The consensus among experts and academics is yes: platforms create filter bubbles and echo chambers, dividing users into silos of information that resonate with their own ideas and beliefs. Because of such siloed exposure, the hypothesis goes, people are less likely to come across information that challenges their own attitudes. Repeated over and over, such exposure cycles can entrench trust, or distrust, in sources and institutions. This has had an outsized influence on electoral politics, especially in the world's two most prominent democracies, India and the US. From 2008, when Barack Obama first used social media for a political campaign, to the divisive 2016 and 2020 elections in which disinformation played a prominent role, the role of social media, and its spillover into real-world politics, has been hotly debated. It has also raised questions about whether algorithm tweaks by tech giants can spark tension or violence, and even sway election results.

A landmark set of studies now adds to this understanding and offers some pointers on what may (or, in these specific cases, what may not) work. The studies, published in the journals Science and Nature, were the outcome of social media behemoth Meta opening access to independent academics. Among the major takeaways is that a strong ideological segregation exists: Conservatives almost exclusively saw content that did not appear in the feeds of liberal users, and vice versa. The median Facebook user received over 50% of their content from politically like-minded sources, compared with less than 15% from cross-cutting sources. The researchers made some tweaks to the algorithms that determine what people see, but their experiments found that these tweaks had little effect on how politically polarised people felt vis-à-vis their entrenched beliefs, or on their trust in democratic systems. The researchers stressed that the experiments ran for only three months, which may be too short a period, and that Facebook and Instagram were hardly likely to be the only sources of information people turned to. Whether such interventions are more or less effective over longer horizons is a question only longer trials can answer, but one thing is clear: Companies such as Meta and X (formerly Twitter) must allow this sort of access if we are to truly understand what their products are doing to society and politics.

This has an important bearing on India, which heads into general elections next year. Roughly a decade after the Bharatiya Janata Party first used social media to corral millions of young supporters, its proliferation has spawned armies of followers, and trolls, on both sides of the political divide. Many of them deal in disinformation and selective amplification. The social contexts and digital landscapes of the US and India may differ, but a better understanding of the link between social media and political ideology is imperative to counteracting the harmful effects of disinformation. Towards this goal, many such studies are needed in local geographical and social milieus to ensure factual information and diverse points of view reach everyone.
