The world wide web was meant to unite us, but is tearing us apart instead. Is there another way?

By Tribune Editorial Staff
October 17, 2025

The hope of the world wide web, according to its creator Tim Berners-Lee, was that it would make communication easier, bring knowledge to all, and strengthen democracy and connection. Instead, it seems to be driving us apart into increasingly small and angry splinter groups. Why?

We have commonly blamed online echo chambers, digital spaces filled with people who largely share the same beliefs, or filter bubbles, the idea that algorithms tend to show us content we are likely to agree with. However, these concepts have both been challenged by a number of studies. A 2022 study, which tracked the social media behaviours of ten respondents, found people often engage with content they disagree with, even going so far as to seek it out.

When someone engages with a disagreeable post on social media, whether it’s “rage bait” or something else that offends them, it drives income for the platform. But on a societal scale, it drives antisocial outcomes. One of the worst of these is “affective polarisation”: we like people who think similarly to us, and dislike or resent people who hold different views. Research and global surveys both show this form of polarisation is growing across the world.

Changing the economics of social media platforms would likely reduce online polarisation. But this won’t happen without intervention from governments, and effort from each of us.

How our views get reinforced online

Social media use has been associated with growing affective polarisation. Online, we can be influenced by the opinions of people we agree or disagree with – even on topics we had previously been neutral towards. For instance, if there’s an influencer you admire, and they express a view on a new law you hadn’t thought much about, you’re more likely to adopt their viewpoint on it.

When this happens on a large scale, it gradually separates us into ideological tribes that disagree on multiple issues: a phenomenon known as “partisan sorting”.

Research shows our encounters on social media can lead us to develop new views on a topic. It also shows how any searches we do for more insight can solidify these emerging views, as the results are likely to contain the same language as the original post that gave us the view in the first place.

For example, if you see a post that inaccurately claims taking paracetamol during pregnancy will give your baby autism, and you search for other posts using the keywords “paracetamol pregnancy autism”, you will probably get more of the same. Being in a heightened emotional state has been linked to higher susceptibility to believing false or “fake” content.

Dutch politics fuel online hostility

Researcher Sjoerd Hartholt’s article in iBestuur.nl describes a loop that now shapes modern politics in the Netherlands: language used in parliament gets echoed on social platforms, then rebounds into the next debate. Utrecht University’s Data School documented this effect in the “Playing with Fire” study, which analysed tens of millions of posts on X and Telegram alongside Dutch parliamentary debates. The researchers found that anger and dehumanising terms rose after high-profile exchanges, with politicians’ own posts helping to set the tone online. The result is a cycle of polarisation and, in some cases, threats.

Hartholt highlights a practical detail with large consequences: some members post live during debates to reach bigger audiences than the debate itself. The Data School team shows how specific keywords, once introduced in the chamber or in politicians’ posts, can spread across networks within a day, shifting the language of public conversation. This is less about deliberation, more about performance and profiling.

The researchers also point to a change in platform governance that makes outside scrutiny harder. Since the takeover of Twitter and its transition to X, access to data has become costly, which complicates follow-up research at scale. That weakens independent oversight of how platform dynamics shape democratic discourse.

The risks are not limited to the Netherlands. International bodies report rising intimidation of MPs, and growing harassment of journalists and public figures. Online targeting often spills into offline aggression. These trends suggest a shared vulnerability when political actors use social media to escalate rather than cool tensions.

Caribbean relevance is immediate. Regional leaders and media groups warn that misinformation erodes trust and public safety. Small states are highly networked and politically competitive, with added pressures during hurricanes, health alerts, and security incidents. In such contexts, posts by elected officials can move quickly from signaling to mobilizing, sometimes with unintended consequences.

Why are we fed polarising content?

This is where the economics of the internet come in. Divisive and emotionally laden posts are more likely to get engagement (such as likes, shares and comments), especially from people who strongly agree or disagree, and from provocateurs. Platforms will then show these posts to more people, and the cycle of engagement continues.

Social media companies exploit our tendency to engage with divisive content, as engagement means more advertising money for them. According to a 2021 report from the Washington Post, Facebook’s ranking algorithm once treated emoji reactions (including anger) as five times more valuable than “likes”.

Simulation-based studies have also revealed how anger and division drive online engagement. One simulation (in a paper yet to be peer-reviewed) used bots to show that any platform measuring its success and income by engagement (currently all of them) would be most successful if it boosted divisive posts.

Where are we headed?

That said, the current state of social media need not be its future. People are now spending less time on social media than they used to. According to a recent report from the Financial Times, time spent on social media peaked in 2022 and has since been declining. By the end of 2024, users aged 16 and older spent 10% less time on social platforms than they did in 2022.

Droves of users are also leaving bigger “mainstream” platforms for ones that reflect their own political leanings, such as the left-leaning Bluesky or the right-leaning Truth Social. While this may not help with polarisation, it signals many people are no longer satisfied with the social media status quo.

Internet-fuelled polarisation has also imposed real costs on governments, in both mental health services and policing. Consider recent events in Australia, where online hate and misinformation have played a role in neo-Nazi marches, and in the cancellation of events run by the LGBTQIA+ community due to threats.

Regulation is starting to lift the lid. In the European Union, the Digital Services Act now forces the biggest platforms to study their risks and share data with approved researchers. The United Kingdom and Australia have also broadened oversight of harmful content. These rules do not end polarization; they make secrecy and inaction more costly.

Platforms can take clear steps. Stop using raw engagement as the main goal; track informed reading and user satisfaction instead. Push likely false or dehumanizing posts lower in the feed, add labels and context, keep removals for the worst cases. Keep stable data access for independent audits, with privacy protections.

Users and institutions have roles too. Do not reply to rage bait; report and mute rather than quote and share. Teach in schools and public service training how ranking works, why novelty and outrage grab attention, and how to check sources fast. Parties and public offices can set cross party rules against dehumanizing language, with quick corrections when lines are crossed.

The web can still host civil disagreement. Change the incentives, add small frictions that favor accuracy and context, build habits that resist outrage. The goal is not to erase conflict; it is to keep disagreement from sliding into contempt.

For those of us who remain on social media platforms, we can individually work to change the status quo. Research shows greater tolerance for different views among online users can slow down polarisation. We can also give social media companies fewer signals to work from, by not re-sharing or promoting content that’s likely to make others irate.

Fundamentally, though, this is a structural problem. Fixing it will mean reframing the economics of online activity to increase the potential for balanced and respectful conversations, and decrease the reward for producing and/or engaging with rage bait. And this will almost certainly require government intervention.

When other products have caused harm, governments have regulated them and taxed the companies responsible. Social media platforms can also be regulated and taxed. It may be hard, but not impossible. And it’s worth doing if we want a world where we’re not all one opinion away from becoming an outcast.

Source: TheConversation.com. Adapted from George Buchanan and Dana McKay. Buchanan is Deputy Dean, School of Computing Technologies, RMIT University. McKay is Associate Dean, Interaction, Technology and Information, RMIT University.
