
How Algorithms Drive Polarization and Harm on Social Media

By Chris Novak

A deep investigation reveals how social media algorithms prioritize engagement and financial gain over user safety, amplifying misinformation and harm.

Social media algorithms have become central to the way we interact with content online, but a new investigation exposes their darker side. Algorithms designed to maximize engagement and profit have contributed to global polarization, enabled harmful content, and taken a toll on individual and societal well-being. Experts, whistleblowers, and insiders from major companies like Meta and TikTok share insights into an “algorithmic arms race” that prioritizes financial goals over safety.

The problem with engaging content

At the core of the issue lies social media’s design for maximum engagement. Platforms such as Facebook (owned by Meta), TikTok, and Twitter (now X) depend on users spending as much time as possible with their content. Their algorithms ensure this by prioritizing posts that provoke outrage and strong emotional responses, because those reactions translate into more clicks, shares, and watch time.
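To make that mechanism concrete, the following is a minimal, hypothetical sketch of engagement-based ranking. The feature names and weights are illustrative assumptions rather than any platform’s actual formula; the point is that a score built purely from reaction signals surfaces whatever provokes the strongest response, with no notion of accuracy or harm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    shares: int
    comments: int
    watch_seconds: float

# Hypothetical weights: tuned for engagement only, with no concept of quality or safety.
WEIGHTS = {"clicks": 1.0, "shares": 3.0, "comments": 2.0, "watch_seconds": 0.1}

def engagement_score(post: Post) -> float:
    """Score a post purely from reaction signals (an illustrative stand-in for real ranking)."""
    return (WEIGHTS["clicks"] * post.clicks
            + WEIGHTS["shares"] * post.shares
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["watch_seconds"] * post.watch_seconds)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed by engagement score alone; whatever drives reactions rises to the top."""
    return sorted(posts, key=engagement_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Calm, accurate explainer", clicks=120, shares=5, comments=10, watch_seconds=900),
        Post("Outrage-bait rumor", clicks=400, shares=90, comments=250, watch_seconds=2400),
    ]
    for post in rank_feed(feed):
        print(f"{engagement_score(post):8.1f}  {post.text}")
```

Running the example ranks the outrage-bait post above the calm explainer simply because it generates more reactions, which is the dynamic insiders describe.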

One former Meta researcher revealed how this engagement-first attitude resulted in safety being deprioritized. When Meta rolled out Instagram Reels to compete with TikTok, internal studies showed a notable increase in harmful comments, including bullying and harassment. However, safeguards to address such risks were insufficient at the feature’s launch due to the company’s "move fast and break things" philosophy.

TikTok’s rapid rise and its consequences

TikTok’s emergence during the pandemic marked a turning point in the competition among platforms. Its highly addictive short-form video content introduced a new level of intensity in user engagement and forced competitors like Meta to race to adapt. Yet, according to whistleblowers, this rush came at significant costs to user safety.

One whistleblower highlighted how TikTok prioritized political content moderation in specific regions, possibly to curry favor with regulatory bodies. The findings suggested that cases involving harmful behavior, such as harassment of minors, were sometimes ranked lower in priority than cases involving politicians—a troubling revelation about the platform’s internal priorities.

Comparison of algorithmic priorities

Criteria              | Example                                      | Priority level
Politicians' content  | A post comparing a politician to a chicken   | High (P1)
Harm to minors        | Sexualized images spread about a teenager    | Lower (P2)

Such prioritization demonstrates the tension between protecting brand interests and ensuring user safety. Insiders suggest these choices stem from TikTok’s desire to avoid regional bans or tighter regulations, as its Chinese ownership places it under heightened scrutiny.
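As a purely illustrative sketch (not TikTok’s actual system), a tier-based triage queue shows why this ordering matters: every case tagged P1 is processed before any case tagged P2, so a report about harm to a minor can wait behind politician-related reports regardless of severity. The class, field names, and tiers below are assumptions made for the example.

```python
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class ModerationCase:
    priority: int        # lower number = handled first (P1 before P2)
    submitted_at: float  # ties within a tier are broken by submission time
    description: str = field(compare=False)

def triage(cases: list[ModerationCase]) -> list[ModerationCase]:
    """Return cases in the order a tier-based review queue would process them."""
    heapq.heapify(cases)
    return [heapq.heappop(cases) for _ in range(len(cases))]

if __name__ == "__main__":
    queue = [
        ModerationCase(priority=2, submitted_at=1.0,
                       description="Sexualized images spread about a teenager"),
        ModerationCase(priority=1, submitted_at=2.0,
                       description="Post comparing a politician to a chicken"),
    ]
    for case in triage(queue):
        print(f"P{case.priority}: {case.description}")
```

In this toy queue the P2 case is only reached once every P1 case has been cleared, even though it was reported first.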

The algorithmic black box

One of the most troubling aspects of modern algorithms is their opacity. Even engineers and data scientists who build and manage these systems often cannot fully explain how they function. This lack of transparency makes it challenging not only to address issues but also to ensure accountability.

Rufanding, a former engineer for TikTok and YouTube, admitted that algorithms often operate as "black boxes." While they react to user engagement, they do so without full awareness of the content they are promoting. This means that harmful materials, such as conspiracy theories or misogynistic content, can be amplified under the radar.

Why are algorithms so unpredictable?

  • Reactive design: Algorithms prioritize content based on engagement signals rather than content quality.
  • Volume of data: With billions of posts daily, it is impossible to monitor everything manually.
  • Limited proactive measures: Companies rely on after-the-fact moderation rather than screening harmful content before it spreads.

Transparency and accountability challenges

Insiders point to a shift in the social media industry’s priorities over the years. Transparency tools, such as the now-defunct analytics service CrowdTangle, initially gave the public insight into what was going viral on Facebook. However, insiders claim Meta phased out these tools as it shifted its focus to maximizing engagement. Companies also face external pressures, including litigation and regulatory scrutiny, which further shape their decision-making.

For example, Elon Musk recently released the source code for X’s recommendation algorithm in connection with a lawsuit over content moderation practices. Transparency initiatives like this remain rare, and even the published code underscored how these systems inherently favor outrage-provoking content, a systemic issue across platforms.

What can be done?

Experts agree that improving content moderation and transparency will be key to mitigating algorithmic harm. Proactive measures could include:

  • Stricter pre-moderation: Review and remove harmful content before it spreads widely.
  • User controls: Allow users more options to filter the type of content shown to them.
  • Improved transparency: Platforms could make their algorithms public or provide better feedback on how content is chosen for amplification.
  • Limit reach: Instead of banning problematic content outright, reduce how widely it can be shared (a sketch of this approach follows below).

These solutions, while challenging to implement, could balance engagement with ethical responsibilities.
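To give the “limit reach” idea above a concrete shape, here is a hedged sketch in which a flagged post is demoted rather than deleted: its ranking score is multiplied by a damping factor, so the content remains available but is rarely amplified. The flag labels, confidence threshold, and multipliers are assumptions for illustration, not any platform’s real policy.

```python
# Illustrative reach-limiting: demote flagged content instead of deleting it.
# Flag labels, the confidence threshold, and damping factors are assumptions for this sketch.

REACH_PENALTY = {
    "borderline_harassment": 0.2,    # keep visible to followers, but rarely recommend
    "possible_misinformation": 0.1,  # sharply reduce amplification pending review
}
FLAG_CONFIDENCE_THRESHOLD = 0.8

def distribution_score(base_score: float, flags: dict[str, float]) -> float:
    """Scale a post's ranking score down for each confident flag instead of removing the post."""
    score = base_score
    for label, confidence in flags.items():
        if confidence >= FLAG_CONFIDENCE_THRESHOLD and label in REACH_PENALTY:
            score *= REACH_PENALTY[label]
    return score

if __name__ == "__main__":
    # A confidently flagged post keeps existing but loses most of its reach.
    print(distribution_score(1000.0, {"possible_misinformation": 0.93}))  # -> 100.0
    print(distribution_score(1000.0, {"possible_misinformation": 0.40}))  # unchanged -> 1000.0
```

The design choice here is that enforcement acts on distribution rather than existence, reducing the spread of borderline material without forcing a binary remove-or-keep decision.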

Conclusion

The investigation into social media algorithms highlights how these systems, designed to maximize engagement, have often caused harm and division. From enabling political manipulation to amplifying conspiracy theories, platforms like TikTok, Meta, and others have prioritized growth and profit over user safety. As society becomes more aware of the role algorithms play in shaping discourse, companies must take responsibility and make changes that uphold ethical standards and improve transparency.

Until significant efforts are made to regulate and design safer algorithms, these issues are likely to persist. As insiders reveal, the fight for engagement is too often won at the cost of individual and societal well-being.

Chris Novak

Staff Writer

Chris covers artificial intelligence, machine learning, and software development trends.
