Algorithmic Harms on Social Media: Navigating Online Safety Challenges

April 24, 2023
Authored by
Siddhant Chatterjee
Public Policy Strategist at Holistic AI

Algorithms play an increasingly significant role in facilitating connections on social media platforms, from powering recommendations that help businesses connect with users, to amplifying movements like #MeToo that enable positive socio-cultural shifts. However, they have also been deployed as vectors of harm, both intentionally and unintentionally. In this blog, we take a quick look at what some of these harms are, what governments are doing to mitigate them, and what can be done to ensure that algorithms are developed with safety, fairness, and ethics in mind.

Key Takeaways:

  • Malicious actors are using generative AI to create deepfakes, voice clones and other synthetic media that amplify inaccurate and harmful narratives, fuelling fears that Large Language Models like ChatGPT and Bard may also be misused to manufacture misleading content.
  • Social media algorithms may unintentionally cause harm due to inadequate ethical or risk management frameworks, algorithmic overdependence, and the reinforcement of stereotypes about marginalised communities, leading to filter bubbles and echo chambers.
  • Governments worldwide are moving rapidly to mitigate the harms caused by social media algorithms through regulations such as the EU AI Act, Digital Services Act, and legislation in the US.
  • Lawsuits like Gonzalez v. Google, which accuses YouTube's recommendations of helping radicalise individuals with ISIS propaganda, may set a precedent for holding platforms liable for algorithmic recommendations.
  • It remains to be seen whether the measures being taken will effectively reduce harms or stifle innovation.

Understanding algorithmic harms

Malicious actors have resorted to using generative AI to create deepfakes and voice clones that peddle inaccurate and harmful narratives. From President Zelenskyy purportedly asking his citizens to lay down their weapons at the height of the Ukraine-Russia conflict, to doctored intimate photos of women, such synthetic media are generated and amplified across the internet with minimal friction every day. It is therefore no surprise that there is growing concern in the research community that Large Language Models like ChatGPT and Bard may also be misused to manufacture conspiracy theories. Coupled with bot accounts and troll farms that amplify disinformation, hate speech and computational propaganda, social media algorithms risk subverting public dialogue and democratic processes like elections, thereby widening trust deficits.

Algorithmic harms on social media may also occur unintentionally due to inadequate or absent ethical and risk management frameworks. For example, algorithms trained on biased data may reinforce stereotypes about race, gender and sexuality, denying marginalised communities effective representation online. Further, recommendation algorithms, which power search engines and help users discover content suited to their preferences, risk inadvertently exposing users to content glorifying eating disorders, child pornography, violent extremism, and self-harm. Despite best efforts to curb its prevalence, automated feedback loops continue to recommend a fraction of such low-quality material to users, encouraging algorithmic overdependence, where individuals rely too heavily on algorithms to make decisions without fully considering the risks. Algorithmic overdependence, in turn, may funnel users into filter bubbles, or echo chambers of one-dimensional and at times harmful narratives.
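To make the feedback-loop dynamic concrete, the toy simulation below is a minimal sketch, not any platform's actual ranking system: the topics, click probabilities and weights are invented. It shows how a recommender that weights topics by past engagement can progressively concentrate a feed around a single sensational topic.

```python
# Minimal sketch, not any platform's actual system: an engagement-weighted
# recommender whose suggestions feed back into its own weights, illustrating
# how a feedback loop can narrow a feed around one topic.
import random
from collections import Counter

TOPICS = ["news", "sports", "music", "fitness", "extreme-diets"]  # invented topics

def recommend(history: Counter, k: int = 5) -> list:
    # Weight each topic by past engagement, plus a small exploration term.
    weights = [history[t] + 0.5 for t in TOPICS]
    return random.choices(TOPICS, weights=weights, k=k)

def simulate(steps: int = 50) -> Counter:
    history = Counter()
    click_prob = {"extreme-diets": 0.7}  # assume sensational content is clicked more
    for _ in range(steps):
        for item in recommend(history):
            if random.random() < click_prob.get(item, 0.3):
                history[item] += 1  # engagement feeds back into the ranker
    return history

if __name__ == "__main__":
    random.seed(0)
    print(simulate())  # engagement typically concentrates on the sensational topic
```

Run repeatedly, the simulation tends to end with most engagement, and therefore most recommendations, attached to whichever topic the user clicked slightly more often at the start: a crude picture of how a filter bubble can form without anyone designing for it.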

Social media platforms are being sued for algorithmic harms

There has been a slew of lawsuits over the algorithms used by social media platforms, highlighting their potential to cause harm both online and offline. In Gonzalez v. Google, for example, the petitioners argued that YouTube's recommendations helped radicalise individuals with ISIS propaganda, and sued Google under 18 U.S.C. § 2333, a provision of the Antiterrorism Act (ATA). Further, a Seattle school district is blaming algorithms for exacerbating mental health issues among teenagers. Citing a 2021 investigation in which teenage girls reportedly developed or relapsed into eating disorders after TikTok promoted extreme diet videos to them, the district sued leading social media platforms for allegedly addicting children to problematic content.

This is not to say that all AI is harmful

Social media platforms can now detect and moderate harmful content like Child Exploitative Imagery (CEI) and violent extremism at scale using classifiers, create safer experiences for young users using age-assurance technologies, and even detect clusters of inauthentic accounts through automated signals. Algorithms are also being deployed to identify indicators of imminent harm, like search keywords and patterns in videos watched, so that help can be sent at the earliest possible moment. Indeed, recent advances in algorithmic explainability, equity and fairness on social media are themselves products of AI research and innovation.
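For illustration, the sketch below shows the general shape of classifier-based moderation: a toy text classifier built with scikit-learn, trained on invented examples and labels (real systems use far larger models and datasets), that scores posts so that high-risk ones can be routed to human reviewers.

```python
# Illustrative sketch only: a toy text classifier of the kind platforms use
# (at far greater scale, with far richer models) to flag policy-violating
# content for human review. All examples and labels here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "join us and attack them tonight",     # hypothetical violating example
    "check out my new workout routine",    # benign
    "they deserve to be hurt for this",    # hypothetical violating example
    "great recipe, thanks for sharing",    # benign
]
train_labels = [1, 0, 1, 0]  # 1 = flag for review, 0 = leave up

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Posts scoring above a tuned threshold would be queued for human moderators
# rather than removed automatically, to limit false positives.
scores = model.predict_proba(["we should hurt them"])[:, 1]
print(scores)
```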

However, risks remain. Lacking human nuance and context, automated content moderation technologies may misidentify and excessively remove harmless content, impairing freedom of speech. Detection algorithms trained on limited datasets may also encode social biases: one study, for example, found that some classifiers labelled content written by African Americans as hate speech at a higher rate than content from other users. Similarly, algorithms that use Natural Language Processing (NLP) to classify text are usually trained primarily on English-language data and may not classify non-English content accurately, with implications for the global equity of content moderation decisions.
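One way such disparities are surfaced is by auditing a classifier's error rates across user groups. The sketch below uses placeholder predictions, labels and group assignments (purely hypothetical data) to compute per-group false-positive rates, the metric at issue when benign posts from one community are flagged more often than others.

```python
# Hedged sketch: auditing a moderation classifier for unequal false-positive
# rates across user groups. Predictions, labels and group assignments below
# are placeholders, not real measurements.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 1, 0, 0]  # 1 = flagged as hate speech
labels      = [0, 0, 0, 1, 0, 0, 0, 0]  # 1 = actually violating (ground truth)
groups      = ["A", "A", "A", "B", "B", "B", "B", "B"]

false_pos = defaultdict(int)   # benign posts wrongly flagged, per group
benign    = defaultdict(int)   # all benign posts, per group

for pred, label, group in zip(predictions, labels, groups):
    if label == 0:
        benign[group] += 1
        if pred == 1:
            false_pos[group] += 1

for group in sorted(benign):
    rate = false_pos[group] / benign[group]
    print(f"group {group}: false-positive rate = {rate:.2f}")
```

A large gap between groups in this kind of audit is one signal that the training data or labelling guidelines deserve a closer look before the classifier is deployed more widely.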

Regulatory shifts

In light of the potential risks posed by social media algorithms, governments worldwide are moving rapidly to mitigate these harms. Although the EU AI Act is likely to become the global gold standard for AI regulation and targets a range of high-risk applications of AI, it does not explicitly seek to regulate social media algorithms. However, recent revisions to its text that would classify biometric applications like face filters (used in apps such as Snapchat) and recommender systems that interact with children as high-risk AI systems signal increasing regulatory concern.

Nevertheless, other regulation in the EU will have important implications for the safety of social media algorithms. For example, the Digital Services Act (DSA) imposes risk assessments (Article 26), mitigation measures (Article 27) and independent third-party audits (Article 28) on Very Large Online Platforms (VLOPs), with failure to comply resulting in large penalties. Transparency obligations on recommendation algorithms are being enforced through the Ranking Transparency Guidelines, so that algorithmic decisions are explainable and well communicated to users. Moreover, Italy, a member state, has adopted more drastic measures by temporarily banning ChatGPT, citing no legal basis for the LLM's data collection practices.

The US is also seeing concerted bipartisan efforts to regulate these harms. Bills like the Algorithmic Justice and Online Platform Transparency Act (2021) and the Filter Bubble Transparency Act (2019) are leading examples, and, following the playbook of broader US AI regulation (notably the Algorithmic Accountability Act (AAA) and the Stop Discrimination by Algorithms Act (SDAA)), would subject companies to mandatory transparency measures and algorithmic audits. Finally, lawsuits such as Gonzalez v. Google may also set a precedent for whether platforms can be held liable for harms caused by algorithmic recommendations, potentially changing the landscape of the internet.

It remains to be seen whether these endeavours will be fit for purpose and effectively reduce harms, or prove counter-productive and inadvertently stifle innovation. What is certain in the short term, however, is an uptick in such incidents, heightened public scrutiny and increasing government measures to regulate them.

What’s next

There is a pressing need to develop trustworthy AI systems that embed ethical principles of fairness and harm mitigation from the get-go. With regulatory efforts gaining momentum globally, businesses of all sizes will need to act early and proactively to remain compliant.

At Holistic AI, we have pioneered the fields of AI ethics and AI risk management and have carried out over 1000 risk mitigations. Drawing on interdisciplinary expertise from computer science, law, policy, philosophy, ethics, and social science, we take a comprehensive approach to AI governance, risk, and compliance, ensuring that we understand both the technology and the context in which it is used.

To find out more about how Holistic AI can help you get compliant with upcoming AI regulations, schedule a demo with us.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
