Recommendation systems have become ubiquitous in our digital lives, influencing the content we consume, the products we purchase, and the information we encounter online. Fuelled by vast amounts of user data, these algorithms have the power to personalise experiences, making suggestions tailored to our preferences and interests. Indeed, the benefits of personalisation are immense – from connecting users to the products most suited to them, to creating significant efficiencies for the platforms that deploy them. However, left unchecked, these systems can have implications for user privacy, autonomy and agency, and therefore warrant careful consideration of their ethical implications and potential risks. In this blog, we delve into the basics of recommendation systems, possible risks associated with their deployment, and what governments worldwide are doing to address them.
Recommendation systems, or recommenders, are tools that filter, cluster and rank information, suggesting relevant content or products to users based on their preferences. They do so using pre-defined criteria or signals, which may include past user behaviour, platform interactions, purchase trends and demographic information, among others. Generating recommendations and rankings is a complex process, typically carried out by a series of algorithms working in concert.
Principally, there are three techniques that recommendation systems rely on:

- Collaborative filtering, which suggests items based on the preferences of users with similar behaviour;
- Content-based filtering, which recommends items similar to those a user has previously engaged with; and
- Hybrid approaches, which combine both techniques to offset their individual limitations.
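To make the first of these concrete, user-based collaborative filtering can be sketched in a few lines of Python. This is a minimal toy illustration – the ratings data and function names are invented for this example, not any platform's actual implementation:

```python
from math import sqrt

# Toy user-item ratings (hypothetical data): users rate items from 1 to 5.
ratings = {
    "alice": {"book": 5, "film": 3, "game": 4},
    "bob":   {"book": 4, "film": 1},
    "carol": {"film": 5, "game": 5},
}

def cosine(u, v):
    """Cosine similarity between two users' rating dicts, over shared items."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(target, ratings):
    """Score items the target hasn't seen, weighted by how similar
    each other user is to the target."""
    seen = set(ratings[target])
    scores, weights = {}, {}
    for other, their_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], their_ratings)
        for item, rating in their_ratings.items():
            if item in seen:
                continue
            scores[item] = scores.get(item, 0.0) + sim * rating
            weights[item] = weights.get(item, 0.0) + sim
    # Normalise by total similarity and rank highest first.
    return sorted(
        ((item, scores[item] / weights[item])
         for item in scores if weights[item] > 0),
        key=lambda pair: -pair[1],
    )

print(recommend("bob", ratings))  # bob is recommended "game"
```

Because both alice and carol rated "game" highly and resemble bob on the items they share, "game" surfaces as bob's top suggestion – the same signal-aggregation logic, scaled to millions of users and items, underpins production recommenders.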
Although they can have benefits for both platform providers and consumers, recommendation systems can pose several risks, particularly in relation to privacy and bias. These risks can open up providers not only to financial and reputational risks, but also legal action.
As recommendation systems leverage large quantities of user data to carry out their functions, they are prone to privacy risks. Such systems may collect data containing personal identifiers without obtaining explicit consent, eroding user agency. If not adequately protected through data protection and cybersecurity mechanisms, these datasets risk being de-anonymised and misused by bad actors to build granular profiles of users – as evidenced by the Cambridge Analytica scandal of the mid-2010s.
Considering that many recommendation systems rely heavily on collaborative-filtering techniques, safeguarding users from the potentially harmful inferences such systems draw about them is a complex undertaking – and those inferences can impede users' digital autonomy.
If not designed and trained carefully, recommendation and ranking systems may exhibit algorithmic biases that undermine their effectiveness. For instance, an algorithm may prioritise popular, highly ranked or clickbait content over a user's actual preferences (popularity bias), or fail to capture multiple user interests at the same time, recommending only one kind of result (single-interest bias). Depending on user behaviour, these biases can generate harmful outcomes, such as inadvertently exposing users to content glorifying self-harm, eating disorders, suicide and violent extremism.
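Popularity bias is easy to reproduce in miniature. The sketch below (with an invented click log) shows how a recommender that simply ranks items by global click counts keeps surfacing "head" content to everyone, even a user whose history is entirely niche:

```python
from collections import Counter

# Hypothetical interaction log: a few head items dominate the clicks.
clicks = ["hit1"] * 50 + ["hit2"] * 30 + ["niche1"] * 3 + ["niche2"] * 2
popularity = Counter(clicks)

def popular_recommender(k=2):
    """Naive recommender: serve everyone the k globally most-clicked items."""
    return [item for item, _ in popularity.most_common(k)]

# A user whose actual interests are niche items still receives only head content.
user_history = ["niche1", "niche2"]
recs = popular_recommender()
overlap = set(recs) & set(user_history)
print(recs, "overlap with user interests:", overlap)  # ['hit1', 'hit2'] ... set()
```

The empty overlap is the bias in action: without per-user signals, the system optimises for aggregate engagement rather than individual relevance, which is why popularity-aware re-ranking and diversity constraints are common mitigations.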
Despite best efforts to curb its prevalence, automated feedback loops continue to surface a fraction of such problematic material, fostering algorithmic overdependence – where individuals rely too heavily on algorithmic suggestions to make decisions, without fully considering their potential risks. This overdependence may, in turn, funnel users into filter bubbles, or echo chambers of one-dimensional and, at times, harmful and inaccurate narratives. Finally, such risks may be particularly pronounced for vulnerable users like minors and young adults, exposing them to potentially dangerous products, inappropriate content and bad actors.
Indeed, there has been a slew of lawsuits over the use of recommendation algorithms, highlighting their potential to cause harm, both online and offline. For example, in Gonzalez v. Google, petitioners argued that YouTube's recommendation engine helped radicalise individuals with ISIS propaganda, and sued Google under 18 U.S.C. § 2333 of the Anti-Terrorism Act (ATA).
Further, a Seattle school district blamed recommendation algorithms for playing a central role in exacerbating mental health issues among teenagers. Citing a 2021 investigation in which teenage girls reportedly developed eating disorders after TikTok promoted extreme diet videos to them, the district sued leading social media platforms for allegedly addicting students to problematic content.
Governments worldwide have accelerated their efforts to govern recommendation systems and prevent future instances of harm. Leading the pack is the European Union, which in recent years has launched a multi-pronged regulatory endeavour to govern such algorithms, starting with its Guidelines on Ranking Transparency in 2020, which mandate that recommendation and ranking decisions be explainable and clearly communicated to users.
This is complemented by the Digital Services Act (DSA) – the EU’s mainstay legislation on online safety, which prescribes a series of measures to ensure recommender transparency, risk assessment and risk management (Articles 27, 34 and 35 of the legislation, respectively). The DSA mandates Very Large Online Platforms (VLOPs) and Search Engines (VLOSEs) to conduct independent external audits of recommendation algorithms (Article 37) and grant data access to Digital Services Coordinators and vetted researchers (Article 40), such that systemic risks to online safety are proactively prevented. Furthermore, under Article 38, the DSA directs VLOPs and VLOSEs to implement design and technical modifications to their systems to provide users with the choice to opt out of personalised recommendations.
Governing recommendation systems also comes under the purview of the EU AI Act, which seeks to establish a horizontal and risk-based regulatory regime for Artificial Intelligence. In the Act's latest compromise text – which was unanimously passed by the European Parliament on 14 June 2023 and has since proceeded to the final Trilogue stage of negotiations between the EU Parliament, Council and Commission – recommender systems deployed by VLOPs and VLOSEs have been designated as High-Risk AI systems. This brings forth a set of stringent obligations, mandating providers to undergo ex-ante conformity assessments, obtain a CE certification, conduct Fundamental Rights Impact Assessments, and establish post-market monitoring plans.
Across the Atlantic, the United States is seeing concerted bipartisan efforts to regulate recommendation systems and platform algorithms. Bills like the Algorithmic Justice and Online Platform Transparency Act (2021), the Platform Accountability and Transparency Act (2021) and the Filter Bubble Transparency Act (2019) are leading examples in this regard – and, echoing the playbook of broader US AI regulation (notably the Algorithmic Accountability Act (AAA) and the Stop Discrimination by Algorithms Act (SDAA)), may subject providers of such systems to mandatory transparency measures and algorithmic audits.
It remains to be seen whether these endeavours will effectively reduce the incidence of harm or inadvertently stifle innovation. In the short term, however, increasing public scrutiny and government clarion calls for regulation are certain.
On 6 May 2023, the European Commission published draft rules on conducting annual independent audits of large platforms under the Digital Services Act. Targeting platform algorithms (which include recommendation systems), these rules are expected to be adopted by the Commission by the third quarter of 2023, leaving just a few months for platforms to comply. With such regulatory measures afoot, it is crucial to prioritise the development of AI systems that embed ethical principles such as fairness, explainability and harm mitigation right from the outset.
At Holistic AI, we have pioneered the field of AI ethics and have carried out over 1000 risk mitigations covering a vast range of systems. Drawing on interdisciplinary expertise spanning computer science, law, policy, ethics and social science, we take a comprehensive approach to AI governance, risk and compliance, ensuring that we understand both the technology and the context in which it is used.
To find out more about how Holistic AI can help you, schedule a demo with us.
Authored by Siddhant Chatterjee, Public Policy Associate at Holistic AI.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.