This policy brief identifies additional content-agnostic strategies that can be utilised to limit the spread of harmful content through platforms’ recommendation systems.

Reducing the spread of harmful content on social media platforms (SMPs) has become a critical goal for regulators and governments around the world. Most of the conversation around the issue thus far has focused on developing content moderation strategies that can be used to compel platforms to take down content deemed ‘illegal.’

In India, the Information Technology Rules (2021) frame SMPs as intermediaries and outline their responsibilities with regard to content moderation. The Rules also mandate grievance redressal mechanisms and place additional obligations on ‘significant social media intermediaries,’ including responsibilities to prevent the posting of certain types of harmful content and to swiftly remove any content deemed unlawful under the Act.

This approach to limiting the spread of harmful content online places the responsibility on platforms to restrict or moderate content deemed harmful. Such content-moderation-focused approaches alone will not be adequate and can result in the restriction of rights. This approach is limited because:

This form of content moderation assumes that platforms function as intermediaries rather than as content curators. While social media platforms undoubtedly function as intermediaries, and the safe harbour protections they enjoy are of tremendous value, this framing discounts the fact that these platforms also play a curatorial role through algorithmic recommendation systems optimised to serve their business interests. These systems determine what content is shown to each user, usually on the basis of numerous personal data points. Recommendation algorithms therefore play a significant role in shaping what users see and, by extension, can facilitate the spread of harmful content across users. Regulating them is therefore pivotal to addressing the spread of harmful content online.

Given these limitations of current content moderation strategies, the brief suggests that equal, if not greater, policy attention should be paid to the content recommendation algorithms responsible for the algorithmic amplification of harmful content. Such measures could include:

None of these solutions is perfect in itself, and further research and experimentation are required from both researchers and the government. What is clear, however, is that the current content moderation regime must be expanded to cover issues relating to content recommendation and algorithmic amplification.

Download it here ⬇

FINAL_Recommender Algos Brief_DFL.pdf