Algorithmic Auditing for Social Media Companies: Preventing the Algorithmic Amplification of Extremist Content
- 2025 Global Voices Fellow

- Mar 24
Amelie Szczecinski, Curtin University, UNGA 6th Committee Fellow
Executive Summary
As internet and social media use has grown, online recommender algorithms have become central to how individuals consume information. While recommender algorithms can be beneficial, tailoring content to user preferences and easing access to relevant information, they can also cause harm: they can promote extremist and radical content, sometimes leading individuals into ideological echo chambers.
This proposal recommends that the Australian Government amend the Online Safety Act 2021 (Cth) (‘the Act’) to introduce requirements for social media companies to undergo algorithmic auditing, increasing transparency surrounding the promotion of extremist content. This recommendation seeks to ensure Australia adopts internationally recognised best practice, in line with jurisdictions such as the EU, and aligns with the Christchurch Call’s recommendations for stronger regulation and oversight of algorithms (Christchurch Call, 2022).
This policy proposes amending Part 9, Division 7 of the Act to require social media companies to undertake algorithmic auditing by an independent third party. This ‘risk-based approach’ targets the algorithmic pathways that promote extremist content, rather than imposing a wide-sweeping general audit. The amendment will also create mitigation requirements where algorithms are found to promote high-risk (specifically extremist) content, with the eSafety Commissioner having the power to require corrective actions and impose penalties for non-compliance.
This policy option is appropriate because it acts as a preventative measure, regulating algorithms before they recommend extremist content while also ensuring platforms bear responsibility for mitigating such risks. However, this policy may face barriers in regulating social media companies that operate transnationally, alongside gaps in the Australian Government’s knowledge of how algorithms promote extremism.
Problem Identification
Online recommender algorithms, commonly used by platforms such as YouTube, TikTok and X to drive engagement, play a large role in the promotion and spread of radical and extremist content (Whittaker, 2021). While recommendation algorithms traditionally work to prioritise and curate content based on a user’s data, research has found this also applies to harmful content: ‘if a user spends time engaging with potentially harmful content, those same metrics may lead to individuals seeing more of the same material or increasingly harmful material in their feeds’ (eSafety Commissioner, 2022).
The most pressing consequence of recommender algorithms promoting extremist content is their potential to increase individuals’ exposure to extremist and radical content, exacerbate the process of radicalisation, and in some instances encourage actual acts of terrorism and violence (Whittaker, 2021). For example, the 2022 Buffalo shooter, who killed 10 people, was arguably radicalised through internet algorithms, with a lawsuit stating that ‘the shooter's near constant use of social media—and the algorithms that provided a continuing stream of videos to watch and forums to explore—exposed him to racist conspiracy theories and radicalized him to a dangerous degree’ (Sullivan, 2024).
Australia lacks policy that directly targets recommender algorithms; existing law instead focuses on removing violent material once it has already spread. For example, section 109 of the Act allows the eSafety Commissioner to issue removal notices to internet providers for violent or otherwise abhorrent material. Similarly, the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (Cth) criminalises the failure of online platforms to remove abhorrent violent content. Neither piece of legislation targets extremist content before it becomes violent, or the algorithms that contribute to its spread. This is therefore a reactive approach where a proactive one is needed.
Left unchecked, online recommender algorithms will continue to amplify extremist, radical and harmful content, potentially accelerating radicalisation throughout Australia and the rest of the world.
Context
Background
Recommender algorithms make personalised content suggestions to users, using computing instructions that determine what a user will be served (eSafety Commissioner, 2025). Typically, such algorithms help individuals discover information, tailor feeds to their interests, and help content creators reach new audiences. However, these algorithms can also have negative effects, amplifying harmful, violent, and extremist content.
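To make this amplification dynamic concrete, below is a minimal, hypothetical Python sketch of an engagement-driven ranker. It is not any platform’s actual system; the topic labels, weights, and function names are illustrative assumptions only.

```python
# Illustrative toy only: NOT any platform's real recommender.
# All topics, weights, and engagement figures are hypothetical.

def score(item, user_history):
    """Rank an item higher the more it resembles content the user has
    already engaged with (e.g. watch time), weighted by how engaging
    the item is on average across all users."""
    overlap = sum(hours for topic, hours in user_history.items()
                  if topic in item["topics"])
    return overlap * item["avg_engagement"]

def recommend(items, user_history, k=2):
    """Serve the k highest-scoring items."""
    return sorted(items, key=lambda i: score(i, user_history), reverse=True)[:k]

# Hours of watch time per topic: this user has dwelled on conspiracy content.
user_history = {"fitness": 2.0, "conspiracy": 8.5}
items = [
    {"id": "A", "topics": ["fitness"], "avg_engagement": 0.4},
    {"id": "B", "topics": ["conspiracy"], "avg_engagement": 0.9},
    {"id": "C", "topics": ["news"], "avg_engagement": 0.5},
]
# Because ranking rewards whatever the user already dwells on, the
# conspiracy item dominates the feed, and engaging with it raises its
# score further on the next pass: the feedback loop described above.
print([i["id"] for i in recommend(items, user_history)])  # ['B', 'A']
```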
In this context, radicalisation refers to the process by which individuals ‘develop a commitment to a particular extremist ideology’ (Royal Commission of Inquiry into the Terrorist Attack On Christchurch Mosques on 15 March 2019, 2020). Online extremism generally refers to ‘internet activism that advocates, supports, or recruits for ideologically extreme political positions in online spaces’ (Senate Inquiry into Right Wing Extremist Movements in Australia, 2024).
There is strong evidence to suggest that recommender algorithms amplify extremist content. A 2022 Australian study found that experimental YouTube accounts were ‘lured to the manosphere through recommended video features’ (Roberts, 2024). YouTube’s algorithm was found to ‘optimise more aggressively in response to user behaviour and show more extreme videos within a relatively brief time frame’ (Roberts, 2024). A similar study in England found that TikTok accounts seeded with an initial ‘interest’ in masculinity or loneliness were presented with ‘four times as many videos with misogynistic content including objectification, sexual harassment or discrediting women, which increased from 13% of recommended videos to 56%’ (Weale, 2024).
The most pressing consequence is the potential for these recommender algorithms to accelerate the process of radicalisation and, in some cases, lead to actual acts of violence and terrorism (Whittaker, 2021). Studies have shown that algorithmic amplification acts as a ‘conduit for polarisation and radicalisation’ (Whittaker, 2021, p. 4). As users encounter more extremist and radical content, ‘they can gradually become desensitised to acts of violence and violent extremist ideas, which are normalised and reinforced’ (Australian Institute of Criminology, 2023).
While it is difficult to draw a direct line from algorithms to real-world violence, there are examples of individuals radicalised online going on to commit violent terrorist acts, demonstrating the tangible consequences of online radicalisation as a whole (Australian Institute of Criminology, 2023). For example, the 2019 Christchurch shooter, Brenton Tarrant, admitted in his manifesto that the internet played a large role in shaping his beliefs and subsequent attack (Quek, 2019). Similarly, the 2014 Isla Vista killer, Elliot Rodger, ‘constructed a logic of action in the online world, leading to what he saw as justified revenge and punishment’ (Blommaert, 2018).
Current Policy Landscape
Under section 109 of the Act (the Abhorrent Violent Conduct Powers), the eSafety Commissioner has the power to issue removal notices to the provider of a social media service in relation to ‘Class 1’ material, which broadly includes content depicting sex, drug misuse, and violence. While these powers are important in the regulation of online material, they do not allow for the removal of extremist content, or content likely to promote extremism. Meanwhile, Subdivision HA, section 474.45A of the Criminal Code Act 1995 (Cth) deals with ‘offences relating to use of carriage service for violent extremist material’. However, like the eSafety Commissioner’s powers, this provision only criminalises the sharing of ‘violent’ extremist material.
There is therefore a large statutory gap in addressing online extremist material that does not meet these thresholds. Further, Australia has no enforceable mechanisms governing the use and regulation of recommender algorithms.
Globally, Australia lags behind other jurisdictions both in legislating for algorithms and in legislating against extremist and radical content, despite the recommendations of the Christchurch Call. The Christchurch Call is a global initiative launched in 2019 by the governments of New Zealand and France after a terrorist attack on two Christchurch mosques killed 51 people and injured a further 50. Its purpose is to bring countries together with the goal of ‘eliminating terrorist and violent extremist content online’. The Christchurch Call treats algorithmic reform, in particular algorithmic auditing, as a clear focus.
Algorithmic auditing refers to the ‘range of approaches used to review algorithmic processing systems’ (Goodman, 2023, p. 291) and is crucial for promoting transparency and accountability. Auditing reveals how extremist content is amplified and what can be done to slow or prevent that process, identifying vital risks. Further, algorithmic auditing ensures that ‘algorithms and systems do not give extremists the advantage from the start by feeding existing biases’ (Whittaker, 2021, p. 4).
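As a sketch of how such an audit could work in practice, the following Python outline mimics the ‘experimental account’ method used in the studies cited above: a sock puppet starts from a seed interest, engages with everything it is served, and an independent classifier records how the share of flagged recommendations changes over time. The `recommend` and `is_extremist` interfaces are assumptions for illustration, not a real platform API.

```python
# Hypothetical sketch of a "sock-puppet" audit of a recommender system.
# The recommend() and is_extremist() interfaces are assumed placeholders,
# not a real platform API or a real classifier.

from typing import Callable, List

def audit_drift(recommend: Callable[[List[str]], List[str]],
                is_extremist: Callable[[str], bool],
                seed_interest: str,
                rounds: int = 10) -> List[float]:
    """Simulate an account that starts from one seed interest and engages
    with everything it is served; return, per round, the share of
    recommendations an independent classifier flags as extremist."""
    history = [seed_interest]
    flagged_share = []
    for _ in range(rounds):
        recs = recommend(history)
        flagged = [r for r in recs if is_extremist(r)]
        flagged_share.append(len(flagged) / max(len(recs), 1))
        history.extend(recs)  # the puppet "watches" all recommendations
    return flagged_share

# A rising curve across rounds (e.g. from 0.13 toward 0.56, echoing the
# TikTok study quoted earlier) would evidence the drift toward extreme
# material that an auditor is looking for.
```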
European Union
The European Union implemented the Digital Services Act (‘DSA’) in 2022, which contains a number of obligations to increase algorithmic transparency and accountability, with specific obligations for online platforms. Article 14(1) stipulates that service providers must disclose their measures and tools for content moderation, ‘including algorithmic decision making’, while Article 27(1) requires providers to outline ‘the main parameters used in their recommender systems’. Providers are also subject to due diligence obligations, with Article 34(1) requiring service providers to annually ‘identify, analyse and assess any systemic risks stemming from their algorithmic systems’.
The DSA has shown positive results broadly. In relation to algorithms, the European Commission has stated that the DSA is showing concrete results, particularly through its introduction of an ‘opt-out of the recommender systems based on profiling’, which means that recommender systems must now also be available without profiling users (European Commission, 2025). Additionally, since the DSA’s introduction, the Commission has opened ten formal proceedings in relation to algorithmic recommender systems (European Commission, 2025).
However, the DSA has been criticised for gaps in its enforcement mechanisms, with difficulties in achieving effective enforcement across EU member states (Mattioli, 2025). Overall, then, Australia trails its international counterparts in policy targeting both recommender algorithms and the spread of extremist and radical content that is not violent in nature.
Policy Options
To address the risks posed by recommender algorithms in promoting extremist content, success would be measured by an evident reduction in the algorithmic promotion of extremist content, coupled with increased transparency surrounding algorithmic practices and enforceable platform accountability.
Option 1: Expand the Abhorrent Violent Conduct Powers of the eSafety Commissioner to allow the removal of ‘extremist content’ or content which is likely to promote extremism.
While the Abhorrent Violent Conduct Powers are important in the regulation of online material, they do not allow for the removal of extremist content, or content which is likely to promote extremism. Subsequently, such material remains available to be amplified by algorithms.
Amending the definition of Class 1 material to permit such removal would allow for a proactive approach. It would address content which does not yet meet the threshold of violence, but still displays radical and extremist ideologies. This would work to disrupt algorithmic amplification processes, reducing the visibility and circulation of such content.
In practice, this would allow the eSafety Commissioner to intervene before ideology becomes violent or criminal, interrupting the early stages of online radicalisation, and the role of algorithms in promoting such content.
While this is positive in that it proactively addresses extremism and radicalisation before they escalate, regulating ‘extremist content’ is incredibly difficult: there is no single agreed definition of extremism, and the term can be highly subjective. Further, regulation here risks encroaching on freedom of speech, potentially restricting legitimate expression or debate.
Option 2: Require social media companies operating within Australia to be subject to independent algorithmic auditing.
This option would require the amendment of the Act to include algorithmic auditing for social media companies. In practice, social media companies operating in Australia, such as Meta, TikTok, and YouTube, would be subject to auditing measures with transparency reporting obligations. Such audits would be carried out by independent Australian audit companies and overseen by the eSafety Commissioner.
If implemented effectively, algorithmic audits would lead to greater transparency and accountability, and a better understanding of how recommender algorithms can promote extremist and radical content. Complementing this, impact assessments would allow for greater proactivity in shielding users from related harms and minimising negative outcomes (Whittaker, 2021). Further, this policy option would include an enforceability mechanism: social media companies whose algorithms are found to actively promote extremist content would be subject to corrective action and penalties for non-compliance.
While this option would encourage transparency of algorithmic processes, a potential weakness is the difficulty of regulating internet and social media platforms that operate transnationally.
Option 3: Require social media companies operating within Australia to be subject to independent algorithmic impact assessments.
This option would amend the Online Safety Act 2021 (Cth) to require social media companies within Australia to conduct regular impact assessments, which evaluate the broader social risks of their recommender algorithms, particularly in relation to the promotion of extreme content.
From these impact assessments, social media companies would then be required to implement proportionate mitigation measures to reduce the identified impacts. Impact assessments investigate the ‘types, severity, and prevalence of effects of an algorithm’s outputs’ (Whittaker, 2021, p. 17). This differs from algorithmic auditing, which is a more ‘targeted approach focusing on assessing a system for potential biases’ (Ada Lovelace Institute, 2022). This approach would align Australia with jurisdictions such as the EU, where the Digital Services Act requires annual risk assessments.
This would create strict rules on how algorithms in Australia are to be designed, implemented, and used, with strong protections for vulnerable groups such as children (UK Government, 2025). Fines for specific non-compliance would mirror those currently utilised by the eSafety Commissioner under the existing Act. This policy option would therefore help to promote ‘ethical algorithms to help stop online radicalisation’ (Collins, 2021).
While this is important in that it would proactively address the risks of algorithms in promoting extremism and other harmful content, social media companies may object to these requirements, making them difficult to enforce.
Policy Recommendation
Option 2, requiring social media companies within Australia to be subject to independent algorithmic auditing, is recommended as the most suitable option: it acts as a preventative measure, identifying and mitigating harmful algorithmic behaviours before they escalate to the promotion of extremist content.
This policy would involve the amendment of the Act to include independent algorithmic auditing as an element of regulatory oversight, requiring social media platforms to provide transparency in the design, use, and operation of their algorithms. This amendment would be implemented under Part 9, Division 7 of the Act which details industry codes and standards.
This amendment would outline how algorithm audits are to be conducted, following a focused ‘risk-based audit approach’ (Meßmer and Degeling, 2023). For regulating extremist content, the risk-based approach would prioritise auditing the components of social media algorithms with the highest risk of promoting such content, specifically scenarios in which individuals may first be exposed to harmful or extremist material.
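To illustrate, a risk-scenario audit could be summarised roughly as follows. This is a hedged sketch loosely modelled on Meßmer and Degeling’s risk-scenario approach; the scenario names, rates, and thresholds are invented for illustration and are drawn neither from the Act nor from the paper.

```python
# Hedged sketch of a risk-scenario-based audit summary, loosely inspired
# by Meßmer and Degeling (2023). Scenarios, rates, and thresholds below
# are illustrative assumptions, not figures from the Act or the paper.

from dataclasses import dataclass
from typing import List

@dataclass
class RiskScenario:
    name: str             # e.g. "new teen account seeded with grievance content"
    observed_rate: float  # share of recommendations auditors flagged
    threshold: float      # maximum tolerable rate agreed with the regulator

def audit_report(scenarios: List[RiskScenario]) -> dict:
    """Summarise which high-risk scenarios breach their thresholds,
    i.e. the cases where the eSafety Commissioner could require
    corrective action under the proposed amendment."""
    breaches = [s.name for s in scenarios if s.observed_rate > s.threshold]
    return {
        "scenarios_tested": len(scenarios),
        "breaches": breaches,
        "compliant": not breaches,
    }

report = audit_report([
    RiskScenario("teen account seeded with loneliness content", 0.56, 0.15),
    RiskScenario("adult account with no prior engagement", 0.04, 0.15),
])
print(report)  # breaches -> ['teen account seeded with loneliness content']
```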
In practice, this amendment would mean that social media platforms using recommender algorithms would be required to undergo annual audits, conducted by independent third-party auditors, and provide the results to the eSafety Commissioner. Under the Act, failure to submit to such audits could incur non-compliance penalties ranging from penalty units to injunctions and infringement notices (Online Safety Act 2021 (Cth)).
Part two of this policy recommendation focuses on enforceability and mitigation where audits find that algorithms promote extremist material or pose such risks. Here, social media platforms would be legally obligated to implement corrective actions to mitigate identified harms in their recommender algorithms, such as modifying algorithmic pathways, adjusting recommender settings, and improving content moderation. The eSafety Commissioner’s role would consist of overseeing the audit process, monitoring compliance, and imposing penalties.
The effect of this amendment would be increased accountability and transparency of social media platforms, particularly in relation to extremism. Further, such audits could lead to safer design practices, and an eventual reduction in exposure to extremist content.
The main costs would include amending the legislation, the regulatory oversight of algorithmic auditing, and the research costs of establishing an algorithmic auditing process. These costs would be borne by the Australian eSafety Commissioner. According to the eSafety Commissioner Annual Report 2024-25, the Commissioner spent a total of $13,522,544 on contractors (eSafety Commissioner, 2025). Algorithmic auditing is estimated to extend beyond this baseline by roughly 25%, or approximately $3.4 million, bringing contractor expenditure to around $16.9 million.
Risks
Barriers
A barrier to the implementation of this policy is the Australian Government’s knowledge gap regarding the role and function of recommender algorithms, particularly in how they promote extremism. According to the Joint Select Committee on Social Media and Australian Society (2024), there is very little ‘systematic knowledge about how platform’s algorithms, recommender systems and business tactics influence what Australians see (and hear)’.
However, there are mitigation strategies available to counter this barrier. The Australian Government could engage in consultation and collaboration with relevant organisations and initiatives regarding the design of algorithmic auditing regulations.
Another barrier to the implementation of this policy is the transnational nature of social media, and the difficulty of regulating platforms which span across multiple jurisdictions. While the Act protects users accessing social media platforms from Australia, regardless of where the company is based, enforcement does become difficult (Holland & Tang, 2023). This is because ‘most platforms have a limited, or no, local legal presence in Australia, which makes serving or enforcement of legal options very difficult’ (Joint Select Committee on Social Media and Australian Society, 2024).
A further barrier is that social media companies may resist this policy or may not fully comply with it. Algorithms are designed to maximise engagement, and therefore profit, meaning companies may prioritise their business interests over adherence to regulatory measures.
Risks
One risk concerns free speech: regulating the algorithmic promotion of extremist content may be perceived as censorship, potentially limiting legitimate expression.
False assurance is a related risk. According to Goodman (2022), a social media company that ‘has audited itself or submitted to inadequate auditing can provide false assurance that it is complying with norms and laws’. Such an audit would be meaningless. This risk can be mitigated by ensuring that audits are conducted independently and against firm standards set by the eSafety Commissioner.
References
Ada Lovelace Institute. (2022). NMIP algorithmic impact assessment user guide (February 2022). Ada Lovelace Institute. https://www.adalovelaceinstitute.org/wp-content/uploads/2022/02/Algorithmic-impact-assessment-user-guide.pdf
Australian Communications and Media Authority & eSafety Commissioner. (2025). Annual report 2024-25. Australian Government. https://www.acma.gov.au/sites/default/files/2025-10/ACMA%20and%20eSafety%20annual%20report%202024-25.pdf
Blommaert, J. (2018). Offline-online modes of identity and community: Elliot Rodger’s twisted world of masculine victimhood. In M. Hoondert, P. Mutsaers, & W. Arfman (Eds.), Cultural Practices of Victimhood. Routledge. https://www.taylorfrancis.com/chapters/edit/10.4324/9781315148335-10/online-offline-modes-identity-community-jan-blommaert
Cambridge Dictionary. (n.d.). Manosphere. In Cambridge Dictionary. Retrieved November 1 2019, from https://dictionary.cambridge.org/dictionary/english/manosphere
Collins, N. (2021). How can ethical algorithms combat online extremism? The University of Auckland New Zealand. https://www.thebigq.org/2021/06/20/how-can-ethical-algorithms-combat-online-extremism/
Criminal Code Act 1995 (Cth)
Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (Cth). https://www.aph.gov.au/Parliamentary_Business/Bills_Legislation/Bills_Search_Results/Result?bId=s1201
Deutsche Welle News. (2024, November 28). TikTok, Meta slam Australia’s social media ban for under-16s. Deutsche Welle News. https://www.dw.com/en/tiktok-meta-slam-australias-social-media-ban-for-under-16s/a-70909479
European Commission. (2025). The enforcement framework under the Digital Services Act. https://digital-strategy.ec.europa.eu/en/policies/dsa-enforcement
eSafety Commissioner. (2025). Recommender systems and algorithms - position statement. https://www.esafety.gov.au/industry/tech-trends-and-challenges/recommender-systems-and-algorithms
European Parliament. (2025, February 19). Parliamentary question E-002826/2024(ASW): Answer given by Executive Vice-President Virkkunen on behalf of the European Commission. https://www.europarl.europa.eu/doceo/document/E-10-2024-002826-ASW_EN.html
Goodman, E. (2023). Algorithmic Auditing: Chasing AI Accountability. Santa Clara High Technology Law Journal, 39(3), 290-338. https://digitalcommons.law.scu.edu/cgi/viewcontent.cgi?params=/context/chtlj/article/1689/&path_info=11_Goodman___Algorithmic_Auditing_Chasing_AI_Accountability_PUBLISHED.pdf
Goodman, E., & Trehu, J. (2022, November 15). AI Audit-Washing and Accountability. The German Marshall Fund of the United States. https://www.gmfus.org/news/ai-audit-washing-and-accountability
Holland, R., & Tang, K. (2023). Social media and online safety: Australian Regulation spotlight. Herbert Smith Freehills Kramer. https://www.hsfkramer.com/insights/2023-06/social-media-and-online-safety-australian-regulation-spotlight
Khalil, L. (February 2021). Inquiry into extremist movements and radicalism in Australia: Submission to the Parliamentary Joint Committee on Intelligence and Security: Inquiry into extremist movements and radicalism in Australia. Lowy Institute. https://www.lowyinstitute.org/sites/default/files/KHALIL%20PJCIS%20Parliamentary%20Submission%20FINAL%20PDF.pdf
Meßmer, A., & Degeling, M. (2023). Auditing Recommender Systems–Putting the DSA into practice with a risk-scenario-based approach. arXiv. https://arxiv.org/abs/2302.04556
Mhor Collective. (2025). Echo Chambers and Empty Spaces: Practitioners exploring digital inequality and misogyny. Mhor Collective. https://www.mhorcollective.com/wp-content/uploads/2025/03/Incel-and-Online-Misogyny-Report.pdf
Milmo, D. (2025, August 2). UK Online Safety Act risks ‘seriously infringing’ free speech, says X. The Guardian. https://www.theguardian.com/technology/2025/aug/01/uk-online-safety-act-free-speech-x-elon-musk
Mølmen, G., Ravndal, J. (2021). Mechanisms of online radicalisation: how the internet affects the radicalisation of extreme-right lone actor terrorists. Behavioral Sciences of Terrorism and Political Aggression, 15(4), 463-487. https://www.tandfonline.com/doi/full/10.1080/19434472.2021.1993302?src=recsys
Online Safety Act 2021 (Cth)
Office of the New York State Attorney General Letitia James. (2022). Investigative Report on the role of online platforms in the tragic mass shooting in Buffalo on May 14 2022. New York State Attorney General. https://ag.ny.gov/sites/default/files/buffaloshooting-onlineplatformsreport.pdf
Parliament of Australia Senate Standing Committee on Legal and Constitutional Affairs. (2024). Right Wing Extremist Movements in Australia: Chapter 5 Extremism and the online environment. Australian Government. https://www.aph.gov.au/Parliamentary_Business/Committees/Senate/Legal_and_Constitutional_Affairs/RWExtremists23/Report/Chapter_5_-_Extremism_and_the_online_environment
Parliament of Australia Joint Committee on Social Media and Australian Society. (2024). Social media: the good, the bad, and the ugly - Final Report: Chapter 5 - Regulation of social media platforms. Australian Government. https://www.aph.gov.au/Parliamentary_Business/Committees/Joint/Social_Media_and_Australian_Society/SocialMedia/Final_report/Chapter_5_-_Regulation_of_social_media_platforms
Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC.
Riga, J. (2024, November 29). Meta, Tiktok and Snapchat respond to new Australian laws banning social media for kids and teenagers under 16. ABC News. https://www.abc.net.au/news/2024-11-29/meta-snapchat-tiktok-respond-to-australian-social-media-ban/104664478
Rolfe, T. (2024). EU Digital Services Act. Institute for Strategic Dialogue. https://www.isdglobal.org/isd-explainer/eu-digital-services-act/
Sullivan, B. (2024, March 19). Reddit and Youtube must face lawsuit over the radicalisation of the Buffalo shooter. National Public Radio.
The Christchurch Call. (2022). Christchurch Call Initiative on Algorithmic Outcomes. https://www.christchurchcall.org/christchurch-call-initiative-on-algorithmic-outcomes/
The Christchurch Call. (2024). Auditing Proprietary Algorithms while Preserving Privacy is Possible: Here’s How. https://www.christchurchcall.org/auditing-proprietary-algorithms-while-preserving-privacy-is-possible-heres-how/
Thomas, E., Balint, K. (2022). Algorithms as a Weapon Against Women: How YouTube Lures Boys and Young Men into the ‘Manosphere’. Institute for Strategic Dialogue. https://www.isdglobal.org/isd-publications/algorithms-as-a-weapon-against-women-how-youtube-lures-boys-and-young-men-into-the-manosphere/
Weale, S. (2024, February 6). Social media algorithms ‘amplifying misogynistic content’. The Guardian. https://www.theguardian.com/media/2024/feb/06/social-media-algorithms-amplifying-misogynistic-content
Whittaker, J., Looney, S., Reed, A., Votta, F. (2021). Recommender systems and the amplification of extremist content. Internet Policy Review, 10(2), 1-29. https://doi.org/10.14763/2021.2.1565
Wolbers, H., Dowling., C., Cubitt, T., & Kuhn, C. (2023). Understanding and preventing internet-facilitated radicalisation. Trends & issues in crime and criminal justice (no. 673). Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/tandi/tandi673
The views and opinions expressed by Global Voices Fellows do not necessarily reflect those of the organisation or its staff.


