Promoting Accountability for AI Misinformation: Intermediary Digital Liability
- 2025 Global Voices Fellow
- Jul 8
Oliver Price, Curtin University, AI for Good 2025 Fellow
Executive Summary
The increasing accessibility and capability of generative Artificial Intelligence (AI) models have empowered actors to manipulate and synthesise media, causing a rise in harmful falsified information on digital platforms. Technology companies are not currently incentivised to remove falsified media generated by AI, with polarising misinformation driving user engagement while undermining trust in the technology. Rampant AI-driven false information has broader effects on Australian democracy, with 43% of surveyed Australians identifying election interference as a growing risk of this technology. Increasing trust in AI would ensure consumers and businesses engage with the technology, adding an estimated $170 to $600 billion a year to the Australian economy.
Companies can be incentivised to remove AI-generated misinformation by establishing liability for failing to remove this content. An intermediary digital liability would require digital media companies to remove content reasonably believed to be harmful AI-generated misinformation, with the eSafety Commissioner responsible for reviewing and enforcing this requirement. This paper proposes amending the Online Safety Act 2021 (Cth) to empower the eSafety Commissioner to apply the proposed Digital Duty of Care to AI-generated misinformation. The estimated cost of this measure is approximately $7.9 million over a period of four years.
This policy option is most appropriate as it creates transparent and enforceable requirements for the moderation of AI misinformation, increasing user trust in the technology and the broader media landscape. However, the application of the liability would face resistance from digital media companies, particularly those based in the United States, and would need to be appropriately balanced so as to uphold the freedom of expression and industry innovation.
Problem Identification
AI-generated media is becoming increasingly difficult to identify, and the barrier to entry for generating convincing AI-synthesised content has lowered, enabling an explosion of falsified and misleading deepfake content on social media platforms (World Economic Forum, 2024). The polarisation of information on social media channels increases engagement for these companies whilst undermining social cohesion, transparency and user mental health - particularly as there are limited means by which users can effectively report, remove, and seek compensation for misleading AI content (Qureshi and Bhatt, 2024). AI-driven misinformation surrounding Australian democratic processes is of particular concern, with 43% of Australians identifying election interference as a key risk of AI-powered technology (Department of Infrastructure, 2024).
In Australia, social media companies are currently not required or incentivised to proactively limit or remove false and misleading content produced by AI. As identified by the Department of Industry, Science and Resources (DISR) (2023), a lack of transparency mechanisms has undermined users’ trust in AI technology, resulting in low adoption of AI technology by the public and by private companies alike in the Australian economy. The potential value of adopting AI technologies is estimated to be between $170 and $600 billion a year (DISR, 2024); however, a failure to overcome low trust in the technology would curtail these benefits.
Context
Background
While misinformation has become an increasingly pressing issue with the development of AI technology, Australia has failed to sufficiently address this threat. In its submission on Human Rights Centred AI, the Australian Human Rights Commission (2023) outlined the current need to effectively combat misinformation on social media while ensuring laws do not overreach and restrict freedom of expression online. The requirement for the Australian Government to take a proactive approach to regulating AI was emphasised in the interim response to the Safe and Responsible AI in Australia Consultation (2024), with a particular focus on introducing an obligation for transparency. This report outlined the necessity for mandatory standards where voluntary guardrails are insufficient to promote transparency, while balancing innovation, international interoperability and fundamental rights.
Current Policy Landscape
The threat of AI-generated misinformation has already been recognised in Australia, with the Criminal Code Amendment (Deepfake Sexual Material) Act 2024 (Cth) introducing criminal offences targeting sexually explicit material created or altered using AI technology. Following the High Court decision in Fairfax Media Publications Pty Ltd v Voller [2021] HCA 27, media companies may be held liable as publishers of content posted on their platforms, creating an avenue for these services to be held accountable for hosting harmful content such as misinformation. No further legislation has been introduced to directly target AI-generated content; however, in 2024 the Commonwealth Government introduced the Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill 2024 (Cth) (Misinformation Bill 2024), which included provisions to combat seriously harmful misinformation on digital platforms. While the Misinformation Bill 2024 was discontinued due to criticism of its threat to freedom of expression, it reflects a growing urgency to address the online misinformation threat. Additionally, the AI Ethics Principles 2019 guide the responsible development and implementation of the technology in Australia, highlighting values such as transparency and accountability.
The Online Safety Act 2021 further reflects the government’s appetite to protect users of online platforms, establishing broad powers for an eSafety Commissioner to regulate media outlined as harmful within the Act. Powers of the eSafety Commissioner include determining industry standards as outlined in s 145, with which media platforms are required to comply. The Online Safety Act, in its current form, does not place the onus for keeping Australian consumers safe online on tech companies. The Digital Duty of Care, currently in development by the Labor government in response to a statutory review of the Online Safety Act 2021, would create this obligation for social media companies to take reasonable steps to protect Australian users from foreseeable harm. The scope of harm, however, is restricted to the contents of the Online Safety Act 2021, which does not identify AI misinformation.
Ultimately, momentum currently exists within Australia to address threats present in online spaces, including misinformation.
Case Studies
The regulation of online media platforms to combat AI-facilitated online misinformation is not novel. Germany has implemented the Network Enforcement Act 2017 (NetzDG), which requires obviously illegal content uploaded on online platforms to be deleted. From January to June 2018, 480,531 content reports were received from German individuals and businesses, with 18.2% of these reports leading to actions by social media platforms (Kasakowskij et al., 2020). Failure to comply with the law can result in fines of up to $88,855,250 (€50 million) (Neudert, 2024), with Facebook being fined $3,554,210 (€2 million) by German authorities in 2019 for under-reporting complaints about illegal content on its platform (Escritt, 2019). Singapore has also recognised the role of internet intermediaries in combating misinformation through the Protection from Online Falsehoods and Manipulation Act 2019, with the government empowered to issue notices that require social media platforms to label content as false or misleading. While creating liabilities for online platforms to monitor and remove misleading information, both legislative approaches have been criticised for their impact on freedom of expression through the overblocking and self-censorship of users (Maaß, Wortelker and Rott, 2024).
China has taken a broader approach, requiring internet information services to conspicuously label content suspected to be AI-generated under the Provisions on the Administration of Deep Synthesis Internet Information Services 2022. A similar bill has been proposed in the US, with the DEEP FAKES Accountability Act 2019 compelling digital platforms to apply a visual or audio watermark to false personation records generated by deepfake technology.
The global response to the threat of AI-facilitated misinformation reflects the need to balance the regulation of online platforms against the fundamental right to freedom of expression. A successful policy approach would reduce the prevalence of AI-generated false information in Australia by creating mandatory and enforceable guardrails that uphold public trust.
Options
To address the spread of AI-facilitated misinformation, the Australian government should ensure that companies are disincentivised from proliferating this content and that, when published, media is easily recognisable as AI-generated. Policymakers have several options to achieve these objectives:
Require social media companies to identify AI-generated media with clear watermarks.
This policy option would ensure that media generated by AI systems is identifiable. A framework for watermark guidelines would need to be established, and a government agency, such as the Australian Communications and Media Authority (ACMA), would be required to monitor the enforcement of this obligation. Drawing from the Provisions on the Administration of Deep Synthesis Internet Information Services 2022 (CHN) and the proposed DEEP FAKES Accountability Act 2019 (USA), internet services could face financial penalties for failing to adhere to disclosure guidelines. While watermarking would allow for increased transparency, the application and enforcement of this system would be difficult due to resistance from technology companies.
Establish an industry code of conduct for online information ethics.
The specific addition of AI information ethics in online spaces to an industry code, such as Australia’s AI Ethics Principles (DISR, 2019), would set expectations for social media companies to abide by in identifying and limiting the publication of AI-generated misinformation. The outlined ethics for the online space would require social media companies not to actively proliferate false information in the form of deepfakes. However, encouraging the industry to adhere to and actively enforce these practices may prove difficult without an established enforcement mechanism (Maelen, 2020). Voluntary guardrails have been identified as insufficient by the DISR, so the developed industry code of conduct would require the ACMA to have the ability to enforce discrete compliance requirements. Compulsory industry codes currently operating in Australia may be considered as precedents, such as the Commercial Television Industry Code of Practice 2015 (Cth), which requires networks to present accurate and fair material to maintain their broadcasting licences.
Create an intermediary digital liability for the non-removal of AI misinformation.
This policy option would amend the Online Safety Act 2021 to hold intermediary internet services liable for the non-removal of false information generated by AI technology under the proposed Digital Duty of Care. Through this amendment, the current regulatory powers of the eSafety Commissioner would be expanded to oversee adherence to this liability, fielding complaints and enforcing penalties where companies are found to be non-compliant. This option promotes transparency and accountability; however, the scope of this liability would need to be clearly defined to ensure it does not undermine freedom of expression or industry innovation.
Policy recommendation
Option 3, to “create an intermediary digital liability for the non-removal of AI misinformation” is recommended as the most effective strategy to address the proliferation and identification of false information generated by AI and hosted by digital media companies.
This policy option involves the amendment of the Online Safety Act 2021 so as to identify the regulation of AI-generated misinformation as an element of Australian online safety, allowing for its regulation and oversight by the eSafety Commissioner. This reform would expand Part 3 of the Online Safety Act 2021 to empower the eSafety Commissioner to receive complaints about AI-generated misinformation, launch investigations and use discretionary enforcement measures.
The effect of this amendment would enable the proposed Digital Duty of Care to apply to social media companies in the regulation of online misinformation, creating an intermediary liability for this conduct.
The rationale for choosing this policy is that it establishes a clear obligation that ensures media organisations are held accountable for harmful AI misinformation. This reflects the DISR’s (2024) identification of mandatory guardrails as necessary in high-risk situations, establishing liability for AI safety risks such as misinformation. This contrasts with implementing a code of conduct, which neither creates an actionable right for individuals nor generates proactive compliance (Scassa, 2023). The intermediary liability would apply solely to content identified as harmful, and allows for judgement on the part of the eSafety Commissioner in determining whether a media company has acted reasonably to remove the content. The nexus for determining whether content is harmful may be drawn from s 14 of the Misinformation Bill 2024, which provides that serious harm includes content that affects matters of public health, democratic function, discrimination, and economic stability. Therefore, this policy creates a degree of discretion and flexibility when compared to a blanket requirement to watermark AI-generated media, better maintaining the function of AI technology in low-risk settings.
Intermediary Liability Mechanism
The implementation of an intermediary liability involves an active duty of care that requires intermediaries to monitor and take action against harmful content on their platforms (Machado and Aguiar, 2023). An intermediary liability would function similarly to the tort of defamation in Australia, operating as a strict liability: an intention to publish the material must be established, but there is no requirement to prove that the internet platform intended to spread misinformation.
Following s 2 of the Online Falsehoods Act (SGP), an internet intermediary service would be defined as “a service that allows end-users to access materials originating from third parties or through the internet.” The definition of misinformation would be adopted from s 13(1) of the withdrawn Misinformation Bill 2024 to mean content that is “reasonably verifiable as false, misleading or deceptive” and is “reasonably likely to cause or contribute to serious harm.”
This amendment would narrow the scope of the definition of misinformation in operation to solely apply to “AI-generated or manipulated image[s], audio or visual content that resembles existing persons, objects, places, entities, or events and would falsely appear to a person to be authentic or truthful” (Artificial Intelligence Act 2024 (EU)).
Enforcement
The Australian eSafety Commissioner would further have the responsibility to monitor and enforce the intermediary liability for AI misinformation. The eSafety Commissioner, as per Part 10 of the Online Safety Act 2021, has a variety of powers to ensure the enforcement of obligations placed upon internet intermediary services, including the levying of civil penalty provisions, the issuing of infringement notices and the application for injunctions. Much like the eSafety Commissioner’s current approach to the regulation of illegal and restricted online content, the authority would be able to receive complaints about potential breaches of the intermediary liability, and undertake investigation and enforcement measures where it finds a provider to be non-compliant (eSafety Commissioner, 2024). The maximum penalty the eSafety Commissioner may elect to levy for non-compliance would be set at “10% of annual turnover during the period of 12 months ending at the end of the month in which the provider contravened” (Misinformation Bill 2024, division 2).
As per s 220 of the Online Safety Act 2021, an application may be made to the Administrative Review Tribunal for the review of a decision by the eSafety Commissioner to issue a removal notice.
Costing and Success Metrics
In anticipation of the Misinformation Bill 2024, the Commonwealth Government provided the Australian Communications and Media Authority, under which the eSafety Commissioner functions, $7.9 million over four years from 2023-24 (Federal Budget 2023-24). It is likely this policy measure would involve a comparable cost, which may be appropriated considering the failure of the Misinformation Bill 2024 to pass. The cost of this measure may be partially offset by any revenue derived from fines levied by the eSafety Commissioner for non-compliance.
The success of this policy can be determined using existing standards as outlined in the Commissioner's Annual Report 2023-24. A performance measure of the authority is the proportion of surveyed Australians who have trust and confidence in the content and services available to them online. Therefore, an increased percentage of Australian adults having trust in online media would indicate policy success.
Risks and Barriers
Barriers
A barrier to the implementation of an intermediary digital liability for AI-generated misinformation is resistance from transnational social media corporations and, by extension, diplomatic pressure from the USA, where a majority of these companies are based. Of the 10 largest tech companies by market cap in 2025, eight are based in the USA (Forbes India, 2025), dominating the digital media landscape in Australia. At the Artificial Intelligence Action Summit 2025, the USA Vice-President stated the current administration’s opposition to “excessive regulations that stifles progress” and asserted that American-made AI “will not be co-opted into a tool of authoritarian censorship” (Kaput, 2025). This speech reflected the broader, bipartisan approach of USA administrations towards digital regulation, with laws regulating USA tech companies being viewed as politically motivated digital protectionism aimed at harming American businesses and undermining free speech (Lancieri, 2018).
The Memorandum on Defending American Companies and Innovators From Overseas Extortion and Unfair Fines and Penalties 2025 (USA) specifically concerns the USA technology sector, aiming to protect the industry from the types of fines and penalties that would be necessary to enforce an intermediary digital liability. This barrier has already affected the regulation of technology companies in Australia, with US-based media company ‘X’ refusing to comply with an order issued by the Australian eSafety Commissioner to remove media from its platform, framing the order as an attack on free speech and pressuring the Federal Court case seeking to enforce the order to be abandoned (eSafety Commissioner, 2024). Therefore, the operation of an intermediary digital liability would have to overcome the necessary application of punitive measures against tech companies within international jurisdictions, potentially eliciting diplomatic backlash from the US administration.
Risks
The implementation of an intermediary digital liability raises two prominent risks: the potential perception of an attack on the freedom of expression, and the possibility of undermining the competitiveness of Australia’s digital industry.
Social Risk
Application of a digital liability to AI-generated media risks undermining the ability of this technology to facilitate free speech and social and political comment (Celli, 2020). As seen with the implementation of the German NetzDG, industry groups have criticised the regulation of online media as incentivising platforms “to over-remove content rather than face fines for acting too slowly” (Gorwa, 2021, 7). The perceived danger to the freedom of expression derives from two mechanisms: the overblocking of AI media by private companies and the ‘chilling effects’ of self-censorship by users when posting AI media (Maaß, Wortelker and Rott, 2024). The ramifications of media regulation for freedom of expression contributed to the discontinuation of the Misinformation Bill 2024, and a digital liability would likely raise similar concerns.
The World Economic Forum (2024) has recognised this risk, but has conversely asserted that governments and platforms, in aiming to protect free speech and civil liberties, fail to effectively address falsified information and harmful content. It will therefore be necessary for the eSafety Commissioner, in enforcing the liability, to ensure that the removal of AI misinformation is balanced with the need to uphold freedom of expression on digital platforms.
Economic Risk
A priority of the Australian government’s approach to the regulation of AI technology is to avoid “unnecessary or disproportionate burdens for businesses” so as to balance the need for “innovation and competition” (DISR 2024, 19). The implementation of a digital liability would create a burden on digital platforms operating within Australia, requiring financial investment to monitor and remove content that contravenes guidelines established by the eSafety Commissioner. The financial impact of fines, and the increased barrier to entry for emerging digital platforms, would likely further impede investment, innovation and competitiveness within the domestic digital technology sector.
While a digital liability would create an economic burden for digital platforms, this mechanism has the potential to increase the transparency of AI media and therefore encourage trust in the technology among the Australian public. Low levels of public trust have been identified by the DISR (2024) as contributing to the low adoption rate of AI technologies in Australia. Addressing the lack of transparency within these systems may encourage Australian consumers to engage with these technologies and explore the economic opportunities they present.
References
Artificial Intelligence Act 2024 (EU).
Australian Communications and Media Authority eSafety Commissioner. (2024). Annual report 2023-24. Australian Government. https://www.esafety.gov.au/sites/default/files/2024-10/ACMA-eSafety-annual-report-2023-24.pdf?v=1749035918370
Australian Human Rights Commission. (2023). The need for human rights-centred artificial intelligence: Submission to the Department of Industry, Science and Resources. https://humanrights.gov.au/our-work/legal/submission/need-human-rights-centred-ai
Celli, F. (2020). Deepfakes are coming: Does Australia come prepared? Canberra Law Review, 17(2), 193-204. https://heinonline.org/HOL/P?h=hein.journals/canbera17&i=291
Commercial Television Industry Code of Practice 2015 (Cth).
Commonwealth Parliament. (2023). Budget 2023-24: Agency resourcing budget paper no. 4. https://archive.budget.gov.au/2023-24/bp4/download/bp4_2023-24.pdf
Communications Legislation Amendment (Combating Misinformation and Disinformation) Bill 2024 (Cth).
Criminal Code Amendment (Deepfake Sexual Material) Act 2024 (Cth).
DEEP FAKES Accountability Act 2019 (USA).
Department of Industry, Science and Resources. (2019). Australia’s Artificial Intelligence Ethics Principles. Australian Government. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles
Department of Industry, Science and Resources. (2024). Safe and responsible AI in Australia consultation: Australian Government’s interim response. Australian Government. https://www.industry.gov.au/news/australian-governments-interim-response-safe-and-responsible-ai-consultation
Department of Industry, Science and Resources. (2023). Safe and responsible AI in Australia: Discussion paper. Australian Government. https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/Safe-and-responsible-AI-in-Australia-discussion-paper.pdf
Department of Infrastructure, Transport, Regional Development, Communications and the Arts. (2024). Online misinformation and disinformation reform: Impact analysis. Australian Government. https://www.aph.gov.au/-/media/Senate/committee/Environment_and_Communications/MDI/Combatting_Misinformation_and_Disinformation_Bill_-_Impact_Analysis.pdf
eSafety Commissioner. (2024). Statement from the eSafety Commissioner re: Federal Court proceedings. https://www.esafety.gov.au/newsroom/media-releases/statement-from-the-esafety-commissioner-re-federal-court-proceedings
Escritt, T. (2019, July 2). Germany fines Facebook for under-reporting complaints. Reuters. https://www.reuters.com/article/business/germany-fines-facebook-for-under-reporting-complaints-idUSKCN1TX1HZ/
Fairfax Media Publications Pty Ltd v Voller [2021] HCA 27.
Forbes India. (2025, April 30). Top 10 largest tech companies in the world by market cap in 2025. https://www.forbesindia.com/article/explainers/top-tech-companies-world-market-cap/95180/1
Gorwa, R. (2021). Elections, institutions, and the regulatory politics of platform governance: The case of the German NetzDG. Telecommunications Policy, 45(6), 102145. https://doi.org/10.1016/j.telpol.2021.102145
Kaput, M. (2025, February 18). JD Vance’s AI speech in Europe: “AI future will not be won by hand-wringing about safety”. Marketing Artificial Intelligence Institute. https://www.marketingaiinstitute.com/blog/jd-vance-ai-speech
Kasakowskij, T., Fürst, J., Fischer, J., & Fietkiewicz, K.J. (2020). Network enforcement as denunciation endorsement? A critical study on legal enforcement in social media. Telematics and Informatics, 46, 101317. https://doi.org/10.1016/j.tele.2019.101317
Lancieri, F. (2018). Digital protectionism? Antitrust, data protection and the EU/US transatlantic rift. Journal of Antitrust Enforcement, 7(1), 27-53. http://dx.doi.org/10.2139/ssrn.3075204
Maaß, S., Wortelker, J., & Rott, A. (2024). Evaluating the regulation of social media: An empirical study of the German NetzDG and Facebook. Telecommunications Policy, 48(5), 102719. https://doi.org/10.1016/j.telpol.2024.102719
Machado, C.C.V., Aguiar, T.H. (2023). Emerging regulations on content moderation and misinformation policies of online media platforms: Accommodating the duty of care into intermediary liability models. Business and Human Rights Journal, 8(2), 244-251. https://doi.org/10.1017/bhj.2023.25
Maelen, C.V. (2020). From opt-in to obligation? Examining the regulation of globally operating tech companies through alternative regulatory instruments from a material and territorial viewpoint. International Review of Law, Computers and Technology, 34(2), 183-200. https://doi.org/10.1080/13600869.2020.1733754
Memorandum on Defending American Companies and Innovators From Overseas Extortion and Unfair Fines and Penalties 2025 (USA).
Network Enforcement Act 2017 (DEU).
Neudert, L.M. (2024). Reclaiming digital sovereignty: Policy and power dynamics behind Germany’s NetzDG. Journal of Information Policy, 14, 417-470. https://doi.org/10.5325/jinfopoli.14.2024.0013
Online Safety Act 2021 (Cth).
Protection from Online Falsehoods and Manipulation Act 2019 (SGP).
Provisions on the Administration of Deep Synthesis Internet Information Services 2022 (CHN).
Qureshi, I., & Bhatt, B. (2024). Social media-induced polarisation. Information Systems Journal, 34(4), 1425-1431. https://doi.org/10.1111/isj.12525
Scassa, T. (2023). Regulating AI in Canada: A critical look at the proposed artificial intelligence and data act. Canadian Bar Foundation, 101(1), 1. https://cbr.cba.org/index.php/cbr/article/view/4817
World Economic Forum. (2024). The Global Risks Report 2024. https://www3.weforum.org/docs/WEF_The_Global_Risks_Report_2024.pdf
The views and opinions expressed by Global Voices Fellows do not necessarily reflect those of the organisation or its staff.