Content Provenance and Disclosure Requirements for AI-Generated Content on Digital and Traditional Media Platforms
- Global Voices Fellow
- Mar 31
Updated: Apr 19
Stefan Hofmann, Curtin University, AI For Good 2024
Executive Summary
With the recent public releases of generative artificial intelligence (AI) tools, it is now easier than ever for people to create realistic-looking media with the intention to mislead. When misinformation and disinformation are prevalent, people's confidence in AI and trust in the media are undermined, which is detrimental to economic growth and the maintenance of democracy. The effectiveness of AI-generated misinformation can be minimised by providing the public with the provenance (place of origin) of the content they see in the media. This information would outline the origin and edit history of a piece of content such as an image or audio file, including disclosure of any steps that involved generative AI.
Standards such as Content Credentials, created in collaboration with industry, already provide the technology to make this possible (Content Authenticity Initiative, n.d.). This level of transparency lets viewers know how an image, video, or audio file was made and empowers them to make an informed choice about whether to take a piece of content at face value. This approach is preferable because it avoids the government having to act as the arbiter of what counts as misinformation, and it does not limit the use of generative AI in the media, so long as disclosure and provenance details are provided.
The Combating Misinformation and Disinformation Bill was recently introduced to parliament and aimed to give the Australian Communications and Media Authority (ACMA) powers to establish industry codes of practice that address the proliferation of misinformation and disinformation on digital platforms. This paper proposes the reintroduction of an amended version of the Bill that expands these powers to include traditional media outlets, and recommends that standards such as Content Credentials be implemented in codes of practice so that misleading content generated by AI can be identified by the Australian public. The 2023-24 Budget provided $7.9 million to the ACMA over four years to support the enactment of this Bill, and this paper also factors in $1 million for a supporting public awareness campaign. However, reintroducing this contentious Bill could present a political risk, given its recent public backlash, among other risks discussed in this paper.
Problem Identification
The release of ChatGPT in late 2022 spurred an influx of similar generative AI tools onto the market, giving the public access to a range of powerful tools for quickly creating realistic-looking images, narration, and written content (Lawton, 2024). Applications such as proofreading, idea generation, and concept explanation allow generative AI to deliver major productivity benefits. However, inappropriate use of these technologies, such as the creation of deepfakes (digitally manipulated media that replaces one person's likeness with another's), and the sudden ease of creating deceptive content, has caused concern about the authenticity of the content viewed in social and news media (Select Committee on Adopting Artificial Intelligence, 2024).
Rising public concern about the authenticity of content seen on media platforms has motivated major organisations such as Adobe, Microsoft, and The New York Times to create the Content Authenticity Initiative (CAI). The Content Authenticity Initiative (n.d.) contends that the key to maintaining the public's trust in the content they see is the provision of content provenance details, including any interaction with generative AI systems such as AI photo-editing features or AI image and video generation tools. To achieve this vision, the CAI promotes the Content Credentials industry standard, which, when attached to a piece of content, provides provenance details on how the content was captured and edited, including whether generative AI tools were used in its creation (Content Authenticity Initiative, n.d.). While Content Credentials do not prove whether a piece of content is genuine, mechanisms that provide transparency into generative AI use empower the public to make informed decisions on whether to treat a piece of content as genuine or as misinformation.
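To make the idea of a provenance record more concrete, the sketch below shows a simplified, hypothetical manifest of the kind a Content Credentials-style workflow might attach to an image, recording the asset's hash and an ordered capture/edit history with a generative AI flag. The field names, tool names, and helper functions are illustrative assumptions for this paper, not the actual Content Credentials specification.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch only: the field names loosely echo Content Credentials
# concepts (a claim generator, an asset hash, a list of actions) but are
# simplified assumptions, not the real specification.

def asset_hash(data: bytes) -> str:
    """Hash the media file so the manifest is bound to one exact asset."""
    return hashlib.sha256(data).hexdigest()

def build_manifest(image_bytes: bytes, actions: list[dict]) -> dict:
    """Assemble a provenance manifest: how the asset was produced and edited."""
    return {
        "claim_generator": "ExampleNewsroomCMS/1.0",  # hypothetical publishing tool
        "asset_sha256": asset_hash(image_bytes),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "actions": actions,  # ordered capture/edit history
    }

def used_generative_ai(manifest: dict) -> bool:
    """Report whether any recorded action declares generative AI involvement."""
    return any(a.get("generative_ai", False) for a in manifest["actions"])

if __name__ == "__main__":
    fake_image = b"\x89PNG...example bytes..."
    history = [
        {"action": "captured", "tool": "CameraApp 3.2", "generative_ai": False},
        {"action": "edited", "tool": "PhotoEditor 9.1 (AI fill)", "generative_ai": True},
    ]
    manifest = build_manifest(fake_image, history)
    print(json.dumps(manifest, indent=2))
    print("Generative AI disclosed:", used_generative_ai(manifest))
```

In a real deployment the manifest would also be cryptographically signed so that viewers (and platforms) can detect tampering; the sketch omits signing to keep the structure of the disclosure visible.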
Given the transformative nature of AI, jurisdictions have only recently begun establishing governance around it. The state of New York directly addressed the misinformation and disinformation problem that generative AI brings to elections after AI-generated audio was used to mislead radio listeners into believing offensive remarks had been made by a sitting politician (Harkavy, 2024). Legislation was amended to require news organisations to disclose all uses of Artificial Intelligence Generated Content (AIGC) in political advertising (Harkavy, 2024).
Countries have also addressed the broader issue of misinformation and disinformation through public education. After Finland ramped up its anti-fake-news initiative from 2014 onwards by revising school curriculums and running courses for residents and politicians, it ranked first among European countries in media literacy skills (Mackintosh, 2019). While not directly addressing the misinformation issue, the City of Amsterdam, in a bid to increase transparency around the use of AI in government processes, now provides a public register of where AI is used, as well as insight into how these algorithms work (Department of Industry Science and Resources, 2023).
Other countries have also adopted omnibus approaches to AI governance that address misinformation concerns. The United States (US) President issued a non-binding executive order on 'safe, secure and trustworthy' AI, which aims to establish practices for detecting AIGC so that the use of AI can be identified, especially in cases of deceptive use (The White House, 2023). The European Union (EU) Artificial Intelligence Act (2024) also requires that limited-risk or higher-risk applications of AI-powered systems, which include the creation of AIGC, carry labelling that is also machine-readable (European Commission, 2023). Because these policies are all still new, evaluating their effectiveness has not yet been possible; however, they all provide mechanisms that raise public awareness of when content may have been modified or created through AI and that assess the associated risk of misinformation and disinformation.
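As a rough illustration of what "machine-readable labelling" enables on the consumption side, the sketch below shows how a platform might read a provenance flag from an asset's metadata and decide which disclosure, if any, to show the viewer. The metadata key, generator field, and disclosure wording are hypothetical assumptions for illustration, not the EU AI Act's technical requirements or any real platform API.

```python
from typing import Optional

# Hypothetical metadata check: "ai_generated" and "generator" are assumed
# field names, and the disclosure strings are illustrative only.

def disclosure_label(metadata: dict) -> Optional[str]:
    """Return a viewer-facing disclosure if the asset declares AI involvement."""
    if metadata.get("ai_generated") is True:
        tool = metadata.get("generator", "an AI tool")
        return f"This content was generated or modified using {tool}."
    if metadata.get("ai_generated") is None:
        # No machine-readable label at all: provenance unknown, not "human-made".
        return "No provenance information is available for this content."
    return None  # explicitly labelled as not AI-generated

if __name__ == "__main__":
    print(disclosure_label({"ai_generated": True, "generator": "ImageModel X"}))
    print(disclosure_label({"ai_generated": False}))
    print(disclosure_label({}))
```

The distinction drawn in the sketch between "labelled as not AI-generated" and "no label present" matters in practice: absence of provenance data should not be presented to viewers as proof of authenticity.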
The Australian Government has already begun building its policy surrounding AI. The only AI-specific legislation passed by the government so far is the Deepfake Sexual Material Bill 2024 (Cth), which outlaws the sharing of non-consensual deepfake sexual material (Prime Minister of Australia, 2024). This policy is limited to AIGC of a sexual nature and does not address any other uses of generative AI, including misinformation or transparency of use.
The ACMA, as Australia's media regulator, has an interest in addressing the proliferation of misinformation in news and social media. The Authority currently oversees the Australian Code of Practice on Disinformation and Misinformation, administered by the non-profit Digital Industry Group Inc (DIGI), under which signatories such as Adobe, Apple, and Meta have committed to measures that address misinformation, including in a generative AI context. These commitments are limited, however, by the fact that they are not enforceable by the ACMA.
The government recently introduced the Combating Misinformation and Disinformation Bill 2024 (Cth), which aimed to address this shortfall. It provided the ACMA with information-gathering powers over 'digital communication platforms', as well as the ability to request the creation of codes of practice or to create industry standards as a stronger form of regulation (Australian Communications and Media Authority, personal communication, 2024; Department of Infrastructure Transport Regional Development Communications and the Arts, 2024). This would allow the ACMA either to work with social media platforms to enforce measures that address the AI-fuelled misinformation problem, or to enforce its own standards on industry, such as Content Credentials.
After failing in the Senate, the Bill was removed from the notice paper on 25 November 2024, and the government indicated that it would be postponed indefinitely (Kampmark, 2024). The Bill was rejected by parliamentarians and the general public for infringing on free speech, not giving the ACMA enough power to force social media organisations to comply, and, notably, failing to include traditional news media organisations in its scope (Kampmark, 2024).
Options
There are several policy levers available to the Australian Government which could empower the public with tools to appropriately judge a piece of content and address the issue of AI-generated misinformation and disinformation. These are:
The delivery of a public education campaign on the risks of misinformation and disinformation spread through generative AI. This option would see the commissioning of a campaign highlighting the potential for AI to be used to deceive people, ways of identifying AIGC, and the use of tools like Content Credentials to verify content provenance. Given that detection and content provenance tools are not 100% accurate, this empowers the public with the knowledge to make their own judgements. Public education campaigns on misinformation and disinformation have been effective in countries like Finland, and a similar program would be proposed here, with a closer focus on AIGC. However, the success of such campaigns relies on the public's willingness to consume the message at a time when cost-of-living and other issues dominate the media landscape, and this option does nothing to address the root cause of the issue.
The provision of power to the ACMA to co-design enforceable misinformation codes of practice for traditional media as well as digital media platforms. Under this policy option, the ACMA would compel the creation of a co-designed industry code of practice. The code would require traditional and digital media providers to pass content provenance details on to the viewer to aid in identifying potential misinformation and disinformation created with AIGC. This would allow industry to adopt its own practices that satisfy these requirements, increasing the likelihood of a positive industry attitude toward the change. Implementing standards such as Content Credentials into the codes would be an effective way to meet these requirements. This policy option would be implemented by amending the Combating Misinformation and Disinformation Bill 2024 (Cth) to broaden its scope to include traditional media organisations such as TV and radio broadcasters. Given that the proposed legislation recently failed in the Parliament after public disapproval, reintroducing an amended Bill could represent a political risk to the government of the day.
Amendment of the Broadcasting Services Act 1992 (Cth) and Online Safety Act 2021 (Cth) to enforce the provision of content provenance details or disclosure of AIGC to viewers. Instead of creating an industry code of practice, parliament could write these requirements into law by amending the Broadcasting Services Act 1992 and the Online Safety Act 2021. These amendments would require news and social media organisations either to use the Content Credentials standard or to pass on to the viewer information similar to what would be contained within Content Credentials. Media organisations may be hesitant about this approach, as they may feel they have had less input than under the co-design process of an industry code. Legislating these requirements also makes changing them more difficult, especially if the standards introduced become obsolete due to further advances in AI technology. However, other parties, such as the public, may appreciate the open debate and standard parliamentary processes involved in drafting this policy.
Policy Recommendation
References
-------
The views and opinions expressed by Global Voices Fellows do not necessarily reflect those of the organisation or its staff.