
Content Provenance and Disclosure Requirements for AI Generated Content on Digital and Traditional Media Platforms

  • Writer: Global Voices Fellow
  • Mar 31
  • 13 min read

Updated: Apr 19

Stefan Hofmann, Curtin University, AI For Good 2024


Executive Summary


With the recent public releases of generative artificial intelligence (AI) tools, it is now easier than ever for people to create realistic-looking media with the intention to mislead. When misinformation and disinformation are prevalent, people's confidence in AI and trust in the media are undermined, which is detrimental to economic growth and the maintenance of democracy. The effectiveness of AI-generated misinformation can be minimised by providing the public with the provenance (place of origin) of the content they see in the media. Provenance information outlines the origin and edit history of a piece of content, such as an image or audio file, including disclosure of any steps that involved generative AI.

Standards such as Content Credentials, which have been created in collaboration with industry, already provide the technology to make this possible (Content Authenticity Initiative, n.d.). This level of transparency lets the viewer know how an image, video or audio file was made and empowers them to make an informed choice about whether to take a piece of content at face value. This approach is preferable because it avoids making the government the arbiter of what counts as misinformation, and it does not limit the use of generative AI in the media, so long as disclosure and provenance details are provided.


The Combating Misinformation and Disinformation Bill was recently introduced to Parliament and aimed to give the Australian Communications and Media Authority (ACMA) powers to establish industry codes of practice that address the proliferation of misinformation and disinformation on digital platforms. This paper proposes the reintroduction of an amended version of the Bill that expands these powers to include traditional media outlets, and recommends that standards such as Content Credentials be implemented in codes of practice so that misleading AI-generated content can be identified by the Australian public. The 2023-24 Budget provided $7.9 million to the ACMA over four years to support the enactment of this Bill, and this paper also factors in $1 million for a supporting public awareness campaign. However, reintroducing this contentious Bill could present a political risk, given its recent public backlash, among other risks discussed in this paper.

Problem Identification

The release of ChatGPT in late 2022 spurred an influx of similar generative AI tools to the market, giving the public access to a range of powerful tools for quickly creating realistic-looking images, narration, and written content (Lawton, 2024). Applications such as proofreading, idea generation, and concept explanation allow generative AI to provide major productivity benefits. However, inappropriate use of these technologies, such as the creation of deepfakes (digitally manipulated media that replaces one person's likeness with another), and the sudden ease of creating deceptive content, has caused concern about the authenticity of the content viewed in social and news media (Select Committee on Adopting Artificial Intelligence, 2024).


Rising public concern about the authenticity of content seen on media platforms has motivated major organisations such as Adobe, Microsoft and the New York Times to create the Content Authenticity Initiative (CAI). The Content Authenticity Initiative (n.d.) contends that the key to maintaining the public's trust in the content they see is the provision of content provenance details, including any interaction with generative AI systems such as AI photo-editing features or AI image and video generation tools. To achieve this vision, the CAI promotes the Content Credentials industry standard, which, when attached to a piece of content, provides provenance details on how the content was captured and edited, including whether generative AI tools were used in its creation (Content Authenticity Initiative, n.d.). While Content Credentials do not prove whether a piece of content is genuine, mechanisms that provide transparency into generative AI use empower the public to make informed decisions about whether to treat a piece of content as genuine or as misinformation.
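To make this concrete, the sketch below shows how a simplified provenance manifest might record a piece of content's capture and edit history, and how a reader (or a platform) could check it for generative AI use. The field names and action labels are illustrative stand-ins, not the actual, much richer C2PA schema that underpins Content Credentials.

```python
import json

# A simplified, illustrative provenance manifest. Real Content Credentials
# follow the much richer C2PA specification; the field and action names here
# are readable stand-ins, not the actual schema.
manifest_json = """
{
  "title": "beach-sunset.jpg",
  "claim_generator": "ExamplePhotoEditor/2.1",
  "actions": [
    {"action": "created", "tool": "DigitalCamera X100", "when": "2024-08-01T10:32:00Z"},
    {"action": "edited", "tool": "ExamplePhotoEditor", "when": "2024-08-02T09:15:00Z"},
    {"action": "generative_fill", "tool": "ExampleAIModel", "when": "2024-08-02T09:20:00Z"}
  ]
}
"""

# Action labels that, in this sketch, indicate generative AI involvement.
GENERATIVE_ACTIONS = {"generative_fill", "ai_generated", "ai_composited"}

manifest = json.loads(manifest_json)
ai_steps = [a for a in manifest["actions"] if a["action"] in GENERATIVE_ACTIONS]

if ai_steps:
    print(f"Generative AI was used in {len(ai_steps)} step(s):")
    for step in ai_steps:
        print(f"  - {step['action']} via {step['tool']} at {step['when']}")
else:
    print("No generative AI use recorded in this manifest.")
```

Even this toy example shows the core idea: the judgement about whether content is trustworthy stays with the viewer, but the edit history needed to make that judgement travels with the file.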


Given the transformative nature of AI, jurisdictions have only just begun establishing governance around it. The state of New York directly addressed the misinformation and disinformation problem that generative AI brings to elections after AI-generated audio was used to mislead radio listeners into believing a sitting politician had made offensive remarks (Harkavy, 2024). Legislation was amended to require news organisations to disclose all uses of artificial intelligence generated content (AIGC) in political advertising (Harkavy, 2024).


Countries have also addressed the broader issue of misinformation and disinformation through public education. After Finland ramped up its anti-fake-news initiative from 2014 onwards by revising school curricula and running courses for residents and politicians, it ranked first among European countries in media literacy skills (Mackintosh, 2019). While not directly addressing misinformation, the City of Amsterdam, in a bid to increase transparency about the use of AI in government processes, now provides a public register of where it is used, as well as insight into how these algorithms work (Department of Industry Science and Resources, 2023).


Other countries have also adopted omnibus approaches to AI governance that address misinformation concerns. The United States (US) President issued a non-binding executive order on 'safe, secure and trustworthy' AI, which aims to establish practices for detecting AIGC so that the use of AI can be identified, especially where it is used to deceive (The White House, 2023). The European Union (EU) Artificial Intelligence Act (2024) also requires that limited- and higher-risk applications of AI-powered systems, which include the creation of AIGC, carry labelling that is machine-readable (European Commission, 2023). As these policies are still new, evaluating their effectiveness has not yet been possible; however, they all provide mechanisms that raise public awareness of when content may have been modified or created by AI, and require the associated misinformation and disinformation risks to be assessed.
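As an illustration of what machine-readable labelling enables, the sketch below checks an image file for the IPTC digital source type term 'trainedAlgorithmicMedia', a published vocabulary term used to mark fully AI-generated media. The naive byte scan is an assumption made for brevity; a real pipeline would parse the image's XMP metadata with a proper metadata library.

```python
from pathlib import Path

# IPTC's controlled-vocabulary term for media generated entirely by an AI
# model; embedding it in image metadata is one existing way to make an
# "AI-generated" label readable by machines as well as people.
TRAINED_ALGORITHMIC_MEDIA = "trainedAlgorithmicMedia"

def looks_ai_generated(image_path: str) -> bool:
    """Naively scan a file's raw bytes for the IPTC digital source type term.

    A production pipeline would parse the XMP metadata properly rather than
    string-searching the whole file; this is a deliberately minimal sketch.
    """
    data = Path(image_path).read_bytes()
    return TRAINED_ALGORITHMIC_MEDIA.encode("utf-8") in data

# Example usage (assumes a local file): looks_ai_generated("suspect-image.jpg")
```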


The Australian Government has already begun building its policy surrounding AI. The only AI-specific legislation passed so far is the Deepfake Sexual Material Bill 2024 (Cth), which outlaws the sharing of non-consensual deepfake sexual material (Prime Minister of Australia, 2024). This policy is limited to AIGC of a sexual nature and does not consider other uses of generative AI, including addressing misinformation or increasing transparency of its use.


The ACMA, as Australia's media regulator, has an interest in addressing the proliferation of misinformation in news and social media. The Authority currently oversees the Australian Code of Practice on Disinformation and Misinformation, administered by the non-profit Digital Industry Group Inc (DIGI), under which signatories such as Adobe, Apple, and Meta have committed to measures that address misinformation, including in a generative AI context. These commitments are limited, however, by the fact that they are not enforceable by the ACMA.


The government recently introduced the Combating Misinformation and Disinformation Bill 2024 (Cth), which aimed to address this shortfall. It provided the ACMA with information-gathering powers over 'digital communication platforms', as well as the ability to request the creation of codes of practice or to create industry standards as a stronger form of regulation (Australian Communications and Media Authority, personal communication, 2024; Department of Infrastructure Transport Regional Development Communications and the Arts, 2024). This would allow the ACMA either to work with social media platforms to enforce measures that address the AI-fuelled misinformation problem, or to enforce its own standards, such as Content Credentials, on industry.


After failing in the Senate, the Bill was removed from the notice paper on 25 November 2024, and the government indicated that it would be postponed indefinitely (Kampmark, 2024). The Bill was rejected by parliamentarians and the general public for infringing on free speech, for not giving the ACMA enough power to force social media organisations to comply, and, notably, for failing to consider traditional news media organisations in its scope (Kampmark, 2024).


Options

There are several policy levers available to the Australian Government which could empower the public with tools to appropriately judge a piece of content and address the issue of AI-generated misinformation and disinformation. These are:


  1. The delivery of a public education campaign on the risks of misinformation and disinformation spread through generative AI. This option would see the commissioning of a campaign highlighting the potential for AI to be used to deceive people, ways of identifying AIGC, and the use of tools like Content Credentials to verify content provenance. Given that detection and content provenance tools are not 100% accurate, this empowers the public with the knowledge to make their own judgements. Public education campaigns on misinformation and disinformation have been effective in countries like Finland, and a similar program is proposed here, with a closer focus on AIGC. However, the success of such campaigns relies on the public's willingness to engage with the message at a time when cost-of-living and other issues dominate the media landscape, and this option does nothing to address the root cause of the issue.


  2. The provision of power to the ACMA to co-design enforceable misinformation codes of practice for traditional media as well as digital media platforms. Under this policy option, the ACMA would compel the creation of a co-designed industry code of practice. The code would require traditional and digital media providers to pass content provenance details on to the viewer, to aid in identifying potential misinformation and disinformation in the form of AIGC. This would allow industry to adopt its own practices that meet these requirements, increasing the chance of a positive attitude to the change within the industry. Implementing standards such as Content Credentials in the codes would be an effective way to meet these requirements. This option would be implemented by amending the Combating Misinformation and Disinformation Bill 2024 (Cth) to broaden its scope to include traditional media organisations such as TV and radio broadcasters. Given that the proposed legislation recently failed in the Parliament after public disapproval, reintroducing an amended Bill could represent a political risk to the government of the day.


  3. Amendment of the Broadcasting Services Act 1992 (Cth) and Online Safety Act 2021 (Cth) to enforce the provision of content provenance details or disclosure of AIGC to viewers. Instead of creating an industry code of practice, Parliament could write these requirements into law by amending the Broadcasting Services Act 1992 and the Online Safety Act 2021. These amendments would require news and social media organisations either to utilise the Content Credentials standard or to pass on information similar to what would be contained in Content Credentials to the viewer. Media organisations may be hesitant about this approach, as they may feel they have had less input compared with the co-creation process of an industry code. Legislating these requirements also makes changing them more difficult, especially if the standards introduced become obsolete due to further advances in AI technology. However, other parties, such as the public, may appreciate the open debate and standard parliamentary processes involved in drafting this policy.



Policy Recommendation

Option 2, “the provision of power to the ACMA to co-design enforceable misinformation codes of practice for traditional media as well as digital media platforms” is recommended as the most viable option to reduce the prevalence of misinformation and disinformation through AIGC.

When the appropriate powers are provided, the ACMA will be able to request the creation of a code that could include anti-misinformation and disinformation measures such as providing content provenance details to viewers, or disclosure of generative AI use in the creation or modification of content. To comply with these requirements, digital communication platforms could co-design their code of practice around surfacing Content Credentials information to users of their platforms. Traditional media organisations using radio and television could also comply by disclosing to viewers that content being shown was modified or created by generative AI, with Content Credentials again serving as a good source of this information; a sketch of how such a disclosure might be derived follows below. Where the ACMA determines that the codes of practice are inadequate to meet these requirements, it will be able to create its own industry standards and enforce them on digital platforms and traditional media organisations.
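The function below is a hypothetical illustration of how a platform or broadcaster might map recorded provenance actions to an on-screen disclosure. The action names, categories and wording are assumptions made for the sake of the example; they are not drawn from the Content Credentials standard or from any proposed code of practice.

```python
# A hypothetical sketch of how a broadcaster or platform might turn recorded
# provenance actions into the viewer-facing disclosure this option requires.
# The action names, tiers and wording are illustrative assumptions only.

def disclosure_label(actions: list[str]) -> str:
    """Return an on-screen disclosure string for a list of provenance actions."""
    if "ai_generated" in actions:
        return "This content was created with generative AI."
    if "generative_fill" in actions or "ai_edited" in actions:
        return "This content was modified using generative AI."
    if actions:
        return "Edit history available via Content Credentials."
    return "No provenance information is available for this content."

print(disclosure_label(["created", "generative_fill"]))
# -> This content was modified using generative AI.
```

The design choice here mirrors the policy's intent: the platform passes on what the provenance record says, rather than ruling on whether the content is misinformation.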

Regulation of misinformation and disinformation is currently not within the remit of the ACMA, and legislation will need to be passed for the above to occur. The Combating Misinformation and Disinformation Bill would provide powers for the ACMA to enforce codes of practice and industry standards for digital platforms, and this option proposes expanding that scope to include traditional news media organisations. This omission was cited as one of the reasons the Australian Greens voted the Bill down, and the amendment proposed here addresses it. The Combating Misinformation and Disinformation Bill primarily amends the Broadcasting Services Act 1992 (Cth) and the Australian Communications and Media Authority Act 2005 (Cth) to provide the ACMA with powers to enforce codes of practice and industry standards, as well as information-gathering powers.

This transparency-based approach is recommended because it gives the Australian public the information they need to assess the provenance of a piece of content and decide whether they believe it is genuine. If viewers could see that an image was captured with a digital camera but was then significantly modified using generative AI features in photo-editing software, they might choose to treat the content with caution.


Consequently, this should reduce the circulation of misleading content in the first place, given that the public would be able to see the full edit history and capture details of content. The Content Credentials standard, recommended here as a technical standard for meeting these requirements, has already been designed by members of the social and news media industries with this intention in mind. This should minimise the chance of pushback from industry and accelerate adoption by remaining parties. Compared with other methods of addressing misinformation, this solution is ideal because it avoids making the government the arbiter of what counts as 'misinformation', which the public could see as infringing on their freedom of speech. It also does not dictate what type of content can be published in social and traditional media, a restriction that would draw opposition from industry and the general public if proposed.

The cost of this policy will primarily comprise the costs involved in passing the Combating Misinformation and Disinformation Bill. In the 2023-24 Budget, the government provided $7.9 million to the ACMA over four years to support the creation of the Bill and to facilitate the powers it provides the ACMA (Department of Finance, 2023). Given that this proposal recommends actions covered by those powers, its cost should fall within this allocation, which has already been budgeted for. Additional costs may lie in a public education campaign informing the public about the content provenance and disclosure changes, and how they can use this added transparency to identify AIGC that could be misleading.


The 2024-25 Budget provided $1.0 million to fund an education and awareness campaign for the government's new 'mandatory minimum classifications for gambling-like content in computer games' (Rowland, 2024). An education campaign for the policy outlined in this paper would be similar in scope, and therefore similar in cost. The success of this policy will be determined by the ACMA, which monitors the Australian media. While all complaints about misinformation are currently directed to DIGI, the ACMA will receive the powers it needs to act on misinformation complaints once the Bill is passed. Success can be measured by a reduction in misinformation complaints relating to AIGC.


Risks

The largest risk surrounding this policy proposal revolves around the Bill it aims to amend. After backlash from the public, opposition and crossbench, the Combating Misinformation and Disinformation Bill was removed from the notice paper (Butler, 2024). One of the main criticisms among the public was a fear that freedom of expression would not be adequately protected under the Bill (Butler, 2024). Reintroducing it exposes the government to a public that is already apprehensive about misinformation and disinformation reform.


Given the extra responsibilities placed upon digital platforms and traditional media organisations, they may lobby against these requirements. The news media has great influence on the political conversation, so there is a chance it could get the Australian public on its side in a policy conflict. Social media companies also have influence over the public, and given the smaller size of the Australian market, they have a history of withdrawing services from Australia rather than complying with regulation, in an attempt to deter other nations from following suit. This was seen in 2021, when Facebook temporarily pulled all news content from Australia in response to the News Media Bargaining Code (Australian Broadcasting Corporation, 2021).


There is also a chance that a greater public stigma could develop toward content that was wholly or even partially created with AI, causing people to dismiss any material that involved AI at all. Minor uses of generative AI in content creation, such as removing objects from an image or extending an image beyond its frame, would be recorded as generative AI use even where these modifications play no part in altering the original meaning of the content. These and many other minor uses of generative AI are already widespread, and disclosure may frustrate content creators who rely on these tools and do not want their content to carry a 'misleading' stigma.


On the other hand, the opposite effect could also occur. There is a risk that people may see no generative AI tool listed in the Content Credentials of a piece of content and place automatic trust in it. Content made without the assistance of generative AI can still be made to mislead, so the absence of generative AI cannot be treated as an automatic trust marker. Given Content Credentials' purpose as a single source of truth for content provenance, any forgery of this information could also have greater consequences than if no credentials were attached in the first place, and this must be considered as well.
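On the forgery point, provenance standards of this kind are designed to be tamper-evident because the manifest is cryptographically signed. The sketch below illustrates that principle only: it uses an Ed25519 key from Python's `cryptography` package as a stand-in, whereas C2PA's actual signing and trust model (built on X.509 certificate chains) is considerably more involved.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signer = Ed25519PrivateKey.generate()  # stands in for a credential issuer's key
manifest = b'{"actions": [{"action": "created", "tool": "DigitalCamera X100"}]}'
signature = signer.sign(manifest)

public_key = signer.public_key()
public_key.verify(signature, manifest)  # passes: manifest is untouched

# An attacker rewrites the recorded history but cannot re-sign it.
tampered = manifest.replace(b'"created"', b'"ai_generated"')
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("Tampering detected: signature does not match the manifest.")
```

Tamper-evidence of this kind protects the record itself, but it cannot stop a forged credential signed by a compromised or untrustworthy key, which is why the trust model around who may sign still matters.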

References

World Economic Forum. (2024). These are the 3 biggest emerging risks the world is facing. https://www.weforum.org/agenda/2024/01/ai-disinformation-global-risks/ 


Ognyanova, K., Lazer, D., Robertson, R. E., & Wilson, C. (2020). Misinformation in action: Fake news exposure is linked to lower trust in media, higher trust in government when your side is in power. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-024 


Roy Morgan. (2023). Majority of Australians believe artificial intelligence (AI) creates more problems than it solves. https://www.roymorgan.com/findings/9339-campaign-for-ai-safety-press-release-august-2023


Department of Industry Science and Resources. (2023). Safe and responsible AI in Australia. https://storage.googleapis.com/converlens-au-industry/industry/p/prj2452c8e24d7a400c72429/public_assets/Safe-and-responsible-AI-in-Australia-discussion-paper.pdf


Department of Industry Science and Resources. (n.d.). List of Critical Technologies in the National Interest: AI technologies. https://www.industry.gov.au/publications/list-critical-technologies-national-interest/ai-technologies


NewsGuard. (2024). Tracking AI-enabled Misinformation: 1,000 'Unreliable AI-Generated News' Websites (and Counting). https://www.newsguardtech.com/special-reports/ai-tracking-center/


Content Authenticity Initiative. (n.d.). How it works. https://contentauthenticity.org/how-it-works

Harkavy, R. (2024). New York clamps down on the use of AI in political ad campaigns. https://www.globallegalinsights.com/news/new-york-clamps-down-on-the-use-of-ai-in-political-ad-campaigns/


The White House. (2023). FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/


Department of Infrastructure Transport Regional Development Communications and the Arts. (2023). Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2023 Guidance Note. https://www.infrastructure.gov.au/have-your-say/new-acma-powers-combat-misinformation-and-disinformation


Department of Finance. (2023). Regulatory Powers to Combat Misinformation and Disinformation. https://structure.gov.au/measure/regulatory-powers-combat-misinformation-and-disinformation


Rowland, M. (2024). Boosting connectivity and safety for Australians. https://minister.infrastructure.gov.au/rowland/media-release/boosting-connectivity-and-safety-australians


Australian Broadcasting Corporation. (2021). Facebook just restricted access to news in Australia. Here's what that means for you. https://www.abc.net.au/news/2021-02-18/facebook-news-ban-what-just-happened-post-zuckerberg/13166710


Lawton, G. (2024). What is generative AI? Everything you need to know. https://www.nvidia.com/en-au/glossary/generative-ai/


Fan, B., Liu, S., Pei, G., Wu, Y., & Zhu, L. (2021). Why Do You Trust News? The Event-Related Potential Evidence of Media Channel and News Type [Original Research]. Frontiers in Psychology, 12. https://doi.org/10.3389/fpsyg.2021.663485 

Australian Communications and Media Authority. (2020). Artificial intelligence in communications and media (Occasional paper). https://www.acma.gov.au/sites/default/files/2020-07/Artificial%20intelligence%20in%20media%20and%20communications_Occasional%20paper.pdf


Mackintosh, E. (2019). Finland is winning the war on fake news. What it’s learned may be crucial to Western democracy. CNN. https://edition.cnn.com/interactive/2019/05/europe/finland-fake-news-intl/ 

Select Committee on Adopting Artificial Intelligence. (2024). Select Committee on Adopting Artificial Intelligence. https://parlinfo.aph.gov.au/parlInfo/download/committees/reportsen/RB000493/toc_pdf/SelectCommitteeonAdoptingArtificialIntelligence(AI).pdf


Kampmark, B. (2024). Ding dong, Australia’s misinformation and disinformation Bill is dead. Independent Australia. https://independentaustralia.net/politics/politics-display/ding-dong-australias-misinformation-and-disinformation-bill-is-dead,19250


Butler, J. (2024, November 24). Labor dumps misinformation bill after Senate unites against it. The Guardian. https://www.theguardian.com/australia-news/2024/nov/24/labor-dumps-misinformation-bill-after-senate-unites-against-it





-------


The views and opinions expressed by Global Voices Fellows do not necessarily reflect those of the organisation or its staff.
