
Economic Stability in the Digital Age: Targeted Regulation of AI in the Financial Sector

  • Writer: 2025 Global Voices Fellow
  • Feb 18
  • 21 min read

Mia Wegener, The University of Melbourne (Faculty of Business and Economics), AI for Good Fellow  


Executive Summary


The rapid adoption of artificial intelligence (AI) in Australia’s financial sector presents significant opportunities but also introduces systemic risks to economic stability. In particular, AI-driven trading systems can amplify market volatility, enable manipulative market practices, and trigger procyclical behaviour.  


Existing regulations governing Australia’s financial markets were developed on the assumption of human decision-making and do not account for the complexity of adaptive AI systems. While current guidelines may apply to AI usage by default, the lack of clear guardrails creates ambiguity and limits regulatory enforceability.  

  

This policy proposal recommends amending Section 5.7.3 of the ASIC Market Integrity Rules to explicitly reference AI-driven trading systems. This targeted update would clarify that existing rules on market manipulation and algorithmic trading apply equally to AI, addressing the emergence of novel risks while maintaining Australia’s technology-neutral legislative approach.   

  

The proposed policy can be implemented through a minor legislative change overseen by the Australian Securities and Investments Commission (ASIC). Estimated costs include $10-25 million over 3-4 years for consultation and regulatory system upgrades, and $5-10 million annually for ongoing monitoring.   

  

Key barriers include ASIC’s need for technical upskilling, challenges in defining AI-based market manipulation, and industry resistance. Risks include the reduced attractiveness of Australia’s investment climate and the potential for unintended market consequences. Nonetheless, as AI adoption accelerates across financial markets, regulatory clarity is essential. This policy provides a practical, low-intervention option to strengthen AI governance and support Australia’s ongoing financial stability. 


Problem Identification

In 2025, 63% of global financial regulators identified algorithmic trading as a "current or near-term" application of AI (IOSCO, 2025). Globally, this market is projected to reach US$65.2 billion by 2032 (Allied Market Research, 2024), and in Australia, AI integration across the financial services industry is forecast to reach 70% by 2027 (Pagram, 2024). However, this rapid adoption is unfolding against a backdrop of weak governance and low public trust. While 71% of Australian employees report using generative AI (GenAI) tools, only 30% indicate their organisations have policies governing responsible usage, and Australia ranks lowest globally in the belief that the benefits of AI outweigh the risks (Gillespie et al., 2025).

  

Existing laws governing financial services assume human-driven decision-making and lack clear responsibility frameworks for AI-formulated decisions, limiting enforceability for autonomous systems. The technologically agnostic terms used in legislation – in line with Australia’s technology-neutral regulatory approach – mean that these regulations technically cover AI usage, but the lack of precise specification creates ambiguity. This complicates the identification and mitigation of risks associated with AI-driven trading systems, including their potential to amplify systemic economic risks.   

  

The proliferation of similar AI models across firms is likely to increase market interconnectedness and foster procyclical behaviours such as herding (FSB, 2024). In the short term, this may trigger unexplained market shifts and heightened volatility, eroding investor confidence (IMF, 2024). 


Over the long term, imprecise delegation of responsibility heightens the vulnerability of Australia’s financial system to economic shocks (Hall, 2024). In the absence of sufficient guardrails, AI systems responding autonomously to extraneous shocks may drive rapid portfolio shifts toward safe assets, leading to one-way markets, fire sales, and liquidity crises  (see Appendix for Glossary of key terms) (OECD, 2023). 

Context

Global Context


AI is increasingly utilised in financial services to optimise internal processes, including data quality checks, financial statement reconciliation, and the provision of customer support. These use cases can improve productivity and lower operating costs for firms (OECD, 2023).

 

Although early applications have posed relatively limited risks to economic stability, industry practices are evolving, and capital market participants should be “prepared to respond to an acceleration in the pace of adoption in high-impact areas” (IMF, 2024). A recent survey of UK firms found that 11% use AI for algorithmic trading, with a further 9% planning adoption within the next three years (Breeden, 2024). Even capital management – a traditionally cautious sector – is beginning to integrate AI, with 4% of firms already using AI and 10% planning adoption within three years.   

  

It is in this context that the IMF’s 2024 World Economic Outlook strongly recommended financial sector authorities update their skills and supervisory technology to monitor the emergence of AI in more risk-sensitive areas of the economy. 


Australian Context


In Australia, the trend is similar. ASIC’s 2024 Report Beware the gap: Governance arrangements in the face of AI innovation identified both a “rapid acceleration” in AI deployment across financial services, as well as a shift towards “more complex and opaque” applications, such as the use of GenAI to assist in underwriting services and risk evaluation.  

 

In insurance, AI is enabling faster and more accurate claims processing, with firms such as Suncorp and QBE using machine learning to assess property risks (NextDC, 2024). At an institutional level, major banks – including CBA, NAB, ANZ, and Westpac – are deploying AI to automate processes and develop internal chatbots to streamline customer interactions (CBA, 2024; NAB, 2023; ANZ, 2023; Westpac, 2023). This adoption represents substantial economic opportunity: recent modelling suggests that GenAI could contribute between $45 billion and $115 billion annually to the Australian economy by 2030, driven primarily by increased productivity via automation (Microsoft & Tech Council of Australia, 2023). 

 

However, the proliferation of AI is not without risks. The widespread use of AI-driven trading models that process information in similar ways may increase market vulnerability to both exogenous shocks and model errors, including AI hallucinations (RBA, 2024). Such risks are amplified by the complexity and nonlinearity of AI systems, which make it challenging to understand how models have reached certain decisions and to hold market participants accountable if those decisions contribute to systemic instability.


Overview of risks arising from increased AI use in capital markets


  1. Misaligned Incentives – AI "has no values, only objectives" (Leitner et al., 2024)


The computational power of AI systems poses risks if directed towards narrow objectives that fail to encapsulate broader human values (Danielsson, 2021).   


Key risks:  

  • Divergence from broader public interest by AI systems optimised for profit maximisation.   

  • Development of strategies that come at the expense of a stable and efficient market due to the absence of nuanced human judgement. 

Example:  

In the 1980s, EURISKO, an AI system, won a war game by sinking its own fleet’s slowest ships to enhance manoeuvrability (Krakovna, 2018). This did not infringe the game’s rules but violated implicit human ethics, a phenomenon known as "reward hacking."


  2. Increased Market Homogeneity


AI models are commonly developed using similar datasets, algorithms, and objectives. This may lead to a convergence in trading strategies and risk measurement models across firms (Hall, 2024). Relative to traditional algorithmic trading, AI models are more capable of responding to live inputs such as changing market sentiment or breaking news, increasing the likelihood that multiple systems react simultaneously to new information.


 Key risks:   

  • Homogeneity in trading strategies as AI models trained on similar data and optimised for similar objectives tend to react uniformly to new information.

  • Outsized market volatility arising from many AI-driven systems updating their positions simultaneously in response to minor information shocks. 

Example:   

In the 1990s, Long-Term Capital Management (LTCM) promoted quantitative trading strategies that became widely imitated by investment firms across financial markets. External shocks, like the 1997 Asian Financial Crisis, prompted a cascade of similar responses (due to the assumptions and risk perceptions shared by models), rapidly destabilising the wider system (Hall, 2024).  
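The destabilising feedback described above can be illustrated with a toy simulation (a hypothetical sketch with made-up parameters, not a model of any real trading system): each agent sells once its stop-loss threshold is breached, and each sale adds downward price pressure that can trigger further sales. When every agent runs an identical "model", all selling lands in the same step.

```python
import random

def max_one_step_drop(n_agents=100, threshold_spread=0.0, shock=-0.02,
                      impact=0.0005, steps=10, seed=1):
    """Toy herding model: each agent sells (once) when the latest market
    return breaches its stop-loss threshold; each sale adds `impact` of
    downward pressure to the next return. threshold_spread=0 means every
    agent shares one model, so all agents sell in the same step."""
    rng = random.Random(seed)
    thresholds = [-0.015 + rng.uniform(-threshold_spread, threshold_spread)
                  for _ in range(n_agents)]
    sold = [False] * n_agents
    ret, worst = shock, shock
    for _ in range(steps):
        new_sellers = 0
        for i, t in enumerate(thresholds):
            if not sold[i] and ret < t:
                sold[i], new_sellers = True, new_sellers + 1
        ret = -impact * new_sellers        # aggregate price impact
        worst = min(worst, ret)
    return worst

print("identical models :", max_one_step_drop(threshold_spread=0.0))
print("dispersed models :", max_one_step_drop(threshold_spread=0.02))
```

With identical thresholds the entire market sells simultaneously, producing a single outsized drop; dispersing the thresholds spreads the selling across steps and dampens the worst single-step move.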


  3. Procyclicality Under Stress


During periods of market stress, synchronised reactions to market signals will exacerbate instability (FSB, 2024), even where such homogeneity may appear benign under normal market conditions.  

Key risks:   

  • Herding, collusion, and other destabilising market behaviours arising from interconnected AI models responding similarly to live data pipelines, such as selling in unison if one sector or asset class comes under stress (FSB, 2024). 

  • Rapid shifts toward safe assets, leading to one-way markets, fire sales, and liquidity crises in stressed conditions (OECD, 2023).

  

Example:   

During the COVID-19 market crash in March 2020, AI-powered Exchange Traded Funds (ETFs) increased their portfolio turnover (i.e. they traded more aggressively) relative to traditional ETFs. This suggests that they responded to the downturn with herd-like selling, further driving prices down in an already collapsing market (Abbas et al., 2024).


  4. Shock Amplification and Systemic Risk


The combined risks of misaligned incentives, market homogeneity, and procyclicality increase the susceptibility of financial markets to systemic disruption. This risk is inflated by the scale and speed of AI-driven trading (ASIC, 2024).  


Key risks:   

  • Compounding of minor market distortions due to AI systems executing trades at speeds that outpace human intervention. 

  • System-wide collapse triggered by the cascading effects of manipulative or destabilising AI-driven trading behaviour.  


Example:   

The 2010 Flash Crash was triggered by high-frequency trading algorithms misinterpreting market signals and initiating selloffs which cascaded rapidly across the market. This resulted in the US stock market losing US$1 trillion in value within 30 minutes (Cornell, 2020).    


Overall, a widespread reliance on models programmed to prioritise individual return over broader market stability and which respond similarly to market signals can push “a slight market downturn into a rapid collapse” (Frazier, 2024).  


Current Policy Landscape


Market Integrity Regulation


Market misconduct prohibitions are governed by Part 7.10 of the Corporations Act 2001 (Cth). These rules are operationalised by Part 5.7 of the ASIC Market Integrity Rules (Securities Markets) 2017, which provide detailed clarification on what conduct constitutes “market manipulation”. This includes false or misleading orders with respect to timing, size, or genuine commercial purpose, as well as orders that deviate from the regular trading patterns of a Trading Participant. These Rules also govern how trading firms deploy automated order processing (AOP) systems to ensure that automated trades remain accountable to the relevant Trading Participant and do not distort market efficiency.


Section 5.6 stipulates that Trading Participants must implement pre-programmed filters – such as price and volume thresholds – and retain full control and auditability over these settings. Access to trading systems is restricted to persons authorised by the Trading Participant, who must demonstrate familiarity with both the trading infrastructure and the market operating rules. To ensure reliability, firms are required to conduct external reviews and submit formal certification of compliance to ASIC before launching an AOP system or making material changes to it.     
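To illustrate the kind of pre-programmed filter Section 5.6 contemplates, the following sketch (all names, thresholds, and data structures are hypothetical, not drawn from the Rules or any real AOP system) rejects orders that breach price or volume limits and records every decision so the settings remain auditable.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Order:
    symbol: str
    side: str        # "buy" or "sell"
    price: float
    volume: int

@dataclass
class FilterConfig:
    reference_price: float   # e.g. last traded price
    max_price_dev: float     # max fractional deviation from reference
    max_volume: int          # per-order volume cap

audit_log = []  # every filter decision is retained for audit

def pre_trade_check(order: Order, cfg: FilterConfig) -> bool:
    """Reject orders breaching price or volume thresholds before they
    reach the market, logging each decision."""
    deviation = abs(order.price - cfg.reference_price) / cfg.reference_price
    ok = deviation <= cfg.max_price_dev and order.volume <= cfg.max_volume
    audit_log.append((datetime.now(timezone.utc).isoformat(), order, ok))
    return ok

cfg = FilterConfig(reference_price=10.00, max_price_dev=0.05, max_volume=50_000)
print(pre_trade_check(Order("XYZ", "buy", 10.20, 1_000), cfg))   # within limits
print(pre_trade_check(Order("XYZ", "sell", 8.00, 1_000), cfg))   # price breach
```

A production filter would of course sit inline with the order gateway and cover many more dimensions (message rates, open exposure, instrument-specific limits), but the principle of hard-coded, auditable constraints is the same.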

Sections 5.7.1–5.7.2 set out prohibitions on manipulative trading conduct, including the criteria used to assess whether conduct has the effect of misleading the market. Clause 5.7.3 explicitly extends these obligations to orders placed through AOP systems, ensuring that automation is held to the same conduct standards as manual trading.    


Importantly, these obligations were designed for deterministic automated systems which execute trades based on fixed, pre-defined rules, making them relatively predictable and auditable. In contrast, AI-driven trading models using machine learning can autonomously evolve their strategies in real time, formulating decisions which cannot be easily explained, and which may deviate from what was explicitly programmed.  


This fundamental shift in how trading logic is generated introduces novel regulatory challenges. While AI-driven systems may technically fall under the existing rules, the current framework does not fully account for the unique risks posed by their opacity and adaptability.    


AI Regulation


Since the initial drafting of this paper, the AI policy landscape has transformed significantly and will likely continue to do so. Australia is committed to a technology-neutral, principles-based approach to AI regulation (Department of Industry, Science and Resources [DISR], 2024), most recently formalised in the National AI Plan released in December 2025. The plan confirms that Australia has moved away from a standalone "AI Act" or a central set of "Mandatory Guardrails" – as previously proposed by DISR in November 2024.


The plan maintains that current legal frameworks – including the Corporations Act, Privacy Act, and Australian Consumer Law – are sufficient to address emerging AI risks, conditional on proactive governance and alignment with best practices such as those outlined in Australia’s AI Ethics Principles (2019). Under a decentralised model, individual regulators retain primary responsibility for identifying and mitigating potential AI harms within their specific sectors. In practice, this means that regulators are expected to adapt sector-specific guidelines as needed to ensure that regulatory actions remain targeted given the “evolving nature of AI and its impacts across the economy and society” (DISR, 2025). To support these efforts, the government plans to establish the AI Safety Institute (AISI) in early 2026, which will work with regulators to help enforce and update regulatory frameworks as technology continues to progress.  


Furthermore, DISR released the Guidance for AI Adoption (GfAA) in October 2025, officially superseding the 2024 Voluntary AI Safety Standard (VAISS), which set out ten non-binding guardrails for responsible AI deployment. The GfAA condenses these guardrails into six "essential practices": Decide accountability; Understand impacts; Measure risks; Share information; Test/monitor; Maintain human control. 


Gaps in current legislation


While the initiatives outlined above reaffirm the role of existing laws, these broad frameworks remain fundamentally 'outcome-based' and do not provide ex-ante technical constraints for addressing applications of AI in financial and capital markets.    


Key gaps include: 


  • The release of the 2025 National AI Plan indicates that earlier proposals for mandatory economy-wide guardrails have been abandoned in favour of sector-specific regulator action. This leaves the GfAA as a non-binding benchmark rather than enforceable law.  

  • The Financial Accountability Regime Act 2023 (FAR Act) does not recognise deployers of high-risk AI as accountable persons. Thus, no AI-specific governance is incorporated in the key functions assigned to accountable persons.    


  • Neither the Corporations Act nor the ASIC Market Integrity Rules explicitly reference the use of AI in financial markets. This creates legislative ambiguity around the potential for:

      • AI to contribute to manipulative market practices

      • AI to generate misleading knowledge

      • AI to create new avenues for insider trading

  • Clause 5.7.3 of the ASIC Market Integrity Rules (2017) explicitly extends market manipulation regulations to AOP systems, despite prior clauses already capturing such conduct through technologically neutral language. However, this provision has not been further extended to address the potential for AI systems to autonomously engage in manipulative trading behaviour.

 

ASIC (2024) noted that “additional guidance to industry clarifying the application of corporations and financial services laws” is necessary and has since released Consultation Paper 386 (2025), proposing updates to the trading-systems obligations in the Securities and Futures Market Integrity Rules. These proposals aim to modernise regulation to better reflect current trading practices and to ensure greater consistency in requirements for financial market participants. ASIC is currently reviewing industry feedback, and no commencement date for new rules has been set.

Policy Options

Concern surrounding ambiguous accountability frameworks and the opacity of AI-powered decision-making in financial markets highlights the need for targeted regulatory measures. Current laws, while technically applicable, fail to provide the specificity or enforceability needed to mitigate the risks posed by adaptive AI-driven trading. Effective policy intervention should emphasise clearly defined accountability and robust oversight mechanisms to bolster public trust and promote ongoing market stability. There are several options available to achieve this:  


Option 1: Revise the Financial Accountability Regime Act 2023 (Cth) to clarify accountability in the context of AI  


This would involve updating accountability obligations to require that accountable persons ensure responsible governance of AI-specific processes and systems, including risk management for unintended consequences resulting from AI-formulated decisions. Incorporating AI-specific governance into the key functions assigned to accountable persons would help to clarify accountability by assigning clear, actionable responsibilities (Porter, 2025).    


Regulatory responsibility for AI governance under the FAR Act would fall on both ASIC and the Australian Prudential Regulation Authority (APRA) who jointly administer the Act. ASIC’s role would primarily focus on where AI impacts market integrity and investor outcomes, while APRA would oversee prudential soundness of integrated AI processes.   


This measure clarifies that AI governance falls within the remit of a designated accountable person, creating a clear incentive for institutions to test their models before deployment and discouraging reliance on algorithms without adequate oversight or explainability.


However, increased regulatory burdens could impose additional compliance costs, especially for smaller institutions. Moreover, the risk of “automation bias”, whereby accountable persons defer to automated systems in spite of their own judgement, may undermine the effectiveness of such a policy (Doctorow, 2024).     


Option 2: Mandate human oversight for high-risk decisions made by adaptive AI systems within sector-specific regulation of financial markets  


This policy option involves ASIC amending the Market Integrity Rules (Securities Markets) 2017 to formally designate Practice 6 ("Maintain Human Control") of the GfAA as a mandatory requirement for high-risk AI systems used in financial markets. This would require that a “human appropriately oversees any AI systems” (NAIC, 2025), entailing both an ex-ante review of system logic and continuous monitoring of adaptive drift to ensure the system remains within human-defined operational boundaries, supported by mandatory “kill-switches” (Practice 6.2).  


Such oversight should apply where decisions are made by adaptive systems and could cause reasonably foreseeable negative effects on the economy at large – effects which are not easily reversible. This recommendation seeks to enhance public trust in high-risk AI applications by reinforcing human oversight in critical scenarios (OECD, 2023). However, it may slow down decision-making processes, impacting efficiency (Kelly, 2023), and may also be susceptible to the risk of automation bias.
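A minimal sketch of the "kill-switch" oversight this option envisages (the class, metric, and thresholds are hypothetical, not taken from the GfAA): the harness latches into a halted state when a monitored behaviour metric drifts beyond human-defined bounds, and only an explicit human reset can resume trading.

```python
class KillSwitch:
    """Minimal 'maintain human control' harness: latches into a halted
    state when a monitored behaviour metric drifts beyond human-defined
    bounds, and stays halted until a human explicitly resets it."""

    def __init__(self, baseline_turnover: float, max_drift: float):
        self.baseline = baseline_turnover  # behaviour observed at approval time
        self.max_drift = max_drift         # human-defined tolerance, e.g. 0.5 = 50%
        self.halted = False

    def check(self, observed_turnover: float) -> bool:
        """Return True if trading must halt."""
        drift = abs(observed_turnover - self.baseline) / self.baseline
        if drift > self.max_drift:
            self.halted = True             # hard stop pending human review
        return self.halted

    def reset(self) -> None:
        self.halted = False                # deliberate human action only

switch = KillSwitch(baseline_turnover=1.0, max_drift=0.5)
print(switch.check(1.2))   # 20% drift, within bounds -> False
print(switch.check(2.0))   # 100% drift -> True (halted)
print(switch.check(1.0))   # back to baseline, but stays halted -> True
```

The latching design matters: an adaptive system that drifts out of bounds should not resume simply because its behaviour momentarily returns to normal, since the ex-ante review of its logic is no longer valid.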


Option 3: Integrate AI-driven trading applications into ASIC’s existing market manipulation regulation


This policy option would entail amending the ASIC Market Integrity Rules (Securities Markets) 2017 to make an explicit reference to instances where market manipulation can be caused by AI-driven trading systems.         


Classifying AI-induced distortions within the scope of regulation would enhance visibility and provide clear legal attribution of accountability for AI-driven trading systems. However, financial institutions could face difficulties identifying and isolating AI-induced distortions, leading to enforcement challenges. The amendment may also discourage the use of innovative AI trading strategies, affecting the international competitiveness of Australia’s capital markets (Hyams, 2023).   

Policy Recommendation

This paper recommends Option 3, integrating “AI-driven trading applications into ASIC’s existing market manipulation regulation” as the most appropriate and actionable response to address regulatory ambiguity surrounding AI-driven trading in financial markets. This small but targeted update would clarify how market misconduct rules apply equally to AI-driven trades, ensuring Australia’s regulatory framework evolves alongside emerging financial risks.   


Suggested amendments are detailed in red for consideration below: 


5.7 - Market Manipulation

5.7.1 - False or misleading appearances


A market participant must not place Bids or Offers for, or trade: 

  • With intent to mislead about market activity or price. 

  • Where the effect of the dealing is likely to send false or misleading signals about the price of a financial product. 

  • On the account of any other person if they know or should reasonably suspect another party's intent is to create a false market signal. 


5.7.2 - Circumstances of Order


When evaluating if an order is manipulative, participants must consider: 

  • Whether the Order deviates from recent trading patterns in that financial product. 

  • If the Order would artificially move the market or price. 

  • The timing, size, frequency, and repetition with which Orders are placed by the person. 

  • Whether the order is part of a pattern or series. 

  • If there's a genuine commercial purpose for the order. 


5.7.3 - Obligations apply to Automated Order Processing including where such Orders are the result of AI-driven trading systems.


A Market Participant must also comply with this Part 5.7 in respect of Orders the subject of Automated Order Processing, including Orders the subject of AI-driven trading systems. 


Rationale for Policy

Since ASIC’s definition of "market participant" already encompasses entities using automated systems, the original clause 5.7.3 serves a clarifying function by explicitly applying manipulative trading rules to orders generated through AOP.   


The proposed amendment – to be implemented, overseen, and enforced by ASIC – extends this clarity to orders generated by AI-driven systems. By clearly stating that these systems are governed by the same obligations, this amendment mitigates interpretive ambiguity in enforcement and compliance. The clarification also reflects the need for legislation to evolve alongside technology to address emerging risks.


Alignment with Australia's regulatory approach


Australian legislation emphasises technology-neutrality which helps regulation remain relevant and enforceable as technology evolves. By clarifying that market participants are responsible for AI-driven trading systems without prescribing specific technologies or methods to achieve compliance, the proposed amendment stays within the framework of a technologically neutral approach.   


Additionally, the proposal focuses on outcomes (i.e., ensuring the integrity of markets) rather than detailing exact procedures or technological constraints, thereby aligning with Australia’s principles-based approach to AI regulation.  


Estimated cost of proposal


The cost will mainly depend on the technology and staffing investments deemed necessary for ASIC to enhance its technological capabilities to effectively monitor and enforce compliance.   

 Areas requiring funding include:    

  • Legal, consultative, and administrative costs associated with updating the Market Integrity Rules   

  • Expanding ASIC’s monitoring systems to detect AI-driven manipulation and enforce ongoing compliance     

  • Educating financial institutions on the expanded scope of regulatory obligations   


Cost estimation:

  • Initial funding: likely $10-25M over 3-4 years

      • Primarily for consultation, technological updates, and raising awareness of additional regulation

      • Comparable benchmark: Payment Times Reporting Initiative (2024-25)

  • Ongoing costs: approximately $5-10M annually

      • Funds operational costs associated with ongoing compliance, monitoring, and enforcement

      • Comparable benchmark: Beneficial Ownership Register Initiative (2024-25)


Points of comparison:

Consumer Data Right (CDR) Scheme (2020) – $19.2 million over 12 months, allocated to Treasury and the ACCC

  • $6.6 million towards implementing the scheme

  • $12.6 million towards an informational campaign to raise awareness of the CDR

Beneficial Ownership Register (2024-25) – $41.7 million over four years, plus $9.6 million per year ongoing, allocated to Treasury, ASIC, and the Attorney-General’s Department

  • Funds to regulate new beneficial ownership transparency requirements

Payment Times Reporting (2024-25) – $25 million over four years, allocated to the Payment Times Reporting Regulator

  • Funds to increase the Regulator’s resourcing and technology upgrades

  

Risks

Barriers

Industry pushback    

Trading institutions operating in Australia may resist increased regulatory oversight, due to the cost of ongoing compliance (Hyams, 2023). They may argue that the proposed regulatory changes disrupt the efficiency payoffs of AI-driven trading, framing additional regulation as a disincentive to further innovation in financial technology (Moutoi, 2024). 


Adopting a phased implementation approach might be helpful in allowing firms time to adapt to new obligations while spreading compliance deadlines over several years (Crayon, 2024). 


Definitional and enforcement challenges


Defining AI-driven market manipulation and establishing clear parameters for identification may be difficult due to the opaque nature of AI systems (Jain, 2024).    

     

Where autonomous AI systems operate as “black boxes”, their decision-making processes often lack explainability. This is further complicated by the adaptability of AI systems, which can autonomously evolve their strategies over time. Consequently, attributing manipulation specifically to the AI system’s logic rather than human misbehaviour or a coincidental anomaly may be challenging.    

     

This is particularly the case in Australia's technology-neutral framework which avoids prescriptive classifications. Extensive consultation with industry experts will be necessary to develop a definition that is flexible while remaining sufficiently precise to support enforcement.    


Technical capacity of regulators


Tracking how AI systems make decisions, and proving when they break the rules, requires advanced regulatory tools and expertise. Given the novelty of the risks posed by AI, it is unlikely ASIC currently has the staff or infrastructure needed to detect and respond to AI-based market manipulation. To address this barrier, further investment will be required to scale up its technological capacity to enforce the proposed policy effectively (ASIC, 2024).    
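As a rough illustration of the tooling involved (a deliberately crude sketch; production market surveillance is far more sophisticated, and the function and thresholds here are hypothetical), a regulator or firm might flag orders that deviate sharply from recent trading patterns, echoing the "timing, size, frequency" criteria in Rule 5.7.2.

```python
from statistics import mean, stdev

def flag_anomalous_orders(order_sizes, window=20, z_threshold=4.0):
    """Flag order sizes that deviate sharply from a rolling window of
    recent activity, a crude proxy for the 'deviation from recent
    trading patterns' criterion in Rule 5.7.2."""
    flags = []
    for i in range(window, len(order_sizes)):
        recent = order_sizes[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(order_sizes[i] - mu) / sigma > z_threshold:
            flags.append(i)   # index of the suspicious order
    return flags

sizes = [100, 105, 98, 102, 101, 99, 103, 97, 100, 104,
         96, 101, 100, 102, 98, 103, 99, 100, 101, 102,
         100, 5000, 101]   # one outsized order at index 21
print(flag_anomalous_orders(sizes))
```

Even this simple heuristic shows why capacity matters: meaningful detection of AI-driven manipulation requires choosing baselines, windows, and thresholds across millions of orders, and distinguishing genuine anomalies from legitimate strategy shifts.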


Risks


Australia's investment climate might become less attractive


By imposing stricter oversight on AI trading systems, there is a possibility that international market participants may perceive the Australian trading environment as overly burdensome (Chan, Papyshev, & Yarime, 2024). This may prompt a shift of trading activities to jurisdictions with looser regulatory environments, potentially reducing Australia’s market activity and liquidity in the short term. However, in the long-term, effective regulation could also position Australia as a safer destination for capital flow, attracting investment from those seeking lower risk exposure (King, 2025).   

    

Unintended market consequences


Stricter human intervention requirements, such as mandatory compliance and review processes, could make financial markets less efficient. Although the intention is to safeguard against AI misbehaviour, regulation may slow down trade execution and hamper the operational efficiency of compliant firms, thereby reducing liquidity and inadvertently contributing to brittle financial markets (Gai, Yao, & Ye, 2013).


Increased burden for the legal system


If the definition of AI-induced market manipulation lacks clarity, it may undermine enforceability of the policy (Chan et al., 2024). While a flexible definition is required by Australia’s technology-neutral framework, it risks creating loopholes vulnerable to AI exploitation which may lead to more frequent legal disputes.


References

Abbas, N., Cohen, C., Grolleman, D. J., & Mosk, B. (2024, October 15). Artificial intelligence can make markets more efficient—and more volatile. International Monetary Fund. https://www.imf.org/en/Blogs/Articles/2024/10/15/artificial-intelligence-can-make-markets-more-efficient-and-more-volatile 


Allied Market Research. (2024). Algorithmic trading market research, 2032. https://www.alliedmarketresearch.com/algorithmic-trading-market-A08567


ANZ Bank. (2023, August). ANZ launches AI-powered chatbot ZGPT. ANZ Bluenotes. https://www.anz.com.au/bluenotes/2023/08/anz-news-tim-hogarth-zgpt-chatbot-australia/ 


ASIC. (2024, October 29). ASIC Report 798: ASIC’s response to the AI guardrails discussion paper. https://download.asic.gov.au/media/mtllqjo0/rep-798-published-29-october-2024.pdf 


ASIC. (2024). ASIC submission to DISR’s AI guardrails discussion paper. https://download.asic.gov.au/media/0fifk1th/202410-submission-to-disr-ai-guardrails-discussion-paper.pdf 


ASIC Market Integrity Rules (Securities Markets) 2017 (Cth). https://www.legislation.gov.au/current/F2017L01474 


ASIC Supervisory Cost Recovery Levy Bill 2017 (Cth). https://ministers.finance.gov.au/financeminister/media-release/2017/06/15/asic-industry-funding-model-passed-law 


Breeden, S. (2024, October). Keynote speech at the Hong Kong Monetary Authority–Bank for International Settlements High-Level Conference. Bank of England. https://www.bankofengland.co.uk/speech/2024/october/sarah-breeden-keynote-speech-at-the-hong-kong-monetary-authority 


Chan, A. Y., Papyshev, A., & Yarime, M. (2024). AI regulation in the context of national innovation systems: A comparative analysis. Technology in Society, 76, 102508. https://www.sciencedirect.com/science/article/pii/S0160791X24002951


Chartered Accountants ANZ. (2024–25). Federal Budget 2024–25: Key technology and anti-corruption measures. https://www.charteredaccountantsanz.com/news-and-analysis/news/federal-budget-2024-25-key-technology-and-anti-corruption-measures 


CSIRO. (2024). Artificial Intelligence: Australia’s Ethics and Roadmap. CSIRO Data61. https://www.csiro.au/-/media/D61/Reports/AI-Roadmap/19-00346_DATA61_REPORT_AI-Roadmap-_7_.pdf 


Commonwealth Bank of Australia. (2024, March). Microsoft and CBA announce partnership to bring generative AI to banking. https://www.commbank.com.au/articles/newsroom/2024/03/microsoft-ai-partnership.html 


Cornell University Networks Course Blog. (2020, November 13). The 2010 Flash Crash: How information cascades shape our world. https://blogs.cornell.edu/info2040/2020/11/13/the-2010-flash-crash-how-information-cascades-shape-our-world/ 


Crayon. (2024). New EU AI Act: What businesses need to know. https://www.crayon.com/resources/insights/new-eu-ai-act/ 


Danielsson, J. (2021). Artificial intelligence and systemic risk. SSRN. https://papers.ssrn.com/sol3/Delivery.cfm/SSRN_ID3906163_code2867800.pdf?abstractid=3410948&mirid=1 


Department of Industry, Science and Resources. (2025). National AI Plan. https://www.industry.gov.au/publications/national-ai-plan 


Department of Industry, Science and Resources (DISR). (2024, September). Proposals paper for introducing mandatory guardrails for AI in high-risk settings. https://storage.googleapis.com/converlens-au-industry/industry/p/prj2f6f02ebfe6a8190c7bdc/page/proposals_paper_for_introducing_mandatory_guardrails_for_ai_in_high_risk_settings.pdf 


Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market. (2005). Official Journal of the European Union, L 149, 22–39. http://data.europa.eu/eli/dir/2005/29/oj 


Doctorow, C. (2024). AI’s ‘human in the loop’ isn’t. Medium. https://doctorow.medium.com/ais-human-in-https-pluralistic-net-2024-10-30-a-neck-in-a-noose-is-also-a-human-in-the-loopthe-loop-isn-t-4b9510251ce5 


European Securities and Markets Authority (ESMA). (2024, May). Public statement on AI and investment services. https://www.esma.europa.eu/sites/default/files/2024-05/ESMA35-335435667-5924__Public_Statement_on_AI_and_investment_services.pdf  


European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending certain Union legislative acts (AI Act). Official Journal of the European Union, L, 202/1–202/178. http://data.europa.eu/eli/reg/2024/1689  


Evans, R. (2025, March 12). The current state of affairs for AI regulation in Australia. International Association of Privacy Professionals. https://iapp.org/news/a/the-current-state-of-affairs-for-ai-regulation-in-australia/ 


Frazier, J. (2024, April). Selling spirals: Avoiding an AI flash crash. Lawfare. https://www.lawfaremedia.org/article/selling-spirals--avoiding-an-ai-flash-crash 


Financial Stability Board (FSB). (2024, November). The financial stability implications of artificial intelligence. https://www.fsb.org/2024/11/the-financial-stability-implications-of-artificial-intelligence/ 


Gai, P., Yao, A., & Ye, M. (2013). The externalities of high frequency trading. SSRN. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2066839 


Gillespie, N., Hardy, C., McKeown, S., & Sharma, R. (2025). Trust in AI: Global insights 2025. KPMG. https://kpmg.com/au/en/home/insights/2025/04/trust-in-ai-global-insights-2025.html#:~:text=As%20well%20as%20being%20wary,lowest%20ranking%20of%20any%20country 


Hall, J. (2024, May). The impact of AI on systemic financial stability. Speech at University of Exeter. Bank of England. https://www.bankofengland.co.uk/speech/2024/may/jon-hall-speech-at-the-university-of-exeter 


Hendry, J. (2020, July 23). Govt sinks another $20m into consumer data right. iTnews. https://www.itnews.com.au/news/govt-sinks-another-20m-into-consumer-data-right-550799 


Hyams, J. (2023). Will regulating AI hinder innovation? Trullion. https://trullion.com/blog/ai-regulation/ 


IMF. (2024, October). Chapter 3: Advances in artificial intelligence: Implications for capital market activities. In Global Financial Stability Report – October 2024 (pp. 75–98). https://www.imf.org/en/Publications/GFSR/Issues/2024/10/22/global-financial-stability-report-october-2024 


International Organization of Securities Commissions (IOSCO). (2025, March). Artificial intelligence in capital markets: Use cases, risks, and challenges (CR/01/2025). https://www.publicnow.com/view/3839EDF5773EDC86D804E13441BD24E47D76B2DF?1741780065  


Jain, R. (2024). Ethical implications of autonomous AI systems in financial services. In M. S. Roy (Ed.), Responsible AI for Financial Technologies (pp. 101–122). Emerald Publishing. https://www.emerald.com/insight/content/doi/10.1108/978-1-83549-001-320241002/full/html?skipTracking=true 


Kelly, J. (2023, June 5). Artificial intelligence is getting regulated. Forbes. https://www.forbes.com/sites/jackkelly/2023/06/05/artificial-intelligence-is-getting-regulated/ 


King, S. (2024). AI regulation and innovation in Australia. https://www.pc.gov.au/media-speeches/articles/ai-regulation 


King, S. (2025). AI regulation: Global approaches and Australia’s positioning. https://www.pc.gov.au/media-speeches/articles/ai-regulation#:~:text=In%20contrast%2C%20Australia%20is%20a,version%20of%20an%20AI%2Dtechnology 


Krakovna, V. (2018, April 2). Specification gaming examples in AI. https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/ 


Lim, C., & Evans, B. (2024, September 16). AI regulation is coming to Australia: What you need to know. King & Wood Mallesons. https://www.kwm.com/au/en/insights/latest-thinking/ai-regulation-is-coming-to-australia-what-you-need-to-know.html 


Macrae, R., Uthemann, A., & Danielsson, J. (2020, March 6). Artificial intelligence as a central banker. VoxEU. https://cepr.org/voxeu/columns/artificial-intelligence-central-banker 


Markets in Financial Instruments Directive II (MiFID II). (2014). Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014. https://eur-lex.europa.eu/eli/dir/2014/65/oj/eng 


Microsoft & Tech Council of Australia. (2023, July). Australia’s Gen AI opportunity: Seizing the opportunity in the era of AI. https://msftstories.thesourcemediaassets.com/sites/66/2023/07/230714-Australias-Gen-AI-Opportunity-Final-report.pdf 


Moutoi, C. (2024, May 20). EU AI Act: How stricter regulations could hamper Europe’s AI innovation. IREF Europe. https://en.irefeurope.org/publications/online-articles/article/eu-ai-act-how-stricter-regulations-could-hamper-europes-ai-innovation/ 


National Australia Bank (NAB). (2023). AI optimism: How NAB is thinking about embracing artificial intelligence. https://news.nab.com.au/news/ai-optimism-how-nab-is-thinking-about-embracing-artificial-intelligence/ 


National AI Centre. (2025). Guidance for AI adoption. Department of Industry, Science and Resources. https://www.industry.gov.au/publications/guidance-ai-adoption  


National AI Centre. (2025). Guidance for AI adoption: Implementation practices. Department of Industry, Science and Resources. https://www.industry.gov.au/publications/guidance-for-ai-adoption/guidance-ai-adoption-implementation-practices 


NextDC. (2024). How AI is reshaping the finance and insurance sectors across Australia. https://www.nextdc.com/blog/how-ai-is-reshaping-the-finance-and-insurance-sectors-across-australia 


Organisation for Economic Co-operation and Development (OECD). (2023). Generative artificial intelligence in finance (OECD Artificial Intelligence Papers, No. 9). https://doi.org/10.1787/ac7149cc-en 


Pagram, T. (2024). Unlocking Australia’s growth potential: Insights from the 2024 AI Jobs Barometer. PwC Australia. https://www.pwc.com.au/services/artificial-intelligence/unlocking-australias-growth-potential.html 


Pepperstone. (n.d.). What is automated trading? https://pepperstone.com/en-au/learn-to-trade/trading-guides/what-is-automated-trading/ 


Porter, T. (2025). AI governance: Avoid risks and empower your teams. VisualSP Blog. https://www.visualsp.com/blog/ai-governance/ 


Regulation (EU) No 596/2014. (2014). On market abuse (market abuse regulation) and repealing Directive 2003/6/EC and Commission Directives 2003/124/EC, 2003/125/EC and 2004/72/EC. Official Journal of the European Union, L 173, 1–61. http://data.europa.eu/eli/reg/2014/596/oj 


Reserve Bank of Australia (RBA). (2024, September). Focus topic: Financial stability implications of artificial intelligence. Financial Stability Review. https://www.rba.gov.au/publications/fsr/2024/sep/focus-topic-financial-stability-implications-of-artificial-intelligence.html 


Westpac. (2023, November). How AI will shape the future of banking. https://www.westpac.com.au/news/in-depth/2023/11/how-ai-will-shape-the-future-of-banking/ 

 

Appendix - Glossary of key terms

Adaptive Drift A change in the statistical properties of input data over time that causes an AI model to diverge from its original training parameters. If left unmonitored, this can significantly compromise the accuracy and integrity of an adaptive system.

AI Hallucinations When an artificial intelligence system confidently presents false or misleading information as fact, with the potential to spread misinformation at scale. 

Algorithmic Trading The use of computer programs, pre-set rules, or algorithms to automatically allocate and place trade orders within capital markets. 

Capital Management The allocation and maintenance of financial resources by institutions to absorb losses, meet regulatory requirements, and remain solvent under different risk scenarios. 

Collusion A secret agreement between trading institutions to cooperate in ways that influence market prices or outcomes, often to the detriment of consumers or other stakeholders. 

ETF (Exchange-Traded Fund) An investment fund traded on an exchange, composed of a basket of securities such as stocks, bonds, or commodities, which can be bought and sold as a single unit. 


Fire Sale The forced sale of an asset at a price significantly lower than its market value due to urgent circumstances, often in a state of financial distress. 

Herding A trading behaviour where investors mimic the actions of others, leading to large market movements in one direction. 

Highly Correlated Markets Occurs when investment decisions move in the same direction at the same time, such that there is a strong correlation between the price of one asset and the prices of similar asset types. In the context of AI-driven trading, this is likely the result of systems being trained on similar data points and optimised for similar objectives. 

(Market) Liquidity Crisis A market state where assets cannot be easily converted into cash without a significant price drop, often resulting from a sudden decrease in demand for the relevant asset class. 

Market Homogeneity A situation where many financial institutions act similarly, such as responding uniformly to regulatory changes, investment trends, or market signals, causing stock prices to rise and fall together. 

Market Interconnectedness  The extent to which financial markets are correlated in terms of asset allocation, strategies, and use of intermediaries. In highly interconnected systems, local shocks can propagate into global events. 

Market Manipulation Actions that artificially inflate or deflate the value of a financial product for personal gain. 

Material Risk Risks arising from the high-impact capabilities of AI models, characterised by their extensive reach and the potential for actual or reasonably foreseeable negative effects on the economy, which can propagate at scale throughout the value chain. 


Non-Reversible Risk Outcomes produced by an AI system that are not easily corrigible or reversible, particularly those with long-term adverse impacts on market stability. 


One-Way Markets Markets dominated by agents either simultaneously trying to buy or simultaneously trying to sell, causing prices to move strongly in one direction, and making trading difficult without large price fluctuations. 


Procyclical Behaviour Actions that amplify economic trends, such as increased lending during a boom or credit restriction during a recession, exacerbating economic volatility. 

Safe Assets Investments considered low-risk and likely to maintain or increase value during economic uncertainty or market downturns. 

Trading Participant A market participant with the right to submit trading messages (e.g., orders, amendments, or cancellations) on a trading platform for financial products.


The views and opinions expressed by Global Voices Fellows do not necessarily reflect those of the organisation or its staff.

