
Robo Cop: Protecting Our Super From Algorithmic Bias in RoboAdvice

  • Writer: 2025 Global Voices Fellow
  • Sep 26
  • 12 min read

Updated: Nov 3

Adele Dang, 2025 AI for Good Global Voices Fellow


Executive Summary


Superannuation is the main income source for 30% of male and 17% of female retirees, and will soon become the primary income source for Australia’s ageing population. The renewed focus on a robust retirement income system providing targeted and affordable income security has prompted superannuation funds to transition towards RoboAdvice: low-cost tools that provide personalised investment advice through algorithms. However, if the algorithms behind RoboAdvice are incorrect, retirees with money in superannuation funds may unwittingly become subject to algorithmic bias (AB) that misallocates their investments. The urgency of this problem lies in the scale and speed at which such bias can be amplified, which could destabilise Australia’s retirement income system and exacerbate the vulnerability of retirees.


This detriment can be minimised by mandating that superannuation funds undergo annual APRA-managed algorithmic audits of their data and source code. These audits would ensure continued mitigation of AB as funds update their algorithms and rely on new datasets. They would be funded through a 0.005–0.008% increase in APRA’s restricted annual levy rate, representing $25,000–$40,000 per fund. However, the success of these audits relies on APRA-deployed teams having access to a wide range of interdisciplinary experts. Furthermore, there is a risk of backlash from funds, as the audits would pose a financial and operational burden.


Problem Identification

As of December 2024, Australia’s superannuation holdings totalled $4.2 trillion (APRA, 2025), and superannuation is the main income source for 30% of male and 17% of female retirees (AIHW, 2024). As compulsory contributions rise, it will become the primary retirement income source for a majority of Australians. Superannuation funds must therefore ensure members receive accurate advice and strategies (Treasury, 2023). Increasingly, this advice is delivered through RoboAdvice: automated, algorithm-driven recommendations based on risk appetite, financial goals and market conditions (KPMG, 2021).


However, as algorithms are coded by humans and trained on datasets that reflect human decision-making and historical market conditions, they may amplify pre-existing human prejudice (Australian Human Rights Commission, 2020). This can lead to investors’ funds being misallocated and poorly managed. Incorrect algorithms may misestimate investors’ risk profiles based on traits such as gender or race. For example, an algorithm may allocate non-Caucasian investors’ funds exclusively towards low-risk, low-return investments based on historical evidence of such investors avoiding high-risk, high-return investments.
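To make this mechanism concrete, the sketch below shows how a risk-profiling model trained on historically biased advice data reproduces that bias. It is a minimal illustration: the data, feature names and model are hypothetical assumptions, not drawn from any fund’s actual RoboAdvice system.

```python
# Hypothetical illustration: a toy risk-profiling model trained on
# historical advice data in which one demographic group was steered
# towards low-risk products regardless of financial capacity.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)              # assumed demographic flag
income = rng.normal(70_000, 15_000, n)     # stated financial capacity

# Historical labels encode the prejudice: group 1 members were never
# rated as suitable for high-risk, high-return investments.
suitable_for_growth = ((income > 70_000) & (group == 0)).astype(int)

X = np.column_stack([group, income / 100_000])  # scale income for the solver
model = LogisticRegression().fit(X, suitable_for_growth)

# Two members with identical finances receive different risk ratings
# purely because of the demographic feature baked into the data.
print(model.predict_proba([[0, 0.9]])[0, 1])  # group 0: rated suitable
print(model.predict_proba([[1, 0.9]])[0, 1])  # group 1: rated unsuitable
```

Retraining on fresh data does not remove the bias while the labels themselves encode it, which is why the audits proposed later in this brief examine both data and source code.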


Unlike human bias, AB can spread rapidly and at scale, posing a significant risk to Australia’s retirement income system (Treasury, 2023). If unaddressed, it could completely destabilise retirees’ financial security, eroding public trust in superannuation funds and the Australian Prudential Regulation Authority (APRA), the statutory body tasked with its regulation and supervision.


As algorithms rely on mathematical models, AB in RoboAdvice has only two plausible causes: incorrect algorithms or the inherent ‘unpredictability’ of financial markets (Kostova, 2024). This policy proposal focuses on the former, given that it can be mitigated.

Context

Background

The Superannuation Guarantee (Administration) Act 1992 was introduced as an alternative to the pension to address Australia’s ageing population, promoting both retirement saving (accumulation) and income security in retirement (decumulation) (Treasury, 2001; 2023). The compulsory Superannuation Guarantee (SG) rate rose from 11.5% of pre-tax earnings to 12% in July 2025 (ATO, 2024). While rising SG rates have strengthened accumulation, reforms have placed ‘less consideration’ on decumulation (Treasury, 2023).


In July 2024, the first wave of Delivering Better Financial Outcomes amendments was passed, following the Quality of Advice Review, which identified gaps in retirement advice driven by the increasing cost of personalised financial advice (Bowes, 2024). The amendments aim to modernise financial advice and increase its affordability (The Treasury, 2024). In line with these aims, funds have accelerated their uptake of RoboAdvice, a low-cost alternative to human-provided financial advice (Bowes, 2024). Although RoboAdvice covered only 8% of assets in 2024, this share is expected to grow (KPMG, 2024). The uptake shifts the focus to decumulation, as more older Australians can access lower-cost investment advice to maximise returns and thus secure a larger retirement income pool.


The Australian Prudential Regulation Authority

APRA is an independent body focused on protecting the financial interests of beneficiaries, including fund members (APRA, n.d.). 


It regulates funds under the Superannuation Industry (Supervision) Act 1993 (Cth) (‘SIS Act’) and its activities are funded through levies set out in the APRA Supervisory Levies Determination 2024.


Table 1: APRA’s levy rates imposed on superannuation funds

Under Prudential Standard SPS 510 Governance, Registrable Superannuation Entity (RSE) licensees must maintain an independent and adequately resourced internal audit function to assess financial and risk frameworks. RSEs include regulated superannuation funds, approved deposit funds and pooled superannuation trusts (s 10). Section 35AC of the SIS Act stipulates the auditor requirements.


Current Policy Landscape

Australia has no AI-specific legislation (Sun, n.d.). Instead, it relies on existing laws to manage the challenges posed by AI, such as the Privacy Act 1988 (Cth), the Corporations Act 2001 (Cth), anti-discrimination laws and voluntary measures. However, these laws have not been updated to address the novel aspects of these challenges. For example, anti-discrimination legislation does not account for AI’s potential to generate new forms of bias (AHRC, 2021).


In 2019, the government introduced voluntary AI Ethics Principles, focusing on human-centred values and safety (DISR, 2019). These were expanded in 2024 with the Voluntary AI Safety Standard, which provides 10 non-binding guardrails. For instance, Guardrail One promotes internal accountability structures and Guardrail Nine promotes record keeping to allow third parties to assess compliance (DISR, 2024a).


Mandatory guardrails for high-risk AI applications are in early development. RoboAdvice used by funds is likely to be considered high-risk based on its ‘intended and foreseeable uses’, given its potential to affect vulnerable groups (Principle D) and the broader Australian economy (Principle F) (DISR, 2024b).


International Policy Landscape

European Union AI Act


The EU AI Act adopts a risk-based framework, assigning responsibilities to developers based on their systems’ risk level (EU AI Act, 2024). In finance, algorithmic systems assessing creditworthiness are classified as high-risk (EU AI Act, 2024, Annex III). Providers of high-risk AI must undergo conformity assessments and implement risk management systems, including ongoing self-assessment and mitigation (art 9). However, RoboAdvice for investment management is unlikely to meet this definition, as it assesses individuals’ risk profiles rather than their creditworthiness.


The EU also has a separate privacy regime, the General Data Protection Regulation (GDPR). The GDPR mandates Data Protection Impact Assessments (DPIAs) for processing likely to pose a ‘high risk to the rights and freedoms of natural persons’ (art 35(1)). DPIAs are a process that helps firms identify and minimise the data protection risks of a project or plan, weighing them against the benefits they wish to achieve (ICO, n.d.). RoboAdvice is likely to trigger this requirement, as its processing of personal information produces effects on individuals’ investments that may be as significant as ‘legal effects’ (art 35(3)(a)).


Proposed Reforms

In Australia, several suggestions have been made to mitigate the adverse outcomes of AB in financial systems:


  1. Implementing national legislation akin to the EU AI Act (AHRC, 2021)

  2. Modifying existing legislation, such as federal anti-discrimination laws, the Online Safety Act 2021 and the Privacy Act 1988, to account for AI-related issues (AHRC, 2021)

  3. Using regulatory sandboxes to identify and mitigate AB (Lee, Resnick and Barton, 2019)


Regulatory sandboxes would allow funds to test the algorithms behind their systems in a supervised setting, allowing for risks to be rectified before full-scale deployment (ASIC, n.d.; ASIC, 2024b). 


ASIC’s Enhanced Regulatory Sandbox (ERS), introduced in September 2020, allows firms to test innovative financial products for up to 24 months without a licence (ASIC, n.d.). However, strict entry requirements, such as ineligibility for previously tested services and a $5 million customer exposure cap, limit its accessibility. Under its non-authorisation model, firms must submit notices 30 days in advance, but ASIC is not required to review them, creating uncertainty. As a result, the ERS has faced criticism for inadequately supporting Australia’s growing fintech sector (Didenko, 2021). Proposed reforms suggest shifting to an authorisation model, requiring ASIC approval before testing (FinTech Australia, 2024; Didenko, 2021).


  4. Introducing algorithmic auditing


This refers to the process of reviewing an algorithm’s outputs, the quality of its code, or the governance of the system, depending on the technical level of the audit (DRCF, 2020; Engler, 2021). In the context of AB, this could include reviewing whether the code mathematically incorporates substantive equality or whether outputs are free from AB (AHRC, 2020). A simple output-level check is sketched below.
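At the output level, one check an audit team could run is a disparate impact ratio across demographic groups. The sketch below is a minimal illustration: the 0.8 (‘four-fifths’) threshold is a common heuristic borrowed from discrimination practice, not an established APRA or audit standard, and the sample data are hypothetical.

```python
# Minimal sketch of an output-level audit check, assuming the auditor
# holds a sample of RoboAdvice outputs tagged with a protected attribute.
import numpy as np

def disparate_impact(recommended_growth: np.ndarray, group: np.ndarray) -> float:
    """Ratio of growth-allocation rates between two groups (1.0 = parity)."""
    rate_a = recommended_growth[group == 0].mean()
    rate_b = recommended_growth[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical audit sample: 1 = member recommended a growth allocation.
outputs = np.array([1, 1, 0, 1, 1, 0, 0, 0, 0, 1])
groups  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(f"disparate impact ratio: {disparate_impact(outputs, groups):.2f}")
# A ratio well below ~0.8 would flag the system for deeper review.
```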


However, algorithmic auditing is novel compared to financial auditing and thus lacks standards or an oversight body (Engler, 2021). 


It also remains unresolved whether algorithmic auditing is better undertaken internally or externally:

  • Internal: Internal auditing gives auditors direct access to primary data (Raji et al., 2020). However, no established process yet exists, undermining the ability to guarantee objectivity.

  • External: External auditing provides access to a broader pool of independent interdisciplinary experts (Xu et al., 2022; Department for Science, Innovation and Technology & AI Safety Institute, 2025). However, external audits are more costly and depend on the capacity of the external firms involved. There may also be privacy concerns if access to primary data is granted.

Policy Options

To reduce the prevalence of AB in investment management, the Australian Government needs to regulate RoboAdvice and ensure that the AI models deployed within the superannuation industry are trustworthy, ethical and fair. There are several ways this could be achieved:


  1. Mandate superannuation funds to undergo annual algorithmic auditing


    This option would expand APRA’s regulatory scope to include mandatory algorithmic auditing. The Federal Government would need to amend the RSE licensee standard, Prudential Standard SPS 510 Governance, to require ‘registrable superannuation entities’ seeking to deploy RoboAdvice to undergo algorithmic auditing. The audits would be carried out by an APRA-appointed interdisciplinary team of technology, ethics and behavioural experts.


    However, as algorithmic auditing is still in its infancy (Engler, 2021), the lack of standardised methods could produce inconsistent results. This policy option should therefore be holistic and incorporate the implementation of standards. Audit costs could be covered by a one-off 0.005–0.008% increase in APRA’s annual restricted levy, representing $25,000–$40,000 per fund.


    This policy would strengthen accountability, reduce AB, and build trust with end-users. However, interdisciplinary audits would also be resource-intensive, and fast-evolving AI models risk making audit findings quickly obsolete.

  2. Mandate superannuation funds to participate in an authorisation-model sandbox which improves upon ASIC’s current ERS


    An authorisation sandbox would improve ASIC’s current ERS by requiring active approval for funds using RoboAdvice, rather than allowing them to proceed unless ASIC objects. Running the sandbox would cost at least $1.6 million annually (Appay & Jenik, 2019).


    This policy option reflects the early stage of RoboAdvice, allows for early detection of AB and offers funds more certainty than the current non-authorisation model. However, the sandbox’s limited, simulated scale may not reflect the complexity of real-world markets, reducing the generalisability of findings. Moreover, as sandboxes would only be available to firms that have not yet tested their algorithms, regular re-testing would not be possible.


  3. Mandate superannuation funds to undergo and publish Algorithmic Impact Assessments (AIAs) on the ASIC website


    AIAs would mirror the format of the EU’s DPIAs. They would be mandatory before the deployment of RoboAdvice and would focus on minimising the risks of AB. Annual publication of these reports would mitigate the risk of self-reporting bias.


    Based on estimates of GDPR compliance costs from EU-based firms, funds would incur average compliance costs of $2.03 million (Veritas, n.d.).

    This policy option would allow early identification of biases and ensure proactive risk mitigation. However, compliance could be resource-intensive, act as a barrier to entry for smaller funds and reduce innovation.

Policy Recommendation

Option 1, to “mandate superannuation funds to undergo annual algorithmic auditing”, is recommended as the most viable means to ensure long-term mitigation of AB in RoboAdvice, thus creating ethical and trustworthy algorithmic systems in Australia's superannuation industry.


Implementation


The Federal Government would need to amend Prudential Standard SPS 510 Governance to mandate annual external algorithmic audits. This would expand the scope of APRA’s regulation of superannuation funds.


This proposal suggests the below insertion: 


External Audit


39. An RSE licensee must undergo an annual external audit. The external audit will be carried out by a select interdisciplinary team deployed by APRA. 


40. The objective of the external audit is the mitigation of algorithmic bias to ensure the deployment of trustworthy, ethical and fair algorithm-based models. To fulfil its functions, the external audit team must, at all times, have unfettered access to all the RSE licensee’s data and source code.


Key Elements


Due to the lack of standardisation across industries and countries (Engler, 2021), APRA should consider the following elements to promote certainty in audit results.


Form

An annual technical audit of the data and source code for AB would be most appropriate, given the serious nature of handling retirement funds (DRCF, 2020). Given the resource-intensive nature of technical audits, APRA would need an interdisciplinary team, including specialists in psychology, behavioural economics and ethics, to effectively address bias (Department for Science, Innovation and Technology & AI Safety Institute, 2025; DRCF, 2020).
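As one illustration of what a data-level check within such a technical audit might involve, the sketch below flags input features that act as statistical proxies for a protected attribute. The column names and the 0.5 correlation threshold are assumptions for illustration, not audit standards.

```python
# Sketch of a proxy-variable check a technical audit team might run on a
# fund's training data. All names and thresholds are illustrative only.
import pandas as pd

def flag_proxies(features: pd.DataFrame, protected: pd.Series,
                 threshold: float = 0.5) -> list:
    """Return features strongly correlated with the protected attribute."""
    return [col for col in features.columns
            if abs(features[col].corr(protected)) > threshold]

# Hypothetical member data: postcode bands can proxy for ethnicity.
data = pd.DataFrame({
    "postcode_band": [1, 1, 2, 2, 3, 3],
    "salary":        [80, 60, 90, 65, 85, 70],
})
protected = pd.Series([0, 0, 0, 1, 1, 1])
print(flag_proxies(data, protected))  # ['postcode_band']
```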


Standards

APRA should create standards common to the Australian superannuation industry (Raji et al., 2020). These standards should combine the Australian financial auditing standards, where applicable, with the AI Ethics Principles and Voluntary Guardrails to ensure alignment and certainty.


Funding

As the audits will be undertaken by external teams, they will need to be funded through APRA’s levy collection. 


APRA would need to raise its restricted levy percentage to generate the revenue required to sustain the auditing team and process. Survey data estimated funds' legal, audit and insurance costs at $200,000 for funds with $500 million in assets, or 0.04% of assets (Clare, 2006). A reasonable one-off levy increase would therefore be around 0.005–0.008% of assets, or $25,000–$40,000 per fund (Clare, 2006).
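A quick arithmetic check of these figures, assuming the $500 million representative fund size implied by the per-fund estimates above:

```python
# Worked check: a 0.005-0.008% levy increase on an assumed $500m fund.
fund_assets = 500_000_000
for rate in (0.005, 0.008):          # proposed increase, % of assets
    print(f"{rate}% of assets -> ${fund_assets * rate / 100:,.0f} per fund")
# 0.005% -> $25,000 ; 0.008% -> $40,000
```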


Barriers and Risks


Barriers

The success of the policy depends on APRA having a sufficiently large, interdisciplinary workforce to staff the external audit teams. Failure to achieve this would risk the audit teams themselves being biased and thus failing to adequately ascertain whether the algorithms exhibit AB. To mitigate this, the policy should be implemented in three-year phases to accommodate the capacity of APRA’s auditing teams. Participation would depend on funds’ asset sizes, with the first phase covering funds with assets above $20 billion, then those with $5-$20 billion, and finally all other funds.


External audits are more costly and less efficient than internal ones, and without clear timelines, annual audits could cause backlogs. To mitigate this barrier, APRA could allow auditors to set audit frequencies after the initial audit, capped at three years. Furthermore, audits cannot perfectly replicate real-world deployment conditions, which may limit the generalisability of their findings (DRCF, 2020).


Risks

Audits are static: they report insights at a point in time, while the systems they report on are constantly evolving. There is thus a risk that the algorithm behind RoboAdvice changes materially between the audit team validating it as free of AB and its deployment (DRCF, 2020).
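One conceivable control for this drift, offered here only as a sketch rather than as part of the DRCF guidance or this proposal’s mandated process, is to fingerprint the audited artefact so that deployment can verify it still matches what the audit team validated. The file names and workflow below are illustrative assumptions.

```python
# Illustrative drift check: hash the audited model artefact and compare
# it with the artefact actually deployed. Paths are hypothetical.
import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 digest of a serialised model or source bundle."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

if fingerprint("model_at_audit.bin") != fingerprint("model_deployed.bin"):
    print("Model changed materially since audit; re-validation required.")
```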


There is also a risk of backlash. Funds are already mandated to undergo financial audits under the SIS Act, and must pay annual levies to APRA to fund these audits. They may perceive the additional mandated audits as financially and organisationally burdensome.


References

Appay, S., & Jenik, I. (2019, August 1). Running a sandbox may cost over $1M, survey shows. CGAP. https://www.cgap.org/blog/running-sandbox-may-cost-over-1m-survey-shows


APRA. (n.d.). APRA’s objectives. https://www.apra.gov.au/apras-objectives


APRA. (2025, February 27). APRA releases superannuation statistics for December 2024 [Media release]. https://www.apra.gov.au/news-and-publications/apra-releases-superannuation-statistics-for-december-2024


Australian Human Rights Commission. (2020). Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias. Australian Government. https://apo.org.au/sites/default/files/resource-files/2020-11/apo-nid309692.pdf 


Australian Human Rights Commission. (2021). Human Rights and Technology. Australian Government. https://humanrights.gov.au/our-work/technology-and-human-rights/publications/final-report-human-rights-and-technology   


Australian Institute of Health and Welfare. (2024). Older Australians. Australian Government. https://www.aihw.gov.au/reports/older-people/older-australians/contents/income-and-finances


Australian Taxation Office. (2024). How much super to pay. https://www.ato.gov.au/businesses-and-organisations/super-for-employers/paying-super-contributions/how-much-super-to-pay 


ASIC. (2024a). ASIC Annual Report 2023-24. https://download.asic.gov.au/media/b5wldbv0/asic-annual-report-2023-24_chapter-3.pdf 


ASIC. (2024b). Beware the gap: Governance arrangements in the face of AI innovation (Report 798). https://download.asic.gov.au/media/mtllqjo0/rep-798-published-29-october-2024.pdf 


ASIC. (n.d.). Enhanced Regulatory Sandbox (ERS). https://asic.gov.au/for-business/innovation-hub/enhanced-regulatory-sandbox-ers/ 


Bowes, M. (2024, September 27). Financial advice for $88: Super funds launch low-cost tools. Australian Financial Review. https://www.afr.com/policy/tax-and-super/financial-advice-for-88-super-funds-launch-low-cost-tools-20240924-p5kd0g 


Clare, R. (2006). Benefits and costs of regulation of superannuation. https://www.superannuation.asn.au/wp-content/uploads/2023/09/0611-Regulation_paper.pdf 


Department of Industry, Science and Resources. (2019). Australia’s AI Ethics Principles. Australian Government. https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-principles/australias-ai-ethics-principles


Department of Industry, Science and Resources. (2024a). Voluntary AI Safety Standard. Australian Government. https://www.industry.gov.au/publications/voluntary-ai-safety-standard 


Department of Industry, Science and Resources. (2024b). Safe and responsible AI in Australia: Proposals paper for introducing mandatory guardrails for AI in high-risk settings. https://storage.googleapis.com/converlens-au-industry/industry/p/prj2f6f02ebfe6a8190c7bdc/page/proposals_paper_for_introducing_mandatory_guardrails_for_ai_in_high_risk_settings.pdf 


Department for Science, Innovation and Technology & AI Safety Institute. (2025). International AI Safety Report. https://www.gov.uk/government/publications/international-ai-safety-report-2025 


Didenko, A. (2021). A Better Model for Australia’s Enhanced FinTech Sandbox. UNSW Law Journal, 44(3), 1078-1113. https://www.unswlawjournal.unsw.edu.au/wp-content/uploads/2021/09/Issue-443_final_Didenko.pdf 


Digital Regulation Cooperation Forum. (2020). Auditing algorithms: the existing landscape, role of regulators and future outlook. https://assets.publishing.service.gov.uk/media/626910658fa8f523c1bc666c/DRCF_Algorithmic_audit.pdf 


Engler, A. (2021). Auditing employment algorithms for discrimination. Brookings. https://www.brookings.edu/articles/auditing-employment-algorithms-for-discrimination/ 


EU Artificial Intelligence Act. (2024). High-level summary of the AI Act. https://artificialintelligenceact.eu/high-level-summary/ 


FinTech Australia. (2024). RE: FinTech Australia - 2024-25 Pre-Budget Submission. https://drive.google.com/file/d/1Qx5CW4iiJd2CvSLrfH-6bIYCTNls0tbM/view 


Information Commissioner’s Office. (n.d.). What is a DPIA?. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/accountability-and-governance/data-protection-impact-assessments-dpias/what-is-a-dpia/ 


Kostova, T. (2024, August 15). GenAI Vs Robo-Advisors: Considerations For The Financial Industry. Forbes. https://www.forbes.com/councils/forbesbusinesscouncil/2024/08/15/genai-vs-robo-advisors-considerations-for-the-financial-industry/


KPMG. (2021). Algorithmic bias and financial services: A report prepared for Finastra International. https://www.finastra.com/sites/default/files/documents/2021/03/market-insight_algorithmic-bias-financial-services.pdf 


Lee, N., Resnick, P., & Barton, G. (2019). Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms. Brookings. https://www.brookings.edu/articles/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/ 


Raji, I., Smart, A., White, R., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability and Transparency, 33-44.  https://dl.acm.org/doi/abs/10.1145/3351095.3372873 


Sun, R. (n.d.). Global AI Regulation Tracker. https://www.techieray.com/GlobalAIRegulationTracker


The Treasury. (2001). Towards higher retirement incomes for Australians: a history of the Australian retirement income system since Federation. https://treasury.gov.au/sites/default/files/2019-03/round4.pdf 


The Treasury. (2023). Retirement phase of superannuation. https://treasury.gov.au/sites/default/files/2023-12/c2023-441613-dp.pdf 


The Treasury. (2024). Fact Sheet: Ensuring access to quality and affordable financial advice. https://treasury.gov.au/sites/default/files/2024-12/p2024-607305.pdf 


Veritas. (n.d.). Organisations Worldwide Fear GDPR Non-Compliance Could Put Them Out of Business. https://uk.insight.com/content/dam/insight/EMEA/blog/2017/06/GDPR-Infographic-design-final.pdf


Xu, P., Raji, I., Honigsberg, C., & Ho, D. (2022). Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance. Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics and Society, 557-571. https://dl.acm.org/doi/pdf/10.1145/3514094.3534181


The views and opinions expressed by Global Voices Fellows do not necessarily reflect those of the organisation or its staff.
