Utilising Generative AI in Healthcare to Ease Physician Burnout
Singithi Herath, 2024 AI for Good Fellow, Freya Phillips National Scholar.
Executive Summary
Australia's healthcare system faces a critical challenge: a predicted shortage of 10,600 General Practitioners (GPs) by 2031. To address the burden of excessive non-clinical workload and improve GP retention, this paper recommends a policy to safely and ethically integrate Generative Artificial Intelligence (AI) into primary care. AI tools such as AI scribes have proven capable of significantly reducing administrative duties and documentation time, allowing GPs to dedicate more time to patient-centred care. The challenge is that the existing regulatory framework, including the Therapeutic Goods Act 1989, does not provide clear guidance for the clinical, ethical, and professional use of these versatile technologies.
This paper recommends that the Australian Health Practitioner Regulation Agency (AHPRA) and the Medical Board of Australia define clear guidelines for the appropriate use of AI within the existing GP Code of Conduct. This approach leverages AHPRA's established role in professional standards to give GPs the necessary clarity on when AI is suitable for clinical support (such as administrative tasks and diagnostic assistance) and when it is not (such as making final decisions or high-risk assessments). This targeted policy avoids the risks of over-regulation that could stifle innovation, while ensuring patient safety, protecting against potential data bias, and reinforcing GP accountability. The policy is estimated to cost between $700,000 and $800,000 for initial development, with an ongoing annual cost of $90,000 for continuous monitoring and updates. By setting clear professional guidelines, this policy ensures GPs can responsibly adopt AI, thereby improving workflow, reducing burnout, and strengthening the quality of primary care for all Australians.
Problem Identification
Australia is predicted to experience a shortage of 10,600 General Practitioners (GPs) by 2031. While the number of practitioners has risen mildly, demand for services has increased by 58% in recent years (Australian Medical Association [AMA], 2022). Many vacancies have been left unfilled for years due to the undersupply of physicians. Existing GPs are leaving the profession at an alarming rate, and 13% fewer medical students are choosing to enter general practice. This has increased the strain on existing doctors, negatively impacting the quality of care for patients and service delivery. Many GPs are now burdened with an increased workload and are taking on more patients than the recommended caseload of 1,000 patients per doctor (James, 2024). Studies by Beyond Blue show that 32% of Australian doctors suffer from emotional exhaustion, a core indicator of burnout (Baigent & Baigent, 2018). The Health of the Nation report further found that, of 2,000 practising GPs surveyed, 71% reported having experienced burnout in the past 12 months (Toukhsati et al., 2024). GPs are stressed and overworked, with the Australian Medical Association (AMA) warning that, if the problem is left unaddressed, "Australia could face a further exodus of GPs" (AMA, 2023). The effects of clinical fatigue and burnout have caused major problems in healthcare service delivery, with patients experiencing misdiagnosis, mistreatment, and improper care (Toukhsati et al., 2024).
Generative AI systems could be used in the GP office in a variety of ways to alleviate non-clinical task pressure and enhance doctor-patient interactions. This includes clinical documentation tasks, writing case notes, and streamlining health record systems and administrative duties, mitigating burnout caused by excessive non-clinical work (You et al., 2025). Moreover, in remote and rural settings, AI could help facilitate better care by addressing understaffing, giving GPs another resource from which to make sound decisions for the health of their patients (Schneider, 2025).
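To make this concrete, the sketch below shows what one step of an AI-scribe workflow might look like, assuming the OpenAI Python SDK; the model name, prompt, and transcript are illustrative placeholders rather than any endorsed tool, and a real deployment would require de-identification, patient consent, and compliance with Australian privacy law.

```python
# Minimal sketch of an AI-scribe step: drafting a structured case note
# from a consultation transcript. Illustrative only -- the model name,
# prompt, and transcript are placeholders, and any real deployment would
# need de-identification, consent, and privacy-law compliance.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = (
    "Patient reports three days of sore throat and mild fever. "
    "No cough. Examination: tonsillar erythema, no exudate. "
    "Plan: symptomatic care, review in 48 hours if not improving."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever is approved locally
    messages=[
        {"role": "system",
         "content": "Draft a concise SOAP-format case note from the "
                    "consultation transcript. Do not invent findings."},
        {"role": "user", "content": transcript},
    ],
)

draft_note = response.choices[0].message.content
print(draft_note)  # the GP reviews and edits before anything enters the record
```

The key design point is the final line: the output is a draft for GP review, not a finished record, keeping the clinician accountable for what is documented.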
Context
A variety of factors have led to the GP shortage currently occurring in Australia. The first is the ageing population, which increases demand for chronic healthcare services (Agarwal, 2024). As Australians live longer, they are more likely to experience multiple health conditions (comorbidity) and associated disabilities that need ongoing management, increasing the frequency and complexity of GP visits (Australian Institute of Health and Welfare [AIHW], 2014; AMA, 2019). Moreover, the growing complexity of medical cases has further increased the time and attention GPs must devote to each patient (Agarwal, 2024). Second, the challenge of attracting and retaining full-time GPs has contributed to the projected shortage. Younger generations place increased importance on a healthy work-life balance and thus prioritise flexible work arrangements (Shrestha & Joyce, 2011). Moreover, GPs face significantly lower remuneration than other medical professions, a situation worsened by a Medicare rebate that has not kept pace with the rising cost of running a practice (AMA, 2022). Furthermore, junior doctors face pay and entitlement cuts when leaving the hospital system to pursue GP training (AMA, 2022). The combination of these factors has increased pressure on physicians currently working in the field.
Generative AI is an emerging technology that could alleviate issues brought on by the impending GP shortage and help the existing cohort better cope with the increasing workload. Generative AI describes algorithms that can create new content, whether audio, visual, or text. These algorithms use machine learning to mimic human intelligence in performing tasks (Zewe, 2023), continually learning from patterns in data in order to improve. Generative AI systems are trained to generate outputs similar to the data on which they were trained. This allows them to sift through large amounts of data without human help to create content and answer questions and prompts. Currently, Generative AI is being used by consumers in the form of ChatGPT, Microsoft Copilot, Google Gemini, Perplexity, and Poe AI. However, its usage is rapidly expanding beyond general consumer use into more technical and specialised fields (McKinsey and Company, 2023).
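As a toy illustration of "learning from data patterns", the sketch below trains a word-level bigram sampler on a few sentences and then generates text that mimics them. Production LLMs use neural networks over vastly more data, but the underlying idea of modelling training statistics and sampling similar output is the same.

```python
# Toy illustration of the "learn data patterns, generate similar output"
# idea behind generative models: a word-level bigram sampler. Real LLMs
# use neural networks over vastly more data, but the principle is the same.
import random
from collections import defaultdict

training_text = (
    "the patient reports mild fever . the patient reports sore throat . "
    "the doctor recommends rest . the doctor recommends fluids ."
)

# Learn the pattern: which words tend to follow which.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Generate: sample new text that mimics the training statistics.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(follows[word])
    output.append(word)

print(" ".join(output))  # e.g. "the doctor recommends rest . the patient ..."
```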
In the medical field, Generative AI has been used to help interpret medical imaging such as X-rays and to analyse complex genetic and molecular data sets, providing doctors with rapid interpretation of this information (Bhasker et al., 2023). These AI models process vast amounts of data, identify patterns, and generate insights within a condensed time frame. This assists doctors in diagnosis, prediction of patient outcomes, and personalisation of treatment plans, supporting them in making informed decisions significantly faster than traditional methods allow (Khosravi et al., 2024). However, the full capabilities of Generative AI have not been embraced, with the current legislative framework barring its use in more human-centred settings, such as the GP office.
Current Legislation
The Therapeutic Goods Act 1989 regulates and controls medical devices to ensure they are of high quality, safe to use, and work as intended, so that patients receive the best possible outcomes from their use (TGA, 2024). A medical device, as defined by the Therapeutic Goods Act 1989, is "an instrument, apparatus, appliance, software, implant, reagent, material or other article that can be used to diagnose, prevent, monitor, predict outcomes of, treat or ease symptoms of medical conditions, replace or enhance parts of the body, control or support contraception, or examine specimens from the human body" (TGA, 2024).
As Generative AI in GP offices has a wider range of uses than those stated in this definition, the TGA does not adequately cover these situations. This poses a risk: AI that does not meet the definition of a medical device can still be applied, unregulated, in a medical environment. Some technology that uses Generative AI may not be considered a medical device. However, if these Large Language Models (LLMs) are classified as medical devices, the technology would have to undergo rigorous testing, and its producers would have to demonstrate its reliability, appropriateness, and quality for use by Australians in a medical environment. It remains unclear how regulatory measures for AI programs will intersect with clinical practice.
In Victoria, the use of LLMs is strongly discouraged for any clinical use by the Health Service Advisory Board (Moodie, 2023). Liability is a significant concern; the use of an unapproved Generative AI system in clinical decision-making may lead to ethical and legal ramifications for healthcare providers. Privacy concerns are another leading reason for the discouragement of AI in Victoria, as the use of these systems could potentially breach Australian privacy laws and lead to the misuse of data (Victorian Government, 2024).
Banning this technology would prevent doctors from leveraging the full potential of AI and the revolutionary impact it can have in easing their burden. Utilising Generative AI and LLMs should become one part of a plethora of solutions that allow healthcare professionals to better identify illnesses and decrease workflow pressures.
The Medical Board of Australia and the Australian Health Practitioner Regulation Agency (AHPRA) are two other national organisations that govern the role of GPs and lay out the policies and regulations for how GPs should engage in their work. They promote and support safe standards in the industry by publishing a range of standards, codes, regulatory guidelines, and resources. Given their role in maintaining safe practice standards for GPs, utilising these organisations to set up and maintain the standard for AI use in GP offices is a viable solution.
Currently, AHPRA has a code of conduct for medical professionals in Australia, with several listed guidelines applicable to the use of AI, such as ensuring and maintaining patient privacy and personal data, assuming accountability for decisions made, and being transparent with patients about the use of AI. However, gaps remain when this general code of conduct is simply applied to situations where AI is being used in primary healthcare. Though the current code can guide GPs' use of AI, the lack of explicit reference and instruction creates ambiguity for GPs, as there are no clear guidelines on how to incorporate AI into clinical decision-making.
Moreover, existing guidelines on transparency and informed consent have not been updated to address how GPs should inform their patients of AI use. Given the complexities of AI and its data collection and decision-making, explicit guidance on gaining informed consent is necessary to ensure that all parties are appropriately informed of how the technology affects care. Finally, the code does not explain how GPs remain accountable for AI-assisted decisions, nor how accountability should be shared in such decision-making. There are no guidelines on how to navigate conflicting AI and GP recommendations, compounding the lack of a concrete understanding of when and how AI can be appropriately used by GPs.
While AHPRA cannot regulate specific technologies or approve AI models directly, it provides the ethical and professional framework within which GPs must operate. As such, AHPRA could offer essential guidance on the responsible and ethical use of AI, particularly in upholding patient trust and professional integrity. This guidance would complement the TGA's role in approving medical technologies, with both agencies together supporting the legal and ethical responsibilities of GPs. This dual oversight could help ensure that AI use in general practice remains both compliant and principled, while allowing AHPRA to continue assessing practitioners against its established Code of Conduct.
Case Studies
Chinese researchers from Tsinghua University have designed an AI-powered simulation model known as "Agent AI". The technology was created to train doctors through simulations that mimic real-world patient-doctor interactions, generated through GPT-3.5 (Keyue & Qiongfang, 2024). This simulation could also have the potential to connect virtual patients with real-world doctors, cutting down on wait times and connecting rural patients to better healthcare. This demonstrates the capability of AI technology to revolutionise the healthcare industry.
In the US, the Food and Drug Administration (FDA) oversees the regulation of AI medical devices and formulates policies on the application of AI in the medical field. At the state level, California is considering a law concerning automated decision tools (AB311). This law would apply to the use of AI decision-making tools in healthcare and would require those deploying the technology to (a schematic sketch of the record-keeping this implies follows the list):
- Submit annual impact assessments communicating the reasons behind each of the system's decisions, where the subject of the decision is a natural person
- Allow the subject of the decision, if a natural person, the opportunity to have the decision considered by a non-automated system
- Develop safeguards, document the system's limits, and communicate them to users and subjects of the system
- Establish a designated governance/compliance person
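To illustrate what these obligations imply in practice, the sketch below models the record-keeping an AB311-style law would seem to require; the field names are a paraphrase of the bullet points above, not the bill's actual text.

```python
# Illustrative sketch (not the actual bill text) of the record-keeping an
# AB311-style law implies: each automated decision about a natural person
# carries an explanation, a human-review flag, and a named compliance owner.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AutomatedDecisionRecord:
    subject_is_natural_person: bool
    decision: str                 # what the system decided
    reasons: list[str]            # explanation captured for the annual impact assessment
    human_review_offered: bool    # right to a non-automated reconsideration
    documented_limits: list[str]  # known limits communicated to users and subjects
    compliance_officer: str       # designated governance/compliance person
    decided_on: date = field(default_factory=date.today)


record = AutomatedDecisionRecord(
    subject_is_natural_person=True,
    decision="flagged for earlier follow-up appointment",
    reasons=["symptom pattern matched high-priority triage category"],
    human_review_offered=True,
    documented_limits=["not validated for paediatric patients"],
    compliance_officer="practice.governance@example.org",  # hypothetical contact
)
print(record)
```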
This law would act alongside existing privacy and anti-discrimination legislation in California. As a state-level law, AB311 would regulate the use of AI within the state's jurisdiction: its deployment, how decisions are explained, and how individuals are protected. AB311 would operate alongside the FDA, as each addresses a different dimension of AI governance (technical safety versus ethical use).
This case study highlights that the use of AI in clinical settings is both feasible and capable of being ethically and transparently regulated. The proposed California law demonstrates how governments can balance innovation with accountability by ensuring safeguards, transparency, and human oversight in automated decision-making. For Australia, it provides a valuable model for developing regulatory frameworks that support the safe integration of AI into healthcare, protecting patient rights while enabling technological advancement. Moreover, this model demonstrates that federal and state systems can collaborate and operate in parallel with each other to broaden safety measures across multiple dimensions of AI use.
Similarly, Canada’s Pan-Canadian Artificial Intelligence for Health (AI4H) Guiding Principles provide another example of how governments can take a coordinated and values-based approach to AI regulation in healthcare (Government of Canada, 2025). These nationally endorsed principles emphasise person-centred care, transparency, safety, and equity in the use of AI technologies. While not legislation, they demonstrate how shared governance and ethical frameworks can support consistent, responsible AI adoption across jurisdictions. For Australia, this offers a useful model for developing nationally coherent guidelines that align with both federal oversight and state-level healthcare delivery.
Artificial intelligence products are developing at a rapid pace globally, and Australia risks missing out on the benefits of a regulatory environment that keeps pace with the ever-evolving market. Rather than letting regulation remain a barrier, Australia should treat policy reform as an opportunity to take a proactive approach, developing regulatory requirements that ensure the safe and ethical use of AI while allowing for its timely adoption. This would let Australia benefit from AI's advancements, with the technology effectively utilised under robust oversight that protects both patients and healthcare providers.
Policy Options
Option 1. Amend the definition of a medical device, as stated in Section 41BD of the Therapeutic Goods Act 1989, to include general AI technology developed specifically for application in a medical setting.
This would help regulate Generative AI and LLMs that are not currently classified as medical devices. The option expands the medical device definition to include general-use AI models when they are deployed in a clinical setting, allowing such technology to be properly regulated and tested under medical-setting conditions so that it can be used safely.
However, over-regulation of this technology could make it inaccessible to healthcare practitioners. The TGA's regulations and guidelines for approving a technology as a medical device are stringent and may deter developers from pursuing device regulation for AI. Another key consideration is the need for significant upskilling of TGA regulators so they can appropriately assess which AI tools should be approved for clinical use in Australia and determine the requirements developers must meet for regulatory approval. This option focuses on establishing the legal classification and approval process for AI tools entering the medical field.
Option 2. Define when it is and is not suitable to utilise AI in healthcare practice within the guidelines and frameworks set out by AHPRA and the Medical Board of Australia.
This would provide clarity and guidance as to when a GP or healthcare practitioner should use AI as part of treatment and how to do so safely. It allows for ethical considerations and supports GPs in making informed decisions about the situations in which AI usage is appropriate. It also ensures a framework that aligns with legal requirements, protecting healthcare workers from potential issues that could arise.
Determining when AI can and cannot be used is difficult, which is why AHPRA and the Medical Board of Australia would set this out in a framework for GPs; guidelines of this kind would also protect GPs from liability. For example, it would be suitable to use AI in administrative support (such as automating appointment scheduling or triaging patients based on symptoms), in diagnostic assistance, and in monitoring chronic conditions, where algorithms can help track health data. It would not be suitable to use AI for making final clinical decisions without human oversight, for mental health assessments where nuanced judgement is required, or in high-risk procedures and emergency interventions where delays or misinterpretation could lead to serious harm.
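As a schematic illustration only, the sketch below expresses this suitable/not-suitable distinction as a lookup a practice system might consult; the categories paraphrase this paper's examples and are not AHPRA policy.

```python
# Schematic sketch of the suitable / not-suitable distinction described
# above, expressed as a lookup a practice system could consult. The
# categories paraphrase this paper's examples and are not AHPRA policy.
from enum import Enum


class AIUse(Enum):
    PERMITTED_WITH_OVERSIGHT = "permitted with GP oversight"
    NOT_PERMITTED = "not permitted"


GUIDELINE = {
    "appointment_scheduling": AIUse.PERMITTED_WITH_OVERSIGHT,
    "symptom_based_triage": AIUse.PERMITTED_WITH_OVERSIGHT,
    "diagnostic_assistance": AIUse.PERMITTED_WITH_OVERSIGHT,
    "chronic_condition_monitoring": AIUse.PERMITTED_WITH_OVERSIGHT,
    "final_clinical_decision": AIUse.NOT_PERMITTED,
    "mental_health_assessment": AIUse.NOT_PERMITTED,
    "emergency_intervention": AIUse.NOT_PERMITTED,
}


def check_use(task: str) -> AIUse:
    """Default to NOT_PERMITTED for anything the guideline does not name."""
    return GUIDELINE.get(task, AIUse.NOT_PERMITTED)


print(check_use("symptom_based_triage").value)     # permitted with GP oversight
print(check_use("final_clinical_decision").value)  # not permitted
```

Defaulting unlisted tasks to "not permitted" mirrors the cautious posture the guidelines would need while the technology evolves.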
Option 3. Amend the Therapeutic Goods Act 1989 to require developers and providers of healthcare AI decision-making tools to submit annual impact reports.
This amendment would prepare Australia's regulatory landscape for the ethical and safe adoption of AI in clinical settings. It would improve transparency and accountability, promote patient autonomy through opt-out provisions, and ensure ongoing oversight via designated compliance officers. The reports would be submitted to the TGA, providing transparency around AI systems operating in environments with sensitive information. The amendment should also ensure that patients can opt out of AI-led decision-making, that the limitations and capabilities of AI tools are clearly communicated to both users and patients, and that a designated compliance officer is responsible for ongoing oversight. These measures can help prevent over-reliance on AI by reinforcing the role of human judgement in clinical decision-making.
However, the proposal also presents challenges, including increased regulatory and operational burdens for developers and healthcare providers, potential delays in the deployment of new AI tools, and difficulties in clearly assigning accountability when errors occur. Despite these drawbacks, the amendment represents a proactive step toward the safe, ethical, and transparent integration of AI in Australian healthcare. These provisions would ensure that AI is used as part of a broader decision-making process, rather than acting as the sole authority. By mandating transparency, human oversight, and regular review, this legislative approach reduces the risk of errors, ensures accountability, and prevents over-reliance on automated systems. This policy option focuses more on the governance, monitoring and ethical use of AI tools once they have been approved for use in a healthcare setting.
Policy Recommendation
Policy option 2, "Define when it is and is not suitable to utilise AI in healthcare practice within the guidelines and frameworks set out by AHPRA and the Medical Board of Australia," is recommended as it focuses on the practical implementation of AI usage without the risk of over-regulation or of stifling future innovation in the field. Creating clear guidelines on when it is and is not suitable to use AI in primary care ensures that AI usage is ethically considered and safe. This recommendation strikes a good balance between offering adequate protection to both GPs and their patients and avoiding excessive burdens on developers, and it offers clarity on AI regulation, supporting innovation while managing implementation challenges. GPs who want to integrate AI technology into routine practice, such as managing appointments, providing diagnostic support during consultations, and triaging patients, would benefit most from this policy recommendation.
This policy will be implemented within the existing code of conduct for GPs across Australia, adding a specific section on how to appropriately use AI technologies in primary healthcare. The development of this policy will include AHPRA and the Medical Board of Australia, as well as the Royal Australian College of General Practitioners, to create a comprehensive guideline that addresses as many ambiguities as possible.
The implementation of the My Health Record (MHR) governance framework by the Australian Digital Health Agency, as well as the National AI Framework, demonstrates the scale of investment required to establish and govern health technologies in Australia. The Australian Government invested $1.2 billion in the initial implementation of MHR and allocated an additional $374.2 million for ongoing governance in the 2021-22 budget (Australian National Audit Office, 2019). Similarly, the National AI Framework received $124.1 million in funding over four years, as outlined in the 2021-22 budget (Australian Government, 2021).
Benchmarking against these initiatives, this policy is estimated to cost AHPRA between $700,000 and $800,000, with an ongoing annual cost of $90,000 to ensure continual monitoring, updates, and revisions in line with developing technologies. This is a proportionate investment for the scale and target of a narrow policy approach such as this. As AHPRA is a self-funded organisation, a government grant, potentially from the Department of Health, would be required to fund the revisions to the code of conduct. The success of the policy would be assessed through surveys of GPs to gauge how effectively they were able to use AI technologies and whether the guidelines provided sufficient guidance. Patients would also be surveyed to understand whether the changes led to better health outcomes and more effective patient-GP interactions.
This practical policy recommendation would give GPs better, unhindered access to AI technology while giving them clarity as to when its usage is appropriate. This would help ease GP burnout and lift the burden off GPs who struggle to maintain their workload. GPs would gain a technology that minimises time spent on non-clinical tasks, freeing the time and effort to properly treat patients and increase diagnostic accuracy, resulting in better health outcomes.
Risks
There are several risks associated with implementing this policy recommendation. The major risk is that changes made to the code of conduct will be unable to keep up with the fast-paced development of AI technologies, lagging behind and becoming obsolete. This would risk rendering the financial investment in the project wasted.
When developing guidelines for the use of AI technologies in general practice, it is essential to ensure that only AI tools with demonstrable efforts to mitigate data bias are recommended. Given the skewed nature of many data sets used to train AI programs, there is a significant risk that such technologies may exacerbate existing health inequalities, particularly for marginalised groups such as Indigenous communities, who are often underrepresented in health data. To address this, the guidelines should include clear criteria for assessing whether an AI tool has undergone validation to account for data gaps and biases before it can be implemented in clinical decision-making.
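One concrete form such a validation criterion could take is a subgroup performance audit: checking that a tool's accuracy does not degrade for underrepresented groups. The sketch below uses invented numbers purely to illustrate the check.

```python
# Minimal sketch of a subgroup performance audit: compare a tool's accuracy
# across demographic groups before accepting it for clinical support.
# The data here are invented solely to illustrate the check.
from collections import defaultdict

# (demographic_group, tool_prediction_correct) pairs from a validation set
validation_results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in validation_results:
    totals[group] += 1
    correct[group] += ok

accuracy = {g: correct[g] / totals[g] for g in totals}
print(accuracy)  # {'group_a': 0.75, 'group_b': 0.25}

# A simple acceptance criterion: flag the tool if any group's accuracy
# falls more than 10 percentage points below the best-performing group.
gap = max(accuracy.values()) - min(accuracy.values())
if gap > 0.10:
    print(f"FLAG: accuracy gap of {gap:.0%} across groups -- further validation needed")
```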
Additionally, the guidelines should require that GPs receive appropriate training to understand the risks associated with data bias in AI technologies. This training should equip practitioners with the skills to critically assess AI-generated outputs, identify potential inaccuracies, and mitigate unintended consequences rather than becoming overly reliant on AI recommendations. Although some liken AI tools to advanced search engines, it is important to acknowledge that many AI-generated results may lack transparency regarding their data sources, making it difficult for GPs to verify the accuracy of the information.
There is a significant risk regarding the reliability and accountability of clinical recommendations generated by AI technologies. Many AI tools currently available have not been appropriately validated for clinical decision-making. As a result, the claims or recommendations offered by these systems often cannot be substantiated, nor can their sources be clearly identified or verified by the GP. This lack of transparency poses a major risk, as it undermines the GP's ability to ensure the accuracy, safety, and appropriateness of advice provided to patients. Therefore, the guidelines should emphasise the importance of clinical oversight, ensuring that AI recommendations supplement, rather than replace, the clinician's judgement.
Another crucial risk concerns patient data and privacy. Though the code of conduct states how a GP should handle patient privacy and data, this does not ensure that the programs a GP uses follow the relevant laws on patient data in healthcare. A GP therefore needs to assume accountability and minimise the input of sensitive information into AI technologies that could violate privacy laws. In tandem with this, the doctor should carry out conversations about informed consent with the patient, ensuring a full and comprehensive understanding of the role of AI technology and the risks accompanying its usage.
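As a purely illustrative example of minimising sensitive input, the sketch below applies a naive redaction pass before text leaves the practice. Real de-identification is far harder than this (note that the patient's name is not caught) and requires purpose-built tooling; the patterns and example note are invented for illustration.

```python
# Naive illustration of scrubbing obvious identifiers before text is sent
# to an external AI tool. Real de-identification is much harder (names,
# addresses, rare conditions) and needs purpose-built tooling; this sketch
# only shows the principle of minimising sensitive input.
import re

note = "John Citizen, DOB 04/03/1962, Medicare 2123 45670 1, reports chest pain."

patterns = {
    r"\b\d{4}\s?\d{5}\s?\d\b": "[MEDICARE-NO]",  # Medicare-number-like digits
    r"\b\d{2}/\d{2}/\d{4}\b": "[DATE]",          # dd/mm/yyyy dates
    r"\bDOB\b": "[DOB]",                         # date-of-birth label
}

scrubbed = note
for pattern, placeholder in patterns.items():
    scrubbed = re.sub(pattern, placeholder, scrubbed)

print(scrubbed)
# John Citizen, [DOB] [DATE], Medicare [MEDICARE-NO], reports chest pain.
# The name slips through -- exactly why naive redaction is insufficient.
```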
References
Zewe, A. (2023). Explained: Generative AI. MIT News. Retrieved from https://news.mit.edu/2023/explained-generative-ai-1109
Australian Medical Association. (2019). Ageing population will need more medical support. Retrieved from https://www.ama.com.au/media/ageing-population-will-need-more-medical-support
Australian Government. (2021). Direct AI 2021–2022 Budget Measures: Implementation and next steps. https://webarchive.nla.gov.au/awa/20220816114242/https://www.industry.gov.au/data-and-publications/australias-artificial-intelligence-action-plan/direct-ai-2021-22-budget-measures-implementation-and-next-steps
Australian Institute of Health and Welfare. (2014). Ageing and the health system: challenges, opportunities and adaptations. Australia’s health series no. 14. Cat. no. AUS 178. Canberra: AIHW.
Australian Medical Association. (2023). Doctors stressed, burnt out and susceptible to distressing regulatory processes, reports show. https://www.ama.com.au/ama-rounds/31-march-2023/articles/doctors-stressed-burnt-out-and-susceptible-distressing-regulatory
Australian Medical Association. (2022). The General Practitioner Workforce: Why the Neglect Must End. Retrieved from https://www.ama.com.au/articles/general-practitioner-workforce-why-neglect-must-end
Australian National Audit Office. (2019). Implementation of the My Health Record System. https://www.anao.gov.au/work/performance-audit/implementation-the-my-health-record-system
Bhasker, S., Bruce, D., Lamb, J., & Stein, G. (2023). Tackling healthcare’s biggest burdens with generative AI. Retrieved from https://www.mckinsey.com/industries/healthcare/our-insights/tackling-healthcares-biggest-burdens-with-generative-ai
Government of Canada. (2025). Pan-Canadian AI for Health (AI4H) Guiding Principles. Retrieved from https://www.canada.ca/en/healthcanada/corporate/transparency/health-agreements/pan-canadian-ai-guiding-principles.html
Keyue, X., & Qiongfang, D. (2024). China's first AI hospital town debuts. Global Times. Retrieved from https://www.globaltimes.cn/page/202405/1313235.shtml
Khosravi, M., Zare, Z., Mojtabaeian, S. M., & Izadi, R. (2024). Artificial Intelligence and Decision-Making in Healthcare: A Thematic Analysis of a Systematic Review of Reviews. Health Services Research and Managerial Epidemiology, 11. https://doi.org/10.1177/23333928241234863
McGuinness, A. (2025). Legal experts warn patients’ medical data at risk as GPs adopt AI scribes. Australian Broadcasting Corporation. https://www.abc.net.au/news/2025-03-06/concern-for-private-medical-data-as-gps-adopt-ai-scribes/105002674
McKinsey and Company. (2023). The Economic Potential of Generative AI: The Next Productivity Frontier. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier#introduction
Moodie, C. (2023). "Cease immediately": Doctors forbidden from using artificial intelligence amid patient confidentiality concerns. Australian Broadcasting Corporation. Retrieved from https://www.abc.net.au/news/2023-05-28/ama-calls-for-national-regulations-for-ai-in-health/102381314
Royal Australian College of General Practitioners (RACGP). (2022). General Practice: Health of the Nation 2022.
Schneider, J. (2025). AI: The Inevitable Frontier in Rural Healthcare. General Catalyst. Retrieved from https://www.generalcatalyst.com/stories/ai-the-inevitable-frontier-in-rural-healthcare
Shrestha, D., & Joyce, C. M. (2011). Aspects of work-life balance of Australian general practitioners: determinants and possible consequences. Australian Journal of Primary Health, 17(1), 40–47. https://doi.org/10.1071/PY10056
Therapeutic Goods Administration (TGA). (2024). Regulation of software-based medical devices. Retrieved from https://www.tga.gov.au/how-we-regulate/manufacturing/medical-devices/manufacturer-guidance-specific-types-medical-devices/regulation-software-based-medical-devices
Toukhsati, S., Kippen, R., & Taylor, C. (2024). Burnout and Retention of General Practice Supervisors: Prevalence, Risk Factors and Self-Care. Australian Journal for General Practitioners, 53(11/10), 85–90. https://www1.racgp.org.au/ajgp/2024/supplement-december/burnout-and-retention-of-general-practice-supervis.
Victorian Government. (2024). Guidance for the safe and responsible use of generative artificial intelligence in the Victorian Public Sector. https://www.vic.gov.au/guidance-safe-responsible-use-gen-ai-vps
You, J., Dbouk, R., & Landman, A. (2025). Ambient Documentation Technology in Clinician Experience of Documentation Burden and Burnout. JAMA Network Open. Retrieved from https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2837847
The views and opinions expressed by Global Voices Fellows do not necessarily reflect those of the organisation or its staff.