Artificial Intelligence (AI) is increasingly being integrated into healthcare, promising to enhance efficiency, reduce administrative burdens, and improve patient outcomes. Yet why do so many doctors remain sceptical about its adoption?
A recent AusDoc survey of 212 doctors highlights the major concerns among GPs, revealing a “trust gap” between AI’s capabilities and its acceptance in clinical settings.
The most revealing finding was a general reluctance or ambivalence across all age groups towards using AI tools at this stage. Even among the youngest cohort (25–34-year-olds), just under half said they were unlikely to use AI in the next year. It was a similar story with the 35–44s.
The picture differed slightly among the 45–54-year-olds, where only about 10% expressed the same strong disinclination to use these programs, while 40% were ambivalent. In the oldest group (55-plus), just 35% said they were open to using an AI tool anytime soon.
Despite this apparent reluctance by many to adopt the technology, the survey found most respondents viewed AI as a boon for administrative efficiency.
The Promise of AI in Healthcare
AI has been touted as having the potential to revolutionise healthcare in several ways:
- Faster Diagnoses – AI-powered tools can rapidly analyse imaging scans and medical records, allowing doctors to diagnose conditions more efficiently. According to the survey, 67% of GPs believe AI can significantly speed up diagnostic processes. Many GPs highlighted the efficiency afforded by AI, with one summarising the benefits: “Time saving, ability to focus on patient in consult and then be able to quickly generate a succinct summary of the complexity discussed in my consults, and then easily generate documents like care plans, letters to specialists, summary to patients etc.”
- Pattern Recognition in Diagnostics – AI has demonstrated success in analysing medical images, such as identifying anomalies in X-rays and MRIs. 69% of GPs expressed confidence in AI’s potential to assist radiologists.
- Decision Support – AI can assist doctors by providing data-driven insights, improving treatment accuracy. One respondent noted, “AI can enhance decision-making, but should never replace a doctor’s judgement.”
- Predictive Analytics – AI models can analyse vast datasets to predict disease outbreaks and patient deterioration, aiding in preventive care.
- Administrative Efficiency – AI-driven automation can streamline scheduling, billing, and medical documentation, reducing the burden on healthcare professionals and enabling them to focus more on patient care. 76% of respondents viewed AI as beneficial for administrative efficiency. GPs reported easier, more accurate and comprehensive note-taking during consultations, particularly among the over-55s, who were often not proficient in touch-typing. Not every GP was convinced, however, with one commenting: “I prefer workflows that are quick, efficient, and optimal. Not achievable easily with AI”.
Why Doctors Are Hesitant to Trust AI
Several factors may contribute to the scepticism surrounding AI in healthcare:
- Lack of Transparency – Many AI systems function as “black boxes,” making it difficult for doctors to understand how they arrive at recommendations. 71% of doctors expressed concern about AI’s lack of explainability. One respondent stated: “AI recommendations should be like a second opinion, not a final decision maker.”
- Ethical and Legal Concerns – Who is responsible if an AI-driven diagnosis is incorrect? For many doctors, the legal and ethical implications of AI errors remain unclear. 54% of respondents felt there should be clear guidelines on AI liability. A doctor noted: “If AI gets something wrong, who takes responsibility? It’s a legal grey area.”
- Potential for Bias – AI models are only as good as the data they are trained on. If the data is flawed or unrepresentative, AI can produce biased or misleading results. A specialist noted: “Bias in AI could disproportionately affect vulnerable populations if not properly addressed.”
- Data Security and Privacy – AI systems handling sensitive patient data must meet stringent security standards to protect against breaches.
Four Ways AI Can Earn the Trust of Healthcare Professionals
To bridge the trust gap, the survey findings suggest AI developers and healthcare institutions must:
- Enhance Transparency – Provide clear explanations for AI-driven recommendations. A respondent suggested: “AI needs to show its reasoning in a way that doctors can interpret quickly and easily.”
- Maintain Human Oversight – Ensure that AI remains a tool to support, rather than replace, clinical decision-making. One GP stated: “We need AI to assist, not dictate. There must always be a human final check.”
- Improve Data Integrity – Regularly audit AI systems for biases and inaccuracies. “If we can’t trust the data, we can’t trust the AI,” commented one respondent.
- Offer AI Training for Doctors – Educate healthcare professionals on how to use AI effectively and interpret its insights. The survey found that 63% of doctors would be more willing to use AI if they received formal training on its capabilities. One respondent said they would be open to AI if they had “proper” training: “Right now, many of us don’t fully understand its strengths and weaknesses.”
Striking the Right Balance
While AI presents exciting opportunities, healthcare professionals emphasise the need for careful oversight and gradual implementation. The consensus from survey respondents is that AI should be used as a supplementary tool rather than a replacement for human decision-making.
With the right approach, AI may evolve from a controversial tool into a trusted asset in healthcare.
Source:
AusDoc survey ‘AI in healthcare: Your thoughts on the benefits and risks’, February 2025 (n=212)
This article was written with the assistance of AI.