A recent Guardian piece argued that by surrendering parts of healthcare to algorithms, we risk stripping healing of its political, cultural, and human roots. It’s a provocative claim - yet anyone working at the intersection of AI and clinical care will recognise the tension. AI is superb at structure, pattern, and paperwork. It is far less capable when the task requires attunement, nuance, or compassion.
In my work as an AI and Human Intelligence (HI) growth strategist for clinics, I see two truths every day:
- AI is indispensable for solving bottlenecks, reducing administrative burden, and improving consistency.
- Human intelligence is irreplaceable when it comes to empathy, shared decision-making, and moral reasoning.
The challenge for clinicians and practice owners is deciding where these boundaries sit - and how to uphold them.
The Risk of Empathy Erosion
Commentators such as Ewan Morrison have warned of the widening gap between what AI can output and what humans actually need. The concern is less about rogue superintelligence and more about boringly predictable errors from flawed training data. When AI models absorb inaccuracies, cultural biases, or skewed assumptions, they replicate them with confidence.
Recent evidence has shown:
- AI can fabricate scientific citations and misrepresent evidence without signalling uncertainty.
- Patients are already reporting interactions that feel ‘transactional’ when AI handles early triage or routine follow-up.
- Clinicians risk de-skilling as they outsource too much cognitive work to tools that cannot reason contextually.
The outcome is subtle but significant: a gradual erosion of empathy. Not because clinicians care less, but because systems shift their attention away from listening, contextualising, and noticing. And once empathy thins out of a system, it is astonishingly difficult to reinstate.
The Emerging Backlash
At HLTH 2025, the headline number was striking: 86% of health systems now use AI somewhere in the clinical pathway. Yet adoption alone does not equal satisfaction. Many organisations implemented tools under pressure - staffing shortages, regulatory demand, or commercial incentives - before considering the cultural implications.
A recent Washington Post op-ed captured this tension well, calling AI healthcare’s ‘last best hope’ - but only if its deployment remains balanced. In other words, AI must serve patients, not the reverse.
Clinicians echo this sentiment. They recognise that AI can reduce burnout, streamline decision-making, and improve accuracy. But they also express:
- Concern that chatbots and virtual agents are replacing meaningful touchpoints.
- Frustration with poorly implemented tools that create more work.
- Fear that leadership is chasing efficiency metrics at the expense of trust.
What’s emerging is less ‘AI fatigue’ and more AI realism - a desire to move past hype and towards mature, ethical application.
The Most Controversial Frontier: AI in Life-and-Death Decisions
A flashpoint erupted recently around so-called Patient Preference Predictors (PPPs) - algorithms designed to predict what a patient ‘would have wanted’ in an end-of-life situation when they cannot speak for themselves.
On X/Twitter, clinicians, ethicists, and patient advocates fiercely criticised the idea. The objections were clear:
- No model is qualified to overrule families in profoundly human decisions.
- Data sources used for prediction could embed cultural or socio-economic bias.
- Trust in healthcare could be permanently damaged if families felt sidelined by software.
Whether or not PPPs ever become mainstream, the controversy highlights the central question: Where should AI never tread?
For many of us, the line is uncomplicated: AI may advise, inform, and support, but it should never become the decision-maker in emotionally or morally complex scenarios. Those moments require presence, compassion, and shared humanity - qualities not modelled in code.
Where AI Can and Should Help: A Human-Centred, Hybrid Model
If the pitfalls are obvious, so too are the opportunities. When implemented thoughtfully, AI can restore the one thing clinicians desperately need: time.
In our clinic partnerships, we apply a hybrid model grounded in a simple principle: AI handles the tasks; humans handle the touch.
This model includes four practical pillars:
1. Automate the burdens that drain clinicians
AI excels at:
- Clinical documentation
- Routine patient queries
- Scheduling and follow-up prompts
- Basic triage
- Data extraction and summarisation
When clinics introduce effective AI assistants, we often see:
- Shorter patient wait times
- More face-to-face presence from clinicians
- Significant reduction in after-hours administration
- Higher staff satisfaction
One oncology team reported that once routine questions were managed by an AI assistant, nurses could offer more personal check-ins - a shift that patients described as ‘feeling more cared for’.
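By way of illustration, here is a minimal sketch of the kind of routine follow-up automation described above. Everything in it - the Appointment record, the follow-up intervals, the message template - is a hypothetical assumption for this example, not a reference to any specific clinic system; the point is that the AI drafts and a human always approves.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical follow-up intervals per appointment type (illustrative only).
FOLLOW_UP_DAYS = {"post_op_review": 14, "routine_check": 90, "results_review": 7}

@dataclass
class Appointment:
    patient_name: str
    appointment_type: str
    seen_on: date

def draft_follow_up(appt: Appointment) -> str | None:
    """Draft a follow-up prompt for a clinician to review and approve.

    The assistant only drafts; a human signs off before anything is sent.
    """
    days = FOLLOW_UP_DAYS.get(appt.appointment_type)
    if days is None:
        return None  # Unknown type: leave it to the human workflow.
    due = appt.seen_on + timedelta(days=days)
    return (f"Dear {appt.patient_name}, we'd like to book your "
            f"{appt.appointment_type.replace('_', ' ')} follow-up "
            f"around {due.isoformat()}. Please reply to confirm a time.")

if __name__ == "__main__":
    appt = Appointment("A. Patient", "post_op_review", date(2025, 11, 3))
    print(draft_follow_up(appt))
```

Nothing here is clever, and that is the point: the automation sits in the drafting layer, while sign-off and sending remain human steps.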
2. Protect the human domain deliberately
We explicitly ring-fence certain functions for humans:
- Complex decision-making
- Delivering difficult news
- Interpreting patient cues
- Supporting anxiety, grief, or uncertainty
- Value-based discussions
This clarity ensures AI never quietly drifts into roles it shouldn’t occupy.
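One way to make that ring-fence operational is a hard allow/deny check in whatever layer routes incoming tasks. The sketch below is purely illustrative - the category names and the route_task function are assumptions for this example, not part of any particular product - but it shows the key design choice: anything unclassified defaults to a human.

```python
# Categories we deliberately ring-fence for humans (illustrative list).
HUMAN_ONLY = {
    "complex_decision",
    "difficult_news",
    "patient_cues",
    "emotional_support",
    "values_discussion",
}

# Categories an AI assistant may draft or handle, pending human oversight.
AI_ASSISTABLE = {"documentation", "scheduling", "routine_query", "basic_triage"}

def route_task(category: str) -> str:
    """Return who handles a task. Anything unknown defaults to a human."""
    if category in HUMAN_ONLY:
        return "human"
    if category in AI_ASSISTABLE:
        return "ai_with_human_oversight"
    return "human"  # Fail safe: never let AI drift into unclassified work.

assert route_task("difficult_news") == "human"
assert route_task("scheduling") == "ai_with_human_oversight"
assert route_task("something_new") == "human"
```

The fail-safe default is the whole policy in one line: drift happens at the margins, so the margins must route to people.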
3. Measure empathy as rigorously as efficiency
Clinics typically track throughput, appointment volume, cost, and time saved. Rarely do they track the warmth, tone, and emotional impact of their communication. We encourage clients to include:
- Patient-reported empathy scores
- Sentiment analysis of messaging
- Reflection logs from clinicians
- Quality-of-interaction audits
In one pilot where empathy markers were monitored alongside AI-assisted workflows, patient satisfaction in follow-up oncology conversations rose by roughly 40%.
The message is clear: you get the behaviour you measure.
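As a sketch of what measuring empathy alongside efficiency might look like in practice - with entirely hypothetical field names, data, and thresholds - consider a simple monthly check that flags when efficiency improves while patient-reported empathy declines:

```python
from statistics import mean

# Hypothetical monthly records: patient-reported empathy (1-5 scale)
# alongside a standard efficiency metric (average wait in minutes).
months = [
    {"month": "Jan", "empathy_scores": [4.6, 4.4, 4.7], "avg_wait_min": 22},
    {"month": "Feb", "empathy_scores": [4.1, 3.9, 4.0], "avg_wait_min": 15},
]

def empathy_efficiency_report(records):
    """Flag months where waits fell but empathy fell with them."""
    prev = None
    for rec in records:
        emp = mean(rec["empathy_scores"])
        if prev is not None:
            faster = rec["avg_wait_min"] < prev["avg_wait_min"]
            colder = emp < mean(prev["empathy_scores"]) - 0.2  # tolerance band
            if faster and colder:
                print(f"{rec['month']}: efficiency up, empathy down - review workflows.")
        prev = rec

empathy_efficiency_report(months)
```

A dashboard like this will not capture warmth on its own, but it makes empathy a tracked variable rather than an afterthought - which is exactly what changes behaviour.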
4. Design with clinicians and patients, not for them
The most successful AI deployments involve co-creation. When clinicians shape the workflow and patients comment on the tone, the technology becomes a support, not a threat. Adoption rises, resistance falls, and culture stabilises.
The Real Future of Medicine: AI + HI, Not AI vs HI
The fear of dehumanised care is understandable, but it is not inevitable. Empathy doesn’t disappear because technology arrives; it disappears when design decisions ignore human needs.
If we choose to build systems that honour human intelligence - not overshadow it - then AI becomes an extraordinary ally:
- A guardian of clinician time
- A safety net for errors
- A partner in personalisation
- A catalyst for more humane interactions
But that future depends on intentionality. The question is not ‘Will AI change healthcare?’ - it already has.
The question is: Will we shape AI to strengthen human connection, or allow it to erode it?
Clinicians and practice owners are uniquely positioned to answer that.