
Eduardo Ugalde
General Director & Founding Partner. Business Strategy | Strategic Marketing & Advertising | Commercial Excellence & Operations | Business Transformation (Digital Evolution) | Commercial Project Management

Across Europe, and globally, healthcare systems are under pressure: aging populations, rising chronic and complex conditions, workforce shortages, and increasing costs. The promise of AI is to relieve some of that pressure: faster diagnosis, more efficient workflows, better outcomes.
Yet, despite a large number of AI tools already on the market, their deployment in real clinical practice “remains limited”.
A recent study for the European Commission (prepared by PwC EU Services EEIG and Open Evidence) takes a hard, structured look at why. It identifies four families of challenges that repeatedly block AI deployment in hospitals and health systems:
- Technological and data challenges
- Legal and regulatory challenges
- Organisational and business challenges
- Social and cultural challenges
Reading this study, I see a very clear message:
We talk a lot about the destination of AI in healthcare. We talk much less about the journey, and who actually pays for it, runs it, supervises it, and lives with its consequences.
In this article, I am not questioning whether AI can help healthcare. It clearly can, and the report includes compelling use cases. I am questioning how we are deploying it, what we are ignoring in the process, and what questions leaders should be asking before they sign the next AI contract.
The reflections in this article stem directly from my analysis of the study commissioned by the European Commission; they represent my own interpretation of the evidence presented.
1. HIDDEN COSTS: Are we budgeting for the tool… or for the whole system around it?
The report is very explicit: deploying AI in healthcare is expensive, and the cost is not just the license fee.
It highlights that AI deployment and maintenance require significant investment in personnel, infrastructure and technology to test, validate, implement, and improve AI tools.
It also notes:
- Underinvestment in IT infrastructure creates interoperability problems and increases the demand on human resources, for example because staff have to enter data manually in parallel with digital systems.
- High deployment and licensing costs, combined with unclear reimbursement mechanisms, limit the ability to scale AI beyond large university hospitals with research grants.
- Lack of published evidence of added value makes it harder to attract funding or build reimbursement frameworks.
According to the surveys in the study, lack of funding and financial incentives is seen as a significant challenge by:
- 62% of healthcare professionals
- 50% of hospital representatives
- 61% of AI developers
If so many people across the ecosystem say money and financing models are a core barrier, why do so many AI narratives still behave as if “buying the model” is the main decision?
Questions to be asked before committing to AI:
- Have we budgeted not only for licences, but also for additional IT infrastructure, integration work with legacy systems, ongoing monitoring and updates, and the training and backfilling of clinical staff time?
- What happens to our operating budget after year one, when the pilot money or innovation grant disappears?
- Do we really understand the total cost of ownership of this AI system in our local context, or are we buying based on idealised case studies from very different hospitals? (A simple illustration follows this list.)
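To make the total-cost-of-ownership question concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is a hypothetical placeholder, not a number from the study; the only point it illustrates is that recurring and hidden costs can dwarf the licence fee over a multi-year horizon.

```python
# Hypothetical total-cost-of-ownership (TCO) sketch for one AI deployment.
# All figures are illustrative placeholders, not data from the study.

YEARS = 5

one_off = {
    "integration_with_legacy_systems": 120_000,
    "it_infrastructure_upgrades": 80_000,
    "initial_validation_and_testing": 60_000,
}

recurring_per_year = {
    "licence_fee": 50_000,
    "monitoring_and_updates": 40_000,
    "training_and_clinical_backfill": 35_000,
}

tco = sum(one_off.values()) + YEARS * sum(recurring_per_year.values())
licence_only = YEARS * recurring_per_year["licence_fee"]

print(f"Licence fees over {YEARS} years: {licence_only:,} EUR")
print(f"Total cost of ownership:      {tco:,} EUR")
print(f"Licence share of TCO:         {licence_only / tco:.0%}")
```

In this invented scenario, the licence accounts for well under a third of the five-year cost. The exact ratio will differ in every hospital, which is precisely why the total-cost question has to be answered locally.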
2. POST-DEPLOYMENT REALITY: Who owns the “Forever Work”?
One of the most important, and often ignored, parts of the report is about post-deployment monitoring and maintenance.
The study states clearly that AI deployment is an ongoing process requiring continuous monitoring and adaptation. AI performance can decline over time due to changes in local data, infrastructure, software updates, or patient demographics.
Without effective monitoring to detect this “drift”, healthcare providers may rightly hesitate to trust AI for critical decisions, as undetected degradation can directly affect patient safety.
The study recommends that:
- Ongoing performance oversight should be part of local AI governance.
- Monitoring should be tailored to risk, with clear baselines for input data so drift can be detected and trigger re-evaluation (a minimal sketch of such a check follows this list).
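As an illustration of what a "clear baseline for input data" could look like in practice, below is a minimal drift check in Python, comparing live inputs against a stored baseline sample with a two-sample Kolmogorov-Smirnov test. This is a sketch under strong assumptions (a single numeric input feature, a simulated reference sample, a hypothetical alert threshold), not a production monitoring system; real deployments would track many features and model outputs, and route alerts into clinical governance.

```python
# Minimal input-drift check: compare live input data against a stored
# baseline sample using a two-sample Kolmogorov-Smirnov test.
# Illustrative sketch only; data, threshold, and feature are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)

# Baseline: the input distribution captured at validation time (simulated).
baseline = rng.normal(loc=100.0, scale=15.0, size=5_000)

# Live data: the same measurement, but the population has shifted (simulated).
live = rng.normal(loc=110.0, scale=15.0, size=1_000)

statistic, p_value = ks_2samp(baseline, live)

ALERT_THRESHOLD = 0.01  # risk-tailored, set per use case and per feature
if p_value < ALERT_THRESHOLD:
    print(f"Drift suspected (KS={statistic:.3f}, p={p_value:.2e}): trigger re-evaluation.")
else:
    print(f"No drift detected (KS={statistic:.3f}, p={p_value:.2e}).")
```

The statistics are the easy part; the study's point is that even a check this simple has to be owned, funded, and wired into a process that can actually trigger retraining or withdrawal.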
Yet only 35% of surveyed hospital representatives reported having mechanisms in place to monitor AI performance after deployment.
In other words, we are introducing tools that can silently drift, and in most cases we do not yet have robust structures to catch it.
Questions to be asked:
- Who, in our organisation, is accountable for ongoing AI performance after go-live (not just for the initial validation)?
- Do we have the people, processes, and data pipelines to actually monitor drift and trigger retraining or withdrawal when needed?
- How are we funding and staffing this “forever work”? Or are we implicitly assuming that the vendor will somehow do it for us, on our data, in our context, at no real cost?
3. WORKFORCE AND SKILLS: Are we saving time, or just moving the work?
The narrative that “AI will free clinicians’ time” is very powerful. The study recognises that potential, particularly in use cases around radiology and clinical documentation.
But it also shows that new work is being created: tagging data, supervising outputs, managing AI systems, and educating staff.
Some key points:
- The report highlights the low level of digital health literacy among healthcare providers and the public as a significant barrier.
- 43% of healthcare professionals, 58% of hospital representatives, and 27% of AI developers pointed to limited digital literacy as a challenge.
- 59% of patients and patient associations expressed concern about lack of competence among healthcare professionals to use AI.
- The study notes that using AI without adequate training not only limits value but introduces patient safety risks.
- It also describes the need for new roles and teams: multidisciplinary AI teams, new digital leadership positions (e.g. Chief AI Officer), and ongoing human oversight roles.
At the same time, there are mixed feelings about job security and overreliance on AI:
- Only about 10 to 12% of surveyed healthcare professionals and hospital representatives selected job security as a top challenge, so fear of immediate replacement is not dominant.
- But 59% of patient respondents, and significant shares of healthcare professionals and hospital representatives, expressed concerns about overreliance on AI and loss of critical thinking.
So, we are not just talking about “labour savings”. We are talking about shifts in skills, tasks, and responsibilities, some of which have not yet been fully recognised in workforce planning.
Questions I would put on the table:
- Where will the extra human work required by AI sit: data preparation, supervision, validation, training, governance?
- Have we counted those hours, roles, and salaries in our business cases, or are they treated as “invisible” internal absorption by already-stretched clinical teams?
- How will we prevent deskilling and overreliance, particularly among younger clinicians who grow up with AI tools?
- Are we designing AI to augment clinicians, or quietly expecting it to replace them without being honest about that risk and its consequences?
4. TRUST, TRANSPARENCY, AND THE DOCTOR-PATIENT RELATIONSHIP
The study spends significant time on trust, both from healthcare professionals and from patients.
On transparency and explainability:
- 41% of healthcare professionals, 58% of hospital representatives, and 38% of AI developers see lack of transparency and explainability as a challenge to deployment.
- 59% of patient respondents expressed concern about the lack of information on how AI systems make decisions.
On the doctor-patient relationship:
- 56% of patients and patient associations were concerned about losing the human relationship with their doctor due to AI.
The report explicitly warns that extensive AI integration could intensify feelings of alienation and reshape the relationship into a more “consumer-provider” model if not carefully managed.
The concerns documented here are not emotional reactions; they emerge from structured consultations across clinicians, patients, and hospital leaders.
Questions to be asked:
- When we deploy AI into clinical workflows, how are we measuring its impact on trust, not just on throughput or turnaround time?
- Do patients and clinicians truly understand when AI is involved, and how? Or are we creating a “black box” where nobody outside the vendor can fully explain what is happening?
- Are our communication and consent processes evolving as fast as our AI deployments?
5. EVIDENCE GAPS: Are we learning from success stories only? What about failures?
One very honest part of this study is its limitations section. The authors acknowledge that:
- The literature may over-represent successful deployments and under-represent unsuccessful or highly problematic ones.
- Stakeholders consulted were necessarily those who already had some exposure to AI deployments, which introduces bias.
- Many accelerators come from “advanced regions” outside the EU; not all practices will transfer cleanly into European health systems.
- Even basic indicators about real-world use of AI-enabled medical devices are hard to obtain; existing databases do not robustly track actual deployment in practice.
These limitations do not diminish the value of the study; they highlight the structural difficulty of measuring AI performance at scale, which in itself is an important insight.
In other words: We do not yet have a complete, unbiased picture of how AI is performing in real healthcare environments, at scale, in day-to-day practice today.
If our public evidence base is skewed towards success, and our monitoring systems are still under-developed, that should make us cautious about aggressive promises.
Questions I would invite business leaders, healthcare professionals, policymakers, and investors to reflect on:
- What proportion of our AI narrative comes from independent, long-term evaluations, versus vendor marketing and early adopter case studies?
- Are we systematically capturing failed or abandoned deployments, and using those lessons to improve our next decisions?
- Before pushing for “AI everywhere”, are we honest about the fact that even regulators and researchers say data on actual deployment is incomplete?
Therefore… Should we use AI in healthcare? My answer is YES… but only with eyes wide open!
The same study that surfaces all these challenges also points to accelerators and good practices:
- Investing in interoperable infrastructure and data standards.
- Establishing local performance testing and post-deployment monitoring through AI hubs, sandboxes, and assurance labs.
- Creating clear financing and reimbursement models based on demonstrated added value, not hype.
- Involving end-users from the start, building multidisciplinary teams, and investing seriously in digital health literacy for both professionals and patients.
I am supportive of AI in healthcare and the life sciences, but only when:
- We design for the journey, not just the destination.
- We count all the costs (direct, indirect, human, infrastructural).
- We are transparent about what is not working, not only what looks good on a slide.
- We treat clinicians and patients as partners, not as obstacles.
As professionals, we have the responsibility to ensure that AI is introduced safely, realistically, and with full awareness of its operational implications.
AI can absolutely help healthcare, but only when we acknowledge the real operational, organisational, and human challenges documented in real hospitals today. The study shows that the barriers are not theoretical; they are already shaping deployments on the ground. Ignoring them will only slow down the very transformation we hope to accelerate.
An invitation…
If you are a clinician, hospital manager, policymaker, business leader, vendor, or investor involved in AI transformation business cases for healthcare or the life sciences, I would invite you to reflect on three simple questions before your next AI decision:
- What evidence, beyond marketing, do you have that it works in a context similar to yours, over time?
- Have you mapped the real process and resources required from “idea” to “stable routine use”, including monitoring, governance, and human oversight?
- Who will carry the risk (clinical, financial, reputational) if this does not perform as expected, or drifts silently after go-live?
If you cannot answer these questions honestly, maybe the problem is not that “healthcare is slow to adopt AI”.
Maybe the problem is that the AI industry, the market, or we ourselves… are still selling a destination without acknowledging the road.
My objective is to keep sharing and discussing real-world, evidence-based experiences (successful or not; good or bad), so that AI in healthcare and the life sciences can evolve in a way that is safe, sustainable, and genuinely useful for patients, professionals, businesses, and the world.
STUDY ON THE DEPLOYMENT OF AI IN HEALTHCARE BY THE EUROPEAN COMMISSION, FINAL REPORT 2025:
https://health.ec.europa.eu/publications/study-deployment-ai-healthcare-publications-office-eurep_en


