
AI in healthcare: Adoption grows, but physician concerns linger

AI use among physicians has surged from 38% in 2023 to 72%, yet only 37% report being more excited than concerned, citing liability and privacy issues.

Artificial intelligence (AI) is rapidly transforming industries across the board, and healthcare is no exception. A recent survey conducted by the American Medical Association (AMA) highlights significant growth in AI adoption in healthcare, particularly among physicians. But while utilization is on the rise, a sense of skepticism still clouds the enthusiasm of many healthcare professionals.

AI adoption in healthcare skyrockets

The AMA's latest survey shows that AI use among physicians has surged dramatically since 2023, when only 38% of doctors used AI tools in their practice; today, that figure stands at 72%. This sharp increase signals a growing reliance on AI to streamline administrative tasks and alleviate some of the most significant burdens physicians face.

Key areas where physicians have embraced AI include:

  • Summarizing medical research and care standards
  • Generating discharge instructions and care plans
  • Drafting progress notes and documenting billing codes
  • Scribing visit notes

Much of this adoption stems from AI's ability to simplify tedious yet essential documentation tasks, reducing the long hours doctors typically spend on administrative work after patient visits.

Why AI adoption is growing

Physicians who have implemented AI often cite its ability to boost efficiency and reduce burnout. Administrative and clerical duties have long been a major pain point for healthcare providers, and AI has stepped in as a valuable tool to alleviate these challenges. For example, generative AI systems can prepare clinical notes more quickly, allowing physicians to focus on patient care.

However, as the industry increasingly integrates this technology, concerns continue to weigh heavily on physicians' minds.

Significant concerns remain

Despite its growing utility, AI in healthcare raises a variety of issues. The AMA survey reveals that although 72% of physicians now use AI, only 37% report being more excited than concerned about its implementation. This disconnect highlights ongoing challenges related to trust and regulatory frameworks.

Top physician concerns about AI

  1. Patient privacy: Doctors have serious concerns about the confidentiality of patient data used in AI tools. They worry that sensitive information could be vulnerable to misuse by vendors or other parties.

  2. Liability frameworks: Healthcare remains an inherently high-risk field, and many physicians want clarity around liability in cases where AI provides incorrect suggestions. If an AI system leads to poor clinical outcomes, who bears responsibility—the doctor or the technology provider?

  3. Skill loss risk: Increased reliance on AI in clinical workflows could diminish physicians' manual expertise over time. For new doctors entering the field, this raises questions about how they will develop critical skills and clinical judgment.

  4. AI bias and efficacy drift: Physicians are wary of inaccuracies stemming from biased AI algorithms. Moreover, AI models can experience "drift," where their efficacy wanes over time due to outdated training data, posing risks to patient care.

Bridging the disconnect between adoption and unease

The AMA survey offers insights into what physicians need to feel more confident about adopting AI more fully. These measures include:

  • Robust liability frameworks: Doctors want assurance that their careers won't be jeopardized by a misstep from AI tools. They seek legal protections while maintaining ethical and professional integrity.
  • Continuous monitoring and updates: Healthcare professionals emphasize the importance of ongoing evaluation of AI tools to ensure accuracy and reliability.
  • Comprehensive training: Many physicians prefer incorporating AI training into existing electronic health record (EHR) systems or practicing with simulated tools in non-critical environments.
  • Involvement in decision-making: Physicians want a greater voice in decisions about AI implementation, ensuring tools meet their needs and optimize safety for patients.

The role of governance and oversight

With federal oversight of healthcare AI lagging, efforts to regulate this space have largely fallen to individual states, industry groups, and health systems. Organizations like the National Institute of Standards and Technology (NIST) and the AMA have proposed frameworks to guide responsible implementation.

Current governance measures include:

  • AI governance committees: Prominent health systems like Mayo Clinic and Cleveland Clinic lead the charge with governance committees that consist of representatives from IT, administration, leadership, finance, and medical staff. These committees evaluate emerging AI tools, draft implementation protocols, and assess associated risks.
  • Guideline frameworks: NIST's AI risk-management framework and the AMA's "Trustworthy Augmented Intelligence" provide a foundation for assessing AI risks, patient privacy concerns, and vendor selections.

As AI continues to evolve, these governance approaches will need to adapt. For instance, broad generative AI systems and narrowly scoped point solutions may require different oversight mechanisms as capabilities expand.

Practical takeaways for healthcare leaders

From the AMA survey, healthcare leaders can glean meaningful steps to foster physician confidence in AI:

  • Invest in education and training to ensure physicians feel equipped to use AI effectively.
  • Build robust governance frameworks with clear policies to address liability, efficacy monitoring, and patient privacy.
  • Promote physician involvement in selecting and implementing these tools to balance technological capabilities with the realities of clinical practices.

Addressing these areas is critical as the industry moves into an era where AI capabilities will only grow stronger and more integrated into patient care.

Why trust remains foundational

One central theme stands out: For AI to succeed in healthcare, trust—between physicians, technology, and patients—must be prioritized. While AI offers undeniable benefits, its integration into clinical workflows must be carefully managed to maintain patient relationships, uphold ethical standards, and reduce risks.

AI's ability to reduce administrative loads is a clear win, but without addressing liability, privacy, and trust issues, its potential remains underutilized. The next steps for healthcare leaders and policymakers involve creating an environment where AI's strengths don't come at the expense of security, confidence, or human expertise.

With careful coordination between healthcare professionals, developers, and regulatory bodies, AI can more seamlessly transition from an emerging tool into a trusted ally in clinical practice.
