Analysis

Shadow AI in Healthcare: Why Security Teams Must Adapt Instead of Resist

April 11, 2026 01:50 · 5 min read

An Unstoppable Force Meets an Underprepared Industry

The healthcare sector is quietly grappling with a security problem largely of its own making: shadow AI. Physicians, nurses, and other clinicians across the industry are reaching for unsanctioned artificial intelligence tools and chatbots to claw back precious minutes in environments where time saved can translate directly into lives saved. The trouble is that security teams cannot defend against threats they cannot see — and right now, a significant portion of AI activity in hospitals and medical facilities is completely invisible to those responsible for protecting it.

When healthcare workers use personal devices, unvetted platforms, or public large language models (LLMs) to assist with clinical tasks, they inadvertently introduce new vulnerabilities, expand attack surfaces, and risk funneling highly sensitive protected health information into unmanaged environments. The downstream consequences include data leaks, potential breaches, and recovery nightmares when ransomware strikes.

A Problem Highlighted at RSAC 2026

Joe Izzo, MD, chief medical information officer for San Joaquin General Hospital, addressed the shadow AI challenge directly during his presentation at the RSAC 2026 Conference last month. He outlined how healthcare professionals routinely adopt AI tools to assist with dosing calculations, information retrieval, medical searches, clinical summaries, and even billing-cycle management.

Izzo was careful to note that many of these tools are not inherently dangerous, but their unvetted, ungoverned use creates heightened security challenges — particularly when a hospital is already struggling through ransomware recovery and managing operational chaos. Raising awareness and promoting secure AI usage, he argued, are essential steps to ensure those moments of crisis are not made worse by invisible technological sprawl.

Visibility Gaps and Unlimited Blast Radii

Doug Merritt, CEO of Aviatrix, frames shadow AI as a twofold threat. First, it creates a significant visibility gap for security teams who have no insight into what tools are running inside their environments. Second, it generates workloads with effectively unlimited blast radii, largely because AI tools — and AI agents in particular — typically require broad, elevated privileges to function.

Merritt told Dark Reading that AI infrastructure is already insufficient in some healthcare settings, and shadow AI only compounds the underlying weakness. He emphasized that healthcare environments "hold the most sensitive data in any industry," making the stakes uniquely high.

The Pressure to 'Use AI, Use AI'

Shadow AI adoption is accelerating partly because burnt-out healthcare professionals are under relentless pressure, and partly because organizational leadership across industries — including healthcare — is actively urging employees to embrace AI for productivity gains. The tension arises when workers bring their own tools into the environment without informing or registering them with security teams, muddying asset visibility and undermining the organization's cybersecurity posture.

Merritt acknowledged the contradiction inherent in his own position: "With my own employees, too, I'm badgering them, 'Use AI, use AI. [But] if you want to bring your own tools, register them in our domain set.'" He does not blame healthcare workers for seeking relief from administrative burdens, rapid clinical documentation needs, and other repetitive tasks. Patient care, he noted, remains the unambiguous top priority.

"Try telling those folks you can't use the tool that saves these guys 30 minutes to an hour per shift so they can spend more time with patients," Merritt said. "It doesn't make any sense at all."

What the Data Shows

Research from global infotech company Wolters Kluwer, detailed in the company's shadow AI healthcare report, underscores just how widespread shadow AI adoption has become in healthcare settings.

Izzo also flagged a growing trend where AI vendors now market their products directly to physicians at medical conferences, sometimes encouraging clinicians to sign individual agreements with them — bypassing hospital policies and governance frameworks entirely. These agreements, he warned, typically place all liability on the individual physician.

"Surprise, those agreements typically put all the onus entirely on physicians, but it is very tempting," Izzo said. "Especially because typically physicians and nurses and clinical staff aren't doing this to be evasive. They want to be more efficient."

His recommendation: engage directly with clinicians to understand their workload pain points and identify where sanctioned, vetted tools can genuinely improve their day-to-day experience.

Denial Is No Longer a Strategy

Jeremy Banon, CEO and founder of The Cyber Health Company, put it plainly: organizations that continue to deny or ignore shadow AI activity are choosing a losing strategy. The bring-your-own-device trend has made shadow AI adoption increasingly seamless, and the productivity benefits are too compelling for prohibition to be effective.

"It's important for companies to not bury their heads in the sand," Banon told Dark Reading. He recommended that leadership develop a comprehensive enterprise AI plan and partner with a vendor capable of implementing proper security and privacy controls suited to the organization's specific use cases. He also advocated for patient opt-in mechanisms whenever a new AI tool is introduced into clinical workflows.

Merritt echoed that framing. Asking how to stop employees from using unapproved AI tools is, in his words, a "losing question" — the tools are too accessible and too productive to eliminate through policy alone. Business pressure to adopt AI is simply too intense.

Containment Over Prohibition

The consensus among experts is that the conversation needs to shift from prevention to containment. Organizations should operate under the assumption that shadow AI tools are already running somewhere within their environments and focus their energy on limiting the blast radius when something goes wrong.

Merritt described the goal as finding ways to "bubble-wrap AI workloads so they're allowed to be there, but you see exactly who they're communicating with and what's going out." He pointed to a workload zero-trust policy stance as a more practical and achievable framework than outright prohibition.
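
To make the "bubble-wrap" idea concrete, the sketch below shows one way an egress check around an AI workload might look: destinations on an approved list pass through, and anything else is surfaced for review rather than silently allowed or hard-blocked. The destination names, the allowlist contents, and the connection-record format are assumptions made for illustration only, not any vendor's actual policy engine.

```python
from dataclasses import dataclass

# Hypothetical allowlist of egress destinations an AI workload may reach.
# A real deployment would source this from policy, not a hard-coded set.
APPROVED_DESTINATIONS = {
    "api.approved-llm.internal",   # assumed name for a sanctioned, vetted model endpoint
    "ehr-gateway.hospital.local",  # assumed internal EHR integration point
}

@dataclass
class EgressEvent:
    workload: str      # e.g., "clinical-summary-bot"
    destination: str   # hostname the workload tried to reach
    bytes_out: int     # payload size, useful for spotting bulk data exfiltration

def evaluate_egress(event: EgressEvent) -> str:
    """Allow traffic to approved destinations; flag everything else for review."""
    if event.destination in APPROVED_DESTINATIONS:
        return "allow"
    # Unknown destination: the workload keeps running ("bubble-wrapped"),
    # but the connection is surfaced to the security team instead of passing unseen.
    print(f"[shadow-ai-review] {event.workload} -> {event.destination} "
          f"({event.bytes_out} bytes)")
    return "review"

if __name__ == "__main__":
    evaluate_egress(EgressEvent("clinical-summary-bot", "api.public-chatbot.example", 48_213))
```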

Effective discovery mechanisms — the ability to detect what AI tools are operating, where, and with what level of access — are essential first steps. Paired with containment strategies that restrict data exposure and limit privilege escalation, healthcare organizations stand a far better chance of managing the risk without sacrificing the clinical efficiency gains that make these tools so appealing in the first place.
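
As a rough sketch of what such a discovery pass could look like, the snippet below scans proxy-style log records for traffic to a watchlist of public AI service domains and tallies which internal hosts are using them. The log format and the domain list are assumptions for illustration; actual discovery would draw on whatever DNS, proxy, or CASB telemetry the organization already collects.

```python
from collections import Counter

# Assumed watchlist of public AI/chatbot domains; a real deployment would
# maintain this from threat-intel feeds or CASB catalogs.
PUBLIC_AI_DOMAINS = {"chat.example-llm.com", "api.example-ai.net"}

def discover_shadow_ai(log_lines):
    """Count, per internal host, traffic to watched AI domains.

    Each log line is assumed to look like: "<timestamp> <src_host> <dest_domain>".
    """
    hits = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) < 3:
            continue  # skip malformed records
        src_host, dest_domain = parts[1], parts[2]
        if dest_domain in PUBLIC_AI_DOMAINS:
            hits[(src_host, dest_domain)] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "2026-04-11T01:50Z ward3-workstation chat.example-llm.com",
        "2026-04-11T01:51Z ward3-workstation chat.example-llm.com",
        "2026-04-11T01:52Z billing-laptop api.example-ai.net",
    ]
    for (host, domain), count in discover_shadow_ai(sample).items():
        print(f"{host} reached {domain} {count} time(s)")
```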

A Security Posture Built for Reality

Shadow AI in healthcare is not a temporary phenomenon that stricter policies will resolve. It is a reflection of structural pressures that will only intensify as patient volumes grow, administrative burdens expand, and AI capabilities improve. The security community's task is not to fight human nature but to build frameworks resilient enough to accommodate it — ensuring that the tools clinicians rely on don't become the vulnerabilities attackers exploit.


Source: Dark Reading
