Tags: hipaa-compliance, shadow-ai, healthcare-cybersecurity, ai-governance, protected-health-information

The Hidden HIPAA Risk in Your AI Strategy: Shadow AI and Employee Data Exposure

IPS0 Team

Your Biggest HIPAA Threat Isn't Hackers — It's Your Own Staff Using ChatGPT

Healthcare organizations have spent decades building firewalls, encrypting databases, and training employees on phishing awareness. But a quieter, more pervasive compliance threat has emerged: employees feeding protected health information (PHI) into generative AI tools and personal cloud accounts without authorization.

A May 2025 study by Netskope found that healthcare workers routinely expose sensitive patient data by uploading information to generative AI platforms and personal cloud storage services. This isn't malicious behavior — it's well-intentioned staff trying to work faster, draft clinical notes, summarize patient histories, or automate repetitive tasks. But from a HIPAA perspective, every unauthorized upload is a potential breach.

As proposed HIPAA regulation updates move forward in 2026, healthcare IT leaders need to confront this "shadow AI" problem head-on — before regulators do it for them.

What Is Shadow AI and Why Should You Care?

Shadow AI refers to the use of artificial intelligence tools, particularly generative AI like ChatGPT, Google Gemini, or Claude, by employees without the knowledge, approval, or oversight of their IT department. In healthcare, this puts staff on a direct collision course with HIPAA's Privacy and Security Rules.

Real-World Scenarios That Create Liability

  • A clinician pastes a patient encounter summary into a consumer AI chatbot to generate a referral letter. That data now resides on a third-party server with no Business Associate Agreement (BAA) in place.
  • An administrative assistant uploads a billing spreadsheet containing patient names and diagnosis codes to a personal Google Drive to work from home.
  • A researcher uses a free-tier AI tool to analyze de-identified data that, combined with other inputs, could be re-identified.

None of these employees intended to violate HIPAA. All of them did.

The 2026 HIPAA Updates Target AI Directly

The proposed HIPAA regulatory changes expected to advance in 2026 represent the most significant overhaul in years. Two provisions are especially relevant to the shadow AI problem:

  • Elimination of the "addressable" safeguard distinction. Under current rules, some security measures are "addressable" rather than "required," giving organizations flexibility. The proposed changes would make virtually all safeguards mandatory, removing the gray area that some organizations have used to justify weaker AI governance policies.
  • Mandatory annual risk assessments that explicitly include AI systems. Organizations will need to document every AI tool that touches PHI, including tools employees may be using informally. If you can't inventory it, you can't assess it, and you can't comply. (A sketch of what such an inventory record might capture follows this list.)
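What might that inventory look like in practice? The sketch below is one minimal way to structure a record per AI tool. The field names and review logic are our own illustrative assumptions, not language from the proposed rule.

```python
# A minimal sketch of an AI tool inventory record for annual risk
# assessment documentation. Field names are illustrative assumptions,
# not requirements drawn from the proposed rule text.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    name: str                   # e.g. "ChatGPT (consumer tier)"
    vendor: str
    sanctioned: bool            # approved through IT review?
    baa_in_place: bool          # signed Business Associate Agreement?
    phi_exposure: str           # "none", "incidental", or "routine"
    departments: list[str] = field(default_factory=list)
    last_assessed: date | None = None

    def needs_review(self) -> bool:
        # Flag anything touching PHI without a BAA, or never assessed.
        return (self.phi_exposure != "none" and not self.baa_in_place) \
            or self.last_assessed is None

inventory = [
    AIToolRecord("ChatGPT (consumer tier)", "OpenAI",
                 sanctioned=False, baa_in_place=False,
                 phi_exposure="routine", departments=["Clinical"]),
]
for tool in inventory:
    if tool.needs_review():
        print(f"REVIEW: {tool.name} ({tool.vendor})")
```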

These changes signal that HHS recognizes AI as a distinct category of compliance risk, not just an extension of existing IT infrastructure.

Building a Compliant AI Strategy: Actionable Steps

The good news is that the path forward doesn't require banning AI. It requires governing it. Here's how healthcare organizations can get ahead of the curve:

1. Deploy HIPAA-Compliant AI Alternatives

Give employees tools that do what they're trying to do, but within a compliant framework. UTHealth Houston, for example, partnered with OpenAI in late 2024 to deploy HIPAA-compliant AI tools for clinicians and students, showing that sanctioned AI adoption reduces the incentive for shadow workarounds. Products like Orchid's HIPAA-compliant AI scribe for mental health providers demonstrate that purpose-built, BAA-covered solutions exist and are maturing rapidly.
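For teams routing requests through an in-house gateway to a BAA-covered endpoint, a lightweight pre-submission screen can catch the most obvious identifiers before a prompt ever leaves the network. The sketch below is a hypothetical guardrail: the patterns and the submit_to_sanctioned_ai function are our own illustrations, and pattern matching alone is nowhere near sufficient for HIPAA de-identification.

```python
import re

# Illustrative patterns only; regex screening is a guardrail,
# not a substitute for HIPAA de-identification.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:#\s]*\d{6,10}\b", re.IGNORECASE),
    "dob": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of PHI patterns detected in an outbound prompt."""
    return [name for name, rx in PHI_PATTERNS.items() if rx.search(text)]

def submit_to_sanctioned_ai(prompt: str) -> None:
    hits = screen_prompt(prompt)
    if hits:
        raise ValueError(f"Blocked: possible PHI detected ({', '.join(hits)})")
    # Forward only to the organization's BAA-covered endpoint here.
    print("Prompt forwarded to sanctioned, BAA-covered endpoint.")

try:
    submit_to_sanctioned_ai("Draft referral for patient, MRN: 12345678")
except ValueError as err:
    print(err)
```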

2. Conduct an AI-Specific Risk Assessment Now

Don't wait for the new rules to take effect. Survey departments, interview power users, and audit network traffic for unsanctioned AI and cloud service usage. CASB (Cloud Access Security Broker) platforms such as Netskope can identify where data is flowing.
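Even before standing up a full CASB, a simple pass over proxy log exports can surface who is reaching consumer AI services. The sketch below assumes a CSV export with user and host columns, which is an illustrative format; adapt the parsing to whatever your proxy actually emits.

```python
import csv
from collections import Counter

# Domains of popular consumer generative AI services.
# This list is illustrative, not exhaustive; extend as needed.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com",
              "claude.ai", "copilot.microsoft.com"}

def audit_proxy_log(path: str) -> Counter:
    """Count requests to known AI services per user from a proxy log.

    Assumes a CSV export with 'user' and 'host' columns; real proxy
    logs will differ, so adjust the parsing accordingly.
    """
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row.get("host", "").lower() in AI_DOMAINS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in audit_proxy_log("proxy_log.csv").most_common():
        print(f"{user}: {count} requests to generative AI services")
```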

3. Update Policies and Training

  • Add explicit language to your Acceptable Use Policy covering generative AI tools.
  • Conduct targeted training sessions that explain why pasting PHI into ChatGPT is a HIPAA violation — not just that it's prohibited.
  • Create a simple, visible process for employees to request new AI tools through IT.

4. Explore Privacy-Preserving AI Techniques

Emerging approaches like federated learning and synthetic data generation allow organizations to leverage AI without centralizing sensitive data. A November 2025 research framework called MedHE demonstrated that combining adaptive gradient sparsification with homomorphic encryption can enable collaborative AI model training on medical data while maintaining HIPAA compliance. These aren't theoretical anymore — they're becoming practical options.
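To make the idea concrete, here is a minimal sketch of top-k gradient sparsification feeding a federated averaging step. This is not MedHE itself; that framework adds adaptive thresholds and homomorphic encryption of the sparse updates, both omitted here. But the sketch shows why sparsification matters: only a small fraction of each site's update ever needs to be encrypted and shared.

```python
import numpy as np

def top_k_sparsify(grad: np.ndarray, k_frac: float = 0.01) -> np.ndarray:
    """Keep only the largest-magnitude k% of gradient entries, zeroing the rest.

    Sparsifying before encryption shrinks what must be homomorphically
    encrypted and transmitted; the encryption step itself is omitted here.
    """
    k = max(1, int(grad.size * k_frac))
    flat = grad.ravel().copy()
    # Indices of the k largest-magnitude entries.
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)

def federated_round(site_grads: list[np.ndarray]) -> np.ndarray:
    """Average sparsified per-site gradients; raw data never leaves a site."""
    return np.mean([top_k_sparsify(g) for g in site_grads], axis=0)

# Three hospitals contribute local gradients; only sparse updates are shared.
rng = np.random.default_rng(0)
grads = [rng.normal(size=(256,)) for _ in range(3)]
update = federated_round(grads)
print(f"nonzero entries in aggregated update: {np.count_nonzero(update)}")
```

In this toy run, each of the three sites shares only its two largest-magnitude gradient entries out of 256, which is exactly the bandwidth and encryption saving that makes the approach practical.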

5. Pursue Formal AI Security Certifications

HITRUST announced AI-specific control requirements and certifications in December 2024, giving organizations a structured framework for validating their AI security posture. Achieving certification demonstrates due diligence to regulators and partners alike.

The Bottom Line

Shadow AI isn't a future problem — it's happening in your organization right now. The employees using unauthorized AI tools are often your most productive and tech-forward staff. The solution isn't punishment; it's providing secure, compliant alternatives and clear governance.

Organizations that need help conducting AI-focused risk assessments, implementing compliant infrastructure, or preparing for the 2026 HIPAA updates can lean on experienced partners like IPS0, where healthcare IT compliance has been a core focus for over two decades.

The window between awareness and enforcement is closing. Use it wisely.