
AI is already inside your clinic. Not coming. Already there.

If your practice uses an electronic medical record, a scheduling platform, or a patient communication tool, there is a strong chance that AI features are running in the background. Some were enabled by default. Some were added in a software update nobody reviewed. Some were adopted by staff who found a faster way to get through paperwork.

The Ontario Information and Privacy Commissioner's January 2026 guidance on AI scribes made something clear: the custodian is responsible for governing AI in clinical practice. Not the vendor. Not the software company. The custodian. That means the physician, the clinic owner, or the health information custodian under PHIPA.

Most clinics have not caught up. Here are five privacy mistakes I am seeing across Ontario practices right now, and what to do about each one.

1 Staff Using ChatGPT and Other Consumer AI Tools for Clinical Work

This is the most common and least visible governance gap in Ontario clinics right now.

A medical receptionist pastes a patient's clinical notes into ChatGPT to draft a referral letter. A physician uploads a discharge summary to get help writing a patient-friendly explanation. An office manager uses an AI tool to summarize a batch of patient complaints.

None of these tools operates under an agreement that makes the provider an agent or electronic service provider of the clinic, which is what PHIPA contemplates when a third party handles personal health information on a custodian's behalf. None has been assessed for PHIPA compliance. None stores data exclusively in Canada. And in most cases, the clinic owner has no idea it is happening.

The moment personal health information leaves the clinic's controlled systems and enters a consumer AI platform, the clinic has a potential privacy breach. Under PHIPA, the custodian is accountable for that data regardless of which staff member sent it or which tool they used.

What to do

Establish a clear AI use policy that specifies which tools are approved for clinical and administrative work. Include a blanket prohibition on entering patient information into consumer AI platforms unless the tool has been assessed, approved, and documented in your AI system registry. Communicate this policy to every staff member and make it part of onboarding.

2 Deploying an AI Scribe Without a Privacy Impact Assessment

The Ontario AI Scribe Program has made it easier than ever for clinics to adopt AI scribes. Supply Ontario's Vendor of Record arrangement pre-qualifies vendors on technical, privacy, and security requirements. OntarioMD provides change management support and consent toolkits.

But here is the gap many clinics are falling into: they assume that because the vendor passed the VOR process, the clinic's own privacy obligations are covered.

They are not. The VOR qualifies the vendor. It does not qualify the clinic.

The IPC's January 2026 guidance explicitly calls for custodians to conduct privacy impact assessments before deploying AI scribes. A PIA evaluates how patient data flows through the AI system, what personal health information is collected, how it is used and disclosed, what risks exist, and what safeguards are in place. Without one, the clinic is operating an AI tool that processes some of the most sensitive data in healthcare (audio recordings of patient-physician conversations) with no documented assessment of the privacy risks.

What to do

Conduct a privacy impact assessment specific to your AI scribe deployment before going live, or as soon as possible if the tool is already in use. The PIA should document the purpose and scope of the AI scribe, what PHI is collected and how it flows through the system, data retention and destruction policies, vendor safeguards, and risk mitigation measures. This is not optional. The IPC expects it.

3 No Patient Consent Workflow for AI-Assisted Documentation

An AI scribe records the patient-physician conversation, transcribes it, and generates clinical notes. That means a third-party AI tool is processing a patient's spoken words about their health, their symptoms, their medications, their mental health, their family history.

Patients have a right to know this is happening. Under PHIPA, informed consent is foundational. Yet many clinics using AI scribes have no formal consent process. Some mention it verbally at the start of the appointment. Some post a sign in the waiting room. Some do nothing at all.

A verbal mention is not a consent workflow. A sign in the waiting room is not informed consent. Patients need to understand what the AI tool does, what data it collects, where it goes, how long it is retained, and what their options are if they decline.

What to do

Design a patient consent workflow that includes a clear explanation of the AI tool, what data it processes, and the patient's right to opt out without affecting their care. Document consent in the patient record. Train front desk staff and physicians on how to introduce the AI scribe and handle refusals. OntarioMD provides consent toolkit resources, but the clinic is responsible for implementing them in practice.

4 No One Has Reviewed the Vendor Contract for Privacy Safeguards

The AI scribe vendor sent a terms of service agreement. Someone at the clinic signed it. Probably nobody read it carefully, and almost certainly nobody reviewed it against the IPC's recommended contractual safeguards.

Some clinics assume the VOR program covers this. It does not. The IPC's own guidance addresses this directly: while organizations like the Ministry of Health, Supply Ontario, Ontario Health, OntarioMD, and Canada Health Infoway have created vendor of record programs to help clinics navigate procurement, the IPC is explicit that "even when relying on a vendor of record program, every custodian must still exercise their own due diligence to ensure that they meet their obligations under PHIPA and other laws and regulations." The VOR vets the vendor. It does not vet the contract your clinic signed with that vendor.

This matters because vendor contracts are where the liability boundaries are drawn. The IPC guidance recommends that custodians negotiate specific contractual protections: limits on vendor access to patient data, restrictions on secondary use, data retention and destruction obligations, subcontractor controls, PHIPA-specific breach notification requirements, and audit rights.

Most standard vendor agreements do not include all of these protections. Some explicitly disclaim liability for clinical outcomes. Some grant the vendor broad rights to use aggregated or de-identified data. Some do not specify where data is stored or processed.

If the IPC investigates a complaint related to AI use in your clinic, one of the first things they will look at is your vendor agreement. If the contractual safeguards the IPC recommends are not there, the clinic's compliance position is weakened regardless of how good the vendor's technology is.

What to do

Pull every AI vendor contract your clinic has signed. Review each one against the IPC's recommended safeguards. Identify gaps: missing breach notification clauses, absent data destruction obligations, broad secondary use permissions, unclear data residency. Negotiate amendments where possible. Document the review and its findings. If you do not have the expertise to do this in-house, engage a governance advisor or privacy professional to conduct the review.
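The review above is essentially a checklist exercise. As a minimal sketch, here is one way to make it systematic; the safeguard list mirrors the IPC recommendations named in this article, while the clause names and the sample contract are purely illustrative, not drawn from any real agreement:

```python
# Illustrative contract gap check. The safeguard list reflects the
# IPC-recommended protections discussed above; the sample signed
# contract is hypothetical.

IPC_SAFEGUARDS = [
    "limits on vendor access to patient data",
    "restrictions on secondary use",
    "data retention and destruction obligations",
    "subcontractor controls",
    "PHIPA-specific breach notification",
    "audit rights",
]

def contract_gaps(clauses_present: set[str]) -> list[str]:
    """Return the recommended safeguards missing from a signed contract."""
    return [s for s in IPC_SAFEGUARDS if s not in clauses_present]

# Example: a contract that covers retention and breach notification only
signed = {
    "data retention and destruction obligations",
    "PHIPA-specific breach notification",
}
print(contract_gaps(signed))
```

Running the gap check against each contract, and keeping the output with the review record, is one way to produce the documentation trail the IPC expects.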

5 No AI Governance Committee or Oversight Structure

The IPC's January 2026 guidance calls for custodians to establish AI governance committees. This is not a suggestion buried in the footnotes. It is a core recommendation.

Most clinics have no such structure. AI tools are adopted ad hoc. Individual physicians decide to try an AI scribe. Office managers sign up for a new scheduling tool with AI features. Nobody evaluates these tools collectively, tracks them in a registry, or assesses whether new tools meet the clinic's privacy and governance requirements before they go live.

Without a governance committee, there is no process for approving new AI tools before deployment. There is no mechanism for ongoing monitoring. There is no one responsible for tracking regulatory changes and updating policies. And there is no documentation that the clinic has a governance structure in place, which is exactly what a regulator or college will look for if something goes wrong.

What to do

Establish an AI governance committee appropriate to your clinic's size. For a small practice, this might be the clinic owner, the office manager, and an external advisor meeting quarterly. For a larger organization, it should include clinical leadership, IT, privacy, and administration. Define the committee's terms of reference: what decisions it makes, how new AI tools are evaluated, how often it meets, and how its decisions are documented. Build an AI system registry that tracks every AI tool in the practice, its risk level, its approval status, and when it was last reviewed.
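The registry does not need to be complicated; a spreadsheet works. As a minimal sketch of the fields worth tracking, here is one possible structure. The field names, tool names, and vendors are illustrative assumptions, not prescribed by the IPC:

```python
# Minimal sketch of an AI system registry. All entries are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolEntry:
    """One row in the clinic's AI system registry (illustrative fields)."""
    name: str
    vendor: str
    purpose: str
    risk_level: str      # e.g. "low", "medium", "high"
    approved: bool       # approved by the governance committee?
    pia_completed: bool  # privacy impact assessment on file?
    last_reviewed: date

def tools_needing_attention(registry: list[AIToolEntry], today: date,
                            review_interval_days: int = 365) -> list[str]:
    """Flag tools that are unapproved, missing a PIA, or overdue for review."""
    flagged = []
    for entry in registry:
        overdue = (today - entry.last_reviewed).days > review_interval_days
        if not entry.approved or not entry.pia_completed or overdue:
            flagged.append(entry.name)
    return flagged

registry = [
    AIToolEntry("ScribeX", "Vendor A", "ambient clinical documentation",
                "high", approved=True, pia_completed=True,
                last_reviewed=date(2025, 11, 1)),
    AIToolEntry("SchedulerAI", "Vendor B", "appointment triage",
                "medium", approved=True, pia_completed=False,
                last_reviewed=date(2025, 6, 15)),
]

print(tools_needing_attention(registry, today=date(2026, 2, 1)))
# SchedulerAI is flagged: approved, but no PIA on file
```

A quarterly committee meeting that starts by running through the flagged entries gives the clinic exactly the documented, repeatable oversight process a regulator will ask about.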

The Common Thread

Every one of these mistakes shares the same root cause: clinics are adopting AI tools faster than they are building the governance infrastructure to support them.

That is not a criticism. It is the reality of a healthcare system under pressure, where physicians are working to reduce administrative burden and improve patient care. AI tools promise to help with both. But the regulatory expectations are clear and the liability framework is settled: the custodian is responsible.

The good news is that none of these mistakes are irreversible. A clinic that starts building its governance framework today, even if AI tools are already in use, is in a far stronger position than one that waits for a complaint, an audit, or a breach to force the conversation.

The physicians and clinic owners who govern AI proactively will be the ones who adopt it most effectively. Governance does not slow down AI adoption. It makes it sustainable.

Not sure where your clinic stands?

Book a free 20-minute AI Readiness Check. We'll identify which AI tools are running in your practice, where the governance gaps are, and what to prioritize first. No cost, no obligation.


Sources