RCDSO Guidelines on AI in Dentistry: What Ontario Dental Clinics Need to Know
The RCDSO's AI guidance for Ontario dentists clarifies existing obligations around accountability, transparency, and patient data protection. Here's what your clinic needs to do.
The Royal College of Dental Surgeons of Ontario (RCDSO) published its official Guidance: Artificial Intelligence in Dentistry in September 2025. The core message: Ontario dentists can use AI tools, but they remain fully responsible for patient care, clinical decisions, and data protection whether or not AI played a role. The RCDSO AI dentistry guidelines do not create new professional requirements. They map existing obligations under the Dentistry Act, PHIPA, and the RCDSO Standards of Practice onto the AI landscape, covering everything from diagnostic imaging software to AI voice receptionists and appointment notification systems. If you're evaluating AI tools for your clinic, this guidance is the regulatory baseline you need to understand first.
What the RCDSO AI Guidance Actually Says
The most important thing to understand is what this guidance is not. It is not a new regulation. It is not a ban. It does not restrict which AI tools you can use or require pre-approval from the College.
The RCDSO states explicitly that the guidance "does not set out new professional requirements, but instead highlights existing responsibilities that may be relevant to the use of AI in dentistry." The rules you already follow around patient care, informed consent, record-keeping, and privacy apply when you add AI into your practice.
The guidance defines AI broadly as computer systems that perform tasks associated with human intelligence, and separately flags generative AI as carrying heightened risks due to the potential for inaccurate or biased outputs. The RCDSO's companion FAQ confirms the guidance covers AI tools used for "writing or for sending notifications about appointments," but notes that "dentists may need to exercise greater caution and oversight depending on the specific tool."
The Risk-Based Framework: What Determines Your Obligations
The RCDSO's central principle is proportional oversight. Risk increases along three dimensions:
- Does the tool directly influence clinical decision-making? A diagnostic AI sits at the highest risk level. A scheduling tool does not.
- Does the tool affect patient health and safety? Tools informing diagnosis carry more risk than those managing FAQs.
- Does the tool use patient health information (PHI)? Any AI accessing patient records triggers PHIPA obligations, even for administrative functions.
In October 2025, the Oral Health Group published a companion framework by Dr. Peter C. Fritz, Chair of the Royal College of Dentists of Canada (RCDC) AI Task Force, that maps this onto four tiers: minimal risk (patient education materials), limited risk (AI receptionists and scheduling tools), high risk (diagnostic and clinical decision-support), and unacceptable risk (apps promoting unsupervised self-diagnosis).
Risk classification can shift based on data handling. A closed, encrypted system that doesn't reuse patient data for training carries less risk than an open system with unclear data governance. When evaluating AI vendors for your practice, the RCDSO's risk factors give you a ready-made checklist.
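To make the framework concrete, the three RCDSO risk factors and the four Oral Health Group tiers can be sketched as a simple triage function. This is our illustrative assumption of how the factors might combine, not an official classification rule from either body:

```python
# Hypothetical triage sketch: maps the RCDSO's three risk factors onto the
# four Oral Health Group tiers. The mapping logic is illustrative only.

def classify_ai_tool(promotes_self_diagnosis: bool,
                     influences_clinical_decisions: bool,
                     affects_patient_safety: bool,
                     accesses_phi: bool) -> str:
    """Return a rough risk tier for an AI tool in a dental practice."""
    if promotes_self_diagnosis:
        return "unacceptable risk"  # unsupervised self-diagnosis apps
    if influences_clinical_decisions or affects_patient_safety:
        return "high risk"          # diagnostic / clinical decision support
    if accesses_phi:
        return "limited risk"       # admin tool touching patient records
    return "minimal risk"           # e.g. patient education materials

# An AI receptionist that reads the schedule but gives no clinical advice:
print(classify_ai_tool(False, False, False, True))  # limited risk
```

Note that the last factor still matters even at the "limited" tier: any PHI access triggers the full PHIPA obligations discussed below, regardless of how low the clinical risk is.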
The 3 Pillars: Accountability, Transparency, and Data Protection
The guidance organizes its 16 recommendations under three pillars.
Pillar 1: Accountability and Responsibility
Before adopting an AI tool, verify it complies with PHIPA, understand the vendor's data storage policies (including where data is stored geographically), assess validation across your patient populations, and examine training data for bias. The guidance is blunt: if you can't get satisfactory answers, "avoid using the AI tool."
During use, critically review all AI-generated outputs before acting on them. Train all staff who use AI, including front-desk teams, on appropriate uses, limitations, and risks. The guidance warns specifically about AI scribes generating documentation that includes information never discussed, and diagnostic tools suggesting unnecessary treatment.
Pillar 2: Transparency and Disclosure
Two requirements stand out. First, patients must know when they're interacting with AI rather than a human. This applies directly to AI phone receptionists and chatbots. Second, for AI tools affecting clinical care, patients must be informed about how AI will be used before the tool is applied. Disclosure methods are flexible: consent forms, signage, automated messages, or direct conversation. Document whatever method you choose.
The guidance also requires reasonable accommodation for patients who prefer no AI involvement in their care.
Pillar 3: Protecting Patient Health Information
PHIPA compliance is non-negotiable. AI-generated outputs containing PHI must not be used for other purposes, including model training or third-party sharing, unless patients provide express consent under PHIPA Section 18. The FAQ recommends privacy impact assessments before deploying AI tools that handle PHI, and warns that foreign jurisdictions may allow broader use of de-identified health information without consent. Data residency matters.
For details on how compliant AI tools handle patient data, see our security and compliance overview, our full PHIPA compliance guide, and (for US-based or cross-border clinics) the HIPAA AI receptionist compliance guide.
Professional Liability Stays with the Dentist
The RCDSO's position is unambiguous. AI does not transfer clinical responsibility. Dentists remain accountable for care, decision-making, and documentation regardless of AI involvement. Professional misconduct applies whether or not AI contributed to the issue. The FAQ confirms dentists may face misconduct proceedings for recommending unnecessary services based on AI suggestions.
Civil liability involving AI manufacturers and developers is complex and falls outside the RCDSO's scope. Dentists are directed to the Professional Liability Program or legal counsel.
How This Applies to AI Phone Systems and Administrative Tools
For dental clinics evaluating AI voice receptionists and scheduling assistants, the guidance creates a clear compliance pathway. These tools fall under the guidance (confirmed in the FAQ), but occupy the lower end of the risk spectrum when properly configured.
Under the Oral Health Group framework, AI receptionists are classified as limited risk, where the main consequence of error is inefficiency, not clinical harm. Compliance requirements:
- Transparency: The AI must identify itself at the start of every patient interaction.
- PHI protection: If the tool accesses patient records, PHIPA's full requirements apply. Tools handling general queries without health records carry less risk.
- Data residency: Verify Canadian storage. The RCDSO flags risks with foreign cloud providers.
- No training on patient data: Vendors must not reuse PHI for model training without express consent.
- Staff training: Front-desk staff need to understand the system's limitations and escalation procedures.
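The five requirements above lend themselves to a simple vendor-evaluation record. The sketch below is a hypothetical format a clinic might use to track each item; the field names are our own, not RCDSO terminology:

```python
# Hypothetical vendor-evaluation checklist mirroring the five compliance
# requirements for AI receptionists; pass only when every item is verified.
from dataclasses import dataclass

@dataclass
class VendorReview:
    identifies_as_ai: bool         # transparency at the start of each call
    phipa_compliant: bool          # required if the tool accesses records
    canadian_data_residency: bool  # storage verified as being in Canada
    no_training_on_phi: bool       # PHI never reused for model training
    staff_trained: bool            # front desk knows limits and escalation

    def passes(self) -> bool:
        # A single unmet requirement fails the whole review.
        return all(vars(self).values())

review = VendorReview(True, True, True, True, True)
print(review.passes())  # True
```

A record like this also doubles as documentation of your due diligence, which supports the guidance's expectation that you avoid tools whose vendors can't give satisfactory answers.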
At JustReva, we built REVA with these requirements as defaults. REVA identifies as AI on every call, never provides clinical advice, stores Canadian clinic data in Canada with zero data retention on voice processing, and is PHIPA compliant by design. Under the RCDSO's framework, it sits in the lowest risk tier. See how it works.
Take the Next Step
If you're exploring AI for your dental clinic's phone operations, we built REVA for exactly this use case. PHIPA compliant, transparent by design, and in the RCDSO's lowest risk category. Start with a free 30-day pilot at justreva.com.
Sources: RCDSO Guidance: Artificial Intelligence in Dentistry (September 2025), RCDSO FAQ on AI, Oral Health Group Risk Framework (October 2025)