Governing Artificial Intelligence at the Point of Care
"When I am in that room with a patient, they are my sole focus. They are my universe. Not logistics. Not a population cohort. That individual — irreplaceable, irreducible, mine to protect."
— John C. Ferguson, MD, FACS | Surgeon & AI Systems Developer

The governance framework currently under development addresses most AI applications appropriately. But there is a category that is structurally absent, and its absence is the most significant gap the Global Dialogue has the opportunity to close.
When a Medical AI system contributes to a failure of patient care, the physician carries the consequence — their license, their liability, their relationship with that patient. The AI company carries a disclaimer.
That asymmetry is not a legal technicality. It is a governance signal.
The patients endure the consequences. The physicians carry the accountability. The AI systems do neither. That asymmetry of consequence must determine the asymmetry of authority.
Done right, Medical AI delivers individually focused care to patients who have never had access to it — the patient in a rural clinic in sub-Saharan Africa, the patient in an underserved community with no specialist within reach, the patient whose disability makes access to traditional care harder than it should be.
Done wrong, Medical AI exports the accountability asymmetry to the populations with the least recourse when it fails. Governing Medical AI correctly is not in tension with the accessibility mission. It is the precondition for achieving it.
We ask the Global Dialogue to:

1. Formally recognize Medical AI as a separate governance category in the Co-Chairs' summary and thematic framework for Geneva.
2. Require Medical AI systems to demonstrate specialty board-level clinical reasoning before deployment in a given domain.
3. Mandate independent adversarial testing before deployment to identify safety-circumvention vulnerabilities.
4. Embed explicit physician decision-making authority into deployment standards, not left to institutional interpretation.
A formal written submission will be entered into the record of the United Nations Global Dialogue on Artificial Intelligence Governance, established under General Assembly Resolution 79/325. The inaugural Dialogue convenes in Geneva, July 6–7, 2026.
Physicians, healthcare professionals, and policy makers who support the recognition of Medical AI as a distinct governance category are invited to add their names. Signatories will be included in the formal UN submission record and presented at the Geneva Dialogue in July 2026.
"I support the recognition of Medical AI — artificial intelligence operating at the point of individual patient care — as a distinct governance category requiring its own deployment standards, competency thresholds, and accountability framework, separate from population-level healthcare AI."
Have a story to tell?
A bedside moment. A case where the distinction between Medical AI and Healthcare AI became real. A call to your colleagues. Share it and we will publish it.
Share Your Story

Displaying 2,192 publicly listed signatures. All signatories appear in the official UN submission record.