🚀 Mission View: A sharper perspective on this week's top issues that matter at the intersection of health and AI.
So much of the conversation around AI focuses on what it can do. What I want to focus on this week is something different: how well can it do it, and who gets to make that determination?
It's a question this issue returns to repeatedly.
👉 A former White House health official describes developing his own ad-hoc verification workflow at the bedside because no institutional standard exists for evaluating AI clinical recommendations.
👉 A Senate report finds the FDA's AI strategy was built for a different technological era.
👉 A legal analysis documents patients as de facto test subjects in the absence of reliable evaluation benchmarks.
What connects these stories is that AI is being deployed in healthcare faster than we can evaluate it.
The evaluation gap.
We have existing governance frameworks: HIPAA, the FDA's clearance pathways, CMS coverage rules, and anti-discrimination regulations. But as the American Hospital Association noted in a detailed response to an HHS request for information this week, these were designed around static technologies. They are straining under the weight of governing something that learns, adapts, and evolves after deployment.
The AHA's core recommendation to HHS is instructive: don't create standalone AI regulation. Rather, synchronize AI policy through existing frameworks. Update HIPAA. Strengthen FDA post-market standards. Build clinician-in-the-loop requirements into CMS rules. It's a reasonable instinct. These frameworks represent decades of enforcement capacity and clinical legitimacy that a new regulatory regime would take years to replicate. But reasonable institutional instincts have limits when the technology moves faster than the institutions.
A different model is gaining traction.
Fathom, a nonprofit focused on AI governance, has been advancing a concept in which independent, expert-led bodies called Independent Verification Organizations (IVOs) certify AI systems against evolving safety standards. Governments authorize the marketplace and set outcome goals; IVOs develop technical criteria; companies that voluntarily submit and pass earn a certification signaling a heightened standard of care. Those that fail to maintain standards lose certification. IVOs that lose independence from industry lose their license.
The advantages? IVOs update standards continuously rather than waiting for legislative cycles, elevate technical expertise in standard-setting, and can scale nationally through state-level implementation. Most importantly for healthcare, Fathom's framework contemplates states authorizing certified companies to receive reduced liability exposure or tort protections. And in a sector where liability ambiguity is actively chilling adoption, that's a meaningful incentive.
Speaking of liability.
At an AMA conference panel this week, AMA CEO John Whyte drew what may become a defining threshold: when AI wholesale replaces what a physician would otherwise do, liability should shift toward the AI, but only when care can be proven equivalent or better. "I wouldn't want a world like that," he said, "unless we can prove in some demonstrable way that the care is equivalent or better." For now, liability remains with the clinical team.
That framing makes liability contingent on evaluation. You cannot transfer accountability to an AI system until you can demonstrate that it is trustworthy. Which returns us directly to the problem we started with: we don't yet have the evaluation frameworks to make that demonstration reliably. The AMA's liability threshold isn't just a legal question. It's a measurement question. And we lack the tools to answer it.
The stakes in healthcare are specific.
Evaluation failures in most industries are costly. In healthcare, they can be irreversible. That asymmetry is why the governance question is more urgent here than almost anywhere else. "Better evaluation, not just better models, is the prerequisite for trustworthy AI in healthcare," as researchers from Stanford and Harvard have argued. The question is whether we can build that infrastructure before deployment outpaces our capacity to know what we've actually built.
🛜 Field Signals: Quick hits on this week’s industry announcements, policy developments, and ethical considerations.
🏗️ Industry news
The Bottleneck Is Organizational, Not Technical OpenAI has formalized enterprise partnerships with BCG, McKinsey, Accenture, and Capgemini under a "Frontier Alliance" framework, with a shared diagnosis: the barrier to AI value is change management and adoption culture, not model capability. Healthcare-focused partners Abridge and Ambience are listed in the alliance.
AI Adoption Mandates Are Coming to the Workplace Amazon, Google, Meta, Microsoft, and Salesforce are integrating AI usage tracking into performance reviews and hiring decisions, with 42% of tech workers now reporting their manager expects daily AI use, up from 32% eight months ago. In healthcare settings, where clinician trust is a precondition for patient safety, coercive adoption models carry risks that don't exist in logistics or finance.
Perplexity Launches "Computer": Multi-Agent Orchestration Goes Mainstream Perplexity's new "Computer" platform routes tasks across 19 specialized AI models simultaneously — users describe a goal once, and the system plans, delegates, and executes autonomously for hours or months without re-prompting. The key architectural contrast with open-source rival OpenClaw: Computer runs in a managed cloud environment with centralized safeguards, while OpenClaw runs locally with full system access, shifting security responsibility entirely to the user.
Anthropic Launches Cowork and Plugin Marketplace Anthropic has launched Cowork, a desktop tool for non-developers to automate file and task management, alongside an enterprise plugin marketplace for role-specific AI agents. The launch is a direct competitive response to OpenAI's enterprise positioning, with PwC as a launch partner.
🩺 At the point of care
AI Didn't Replace Me as a Doctor. It Made Me Better. Former White House COVID-19 coordinator Ashish Jha describes using ChatGPT on hospital rounds — entering de-identified patient summaries and asking "what else should I be considering?" — with at least one case where AI flagged a guideline-supported test the team might have missed. His workflow: always request citations and verify any consequential recommendation against its source before acting.
When AI Should — and Shouldn't — Make Decisions FDA Digital Health Advisory Committee chair Morish Shah and co-author Ami Bhatt offer a framework for task-matching — deciding which clinical decisions AI should support and which require human judgment. It's a useful lens for health system leaders navigating where to deploy AI tools and where to protect clinician authority.
Your Medical Records Are Now in ChatGPT. Who's Accountable? ChatGPT Health consolidates medical records, pharmacy data, and consumer health information outside clinical institutions — and outside HIPAA's reach. The authors identify a structural accountability gap: the systems generating and holding clinical data are increasingly separate from the systems responsible for patient safety.
What HBO's The Pitt Gets Right — and Wrong — About Clinical AI ⚠️ Spoiler alert: Season 2 of the hit ER drama introduces an AI transcription tool that promptly hallucinates a history of appendicitis and confuses "urology" with "neurology" — while the attending's defense ("the error rate is small") lands as exactly the reassurance this week's evaluation coverage would complicate.
🏛 Government & policy
Harrison.ai Asks FDA to Reduce Premarket Review for Radiology AI Harrison.ai has petitioned the FDA to exempt follow-on radiology AI products from premarket review across six device categories — roughly 200 of the 1,300 AI-enabled devices currently cleared — with the implicit assumption that post-market surveillance can catch what reduced premarket review might miss. Expert consensus suggests that assumption is not well-founded. Worth noting: a former Harrison.ai executive now directs the FDA Digital Health Center of Excellence.
State AI Laws Are Filling the Federal Vacuum — and Creating a Compliance Patchwork A legal briefing maps healthcare AI legislation across states, organized around anti-discrimination requirements, preservation of clinical decision-making authority, and transparency mandates. A December 2025 Trump executive order signals the administration may seek federal preemption of state AI laws — though it cannot override state law directly without congressional action.
Community-Led Governance Must Follow State AI Law New York's RAISE Act and NYC's GUARD Act establish explicit anti-discrimination requirements for AI systems, but author Oni Blackstock argues these laws are a floor, not a ceiling — technical compliance is insufficient without ongoing community-led governance that can identify harms formal audits miss.
AHA to HHS: Synchronize AI Policy Through Existing Frameworks The AHA's response to the HHS AI request for information calls for building AI policy through existing frameworks — HIPAA, FDA SaMD, CMS coverage rules — rather than creating standalone regulation, with specific asks including full HIPAA federal preemption and clinician-in-the-loop mandates.
😇 Ethics & responsible use
AMA Weighs When Liability Should Follow the AI At an AMA conference panel, CEO John Whyte drew a liability threshold that could become a defining framework: when AI wholesale replaces physician work, liability should shift to the AI — but only when care can be proven equivalent or better. The implication is significant: you cannot transfer liability to an AI system until you can demonstrate it is trustworthy, making liability resolution a measurement problem we don't yet have the tools to solve.
🔬Research & evidence
When AI Outputs Look Finished, Users Stop Checking Analysis of 9,830 Claude.ai conversations finds that when AI produces polished, finished-looking outputs (like an MS Word document), users become measurably less likely to question reasoning, check facts, or identify missing context. The healthcare implication? Clinicians reviewing polished AI-generated patient summaries or prior authorization letters are in exactly the situation this research flags as highest risk.
Mapping MMR Vaccination Gaps Researchers from Mount Sinai, Boston Children's Hospital, and Harvard have built an interactive website that lets users explore county-level MMR vaccination estimates across the US, an innovative resource for improving immunization strategies and mitigating measles outbreaks through geographically targeted interventions. Model-based surveillance of this kind can complement traditional systems by identifying at-risk communities earlier and strengthening local preparedness, ultimately advancing national vaccine equity and disease prevention goals.

Source: Nature Health, "Assessing MMR vaccination coverage gaps in US children with digital participatory surveillance"
🛠️ Practical Edge: Actionable tips, tools, and thoughts to help leaders strengthen capacity and apply AI in their work.
A Framework for Equitable AI Adoption in Mission-Driven Organizations Developed with 34 nonprofit practitioners and subject matter experts, this implementation guide offers eight components for equitable AI adoption — from data governance and risk to stakeholder engagement and culture of learning — structured around administrative, programmatic, and high-trust use cases.
15 NotebookLM Features You're Probably Not Using Google's Gemini-powered NotebookLM grounds responses solely in user-uploaded documents, significantly reducing hallucinations compared to general-purpose chatbots — and its Audio Overviews feature generates podcast-style summaries for time-pressed readers who want to process research on the go.
Note to my readers: I’d love to learn how you are using AI. If there’s a novel way you are deploying AI in your work, or seeing it utilized in healthcare, please feel free to shoot me a note and share: [email protected]
🌅 On the Horizon: A quick look at the developments and events expected to shape the weeks ahead.
👉 Mar. 12–18, 2026 — SXSW 2026, Austin, TX
👉 Mar. 27 — “The AI Doc: Or How I Became An Apocaloptimist” opens in theaters. Watch the trailer
👉 Mar. 30–31, 2026 — IAPP Global Privacy Summit, Washington DC
👉 Apr. 6–9, 2026 — HumanX 2026, San Francisco, CA
👉 Apr. 10 — Ethical AI: Leadership and Governance, Virtual
And finally, if you like what you are reading, please share this newsletter with your networks and encourage them to sign up. ✍️ 🆙 And/or give me a shout-out on LinkedIn.
Till next time,
BC