🚀 Mission View: A sharper perspective on this week's top issues that matter at the intersection of health and AI.

I'll start with a small confession: what happened in AI this week was too much for any single newsletter to fully capture. I've done my best in the sections below, but I've undoubtedly missed things. I’m sorry. If something significant slipped through, I hope you'll send it my way.

Also, this is a longer newsletter (so many words). And no pictures, videos, or charts to break it up. I’m sorry again, dear reader.

And finally, this week's Mission View is going to do something a little different. Rather than opining on the latest news, I want to share my takeaways from Human[X] 2026, which I attended in San Francisco.

Six thousand people came together to talk about where AI is headed. It wasn't a health-focused conference, but there was no shortage of relevance for those of us working at the intersection of AI and health. Here's what I took away.

No one knows how this turns out.

This was the thing that struck me first, and most. The people who are closest to this technology — the ones building it, deploying it, betting their companies on it — do not fully understand it. They don't understand its second-order effects. They don't understand its third-order effects. We are all, in some meaningful sense, navigating without a map. The CEO of Human[X] described it as operating in “speed and fog.” Yikes. While I found this unsettling, it was also clarifying.

Clarifying because it gives you permission to stop pretending you have all the answers. Unsettling because the people you might have expected to have the answers don't have them either. Anyone who speaks about the future of AI with real certainty, those who tell you they know how this pans out, is not being serious with you. The honest posture, even among those doing the most consequential work in this space, is: we're doing the best we can, day by day.

For those of us working in health, this ought to recalibrate how we approach AI. We are not waiting for someone smarter to hand us the foolproof framework. It does not exist. We have to figure it out as we go, alongside everyone else. And with great care. That is not a reason for paralysis. It is a reason to build in more humility, more feedback loops, more rigor, and more willingness to change course and iterate.

There is a vast gap between those building this technology and the rest of the world.

The people most deeply engaged in AI development and deployment are operating in a different reality than most of the country. And I say that having spent much of my career in Washington, D.C., where the distance from the frontier of this technology is particularly stark.

The people at Human[X] are anticipating changes that most Americans have not yet begun to contemplate. The scale of disruption they are predicting — to labor markets, to professional roles, to the nature of knowledge work itself — is not something the policy/political world, the health care system, or frankly most institutions have seriously begun to prepare for. The gap between what those at the frontier believe is coming and what the rest of us are ready for is enormous. And that gap is not closing fast enough. In fact, I fear it is widening.

This is not a complaint about technologists moving too fast. Nor is it a call to recklessly speed up. It is a call for everyone to meaningfully engage. There is an enormous amount of work to do. Upskilling, bridge-building, preparing people and organizations for disruption that is not hypothetical. It is already here. And it is accelerating.

For health care, this gap has particular consequences. The clinicians, administrators, and patients who will be affected by AI are, in many cases, the least prepared for it. The spaces shaping the future of this technology are not primarily attended by the people who will need to figure out the consequences of AI in insurance, the exam room, the emergency department, the operating room, or the lab. We need more people across all of health care engaging in this space.

There is real optimism. And with it, real responsibility.

Despite everything I just said, the mood at Human[X] was not one of dread. There is genuine, substantive optimism about what this technology can do. People who have been working in this space for a long time believe that AI has the potential to do things that matter: improve care, accelerate discovery, reduce the administrative burden that is grinding down clinicians and health systems alike, and surface insights from data that would otherwise remain invisible.

I share that optimism. But optimism is not a strategy. And the gap between what this technology could do and what it will actually do for people depends almost entirely on the quality of the leadership that guides its adoption.

This is where I want to be direct with anyone running an organization right now: a health system, a payer, a public health agency, a nonprofit, a government office. Your job has changed. Not incrementally. Fundamentally. If you are sitting at the top of an organization in 2026, your responsibility is not just to manage what exists today. It is time to think seriously about how you are shepherding your organization, and the people in it, into a future that is arriving faster than most institutional planning cycles can accommodate.

Summing it up.

We are at an inflection point. I’m not even sure that’s the right way to describe it, because it feels bigger than that term of art. But the technology is real. The change is real. The opportunity is real. What remains to be determined is whether the people and institutions responsible for health care in this country will rise to meet it. Or whether we will find ourselves, a few years from now, having squandered the moment. I hope it’s not the latter. But rising to meet it will require more from us than we have so far been willing to give.

🛜 Field Signals: A quick hit on this week’s industry announcements, policy developments, and ethical considerations.

🏗️ Industry news

Hospitals Are Fueling AI Innovation. Should They Own a Piece of It? Two health care attorneys from Sheppard argue that hospitals are contributing the clinical data, workflow expertise, and patient relationships that drive AI vendors' market value — yet rarely negotiate to share in the upside. The piece offers a practical framework for health systems entering AI partnerships: clearly delineate data ownership at the outset, ensure HIPAA-compliant de-identification, implement ongoing governance to catch model drift and bias, and build contractual protections — indemnification, liability allocation, and data security warranties — before a deal closes, not after.

UnitedHealth Group Is Making a $3 Billion Bet on AI. What Does It Mean for Patients? A STAT investigation traces how UnitedHealth is embedding AI across its core operations — claims processing, fraud detection, prior authorization, clinical documentation, and billing code selection — with a stated goal of generating $1 billion in savings this year and a planned $3 billion investment over the next two years.

Anthropic Just Blew Past OpenAI in Revenue. Anthropic says its annual revenue run rate has climbed past $30 billion, overtaking OpenAI's reported $25 billion and marking one of the fastest revenue ramps in AI history — the company added roughly $11 billion in annualized revenue in just over a month, after sitting at approximately $19 billion in February and $9 billion at the end of 2025. The number of enterprise customers spending more than $1 million per year has doubled from over 500 to more than 1,000 in less than two months, alongside an expanded partnership with Google and Broadcom that will provide access to 3.5 gigawatts of TPU-based compute starting in 2027.

AI's Impact on the Job Market Is Starting to Show Up in the Data. New analyses from Goldman Sachs and Morgan Stanley find that AI has raised the overall unemployment rate by just 0.1 percentage point — reducing employment in roles easily substituted by AI while simultaneously lowering unemployment in roles that AI augments, like those requiring human judgment and interpersonal interaction. A telling example from the Morgan Stanley paper: despite Geoffrey Hinton's 2016 prediction that radiologists would be replaced within five years, the number of radiologists has increased and their pay has risen as clinicians broadly adopted AI to do their jobs better.

Anthropic Says Claude Code Subscribers Will Need to Pay Extra for OpenClaw Usage. Anthropic announced that Claude Code subscribers can no longer use their subscription limits for third-party coding harnesses including OpenClaw, which will now require separate pay-as-you-go billing — a move the company framed as an engineering constraint driven by usage patterns that its subscriptions weren't built to support. The timing drew scrutiny: the policy change came shortly after OpenClaw's creator announced he was joining OpenAI, with the open source project continuing under OpenAI's support, prompting the founder to suggest Anthropic first copied popular OpenClaw features into its own closed tool before locking out the open source competition.

OpenAI, Anthropic, and Google Team Up Against Chinese Labs Accused of Copying Their AI Models. Three companies that typically compete for the same engineers are now sharing intelligence through the nonprofit Frontier Model Forum to combat "distillation attacks" — a technique where rivals feed prompts to a frontier AI model, collect the outputs, and use them to train a cheaper knockoff. The financial stakes are significant: US officials estimate unauthorized distillation costs Silicon Valley labs billions in annual profit, and Anthropic has specifically alleged that three Chinese AI companies used over 24,000 fake accounts to generate 16 million exchanges with Claude.

Anthropic Launches Its Most Powerful — and Most Dangerous — AI Model Yet. Anthropic announced Claude Mythos alongside Project Glasswing, a coalition of more than 40 tech companies — including Apple, Google, Microsoft, Cisco, and Broadcom — that will use the model to find and patch vulnerabilities across critical digital infrastructure, backed by $100 million in usage credits. The launch comes with an uncomfortable acknowledgment: Mythos is too dangerous to release to the general public, having already identified thousands of high-severity vulnerabilities across major operating systems, browsers, and the Linux kernel — including one in OpenBSD that had gone undetected for 27 years — and is capable of chaining multiple vulnerabilities into novel attacks. Cybersecurity experts quoted in the piece warn that open-weight models with similar capabilities could be accessible to ransomware actors within six months.

Everyone Agrees AI Scribes Are Increasing Health Care Costs. No One Agrees What to Do About It. A STAT investigation finds that behind closed doors, both insurers and health systems agree that AI ambient scribes are driving up billing intensity — through more accurate documentation of visit complexity, AI-prompted code suggestions, and higher patient volume as freed-up clinicians see more people — but neither side has a solution, and health economists predict the inevitable outcome is insurers lowering reimbursement rates across the board. The equity concern is pointed: the providers most likely to bear the cost of those rate cuts are under-resourced safety-net practices and community health centers that haven't had the resources to adopt the tools generating the uplift in the first place.

Meta Debuts Muse Spark, First AI Model Under Alexandr Wang. Meta launched Muse Spark, a homegrown AI model built over nine months under Scale AI founder Alexandr Wang that the company says significantly narrows its performance gap with models from OpenAI and Anthropic — competitive at multimodal understanding and health information processing, though Meta acknowledges it still trails on coding. The model will power Meta AI across Facebook, Instagram, and WhatsApp, with a shopping mode that combines language models with user behavior data and plans for an open-source release — though the piece flags that Meta's privacy policy sets few limits on how data shared with its AI system can be used.

🩺 At the point of care

I Uploaded My Blood Work to AI. Am I Oversharing? A WSJ personal tech columnist tested Claude and Perplexity's new health data connectors — which link wearables and medical records directly to consumer chatbots — finding the bots capable explainers for low-stakes queries but prone to overstatement, particularly around wearable data that clinicians don't consider clinically reliable. A UCSF physician who reviewed the experiment declined to endorse the practice outright, and the piece surfaces a finding from a KFF survey worth watching: after using an AI health tool, 42% of respondents said they didn't follow up with a doctor.

AI in the Mental Health Care Workforce Is Met With Fear, Pushback — and Enthusiasm. An NPR report examines the uneven adoption of AI across mental health care — from administrative tools like session transcription and EHR documentation, where there is broad clinician support, to clinical triage, where the picture is far more contested. The piece centers on the Kaiser Permanente mental health strike, where 2,400 workers walked out in part over the replacement of licensed triage clinicians with unlicensed staff following scripts, and features a psychiatrist at Beth Israel Deaconess who argues the field is moving toward a "hybrid" model — AI assistants handling homework and skill-building between sessions while human providers continue to deliver therapy — but warns that most small practices and community mental health centers lack the infrastructure to safely evaluate or implement the tools now flooding the market.

Former Geisinger CEO: U.S. Health Systems Must Replace Huge Numbers of People with AI. Former Geisinger CEO Glenn Steele argues that U.S. health systems have spent two decades building administrative infrastructure that now consumes more resources than clinical care — and that survival requires replacing 35% to 40% of back-office revenue cycle jobs with autonomous AI within five years, not incrementally augmenting them with copilots. His clinical argument is sharper: with an estimated 800,000 patients harmed annually by misdiagnosis or suboptimal therapy, he contends it may be a patient safety violation not to deploy autonomous clinical reasoning for prevalent chronic conditions, freeing physicians to focus on the cases that genuinely require their judgment.

Patients Are Using Chatbots to Fight Medical Bills, With Mixed Results. A New York Times investigation finds patients increasingly turning to Claude and ChatGPT to dispute medical bills and insurance denials — sometimes successfully, as in the case of a couple who used Claude to challenge a $22,604 hospital bill that was ultimately waived — but with significant limitations, including legal misinterpretations, missed options, and advice that assumes users have enough health system knowledge to supply the right context. Two structural concerns run through the piece: effective use of these tools requires a baseline of health literacy that many patients lack, and because chatbot companies are not bound by HIPAA, sensitive health and billing information shared with them carries privacy risks that most users don't fully understand.

Healthcare CIOs See AI Integration as a Competitive Necessity. A survey of more than 60 senior health IT leaders finds that 94% believe delaying AI deployment creates a competitive disadvantage — yet only 4% have scaled AI with measurable outcomes, while 45% remain stuck in pilot phases, with 74% citing EHR vendor dependency as the primary execution barrier. A separate consumer survey adds a trust dimension worth watching: while 89% of clinicians said AI-driven clinical decision support leads to better patient outcomes, 64% of health consumers said they would still prefer to see a provider who does not use AI at all.

🏛 Government & policy

FDA Rejects Deregulatory AI Proposal From Regulator's Former Parent Company. The FDA denied Harrison.ai's petition to exempt six categories of AI-enabled radiology devices from premarket authorization requirements, signaling a limit to how far the Trump administration will go in deregulating health AI.

Anthropic Loses Appeals Court Bid to Temporarily Block Pentagon Blacklisting. A federal appeals court denied Anthropic's request to temporarily block the Department of Defense's supply chain risk designation — meaning defense contractors cannot use Claude in their Pentagon work — while a separate federal court in San Francisco has barred the Trump administration from enforcing a broader ban on Claude across all government agencies. The underlying dispute traces to Anthropic's refusal to grant the Pentagon unfettered access to its models for all lawful purposes, including fully autonomous weapons and domestic mass surveillance, making this as much a governance question as a legal one.

😇 Ethics & responsible use

As AI Booms, a Challenge Emerges: Automation Complacency. An Altera Digital Health executive argues that as AI moves from pilots to enterprise deployment, the field's focus on hallucinations and bias is obscuring a subtler risk: clinicians becoming overly trusting of AI outputs and disengaging from active review. The piece uses ambient documentation as its primary example — where unreviewed notes can propagate small errors into the clinical record over time — and recommends organizational countermeasures including parallel rollouts, intentional error training simulations, and system design that requires active provider sign-off before AI outputs reach the patient record.

Beware Dr. Chatbot: Privacy Laws Don't Protect Health Care Data from AI. An ophthalmologist and clinical director argues that Americans are sharing detailed health histories with consumer AI platforms under the false assumption that HIPAA protections apply — when in fact HIPAA covers only providers, hospitals, and insurers, not conversational AI tools. The piece extends the concern into the exam room, where ambient scribes are capturing full clinical conversations that flow into third-party systems for billing and analytics — creating what the author calls a new health data supply chain that operates almost entirely outside traditional medical confidentiality.

OpenAI's Latest Release Is a Social Contract Revamp. OpenAI released a policy blueprint outlining how it believes governments should manage the disruptions its products are expected to cause — including incentivizing four-day workweeks, expanding healthcare and childcare benefits, funding Social Security and SNAP through increased taxes on AI-benefiting companies, and establishing a public wealth fund to redistribute AI-related investment returns to citizens. The document also addresses AI safety risks, calling for a greater government role in managing threats from the company's own models, including large-scale cyberattacks and engineered bioweapons.

An Update on Our Mental Health Work. Google announced updates to Gemini's mental health safeguards, including a redesigned "one-touch" crisis interface that surfaces immediate connections to hotline resources — chat, call, text, or web — when a conversation signals potential suicide or self-harm risk, with the option remaining visible for the remainder of the conversation. The announcement also includes $30 million in Google.org funding over three years to help scale global crisis hotlines, and specific protections for minors including guardrails preventing Gemini from acting as a companion, simulating intimacy, or claiming to be human.

Introducing the Child Safety Blueprint. OpenAI released a policy blueprint for combating AI-enabled child sexual exploitation, developed in coordination with the National Center for Missing and Exploited Children, the Attorney General Alliance, and child safety organization Thorn. The framework focuses on three areas: modernizing laws to address AI-generated and altered CSAM, improving provider reporting and coordination with law enforcement, and embedding safety-by-design measures directly into AI systems — with state attorneys general co-chairing the AI Task Force noting that effective safeguards require layered defenses that adapt continuously, not static technical controls.

🔬 Research & evidence

Small Language Models for Developing Agentic AI in Healthcare: A Comprehensive Systematic Review and Critical Analysis. A systematic review published in Cureus examined 35 studies on small language models — defined as systems with fewer than 10 billion parameters — deployed for agentic tasks in healthcare settings, including clinical documentation, patient triage, decision support, and administrative automation. The authors conclude that SLMs offer sufficient capability for most routine clinical workflow tasks while delivering meaningful advantages over larger models in cost, latency, and deployability — particularly in resource-constrained settings like rural hospitals and community health centers. The evidence base, however, remains limited by heterogeneous study designs and a lack of long-term clinical outcome data, and the authors call for standardized evaluation frameworks before widespread adoption.

Polls Show Most Americans Still Prefer Providers to AI for Health Advice. Two new polls find Americans remain skeptical of AI as a health information source: a Pew Research Center survey of more than 5,000 adults found that 85% rely on providers for health advice while only 22% have used AI chatbots, and an Ohio State Wexner Medical Center poll found that the share of Americans open to AI being used in their care fell from 52% in 2024 to 42% in 2025. The uninsured are more likely than those with coverage to turn to AI and social media for health guidance — a finding that adds an equity dimension to the question of who is most exposed to the risks of unverified AI health information.

Indiana Researchers Study the Potential of AI-Powered Health Care. A new Indianapolis-based Center for AI and Robotic Excellence in Medicine (CARE) — a collaboration between Purdue's engineering school, the IU School of Medicine, and Indiana's Clinical and Translational Sciences Institute — is organizing research around four areas: AI-assisted surgical robots, digital patient twins for risk-free treatment simulation, autonomous smart laboratories, and AI-powered care for remote and combat settings. The center's founding report flags persistent challenges including opaque AI decision-making, biased training data, unresolved liability questions, and robotic systems that cost millions and remain accessible only to large urban hospitals.

🛠️ Practical Edge: Actionable tips, tools, and thoughts to help leaders strengthen capacity and apply AI in their work.

10 Things I Changed to Stop Hitting Claude's Usage Limits. An X thread making the rounds offers practical token-management habits for heavy Claude users — including editing prompts instead of sending follow-ups (each new message forces Claude to re-read the entire conversation history), batching multiple questions into a single message, starting fresh chats every 15–20 exchanges, and using lighter models like Haiku for drafts and simple tasks.
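
The arithmetic behind those habits is worth seeing once: because each new message re-sends the entire conversation history, input-token usage grows quadratically with chat length. Here's a back-of-the-envelope sketch — the per-turn token count and reset interval are illustrative assumptions, not measured figures from any provider:

```python
# Toy model of why long chats burn tokens: every turn re-sends all
# prior turns, so total input tokens grow quadratically with length.

def tokens_processed(turns, tokens_per_turn=500):
    """Total input tokens when turn k re-sends k turns of history."""
    # Sum of tokens_per_turn * (1 + 2 + ... + turns).
    return tokens_per_turn * turns * (turns + 1) // 2

def tokens_with_resets(turns, reset_every=15, tokens_per_turn=500):
    """Same conversation, but started fresh every `reset_every` turns."""
    full_chats, remainder = divmod(turns, reset_every)
    return (full_chats * tokens_processed(reset_every, tokens_per_turn)
            + tokens_processed(remainder, tokens_per_turn))

if __name__ == "__main__":
    print(f"60-turn single chat:  {tokens_processed(60):,} input tokens")
    print(f"Reset every 15 turns: {tokens_with_resets(60):,} input tokens")
```

On these toy numbers, starting fresh every 15 turns cuts input tokens by roughly three-quarters over a 60-turn conversation — which is the intuition behind both the "edit instead of append" and "fresh chats every 15–20 exchanges" habits.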

There's a Good Reason You Can't Concentrate. Georgetown computer scientist and "Deep Work" author Cal Newport argues that the same forces degrading attention spans — social media, constant connectivity, and now AI — are converging into a cognitive fitness crisis that warrants a public health response on par with the diet-and-exercise revolution of the mid-20th century. His warning for AI users is specific: offloading cognitively demanding work to AI doesn't just save time, it atrophies the thinking muscles that make the output worth anything — and any use of AI that mainly serves to make core professional tasks less mentally demanding should be treated with caution.

Decision-Making by Consensus Doesn't Work in the AI Era. A Harvard Business Review piece argues that consensus-driven decision-making — the dominant management model of the past half-century — has two fatal weaknesses in the AI era: it's too slow and it distorts information as signals get filtered and smoothed on their way up the hierarchy. The authors propose two structural replacements: "Autonomous Scrums" — interdisciplinary teams of six to eight people empowered to own outcomes rather than just recommend — and the OVIS framework, in which one person Owns the decision, two or three Veto or Influence it, and everyone else Supports the outcome, eliminating the diffused accountability that makes consensus cultures resistant to speed.

AI Skill of the Day: Break Big Tasks Into Stages. A practical tip from The Neuron: handing AI a large, open-ended task in a single prompt is one of the most reliable ways to get output that looks polished but isn't actually useful. The better approach is to break the work into sequential steps — outline first, then pressure-test the outline, then draft one section at a time, then refine tone, then extract action items — a method that applies to writing, research, presentations, and anything else with more than one moving part. The insight generalizes: the professionals getting the most out of AI aren't writing cleverer single prompts, they're better at decomposing messy work into smaller, clearer steps.
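
Under the hood this is just sequential chaining: each stage's output becomes context for the next prompt. A minimal sketch, where `ask` is a hypothetical stand-in for whatever chat API or interface you actually use (not a real library call):

```python
# Staged prompting: decompose one big ask into a pipeline of small ones,
# feeding each stage's output into the next prompt.

def ask(prompt: str) -> str:
    # Placeholder for illustration; swap in a real chat API call.
    return f"<model response to: {prompt[:40]}>"

def staged_draft(topic: str) -> str:
    outline  = ask(f"Outline a short piece on: {topic}")
    critique = ask(f"Pressure-test this outline for gaps:\n{outline}")
    revised  = ask(f"Revise the outline using this critique:\n{critique}")
    draft    = ask(f"Draft only the first section from:\n{revised}")
    return     ask(f"Tighten the tone of this draft:\n{draft}")
```

The payoff is checkpointing: each intermediate result can be inspected or hand-edited before the next stage runs — exactly what a single mega-prompt denies you.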

To Succeed with AI, You've Got to Nail the Basics. A Harvard Business Review piece argues that most organizations are failing at AI not because of the technology but because they are layering it on top of weak data foundations, unclear customer definitions, and unmeasured outcomes — and expecting AI to hide those problems rather than expose them. The author's five-part framework draws from decades of quality management: know who your customers are (internal and external), manage your processes so AI has trusted data to work with, measure what actually matters rather than just productivity proxies, embrace continuous small improvements, and recognize that none of it works without people genuinely engaged in the effort.

Note to my readers: I’d love to learn how you are using AI. If there’s a novel way you are deploying AI in your work, or seeing it utilized in healthcare, please feel free to shoot me a note and share: [email protected] 

🌅 On the Horizon: A quick look at the developments and events expected to shape the weeks ahead.

👉 Jun. 8–10, 2026 — Fortune Brainstorm Tech, Aspen, CO

And finally, if you like what you are reading, please share this newsletter with your networks and encourage them to sign up. ✍️ 🆙 And/or give me a shout-out on LinkedIn.

Till next time,

BC

Keep Reading