🚀 Mission View: A sharper perspective on this week's top issues that matter at the intersection of health and AI.

Some people say that intelligence is about recognizing patterns. So here's a pattern I'm seeing emerge from research published over the past two weeks: While AI is rapidly advancing into healthcare, healthcare itself remains a uniquely human endeavor.

When the Human Is the Variable

Last week, I highlighted research from the Oxford Internet Institute showing that LLMs performed no better than traditional methods like online searches when people tried to identify conditions and assess treatment options. But the more interesting finding was this: participants didn't know what information the LLMs needed in order to offer accurate advice.

In other words, the reliability of AI's output was contingent on the user's ability to provide accurate information and the right prompts. The system's usefulness depended on the human's skill at using it.

This week, a study from Saudi Arabia published in Healthcare (MDPI) provides quantitative evidence for something similar on the trust side. The research examined what drives trust in using AI for health-related decision-making among adults. The answer? It's not primarily about the technology itself.

Trust in AI was strongly associated with patient satisfaction and, surprisingly, inversely associated with the quality of the patient-doctor relationship. Patients who had strong relationships with their physicians showed lower trust in AI, perhaps because they already had what they needed.

The takeaway? Efforts to promote AI adoption can't focus exclusively on technical performance. As the paper’s authors note, "AI trust is closely linked to existing trust structures within healthcare systems, rather than representing a distinct or independent form of trust."

When the Art Can't Be Automated

Two other publications this week show how providers are grappling with similar limitations from the other side of the stethoscope.

In Scientific American, Hilke Schellmann documents how nurses are navigating AI tools in clinical settings. A recurring theme: AI's ability to accurately support patient care is limited by the inputs it receives, and by what it can't receive.

Melissa Beebe, an RN at UC Davis Health, put it plainly: "I can't tell you how many times I have that feeling, I don't feel right about this patient. It could be just the way their skin looks or feels to me." Elven Mitchell, an ICU nurse at Kaiser Permanente, echoed the point: "Sometimes you can see a patient and, just looking at them, [know they're] not doing well. It doesn't show in the labs, and it doesn't show on the monitor. We have five senses, and computers only get input."

Meanwhile, in STAT News, Angus Chen examines AI's promise in oncology, particularly in digital pathology tools that can predict treatment response. The tools show real potential in situations where oncologists lack a clear indication of which treatment will work better.

But oncologist Danielle Bitterman at Dana-Farber flags where this gets complicated: "That can be tricky if AI tools start providing recommendations in situations that are clinically murkier." For example, when the standard of care is clearly superior in trials but carries much higher toxicity than a less effective alternative. "Deciding which therapy is a balancing act between the medication's efficacy and how well the patient may be able to tolerate it. That's where the art of medicine occurs."

The Pattern

Across all four pieces mentioned above, from patient interactions with health chatbots to nurses' bedside assessments to oncology treatment decisions, the pattern that stands out to me is that AI systems struggle most at the interfaces where healthcare is least algorithmic.

Where patients don't know what questions to ask. Where nurses sense something is wrong before it shows up in data. Where physicians must weigh efficacy against a patient's ability to tolerate suffering. Where trust is built through a relationship, not technical performance.

AI may indeed transform medicine; I’m hopeful it will for the better. But this week's evidence suggests that transformation will depend less on the technology's capabilities than on our ability to preserve, and design around, the irreducibly human dimensions of care. The systems most likely to succeed won't be the ones that eliminate human judgment. They'll be the ones that recognize where human judgment is irreplaceable and build it in.

🛜 Field Signals: A quick hit on this week’s industry announcements, policy developments, and ethical considerations.

🏗️ Industry news

Introducing Claude Sonnet 4.6 Anthropic released Claude Sonnet 4.6, its most capable Sonnet model yet, with major upgrades across coding, computer use, and long-context reasoning. The model approaches Opus 4.5 performance at Sonnet pricing and demonstrates human-level capability in complex multi-step tasks.

The doomsday scenario for AI and jobs Derek Thompson examines three scenarios for AI's employment impact, noting that recursive AI can roll itself out faster than past technologies. Anthropic CEO Dario Amodei predicts AI could eliminate 50% of entry-level white-collar jobs.

OpenAI just hired the guy who built the most viral AI tool since ChatGPT OpenAI hired Peter Steinberger, creator of OpenClaw (fastest-growing GitHub repo in history at 175K+ stars), to lead personal agents. Steinberger predicts AI agents could replace 80% of the apps on your phone.

In a financial pinch, major health insurers are turning to AI for help Major insurers are accelerating AI adoption, with UnitedHealth pledging $1.5B in 2026 investment to cut $1B in costs. Providers report suspicious denial patterns and an emerging AI arms race, but secrecy around AI use means neither side knows when the other is deploying it for financial advantage.

🩺 At the point of care

Dr. Oz pushes AI avatars as a fix for rural health care. Not so fast, critics say Dr. Oz is proposing AI avatars for medical interviews as part of a $50B rural healthcare plan amid the closure of 190+ hospitals. In keeping with our theme above, critics warn this removes human connection and tests unproven tech on underserved populations, raising concerns about a two-tiered healthcare system.

AI is everywhere in healthcare. But no one agrees on what it's actually for. AI adoption is outpacing governance strategy in hospitals, with 50+ major systems hiring C-suite AI leaders in 2025. One executive admitted: "We haven't been able to get our arms around it yet. That could be very dangerous."

Testing the boundaries of artificial intelligence in care delivery: Utah's prescription renewal pilot program Utah launched a first-in-nation pilot in January 2026 allowing autonomous AI to legally issue prescription renewals for chronic conditions under a regulatory sandbox framework. Federal-state tensions are emerging as a December 2025 Executive Order promotes uniform national standards that may preempt state experiments.

'Shadow AI' continues to lurk in healthcare settings A Wolters Kluwer survey of 500+ healthcare workers found 17% admitted using unauthorized AI tools, with nearly half citing the need to hasten workflows and a third saying their workplace lacked approved options. The chief concern is data privacy—uploading protected health information to platforms like ChatGPT for training purposes—though experts note the problem has decreased as more organizations establish enterprise LLM policies and guardrails.

🏛 Government & policy

White House pressures Utah lawmaker to kill AI transparency bill The White House called Utah's AI transparency bill "unfixable" and offered no compromise. This signals broader federal intervention to preempt state AI regulation.

Red and blue states alike want to limit AI in insurance. Trump wants to limit the states. At least six states enacted AI health insurance laws despite Trump's December executive order seeking to preempt state regulation. Harvard's Carmel Shachar calls the order "possibly unconstitutional," while NY legislator Alex Bores framed the stakes: "The question is, should it be state or not at all?"

😇 Ethics & responsible use

Biological data governance in an age of AI Over 100 researchers from Johns Hopkins, Oxford, and Stanford endorsed a Biosecurity Data Levels framework published in Science for governing pathogen data in the AI age. The concern: once dangerous biological data hits the open web, it can't be recalled, and AI models could enable pathogen design.

🔬Research & evidence

Accelerating AI innovation in healthcare: real-world clinical research applications on the Mayo Clinic Platform Mayo Clinic Platform provides de-identified EHR data from 15.1M+ patients, along with cloud computing (up to 8 H100 GPUs), for AI research. The paper reports training a BiGRU model in ~10 minutes on 15K patients, positioning the platform as next-generation infrastructure for AI-driven translational medicine.

Merck to leverage Mayo Clinic platform for AI-enabled drug discovery Merck and Mayo Clinic signed a strategic research agreement giving Merck access to Mayo's de-identified clinical data, genomics, imaging, and clinical notes from 15.1M+ patients through the Platform_Orchestrate architecture. The partnership will initially target inflammatory bowel disease, atopic dermatitis, and multiple sclerosis, using AI to identify drug targets and validate models with real-world evidence.

Paying for AI in U.S. healthcare The Bipartisan Policy Center examines reimbursement barriers for clinical AI, finding that most AI services don't fit Medicare's benefit categories and rely on uncertain "carrier pricing" by regional contractors. This uncertainty incentivizes developers to prioritize administrative AI over clinical applications, and adoption remains concentrated in wealthier, metropolitan areas—creating geographic inequities in access to AI-enabled care.

🛠️ Practical Edge: Actionable tips, tools, and thoughts to help leaders strengthen capacity and apply AI in their work.

Becoming an artist to outrun the machines Dan Hockenmaier argues that as AI progresses, three skills become more valuable: taste (evaluating quality), judgment (deciding under uncertainty), and influence (shaping human activity). And don’t miss his quadrant graphic outlining four types of people at every company right now. Which one are you, or do you want to be?

Listen to audio summaries in Google Docs Google Workspace is rolling out Gemini-powered audio summaries in Google Docs, with customizable voices and playback speeds, starting February 12, 2026 for Business/Enterprise users.

With rise of agents, we are entering the world of identic AI Tech strategist Don Tapscott introduces "identic AI" — personalized agents that learn your values and judgment and act as extensions of you, not just tools. Tapscott's practical advice: get knowledgeable now, use the tools yourself, and start mapping which roles in your organization will be amplified, compressed, or shifted toward supervising agents.

Why AI adoption stalls, according to industry data Research from Fractional Insights and Ferrazzi Greenlight surveying 3,000+ employees found a paradox: high AI angst drives usage (high-angst employees report 65% of their job is AI-assisted vs. 42% for low-angst) while simultaneously doubling resistance. Technology and finance sectors show the highest belief in AI's value paired with the highest anxiety (48% above manufacturing/education), while healthcare shows lower angst thanks to mission-alignment framing. The suggestion: adoption strategies must address industry-specific psychological patterns, not just technical training.

Note to my readers: I’d love to learn how you are using AI. If there’s a novel way you are deploying AI in your work, or seeing it utilized in healthcare, please feel free to shoot me a note and share: [email protected] 

🌅 On the Horizon: A quick look at the developments and events expected to shape the weeks ahead.

👉 Mar. 12–18, 2026 — SXSW 2026, Austin, TX

👉 Mar. 27 — “The AI Doc: Or How I Became An Apocaloptimist” opens in theaters. Watch the trailer

👉 Mar. 30–31, 2026 — IAPP Global Privacy Summit, Washington DC

👉 Apr. 6–9, 2026 — HumanX 2026, San Francisco, CA

And finally, if you like what you are reading, please share this newsletter with your networks and encourage them to sign up. ✍️ 🆙 And/or, give me a shout out on LinkedIn.

Till next time,

BC
