Programming note: After noticing we were missing critical stories that break on Friday mornings, we’ve decided to push our distribution to noon EST.

🚀 Mission View: A sharper perspective on this week's top issues that matter at the intersection of health and AI.

Today is jobs day: a new government report is out on how many jobs were added or lost in the U.S. in February. The topline: the economy shed 92,000 jobs across all sectors, including health care. That's a switch from prior reports, which showed nearly all of the jobs added to the U.S. economy in recent months coming from the health sector.

Health care and social assistance accounted for 95 percent of the 130,000 jobs added in January, according to the Bureau of Labor Statistics. That builds on similar trends throughout 2025, when the industry buoyed an otherwise slow labor market — hospitals, clinics, and nursing homes kept hiring even as many employers pulled back. Without health care and related jobs, the United States would have nearly 400,000 fewer jobs than it did a year ago.

There are plenty of forces that could change that trajectory. Immigration policies are tightening the labor supply. The recently passed Big Beautiful Bill will reduce Medicaid spending over the next decade, which could dampen hiring across the sector. And the latest jobs report suggests labor strikes and weather-related events may also have weighed on hiring.

But what will be the impact of AI? (Note: Anthropic just released a study trying to answer this very question, in which it found computer programmers, customer service reps, and financial analysts were the professions most exposed to AI automation).

Two narratives take shape. Both can be true.

In one version of the future, AI automates swaths of healthcare work, eliminating jobs. In the other, it augments clinicians and staff, making everyone more productive while demand absorbs the gains. The evidence this week suggests the answer is more nuanced than either camp allows.

A health workforce readiness study from Accenture focused on Northwest Arkansas found that about 39 percent of time spent in healthcare roles will be affected by AI, but the exposure is uneven. Administrative and non-clinical roles face the highest automation risk. Clinical roles are far more likely to be augmented. The more training or education a role requires, the less likely it is to be replaced by a machine.

This tracks with broader labor market data. A Harvard Business School study analyzing nearly all U.S. job postings from 2019 through March 2025 found that openings for routine, automation-prone roles fell 13% after ChatGPT's debut, while demand for more analytical, technical, and creative jobs grew 20%. The pattern in healthcare mirrors the economy-wide trend: the exposure is uneven, and the direction depends on the nature of the work.

This distinction — augmentation versus replacement — is also playing out in Washington. At a Senate hearing this week, Sen. Ted Budd (R-NC) and Rad AI's chief innovation officer both stressed that AI tools are meant to expand physician capabilities, not replace providers. As Rad AI's Demetri Giannikopoulos put it: "AI will not replace physicians, it will help them do what they train their entire lives to do — care for patients. Less hype, more help."

What can’t be automated.

If AI absorbs the administrative and operational work that currently employs a significant share of the healthcare workforce, the sector's role as an engine of job growth changes shape, even if clinical hiring holds steady.

What it won't absorb are the parts of healthcare that remain irreducibly human — a theme I explored in last week's edition. The nurse who senses something is wrong before it shows up in the data. The physician weighing efficacy against a patient's ability to tolerate suffering. The trust built through relationship, not technical performance. Those roles aren't just safe from automation. They're where healthcare actually happens.

Health care is the engine of American job growth right now, and AI will reshape it in some form. Some roles will be more resilient than others. How well we manage that transition, especially for the roles that are automated, depends on decisions that, frankly, aren't being made right now.

🛜 Field Signals: A quick hit on this week’s industry announcements, policy developments, and ethical considerations.

🏗️ Industry news

OpenAI's Pentagon Partnership Draws Backlash OpenAI's expanding relationship with the Pentagon — including work involving surveillance and defense applications — has triggered significant public backlash. We're not spending much time here since this has been widely covered, but for readers who haven't been following the dispute, Platformer has a solid rundown.

Anthropic's Claude Surges to #1 on Apple's App Store Anthropic's Claude dethroned ChatGPT as the top free iPhone app over the weekend, fueled by public backlash against OpenAI's Pentagon deal and the Trump administration's decision to cut off federal access to Anthropic's tools. Google searches for "Anthropic" hit an all-time high, and the company reported record sign-ups — enough to briefly crash the platform on Monday.

19-Year-Old Sells AI Calorie-Tracking App to MyFitnessPal After $40M Year Zach Yadegari built Cal AI — an app that tracks calories by processing photos of food — while still in high school, using OpenAI's Images API. The app earned $40 million in the past 12 months and employs around 30 people. It's a proof point of how AI is transforming who can start and sell a successful business: a teenager with an API key and a personal pain point built something a market leader wanted to acquire.

Can AI Be Pro-Worker? In The New Yorker, John Cassidy profiles a new Brookings report from Nobel Prize–winning economists Daron Acemoglu and Simon Johnson, along with MIT's David Autor, that challenges the assumption of societal powerlessness in the face of AI. Their argument: government — particularly as a major purchaser of technology in health and education — has real leverage to push AI toward augmenting workers rather than replacing them.

CVS Health to Launch AI-Powered Health Tech Subsidiary CVS Health plans to launch Health100, an AI-based consumer platform powered by Google Cloud and Gemini, designed to help patients find providers, compare care costs, centralize health records, and receive care management recommendations between visits. The platform will roll out midyear to CVS customers first before expanding to outside providers and companies.

🩺 At the point of care

Why "Invisible Work" Is Healthcare's Biggest AI Opportunity A Forbes Technology Council piece argues that the highest-impact AI use case in healthcare isn't clinical decision-making — it's the repetitive, high-volume administrative work that overwhelms staff: scheduling, refill requests, billing questions, after-hours calls.

RecovryAI Receives FDA Breakthrough Device Designation for Patient-Facing Clinical AI RecovryAI emerged from stealth with FDA Breakthrough Device Designation for its Virtual Care Assistants — physician-prescribed, patient-facing AI designed to support patients during post-operative recovery at home. The tool delivers recovery guidance based on clinical protocols and escalates deviations to the care team. CEO Scott Walchek draws a clear line between consumer wellness apps and clinical AI that carries responsibility inside the care pathway, arguing FDA authorization is the foundation for trust, accountability, and scale.

Healthcare Is AI's Hardest Test A TIME Ideas piece draws on interviews with Geoffrey Hinton, Eric Topol, and others to frame the central tension in clinical AI. In some settings, AI alone is already outperforming physicians — but in others, it's dangerously unreliable, including a recent Nature Medicine study where ChatGPT triaged incorrectly more than half the time. The piece also surfaces a legal asymmetry worth noting: if a doctor skips an available AI tool and a patient dies, no one is sued; if a doctor uses AI and harm follows, liability is immediate.

Clearing Up Some Healthcare AI Misunderstandings A HealthTech Q&A with three academic experts — from UTHealth Houston, Yale New Haven Health, and Tulane — tackles persistent misconceptions head-on. Among them: that AI will replace clinicians, that hallucinations mean models are broken (they're just inaccurate predictions), and that cost savings are imminent. Yale's Lee Schwamm offers a particularly sharp observation: most AI deployed in healthcare right now doesn't touch patient care directly — it's focused on cost containment, revenue growth, and reducing provider burden.

AWS Launches Agentic AI Solution for Healthcare Providers Amazon Web Services released Amazon Connect Health — its first purpose-built healthcare solution — targeting scheduling, patient verification, ambient documentation, and medical coding. The solution uses natural language voice to handle administrative tasks end-to-end, with built-in EHR integration and escalation to staff when a situation requires human judgment. UC San Diego Health reports saving one minute per call and diverting 630 hours weekly from verification to direct patient care.

🏛 Government & policy

Health Tech's Wish List for HHS Takes Shape In response to HHS's request for information on boosting clinical AI adoption, major health tech firms — including Epic, Oracle, Abridge, Aidoc, and Doctronic — submitted detailed proposals. Common asks: dedicated Medicare reimbursement codes for AI-performed services, adoption grants to cover infrastructure costs, and expanded regulatory exemptions for AI devices. The submissions offer a window into where industry sees the policy bottlenecks, and where it wants government to open the checkbook.

😇 Ethics & responsible use

America Is Betting on AI While Ignoring Its Biggest Healthcare Weakness A guest article from George Washington University researchers argues that AI's promise in healthcare will collide with a fragmented data infrastructure that has stymied innovation for years. The core tension: AI depends on large, diverse datasets, but U.S. patient data remains locked in disconnected systems optimized for billing, not learning. Countries with centralized health systems already have a structural advantage; without deliberate action, the U.S. risks squandering the opportunity.

Gemini Chatbot Cited in Wrongful Death Lawsuit ⚠️ Content Warning: This post discusses experiences with suicide/suicidal thoughts. If you or someone you know is struggling, help is available. You can call or text 988 anytime in the US. A Wall Street Journal investigation details what appears to be the first wrongful-death suit naming Google's Gemini. A 36-year-old Florida man died by suicide after extended voice conversations with the chatbot. The lawsuit raises urgent questions about the safety of emotionally responsive AI voice interactions, particularly Gemini's "affective dialog" feature that detects and responds to user emotion. Google said the model referred the individual to a crisis hotline multiple times but acknowledged AI models "are not perfect." The case adds to a growing body of litigation alleging AI-related psychological harms.

🔬Research & evidence

LLMs in Clinical Medicine: Lots of Studies, Little Rigorous Evidence A Nature Medicine systematic review identified 4,609 peer-reviewed studies evaluating LLMs in clinical medicine between 2022 and 2025 — roughly 3.2 papers per day. But only 1,048 used real patient data, and just 19 were prospective randomized trials. Most tested simulated scenarios or exam-style tasks. LLMs outperformed humans in 33% of head-to-head comparisons, but that rate dropped significantly when tasks involved real clinical data versus textbook-style questions. At least 25% of studies had sample sizes under 30. The takeaway: the evidence base for clinical AI is growing fast but remains shallow — reinforcing the evaluation gap theme from last week's issue.

AI Therapist? It Falls Short, a New Study Warns Brown University researchers found that even when major AI systems — including GPT, Claude, and Llama — are prompted to act as trained therapists, they consistently fail to meet professional ethics standards. Reviewing AI-led counseling sessions, licensed psychologists identified 15 distinct risks across five areas: ignoring individual context, poor collaboration, deceptive empathy, bias, and weak crisis response. The researchers stress that prompts alone can't make AI safe for therapy, and that no regulatory frameworks exist to hold AI counselors accountable the way licensing boards do for human therapists.

🛠️ Practical Edge: Actionable tips, tools, and thoughts to help leaders strengthen capacity and apply AI in their work.

Gen AI Won't Make Your Employees Experts A controlled experiment at a fintech firm found that AI helped workers with adjacent expertise nearly match specialists, but did little for those with no domain background — a phenomenon researchers call "the AI wall." The implication for healthcare leaders: AI tools paired with staff who lack clinical or operational context may not close performance gaps the way organizations expect, making baseline training and workflow redesign essential complements to AI deployment.

Anthropic Launches "Import Memory" Tool Anthropic released a tool that lets users port saved preferences, project context, and behavioral settings from ChatGPT, Gemini, or Copilot into Claude with a single copy-paste. The company also opened its memory feature to free-tier users for the first time. For organizations evaluating AI platforms, the move lowers a key switching cost — accumulated institutional context — and signals that persistent memory across sessions is becoming table stakes for enterprise AI tools.

NotebookLM Adds Custom Infographic Styles Google's NotebookLM rolled out custom styles for its infographic feature, with 10 presets (including editorial, clay, brick, and kawaii) plus the ability to create custom styles. Another step toward making NotebookLM a practical tool for turning research into shareable visuals without design expertise.

Claude Code "Skills" for Reusable AI Instructions Anthropic's Claude Code supports "Skills" — markdown files that teach the coding agent how to handle recurring tasks (PR reviews, commit messages, brand guidelines) without re-explaining preferences each session. If you've been curious about what Skills are and how they work, this short explainer video is a good place to start.
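To make the concept concrete, a skill is essentially a markdown file (commonly named `SKILL.md` inside a skill folder) with a small frontmatter block and plain-language instructions. The file layout, frontmatter fields, and rules below are an illustrative sketch rather than an excerpt from Anthropic's documentation; check the official docs for the exact conventions:

```markdown
---
name: pr-review
description: Review pull requests against our team's conventions
---

When asked to review a pull request:

1. Check that the PR title follows our "type: summary" format
   (e.g. "fix: handle empty patient roster").
2. Flag any new public function that lacks a docstring or comment.
3. Note any change that touches authentication or PHI handling,
   and call it out in a separate "Risk" section.
4. End with a short bullet summary of requested changes.
```

Once a file like this is in place, the agent can apply those standing instructions to every future review without the user restating them each session.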

Anthropic’s Free AI Course Library Anthropic offers 13 free courses covering topics from Claude 101 to building MCP servers to broader AI fluency. A solid starting point for anyone looking to build hands-on skills with Claude or deepen their understanding of how to work with AI tools more effectively.

How AI Damages Work Relationships — and Where It Can Actually Help An HBR piece argues that in the rush to deploy AI for productivity, we're outsourcing the small, messy interactions that actually build workplace relationships. The author flags several costs: increased cognitive load when colleagues can't tell if they're talking to a person or a prompt, erosion of trust when AI-generated "workslop" shifts the burden to recipients, and the loss of productive friction that drives creative collaboration.

A Closer Look at Perplexity Computer For readers curious about Perplexity's new "Computer" platform — the multi-agent system that plans, delegates, and executes tasks autonomously across specialized AI models — this video offers an introduction to what it does and how it works.

Note to my readers: I’d love to learn how you are using AI. If there’s a novel way you are deploying AI in your work, or seeing it utilized in healthcare, please feel free to shoot me a note and share: [email protected] 

🌅 On the Horizon: A quick look at the developments and events expected to shape the weeks ahead.

👉 Mar. 12–18, 2026 — SXSW 2026, Austin, TX

👉 Mar. 27 — “The AI Doc: Or How I Became An Apocaloptimist” opens in theaters. Watch the trailer

👉 Mar. 30–31, 2026 — IAPP Global Privacy Summit, Washington, DC

👉 Apr. 6–9, 2026 — HumanX 2026, San Francisco, CA

And finally, if you like what you are reading, please share this newsletter with your networks and encourage them to sign up. ✍️ 🆙 And/or, give me a shout out on LinkedIn.

Till next time,

BC

Keep Reading