
🚀 Mission View: A sharper perspective on this week's top issues that matter at the intersection of health and AI.
Amazon made news last week with the launch of Health AI, a chatbot available to all U.S. Prime subscribers that can access your medical records, recommend treatments, surface pharmacy products, and connect you to a doctor.
The coverage framed it primarily as a privacy story. That framing is too narrow. What Amazon has built is a new version of vertical integration in healthcare.
Here’s how it works. Amazon Health AI sits at the top of the funnel, capturing the patient relationship at the moment of highest vulnerability: when someone is describing a symptom or trying to understand a diagnosis. From there, it can route that patient to Amazon Pharmacy for medications, or to One Medical, Amazon's primary care network, for a visit. One company. One data trail (supplemented by records from other providers, if the patient grants permission). One set of financial incentives running through all of it.
We've seen this movie before.
Back in January, Congress spent hours grilling the CEOs of the nation's largest health insurers about this exact dynamic. Rep. Alexandria Ocasio-Cortez called it what it is: "Vertical integration is destroying people's ability to access care." Rep. Gregory Murphy, a physician, said he'd turn the whole industry "into dust" if he could. The criticism was bipartisan and pointed.
One week later, Senators Elizabeth Warren and Josh Hawley introduced the Break Up Big Medicine Act, which would prohibit parent companies from simultaneously owning insurers, PBMs, and medical providers. The bill reflects a growing consensus across the political spectrum that end-to-end ownership of the healthcare supply chain is a structural problem that is raising prices for patients.
The entry point changes everything.
Traditional vertical integration in healthcare starts with the payer. You own the insurance product, and from there you build or acquire the assets that let you keep more of the premium dollar: the PBM, the pharmacy, the physician practice. The patient is largely passive in this model. They show up; the system processes them.
Amazon's model is different. It starts with the patient relationship. Every time a person describes how they feel, the chatbot functions as an intelligence-gathering operation. The more health questions Amazon's chatbot fields, the more it learns about its users. The more it learns, the better it gets at routing them toward Amazon's own products and services.
The regulatory gap.
Here's the problem: the Break Up Big Medicine Act targets insurers, PBMs, and providers. It doesn't reach a technology company that owns a chatbot, a pharmacy, and a primary care network, at least not obviously.
Amazon is unlikely to remain the only technology company pursuing this model, though it was positioned to deploy it first. The combination of AI-powered patient engagement, pharmacy fulfillment, and care delivery is probably too valuable for competitors to ignore. And as we've already seen, AI companies are racing into the health sector, competing with one another to be the health chatbot of choice for consumers.
Bottom line: Congress is focused on the last generation of vertical integration. Rightfully so. But a new generation may already be here.
🛜 Field Signals: A quick hit on this week’s industry announcements, policy developments, and ethical considerations.
🏗️ Industry news
Introducing Perplexity Health Perplexity has launched Perplexity Health, a suite of connectors linking users' personal health data (electronic health records, lab results, wearables) into a single AI-powered interface that can answer health questions grounded in peer-reviewed clinical literature. Though late to the party, Perplexity now joins OpenAI, Microsoft, and Anthropic as major AI players staking a claim in the consumer health information space.
The Differing Stakes for Data in AI Basecamp Research, a company sending explorers to remote ecosystems to collect biological data for AI-driven drug discovery, offers a striking contrast to how the broader AI industry handles training data. Since 2023, Basecamp has paid royalties to 60 organizations across 21 countries based on the use of their genetic data, building systems to tag, track, and compensate data contributions down to the source. The key takeaway: people may be far more willing to share their data to cure a disease than to generate content, much as they participate in clinical trials to pay it forward.
OpenAI to Cut Back on Side Projects in Push to 'Nail' Core Business In an all-hands meeting, OpenAI's CEO of Applications Fidji Simo told staff the company is deprioritizing an array of projects — including its video generator Sora and e-commerce features — to refocus on coding and enterprise customers. The pivot is a direct response to Anthropic's growing dominance in the business market.
How Google Is Using AI to Improve Health At its annual health event (“The Check Up”), Google announced a $10 million commitment to reimagine clinician education in the AI era, partnerships focused on rural health transformation in Arkansas, and a series of Fitbit upgrades — including the ability to link medical records directly to the app for personalized coaching.
🩺 At the point of care
Top 10 Uses of AI in Healthcare A useful landscape overview of where AI is gaining the most traction in clinical and operational settings, from medical imaging — where more than 75% of FDA-authorized AI devices are concentrated — to clinical decision support, robotic-assisted surgery, precision medicine, and drug discovery. For health system leaders mapping their own AI priorities, it's a practical reference for understanding where the technology is most mature and where the clearest evidence base exists.
Three Systems Whose AI and Virtual Care Efforts Are Paying Off Three health systems are reporting concrete results from AI and virtual care investments: Sutter Health enrolled 6,000 patients in a remote blood pressure monitoring program, with 80% achieving controlled blood pressure within six months; Sanford Health's virtual care initiative saved rural patients more than $40 million in travel costs in 2025 and reached 14,000 patients who had never previously accessed behavioral health services; and Baptist Health reduced sepsis mortality and used an AI model to identify more than 100 potential pediatric human trafficking victims — then shared the model with Epic for national use.
🏛 Government & policy
Blackburn Releases Discussion Draft of National AI Policy Framework Sen. Marsha Blackburn (R-TN) has released a discussion draft of the TRUMP AMERICA AI Act, a sweeping federal framework that would preempt state AI laws, sunset Section 230, establish copyright protections for creators whose work is used to train AI models, require third-party audits for political bias in AI systems, and mandate workforce impact reporting. Additionally, this morning, the Trump administration sent its own framework to the Hill for consideration. Where either of these goes is an open question. Preemption of state laws remains an unsettled matter.
How ARPA-H Is Developing FDA-Authorized AI Agents, Tested in Clinical Trials ARPA-H's ADVOCATE program is pursuing an FDA-authorized agentic AI system capable of providing 24/7 care management for patients with advanced cardiovascular disease — handling appointments, medications, diet, and exercise autonomously. Two quotes from ARPA-H leaders stood out to me: "If we're heading toward the self-driving car of health care, what we've invented so far is just cruise control." And: "It's very easy to make a very shallow demo. Honestly, I could vibe-code a heart-failure chatbot over the weekend. But the real world is messy." Both of these quotes signal that there’s still a lot of work to be done to safely and effectively integrate AI into healthcare.
Who Watches the Watchers? A Tabletop Exercise Fathom convened government, industry, and civil society participants in Paris last month to stress-test its Independent Verification Organizations (IVO) framework — a market-based approach to AI governance in which independent expert bodies certify AI systems against evolving safety standards. The simulation, built around a youth self-harm scenario, surfaced a finding that surprised participants: when an IVO framework was in place, the AI company in the scenario moved toward engagement and remediation rather than avoidance.
Bill Aims To Make Biological Data 'AI-Ready' For Biotech Development A bipartisan Senate bill from Sens. Todd Young (R-IN) and Ben Ray Luján (D-NM) would direct NIST to establish standards defining what counts as "AI-ready biological data" — a direct response to China's prioritization of data infrastructure for AI-driven drug discovery. Companion legislation has been introduced in the House. The bill builds on the National Security Commission on Emerging Biotechnology's 2025 report, several recommendations of which were incorporated into the Trump administration's AI action plan.
😇 Ethics & responsible use
Artificial Intelligence in Healthcare: Managing the Growing Risk to Patient Confidentiality A legal briefing from Chartwell Law maps the patient confidentiality risks that emerge when healthcare organizations adopt AI tools without adequate governance — including unintended data disclosure when PHI is entered into platforms that retain user inputs, third-party vendor exposure when business associate agreements aren't in place, and re-identification risk from data aggregation.
AI Healthcare Tools with Bias Need to Be Pulled At HIMSS26, Jefferson Health's AI director Avishkar Sharma argued that biased AI algorithms should be treated like a contaminated batch of anesthetic — pulled immediately and investigated. Here’s another quote that stood out to me: "If an AI system isn't equitable, it is not clinical." The piece surfaces ECRI's ongoing concern that health systems lack the governance structures to catch bias before it reaches patients, and that chatbots trained on skewed data don't just reflect existing disparities — they amplify them.
🔬Research & evidence
What 81,000 People Want from AI Anthropic conducted what it believes is the largest and most multilingual qualitative study of its kind, interviewing 80,508 Claude users across 159 countries and 70 languages about their hopes and concerns with AI. What did they discover? 81% of respondents said AI had already taken a meaningful step toward their vision, with productivity, cognitive partnership, and learning leading the way. The most common concern was unreliability, followed by job displacement and loss of human autonomy.

Source: Anthropic
How AI Integrated into Clinical Workflow Lowers Medical Liability Perception A study published in Nature Health found that mock jurors were nearly 50% more likely to side against a radiologist who reviewed an AI-flagged CT scan only once than against one who reviewed it twice, once before and once after receiving the AI's assessment. The finding has direct implications for how health systems design AI workflows: the number of times a clinician engages with an AI output shapes not just clinical quality, but legal exposure.
Regulating Generative AI and LLMs in Healthcare: A Global Perspective A perspective piece in Nature Digital Medicine argues that current medical device regulatory frameworks are poorly suited for generative AI and large language models, and calls for global collaboration in regulatory science research. The authors advocate for multidisciplinary expertise and explicit attention to the needs of diverse populations.
Fair and Safe Medical AI: Why Local Expertise Matters A collaboration between Oxford and Pakistan's Shaukat Khanum Memorial Cancer Hospital evaluated open-source LLMs against a database of 250,000 patient records — and found that models trained on global data introduced bias and performed inconsistently in the local clinical context. Fine-tuning with local data and clinicians familiar with the regional context improved accuracy significantly. For health system leaders, the key is that AI tools trained on majority-population datasets may not automatically perform well across the diverse patient populations most health systems actually serve.
A Data Engineer Used AI to Build a Custom Cancer Vaccine for His Dog. Scientists Are Paying Attention. Paul Conyngham had no biology background, a dying dog, and 17 years in machine learning. Using ChatGPT, AlphaFold, and a $3,000 genomic sequencing run, he identified Rosie's tumor mutations, mapped the proteins, and worked with UNSW researchers to synthesize a fully personalized mRNA cancer vaccine — the first ever for a dog. The tumor has shrunk by half, and UNSW scientists say the approach is now directly informing human cancer trials. UNSW researchers are calling it an example of citizen science — a non-biologist, armed with widely available AI tools, making a meaningful contribution to cutting-edge medical research in months rather than years.
🛠️ Practical Edge: Actionable tips, tools, and thoughts to help leaders strengthen capacity and apply AI in their work.
Anthropic Launches Dispatch in Claude Cowork Anthropic has launched Dispatch, a new Claude Cowork feature currently in research preview that runs a persistent conversation with Claude on your desktop, letting you send tasks from your phone and return to completed work.
Researchers Asked LLMs for Strategic Advice. They Got "Trendslop" in Return. New research tested seven leading LLMs across thousands of strategic decision simulations and found a consistent pattern: regardless of context, the models gravitated toward the same trendy answers — differentiation over cost leadership, augmentation over automation, long-term over short-term — reflecting what sounds good in contemporary management discourse rather than what fits the actual situation. The researchers call it "trendslop," and found that neither better prompting nor richer context fully eliminates it. The authors recommend using LLMs to expand your options and stress-test assumptions, but keep the final strategic judgment in human hands.
Inside Bank of America's 'Build Once' AI Strategy Bank of America's AI journey offers a useful case study for large health systems: starting in 2018 with a single AI-powered assistant, the bank deliberately built a model-agnostic platform that could be reused and adapted as the technology evolved. When generative AI arrived and teams started going rogue with their own builds, BofA's CTO found they were "reinventing the wheel." The lesson: the upfront discipline of building shared infrastructure rather than point solutions pays compounding dividends, even when the technology keeps changing underneath you.
Who in the C-Suite Should Own AI? UC Berkeley's Toby Stuart applies a framework from sociology to the C-suite turf wars now erupting over agentic AI — arguing the right question isn't "who owns AI?" but "who owns which AI-related decisions?" His practical answer: map specific decision rights to the executives best positioned to make them (COO for outcomes, CIO for infrastructure, CDO for data governance, CFO for ROI), and give the Chief AI Officer ownership of the coordination layer itself.
Gamma Launches Gamma Imagine, Bringing AI-Native Design to Knowledge Workers Gamma — the AI presentation and document tool with nearly 100 million users — has launched its biggest update yet, including Gamma Imagine, which generates posters, logos, and infographics from a single prompt; AI-native templates that can restyle an entire deck in one step; and Connectors that embed Gamma directly into ChatGPT and Claude.
Note to my readers: I’d love to learn how you are using AI. If there’s a novel way you are deploying AI in your work, or seeing it utilized in healthcare, please feel free to shoot me a note and share: [email protected]
🌅 On the Horizon: A quick look at the developments and events expected to shape the weeks ahead.
👉 Mar. 27, 2026 — “The AI Doc: Or How I Became An Apocaloptimist” opens in theaters. Watch the trailer
👉 Mar. 30–31, 2026 — IAPP Global Privacy Summit, Washington, DC
👉 Apr. 6–9, 2026 — HumanX 2026, San Francisco, CA
👉 Apr. 7–8, 2026 — Behavioral Health AI Summit, Nashville, TN
👉 Apr. 10, 2026 — Ethical AI: Leadership and Governance, Virtual
👉 Apr. 27–28, 2026 — AI for Hospitals & Health Plans Summit, New Orleans, LA
👉 May 4–5, 2026 — AI in Medicine Conference (AIIM 2026), Boston, MA
👉 May 7–8, 2026 — NBER Conference on AI in Healthcare, Cambridge, MA
👉 Jun. 8–10, 2026 — Fortune Brainstorm Tech, Aspen, CO
And finally, if you like what you are reading, please share this newsletter with your networks and encourage them to sign up. ✍️ 🆙 And/or, give me a shout-out on LinkedIn.
Till next time,
BC

