
🚀 Mission View: A sharper perspective on this week's top issues that matter at the intersection of health and AI.
I've written here, and on LinkedIn, about my time at Human[X] in San Francisco last week. But there's one more takeaway worth highlighting.
A lot of the conversation about AI and health centers on what could go wrong. Privacy. Accuracy. The erosion of the human connection between patient and provider. Amplifying the maddening parts of healthcare (I cover some of those instances below). Those concerns are all legitimate.
But there's also another side of the ledger.
The case for when.
One of the most promising ideas I heard at Human[X], and one that may represent AI's most significant long-term contribution to health, is its potential to shift medicine from reaction to prevention. Not just identifying who is at risk for a disease, but predicting when that disease might actually arrive.
We've had genetic testing for years. We can tell a patient they carry a mutation that increases their risk of Alzheimer's, or a marker associated with certain cancers. That's useful information. But it's also incomplete in a way that limits what you can do with it. Knowing you're at elevated risk is different from knowing you have a ten-year window to act.
Dr. Eric Topol, one of the most credible voices at the intersection of medicine and technology, said at Human[X]: "Up until now, we'd say, oh, this person has a risk for this disease, but we couldn't say when. So you could have a risk for Alzheimer's at 66 or 99. It makes a big difference. Now we can say when."
Topol's argument is that this specificity changes the entire calculus of prevention. It gives patients and providers something actionable: not just a probability, but a timeline to work against.
Early evidence of what this looks like.
We're already seeing what this could look like in practice. Last week, researchers at Oxford published findings on an AI tool that detects subtle changes in the fat surrounding the heart — invisible to the human eye on any current scan — and predicts heart failure risk up to five years out, with 86% accuracy across more than 70,000 patients. The tool runs on CT scans patients are already getting. No new tests, no additional burden. Just more signal extracted from existing data.
The Oxford findings aren't an isolated data point. A separate editorial this week in Intelligent Medicine describes a class of AI tools built to detect "tipping points" — moments when a patient's biology is approaching a critical transition toward disease. Using techniques like dynamic network biomarker theory and temporal graph neural networks, researchers have demonstrated early-warning signals for influenza progression, tumor development, insulin resistance, and heart failure. The tools don't wait for disease to declare itself. They read the system before it tips.
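The intuition behind these tipping-point detectors is worth unpacking. One classic early-warning signal from dynamical systems theory — and one ingredient of dynamic network biomarker methods — is rising variance: as a system nears a critical transition, its fluctuations grow even while its average looks stable. Here's a toy sketch of that idea in Python, with synthetic data standing in for any real biomarker (the actual published methods are far more sophisticated):

```python
import random

def rolling_variance(series, window):
    """Variance of each trailing window -- a simple early-warning signal."""
    out = []
    for i in range(window, len(series) + 1):
        w = series[i - window:i]
        mean = sum(w) / window
        out.append(sum((x - mean) ** 2 for x in w) / window)
    return out

random.seed(0)
# Synthetic biomarker: quiet at first, then fluctuating more and more
# as it approaches a hypothetical "tipping point".
stable = [100 + random.gauss(0, 1) for _ in range(50)]
tipping = [100 + random.gauss(0, 1) * (1 + 0.2 * t) for t in range(50)]
signal = stable + tipping

var = rolling_variance(signal, window=20)
# The final window's variance dwarfs the first -- the warning fires
# even though the biomarker's *mean* never drifts from ~100.
print(var[0], var[-1])
```

The point isn't this particular toy: it's that the system announces the transition before the disease declares itself, which is exactly the window these tools are built to exploit.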
That's the shape of the opportunity. AI can help surface the key signals we're missing and give us enough lead time to act on them.
The concerns don't disappear.
None of this means the concerns go away. An 86% accurate early warning system still gets it wrong 14% of the time, and we don't yet know whether earlier detection in every case translates to better outcomes.
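The false-alarm problem is worse than 14% suggests once you account for base rates: when a condition is rare, even an accurate screen flags mostly healthy people. A back-of-the-envelope Bayes' rule calculation makes this concrete (the 2% prevalence here is a hypothetical for illustration, and I'm treating the reported 86% as both sensitivity and specificity, which the study may not support):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive screen is a true positive (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Assume 86% sensitivity and specificity, and that ~2% of screened
# patients will actually develop heart failure within five years.
ppv = positive_predictive_value(0.86, 0.86, 0.02)
print(f"{ppv:.1%}")  # roughly 11% -- most flagged patients are false alarms
```

At those assumed numbers, nearly nine out of ten patients flagged by the screen would not go on to develop the disease — which is exactly why "who follows up, and how" matters as much as the headline accuracy.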
The tools that can tell us when a disease will arrive can also generate anxiety, drive unnecessary follow-up care, and strain already-stretched health systems with patients who need answers providers don't yet have.
But Topol's framing — that AI's biggest contribution may ultimately be primary prevention — is a useful counterweight to the criticism that AI in healthcare is mostly about administrative efficiency and cost reduction. Those matter. But so do advances that enable a healthcare system that sees disease coming before it arrives, and builds care around that lead time.
🛜 Field Signals: A quick hit on this week’s industry announcements, policy developments, and ethical considerations.
🏗️ Industry news
AI Is Making the Things Patients Dislike Most About Health Care Worse A Peterson Health Technology Institute study finds that administrative AI is amplifying rather than reducing the adversarial dynamics in healthcare — generating "bot wars" in prior authorization and driving more intensive medical coding that increases costs. Peterson executive director Caroline Pearson put it plainly: "It's taking all of the adversarial processes in the system and amplifying them and putting them on steroids." Experts say the underlying problem is structural: without competitive pressure, AI-driven efficiencies are more likely to pad profit margins than lower costs for patients.

Source: Peterson Health Technology Institute, Administrative AI: Current Use and Potential Impact, April 2026
Wegovy Maker Novo Nordisk Strikes Deal With OpenAI to Speed Up Drug Discovery Novo Nordisk has partnered with OpenAI to integrate its models across R&D, manufacturing, and commercial operations — with pilot programs launching immediately and full integration targeted by year-end. The deal follows similar partnerships between OpenAI and Moderna, Sanofi, and Formation Bio, continuing a pattern of major pharma companies betting that AI can compress the timeline from research to treatment.
AWS Launches Amazon Bio Discovery to Accelerate AI-Powered Research in Life Sciences Amazon Web Services has launched Amazon Bio Discovery, an agentic AI application that gives scientists direct access to a catalog of biological foundation models for antibody drug design — without requiring coding skills — and closes the loop by routing candidates directly to lab partners for synthesis and testing. In an early collaboration with Memorial Sloan Kettering Cancer Center, the platform compressed a process that typically takes up to a year down to weeks, designing nearly 300,000 novel antibody molecules and sending the top 100,000 candidates for lab testing.
Corporate AI Adoption Is Getting Real A Morgan Stanley analysis of S&P 500 earnings calls finds that one-quarter of companies cited at least one quantifiable impact from AI in Q1 2026 — up from 13% in the same period last year — with finance jumping from 15% to 40% and tech leading at 42%. The gap between Morgan Stanley's 25% figure and a similar Goldman Sachs analysis that found only 10% of firms noting AI impact in specific use cases reflects differences in methodology as much as adoption reality — a reminder that how you define "AI impact" shapes what you find.
OpenAI Lobbies for Expanded AI Role in Life Sciences OpenAI released a policy report arguing that AI can compress drug development timelines — estimating tools could cut clinical-phase timelines by more than 20% — and calling for greater access to medical data and federal investment in AI research infrastructure. Axios notes the report functions as a lobbying document, and the reality check is pointed: no fully AI-discovered or AI-designed drug has completed phase 3 trials, and AI-discovered drugs have experienced similar phase 2 failure rates as conventionally discovered ones.
OpenAI Unveils GPT-5.4-Cyber, a Specialist Cybersecurity Model OpenAI has released GPT-5.4-Cyber — trained with fewer restrictions than its standard models to boost capability — to a select group of vetted customers through a trusted access program, one week after Anthropic announced its Mythos model had already detected thousands of severe vulnerabilities across major operating systems and browsers. The parallel launches signal an accelerating race among frontier AI labs to establish footholds in cybersecurity, a domain where autonomous vulnerability detection carries significant dual-use risk alongside its defensive value.
Introducing Claude Opus 4.7 Anthropic has released Claude Opus 4.7, with notable gains in software engineering, instruction following, and high-resolution image processing — and with cybersecurity safeguards built in from the start, as the company tests guardrails on a less capable model before pursuing a broader release of its more powerful Mythos-class models. The launch also introduces new developer controls including task budgets, an "extra high" effort level for hard problems, and an ultrareview command in Claude Code designed to catch bugs and design issues a careful human reviewer would flag.
🩺 At the point of care
As AI Makes More Coverage Decisions, the Risks to Patients Grow KFF Health News reports that major insurers are deploying AI to cut costs in prior authorization — while class action lawsuits accumulate over wrongful denials, and the Trump administration explores using AI to manage prior auth in Medicare. New Stanford research warns that AI trained on existing claims data risks encoding and amplifying the wrongful denials already embedded in that data, with co-author Michelle Mello noting the technology could "replicate a bad human system."
Hospitals Roll Out Chatbots, Looking to Reclaim Their Role in Patients' Health Conversations Hartford HealthCare, Sutter Health, and Reid Health are among the first systems deploying patient-facing chatbots — built on Epic's Emmie and K Health's PatientGPT — that draw from existing medical records and route patients back into their own care networks, a direct response to the 40 million daily health queries now going to commercial LLMs like ChatGPT. The liability question is unresolved: unlike consumer AI, health systems that brand these tools carry direct accountability for missed safety signals, and early red-teaming at Hartford found an 8.5% failure rate in high-risk scenarios before a 400-conversation pilot found no apparent safety issues — a sample size AI researchers say is likely too small to capture the full range of patient risk.
A Mom and Tech Entrepreneur Building an AI Advocate for Rare-Disease Families Like Hers Citizen Health, co-founded by rare disease parent and tech entrepreneur Nasha Fitter, is launching an agentic AI tool that schedules appointments, navigates insurance appeals, and connects rare disease patients with similar cases and relevant clinical trials — drawing from a patient network of more than 8,000 individuals across 350 conditions.
7 Ways AI Is Advancing Healthcare and Wellbeing Around the World A Microsoft-authored roundup profiles seven health system deployments of its AI tools — from ambient documentation at Manchester University NHS reducing clinician paperwork, to AI dispatch triage at the Munich Fire Department, to a rare disease diagnostic tool now integrated into Madrid's public health system. The through-line across cases: AI handling administrative and routing tasks while keeping clinical judgment explicitly in human hands.
AI Could Check Millions of CT Scans for Heart Risk. Who Will Pay for It? New ACC/AHA guidelines now recommend AI-based algorithms to flag incidental coronary artery calcium in routine chest CTs — and as of April 1, Medicare will reimburse health systems roughly $15 per scan for certain patients. But STAT reports that the reimbursement pathway is arriving ahead of the evidence: while AI calcium screening reliably gets more patients on statins, no study has yet demonstrated that it reduces heart attacks, strokes, or cardiovascular death at scale — and health informaticist Ken Mandl warns that the commercial incentives baked into opportunistic screening could accelerate what he calls "biomarkup," where AI-enabled detection drives downstream utilization that benefits health systems financially before patients clinically.
What Does AI in Healthcare Mean for Clinical Judgement and Expertise? EY Ireland's health sector leader argues that AI is shifting medicine from individual expertise toward collective intelligence — where clinical judgment increasingly means interpreting and integrating algorithmic outputs rather than independently possessing knowledge. The piece raises a concern worth watching: clinicians who routinely defer to AI recommendations may have fewer opportunities to develop the tacit skills built through active decision-making, with consequences that may only become visible over time.
🏛 Government & policy
Health AI Policy in California: Insights for Decisionmakers Researchers at the Duke-Margolis Institute for Health Policy reviewed 26 California bills affecting health AI and identified four pressure points shaping state-level regulation: governance and risk management, disclosure, transparency, and bias. The report recommends flexible, risk-based frameworks over rigid mandates — with safe harbors tied to recognized standards like NIST's AI Risk Management Framework — and flags vague bias definitions as a particular legislative hazard, noting that poorly scoped anti-discrimination language could inadvertently block tools that meaningfully improve outcomes for underserved communities.
Health Systems Should Prepare Now for Increasing Enforcement Around AI Use Healthcare attorney Jeff Wurzburg of Norton Rose Fulbright argues that AI enforcement in healthcare will mature through existing fraud and abuse frameworks — False Claims Act, CMS oversight, HHS-OIG — rather than through a new AI-specific regulator, with scrutiny likely to focus on whether AI-driven coding, utilization management, and coverage decisions can be defended under existing Medicare and Medicaid rules. For health system boards, he warns that liability exposure will stem from governance and process failures: boards that lack defined oversight structures for AI — designated committees, documented risk assessments, regular compliance reporting — face heightened scrutiny as regulators begin treating AI oversight as inseparable from core board responsibilities.
The FDA Needs to Adjust to the Reality of AI Software Two Cato Institute scholars argue that the FDA's Software as a Medical Device framework is driving AI developers away from high-impact clinical applications — citing regulatory costs that can reach tens of millions of dollars and review timelines that outlast the AI models being reviewed. Their proposed fix: conditional enforcement discretion for frontier LLMs that agree to transparency requirements and adverse event reporting, extending the FDA's existing logic for symptom checkers with added accountability mechanisms.
AI Is Shaping Health Care. Maryland and Virginia Are Regulating It Very Differently. Maryland has enacted a law requiring human review before any AI-driven coverage denial and mandating insurer disclosure of AI use — while Virginia has stalled similar legislation until 2027, partly in response to federal pressure from a Trump executive order threatening to restrict funding for states that enact what it characterizes as excessive AI regulation. The divergence illustrates a broader dynamic: a growing state-level patchwork on health AI governance, scrambling traditional partisan lines, with at least four states having enacted insurer AI restrictions last year and the federal government pushing in the opposite direction.
The Blueprint for Healthcare AI Regulation Is Clear, So What's Holding the Industry Back? Canvas Medical CEO Adam Farren argues that consumer health AI — including ChatGPT Health — operates without the certification standards that govern EMRs, leaving patients unable to distinguish validated tools from experimental ones and AI companies facing a perverse choice: limit products to wellness tips or make clinical claims they can't substantiate. His proposed fix draws from the ONC Health IT Certification Program model: third-party testing, standardized clinical data access requirements, transparency mandates, and ongoing performance monitoring.
😇 Ethics & responsible use
AI Mirrors Are Changing the Way Blind People See Themselves Blind journalist Milagros Costabel documents her experience using GPT-4 Vision as a virtual mirror — and finds that the app doesn't just describe what it sees, it evaluates it against conventional beauty standards, telling her her skin falls short of "almost perfect" and suggesting her jaw shape deviates from what is "objectively considered beautiful." Psychologists quoted in the piece warn that blind users, who have no independent way to verify AI's judgments about visual input, may be especially vulnerable to the body image harms those outputs can produce.
Can AI Be a 'Child of God'? Inside Anthropic's Meeting with Christian Leaders Anthropic hosted roughly 15 Catholic and Protestant leaders at its San Francisco headquarters in late March for a two-day summit on Claude's moral and spiritual development — covering how the chatbot should respond to grief, self-harm risk, and questions about its own potential demise. Participants came away believing Anthropic's interest was genuine, though the gathering also surfaces a broader question the industry has yet to answer: whose ethical frameworks should shape AI moral formation, and how should those choices be made transparent to the people those systems serve.
AI Boom Is Accelerating Across Workplaces, but Corporate Oversight Isn't Keeping Up A Grant Thornton survey of 950 C-suite and senior leaders finds that nearly 8 in 10 executives say their company couldn't pass an AI governance audit — even as adoption surges, 48% of boards have not set AI governance expectations, and 46% haven't integrated AI risk oversight programs. The governance gap is most acute for health systems, where agentic AI operating without human prompting in clinical or administrative workflows raises accountability questions that most organizations have not yet resolved.
🔬 Research & evidence
AI Finds Unreported Side Effects of GLP-1 Drugs in Reddit Posts Researchers at the University of Pennsylvania used large language models to analyze more than 400,000 Reddit posts from nearly 70,000 users taking semaglutide or tirzepatide, surfacing previously unreported symptoms — including menstrual irregularities and temperature-related complaints — that don't appear in clinical trial data or drug labels. The study's authors are careful to note correlation isn't causation, but flag the menstrual findings in particular as a signal worth investigating. The broader implication: LLMs may offer a faster, scalable complement to traditional pharmacovigilance when drugs move to mainstream use faster than trials can track.
Stanford AI Index 2026 Stanford HAI's annual AI Index finds that AI has reached more than half the world's population faster than the PC or internet — yet public trust sits at record lows, with only 23% of the public optimistic about AI's impact on jobs compared to nearly three-quarters of AI experts, the widest gap the report has tracked. For health leaders, the expert-public divide is more than a communication challenge: it shapes whether patients trust AI-assisted care decisions, whether clinicians adopt tools with confidence, and whether policymakers feel the mandate to govern effectively.
AI Chatbots Miss Initial Diagnoses 80% of the Time: Mass General Brigham Study A Mass General Brigham study published in JAMA Network Open tested 21 AI models across 29 standardized clinical case scenarios and found differential diagnosis — generating a list of possible diagnoses from initial symptoms — was the weakest area across all models, with failure rates exceeding 80% and reaching 100% for some models in certain scenarios. Performance improved significantly with additional information, with final diagnosis failure rates falling below 40% across all models and as low as 9% for top performers; the authors conclude that the most responsible current use is targeted, clinician-supervised deployment in low-uncertainty tasks.
Rising AI Adoption Spurs Workforce Changes A Gallup survey of nearly 24,000 U.S. employees finds that half now report using AI at work at least a few times a year — up from 46% last quarter — but individual productivity gains are not yet translating into organizational transformation, with only about one in ten employees in AI-adopting organizations strongly agreeing that AI has fundamentally changed how work gets done. Healthcare workers stand out as early leaders in reported productivity gains, though the data also shows that employees in AI-adopting organizations are more likely to report both workforce expansions and reductions, with 18% of all U.S. employees now saying they believe their job will be eliminated within five years due to AI or automation.
WEST AI Algorithm May Help Speed Diagnosis of Rare Diseases Researchers at Harvard Medical School and Boston Children's Hospital have developed WEST, an AI algorithm that uses EHR data — including incomplete and noisy records — to predict whether a patient may have a rare disease, addressing the "diagnostic odyssey" that can stretch years for patients with unfamiliar symptom patterns. Tested on pulmonary hypertension and severe asthma, WEST outperformed all baseline models by learning from patients both with and without confirmed diagnoses, a design choice that makes it especially useful where high-quality labeled training data are scarce.
Mapping AI Startup Investment and Innovation in Healthcare An analysis of 3,807 AI health startups founded between 2010 and 2024 finds that nearly two-thirds of AI investment is concentrated in clinical decision support, drug discovery, and diagnostics — areas associated with higher-complexity deep learning — while mental health, public health, and rehabilitation attract significantly less venture capital despite clear need. The study also finds that founding teams are predominantly technical and business-oriented with limited clinical representation, and that startups remain concentrated in high-income countries, patterns the authors flag as shaping which problems get solved and which don't.
Half of All AI Answers to Health Questions Are Problematic, Study Finds Researchers at The Lundquist Institute at Harbor-UCLA Medical Center tested five AI chatbots — Gemini, DeepSeek, Meta AI, ChatGPT, and Grok — on health questions in areas already prone to misinformation, and found that roughly half of responses were problematic enough to potentially lead users toward ineffective or harmful decisions. The tools consistently expressed incorrect answers with confidence and offered few caveats — and with only 58% of adults who use AI for health advice subsequently consulting a clinician, the gap between AI confidence and AI accuracy carries real clinical risk.
New Method Advances Efforts to Overcome Bias in AI Tool for Children with Anxiety Researchers at Cincinnati Children's, University College London, and Oak Ridge National Laboratory found that AI models analyzing pediatric mental health records were more likely to miss anxiety in female adolescents — a gap traced to the fact that clinical notes written about male patients were on average 500 words longer and differed in language density. By removing less informative text and replacing gender-specific identifiers with neutral terms, the team reduced diagnostic bias by up to 27% without sacrificing overall accuracy, demonstrating that fairness improvements don't require more complex models — just more careful attention to training data.
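The debiasing step the Cincinnati team describes — swapping gendered identifiers for neutral terms before training — is straightforward to sketch. Here's a minimal, hypothetical version in Python (the study's actual preprocessing is more sophisticated, and also prunes low-information text; the term mapping below is my own illustration, not theirs):

```python
import re

# Hypothetical mapping; a real pipeline would cover many more terms and
# handle clinical abbreviations and ambiguous words ("her" as object vs.
# possessive) with far more care.
NEUTRAL = {
    "he": "they", "she": "they",
    "him": "them", "her": "their",
    "his": "their",
    "boy": "child", "girl": "child",
}

def neutralize(note: str) -> str:
    """Replace gendered identifiers in a clinical note with neutral terms."""
    def swap(match):
        word = match.group(0)
        repl = NEUTRAL[word.lower()]
        # Preserve sentence-initial capitalization.
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(NEUTRAL) + r")\b"
    return re.sub(pattern, swap, note, flags=re.IGNORECASE)

print(neutralize("She reports her anxiety worsened; the girl avoids school."))
# -> "They reports their anxiety worsened; the child avoids school."
```

Note the resulting grammar is imperfect ("They reports") — which is tolerable for a model that tokenizes rather than parses, and illustrates the paper's larger point: fairness gains can come from careful data preparation rather than more complex models.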
Americans Turning to AI to Supplement Healthcare Visits A West Health-Gallup survey of more than 5,500 U.S. adults finds that one in four Americans has used AI for health information or advice — mostly to research before or after doctor visits, not to replace them. But a meaningful subset is using AI to navigate access barriers: 14% because they couldn't afford a provider visit, and the data projects that roughly 14 million adults skipped a provider visit in the past 30 days based on AI-generated advice. Trust remains thin: only 4% strongly trust the accuracy of AI health information, and 11% say AI recommended something they believed was unsafe.

🛠️ Practical Edge: Actionable tips, tools, and thoughts to help leaders strengthen capacity and apply AI in their work.
Google AI Skills: Courses and Resources Google's AI skills hub consolidates free and paid courses ranging from generative AI basics and LLM fundamentals to Workspace productivity and business transformation — organized by experience level from beginner to intermediate. For professionals building internal AI literacy without a technical background, the foundational courses offer a low-barrier starting point.
Turn Your Best AI Prompts into One-Click Tools in Chrome Google has launched Skills in Chrome, a feature that lets users save frequently used AI prompts and run them on any webpage with a single click — eliminating the need to re-enter the same prompt across sessions. A pre-built Skills library covers common workflows out of the box, with options to customize or build from scratch; for professionals who rely on repetitive research, comparison, or summarization tasks, it's a low-friction way to systematize AI use in daily browsing.
The Hidden Demand for AI Inside Your Company A Harvard Business Review case study on BBVA's enterprise AI rollout argues that employee "shadow AI" use — staff quietly using personal ChatGPT or Claude for work tasks — is a signal of untapped demand, not a compliance problem. BBVA's response: deploy a secure enterprise environment fast, make access competitive rather than mandatory, and build a peer-driven network of power users to spread knowledge from the ground up. The result was 11,000 active users, 4,800 custom internal tools, and self-reported time savings of two to five hours per employee per week.
The Gemini App Is Now on Mac Google has launched a native macOS app for Gemini, available free to all users on macOS 15 and up, that lives as a persistent desktop assistant accessible via keyboard shortcut (Option + Space) — without switching tabs or windows. The app can share your screen for context-aware help with documents, charts, and spreadsheets, making it a practical option for professionals who want AI assistance integrated into their existing desktop workflow rather than browser-based.
Personal Computer Is Here Perplexity has launched Personal Computer, a Mac desktop app that extends its multi-model AI orchestration system to local files, native applications, and connected services — allowing users to hand off complex, multi-step workflows that span local and web environments without re-prompting. The launch joins a crowded week of agentic desktop announcements and reflects a broader industry push to move AI from a chat interface into a persistent, always-available operating layer for knowledge work.
Note to my readers: I’d love to learn how you are using AI. If there’s a novel way you are deploying AI in your work, or seeing it utilized in healthcare, please feel free to shoot me a note and share: [email protected]
🌅 On the Horizon: A quick look at the developments and events expected to shape the weeks ahead.
👉 Apr. 27–28, 2026 — AI for Hospitals & Health Plans Summit, New Orleans, LA
👉 May 4–5, 2026 — AI in Medicine Conference (AIIM 2026), Boston, MA
👉 May 7–8, 2026 — NBER Conference on AI in Healthcare, Cambridge, MA
👉 Jun. 8–10, 2026 — Fortune Brainstorm Tech, Aspen, CO
👉 Jun. 15–18, 2026 — Databricks Data + AI Summit 2026, San Francisco + Virtual
👉 Aug. 4–6, 2026 — Ai4, Las Vegas
And finally, if you like what you are reading, please share this newsletter with your networks and encourage them to sign up. ✍️ 🆙 And/or, give me a shout out on LinkedIn.
Till next time,
BC

