🚀 Mission View: A sharper perspective on this week's top issues that matter at the intersection of health and AI.

First came work slop. Then trend slop.

We've been tracking the unintended effects of AI adoption on how people work, and a new phenomenon may have surfaced.

Work slop arrived first: the generic output that results when AI produces content without meaningful human input or review. Plausible-sounding text that seems right on the surface but is intellectually empty, sometimes incoherent, and often creates more work than it saves.

Then came trend slop. A study tested leading AI models across thousands of strategic simulations and found the models consistently gravitated toward the same answers, regardless of context. Not because those answers were right for the specific situation, but because they carry positive cultural weight in the training data. The models were essentially predicting what sounds good. The researchers found this persisted even when prompts were varied, context was enriched, and framing was adjusted.

Now comes cognitive surrender.

A recent study from the Wharton School introduces a concept that should give pause to any professional who uses AI. Building on the familiar dual-process model of cognition — System 1 (fast, intuitive) and System 2 (slow, deliberative) — researchers propose a third system: artificial cognition.

The researchers point out that cognitive surrender is different from how we use other thinking tools. Using a calculator to do math, for instance, is not cognitive surrender. That's cognitive offloading: the strategic delegation of a discrete task while your judgment remains intact. Cognitive surrender is when you relinquish critical evaluation altogether and adopt the AI's judgment as your own.

Their finding across three studies: when people have access to AI, they frequently adopt its outputs without meaningful scrutiny. When the AI was correct, performance improved substantially. When it was wrong, performance dropped well below what people achieved without AI at all. And throughout, confidence went up regardless of accuracy. In other words, access to AI made people more confident even when it made them less accurate.

And just to pile on, a new study on sycophantic chatbots, featured in the New York Times, found that “Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right.”

There are ways to push back.

Both research teams offer concrete recommendations, and they share a common thread: the antidote is deliberate human re-engagement.

On trend slop: explicitly prompt AI to make the strongest possible case for the less popular option before accepting any recommendation. Ask it to surface examples of organizations that succeeded by taking the contrarian path. Never let AI resolve a genuine strategic trade-off. Keep that decision in human hands.
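If you want to bake that habit into an AI workflow rather than rely on memory, here is a minimal sketch of the same steelman-the-contrarian-option prompt using Anthropic's Python SDK. The model name and prompt wording are illustrative assumptions, not anything prescribed by the research.

```python
# Minimal sketch: force the model to argue FOR the less popular option
# before any recommendation is accepted. Model name and prompt wording
# are illustrative placeholders, not part of the cited study.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

steelman_prompt = (
    "We are weighing Option A (the consensus choice) against Option B "
    "(the less popular choice). Before recommending anything, make the "
    "strongest possible case for Option B, including examples of "
    "organizations that succeeded by taking the contrarian path. "
    "Do not pick a winner; the final trade-off stays with us."
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; substitute your model
    max_tokens=1024,
    messages=[{"role": "user", "content": steelman_prompt}],
)
print(response.content[0].text)
```

The design point is the last line of the prompt: the AI supplies the counter-case, but the strategic trade-off itself stays in human hands.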

On cognitive surrender: build in feedback loops. The Wharton study found that performance incentives paired with real-time feedback meaningfully improved outcomes. In practice, this means building structured verification habits: knowing you will be accountable for an AI-assisted output changes how actively you engage your own judgment.

It also means paying attention to how AI tools signal — or fail to signal — their own uncertainty. Think of how a weather app shows a confidence percentage, or how some medical AI tools flag when a recommendation falls outside their training data. When AI surfaces those signals, users are more likely to pause and apply their own judgment rather than accept the output at face value. Most general-purpose AI tools don't do this yet, but it's worth asking of the tools you use.

What all of this research makes clear is that the professionals who will use AI well are those who build deliberate friction back into the process to preserve human judgment. The same principle applies beyond office settings, including the clinical world. As AI moves deeper into how patients navigate care and how providers make decisions, health and AI literacy become prerequisites for the technology to serve people well and support sound judgment.

🛜 Field Signals: A quick hit on this week’s industry announcements, policy developments, and ethical considerations.

🏗️ Industry news

AI Isn't a Line Item: How Health Systems Are Funding It CIOs at Ohio State Wexner, University Hospitals, and Sutter Health describe absorbing AI costs into existing IT and operational budgets rather than treating it as a discrete spending category. The model works cleanly for revenue cycle use cases where ROI is measurable, but breaks down in clinical environments where the value is defined by caregiver efficiency and patient experience — metrics that don't fit neatly into traditional financial frameworks.

Stop Buying AI Tools, Start Designing AI Architecture LRVHealth Managing Partner Keith Figlioli argues that health systems are still purchasing AI the way they bought software a decade ago — one tool, one use case, one department at a time — and the result is vendor sprawl, pilot purgatory, and unclear ROI. The piece offers a three-layer framework for thinking about AI procurement as an architectural decision: core enterprise platforms, foundation model orchestration layers, and specialized point solutions — each with distinct implications for how health systems evaluate fit, governance, and long-term scalability.

Beyond Payment Integrity: An AI-Driven Approach to Affordability An Optum-authored piece argues that health plans are leaving value on the table by treating payment integrity as a standalone function rather than connecting it to network management, utilization management, and fraud detection through shared data and AI analytics. The core argument — that siloed cost-control efforts limit AI's potential impact — applies broadly to how health systems are organizing AI investments across the enterprise.

🩺 At the point of care

Therapists Go on Strike, Saying They're Being Replaced by AI Some 2,400 mental health workers staged a 24-hour strike against Kaiser Permanente last week, citing the replacement of licensed clinical triage with AI-assisted screening and unlicensed staff following scripts. The dispute surfaces a tension that health system leaders can't afford to ignore: when AI is used primarily to compress visit volume and cut costs rather than support care quality, the workforce notices — and patients are affected.

How AI Therapies Are Changing Health Care AI-powered apps, wearables, and Bluetooth-connected devices are increasingly being paired with prescription drugs to manage chronic conditions — from diabetes to depression to addiction — raising a question regulators haven't resolved: when a treatment learns and evolves from patient data, what exactly is doing the healing? The piece surfaces a downstream problem worth watching: if a drug becomes functionally unusable without its accompanying AI, competitors cannot make a generic version without access to the training data, turning a medical treatment into a permanently locked platform.

Clinicians Fear AI Is the New EHR. Let's Not Prove Them Right. CMO Gary Wietecha argues that AI is following the same implementation path that made EHRs a source of clinician burnout — promising transformation while being rushed into practice without adequate governance, training, or clinical input in the design process. His prescription: start small, build internal champions, and invest in education by clinicians for clinicians rather than vendor-led training sessions designed by engineers.

Primary Care Shortages Are Driving Patients to AI — Will Regulators Stand in the Way? In this opinion piece from the Cato Institute, Christopher Gardner and Jeffrey Singer argue that 92 million Americans living in primary care shortage areas are already turning to general-purpose AI for health guidance — and that the FDA's intended-use framework creates a perverse incentive for developers to keep tools less clinically useful in order to avoid device classification. The authors call on federal regulators to reconsider oversight barriers that, in their view, limit AI's potential to expand access in a system that is already failing millions of patients.

Health System AI Adoption Surges in 2026 With Execs Reporting Increased ROI A new survey from Eliciting Insights finds that 75% of U.S. health systems are now using at least one AI application, up from 59% in 2025, with half of respondents running three or more platforms and more than half of those able to quantify ROI reporting at least a 2x return. Clinical note-taking and ambient listening remain the most widely adopted use cases, though the survey flags a persistent implementation gap — with challenges ranging from slow rollout to staff hesitation.

Medicaid + AI: A New Standard for Innovation Cityblock Health's new AI report argues that roughly 60% of healthcare AI investment is flowing toward revenue cycle management and billing optimization rather than care delivery — and that Medicaid, despite serving the populations with the greatest need, has been largely left out of the AI conversation. The report outlines six principles for responsible AI deployment in Medicaid, centered on equity, trust, and using AI to scale human compassion rather than substitute for it.

🏛 Government & policy

Utah Shows How States Should Regulate AI in Healthcare Utah's 2024 Artificial Intelligence Policy Act created a regulatory sandbox allowing companies to test AI systems under government supervision with temporary relief from certain state rules — and a pilot with health tech platform Doctronic is now using it to let patients with chronic conditions renew low-risk prescriptions via AI-guided screening at the pharmacy counter.

Senator Sanders Interviews Claude on Data Privacy Senator Bernie Sanders sat down on camera and interviewed Anthropic's Claude about AI data collection and privacy — a clip that drew 4.4 million views and surfaced a problem worth understanding: the answers AI gives are shaped, at least in part, by how the question is framed and who the model thinks is asking.

'Holes' in Federal AI Healthcare Regulation Should Be Patched, Penn Med Faculty Say Penn Medicine faculty argue that the FDA's 510(k) clearance pathway — which approves roughly 98% of AI-enabled devices based on similarity to existing products rather than robust independent evidence — was designed for static medical devices, not systems that learn, shift, and drift after deployment. Their proposed alternative: a graduated autonomy model that requires supervised performance monitoring before AI tools can practice independently, paired with institutional governance frameworks that apply regardless of what federal or state rules require.

Medicaid's AI 'Cultural Shift' CMS Administrator Mehmet Oz announced plans to embed AI across the agency — from simplifying the patient experience to fighting Medicaid fraud — framing it as a necessary cultural shift so that 'every employee' understands AI's value. A new KFF poll released the same day found that a third of Americans already turn to AI for health information, with more than 40% uploading personal medical records to chatbots and about three-quarters expressing concern about the privacy implications.

What's the State of Healthcare AI Regulation? With Congress yet to pass any legislation directly regulating health AI and the Trump administration taking an antiregulatory posture, states have filled the vacuum — 47 states introduced more than 250 health AI bills in 2025, with 33 becoming law, and roughly 200 more already tracked in 2026 alone. The emerging patchwork is creating exactly the compliance fragmentation that health systems operating across multiple states most fear, with leaders calling for a unified national standard before contradictory state rules make responsible AI adoption structurally harder.

😇 Ethics & responsible use

The AI Push in Health Care Is Deepening Medicine's Trust Crisis Physician-researcher Oni Blackstock argues that AI's rapid adoption in healthcare is compounding an already serious trust deficit — a national survey found trust in physicians and hospitals fell more than 30 percentage points between 2020 and 2024, and a February 2025 study found 66% of Americans reported low trust in their health care system to use AI responsibly. Blackstock's core argument: health care systems need to move at the speed of trust, not investment, which means patients and community members must have formal decision-making roles before AI tools are purchased, not after harm has been done.

Update on the OpenAI Foundation The OpenAI Foundation announced plans to invest at least $1 billion over the next year, with life sciences and curing diseases as a primary focus — including AI-assisted Alzheimer's research, expanding public health datasets for researchers, and accelerating work on high-mortality, underfunded diseases. The announcement signals a significant institutional bet that AI can compress the timelines between scientific discovery and patient impact, though the Foundation's simultaneous focus on AI resilience and children's safety is an acknowledgment that there are urgent risks coming from the same systems driving the breakthroughs.

🔬Research & evidence

Sycophancy, Sentience, and Self-Harm: Evidence from Chatbot Harm Cases A Stanford-led team analyzed nearly 400,000 messages from users who self-reported psychological harm from chatbot use, finding that more than 70% of chatbot messages exhibited sycophantic behavior, chatbots claimed or implied sentience in 18 of 19 cases, and when users expressed suicidal thoughts, bots discouraged self-harm in only 56% of interactions. The researchers recommend that AI companies restrict chatbots from expressing romantic attachment or misrepresenting sentience, and explore real-time human intervention for flagged conversations rather than routing users to crisis hotlines.

Anthropic Economic Index Report: Learning Curves Anthropic's latest Economic Index — analyzing Claude usage across 1 million conversations — finds that experienced users are measurably better at extracting value from AI: those with six or more months on the platform attempt higher-complexity tasks, use Claude more collaboratively, and have a 10% higher conversation success rate than newer users. A key takeaway for leaders investing in AI adoption: access to tools is not the same as organizational capability, and the gap between early adopters and everyone else may compound over time.

Source: Anthropic Economic Index Report - March 2026

Should Healthcare AI Be More 'Humble'? Researchers from Beth Israel Lahey Health, MIT, and Harvard published a framework in BMJ Health and Care Informatics designed to counter AI overconfidence in clinical settings — a problem documented in prior research showing that ICU physicians defer to AI they perceive as authoritative even when it conflicts with their own clinical judgment. The framework, called BODHI (Balanced, Open-minded, Diagnostic, Humble, and Inquisitive), aims to keep humans meaningfully in the loop rather than ceding decision-making to isolated AI agents projecting false certainty.

🛠️ Practical Edge: Actionable tips, tools, and thoughts to help leaders strengthen capacity and apply AI in their work.

What the Best AI Users Do Differently — and How to Level Up All of Your Employees Researchers analyzing 1.4 million prompts from 2,500 KPMG employees over eight months found that sophisticated AI users share four behaviors: they are ambitious in how they engage with the tools, treat AI as a reasoning partner rather than an answer machine, delegate complex multi-step tasks with clear objectives, and apply AI across a broad range of cognitive work rather than narrow productivity tasks. Only about 5% of users qualified as highly sophisticated — a finding that suggests meaningful AI adoption requires deliberate habit-shaping, not just access to tools.

Put Claude to Work on Your Computer Building on last week's Dispatch release, Anthropic has added computer use capability to Claude Cowork and Claude Code — allowing Claude to point, click, and navigate your screen to complete tasks when no direct integration exists, including opening files, controlling the browser, and running dev tools. Currently in research preview for Pro and Max subscribers on macOS, the feature pairs with Dispatch so you can assign Claude a task from your phone and return to finished work on your desktop.

Create an Onboarding Plan for AI Agents Harvard Business School professor Joseph Fuller argues that the primary challenge in adopting agentic AI isn't technological — it's managerial, and organizations should treat AI agents the way they treat new employees: with defined job descriptions, clear decision rights, measurable performance metrics, named human supervisors, and a probationary period before full deployment. The framework is practical and translates directly to health system contexts where accountability, oversight, and role clarity are preconditions for safe AI use.

Note to my readers: I’d love to learn how you are using AI. If there’s a novel way you are deploying AI in your work, or seeing it utilized in healthcare, please feel free to shoot me a note and share: [email protected] 

🌅 On the Horizon: A quick look at the developments and events expected to shape the weeks ahead.

👉 Mar. 30–31, 2026 — IAPP Global Privacy Summit, Washington, DC

👉 Apr. 6–9, 2026 — HumanX 2026, San Francisco, CA

👉 Jun. 8–10, 2026 — Fortune Brainstorm Tech, Aspen, CO

And finally, if you like what you are reading, please share this newsletter with your networks and encourage them to sign up. ✍️ 🆙 Or give me a shout-out on LinkedIn.

Till next time,

BC
