The September 2025 Anthropic Economic Index is the freshest publicly licensed dataset of how AI is actually being used at work. It tracks more than 20,000 distinct work tasks across the Department of Labor's O*NET taxonomy, drawn from millions of anonymized frontier-LLM conversations and (new this release) first-party API traffic. It is licensed CC-BY on Hugging Face, which means anyone can build on it commercially. Most of the press coverage focused on software engineers, who account for 37% of all queries. The healthcare numbers tell a more nuanced story, and one that, if you are a physician, you should read instead of skim.
Four percent of jobs use AI for at least 75% of their tasks; 36% use it for at least 25%. Computer and mathematical occupations dominate (37% of queries), followed by arts, design, and media (10%). Across all occupations, usage skews 57% augmentation versus 43% automation, meaning humans are mostly being helped, not replaced. Those headline numbers got recycled in every newsletter. They are accurate. They are also beside the point if you are trying to make a real career decision.
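If you want to reproduce the two threshold figures against the CC-BY data yourself, they are simple share counts over per-occupation usage fractions. A minimal sketch, assuming you have already reduced the dataset to one usage fraction per occupation; the occupation names and values below are illustrative, not the Index's:

```python
# Hypothetical per-occupation data: the fraction of that occupation's
# O*NET tasks that appear in AI conversations. Illustrative values only;
# the real figures come from the Index release on Hugging Face.
usage_fraction = {
    "software_developer": 0.82,
    "copywriter": 0.78,
    "graphic_designer": 0.41,
    "family_physician": 0.27,
    "registered_nurse": 0.18,
}

def share_above(threshold: float, fractions: dict[str, float]) -> float:
    """Share of occupations whose AI-task fraction meets the threshold."""
    hits = sum(1 for f in fractions.values() if f >= threshold)
    return hits / len(fractions)

print(share_above(0.75, usage_fraction))  # 2 of 5 occupations -> 0.4
print(share_above(0.25, usage_fraction))  # 4 of 5 occupations -> 0.8
```

The Index reports 4% and 36% at the 0.75 and 0.25 thresholds; the toy numbers above give 40% and 80%, but the computation is the same.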
Healthcare practitioners and technical occupations (the SOC umbrella that covers physicians, nurses, allied health, dentistry, and pharmacy) sit at approximately 5% of all queries: well behind tech and creative, but ahead of education, science, and management. The composition matters more than the headline. Within healthcare, the September 2025 release showed three things worth pausing on.
First, documentation tasks dominate the healthcare share. Drafting clinical notes, generating differential diagnoses, summarizing literature, preparing patient handouts — these are the surfaces where frontier LLMs are being deployed inside real clinical workflows today. Procedural and bedside tasks are barely represented. This is not because the data missed them; it is because they are not what AI is being used for. The procedural moat is intact and visible in the data.
Second, radiology and pathology have the highest intensity scores in clinical medicine, and the gap to the next tier is meaningful. Radiology in particular shows an automation share materially higher than the rest of clinical medicine; image-based first-pass workflows are where the substitution risk is highest. This is consistent with the products being shipped (every major PACS vendor now ships AI worklist triage) and with the published literature on radiology AI adoption. Relative to the prior release, the September data widened this gap.
Third, the augmentation/automation split inside healthcare is structurally different from the all-occupations average. Across all jobs the split is 57% augmentation, 43% automation. Inside clinical medicine the split is closer to 80/20 — meaning when AI is used in healthcare, it is overwhelmingly being used to help a human, not to replace one. This is partly regulatory (no licensed prescriber, no prescription) and partly liability-driven (the model cannot carry the risk). It is also partly cultural — physicians are slow adopters of anything until the evidence is overwhelming, and AI for clinical reasoning is not yet at the evidence bar.
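The split itself is nothing exotic: label each conversation as augmentation or automation, count, and take shares, optionally grouped by occupational family. A minimal sketch of that computation; the record layout, the `field` and `mode` names, and the toy counts are my assumptions, not the Index's actual schema:

```python
from collections import Counter, defaultdict

# Illustrative conversation records. The real Index assigns each conversation
# a collaboration pattern; "field" and "mode" here are assumed names.
records = [
    {"field": "clinical", "mode": "augmentation"},
    {"field": "clinical", "mode": "augmentation"},
    {"field": "clinical", "mode": "augmentation"},
    {"field": "clinical", "mode": "augmentation"},
    {"field": "clinical", "mode": "automation"},
    {"field": "software", "mode": "augmentation"},
    {"field": "software", "mode": "automation"},
]

def mode_shares(records: list[dict]) -> dict[str, dict[str, float]]:
    """Per-field share of augmentation vs. automation conversations."""
    by_field: defaultdict[str, Counter] = defaultdict(Counter)
    for r in records:
        by_field[r["field"]][r["mode"]] += 1
    return {
        field: {m: c / sum(counts.values()) for m, c in counts.items()}
        for field, counts in by_field.items()
    }

shares = mode_shares(records)
print(shares["clinical"]["augmentation"])  # 4 of 5 -> 0.8
```

With the real data, the clinical rows land near 80/20 while the all-occupations aggregate lands near 57/43.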
Three claims circulated in late 2025 that the actual data does not support. They are worth naming because they keep getting recycled in 2026.
Claim 1: "AI will replace primary care physicians first." The Index does not support this. Primary-care intensity is in the middle of the medicine pack — well behind radiology and pathology, ahead of surgery and anesthesia. The visible automation pressure on primary care is documentation (where the augmentation share is high), not diagnosis (where the automation share is low). The argument is plausible on first principles; the actual telemetry does not show it.
Claim 2: "Nurses are safe." Nurses are indeed among the most AI-resistant clinical roles by score, but the data shows real and growing intensity specifically in nursing documentation: EHR-integrated AI scribes (DAX, Abridge, and the wave of Hippocratic AI products) are real, deployed, and used. The role is durable; the documentation surface inside the role is not. A nursing professional who cannot use an AI scribe in 2027 will be at a real disadvantage to one who can.
Claim 3: "Healthcare adoption is too slow to matter." The September 2025 intensity score for healthcare-technical was up roughly 40% from the previous release. Adoption is not slow. It is accelerating. It is also concentrated — most of the growth is in a handful of high-leverage workflow surfaces (ambient documentation, image triage, prior authorization narratives, patient-education content). Slow-and-everywhere is a fiction; fast-and-narrow is what is actually happening.
Three concrete moves, anchored to what the data actually shows.
One: get fluent with ambient AI documentation in 2026, not 2027. Whether your hospital deploys DAX, Abridge, Suki, or a vendor's home-grown product, the time-to-fluency curve is short and the productivity delta is real. The physicians who are already comfortable with these tools are the ones who get assigned the best schedules, run the smoothest clinics, and have first dibs when leadership roles open up. The data does not show this directly — that is judgment from inside the system — but the data does show the surface is real and the adoption curve is steep.
Two: if you are early-career and choosing a specialty, weight procedural and bedside content positively. The Index is unambiguous about which surfaces are durable. The argument for cardiology over IM, or for EM over hospitalist medicine, or for procedural anesthesia over generalist work, is not a panic argument — it is a "the next 20 years of compensation will follow the procedural moat" argument. This was true before AI; it is more true now.
Three: treat clinical informatics as the AI-era physician credential. The ABPM-CI board (American Board of Preventive Medicine — Clinical Informatics) is a real board pathway, and the holders of that credential are the people leading every AI procurement and deployment decision inside health systems right now. There is genuine scarcity in the supply. This is the credential I would tell my own colleagues to consider in 2026 if they want operational leverage in the AI transition.
Anthropic ships an Index update roughly every quarter. The next release is expected mid-2026. When it lands, I will publish a follow-up post in this same notebook covering: which professions moved the most, which assumptions in our scoring methodology need updating, and what changed for healthcare specifically. The AI-Proof Score for every occupation in our dataset will get re-run within seven days of each release.
If you want the next post and the score recompute delivered to your inbox, the easiest way is to get your AI-Proof Score. Email subscription is automatic.
About the author. Taylor Gardner, DO, is a board-certified physician and the founder of The Career Diagnostic. He reads every Anthropic Economic Index release the day it drops and writes about what changed for the people whose careers depend on knowing.
Pick your role. Get your number. Receive the 12-week plan.