The AI-Proof Score is a single 0–100 number per occupation, computed as round(100 × (1 − exposure)), where exposure is a weighted blend of four published signals from three authoritative sources. Higher scores mean the work is more durable; lower scores mean the work is more exposed. The full formula, the weights, the data sources, and the underlying CSV all live on this page and in the public GitHub repository linked at the bottom.
For every occupation in the dataset:
That's the entire model. Two Eloundou exposure terms (the most rigorous task-decomposition study published), two Anthropic Economic Index terms (the freshest real-world usage data publicly available), and a vertical-specific drift constant. Every weight is published; every input is citable.
From Eloundou et al., "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models" (OpenAI / Stanford, 2023). For each occupation, α is the share of tasks that an LLM can perform directly with no additional tooling — the model alone, prompted appropriately, would reduce the time required by at least 50%. Range: 0–1.
From the same paper. β is the share of tasks reachable via LLM-powered software — the augmented surface a developer could plausibly build with the model as a component. β is always ≥ α. Range: 0–1.
From the Anthropic Economic Index, refreshed quarterly. We map an occupation to its O*NET task list, then to the Index's task-coded usage signal. Higher intensity means more real users are actively deploying frontier LLMs on that kind of work today. Normalized to 0–1.
Also from the Index. Of the real usage observed, what fraction is automation (the model doing the work outright) versus augmentation (the model assisting a person's own workflow)? Roles where automation dominates score lower (more exposed). Roles where augmentation dominates score higher: they're being helped, not replaced. Range: 0–1.
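Taken together, the four signals blend into the exposure term from the formula above. Here is a minimal sketch in Python; the equal weights are illustrative placeholders only (the published weights live in the repository), and `automation_share` is assumed to run 0–1 with higher values meaning more automation, hence more exposure:

```python
def ai_proof_score(alpha, beta, usage_intensity, automation_share,
                   weights=(0.25, 0.25, 0.25, 0.25)):
    """Compute the 0-100 AI-Proof Score from the four 0-1 input signals.

    NOTE: the weights here are hypothetical placeholders, not the
    published weights; the real values are in the repository's
    source-of-truth Python file.
    """
    w1, w2, w3, w4 = weights
    exposure = (w1 * alpha                # direct LLM exposure (Eloundou)
                + w2 * beta               # LLM-plus-software exposure (Eloundou)
                + w3 * usage_intensity    # observed usage (Anthropic Index)
                + w4 * automation_share)  # automation vs augmentation (Index)
    return round(100 * (1 - exposure))
```

With every signal at 0 the score is 100 (fully durable); with every signal at 1 it is 0 (fully exposed).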
Every occupation belongs to a vertical (medicine, nursing, dentistry, allied health, pharmacy, law, finance, accounting, tech, creative, education, trades, service, management, sales, science). Each vertical carries a 5-year drift constant — the expected score decline if current AI-capability and adoption trends continue. The drift constants:
| Vertical | 5-yr drift (pts) | Reasoning |
|---|---|---|
| trades | 2 | Physical work; immune to text-model progress. |
| nursing, dentistry | 4 | Hands-on care; modest documentation drift. |
| allied health, sales | 5–6 | Mostly stable; some workflow erosion. |
| medicine, education, management | 6–7 | Documentation surface squeezing; clinical authority durable. |
| science, service, pharmacy | 8–9 | Repetitive surfaces eroding; skilled cores durable. |
| tech, finance, accounting | 11–13 | High augmentation today, high substitution risk if model capability stays on its current curve. |
| law | 14 | Document review and legal research collapsing fast; courtroom durable. |
| creative | 15 | Writing, design, translation under sustained pressure. |
1-yr and 3-yr drift are 25% and 65% of the 5-yr value, respectively — a slightly front-loaded linear schedule. We will publish a more empirical drift curve once we have two more quarters of Anthropic Index data to calibrate against.
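The schedule above can be sketched directly. The multipliers (25% at 1 year, 65% at 3 years, 100% at 5 years) come from the text; the drift constants below are the single-valued entries copied from the table:

```python
# Fraction of the 5-year drift realized at each horizon (from the text).
DRIFT_SCHEDULE = {1: 0.25, 3: 0.65, 5: 1.0}

# Single-valued 5-yr drift constants from the table above, in points.
FIVE_YEAR_DRIFT = {"trades": 2, "nursing": 4, "law": 14, "creative": 15}

def drift_at(vertical, horizon_years):
    """Expected score decline (points) for a vertical at a 1/3/5-yr horizon."""
    return FIVE_YEAR_DRIFT[vertical] * DRIFT_SCHEDULE[horizon_years]
```

For law (5-yr drift of 14 points), this gives 3.5 points at 1 year and 9.1 points at 3 years.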
The score is informational, not personal advice. It does not know your specific employer, your specific city, your specific seniority, your specific niche within the occupation, or any of the dozens of other factors that make a real career decision a real career decision. It is a starting point for a conversation, not the conversation itself.
The score also says nothing about whether AI is good or bad for your work. It's a number describing exposure to a technological transition. Some exposed roles will become higher-paid (because productivity rises). Some resistant roles will become commoditized (because the entry barrier falls). The score is one input into a much bigger judgment — which is exactly why it lives next to a personalized roadmap, not on its own.
Every release of this site shows a version pill in the top-left wordmark. v1.0 · 2026 means: built April 2026, against the March 2026 Anthropic Index release.
Every numeric input that produces every score is published. The current dataset lives at /assets/data/occupations.json. The underlying source-of-truth Python file (with notes per occupation) is committed to the public repository. If you spot a number that's wrong, email support@thecareerdiagnostic.com with a citation. We will fix it and credit you in the changelog.
Email support@thecareerdiagnostic.com with the occupation, the input you disagree with, and a published source. Real arguments improve the model. Public credit for substantive contributions.