Andrej Karpathy quietly shipped one of the clearest labor-market charts of the year: a GitHub repo called karpathy/jobs that scored all 342 U.S. Bureau of Labor Statistics occupations for AI exposure on a 0 to 10 scale. The repo’s own summary put the average exposure at 5.3. Then the screenshots started moving faster than the link itself, with X posts from @_kaitodev and @PawelHuryn turning the treemap into instant discourse.
The reason it spread is obvious once you look at the pattern.
This is not a fussy forecasting model for every industry. It is a rough scoring project, powered by an LLM, and it still manages to say something a lot of executives have been dancing around: jobs that happen mostly through text, software, and screens look far more exposed than jobs that still require hands, motion, and physical presence.
The Treemap Is Crude, and That Is Why It Works
Plenty of labor forecasts die under their own complexity. Karpathy’s project does the opposite. It compresses the question into a single practical lens: how much of the job can AI plausibly do?
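The repo's actual prompt and parsing code are not quoted in the coverage, but the core loop is easy to imagine. Here is a minimal sketch of LLM-based exposure scoring, where the prompt wording, the `score_occupation` function, and the injectable `ask_llm` callable are all illustrative assumptions, not the repo's real implementation:

```python
import re

def exposure_prompt(occupation: str) -> str:
    # Illustrative prompt; the repo's actual wording is not public in the screenshots.
    return (
        f"On a scale of 0 to 10, how much of the day-to-day work of a "
        f"'{occupation}' could current AI plausibly do? "
        "0 means almost none (heavily physical work), 10 means almost all "
        "(pure text-and-screen work). Answer with a single integer."
    )

def score_occupation(occupation: str, ask_llm) -> int:
    """ask_llm is any callable that takes a prompt string and returns model text,
    so the scoring logic stays testable without a live API."""
    reply = ask_llm(exposure_prompt(occupation))
    match = re.search(r"\d+", reply)  # pull the first integer out of free-form text
    if match is None:
        raise ValueError(f"unparseable reply: {reply!r}")
    return max(0, min(10, int(match.group())))  # clamp to the 0-10 scale
```

The point of the sketch is how little machinery the idea needs: one prompt, one integer per occupation, and you have a treemap's worth of data.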
That is why the extreme cases are so memorable.
Medical transcriptionists reportedly landed at 10 out of 10. That tracks. Listen to speech, turn it into formatted text, clean it up, route it into a system. AI is already good at most of that chain.
At the other end, roofers, plumbers, and electricians sit near 0 to 1. The reason is not that those trades lack intelligence or process. It is that the work keeps colliding with the physical world. Pipes are behind walls. Roof pitches are uneven. Old houses lie to you. A model can help with quoting, scheduling, troubleshooting, and documentation, but it cannot crawl into the crawlspace.
The chart is blunt. Good. Blunt is useful here.
The Real Filter Is Whether the Job Needs a Body
The viral commentary around the project landed on the same point because the treemap practically forces you there. If a role lives on a laptop all day, its exposure score rises fast. If the role depends on a person showing up, moving through a space, touching equipment, or dealing with physical edge cases, the score drops.
That explains why software developers, writers, analysts, and other knowledge workers cluster high, often in the 8 to 9 range in the screenshots circulating on X. Their jobs already produce digital inputs and digital outputs. That is ideal terrain for models.
It also explains why the chart feels more honest than a lot of boardroom AI talk. The dividing line is not prestige. It is embodiment.
You can hear the same idea in the commentary that spread with the screenshots: if your entire job happens on a screen, you should assume AI is coming for a meaningful chunk of it. That may sound harsh, but it is a cleaner rule than most enterprise strategy decks.
The Useful Read for Smaller Companies
For smaller companies, the value of this project is not in treating the scores like destiny. It is in using them as a map for where to look first.
If your company runs on screen-heavy work such as intake, quoting, proposal drafting, bookkeeping cleanup, customer email triage, reporting, or CRM maintenance, you probably do not need a think piece. You need a queue. Those are the jobs most likely to move from full human labor to human-supervised AI workflows first.
If your company is in the trades, home services, field operations, logistics, or any business where value gets created on site, the picture is different. The physical service is less exposed than the office layer wrapped around it. Dispatch, follow-up, invoicing, call notes, estimate prep, job summaries, and internal reporting are the soft underbelly. The wrench work is not.
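Turning that instinct into a queue takes almost no code. Here is a sketch, assuming a hypothetical in-house task list with 0-to-10 exposure scores (the task names, scores, and cutoff are all made up for illustration, not drawn from the repo's data):

```python
# Hypothetical field-services task list with illustrative 0-10 exposure scores.
tasks = [
    ("pipe replacement on site", 1),
    ("estimate prep", 8),
    ("dispatch scheduling", 7),
    ("invoicing", 9),
    ("crawlspace inspection", 1),
    ("call notes and job summaries", 9),
]

AUTOMATION_QUEUE_THRESHOLD = 7  # arbitrary cutoff for "look at this first"

# Highest-exposure tasks first; the wrench work falls out of the queue entirely.
queue = sorted(
    (t for t in tasks if t[1] >= AUTOMATION_QUEUE_THRESHOLD),
    key=lambda t: t[1],
    reverse=True,
)
for name, score in queue:
    print(f"{score:>2}  {name}")
```

The sort is the whole strategy: the office layer floats to the top of the queue and the on-site work never makes the threshold, which is exactly the split the treemap is pointing at.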
That distinction matters because a lot of companies are still asking the wrong question. They ask whether AI can replace the business. Usually it cannot. The better question is which slices of the business are embarrassingly screen-shaped.
Exposure Is Not the Same Thing as Elimination
This is where people get sloppy.
A high exposure score does not mean a job disappears. It means the job is easier to unbundle. Some tasks get automated. Some get compressed. Some become review work instead of production work. Some turn into client-facing judgment layered on top of model output.
Low exposure does not mean safe forever either. It means the physical core of the work remains stubborn. The surrounding admin work is still fair game, and that alone can change margins, staffing, and customer expectations.
That is why the plumber joke kept spreading alongside this chart. If AI automates more of the screen economy, the scarce work that still requires a competent human body may get more valuable, not less.
Our verdict: Karpathy’s project is rough, but the roughness is part of the point. The cleanest way to think about AI job exposure in 2026 is not white-collar versus blue-collar or college versus non-college. It is simpler than that. Ask whether the work mainly happens in software or in the world, then build your hiring and automation plans from there.
