The Last Human-First Programming Language
Java's garbage collector hid lifetimes. Ruby hid types. Rails hid HTTP. Hibernate hid SQL. EC2 hid hardware. For three decades, programming languages and frameworks took something the machine cared about and pushed it below the layer the programmer touched. The trade paid until the human stopped being the bottleneck.
What changed
The 2025 Stack Overflow developer survey found 84% of respondents using or planning to use AI tools, and 51% of professional developers using them daily. The percentages will keep moving. If the bottleneck stops being the human at the keyboard, the priorities of language design have to follow.
Typing stopped being scarce. Reading still matters, but mostly for review. What constrains throughput now is whether Claude Code or Copilot can open a PR that passes the type checker, the unit suite, and CI without a maintainer rewriting it. Rails and Rust were designed against a cost function that no longer dominates.
The AI adoption tax
A new language or framework used to win by being more pleasant for humans: Go, Rust, Kotlin, Swift, Elixir on the language side; Rails, React, Svelte on the framework side. Safer, faster to write, easier to learn, fewer silent failure modes.
A new language launched today starts behind unless its corpus catches up. Beyond a compiler, a package manager, and a community, it needs enough representation in pretraining corpora for an agent to write it. Coding assistants are reliably worse at niche DSLs and young languages: less training exposure, less Stack Overflow to scrape, almost no worked examples beyond toy snippets. Standard libraries don't close that gap. Years of public code do — the advantage that made JavaScript and Python the defaults the moment LLMs got useful.
Good verifier loops narrow it from the other side. A language with a fast type checker, a precise LSP, and a deterministic formatter gives an agent quick, structured signals about what it just got wrong. That is part of why Rust survived its own learning curve, and why Gleam and Zig aren't as hopeless as their corpus alone suggests. But corpus and verifier loops penalise the same thing: runtime designs whose effects aren't on the page.
A language with no corpus today is a language with no documentation. And the corpus advantage compounds: as more shops fine-tune on private codebases, the language a team has bet on becomes an internal asset that makes migration expensive in a new way — not just for humans, but for the model that has to relearn an unfamiliar repo.
Boilerplate is documentation for machines
ORMs generate SQL you don't see at the call site. Decorators wire in behaviour defined elsewhere. Convention-over-configuration replaces five hundred lines with five hundred unwritten assumptions. Dependency injection moves object wiring out of the code that uses it. Metaprogramming saves typing and hides effects.
SQL itself is a high-level abstraction and LLMs handle it well, because what a query does is on the page in front of you. Typed routes, declarative pipelines, App Intents declarations sit on the same side. They're abstract and dense, but you can read them. The opposite list — ORMs that materialise seventeen joins from one expression, decorators that mutate state on import, dispatch resolved at runtime — is what turns into a liability when the editor isn't human. The line is visibility at the call site, not how abstract the API is.
For an agent, that line is not aesthetic. A SQL query carries table names, selected fields, filters, and return shape in the same neighbourhood as the edit. An ORM relation may require reading generated code, import-time side effects, config files, and framework conventions before the behavioural diff is even knowable. Visibility at the call site is, in effect, a prefetch of the evidence the next token needs.
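The contrast fits in a few lines. A minimal TypeScript sketch, with schema and names invented for illustration — the point is what sits on the page versus what sits elsewhere:

```typescript
// Visible at the call site: table, selected fields, filter, ordering,
// and return shape all sit in the same neighbourhood as the edit.
// (The orders schema here is hypothetical, for illustration only.)
type OrderRow = { id: number; total: number };

function recentOrdersSql(customerId: number): { text: string; params: number[] } {
  return {
    text: "SELECT id, total FROM orders WHERE customer_id = $1 ORDER BY created_at DESC LIMIT 10",
    params: [customerId],
  };
}

// Hidden at the call site: the same query behind an ORM-style chain.
// Which SQL this emits -- which joins, which columns, lazy or eager --
// lives in the library and its config, not on this page.
//
//   const rows = await orm.customer(customerId).orders.recent(10);
```

Everything the next token needs to predict the behaviour of the first version is within a hundred characters of the edit point; for the second, it may be three files and a framework convention away.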
The canonical extreme of this failure is Knight Capital, August 2012. A repurposed config flag reactivated routing code that had been dead since 2003 on one of eight servers that missed the deploy. A 2005 refactor had silently removed the dead code's safety check. Seven servers ran the new code; the eighth ran an infinite buy-high-sell-low loop. $440 million in 45 minutes, and no diff that showed any of this.
The everyday version is milder. I've shipped my share of ORM regressions where the diff was tiny and the generated SQL changed shape. An agent ships them more often, because it has even less to look at than the human who shipped the change. The cost lands on the reviewer: more code, more diff. The bet is that each line is more legible — behaviour lives where you read it, not in a metaclass three files away. The job changes shape. Not difficulty.
Model Context Protocol (Anthropic, Nov 2024). OpenAI's Structured Outputs (GA Aug 2024). Apple's App Intents (iOS 16, 2022). Three different shapes — JSON-RPC manifest, JSON Schema, Swift protocol — converging on the same move: the integration point became a typed surface the system can introspect, not a prose tutorial the developer reads. This is partly SOAP rehabilitated. WSDL was machine-readable in exactly this way and lost to REST because human integration friction beat machine introspection in the human-ergonomics era. With agents in the loop, the calculation flips back.
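The shared move is easy to show. A hypothetical tool declaration in the spirit of these protocols — the names and exact shape are illustrative, not any real wire format — where the integration point is data a system can enumerate rather than prose a developer reads:

```typescript
// A hypothetical tool declaration: the contract is a typed, introspectable
// value, not a tutorial. (Illustrative shape only, not a real protocol.)
const searchOrdersTool = {
  name: "search_orders",
  description: "Find recent orders for a customer",
  inputSchema: {
    type: "object",
    properties: {
      customerId: { type: "number" },
      limit: { type: "number" },
    },
    required: ["customerId"],
  },
} as const;

// An agent can discover the required parameters mechanically,
// without scraping documentation.
function requiredParams(tool: typeof searchOrdersTool): readonly string[] {
  return tool.inputSchema.required;
}
```

WSDL made the same bet two decades earlier; the difference now is that the consumer of the introspection is a model, not a human with a code generator.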
What survives
A few defaults look durable beyond the visibility point. Deterministic formatting and a small canonical idiom, because stylistic variance burns context. Small units that fit in a context window, because context is finite and small changes are easier to revert. Machine-readable interfaces (OpenAPI, gRPC, MCP) over prose docs. Languages that compile to or interoperate with high-corpus defaults instead of starting from scratch.
This site runs TanStack Start. Reading a request from route to loader to typed query is three files, none hidden. The NestJS controllers I've worked in took longer to read than to change. The same pattern shows up across the stack: Rust 1.95 with rust-analyzer is more agent-tractable than Rust 1.40 with RLS was in 2020; raw SQL beats most ORMs; static configuration beats runtime metaprogramming.
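The pattern, stripped of any particular framework's API — names invented here, and deliberately not TanStack's actual surface — is a request path that reads top to bottom in one place:

```typescript
// A route whose path, loader, and data shape are all on one page.
// Names are illustrative; this is the pattern, not a real framework API.
type Post = { id: string; title: string };

const postRoute = {
  path: "/posts/:postId",
  loader: async (params: { postId: string }): Promise<Post> => {
    // In a real app this would run a typed query; stubbed here.
    return { id: params.postId, title: "Hello" };
  },
};

// Nothing about this route requires reading a decorator graph or a
// DI container to know what runs: path in, Post out.
```

The decorator-and-container version of the same endpoint scatters that information across annotations, module registrations, and providers, which is precisely the shape an agent pays for.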
Caveats
Humans still read code. Architecture, security, debugging, and accountability stay on the human side. Readable by humans. Optimised for machines. In that order.
New languages can train their own models. That only relocates the problem: the new language also needs an AI distribution strategy on top of a compiler and a package manager — fine-tunes, an MCP server, machine-ingestible docs, public code at scale. Synthetic data and RL on verifier feedback compress the cost (see recent gains on Lean and on competitive programming benchmarks) but don't eliminate it.
Python complicates the picture. The most LLM-fluent language today fails most of the criteria above. Its corpus is enormous, and LLMs got remarkably good at the dynamic-magic style because the corpus supported it. Python isn't losing tomorrow. But the cost structure that shaped Python isn't the one shaping what comes next.
This argument is mostly about application development. Embedded firmware, scientific computing, game engines, and HFT optimise against different cost functions, and the prescription doesn't carry.
Model quality could equalise the rest of it. If frontier models keep getting better at corpus-sparse languages — Lean and competitive programming both showed it's possible with synthetic data and verifier feedback — the corpus advantage compresses toward noise inside five years. Some of this post is a bet that the compression doesn't happen at the rate the model labs' roadmaps suggest. The visibility argument survives that bet either way: fast verifier loops still favour languages whose effects are on the page.
For thirty years the language designer's job was minimising human cognitive load per line. The new constraint is per change — how reliably an agent can write a change, inspect it, run the tests, and fix what broke.
You don't need to wait for a new language to act on this. Pick the typed query builder over the opaque ORM. Pick the OpenAPI spec over the Notion page. Pick the framework whose request path reads in one file, not across a graph of decorators. These aren't new instincts; they just hold up better with an agent in the loop.
My bet, three years out: tRPC and TanStack Start outpace NestJS and Spring among new TypeScript and Java backends. If that's wrong, the corpus argument is bigger than the visibility one, and the future looks more like Python continuing to absorb everything. Either way, the agent-era line isn't abstract versus concrete. It's visible versus hidden — and that distinction is the one worth tracking.