The Calculator Didn’t Kill Accounting
If you’re a software engineer watching AI write code, you’re probably feeling some version of the same thing: is this going to replace me?
I’m not going to tell you that fear is irrational. It’s not. The tools are genuinely impressive and getting better fast. Anyone who dismisses that isn’t paying attention. But I do think the picture is more nuanced than the loudest voices on either side are painting, and I think it’s worth working through honestly. This isn’t the first time a profession has watched a tool show up and wondered if it was staring at its own replacement.
The calculator didn’t kill accounting. It killed a certain kind of accounting job — the rooms full of people doing arithmetic by hand — and it made every surviving accountant dramatically more capable. The automated oven didn’t kill baking. It killed some bakeries and transformed others, but the bakers who understood their craft used better tools to make better bread, faster. The camera didn’t kill art. It killed a specific kind of portraiture, and then artists found entirely new things to do with it. Every one of these tools eliminated some jobs and transformed others. The question was never “will things change.” It was always “how… and what do you do about it.”
The accelerator thesis
Here’s what I think is actually happening: AI amplifies professionals. It doesn’t replace them. A strong engineer with AI ships what used to take a team. Not because the AI is doing the engineering — but because it removes the friction around the thinking. The boilerplate, the lookups, the first draft of something you were going to rewrite anyway. The thinking is still yours. The judgment is still yours. The tool just gets you to the interesting part faster.
This has real economic consequences, and they’re worth stating plainly. A lot of outsourcing has been built on a simple value proposition: volume at lower cost. Five engineers offshore for the price of two onshore. When one onshore engineer with AI can match that volume, the math changes. I’m not passing judgment on quality in either direction — I’m just saying the economics shift. And when they shift, team sizes shrink. The premium moves away from headcount and toward a single question: how good is the person driving the AI?
That’s the accelerator model. One person does more. The ceiling for what a skilled individual can accomplish goes way up. The floor for what justifies a role goes up too.
What we’re going to get wrong
I’m optimistic about where this lands for experienced professionals. But I think we’re going to make some serious mistakes on the way there, and I want to name two of them.
The junior pipeline problem
This is the one I feel strongest about, and there are two ways we’re going to screw it up.
The first is companies that keep their junior roles but restrict AI access. Especially in corporate and regulated industries, I can already see the policy memos — juniors can’t use Copilot, can’t use Claude, can’t touch the AI tools because of security concerns or because leadership doesn’t trust them with it yet. This handicaps an entire generation. AI is a tool, the same way version control is a tool and CI/CD is a tool and the terminal is a tool. You don’t train new engineers by keeping them away from the tools the industry runs on. You train them by teaching them to use those tools well.
And here’s the thing — AI isn’t just another tool in the belt. It might be the best learning tool we’ve ever had for junior engineers. Documentation is perpetually outdated. The highest-voted Stack Overflow answer was written in 2017 and might not be the right answer anymore. Tutorials rot. But a junior engineer who learns to work with AI has something previous generations didn’t: an infinitely patient tutor that can explain concepts in context, help them understand why the codebase works the way it does, and guide them through problems in real time — all while they’re delivering real value to the team. AI literacy has to be a core part of how we develop junior talent, not something we hope they figure out on their own after hours.
The second failure mode is worse, and I think it’s inevitable: companies are going to look at AI output and decide they don’t need junior engineers at all. Some already have. Internship programs will shrink. Entry-level roles will get cut. And in the short term, the spreadsheet will look great — you’re getting similar output with fewer people at a lower cost. This is dangerously short-sighted.
Junior roles aren’t just cheap labor. They’re how the industry grows its next generation of senior talent. Every staff engineer, every principal, every architect — they were all juniors once. They learned the craft by doing real work on real teams, by having seniors invest in them, by making mistakes in environments where someone was watching. Cut that pipeline now, and in 10-15 years you’re looking at aging senior engineers with no one behind them who learned how things actually work. You can’t just skip that step and expect the talent to appear.
The broader cuts
And it won’t just be junior roles. I think a lot of people are underestimating the scale of what’s coming. This isn’t a normal technology cycle — it’s not like adopting a new framework or migrating to the cloud. This is a fundamental shift in the economics of knowledge work. Entire categories of roles are going to be eliminated in ways we haven’t fully mapped out yet, in industries that haven’t even started having the conversation. Not everyone’s job survives this transition, and the number of people affected is going to be larger than most forecasts suggest. I don’t think it helps to pretend otherwise, and I don’t think it helps to dwell on it either — but dismissing it as hype is just as dangerous as panicking about it.
What survives and why
Here’s the part I keep coming back to. AI generates average output. Not because the technology is bad — because that’s literally what it’s trained to do. It learns from the aggregate. It produces the median. Ask it to write code or a document or a design and you’ll get something competent, something that looks like the average of everything it’s seen. And for a lot of tasks, that’s genuinely useful. But average is a ceiling, not a floor.
The professionals who thrive through this are the ones who bring what AI structurally can’t: novel thinking, taste, quality standards that sit above the mean, and genuine care about whether the output is actually good. Not just functional — good. Not just shipped — crafted.
Think about the artisan baker from earlier. Automated ovens and industrial bakeries didn’t eliminate baking. They commoditized the middle. You can get decent bread anywhere now, cheap and fast. But the bakers who survived — the ones who are thriving — are the ones who care in a way a production line doesn’t. They’re not competing on volume. They’re competing on craft. And people pay a premium for it because they can taste the difference.
That’s where this is heading for software, and for knowledge work more broadly. I want to be clear about what I’m not saying. I’m not saying “learn to prompt and you’ll be fine.” Prompting is a mechanical skill. It’s table stakes, the same way knowing your IDE shortcuts is table stakes. What actually matters is whether you have taste. Whether you can look at AI output and know it’s not good enough. Whether you have the judgment to throw away the 80% that’s mediocre and the skill to elevate the 20% that’s promising. Whether you care enough about your craft to hold a standard that the tool can’t hold for itself.
AI is going to be the best tool that great engineers have ever had. Not because it replaces the thinking, but because it frees you up to do more of it. The people who care deeply about being great at this — who see software as a craft, not just a job — are going to build things that weren’t possible before.
I really believe that. Not as a platitude, but as a practical prediction. The leverage is extraordinary for the people who know what to do with it.