The perfect storm: burnout and AI-assisted development
How the combination of unrealistic expectations, industry pressure, and tools that change every week is burning out development teams.
I took four months off at the end of 2025. No code, no side projects, just a break before looking for a new job. When I came back in early 2026 and started a new position, two things hit me at once: AI tools had made a massive leap, and the industry was in a very different place from where I'd left it.
How my workflow changed
Before the break, I was using Cursor with Sonnet for autocomplete and the occasional prompt with limited context. It worked, but most of the time crafting the prompt and iterating was more work than just writing the code myself.
When I started the new job, getting up to speed with AI tools wasn't optional. The team was already using them. I started trying Claude Code with Opus 4.5 (now 4.6), and the difference was huge. I describe what I want, review what comes out, and iterate. AI implements features while I direct and review.
The development flow used to look something like this:
```mermaid
sequenceDiagram
    participant Product as Product
    participant Dev as Developer
    participant Review as Developer Review
    loop Technical analysis
        Product->>Dev: Proposal / Requirements
        Dev-->>Product: Technical feedback
    end
    Dev->>Dev: Develop feature
    Dev->>Review: Open PR
    Review-->>Dev: Changes requested
    Dev->>Dev: Apply changes
    Dev->>Review: Update PR
    Review-->>Dev: Approved
    Note over Dev,Review: Merged
```

Most of the time was spent in the feature development stage and, in some cases, in the technical analysis with product and design.
The AI flow compresses several stages, not just development. Since prototyping is now so cheap, sometimes it's better to build a quick prototype and capture the product analysis in a faster, more tangible way.
```mermaid
sequenceDiagram
    participant Product
    participant Dev as Developer (Owner)
    participant AIDev as AI Agent Dev (Claude Code)
    participant Tools as Linter/Tests
    participant AIRev as AI Reviewer (CodeRabbit)
    participant DevRev as Dev Reviewer
    loop Technical analysis
        Product->>Dev: Requirements
        Dev->>AIDev: Plan + context
        AIDev-->>Dev: Technical proposal
        opt Quick prototype
            AIDev->>Dev: Demo / mock
            Dev-->>Product: Early validation
        end
        Dev-->>Product: Alignment
    end
    AIDev->>Tools: Implement + validate
    Tools-->>AIDev: Checks OK
    AIDev-->>Dev: Implementation ready
    Dev->>Dev: Review changes
    alt Adjustments needed
        Dev-->>AIDev: Request changes
        AIDev->>Tools: Update + validate
        Tools-->>AIDev: OK
        AIDev-->>Dev: Ready for PR
    else OK for PR
        AIDev-->>Dev: PR ready
    end
    Dev->>AIRev: Open PR
    AIRev-->>AIDev: Automated comments
    AIDev->>Tools: Re-validate
    Tools-->>AIDev: OK
    Dev->>DevRev: Technical review
    DevRev-->>Dev: Approved
    Note over Product,DevRev: Merge (PR approved + checks OK + AI review + dev review)
```

The real shift wasn't speed. It was which skills matter: code review is now the core job, planning is more critical than implementing, and setting up an environment with strict guardrails (fast linter, strong types, solid tests) is what keeps AI output honest.
A little over a year ago, when tools like Cursor started gaining traction, I was working as a tech lead and recommended the team start experimenting, with a clear message: the output is your responsibility. We were never code-monkeys translating requirements into code before AI, and we shouldn't become prompt-monkeys now. That mindset hasn't changed. If anything, it's more relevant than before.
But the workflow shift is only part of the story. What concerns me more is how the industry is handling this transition.
Three types of companies
Not every company is responding the same way. What I've seen, both firsthand and in conversations with other developers, falls into three clear patterns.
The ones that haven't started
Companies where AI tool adoption isn't a priority. No subscriptions, no policies, no directives. The more curious developers use tools on their own, often paying out of pocket or using free tiers.
The ROI math is absurdly easy: tools costing $20-200/month per developer are a rounding error against engineering salaries. But these companies get stuck on "we need to evaluate vendors" or "security hasn't approved it" and lose months of productivity while competitors ship faster.
The paradox is that in these teams, developers using AI on their own end up with a quiet advantage. But it's an individual advantage, not a team one. There's no coordinated approach, no shared learning, and when someone leaves, that productivity walks out the door with them.
The pragmatists
These are the ones getting it right. They understand the technology has real strengths and real limitations. They invest in tools, give time for experimentation, and most importantly: set realistic expectations.
A pragmatic team invests in the full feedback stack: fast linters like oxlint, strict type checking, good test coverage. But they also invest in project-level configuration for the agents: files like AGENTS.md or CLAUDE.md that define conventions, constraints, and codebase context so the AI generates consistent code without you repeating it in every prompt. Custom commands and skills that encode team workflows and make them reproducible. They understand that more agent output means more review load, and plan for it. They create space for the team to evaluate tools, share what works and what doesn't.
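As a minimal sketch of what such a project-level file might contain (every convention below is invented for illustration; a real file encodes your team's actual rules):

```markdown
# CLAUDE.md (illustrative example)

## Conventions
- TypeScript strict mode; no `any` in new code.
- Components live in `src/components`, one per file.
- Use the existing `apiClient` wrapper; never call `fetch` directly.

## Validation
- Run `pnpm check` (lint + types + tests) before declaring work done.

## Constraints
- Do not modify anything under `src/generated/`.
- Keep changes focused: one feature or fix per branch.
```

The point isn't the specific rules, it's that they live in the repo, versioned and shared, instead of in each developer's prompt history.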
```mermaid
flowchart LR
    A([AI Agent]) --> B[Generated code]
    B --> C{"Lint - Types - Tests"}
    C -.->|Fail| A
    C -->|Pass| D{Human review}
    D -.->|Changes| A
    D -->|Approved| E([Ship])
```

These companies treat AI as what it is: a powerful tool that requires infrastructure around it to work well. Not a magic productivity button.
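For the "Types" leg of that loop, a strict TypeScript configuration is one concrete guardrail. A sketch of what that might look like (the flag selection is a suggestion, not a prescription):

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,
    "exactOptionalPropertyTypes": true,
    "noEmit": true
  }
}
```

Running `tsc --noEmit` as part of the agent's validation step turns the type checker into an automatic reviewer that rejects a whole class of generated mistakes before a human ever looks at the diff.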
The ones that bought the hype
This is where it gets ugly. Management that reads the headlines ("30% of our code is AI-generated!", "2x productivity with Copilot!") and translates that into immediate output expectations without investing in anything else.
No time for training. No process adjustments. Just the directive to ship more, faster, with the same people. The premise is that AI should make everything easier, so why aren't we delivering twice as much?
The result is predictable: teams pressured with unrealistic metrics, without proper tools or time to learn how to use them. AI gets used poorly, output is mediocre, code review becomes a bottleneck, and quality drops. Developers end up working more hours, not fewer, trying to meet expectations nobody validated against reality.
A UC Berkeley study published this month analyzed a 200-person tech company over eight months. The central finding: AI doesn't reduce work, it intensifies it. Workers using AI didn't work less. They worked faster, took on broader projects, and ended up extending their hours. 77% of employees reported that AI increased their workload, not reduced it.
As one participant put it: "You had thought that maybe, 'Oh, because you could be more productive with AI, then you save some time, you can work less.' But then really, you don't work less. You just work the same amount or even more."
The developer side
The software industry is used to change. New framework every year, new build tool, new paradigm. But this transition is different: it doesn't change which tools you use, it changes what the job is.
And it caught a lot of people off guard.
The ones who dove in
Some developers adopted these tools from day one. They experiment, optimize their workflow, push the limits of what's possible. They're the ones sharing tips in Slack and helping the team get up to speed.
But here's a paradox I didn't expect. According to TechCrunch, the first signs of burnout aren't coming from the people who resist AI. They're coming from the ones who use it the most.
Why? Because AI expands what feels possible. Before, a developer had a natural limit on how much code they could write in a day. That limit doesn't exist in the same way anymore, and without clear boundaries, work expands to fill everything. More tasks, more scope, more context to manage. The Berkeley study confirms it: productivity grows, but at the cost of cognitive fatigue and increasingly blurred lines between work and personal life.
The ones who need guidance
Many developers aren't resisting AI. They just don't know where to start. Or they started and didn't see results because nobody showed them how to use the tools effectively. 47% of workers using AI don't know how to use it to be more productive.
This group needs technical leadership that creates the space and structure to learn. Not mandates to "use AI," but real support: pair programming sessions with AI tools, sharing workflows that work, iterating together.
The ones who use it without understanding the consequences
And then there's the group that accepts everything the AI generates without questioning it. They copy-paste the agent's output straight into a PR without reviewing, without understanding what the code does, without checking edge cases. They ship fast, but they ship technical debt.
96% of developers don't fully trust the functional accuracy of AI-generated code. 76% think it needs refactoring. But when the pressure is to ship fast and the AI generates code that "works," the temptation to skip thorough review is enormous.
The pressure that didn't exist before
All of this is happening in an industry context that changed dramatically. In 2020-2021, the barrier to entry was low, demand for developers was sky-high, and companies competed for talent. Today the picture is different: over 245,000 tech layoffs in 2025, US programmer employment down 27.5% between 2023 and 2025, and entry-level positions requiring more experience than ever.
Developers today carry a pressure that didn't exist three years ago. Fewer positions, higher demands, expectations inflated by AI hype, and the constant feeling that if you don't adopt these tools you'll be left behind. That combination is the perfect storm for burnout.
The tool race nobody can win
There's a factor that rarely gets mentioned: how fast the tools change. Claude Code, Codex, OpenCode, Cursor, Windsurf. New models every few weeks: Opus 4.5, then 4.6, Codex 5.2/5.3, Gemini 3. Each update changes what's possible and what's worth doing.
A developer who wants to stay current needs to constantly evaluate tools, test new workflows, adjust their setup. That takes time. Time many companies aren't willing to give because the directive is "ship ship ship."
The result: developers evaluating tools in their free time, staying late to try the latest model, feeling that if they don't, they'll fall behind. Another burnout factor disguised as "professional development."
Pragmatic companies create explicit space for this: dedicated time to experiment, share findings, decide as a team which tools to adopt. The hype-driven ones expect it to happen magically between sprints.
What actually works
If there's one thing I'm clear on after these months, it's that the answer isn't resisting change or diving in headfirst without thinking.
More guardrails, not fewer. Strict linters, strong types, solid tests. Let the AI fly within those boundaries. You get most of the speed without the quality risk.
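One small example of how strong types act as a guardrail on agent output. The `Status` type and `label` function below are invented for illustration: the pattern is exhaustiveness checking, where the compiler fails the build if an agent adds a new variant but forgets to handle it somewhere.

```typescript
// Illustrative example: exhaustiveness checking as a guardrail.
// If an agent adds a variant to Status (e.g. "failed") without
// updating this switch, `tsc` rejects the code at the never check.
type Status = "queued" | "running" | "done";

function label(s: Status): string {
  switch (s) {
    case "queued":
      return "Waiting";
    case "running":
      return "In progress";
    case "done":
      return "Finished";
    default: {
      // Compile-time check: this line only type-checks if every
      // variant of Status is handled above.
      const exhaustive: never = s;
      return exhaustive;
    }
  }
}
```

This is exactly the "let the AI fly within boundaries" idea: the constraint costs nothing at runtime, but it converts a silent omission into a loud build failure the agent can see and fix on its own.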
Realistic expectations from management. AI doesn't double productivity overnight. It increases output on certain tasks but creates new load on others (especially review). If management doesn't understand this trade-off, they'll push based on a fantasy.
Time to learn. The tools change fast. If you don't create space for your team to evaluate and learn them properly, they'll use them poorly or not at all.
Connected feedback loops. Give the agent access to CLI tools like psql, Chrome DevTools MCP, and fast test runners. The more automatic feedback loops the agent has, the less manual verification you need and the better the result.
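One cheap way to wire those loops together is a single check command the agent can run after every change. The script name and tool choices below are illustrative (the article mentions oxlint; `tsc` and a test runner stand in for whatever your stack uses):

```json
{
  "scripts": {
    "check": "oxlint . && tsc --noEmit && vitest run"
  }
}
```

With one command that chains lint, types, and tests, the agent gets a pass/fail signal on its own, and "run `pnpm check` before declaring work done" becomes a one-line instruction in your project config.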
Berkeley researchers recommend what they call an "AI practice": intentional norms around AI use that include structured pauses before important decisions, sequencing work to reduce context-switching, and protecting time for human connection.
This isn't the end, it's a transition
A developer's job is still understanding how software works. That hasn't changed and isn't changing anytime soon. What changed is that you're now directing a tool instead of doing everything by hand, and that requires different skills.
Models will keep getting better. The distance between "I describe a feature" and "working code with tests" shrinks every few months. But the technology alone doesn't solve anything if companies don't adjust expectations and developers don't set boundaries.
The companies and teams that handle this transition with intent, not just speed, are the ones that will come out the other side without having burned their people along the way.