Evolving With the AI Era
Not a list of tips. A real look at what separates the engineers who will come out the other side of this from the ones who won't.
Spotify’s best engineers have not written a single line of code since December. Not the junior ones who are still finding their footing. The senior ones. The architects. The people you would expect to be the last ones still holding the keyboard. They are directing AI agents from Slack, reviewing the output, merging from their phones, and shipping features faster than before. That came out on the Q4 earnings call last week, and it landed for me exactly as hard as it should have.
If your first instinct was to find a reason that does not apply to you, that is the instinct you should be worried about.
This is not a one-off experiment at a single company. The direction the bigger tech firms are heading is clear, and it is accelerating. The question is not whether engineering is changing. It is whether you are changing with it, or whether you are the person still writing the same kind of code you wrote three years ago and telling yourself it is fine.
A caveat before I go further: everything that follows is pattern-matching from a handful of architecture decisions, a handful of companies' worth of agent deployments, and personal experience building this sort of stuff at PayPal. I have maybe 3-5 real data points. That is not a real study. That is just a hunch with some receipts. But the pattern started showing up much more often these past few months, and I think it is moving faster than most engineers realize, so here it is.
Know which type you are (and be honest about it)
The shift happening right now is not that AI is doing some of the work. It is that the work requiring zero contextual judgment is nearly fully automated, and the rest of it requires more contextual judgment than ever. Both of those things are true at the same time, and together they destroy the middle.
The work that requires no real judgment is getting automated. The work that requires deep judgment is getting harder and more valuable. Everything in between is disappearing. Quarter by quarter, agents and their harnesses get better, and more of the middle gets automated. The floor keeps rising. Most engineers are standing on the part of the floor that is about to become the new basement.
Two types of engineer stay above the floor. Not one. Two.
The first is the genuine generalist. Not the shallow kind, where full-stack developer means some React, Node, AWS, and whatever else was in the job posting. I mean someone who actually understands infrastructure, security, data modeling, product tradeoffs, UI/UX, frontend and backend design, and enough business context to know which one matters most in a given situation. Enough surface area to direct agents across the full scope of a system and catch them when the output is wrong. (I spent a long time thinking breadth was the goal. Turns out breadth without depth is just a longer list of things you are mediocre at. The distinction took me longer to understand than I would like to admit.) The genuine generalist’s value in an agent world is judgment at the system level. They stay above the floor because they see the full system. They orchestrate across a wide surface because they actually understand what is underneath it.
The second is the deep domain specialist. The engineer who knows payment systems well enough to catch an idempotency violation that looks correct. The one who knows healthcare data boundary rules that would never appear in a training set. The fraud detection engineer who understands why the timing assumptions baked into that logic are there, not because they read the code, but because they know the regulatory context that produced it. A general-purpose model cannot replicate this yet. It produces plausible-sounding output that violates every constraint the specialist knows. The specialist’s job is no longer to write the code. It is to know exactly where the model is about to get it wrong.
You probably already know which type of engineer you are. You might not want to say it out loud. What dies is everything below the rising floor. The engineer who is pretty good at one thing but not expert in the domain. The generalist who is comfortable across many tools but not actually deep in any of the underlying areas. “Pretty good at React” is a pattern the models already know. “Understands why this payment flow will fail under certain specific edge cases or jurisdictions” is not. You do not have to panic. But you do have to be honest about where you are right now.
The pure vibe coder is the clearest example of someone standing on the part of the floor that is about to rise past them. Yes, they can ship. Prompt an agent, get a working app, push it live, hit a thousand users. That genuinely works and the results are real. But scratch the surface and what you find underneath is usually no mental model of the infrastructure it is running on, no understanding of why costs look the way they do, no ability to explain the security posture of a system they built by describing it to a model. Getting from a thousand users to a million is not a bigger version of the same problem. It is a completely different class of problem. The architectural decisions you skipped over at the start are the ones that blow up your cost structure at scale. The trust boundaries nobody thought through are the ones that leave your users’ data exposed. Results without substance is a fine place to demo from. It is a terrible place to scale from. And this is not what enterprise companies are looking for.
There is a related problem that does not get talked about enough. The engineer who entered the field after agents were already everywhere and never had to build the mental model from scratch. They can ship. But when the agent gets it wrong, they cannot tell, because they never learned what right looks like without the agent. This is not a character flaw. It is a gap in experience that the tooling papered over. If that is you, the fix is not to stop using agents. It is to make sure you understand what is happening underneath them. Read the code they produce. Break things on purpose. Build something without the agent at least once, not because that is how you should work, but because that is how you learn what the agent is actually doing for you.
Then there is the opposite problem. The experienced engineer who has the depth but refuses to pick up the new tools. They have seen hype cycles before. They watched drag-and-drop builders in Visual Studio and Xamarin promise that anyone could build apps without writing code. They watched WordPress and Squarespace come for their jobs and then become just another tool they had to support. They watched no-code and low-code promise to kill coding entirely. They watched NFTs fade. They watched early Devin not live up to the hype. And they are convinced this is the same thing. That skepticism was earned and it served them well. But the difference this time is that the technology is already in their codebase, already in their CI pipeline, already shipping production code at the companies they want to work for. These engineers will still be in demand. Their depth is real and it is not going anywhere. But they are doing themselves a disservice by dismissing tools that would make their existing expertise dramatically more powerful. The engineers with real depth who also embrace the new tools are going to be the most valuable people in any room.
Orchestration plus events is the whole game
I wrote about Stripe’s Minion system in my last post. One-shot end-to-end coding agents in production, not a research demo. Then Spotify drops Honk on their Q4 earnings call. Honk is an internal orchestration layer sitting on top of Claude Code. Engineers describe what they want in natural language, Honk routes it to the right agents, the agents write and test the code, and the result gets pushed back to Slack for review. Senior engineers are merging production changes from their phones. Ramp built Inspect, a background agent that writes code in sandboxed VMs with full access to Sentry, Datadog, and their deployment pipeline. Within months, 30% of all PRs merged to their frontend and backend repos are written by Inspect. Then OpenClaw goes viral as the open-source version of the same idea: a local orchestration layer that wires agents into messaging platforms, event sources, and external tools, with the user directing from a chat interface. Four implementations. Same pattern. That is what convergence looks like.
I have been building something similar at PayPal. Watching Stripe and Spotify describe it on earnings calls felt less like news and more like confirmation. The details are not mine to share, but the shape and the vision are the same. And the thing that changed my thinking was not the code generation. It was the moment I tried to expand across domain and system boundaries and discovered the agent’s biggest problem was not reasoning. It was context. It did not know the things a senior engineer carries in their head and never writes down. That experience is why I keep coming back to the same point: the current models are not the bottleneck. The context architecture is. Your moat is not the models but the way you engineer your context and build your pipelines and workflows around them.
Anthropic shipping Claude agent teams formalized what is already happening in the wild: multiple agents, shared context, coordinated output, quality checks at each hand-off. SiliconANGLE called this the “coding wedge”, the first domain where autonomous agents were reliable enough for real production use, and whoever controls how developers build with AI gains control of the orchestration layer that extends to every knowledge worker after that. Anthropic’s Claude Cowork announcement erased $285 billion in SaaS market value in a single session. That is not a productivity story. That is a platform war, and the orchestration layer is the territory now, not the models. Within the next year, open-source models will hit the level of Opus 4.6 at a fraction of the cost. When that happens, the model is no longer the differentiator. The harness and orchestration layer become the frontier that most engineers will be working in.
The engineer’s job in that picture is not to write the code. It is to design the system that writes the code. And that is genuinely harder than it sounds because you are now responsible for something closer to an architecture than a feature. You have to think about context boundaries, task decomposition, when to parallelize and when to keep things sequential because one step’s output feeds the next. You will have to think about security, how to build workflows that keep your models on track, how to build observability so your agent systems are not black boxes. That is the moat now. Not the model. The system around it.
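The decomposition question, when to parallelize and when to keep things sequential, reduces to a small and concrete shape. A toy sketch, not a real harness: `run_agent` here is a hypothetical stand-in for a call out to your model or agent API, and the task names are made up for illustration.

```python
import asyncio

# Hypothetical stand-in for a call to a coding agent.
# In a real system this would hit your model/harness API.
async def run_agent(task: str, context: str = "") -> str:
    await asyncio.sleep(0)  # placeholder for real network latency
    return f"result of: {task} (given: {context or 'no context'})"

async def build_feature(spec: str) -> str:
    # Independent subtasks: neither needs the other's output, so run in parallel.
    schema, api_stub = await asyncio.gather(
        run_agent(f"design data model for: {spec}"),
        run_agent(f"draft API surface for: {spec}"),
    )
    # Dependent subtask: implementation needs both outputs,
    # so it must run sequentially after them.
    impl = await run_agent(f"implement: {spec}", context=f"{schema}\n{api_stub}")
    # Final sequential step: review the implementation against the original spec.
    return await run_agent(f"review against spec: {spec}", context=impl)

print(asyncio.run(build_feature("rate-limited webhook endpoint")))
```

The interesting design work is entirely in where you draw those parallel/sequential boundaries, because every boundary is also a context boundary: whatever the downstream agent does not receive, it does not know.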
What most people underestimate is how much the event architecture underneath all of this matters. An agent that only responds to direct user input is an expensive chat interface. An agent wired into a real event fabric behaves like something with situational awareness. A git push triggers a code review. A deployment event triggers a rollout summary. An internal state change propagates downstream and kicks off a workflow before any human noticed the condition existed. That is what gives these systems the feeling of intelligence. Not the model quality. The event topology.
This is not new technology. Engineers who have spent time with Kafka, event sourcing, and async message patterns already have the mental model. What is new is applying that architecture as the nervous system for agent workflows instead of just data pipelines. The model handles the reasoning. The events define what it knows is happening and when. Get that right and the system feels alive. Get it wrong and you have a slow, expensive autocomplete box with an API bill attached.
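At its core, the event topology is a very small idea. A hedged sketch with an in-memory bus standing in for whatever Kafka-like fabric a real system would run on, and print statements standing in for actual agent invocations; the topic names and payloads are invented for illustration.

```python
from collections import defaultdict
from typing import Callable

# Minimal event fabric: topics map to subscriber callbacks.
# A production system would sit on Kafka or similar; the topology idea is the same.
class EventBus:
    def __init__(self) -> None:
        self.subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self.subscribers[topic].append(handler)

    def publish(self, topic: str, payload: dict) -> None:
        for handler in self.subscribers[topic]:
            handler(payload)

bus = EventBus()

# Agents subscribe to events, not to humans: a push triggers a review,
# a finished deployment triggers a rollout summary, with nobody prompting anything.
bus.subscribe("git.push", lambda e: print(f"review agent: inspecting {e['ref']}"))
bus.subscribe("deploy.finished", lambda e: print(f"summary agent: rollout of {e['version']}"))

bus.publish("git.push", {"ref": "feature/checkout-fix"})
bus.publish("deploy.finished", {"version": "v2.4.1"})
```

Swap the lambdas for real agent invocations and the in-memory dict for a durable broker, and you have the skeleton of the situational awareness described above.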
What to actually do on Monday morning
Here is where most “the world is changing” posts stop. They diagnose the problem, tell you to adapt, and leave you staring at your terminal wondering what that means in practice. So let me be specific.
Backend and cloud is where I would concentrate if I were starting from scratch right now. Not because frontend is dying. But because the surface area on the backend is enormous, the problems compound in genuinely interesting ways, and it is the layer where all the real orchestration work lives.
If you are a frontend engineer and want to stay in frontend, the same rising floor applies there too. The middle is shrinking. Building components, wiring up basic state, shipping standard layouts. A model with decent context already does that. But the edges of frontend are still genuinely hard and still require engineers. Heavy business-logic frontends with complex multi-tenancy, multiple user roles and tenant configurations, and deeply intertwined domain rules. That is one edge. The other is extreme design-level frontend, the kind of animation, interaction, and visual craft that a model cannot taste-check for you. If you want to see what expert-level frontend actually looks like, look at people like Josh Comeau and TkDodo. Deep accessibility optimization, server-driven UI patterns, complex state management for real business logic. Those survive. Learn the base of CSS properly, but do not spend months going deep into CSS-in-JS versus Tailwind versus CSS Modules. Pick one stack and commit. And if you want to future-proof, push toward fullstack. The frontend engineer who also understands the infrastructure underneath has a very different career trajectory than the one who does not.
For the backend path, here is what that looks like concretely. Start with how distributed systems actually fail. Not the textbook version. The version where services call each other in ways nobody fully mapped out and something breaks at 2am because of a dependency three layers deep. Learn a serious message broker, not just the tutorial but the operational reality of what happens when messages arrive out of order or a consumer falls behind. Understand how containers run in production, not just how to write a Dockerfile but what happens when something goes wrong and you need to figure out why. Learn cloud security and IAM, not as a compliance checkbox but as an actual mental model of who can talk to what and why. Learn observability and logging, because a system you cannot see into is a system you cannot debug, and agent workflows make that ten times worse.
Then layer agent orchestration on top of that foundation. Context window management, tool-use patterns, how to decompose a complex task into subtasks an agent can handle without losing coherence. How to build the evaluation harness that tells you whether your agent system is actually working or just producing confident-sounding garbage. Learn when to route a step through the model and when to just execute it directly. If a workflow is deterministic, run it on the server or through a tool call without burning tokens on reasoning the model does not need to do. The difference between an agent system that costs a fortune and one that scales is knowing where the model adds value and where plain code is enough. This is the stack that matters now. It is also, not coincidentally, the stack that is hardest for a model to learn on its own because the failure modes are emergent and the context is institutional.
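The routing point deserves a concrete shape. A minimal sketch of the decision, assuming a hypothetical `call_model` function in place of a real inference API: the lint step is deterministic, so it runs as plain code, and the model is only invoked when there is actual judgment to apply.

```python
# Hypothetical stand-in for a real inference API call.
def call_model(prompt: str) -> str:
    return f"[model output for: {prompt}]"

def lint(code: str) -> list[str]:
    # Deterministic check: plain code, no tokens spent.
    return [
        f"line {i}: trailing whitespace"
        for i, line in enumerate(code.splitlines(), 1)
        if line != line.rstrip()
    ]

def review(code: str) -> str:
    issues = lint(code)          # cheap and deterministic, always runs first
    if not issues:
        return "clean"           # no reason to invoke the model at all
    # Judgment step: only now pay for reasoning, and with focused context.
    return call_model("explain and fix these issues:\n" + "\n".join(issues))

print(review("x = 1 \ny = 2"))
```

The pattern generalizes: every step in a workflow gets asked "is this deterministic?" first, and only the steps that survive that question earn a model call.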
What to stop doing: optimizing for things the model already does well. If you are spending your learning time cycling through the latest framework or library in a space the model already handles, that time is going somewhere with diminishing returns. What to start doing: building something with agents that touches a real system. Not a demo. Not a chatbot wrapper. Something that has to handle failure, manage state across calls, and produce output you would actually trust. You will learn more in two weeks of that than in six months of reading about it.
The cost curve is not waiting for you
Every new layer of abstraction in the history of computing unlocked a class of problems the previous layer could not address economically. Assembly to C. C to the web. The web to cloud infrastructure. Each time, the people who clung to the old layer said something was being lost. Each time, the people who embraced the new layer built things that were simply not possible before. There will still be engineers who write every line by hand. There are still COBOL programmers. They are not unemployed. They are also not building what comes next. That is the choice in front of you right now, and pretending it is not a choice does not make it go away.
In my last post I wrote about watching for the moment when cheaper models hit frontier quality. Sonnet 4.6 dropped recently and delivers Opus 4.5 level performance at Sonnet pricing. That is the cost curve doing exactly what I said it would, inside Anthropic’s own lineup before the open-source alternatives even catch up. Faster than I expected. These things always do. The economics of running agent systems at scale are going to look very different in twelve months. The companies that were waiting for costs to come down before committing to this architecture are running out of excuses.
On the bubble question: the financial layer could absolutely correct. Valuations are stretched, capital is flooding in faster than revenue can justify it, and markets do what markets do. But here is the distinction most people are not making: the financial bubble and the technical shift are two different things. The financial bubble may correct. The technical one is not a bubble. There is no putting this back in the box. The models exist. The architectures are in production. The cost curve is falling. This growth is happening in one of the least favorable macro environments in a decade. High rates, constrained enterprise budgets, uncertain capital markets. AI is expanding into all of it anyway. Now imagine what happens when rates come down, budgets loosen, and inference costs drop another order of magnitude at the same time. If you are sitting on the sideline waiting for the hype to die so you can go back to normal, there is no normal to go back to.
The uncomfortable part
Growth does not come easily to someone who is attached to being good at the thing they are already good at. That is the honest version of “be open to change” and it hits differently when you say it plainly.
I have skills I am proud of. Years of work went into them. Some of those skills are becoming a commodity, and watching that happen is genuinely uncomfortable. Here is the specific moment I am talking about. I was refactoring a component at work, the kind of careful, methodical decomposition I have done dozens of times. Halfway through, I gave the agent the full context on a whim. I described my vision for the refactor, the structure I wanted, the patterns I was aiming for. It implemented it exactly as I described, first try, with improvements I had not thought of. The kind of refinement I would normally spend hours on, iterating and crafting until it felt right. Done. I sat there for a while. The scarce skill was still the decomposition, knowing how to break the problem apart and describe the shape of the solution. But the coding itself, the hours of crafting and refining, that was not the scarce part anymore. (I would love to tell you I had this realization gracefully. I did not. I made a coffee, sat back down, and deleted my refactor branch out of spite.)
The signal to watch for is commoditization. When something you do well starts to feel like something a model with decent context can do passably, that is the market telling you where the floor just moved. You do not have to abandon everything overnight, but you should already be shifting your weight toward the layer above it, the problem it creates, the judgment it requires. Skills do not expire all at once. They degrade slowly and then suddenly, and by the time it is obvious it is usually too late to catch up gracefully.
The engineers who come out of this well are going to be the ones who could break their own worldview when the evidence required it. The golden age of product building is not behind us. It has not started yet. The engineering cost of solving certain classes of problems just dropped by an order of magnitude, and that means problems nobody bothered attempting are now worth attempting.