How AI agents will impact software engineering in 2026
As we move into 2026, I keep coming back to one core question: How are we actually treating AI agents? The industry has been quick to adopt them, but not nearly as quick to define what they are in relation to the teams and systems they now support. Are they people? Are they processes? Are they something in between?
The answer to that question will influence everything from governance to security to how engineering teams operate in the year ahead.
Personas, Processes, and an Entirely New Management Problem
Right now, my fascination with AI is rooted in how we choose to treat the agents themselves. If we decide to treat them like people, we get one set of outcomes and problems. If we decide to treat them like processes, we get a different set of outcomes and problems. So the real question becomes: "Do we continue to anthropomorphize these agents, or do we start treating them like IT capital?"
If an agent is doing well, do we give it a performance review? Do we track it in an HR system? Do we treat it like a developer we manage? Or is it just a process you run the same way you run a terminal or a web editor?
Ultimately, my fascination is really around: "Do these agents have a persona, or not?" And how does that affect the way we interact with them? For me, it's more anthropological than anything else.
Then there's a second-order question: if developers are managing these agents, do they need to understand management? Do classic motivational tactics work with agents? Do they need a whole new set of skills to guide - not just operate - these systems?
And all of this is happening alongside a huge data question: with all these agents running, what data is moving back and forth into someone else's environment? What data is leaking? What data is intentionally going? What data do you think is safe that isn't?
The whole gamut of agent management is just ripe for disruption or creation or evolution.
We'll see some companies lean heavily into giving agents personalities and managing them metaphorically like humans. And we'll see others adopt strict IT governance, treating them like process assets.
Outages, Zero Trust, and Operational Separation
I think we'll start to see significant outages or security events caused by code produced by AI agents that wasn't properly reviewed, gated, or controlled in the operational paradigm.
If code is coming from a place where you can't trace or verify its origin, we have to treat it as hostile. If that's the case, you need zero trust across your entire operational paradigm. And that's not simple.
So in 2026, you'll see more investment in gates, promotion techniques, and protection of the production runtime. You may also see a rise in operational practices that are separated, by necessity, from developers. This may be one of those times where the companies that never moved to a model of developers running everything they build end up ahead of those that did.
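Treating unverifiable code as hostile translates concretely into a promotion gate: nothing reaches production without a traceable, signed, human-approved origin. Here's a minimal sketch of that idea. The commit fields and the `agent:` author-naming convention are assumptions for illustration, not any platform's actual API:

```python
from dataclasses import dataclass

@dataclass
class Commit:
    sha: str
    author: str          # hypothetical convention: "alice" or "agent:codegen"
    signed: bool         # cryptographic signature verified out of band
    human_reviewed: bool # a human approval recorded for this change

def promotion_gate(commits: list[Commit]) -> list[str]:
    """Return the SHAs that must be blocked from promotion.

    Zero-trust stance: any unsigned commit, or any agent-authored commit
    without a human review, is treated as hostile regardless of content.
    """
    blocked = []
    for c in commits:
        if not c.signed:
            blocked.append(c.sha)
        elif c.author.startswith("agent:") and not c.human_reviewed:
            blocked.append(c.sha)
    return blocked
```

The point of the sketch is that the gate keys off provenance, not code content: a reviewer bot scoring the diff wouldn't satisfy it, because the property being enforced is traceability.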
Another question for 2026: how many companies have budgets that assume a fixed reliance on single providers? In just the last month, we've had major outages from Azure, AWS, GitHub, and Cloudflare. There's been a lot of discussion about multi-cloud strategies and other mitigation mechanisms, but personally, I find that to be putting the cart before the horse.
Even if I move everything I control outside of us-east-1 on AWS, I guarantee I'm relying on something that relies on us-east-1. Authentication, logging, monitoring - whatever hosted service I use - I cannot move everything outside of one region or provider unless I control it.
So organizations effectively have two options:
- Don't use other people's services and run everything yourself, or
- Use other people's services and accept that sometimes you're going to have downtime because you're not the primary thing people are worried about during those outages.
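The second option is far more tractable if degradation is designed in rather than discovered during an incident. A minimal sketch of that pattern, wrapping a hosted dependency so an outage degrades to a conservative local answer instead of failing outright (the feature-flag service here is hypothetical):

```python
def with_fallback(primary, fallback):
    """Wrap a call to someone else's service so an outage degrades gracefully.

    During a provider incident you are not their first priority, so plan for
    it: try the hosted dependency, and fall back to a local default on error.
    """
    def call(*args, **kwargs):
        try:
            return primary(*args, **kwargs)
        except Exception:
            return fallback(*args, **kwargs)
    return call

# Hypothetical example: a hosted feature-flag lookup with a baked-in default.
def hosted_flag_lookup(name):
    raise ConnectionError("provider outage")  # simulate us-east-1 being down

def local_flag_default(name):
    return False  # conservative: new features stay off when the provider is out

get_flag = with_fallback(hosted_flag_lookup, local_flag_default)
```

This doesn't eliminate the dependency, which is the whole point of the argument above; it just decides in advance what "accept some downtime" means for each call site.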
Fewer CVEs, Thinner Toolchains, and Faster Response
2025 saw a big rise in images advertised as CVE-free or "Zero CVE," along with other paradigms for technology delivery. Beyond the obvious question - how can anyone guarantee zero-CVE images or vulnerability-free software? - I think we'll continue to see this trend.
Some of the engineering behind it has been genuinely valuable: reducing footprint, changing compile options and flags, reducing reliance on external libraries, thinning out toolchains to exactly what's needed. Those are meaningful improvements.
But reducing CVEs is only one piece of the puzzle. How you respond when a vulnerability is found is another. So I think the conversation moves beyond reducing the attack surface to focusing on overall response ability and agility.
People are going to realize in 2026 that preventing all security problems is unrealistic. You can prevent many, and you can mitigate many, but you absolutely have to improve how you respond once they're discovered. That will be the next wave.
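Response agility is largely an inventory problem: when a CVE lands, the clock is spent figuring out which deployments actually contain the vulnerable package. A minimal sketch of that lookup, assuming you keep per-service SBOMs as simple package-version maps (the data shapes here are illustrative, not any SBOM standard's schema):

```python
def affected_services(sboms: dict[str, dict[str, str]],
                      package: str,
                      vulnerable_versions: set[str]) -> list[str]:
    """Given per-service SBOMs ({service: {package: version}}), return the
    services running a vulnerable version of the named package.

    Prevention shrinks this list; response speed is determined by how fast
    and how reliably you can produce it once an advisory is published.
    """
    return [service for service, deps in sboms.items()
            if deps.get(package) in vulnerable_versions]
```

The design choice worth noting: the query is trivial precisely because the inventory is maintained continuously. Teams that try to assemble this data during the incident are the ones whose response lags.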