Over the last three years, something has shifted in enterprise IT that is harder to name than to feel. It is not one technology, not one framework. It is the slow realization that large frontier models — the advanced reasoning systems from major AI labs — have stopped being an experiment and started being a structural force. They sit now in the middle of how we develop, how we operate, and how we think about the people who do this work.
I have spent thirty-five years in enterprise technology, from mainframes through cloud-native. And I have never seen a shift that touches so many layers simultaneously while being so quietly underestimated in its organizational impact.
From Writing Code to Owning Lifecycles
For most of my career, a developer was someone who wrote code. Good code, hopefully. But the primary measure was always output — features shipped, bugs fixed, lines committed. That model is dissolving.
When advanced coding systems from major AI labs can produce a working function in seconds, typing code loses its weight. What gains weight is everything around it: understanding what should be built, validating that what was generated fits the architecture, and owning the lifecycle from deployment through decommissioning.
I saw this when one of our teams used a state-of-the-art foundation model to refactor a payment processing module. The model produced clean code in minutes. But it took a senior engineer three hours to verify that the refactored logic preserved every edge case from twelve years of business rules. That three hours was the real work.
The Amplification Trap
There is a real danger that I observe in organizations moving fast with these tools. I call it the amplification trap. Because frontier models are so capable of producing plausible output — code, documentation, test cases, infrastructure definitions — there is a tendency to trust without adequate verification.
When I started my career, a junior developer who copied code from a manual without understanding it was considered negligent. Today, a team that accepts AI-generated Terraform configurations without reviewing them against their security baseline is doing the same thing, just faster.
The skill requirement has shifted. We need people who can read generated code critically, who understand architectural patterns deeply enough to spot elegant but wrong choices, and who have the discipline to say “let me verify” instead of “ship it.”
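That discipline can be partly automated. The sketch below illustrates the "verify before ship" habit as a tiny review gate that scans a generated infrastructure definition against a security baseline. The patterns and resource names are illustrative assumptions, not a real baseline or a real scanner; in practice teams use policy engines with far richer rule sets.

```python
import re

# Hypothetical security baseline: patterns that should never appear in
# generated infrastructure definitions. Purely illustrative.
FORBIDDEN_PATTERNS = {
    "public S3 bucket":      r'acl\s*=\s*"public-read"',
    "open ingress to world": r'cidr_blocks\s*=\s*\[\s*"0\.0\.0\.0/0"\s*\]',
    "unencrypted storage":   r'encrypted\s*=\s*false',
}

def review_generated_config(config_text: str) -> list[str]:
    """Return the list of baseline violations found in the config text."""
    violations = []
    for name, pattern in FORBIDDEN_PATTERNS.items():
        if re.search(pattern, config_text):
            violations.append(name)
    return violations

generated = '''
resource "aws_s3_bucket" "logs" {
  acl = "public-read"
}
'''
print(review_generated_config(generated))  # flags the public bucket
```

The point is not the specific rules but the posture: AI-generated output enters the pipeline as untrusted input, and "let me verify" becomes a cheap, repeatable step rather than an act of willpower.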
The Context Window Problem as Architecture Constraint
Here is something that few technology leaders discuss publicly but that will define the next wave of enterprise modernization: the context window is an architecture constraint, and a hard one.
Consider a typical legacy codebase: a million lines of an older language such as COBOL or Delphi, built over two decades. It works. It runs critical business processes. And it does not fit into a context window. No frontier model today can ingest that codebase holistically and reason about it as a whole. The model sees fragments — isolated modules without the web of dependencies that gives them meaning.
This led me to what I consider a genuinely new role in enterprise IT: the Context Engineer. This is the person who fragments, indexes, and prepares legacy code so that AI systems can consume it meaningfully. They decide which 40,000 lines of a 300,000-line module matter for a specific modernization task. They build the retrieval layer that feeds right context to the model at the right time.
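A minimal sketch of that work, under stated assumptions: split legacy source into fragments, index each fragment by the identifiers it contains, and retrieve only the fragments that overlap a given modernization task. Real context engineering uses parsers, dependency graphs, and embedding-based retrieval; the function names and the keyword-overlap heuristic here are illustrative only.

```python
import re
from collections import defaultdict

def fragment(source: str, max_lines: int = 50) -> list[str]:
    """Naively split a source file into fixed-size fragments."""
    lines = source.splitlines()
    return ["\n".join(lines[i:i + max_lines])
            for i in range(0, len(lines), max_lines)]

def build_index(fragments: list[str]) -> dict[str, set[int]]:
    """Map each identifier (lowercased) to the fragments mentioning it."""
    index = defaultdict(set)
    for i, frag in enumerate(fragments):
        for ident in re.findall(r"[A-Za-z_]\w{2,}", frag):
            index[ident.lower()].add(i)
    return index

def retrieve(task: str, fragments: list[str],
             index: dict[str, set[int]]) -> list[str]:
    """Pick the fragments whose identifiers overlap the task description."""
    hits = set()
    for word in re.findall(r"[A-Za-z_]\w{2,}", task.lower()):
        hits |= index.get(word, set())
    return [fragments[i] for i in sorted(hits)]

# Feed only the relevant slice of a legacy module to the model:
src = "procedure CalcInterest;\nbegin end;\nprocedure PrintReport;\nbegin end;"
frags = fragment(src, max_lines=2)
relevant = retrieve("modernize CalcInterest", frags, build_index(frags))
```

Even this toy version shows the leverage: the retrieval layer, not the model, decides which 40,000 of 300,000 lines reach the context window, which is exactly why the quality of that layer sets the modernization velocity.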
Your modernization speed is no longer limited primarily by AI capability. It is limited by how well you have organized your legacy knowledge for AI consumption. The Context Engineer determines the modernization velocity. I have not seen this role in any job description yet, but it will be there within two years.
Fewer Applications, More Agents: The Enterprise Shift
Something equally fundamental is happening on the business application side. For decades, enterprise IT meant buying or building applications — ERP systems, CRM platforms, reporting tools — each a monolith of screens and workflows that humans navigated manually.
What I see emerging is different. Advanced AI systems enable a shift from applications to agents and workflows. Instead of a procurement officer navigating seven screens to approve a purchase order, an AI agent reviews the request against policy, checks budget, flags anomalies, and presents only the decision point. The human still decides. The cognitive overhead is gone.
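That procurement flow can be sketched in a few lines. This is a hedged illustration of the shift, not a real procurement system: the thresholds, vendor list, and field names are invented, and a production agent would call policy and ERP services rather than hard-coded dictionaries. What matters is the shape of the output: the routine checks run automatically, and only the decision points reach the human.

```python
from dataclasses import dataclass

@dataclass
class PurchaseRequest:
    vendor: str
    amount: float
    department: str

# Illustrative policy data; a real agent would query live systems.
POLICY_LIMIT = 10_000.0
BUDGETS = {"engineering": 50_000.0, "marketing": 20_000.0}
KNOWN_VENDORS = {"acme gmbh", "globex ag"}

def review(req: PurchaseRequest, spent_so_far: float) -> dict:
    """Run the routine checks; surface only what the human must decide."""
    flags = []
    if req.amount > POLICY_LIMIT:
        flags.append("exceeds single-order policy limit")
    if spent_so_far + req.amount > BUDGETS.get(req.department, 0.0):
        flags.append("would exceed department budget")
    if req.vendor.lower() not in KNOWN_VENDORS:
        flags.append("unknown vendor")
    return {
        "auto_approvable": not flags,
        "decision_points": flags,  # the only thing shown to the human
    }
```

The seven screens collapse into one question: here are the anomalies, approve or reject? The human still decides; the navigation and data gathering are gone.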
This means less management overhead. Not because managers are replaced, but because information preparation — collecting data, formatting reports, chasing updates — is increasingly handled by intelligent workflows operating on massive amounts of data. What remains is the core: judgment, decision, accountability.
I saw this in operations, where we moved three reporting processes from manual Excel assembly to AI-driven data pipelines. The time saving was significant. But the real gain was that our team leads could focus on interpreting data instead of compiling it.
The Psychological Dimension
What concerns me most is not the technology. It will get better. What concerns me is the psychological impact on people who have built their professional identity around skills that are visibly changing.
A senior Delphi developer with twenty years of experience watches a frontier model generate code in a language they do not fully know. A system administrator who spent years mastering infrastructure sees an AI system propose a complete IaC deployment. These moments touch professional identity.
The honest answer is this: the experience of those professionals is more valuable now, not less, but in a different way. Their deep understanding of how systems behave in production, of what breaks at scale, of where business logic hides — this is exactly what models cannot learn from training data alone. The challenge is helping people see that shift as an elevation, not a loss.
So What Changes for People?
Everything and nothing. The tools change. The speed changes. But the fundamental truth remains: someone has to understand what the business needs, someone has to ensure systems are reliable and secure, and someone has to be accountable when things go wrong.
What changes is the shape of the skills. Validation over generation. Architecture thinking over implementation speed. Context engineering over raw coding. Critical reasoning over mechanical execution. And the willingness to learn continuously in an environment where the ground shifts every few months.
After thirty-five years, I still learn something new every week. The learning curve is steeper, the tools more powerful, and the margin for complacency smaller than ever. But the people — the developers, the platform engineers, the security specialists — they remain at the center. Not because it is comforting to say. But because it is true.

It depends, and time will tell; there is no single true answer. Various studies suggest the productivity gains are real but have been consumed by the growing complexity of the digital world itself. Early digital assistants — electronic mail, the Internet, electronic calendars, among others — did make individuals more productive. But the recent trend of pushing more and more decisions onto non-experts has reduced productivity overall. In the late 1990s, an admin organized many parts of the business; today, more and more highly paid employees do this themselves. There are plenty of supporting tools, of course, but experts handle the right processes far better than individuals who perform them only occasionally.