The Specification Is the Product Now — And Your Developers Better Know It


How we ended up here

Over the last three years, something has shifted in enterprise IT that I did not fully anticipate, even after thirty-five years in this industry. The tools we use to build software have not just improved — they have fundamentally changed what it means to be a developer, an architect, or a platform engineer. Large frontier models from major AI laboratories have moved from impressive demonstrations to daily production instruments. And with that shift, the center of gravity in software development is moving from code to specification.

From Writing Code to Owning the Lifecycle

When I started in this business, a good developer was someone who could write tight, efficient code. Over the decades, the role expanded — version control, testing, deployment, monitoring. DevOps made developers responsible for the full lifecycle. CI/CD pipelines, Infrastructure as Code, observability — all of this became part of what a software professional must understand.

Now another expansion is happening. Advanced coding systems from major AI labs can generate, refactor, test, and document code at a speed and consistency that no human can match on pure throughput. I have seen this in my own teams. A task that took a senior developer two days — writing integration tests for a legacy API — was drafted in twenty minutes by a reasoning system, reviewed and adjusted by the developer in another hour.

The developer did not become unnecessary. The developer became the person who decided what the tests should validate, how edge cases map to business logic, and whether the generated output was actually correct. The skill shifted from typing to thinking.

Specification-Driven Development: The Real Asset Is Not the Code

This is where something genuinely new is emerging. If state-of-the-art foundation models can produce working code from well-written instructions, then the quality of those instructions becomes the competitive advantage. Not the code. The specification.

I have started calling this Specification-Driven Development, and I am not alone. Methodologies like BMAD and OpenSpec are formalizing what many of us felt intuitively: the person who writes the best specification wins. Not the person who types the fastest. Not the person who memorizes the most framework APIs.

In my own practice, I have seen projects where a precise, well-structured specification document — describing behavior, constraints, interfaces, error handling, and security boundaries — produced better results through AI-assisted generation than a team of four developers working from a vague user story. The specification became the actual product asset. The code became a derivative. Reproducible, replaceable, regeneratable.

This is a profound change in how we must think about software ownership. Configuration management, version control, review processes — all of these must now apply to specifications with the same rigor we applied to source code for decades.
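As a sketch of what that rigor could look like in practice, a specification can be treated as a structured, diffable artifact rather than free-form prose. The schema and field names below are hypothetical, not taken from any specific methodology; the point is only that a spec with explicit fields can be reviewed and gap-checked the way code is.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Specification:
    """A hypothetical, version-controllable specification record."""
    feature: str
    behavior: str                                      # what the system must do
    constraints: list = field(default_factory=list)    # non-negotiable limits
    error_handling: str = "unspecified"
    security_boundary: str = "unspecified"

    def review_gaps(self) -> list:
        """Flag sections a reviewer must still sign off on,
        mirroring a code-review checklist."""
        gaps = []
        if not self.constraints:
            gaps.append("constraints")
        if self.error_handling == "unspecified":
            gaps.append("error_handling")
        if self.security_boundary == "unspecified":
            gaps.append("security_boundary")
        return gaps

spec = Specification(
    feature="password reset",
    behavior="issue a single-use token valid for 15 minutes",
    constraints=["token must be unguessable", "rate-limit to 3 requests/hour"],
)
print(spec.review_gaps())  # sections a human reviewer still owes sign-off on
```

Because the record is frozen and structured, two versions of it diff cleanly in version control, which is exactly the property source code has had all along.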

The Risk of Letting the Machine Think for You

I would be dishonest if I did not address the danger. Over-reliance on new generation reasoning systems is real and I have witnessed it. Junior developers accepting generated code without understanding what it does. Architects letting AI propose system boundaries without questioning whether the decomposition fits the organizational reality. Security engineers trusting AI-generated policies without tracing them back to actual threat models.

AI amplifies capability. It does not replace responsibility. Every generated artifact needs a human who understands why it exists, what it should do, and what happens when it fails. The moment we lose that, we do not have automation — we have negligence with extra steps.

The skill requirements are shifting hard toward validation, architectural reasoning, and critical thinking. These are not new skills. They are the skills we always said mattered most but rarely prioritized in hiring because we were too busy looking for people who knew specific frameworks.

AI Agents in the Enterprise: Fewer Applications, More Workflows

Beyond development, the transformation is reshaping how enterprises operate at every level. In my current environment, I see a clear pattern: traditional applications are being decomposed into AI-driven agent workflows. Instead of a monolithic reporting tool, we now have intelligent agents that collect data across systems, apply contextual reasoning, and surface decisions — not dashboards.

The result is less management overhead and more focus on the actual core of the task. A procurement specialist no longer spends hours consolidating supplier data from three systems. An agent does that. The specialist spends time on negotiation strategy, relationship assessment, risk evaluation — the things that require human judgment and experience.
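The consolidation step itself is mechanical, which is exactly why it is automatable. As a minimal sketch, with system names and record fields invented for illustration, the data-gathering stage of such an agent reduces to a merge keyed on a shared identifier:

```python
def consolidate_suppliers(*sources):
    """Merge supplier records from several systems, keyed by supplier ID.
    Later sources fill in fields the earlier ones were missing."""
    merged = {}
    for system in sources:
        for record in system:
            entry = merged.setdefault(record["id"], {})
            for key, value in record.items():
                entry.setdefault(key, value)  # first value seen for a field wins
    return merged

# Records from three hypothetical systems (ERP, contract DB, risk tool)
erp       = [{"id": "S1", "name": "Acme", "open_orders": 4}]
contracts = [{"id": "S1", "contract_ends": "2026-03-31"}]
risk      = [{"id": "S1", "risk_score": 0.2}, {"id": "S2", "risk_score": 0.7}]

view = consolidate_suppliers(erp, contracts, risk)
print(view["S1"])
```

The hours the specialist used to spend were spent on exactly this kind of joining and reconciling; the judgment calls that follow it are what remain human work.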

Better decisions through data is not a new promise. What is new is that the massive amount of data enterprises sit on can now actually be processed contextually by frontier models, not just aggregated into charts that nobody reads. The data becomes actionable because the reasoning layer between raw information and human decision has become genuinely capable.

I have seen this reduce decision latency in operational contexts by days. Not because people were slow before, but because the data preparation work that preceded every decision was slow. That bottleneck is dissolving.

The Psychological Weight of Constant Adaptation

What I do not see discussed enough is the pressure this puts on people. Every six months, the capabilities of these systems leap forward. What was impossible last year is routine today. For a forty-year-old platform engineer who spent a decade mastering Kubernetes and Terraform, being told that the next generation of infrastructure management might look entirely different is not exciting. It is exhausting.

I feel this myself. After thirty-five years, I still spend evenings reading, testing, adjusting my mental models. Not because I enjoy the pressure, but because the alternative — becoming irrelevant — is worse. And I say this as someone who genuinely finds this technology fascinating. For colleagues who are less enthusiastic about constant change, the psychological burden is real.

Organizations that do not acknowledge this — that simply announce the next AI initiative without investing in their people’s learning capacity and emotional resilience — will lose their best engineers. Not to competitors. To burnout.

So What Changes for People?

The honest answer is: almost everything about the daily work, but nothing about what makes a good professional. Critical thinking, accountability, the ability to translate business needs into precise technical requirements — these have always been the core. They were just hidden behind the noise of implementation details.

What changes is that the implementation layer is increasingly handled by advanced AI systems. The human role moves upstream — to specification, validation, architectural decision-making, and ethical judgment. Developers become specification authors and quality gatekeepers. Platform engineers become orchestrators of AI-augmented infrastructure. Security engineers become the last line of reasoning before automated systems act on sensitive boundaries.

The code is becoming a byproduct. The specification is becoming the asset. And the people who understand this shift — who invest in learning how to instruct, constrain, and validate machine-generated output — will define the next era of enterprise technology. Not the machines alone. Never the machines alone.

Git Was Never Built for Machines – And Yet, It Became Their Library


When Linus Torvalds created Git in 2005, he solved a very human problem: how do distributed teams of engineers coordinate changes to a shared codebase without stepping on each other’s toes? The design was elegant, the model brilliant – and entirely anthropocentric. Every concept in Git, from commits to branches to pull requests, was designed to reflect human reasoning about software change.

Nobody thought about machines. Nobody had to.

GitHub today: 420 million repositories – the world’s largest unintentional AI training dataset, built by humans for humans, read by machines for everything.

The Unintended Consequence of 20 Years of Open Source

Fast forward to today. GitHub hosts over 420 million repositories. The platform has become, without anyone planning it that way, the single largest structured dataset of human reasoning about software in existence. Not raw text, not unstructured web crawls – but versioned, annotated, context-rich artifacts of how engineers think, communicate, decide, and build.

When OpenAI trained Codex, when DeepMind built AlphaCode, when Anthropic trained Claude – they all fed on this corpus. The commit messages, the PR discussions, the inline comments, the issue threads, the README files. Git history is, quite literally, the autobiography of software development, and AI learned to read it before most organizations realized what they were giving away.

The irony is profound. A version control system designed for human collaboration became the substrate for training non-human intelligence. Git was the library. GitHub was the librarian. And the AI models were the quiet students who read everything, remembered everything, and said nothing.

What Does a Machine Actually Learn from a Repository?

This is where it gets interesting – and where most CTO conversations I have still miss the point. The common assumption is that AI learns “code” from GitHub. That is technically true but deeply incomplete. What AI systems actually absorb is something far richer: the relationship between intent and implementation.

A commit message that reads “fix edge case in authentication flow when user has expired token” paired with a fifteen-line diff teaches a model not just syntax, but causality. The issue thread that preceded it, the review comments that shaped it, the test that was added afterward – together, they represent a chain of engineering reasoning that no textbook ever captured at scale.

This is fundamentally different from learning from documentation. Documentation describes what code does. Repositories reveal how engineers think about what code should do, why it changes, and under what conditions decisions get revisited. The difference is the difference between reading a law and watching a courtroom.

The Next Evolution: Repos as AI-Native Artifacts

Here is where we stand at an inflection point that most engineering organizations have not yet grasped. If repositories are already the de facto training substrate for AI, the logical next step is to design repositories with that fact in mind. Not reactively, not accidentally – but intentionally.

In the coming years, we will see repositories evolve from pure version control archives into structured knowledge bases that explicitly address AI agents as consumers. This means several concrete developments that are already beginning to emerge in practice.

The first is the emergence of a dedicated specification layer. Today, most codebases contain implicit intent buried in comments, naming conventions, and tribal knowledge that lives only in the heads of the engineers who wrote it. Tomorrow’s repositories will carry explicit machine-readable specifications – linked directly from the code they describe – that articulate not just what a module does, but what it is trying to achieve and what constraints govern its evolution. Formats like OpenSpec and similar frameworks are early experiments in this direction.
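What "linked directly from the code" will look like is still an open design question. As one hypothetical shape, and nothing here is actual OpenSpec syntax, a module might carry a machine-readable spec block next to the code it governs, which both tools and agents can check proposed changes against:

```python
# A hypothetical machine-readable spec block, embedded beside the code
# it describes. All field names are invented for illustration.
SPEC = {
    "intent": "retry transient HTTP failures without overloading the upstream",
    "guarantees": ["at most 3 attempts", "exponential backoff"],
    "constraints": ["never retry non-idempotent requests"],
    "evolution": "backoff policy may change; the attempt cap may not",
}

def violates_spec(attempts: int, method: str) -> bool:
    """Check a proposed call pattern against the declared constraints."""
    if attempts > 3:
        return True  # breaks the "at most 3 attempts" guarantee
    if method.upper() not in {"GET", "HEAD", "PUT", "DELETE"} and attempts > 1:
        return True  # retrying a non-idempotent request
    return False

print(violates_spec(2, "GET"), violates_spec(2, "POST"))
```

The spec states not just behavior but which parts of it are allowed to evolve, which is precisely the information a generating agent needs before it touches the module.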

The second shift is what might be called the Intent Layer. Beyond the specification of individual components, future repositories will carry structured metadata that describes the reasoning behind architectural decisions. Why was this approach chosen over the alternatives? What trade-offs were consciously accepted? What assumptions does this design rely on? This is the kind of context that AI agents need to reason correctly about a codebase – not just to generate code that compiles, but to generate code that fits.
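A sketch of such an intent record follows; the schema is hypothetical, though architecture decision records capture similar fields today as prose. The value of making it structured is that an agent can query it before proposing a change:

```python
# A hypothetical structured decision record that an agent could query
# before touching related code. Field names are invented for illustration.
decision = {
    "id": "ADR-014",
    "choice": "event sourcing for the order service",
    "alternatives_rejected": ["CRUD with audit table", "change-data-capture"],
    "tradeoffs_accepted": ["higher storage cost", "eventual consistency"],
    "assumptions": ["order volume stays below 10k events/sec"],
}

def relevant_decisions(records, keyword):
    """Return decision IDs whose assumptions mention a keyword,
    the lookup an agent would run before proposing a change."""
    return [r["id"] for r in records
            if any(keyword in a for a in r["assumptions"])]

print(relevant_decisions([decision], "order volume"))
```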

The third development is the rise of agent-aware commit protocols. If AI agents are both reading and writing to repositories – which they already are in many of our development pipelines today – the commit structure itself needs to evolve. Automated commits should carry provenance metadata: which model generated this, from which specification, against which test harness, with what confidence. Human commits will increasingly need corresponding context flags that distinguish deliberate design choices from pragmatic workarounds.
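Git already has a mechanism that could carry this metadata: commit trailers. As a sketch, with trailer keys invented rather than drawn from any established convention, here is what an automated commit's message might contain, plus a minimal parser for recovering the provenance in a pipeline:

```python
# A hypothetical AI-generated commit message. The trailer keys
# (Generated-By, From-Spec, ...) are invented for illustration.
commit_message = """\
Add retry logic to payment client

Generated-By: hypothetical-model-v2
From-Spec: specs/payments/retry.md
Test-Harness: tests/payments/test_retry.py
Confidence: 0.85
"""

def parse_trailers(message: str) -> dict:
    """Extract Key: Value trailer lines from a commit message,
    the way provenance metadata could be recovered downstream."""
    trailers = {}
    for line in message.splitlines():
        if ": " in line:
            key, _, value = line.partition(": ")
            if " " not in key:  # trailer keys carry no spaces
                trailers[key] = value
    return trailers

meta = parse_trailers(commit_message)
print(meta["Generated-By"], meta["Confidence"])
```

Trailers have the advantage that existing Git tooling already preserves them, so provenance would survive rebases and cherry-picks without any change to the object model.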

The Strategic Implication Nobody Is Talking About

There is a competitive dimension here that deserves more executive attention than it currently receives. Organizations that deliberately enrich their repositories with machine-readable intent and specification data are not just improving their own AI-assisted development workflows. They are producing higher-quality training data for the next generation of models. If open-source development continues to feed the pre-training pipelines of frontier AI systems, then the quality of the reasoning encoded in public repositories will shape the quality of the AI that the entire industry relies on.

This creates an asymmetry: companies that treat their internal codebases as structured knowledge assets – not just as source code archives – will build internal AI capabilities that reflect higher-quality reasoning. The gap between organizations that have thought seriously about repository architecture as an AI substrate and those that have not will become a measurable capability difference within this decade.

Git was never built for machines. But machines have made themselves at home, and now the question is whether we redesign the house accordingly.

The answer, for any organization serious about AI-driven development, is yes. And the time to start is now – not when the next model generation arrives, but before it does, while there is still time to shape what it learns from you.


This post is part of a series on the structural shifts in software development brought on by AI integration. Previous entries covered specification-driven development and multi-agent orchestration in enterprise codebases.

Transforming the Enterprise


In recent years, the change in many industries has come from moving a traditional business approach, developed over decades, to a software-defined version of it. There are compelling reasons why this has happened.

(c) by presentation load


When traditional industry develops products, it often takes many years, sometimes decades, before they reach the market. Take cars: every model currently in mass production was designed and engineered within the last five to ten years, and updates typically flow into the next generation of the product cycle.

This is unavoidable for mechanical products, but software changes can be adopted much faster. A good example is Tesla Motors, which changed the industry by building what is essentially a computer in the form of a car. Software is updated overnight, over the air, and new functionality becomes available to the driver and passengers. But the product is not the only thing that has changed; selling this kind of car is different, too. While traditional car dealers face the exercise of training all their sales personnel on new functions and features, new leasing models, and service capabilities so they can explain them to customers, modern companies move the sales structure to the internet, with a model that is easy to update and adjust. As a result, the options a company can offer depend more on its own flexibility and creativity than on the salesforce and its adaptability. The new model that traditional enterprises are stumbling into deeply demands the adoption of agile, innovative behavior and processes to capture the demand and open new segments of business.

Why is this happening?

Because it is possible. With the arrival of the cloud and the business models it supports, startups have shown that it is easy to build a business without a large investment in infrastructure or a data center. Even more: where in the past you had to ask investors for a large amount of money up front to build that data center, you can now pay as you build your business. This lets capital flow into the business model rather than into the IT landscape. But that is only one aspect. With the commoditization of IT resources and container-based IT, it is far more cost-efficient and reliable to build enterprise-class IT with a minimum of investment. There is, however, a trap many companies will fall into: standardization. There is currently a belief that a single cloud standard, driven by the cloud providers, can be the right one, but history has shown that this leads to higher cost and will in time be replaced by an industry association. We already see this on the horizon with OpenStack, which is still far from enterprise-ready. The key will also lie more in the PaaS layer, with open software like Cloud Foundry and Docker, which opens a broader ecosystem for applications and operations.

Innovation Hand, illustration by Dinis Guarda


So how do we enable the “New” Enterprise model?

The new model will be driven by innovation in software and applications. In my daily conversations with large companies and customers, many of them are thinking about how to bring these two aspects into their business process modelling. Often the effort is driven out of the IT department, but the link to the business and its drivers is missing or simply never established. I see large enterprises and global companies investing in application development through the lines of business, building up a second pool of IT knowledge that is more enriched with the business than with agile development. Not only does this often lead to a wrong assessment of the best development environment; it also creates a new class of information islands. In the long run this will not be the right innovative approach for many enterprises, even if it lets them adapt and compete much better with the new kids on the block, the startups. My advice to CIOs and cloud architects is always to engage actively with the business departments and help them change to a more agile and innovative model, something we call continuous innovation, while getting in return the IT expertise to make the right strategic decisions for the company.

IT providers, like EMC and the Federation, enable this process and guide companies through it. Over various iterations, EMC can analyze the current status of an IT department and show the path from a 2nd-platform concept to the modern web-scale architecture that the 3rd-platform concept demands. Since this is not a “shoot once and forget” exercise, in IT terms the “New” model means constant change. Where the past was about managing resources and striving for more synergy and “innovation” through new hardware and software, in the next decade IT departments will act more as brokers of public and private clouds, perhaps also for other companies as an additional service.

How to proceed?

It is not simple and has to happen step by step, since the current change of the business model in many verticals is not only driven by development and operations aspects; it is also deeply influenced by big data concepts, which often lead to an Internet of Things discussion. Silos and the public cloud may be one answer, but in many cases I see the key to success in a joint effort between the business units and the people responsible for IT in the enterprise.

Was Spock the first Data Analyst?


Over the last couple of years, a discussion around the information society has started. As more people enter data about their lives, and about our planet, it was obvious that businesses would start to leverage this trend and add more data; Google, for example, scanned the Library of Congress and mapped the planet, including the oceans. Nowadays even this is topped: data combinations and new streams of information are on offer, some free, some for purchase.

In the Star Trek series, the chief science officer, Spock, has the task of gathering as many of the data streams delivered on the starship as possible, combining them with the knowledge of a huge computer library, and drawing conclusions from them, in real time. In the series, logic and the ability to draw fast conclusions, so that the captain could make the relevant decisions, were key to survival.

Vulcan (Star Trek) (Photo credit: Wikipedia)

In modern business, the survival of companies depends on fast, exact, agile conclusions. Modern technologies like Chorus, a product of Greenplum, enable businesses of all sizes to gain insight into markets, customers, competition, and more. Where in the past this could only be done over long time frames, today’s businesses move toward continuous optimization and adaptation of their go-to-market approach and their portfolios.

To enable an agile business process, company leaders have to gather as many streams of data around their business as possible, combine them with inside knowledge, and make the tough decisions.

The specialists who turn this data into the information from which decisions are drawn are called data analysts, while identifying the relevant data streams within the white noise is the job of a data scientist.

Of course, today the time between analysis and decision-making is not quite as short as it often was on the Enterprise, but with the trend toward more and faster data generation and access, more agile businesses grow as startups and compete with the established ones.

Looking at trends in the IT departments of enterprises of all kinds, the desire for more agility leads to a cloud approach. But that is only the first step; the end state is to stand in the middle of the data universe and navigate the enterprise through the business solar system. The input will be overwhelming, new processes for sensor input will need to be developed, and the crew will have to be aligned to the new command structure.

The engineering section, which we would call infrastructure, has to offer flexible and agile systems that answer requests fast and correctly. One key to success is automation, orchestration, and standardization, not dictation and a silo approach. Scotty would most probably fit into a data scientist role.

Star Trek: Phase II (Photo credit: Wikipedia)

CIOs will become more like captains, understanding the challenges of this new space and aligning the crew and the rest of the ship to the needs of the next decade. If cloud computing is the engine for agility, big data is the survival kit for the enterprise of the future. So Spock and Scotty are the two main assets of the modern enterprise, and James T. Kirk always drew the right decisions from them.
