Since the early days of computing, software development has followed a very consistent pattern: every decade or two, a new paradigm emerges that raises the abstraction level by one significant step. We moved from punch cards to assembler, from assembler to C, from C to object-oriented languages like Java and C++, and then from there to higher-level scripting and systems languages like Python and Rust. Each of these transitions shared the same fundamental characteristic — they allowed developers to think less about how the machine does something, and more about what needs to be done.
Does AI break this pattern, or does it continue it?
In my view, it continues it — but at a scale and speed we have not seen before.
When C appeared in the early 1970s, it was a revolution. Programmers could abstract over registers and memory addresses and work with structured control flow. With C++ in the 1980s and Java in the 1990s, the next step arrived: objects, encapsulation, inheritance. The programmer could now model the world in concepts rather than instructions. A Car object had methods and state. The machine details were pushed even further down. Python and its contemporaries took this further, removing manual memory management entirely and allowing rapid prototyping: work that would have taken weeks in C could be done in hours.
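The Car object mentioned above can be sketched in a few lines (a hypothetical illustration in Python, since the article names no specific implementation; the class, fields, and fuel figures are invented for the example):

```python
class Car:
    """A car modeled as an object: state lives inside,
    behavior is exposed as methods, machine details are hidden."""

    def __init__(self, fuel_liters: float):
        self.fuel_liters = fuel_liters   # encapsulated state
        self.odometer_km = 0.0

    def drive(self, distance_km: float, liters_per_km: float = 0.07) -> None:
        """Callers express intent ('drive this far'), not instructions."""
        needed = distance_km * liters_per_km
        if needed > self.fuel_liters:
            raise ValueError("not enough fuel")
        self.fuel_liters -= needed
        self.odometer_km += distance_km

car = Car(fuel_liters=40.0)
car.drive(100.0)
print(car.odometer_km)            # 100.0
print(round(car.fuel_liters, 2))  # 33.0
```

The point of the sketch is the abstraction boundary: the caller never touches memory addresses or even the fuel arithmetic directly, only the concept of driving.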
Each of these epochs shared one common denominator — the developer still wrote every line, still translated intention into instruction, just at a higher level.
This is exactly the step AI is taking now.
The translation from intention to implementation was always the developer’s core job. You had an idea, you had a requirement, and your skill was to bridge that gap in code. LLMs are now beginning to perform this translation automatically. Not perfectly, not without oversight, but in a direction that is unmistakable.
We are moving from imperative thinking — tell the machine step by step what to do — to intentional thinking — tell the system what outcome you want. The shift is profound. It is not about writing less code, it is about changing who writes it and at what level of abstraction humans need to operate.
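The two modes of thinking can be made concrete with a small contrast (an illustrative Python sketch; the task and the data are invented for the example):

```python
# Imperative thinking: tell the machine, step by step, what to do.
numbers = [3, 1, 4, 1, 5, 9, 2, 6]
evens = []
for n in numbers:
    if n % 2 == 0:
        evens.append(n)
evens.sort()

# Intentional thinking: state the outcome you want and let the
# runtime decide how to produce it.
evens_declarative = sorted(n for n in numbers if n % 2 == 0)

print(evens)              # [2, 4, 6]
print(evens_declarative)  # [2, 4, 6]
```

An LLM-driven workflow pushes this one level further still: the "declaration" becomes a natural-language requirement, and the generated code is what a human then validates.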
Is this the end of the developer?
I would argue no, but the role will shift dramatically, just as the introduction of C did not eliminate hardware engineers; it changed what skills were needed and where the value was created. The developers of the next decade will be architects of intent, not writers of loops. The skill set moves from syntax mastery and algorithmic thinking towards domain expertise, system design, and the ability to validate and guide AI-generated output.
From my personal experience leading large engineering teams, I already see this shift in practice. The question is no longer “can you write the code?” but “do you understand the system well enough to judge the code that was generated?” Quality, correctness, security and maintainability remain a human responsibility. The generation part is moving to the machine.
Where are we today?
We are probably in the MS-DOS phase of this transition. The tools are real, the output is impressive, but the workflow, the standards, the guardrails and the enterprise-grade reliability are still being developed. Companies that understand the abstraction shift happening now will be the ones architecting the platforms of the next decade. The others will be the ones migrating legacy prompt-less codebases in 2035.
The lesson from history is clear: abstraction always wins. The only question is how fast you adapt.

