Git Was Never Built for Machines – And Yet, It Became Their Library


When Linus Torvalds created Git in 2005, he solved a very human problem: how do distributed teams of engineers coordinate changes to a shared codebase without stepping on each other's toes? The design was elegant, the model brilliant – and entirely anthropocentric. Every concept in Git, from commits to branches to pull requests, was designed to reflect human reasoning about software change.

Nobody thought about machines. Nobody had to.

GitHub today: 420 million repositories – the world’s largest unintentional AI training dataset, built by humans for humans, read by machines for everything.

The Unintended Consequence of 20 Years of Open Source

Fast forward to today. GitHub hosts over 420 million repositories. The platform has become, without anyone planning it that way, the single largest structured dataset of human reasoning about software in existence. Not raw text, not unstructured web crawls – but versioned, annotated, context-rich artifacts of how engineers think, communicate, decide, and build.

When OpenAI trained Codex, when DeepMind built AlphaCode, when Anthropic trained Claude – they all fed on this corpus. The commit messages, the PR discussions, the inline comments, the issue threads, the README files. Git history is, quite literally, the autobiography of software development, and AI learned to read it before most organizations realized what they were giving away.

The irony is profound. A version control system designed for human collaboration became the substrate for training non-human intelligence. Git was the library. GitHub was the librarian. And the AI models were the quiet students who read everything, remembered everything, and said nothing.

What Does a Machine Actually Learn from a Repository?

This is where it gets interesting – and where most CTO conversations I have still miss the point. The common assumption is that AI learns “code” from GitHub. That is technically true but deeply incomplete. What AI systems actually absorb is something far richer: the relationship between intent and implementation.

A commit message that reads “fix edge case in authentication flow when user has expired token” paired with a fifteen-line diff teaches a model not just syntax, but causality. The issue thread that preceded it, the review comments that shaped it, the test that was added afterward – together, they represent a chain of engineering reasoning that no textbook ever captured at scale.

This is fundamentally different from learning from documentation. Documentation describes what code does. Repositories reveal how engineers think about what code should do, why it changes, and under what conditions decisions get revisited. The difference is the difference between reading a law and watching a courtroom.

The Next Evolution: Repos as AI-Native Artifacts

Here is where we stand at an inflection point that most engineering organizations have not yet grasped. If repositories are already the de facto training substrate for AI, the logical next step is to design repositories with that fact in mind. Not reactively, not accidentally – but intentionally.

In the coming years, we will see repositories evolve from pure version control archives into structured knowledge bases that explicitly address AI agents as consumers. This means several concrete developments that are already beginning to emerge in practice.

The first is the emergence of a dedicated specification layer. Today, most codebases contain implicit intent buried in comments, naming conventions, and tribal knowledge that lives only in the heads of the engineers who wrote it. Tomorrow’s repositories will carry explicit machine-readable specifications – linked directly from the code they describe – that articulate not just what a module does, but what it is trying to achieve and what constraints govern its evolution. Formats like OpenSpec and similar frameworks are early experiments in this direction.
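It is too early to say which format wins here, but the shape of such a specification layer is easy to sketch. A minimal illustration in Python follows; the module path, the field names, and the validate_spec helper are hypothetical examples of my own, not part of OpenSpec or any existing standard:

```python
import json

# A hypothetical machine-readable spec for one module, intended to be
# linked directly from the code it describes.
auth_spec = {
    "module": "auth/token_refresh.py",
    "purpose": "Refresh access tokens before expiry without interrupting sessions",
    "constraints": [
        "Never log raw tokens",
        "Refresh must complete within the 30s token grace period",
    ],
    "assumptions": ["Clock skew between services stays under 5s"],
}

# The minimum a spec must carry: what it describes, what it is trying
# to achieve, and what constraints govern its evolution.
REQUIRED_FIELDS = {"module", "purpose", "constraints"}

def validate_spec(spec: dict) -> list[str]:
    """Return the sorted list of missing required fields (empty means valid)."""
    return sorted(REQUIRED_FIELDS - spec.keys())

print(validate_spec(auth_spec))                  # []
print(json.dumps(auth_spec, indent=2).count('"'))  # serializable for machine consumers
```

The point of the sketch is not the schema itself but the contract: a file an AI agent can parse, validate, and trace back to the code, rather than intent buried in comments.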

The second shift is what might be called the Intent Layer. Beyond the specification of individual components, future repositories will carry structured metadata that describes the reasoning behind architectural decisions. Why was this approach chosen over the alternatives? What trade-offs were consciously accepted? What assumptions does this design rely on? This is the kind of context that AI agents need to reason correctly about a codebase – not just to generate code that compiles, but to generate code that fits.

The third development is the rise of agent-aware commit protocols. If AI agents are both reading and writing to repositories – which they already are in many of our development pipelines today – the commit structure itself needs to evolve. Automated commits should carry provenance metadata: which model generated this, from which specification, against which test harness, with what confidence. Human commits will increasingly need corresponding context flags that distinguish deliberate design choices from pragmatic workarounds.
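Git already supports free-form trailers in the final paragraph of a commit message, so one low-tech way to carry provenance metadata is to encode it there. A sketch in Python; the trailer names (Generated-by, Spec, Test-harness, Confidence) are my own illustration, not an established convention:

```python
# Compose and parse provenance trailers in a Git commit message.
# Git trailers are "Key: value" lines in the last paragraph of the message.

def compose_commit_message(summary: str, body: str, provenance: dict) -> str:
    trailers = "\n".join(f"{key}: {value}" for key, value in provenance.items())
    return f"{summary}\n\n{body}\n\n{trailers}"

def parse_trailers(message: str) -> dict:
    # The trailer block is the final paragraph of the message.
    last_paragraph = message.strip().split("\n\n")[-1]
    trailers = {}
    for line in last_paragraph.splitlines():
        key, _, value = line.partition(": ")
        if key and value:
            trailers[key] = value
    return trailers

msg = compose_commit_message(
    "Fix expired-token edge case in auth flow",
    "Refresh now checks token expiry before reuse.",
    {
        "Generated-by": "model-x-2025-01",        # hypothetical model identifier
        "Spec": "specs/auth/token-refresh.md",    # which specification drove this change
        "Test-harness": "ci/auth-suite",          # what it was validated against
        "Confidence": "0.87",
    },
)
print(parse_trailers(msg)["Confidence"])  # 0.87
```

Because trailers survive rebases and are readable by `git log` tooling, this kind of provenance travels with the history itself rather than living in a side channel.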

The Strategic Implication Nobody Is Talking About

There is a competitive dimension here that deserves more executive attention than it currently receives. Organizations that deliberately enrich their repositories with machine-readable intent and specification data are not just improving their own AI-assisted development workflows. They are producing higher-quality training data for the next generation of models. If open-source development continues to feed the pre-training pipelines of frontier AI systems, then the quality of the reasoning encoded in public repositories will shape the quality of the AI that the entire industry relies on.

This creates an asymmetry: companies that treat their internal codebases as structured knowledge assets – not just as source code archives – will build internal AI capabilities that reflect higher-quality reasoning. The gap between organizations that have thought seriously about repository architecture as an AI substrate and those that have not will become a measurable capability difference within this decade.

Git was never built for machines. But machines have made themselves at home, and now the question is whether we redesign the house accordingly.

The answer, for any organization serious about AI-driven development, is yes. And the time to start is now – not when the next model generation arrives, but before it does, while there is still time to shape what it learns from you.


This post is part of a series on the structural shifts in software development brought on by AI integration. Previous entries covered specification-driven development and multi-agent orchestration in enterprise codebases.

The 10x Developer Myth Is Over – And AI Killed It


Every industry has its mythology. In software development, the most persistent one is the 10x developer. The idea that certain individuals produce ten times the output of an average engineer. That somewhere out there is a person who, given the same problem and the same tools, simply delivers an order of magnitude more than everyone else. I have been in this industry for over thirty years. I have hired hundreds of engineers. I have worked alongside many extraordinary ones. And I want to make an argument that most people are not yet ready to hear: the 10x developer, as a concept and as a hiring strategy, is over.

Was the myth ever real?

To be fair, there was something to it. The original research goes back to a 1968 study by Sackman, Erikson and Grant that found dramatic variance in programming performance across individuals. Later studies confirmed that top performers could indeed outpace average ones by significant multiples on certain tasks. The variance was real. It came from a combination of deep domain knowledge, fast pattern recognition, intimate familiarity with the codebase, and the kind of instinct that only accumulates over years of hard-won experience.

But the myth also generated consequences that were never healthy. Star developer worship. Knowledge hoarding as job security. Teams with a bus factor of one. Engineering cultures where a handful of individuals became irreplaceable and knew it, and occasionally leveraged that position in ways that were damaging to everyone around them. I have seen this pattern destroy more than one team. The 10x developer was real, but the culture built around chasing that individual was often toxic.

The lone genius model of software development is being replaced by something more interesting: distributed capability, amplified by AI.

What AI actually does to the productivity distribution

When I look at data from teams that have genuinely adopted AI coding tools – not as a toy, not as a demo, but as a core part of their daily workflow – the productivity distribution changes in a way that is structurally important. The bottom of the distribution rises significantly. Developers who previously struggled with boilerplate, with unfamiliar frameworks, with the cognitive overhead of context switching, now have a capable assistant closing those gaps in real time.

The top of the distribution also rises, but proportionally less. The senior developer who already moved fast moves faster. But the gap between the senior and the junior – the gap that the 10x myth was built on – narrows considerably. A developer with two years of experience, working with a well-configured AI coding environment and a clear specification, is producing work today that three years ago would have required five years of experience to produce. I have observed this directly, and the numbers are not subtle.

This is the democratization of execution. And it is happening faster than most organizations have internalized.

What still differentiates? The things AI cannot compress.

I want to be precise here, because the argument is sometimes misread as “all developers are now equal.” That is not what I am saying. What I am saying is that the dimensions that previously drove the 10x differential – typing speed, syntax recall, knowledge of obscure APIs, ability to hold large amounts of code in working memory – are being compressed by AI. Those were always somewhat accidental measures of value anyway.

What remains genuinely scarce, and what AI does not currently compress, is judgment. The ability to recognize that the technically correct solution is wrong for this business at this moment. Domain knowledge deep enough to spot when the AI-generated code is plausible but wrong in a way that will only manifest six months later under production conditions. System thinking that understands how a change in one component propagates to parts of the architecture that are not immediately visible. The ability to write a specification that is precise enough to drive correct AI output on the first attempt rather than the fifth.

These are the dimensions that matter now. They are also, interestingly, dimensions that were always present in the best senior developers but were often obscured by the noise of raw execution speed.

Speed of typing versus clarity of thinking: the second is now the bottleneck.

So what does this mean for hiring?

It means the interview process most companies still run is measuring the wrong things. Whiteboard coding under time pressure tests a form of performance that is becoming commoditized. LeetCode exercises optimize for pattern recall that AI can now provide on demand. These processes were always a proxy for what we actually wanted – problem solving ability, communication clarity, system intuition. They were proxies because we had no better measurement. We should replace the proxy, not defend it out of habit.

What I would measure instead: How does this candidate think through an ambiguous problem? Can they write a precise specification from an imprecise requirement? How do they evaluate AI-generated output – do they review it thoughtfully, or do they accept it uncritically? How deep is their domain knowledge in the areas that matter for your product? How do they communicate technical decisions to non-technical stakeholders?

These questions do not fit well into a two-hour coding interview. But they predict performance in an AI-assisted development world far better than any algorithm challenge.

And compensation? And team design?

Compensation models built around the 10x mythology created enormous salary variance in engineering. Some of that variance reflected genuine scarcity of specific knowledge. Much of it reflected the leverage that star performers held in organizations that had allowed single-point dependencies to develop. As AI redistributes execution capacity, the leverage shifts. The knowledge hoarder loses power. The system thinker and domain expert gain it.

For team design, the implications are significant. The argument for large engineering headcounts was always partly about raw implementation capacity. If AI increases per-developer output substantially, the optimal team size for a given amount of work changes. But the answer is not simply to run the same team smaller. It is to run a different kind of team. Fewer people doing pure implementation. More people doing specification, review, domain modeling, and AI orchestration. The roles look different. The skills required are different. The management model is different.

Organizations that reduce headcount as their only response to AI productivity gains will discover they have also reduced the judgment capacity they need to direct the AI effectively. The teams that will win are those that redesign around the new bottleneck, which is not implementation anymore.

The end of a mythology, and what replaces it

Mythologies exist for a reason. The 10x developer myth gave organizations a simple mental model for why some teams were dramatically more productive than others. It gave individual developers an aspiration and a career ladder. It gave the industry a way to justify enormous compensation variance. All of these are real needs, and they do not disappear when the myth dissolves.

What replaces it, I think, is something more honest and in some ways more interesting. The most valuable developer in the next five years is not the fastest coder. It is the clearest thinker who also knows how to direct machines. That is a combination of human skills – domain knowledge, communication, judgment, systems thinking – with a new technical competency: the ability to work effectively with AI as a collaborator rather than a tool.

That developer exists in every organization today, often not in the role you would expect. Sometimes it is a domain expert who never wrote much code but now, with AI assistance, is producing remarkably precise and useful software. Sometimes it is the thoughtful mid-level engineer who was always slower than the star performers but whose output had fewer bugs and required less rework. These people are about to become significantly more valuable, and the organizations that recognize this early will build better teams for the next decade.

The 10x developer had a good run. What comes next is more interesting, and more human.

From Procedural to Intent – The 35-Year Arc of Programming Paradigms


In the early nineties, when I started my career in IT, writing software meant writing every single instruction by hand. C was king. You told the machine exactly what to do, byte by byte, pointer by pointer. There was beauty in that precision, and also a brutal limitation. The mental overhead consumed enormous human energy, and most of it was wasted on mechanics rather than meaning.

The arc of programming abstraction – from machine instructions to human intent. Each decade, one less layer of mechanical translation.

Did Object Orientation solve the problem?

Object orientation promised salvation. Java and C++ shifted the abstraction one level up. Suddenly we talked about things rather than operations. Classes, interfaces, polymorphism. In theory this modelled the real world better. In practice it generated enormous amounts of boilerplate, heated debates about design patterns, and the rise of the “architect” as a separate species. The productivity gains were real but also came with new complexity layers. We traded one set of problems for another.

Then came Python, and with it the age of expressiveness. Write less, mean more. The scripting world invaded enterprise development. Ruby on Rails showed that a small team could build in weeks what previously took months. The abstraction level climbed again.

So what is the next step?

Looking at where we are in 2025, I believe we are witnessing the most fundamental shift in the entire history of programming. The abstraction is no longer about how to express logic in a language. It is about expressing intent in human language, and letting a model translate that into executable code. This is not an incremental evolution. This is a paradigm change comparable to the jump from assembly to high-level languages.

The trajectory is clear: from machine instructions to structured code to objects to functions to natural language. Each step removed one layer of mechanical translation from the developer’s mind. AI removes the last one.

Intent becomes executable. The translation layer that once required years of training is now handled by language models.

What this means for teams building software today is profound. The question shifts from “who writes the best code” to “who formulates the best intent.” The language model handles the rest. Developers who understand this shift will adapt. Those who think it is just a fancy autocomplete will be left behind, and probably soon.

The 35-year arc of computing has always been about raising the level of abstraction. We are now at the top of that arc, and the view is remarkable.

The Next Abstraction Layer: From Procedural to AI-Driven Development


Since the early days of computing, software development has followed a very consistent pattern: every decade or two, a new paradigm emerges that raises the abstraction level by one significant step. We moved from punch cards to assembler, from assembler to C, from C to object-oriented languages like Java and C++, and then from there to higher-level scripting and systems languages like Python and Rust. Each of these transitions shared the same fundamental characteristic — they allowed developers to think less about how the machine does something, and more about what needs to be done.

Does AI break this pattern, or does it continue it?

In my view, it continues it — but at a scale and speed we have not seen before.


When C appeared in the early 1970s, it was a revolution. Programmers could abstract over registers and memory addresses with structured control flow. With Java and C++ in the 1990s the next step happened: objects, encapsulation, inheritance. The programmer could now model the world in concepts rather than instructions. A Car object had methods and state. The machine details were pushed even further down. Python and its contemporaries took this further, removing memory management entirely and allowing rapid prototyping that would have taken weeks in C to be done in hours.

Each of these epochs shared one common denominator — the developer still wrote every line, still translated intention into instruction, just at a higher level.

This is exactly the step AI is taking now.

The translation from intention to implementation was always the developer’s core job. You had an idea, you had a requirement, and your skill was to bridge that gap in code. LLMs are now beginning to perform this translation automatically. Not perfectly, not without oversight, but in a direction that is unmistakable.

We are moving from imperative thinking — tell the machine step by step what to do — to intentional thinking — tell the system what outcome you want. The shift is profound. It is not about writing less code, it is about changing who writes it and at what level of abstraction humans need to operate.

Is this the end of the developer?

I would argue no, but the role will shift dramatically. The same way the introduction of C did not eliminate hardware engineers, but changed what skills were needed and where the value was created. The developers of the next decade will be architects of intent, not writers of loops. The skill set moves from syntax mastery and algorithmic thinking towards domain expertise, system design, and the ability to validate and guide AI-generated output.


From my personal experience leading large engineering teams, I already see this shift in practice. The question is no longer “can you write the code?” but “do you understand the system well enough to judge the code that was generated?” Quality, correctness, security and maintainability remain a human responsibility. The generation part is moving to the machine.

Where are we today?

We are probably in the MS-DOS phase of this transition. The tools are real, the output is impressive, but the workflow, the standards, the guardrails and the enterprise-grade reliability are still being developed. Companies that understand the abstraction shift happening now will be the ones architecting the platforms of the next decade. The others will be the ones migrating legacy prompt-less codebases in 2035.

The lesson from history is clear: abstraction always wins. The only question is how fast you adapt.

People lead the Digital World


In recent years, more and more discussion has arisen around the Internet of Things and its professional counterpart, Industry 4.0. It is largely the next logical step after the cloud approach the IT industry is currently pushing, which itself followed an earlier iteration called ASP (Application Service Provider). Funny enough, all these concepts and implementations are sold under the umbrella of greater efficiency and productivity for the people who use them.

Does this really lead to more productivity?

It depends. There is no single true answer, and time will tell. Various studies suggest the productivity gains are real, but they have been consumed by the growing complexity of integrating the digital world. The early digital assistants, such as electronic mail, the Internet, and the electronic calendar, did make individuals more productive. But pushing more and more decisions onto non-experts has reduced productivity overall. Where in the late 90s an admin organized many parts of the business, today highly paid employees increasingly do this work themselves. Of course there are many supporting tools. Still, it is obvious that experts handle the right processes far better than individuals who perform them only occasionally.

Companies add more software to support this evolution and to optimize costs further, but at the end of the day individuals often take more time to complete the processes they believe benefit them most. The result is less productivity. Maybe not for the company, since much of the lost time is absorbed by extra hours; before the digital revolution, that was called private time. Always on and always available means less productive for the individual.

What about society?

Evolution requires education; revolution requires new thinking, more education, and new processes and procedures. At the current pace at which IT and other industries change their models, many people cannot keep up, so they are often left behind and productivity declines. Does it matter? Yes! We have to stop measuring productivity against one individual or one group and start measuring it by its economic impact. Only this way can we continue to innovate.

Evolution or revolution?

If enough people adopt a completely new change, we speak of a revolution. Evolution happens every day and has been a constant since the early days of mankind. My opinion is that a revolution in society can be good if it is carried by the majority of people and nobody is left behind by design. That is a tricky process. In the past, however, revolution and evolution were centered around mankind. In more recent history, more and more voices talk about a revolution in robotics and artificial intelligence. That would mean mankind no longer sits in the driver's seat, even if a few believe they can control that revolution.

 

Do we then still need humanity?

The question that follows is "why". Algorithms can do things without "error"; robots can work day and night. That leads to the ultimate question: for whom? If more and more humans are replaced by smart machines or algorithms, what happens to humanity? History offers a simple answer: it will not survive. Perhaps some will survive the first iteration, but the end state is one without mankind.

Productivity around humans

Following this thought to its conclusion, the ultima ratio is that we must put people, man and woman, child and parent, old and young, at the center of any drive for productivity gains. All efforts have to aim at using productivity to improve people's personal lives, not at replacing them. To be fair, there are some duties and tasks that will benefit from removing humans from the center. But the goal, it seems to me, is to upskill, starting with our children, for better productivity paired with the skills mankind has to offer as a complement.

 

See also

Machines Can't Flow

 

IoT – Chapter I: The Things in the Internet and the connection


Why IoT now ?

In the not so distant past, miniaturization gave rise to a variety of new concepts that drove many markets. In IT, the trend is called, and originated from, the softwarization of hardware; that is the main driver of digitalization. Where the 90s focused on CNC production and process automation, improving the way humans work, the 2000s shifted the focus to software. With the concept of open-source software and mass development, higher layers of software development opened the door to connecting devices. Not to mention the standards born on the IT side, like IP and the ISO models, which led to a harmonization of the infrastructure. For around three years now, the trend toward a unified backbone, what we call IT infrastructure, consisting of hardware, software, and middleware and transparent to software development, has opened a window to connecting more and more devices. Many questions remain unanswered: security, ownership of generated and collected data, responsibility, governmental oversight, profiling, and others. However, one of the burning open questions of the 2000s was solved with the introduction of Hadoop and the MapReduce concept. Combined with the open-source approach, it marked the starting point of the big data era, or as I would call it, the era of information. The Internet of Things was born. With Google and Apple on mobile, and Facebook and Red Hat for information collection and backbone operating systems, the collected data became much easier to personalize and enrich with data sources collected elsewhere. Of course, many companies still opt out of sharing and implement their own standards to hold on to proprietary data formats. In the long run they will follow the market and join the collective; I am quite sure, and history has proven this more than once.
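The MapReduce idea mentioned above is simple at its core: map every record to key-value pairs, group the pairs by key, then reduce each group to a result. A toy word-count sketch in Python, purely for illustration and not the Hadoop API:

```python
from collections import defaultdict
from functools import reduce

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every record.
    for record in records:
        for word in record.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle: group all emitted values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    # Reduce: combine the values for each key (here, by summing counts).
    return {key: reduce(lambda a, b: a + b, values) for key, values in groups.items()}

records = ["the internet of things", "the era of information"]
counts = reduce_phase(map_phase(records))
print(counts["the"])  # 2
```

What made this pattern matter was not word counting but that both phases parallelize trivially across thousands of machines, which is exactly what a sensor-driven data flood demands.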

What is IoT ?

The basis of the Internet of Things is the collection of data from countless varieties of devices: machines, human body sensors, cars, trains, databases, watches, environmental sensors, and mobile devices, among many other possibilities. Smart homes and smart cities are the next steps the industry is currently targeting.
In addition to the devices and islands of data-producing entities, the Internet of Things is characterized by the collection and combination of that data, stored in various new database concepts. In the end, the key is to derive new insights from more data. In essence, the Internet of Things generates facts from billions of sensors combined with other data stored in the Internet.

What then is Industry 4.0?

Under the term Industry 4.0, what is often meant is the digitalization of the process automation industry. From my point of view, that is only a small fraction. Industry 4.0 describes all attempts to digitalize industrial processes, covering retail, manufacturing, connected cars, aerospace, transportation, healthcare, and other industries. The focus is not only on improving the quality of the data; it also drives a totally new way to go to market: the as-a-service approach. Companies no longer supply their offerings as a one-off, but as an ongoing service, much as telephone companies already do in their business models. Other companies, pushed by many startups, are now doing the same in business-to-business (B2B) efforts.

Future is Today (image from http://fabiusmaximus.com)

What is next?

Digitalization will not come to an end. It will consume more and more industries. As mathematicians would say, "the world is built on numbers and functions." This process will push IT and information skills into the business units. Traditional IT departments will be commoditized and often moved to a provider. I see that happening within the next 10 years, just as it happened to the telecom model in the late 90s: today no large enterprise runs its own telecommunications department; it is all integrated into IT. Within the next 5 years, all non-critical systems will be run by specialized providers. Global and large enterprises have to understand the impact of open-source software, information, and data science. With the sharing of information and the connection of things, security will be a critical asset to understand. Information and data will become assets that find their way onto the balance sheet.
In the coming years the question will no longer be what IT companies still need to own, but why they still need their own IT at all.

 

Transforming the Enterprise


In recent years, many industries have shifted from a traditional business approach, developed over decades, to a software-defined version of it. There are compelling reasons why this has happened.

(c) by presentation load


When industry develops products, it often takes years until they come to market. Take cars, for example: every car currently in mass production began design and development in the last 5 to 10 years. Updates often have to wait for the next generation of the product cycle.

This is understandable for mechanical components, but software can be adapted much faster. A good example is Tesla Motors, which changed the industry with the concept of building a computer in the form of a car. Software is updated overnight, over the air, and new functionality becomes available to driver and passengers. And not only that: the selling of such a car is different too. While traditional car dealers face the exercise of training all their sales personnel on new functions and features, new leasing models, and service capabilities so they can explain them to customers, modern companies move the sales structure to the Internet with a model that is easy to update and adjust. As a result, options and selling capabilities depend more on the flexibility and creativity of the company than on the salesforce and its adaptability. The new model that traditional enterprises stumble into demands the deep adoption of agile and innovative behavior and processes to leverage demand and open new segments of business.

Why is this happening?

Because it is possible. With the appearance of cloud and the models it supports, startups have shown that it is easy to build a business without a large investment in infrastructure or a data center. Even better: where in the past you had to ask investors for a large amount of money to build the data center, now you can pay as you build your business. This lets capital be invested in the business model rather than in the IT landscape. But that is only one aspect. With the commoditization of IT resources and container-based IT, it is far more cost-efficient and reliable to build enterprise-class IT with a minimum of investment. However, there is a trap many companies will fall into: standardization. There is currently a belief that a single cloud standard, driven by the cloud providers, can be the right one, but history has shown that this leads to higher cost and will in time be replaced by an industry association. We see this on the horizon already with OpenStack, which is still far from enterprise-ready. The key will also lie more in the PaaS layer with open-source software, like Cloud Foundry and Docker, which opens a broader ecosystem for applications and operations.

Innovation Hand. Illustration by Dinis Guarda


So how do we enable the “New” Enterprise model?

The new model will be driven by innovation in software and applications. In my daily talks with large companies and customers, many of them are thinking about how to implement these two aspects in their business process modelling. Often the initiative is driven out of the IT department, but the link to the business and its drivers is missing or simply not established. I see large enterprises and global companies investing in application development through the lines of business, building a second pool of IT knowledge that is closer to the business than to agile development. This not only often leads to a wrong assessment of the best development environment, it also creates a new class of information islands. In the long run this will not be the right innovative approach for many enterprises, even if it lets them adapt and compete much better with the new kids on the block, the startups. My advice to the CIO and cloud architects is always to engage actively with the business departments and help them change to a more agile and innovative model, which we call continuous innovation, and in return gain the IT expertise to make the right strategic decisions for the company.

IT providers, like EMC and the Federation, enable this process and also guide customers through it. In various iterations, EMC can analyze the current status of an IT department and show the path from a 2nd-platform concept to the modern web-scale architecture that the 3rd-platform concept demands. Since this is not a “shoot once and forget” exercise, in IT terms the “New” model means constant change. Where the past was about managing resources and striving for more synergy and “innovation” through new hardware and software, in the next decade IT departments will act more as brokers of public and private cloud, possibly also offering this to other companies as an additional service.

How to proceed?

It is not simple and has to happen step by step, since the current change of business models in many verticals is driven not only by development and operations aspects; it is also deeply influenced by big data concepts, which often lead to an Internet of Things discussion. Silos and public cloud may be one answer, but in many cases I see the key to success in a joint effort between the business units and the people responsible for IT in the enterprise.

The new IT Landscape


A view from the Infrastructure

For the last 15 years, the infrastructure landscape has been defined by the demands of the business. That will of course not change. However, the approach where one business line demands middleware X and another middleware Y will stop. There is a profound reason for this.

In the last couple of years, the physical infrastructure has become dramatically commoditized. This has reached a point where the savings for large enterprises are no longer of significant dimensions. The efficiency gained through server virtualization, and nowadays storage virtualization, has reached more than 80% in some enterprises. With new storage and server orchestration layers and additional concepts like the Enterprise Hybrid Cloud (EHC), this can be tweaked further, but it first needs a different approach to IT operations.

Key here is the private cloud, which offers capabilities similar to public cloud offerings, but of course on premise.

So what is the catch?

Mainly the operation. In the traditional data center, many enterprise and global IT operations departments have built a structure that maps the silo approach of the LoB (Line of Business). You will find functions focused on servers, storage, networking, databases, middleware, and so on. Each of them has coordination functions with the LoB and cross-functional sections. In many talks I have with these entities, the IT department claims it can do this better than external companies like VCE, which offers converged infrastructure. Many of them also hide behind the “vendor lock-in” argument.

On the other side, we see that this costs companies a fortune. Often these IT departments consume 70% of their budget this way; put the other way around, there is a lot they could save.

What has changed?

With the concept of “as-a-service,” IT has the ability to automate many tasks and build a software layer as the final governance. With SLAs built into the software-defined components, IT personnel no longer have to plan, define, think about, and run everything themselves. Combined with converged infrastructure and the possibilities of software-defined everything, this changes the silo approach into a more holistic view of the data center. This not only saves cost and moves test and development of the infrastructure back to the vendor, it also allows higher integration of resources to drive more efficiency.
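
To make the idea concrete, here is a minimal sketch of such a service catalog in Python. The tiers, prices, and field names are invented for illustration; they do not come from any real product.

```python
from dataclasses import dataclass

# Hypothetical SLA tiers an as-a-service catalog might expose.
# All numbers are illustrative assumptions.
SLA_TIERS = {
    "gold":   {"availability": 0.9999, "iops": 20000, "cost_per_gb_month": 0.60},
    "silver": {"availability": 0.999,  "iops": 5000,  "cost_per_gb_month": 0.30},
    "bronze": {"availability": 0.99,   "iops": 1000,  "cost_per_gb_month": 0.10},
}

@dataclass
class ProvisionRequest:
    requester: str   # line of business placing the order
    size_gb: int
    tier: str

def provision(req: ProvisionRequest) -> dict:
    """Resolve a catalog request into concrete SLA parameters and a cost."""
    sla = SLA_TIERS[req.tier]
    return {
        "requester": req.requester,
        "size_gb": req.size_gb,
        "sla": sla,
        "monthly_cost": round(req.size_gb * sla["cost_per_gb_month"], 2),
    }

order = provision(ProvisionRequest("marketing", 500, "silver"))
print(order["monthly_cost"])  # 150.0
```

The point is not the code itself but the shift it represents: the line of business orders against an SLA, and software, not a chain of silo teams, resolves the request.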

How does the LoB react?

Often they are already there. With public cloud offerings, the development of new software in these organizations often happens without any involvement of the IT department. This is a major concern I hear very often from CIOs and CDOs. LoBs look at the business outcome; they now have alternatives to the internal IT, and they move off.

So what is next?

From my view, a lot will come down to analyzing the current state of the IT department and how mature it already is in the as-a-service transformation. There are various offerings, like EMC's IT Transformation Workshop, to define and reshape the IT landscape. Have a look at that.

So what about the applications?

Not so simple. Three types of applications can be found in most enterprises.

Some applications only deliver information and exist for historical reasons. Others are large monolithic enterprise apps, like SAP or Oracle applications. The third kind are new apps for the new business lines, touching web, mobile, social, and cloud.

For the first kind, I would retire them and replace them with a database delivering the results. Maybe there are apps no longer used, but nobody has realized it? Shut them down. The second kind is trickier: each has to be looked at case by case to build a migration strategy, and this may take months or years. The last kind I would put immediately on the new concept of infrastructure.
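
The triage above can be sketched as a simple decision function. The record fields, thresholds, and category names are assumptions made for the example, not a formal assessment framework.

```python
# Three-bucket application triage as described above.
def triage(app: dict) -> str:
    """Map an application inventory record to a disposition."""
    if app.get("last_used_days", 0) > 365:
        return "shut down"                          # unused, nobody noticed
    if app["type"] == "informational":
        return "retire, replace with a database"    # historical info delivery
    if app["type"] == "monolith":                   # e.g. large ERP suites
        return "case-by-case migration strategy"
    return "move to the new infrastructure"         # web/mobile/social/cloud

inventory = [
    {"name": "legacy-report", "type": "informational", "last_used_days": 30},
    {"name": "erp-core",      "type": "monolith",      "last_used_days": 1},
    {"name": "mobile-shop",   "type": "web",           "last_used_days": 1},
    {"name": "old-intranet",  "type": "informational", "last_used_days": 900},
]
for app in inventory:
    print(app["name"], "->", triage(app))
```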

So what are the key characteristics of this infrastructure?

Automation and orchestration, commoditization and standardization. To drive more cost out of IT, the next generation of architecture has to follow these rules. More than that, it has to build an independent layer between the physical resources and the applications: an interface between resources and applications. Efficiency and faster provisioning can only be gained through automation. Modern architectures drive provisioning down from weeks to days or even hours, defining the SLAs and reporting back the cost of the selected SLAs. They also report whether a service breached its SLA or performed within the paid and agreed parameters.
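
The reporting side of this can be illustrated in a few lines. The service names and availability numbers below are made up for the sketch.

```python
# Compare measured availability against the agreed SLA target and report
# either a breach or compliance within the paid and agreed parameters.
def sla_report(service: str, agreed: float, measured: float) -> str:
    status = "breached" if measured < agreed else "within agreed parameters"
    return f"{service}: agreed {agreed:.3%}, measured {measured:.3%} -> {status}"

print(sla_report("billing-db",   0.999, 0.9985))
print(sla_report("web-frontend", 0.99,  0.995))
```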

Finally, this whole journey starts with the IT department's ability to change and to understand the journey to the private cloud.

Image courtesy of pro-physic.de, EMC Corporation

Read More:

http://itblog.emc.com/category/it-transformation/

https://blogs.vmware.com/cloudops/it-transformation

The Human Body, the most advanced factory – the real big data


Since mankind became self-aware, we have sought to understand how the human body and its functions work, and how the soul is integrated into the whole system. It has taken from the early days of Earth until now to optimize this factory and develop the higher functions we like to call consciousness, brain, and social empathy. The human ecosystem is, in essence, a factory. In numbers: every day around 20 billion cells are replaced by new ones, out of the roughly 10,000 billion we have. This means that every 10 years your body is rebuilt. This costs energy; in addition, we also expend energy, around 50 to 360 watts, to keep the factory running. By the way, most fitness trackers take this into consideration. On a daily basis this adds up to around 2.9 kWh. In the case of heavy work or sports, this goes up, of course.
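
The quoted daily budget can be sanity-checked with one line of arithmetic: 2.9 kWh per day corresponds to an average draw of roughly 121 W, comfortably inside the 50 to 360 W band above.

```python
# Back-of-the-envelope check of the energy figures quoted above.
daily_kwh = 2.9                     # quoted daily energy for "running the factory"
avg_watts = daily_kwh * 1000 / 24   # implied average power draw in watts
print(round(avg_watts, 1))          # 120.8
```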

Comparing the data center with our body, we find astonishing parallels: the nervous system and the network, the blood and the power supply, the heating and the cooling. This comes from the same physics that both we and the DC operate in. If we dig into it, we see that nature has solved most of these demands with far more creativity. We also find dedicated systems that act autonomously to keep the fabric running. I think of the limbic system and our ability to react to external wounds without a big escalation: root-cause analysis is done on the fly for the minor issues.

EMC DC Durham

A little more detail on this fascinating comparison can be found in the next episodes:

  • Episode I: The Human Body, an optimized fully automated factory
  • Episode II: The Blood in the human factory
  • Episode III: The Sensory
  • Episode IV: The Big Data approach in the Human factory
  • Episode V: The Control in the human Factory
  • Episode VI: The CyberControl in the Human factory
  • Episode VII: The Chain of Command in the Human Factory
  • Episode VIII: Automation in the human factory
  • Episode IX: Energy consumption model in the human factory
  • Episode X: Influence of the Soul in the digital Factory
  • Episode XI: Final thoughts on the digital factory

What kind of Analytics?


In various discussions with my customers and colleagues, I have experienced very controversial debates around analytics.

Some understand it as “what has happened,” also called root cause analysis; others want to predict the future, meaning “what will happen”; a third group tries to answer the question “what could happen.” Funny enough, the answer is often a combination of these. When it comes to our very fast decision-making and consuming society, this can cause some friction.

In the first case, let us call it descriptive analytics, this is the reason many companies invest money to avoid a repeat, but only after the unpredicted incident has happened.

Often it is difficult to get to the root cause, since the data or information is simply no longer available to get to the bottom of it. In the second case, statistical models show the business, based on historical information, what can go wrong; let us call it predictive analysis. In the last case, data scientists use machine learning to find possible futures based on different information streams; this is known as prescriptive analytics. It is like chess: at the first move everything is possible and only rough predictions can be made, based on information about the players; over time, as more moves are taken, the information and the possible moves get clearer.
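
The three flavors can be shown side by side on a toy series. The weekly error counts below are invented, and the trend line is a hand-rolled least-squares fit rather than any particular product's method.

```python
# Toy illustration of descriptive, predictive, and prescriptive analytics
# on a made-up weekly error-count series (oldest first).
history = [3, 4, 4, 6, 7, 9]

# Descriptive: what has happened? Find the worst week so far.
peak_week = max(range(len(history)), key=lambda i: history[i])

# Predictive: what will happen? Extrapolate a least-squares trend line.
n = len(history)
xs = list(range(n))
x_mean = sum(xs) / n
y_mean = sum(history) / n
slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, history))
         / sum((x - x_mean) ** 2 for x in xs))
forecast = y_mean + slope * (n - x_mean)   # one week ahead

# Prescriptive: what could happen, and what to do about it.
action = "add capacity" if forecast > max(history) else "keep monitoring"

print(peak_week, round(forecast, 1), action)  # 5 9.6 add capacity
```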

The human brain is capable of doing this kind of prescription on the fly; we call that experience. By the way, this is one reason why young managers or entrepreneurs often fail, but that is worth another blog. The key here is that the brain of an expert has built up enough information and channels that it can do both prediction and prescription.

When scientists build the model, meaning the mathematical representation of how information is combined, it can lead to a variety of possible outcomes; we call this data science or machine learning. Key is the amount of information: it must be deep and reach far back historically. This is also one of the boundaries.

Many CIOs and CTOs of companies I talk to today do not keep the data, for many reasons, mostly cost. So what can a data scientist then do? Simple: build the model, then run it; over time it gets better as the information is preserved. It is like chess, where you can see extremely professional players analyze and predict movements to a depth the untrained could not imagine, or even provoke movements. That is not yet embedded in this science.

A couple of weeks ago I was fortunate to see a start-up and talk to its founders. It was all about customer intimacy and combining information within a company to serve a better customer relationship. I would take this one dimension deeper: why not use this as an internal knowledge base? Let us predict what the corporate user needs to find around a specific subject; not only internal sources but also external ones can be used here.

I am still wondering why HR representatives and headhunters do not use predictive analysis to identify the right candidate for a job. Yes, you are right: the information in a company is often so scattered across filing systems that we cannot combine it. Here we arrive at the concept of a data lake: building one repository of information and using it for internal and external benefit.
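
A data lake can be reduced to its essence in a few lines: one store for raw records from different sources, each tagged with its provenance so they can be combined later. The sources and records below are invented for illustration.

```python
# Minimal data-lake sketch: one repository, provenance-tagged records.
lake = []

def ingest(source: str, record: dict) -> None:
    lake.append({"source": source, **record})

ingest("crm",      {"customer": "ACME", "topic": "renewal"})
ingest("helpdesk", {"customer": "ACME", "topic": "storage outage"})
ingest("news",     {"customer": "ACME", "topic": "acquisition rumour"})

# One query across internal (crm, helpdesk) and external (news) sources:
acme_view = [r for r in lake if r["customer"] == "ACME"]
print(len(acme_view))  # 3
```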

prop-jet engine

However, the last topic is the “what could happen” case, prescriptive analytics. This is all about possibilities and often risks. I guess this is used in Six Sigma, and mankind is used to doing it: the mother warns the child about a possible accident when walking on the street, and a company's strategic plans are all about such scenarios. I think this method means “I do not have enough data to be more precise.”

So what to do? Keep all the data, maybe randomized and anonymized, but keep it, because you never know what business can be built out of the treasure the company has generated over the years.
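
One way to keep data while anonymizing it is pseudonymization: replace direct identifiers with a salted hash so records stay joinable without exposing the person behind them. The salt and field names below are illustrative assumptions, not a compliance recipe.

```python
import hashlib

SALT = b"rotate-me-regularly"  # illustrative; real salts need proper management

def pseudonymize(record: dict) -> dict:
    """Drop direct identifiers, keep a stable salted token for joining."""
    token = hashlib.sha256(SALT + record["email"].encode()).hexdigest()[:16]
    safe = {k: v for k, v in record.items() if k not in ("email", "name")}
    safe["subject_id"] = token
    return safe

kept = pseudonymize({"name": "Jane Doe", "email": "jane@example.com",
                     "purchase": "storage array", "year": 2014})
print(sorted(kept))  # ['purchase', 'subject_id', 'year']
```

Because the same email always yields the same token, anonymized records can still be correlated over the years, which is exactly what makes the retained data a treasure.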

Welcome to the Age of Information!