Company killer

FOR ENGINEERS and managers, optimising things is a way of life – we just can’t help ourselves. Even if a machine or system is running well, we know we can improve it.

But optimising seems to have become something of a management fetish, pursued without any real understanding of the risks. The modern credo is now: cut the slack, focus on the money earners, take out as many people as possible, reduce all the waste, and deliver just-in-time at minimum cost and maximum profit. Compound this with the need to keep markets and shareholders sweet – and thereby justifiably maximise the bonus pot – and optimisation becomes an effective company killer.

A degree of optimisation makes sense, as we shouldn’t be wasting anything, but history and practical experience tell us that overdoing it is always ruinous. Over-optimisation crushes people, departments and complete companies. If, for example, one person’s illness, a power outage, or non-delivery of vital parts results in a business totally seizing up, then the end is near.

Under-investment and under-provision always incur costs that outweigh the initial savings or other benefits. Ask yourself: would you like to travel to work in a family car or an F1 racing machine? The F1 would be exciting for sure, and it would get you there at great speed. The family car, however, would do it more modestly, day after day, year after year. Without fail. Performance really does equal brittleness – it is not merely axiomatic, it is well proven.

All systems – man-made, biological, natural, contrived – follow a so-called bathtub curve of reliability. Early failures in a system are usually followed by a long period of stability – good health and near failure-free operation – and then by a wear-out phase and, finally, catastrophic failure or death.
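The shape of that curve can be sketched as the sum of three terms: a decaying infant-mortality rate, a constant random-failure floor, and a growing wear-out rate. The parameters below are purely illustrative, chosen only to produce the characteristic shape:

```python
import math

def bathtub_hazard(t, infant=0.5, decay=2.0, base=0.05, wearout=0.001, growth=0.6):
    """Illustrative bathtub-shaped hazard rate: a decaying infant-mortality
    term, plus a constant random-failure floor, plus an exponentially
    growing wear-out term. All parameters are arbitrary, for shape only."""
    return infant * math.exp(-decay * t) + base + wearout * math.exp(growth * t)

# The failure rate falls, flattens, then climbs again:
early = bathtub_hazard(0.1)   # early-life failures dominate
mid = bathtub_hazard(5.0)     # long, stable plateau near the constant floor
late = bathtub_hazard(12.0)   # wear-out term takes over
assert early > mid < late
```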

As we increase the performance of anything – and by definition that includes its efficiency – the wear-out and death phase arrives earlier and earlier. On another dimension, we can map performance or output against time, reliability and resilience. To achieve high performance, every component has to be honed, which makes each one more and more focused on a single role that is critical to the whole operation. For reliability and resilience, on the other hand, we would like to see some load-sharing or multi-tasking capability built in. But, as ever, higher performance has an unavoidable cost in terms of a reduced lifetime. There’s no getting away from it – you can’t end-run the laws of physics.
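The load-sharing point is worth making concrete. A minimal sketch, with made-up failure probabilities and assuming independent failures: a single highly honed unit is outlived by a pair of cheaper units that can cover for each other, because the system only stops when both fail at once.

```python
def mission_reliability(p_fail, periods, redundancy=1):
    """Probability that a system survives a given number of operating
    periods. With redundancy, the system fails in a period only if every
    parallel unit fails in that period (independent failures assumed).
    All figures are illustrative, not from any real dataset."""
    p_system_fail = p_fail ** redundancy
    return (1 - p_system_fail) ** periods

# One honed unit (2% failure chance per period) vs a pair of cheaper
# load-sharing units (5% each, but both must fail simultaneously):
single = mission_reliability(0.02, periods=100)              # ~0.13
pair = mission_reliability(0.05, periods=100, redundancy=2)  # ~0.78
```

The redundant pair is individually worse but collectively far more resilient – exactly the trade the over-optimised organisation refuses to make.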

For these reasons, and after years of design and management experience, my blood runs cold when I see optimisation taught in business schools and slavishly implemented by unthinking and unknowing managers. Chasing peak performance is an expensive race, and all too often it turns out to be fatal.

Another aspect that is mostly overlooked is the mode of failure. A well-designed system shows a gentle and progressive decline over a noticeable period. Over-optimisation, by contrast, invokes abrupt failures without precursors or warnings. This is the worst possible formula for most companies, and is always expensive in terms of downtime and customer relations. Remember BlackBerry and the power outages in parts of the US?
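The two failure modes can be contrasted in a toy model (all structure here is hypothetical, for illustration only): a fully optimised pipeline where every stage is critical fails as a cliff edge, while a pool of interchangeable, load-shared units degrades in proportion to its losses.

```python
def chain_output(units_ok, units_total):
    """Fully optimised pipeline: every stage is critical, so a single
    failure stops everything -- abrupt, cliff-edge failure."""
    return 1.0 if units_ok == units_total else 0.0

def pool_output(units_ok, units_total):
    """Load-shared pool of interchangeable units: output declines in
    proportion to failures -- graceful degradation, with warning."""
    return units_ok / units_total

# Losing 1 unit in 10:
chain_output(9, 10)  # 0.0 -- total outage, no precursor
pool_output(9, 10)   # 0.9 -- visible decline, time to react
```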

Blindly designing systems and optimising organisations without any overall performance insight is foolish and dangerous, and always leads to expensive management surprises.

In good design it is necessary to make wise choices about the particular application. Is your company a missile or an aircraft, a production shop or a lab? Is it optimised enough, or can it take a tweak or two? The corporate landscape is littered with the results of getting this wrong.