Last month, I reviewed the effects of the dreaded Millennium Bug. The bug was, I said, widely misunderstood – especially by the media, which concocted stories about “expert predictions” that, at midnight on 31 December, planes would plummet, nuclear reactors would melt down … and so on. The real risk – of date processing failures in a multitude of business systems – was more prosaic. Many have taken place, and there will be more throughout 2000. But, because of the work of thousands of dedicated people, they have been isolated and undramatic. We should be grateful.
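The "date processing failures" at issue were mostly of one simple kind: legacy systems stored years as two digits, so arithmetic across the 1999/2000 boundary went wrong. A minimal sketch of the problem and of the common "windowing" remediation – the function names are illustrative, not from any particular system (real Y2K work was mostly in COBOL-era code):

```python
def years_elapsed_buggy(start_yy: int, end_yy: int) -> int:
    # Many legacy systems stored only the last two digits of the year
    # to save storage. Subtracting "99" (1999) from "00" (2000) then
    # yields -99 instead of 1.
    return end_yy - start_yy

def years_elapsed_windowed(start_yy: int, end_yy: int, pivot: int = 50) -> int:
    # A common remediation: a "pivot window" interprets two-digit years
    # below the pivot as 20xx and the rest as 19xx, avoiding a full
    # conversion of stored data to four-digit years.
    def expand(yy: int) -> int:
        return 2000 + yy if yy < pivot else 1900 + yy
    return expand(end_yy) - expand(start_yy)

print(years_elapsed_buggy(99, 0))     # -99: the bug
print(years_elapsed_windowed(99, 0))  # 1: correct
```

Failures of this shape – in interest calculations, expiry checks, sorting and scheduling – were undramatic individually but scattered across a multitude of systems, which is why the clean-up was so labour-intensive.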
Nonetheless, important questions remain. Why have countries and organisations that largely ignored the problem experienced little more difficulty than those that did a great deal? And, as the global cost approaches £400bn, was it necessary to spend quite so much?
First, developing countries are far less technology-dependent than developed countries – and more used to coping when things go wrong. That applies also to Russia and eastern European countries, where Y2K action was limited.
Large developed countries were most at risk – and that’s where most money was spent. Even so, it is claimed that in some, for example Italy and Spain, not much was done. Well, it’s true that their governments were less publicly active than those of, say, the US and UK. But their governments and large corporations certainly didn’t ignore Y2K – for big business, in particular, it was a global problem, not a national one.
Also, it’s expensive to be first – and Y2K leaders had good reason to help the laggards. The scale of international cooperation was remarkable.
Initiatives taken by, for example, the US and UK got things moving. But being first meant they spent much more than some others did.
Y2K was also essentially a problem for big, complex organisations. Just as with countries, smaller organisations faced simpler problems. Many are finding that they can fix, say, an invoicing problem quickly, and that invoices can be raised by hand in the meantime.
And the “massive” costs were not so large per organisation. For example, the £20m yearly average for the top 100 UK businesses was a small proportion of their IT budgets. Yet it meant £6bn over three years – so the £20bn-plus estimated for the UK alone doesn’t seem so high. And those that spent these sums have a better understanding of their IT resources and priorities, have sharpened up contingency planning and done valuable housekeeping.
This is a better result than for many more grandiose IT projects.
But all this doesn’t entirely answer the questions. It’s hard to believe that every organisation got it right, that every programme was finished on time, that nothing of significance was missed, anywhere throughout the developed world. Yet there have been few major problems, and such an outcome just doesn’t fit with the usual experience of big projects. So, perhaps the bug wasn’t as big a threat as it seemed – perhaps companies did spend too much.
Well, it’s easy to be wise now. Before 2000, the fears were real enough.
The greatest fear was the unknown: no one fully understood our dependence on computers; or the interdependencies between computer systems – both within organisations and beyond them; or the impact on wider society if things went wrong.
It would have been irresponsible to ignore the risk. After all, no insurance company, whose essential business is the assessment of risk, would provide Y2K cover – except under near-impossible conditions. In retrospect, they could have made a lot of money.
Organisations that examined the problem decided that the job had to be done. I know of none that decided to ignore it – where it was ignored, it was because of ignorance not careful assessment. Uncertainty about the outcome continued to the end: I know of no major organisation that scaled its project down as it learned more about it.
I believe that those that did little and escaped problems were lucky – just as someone who doesn’t buy insurance is lucky if his house doesn’t burn down. But that doesn’t mean that those who took out insurance should feel they wasted their money – or regret that the house remains unburned.
In those early days, no one could predict how things would turn out.
As it was, the outcome seems to have been relatively benign. It might not have been – and those that pioneered solutions, got on with it and did the job took the only rational and responsible course. None regret it today.
But there’s another issue. Y2K has demonstrated that we don’t really understand the importance of computers to our society. Probably we are less dependent on them than we thought. Perhaps the claims made by the computer industry are false – it is arguable that we are not yet living in the “digital age”. Yet that age is coming, and unless we improve our understanding of these matters, we may not be ready when it arrives.
Robin Guenier is the executive director of Taskforce 2000.