If 20% of all operations ended in the untimely death of the patient, would you be a big fan of major surgery? If your solicitor told you straight up that he only successfully completed 80% of his cases, would you feel comfortable giving him your business? And if one-in-five businesses failed, would you still buy shares?
In the IT industry, a one-in-five failure rate is the norm. In March this year respected information technology analyst Gartner Group estimated that the average company wastes 20% of its IT budget on misguided and inefficient spending. Worldwide, this figure translates to $500bn of wasted corporate investment every year.
The statistics on IT project failure are staggering. In 1997, Boston-based analyst the Standish Group revealed that 22% of US IT initiatives were either cancelled or not completed. And of those that did see the light of day, 84% were either late or over budget. A year later, the group found that in 1996 alone, $82bn had been wasted by US companies on failed projects. Some say the realities are even worse. Rakesh Kumar, a vice president for research firm Meta Group, estimates that more than half of all IT projects fail or at least end up over deadline and budget.
It’s not just projects that crash and burn which waste money. Customers also pay too much for basic software. In 2000, the International Software Benchmarking Standards Group analysed 800 projects in 20 countries, and found that some businesses pay as much as seven times what others pay for identical software applications.
Given this appalling waste, you almost begin to wonder why anyone goes anywhere near installing new IT. Yet so crucial is technology to today’s corporation that global consultancy Bain & Co was quoted last September in Business Week as saying that IT now accounts for 50% of all business equipment spending. This is double what it was 25 years ago. And London Business School professor Michael Earl has said that any chief executive not spending a fifth of his or her time thinking about technology is shirking their responsibility to shareholders.
This translates to a world where you really do have to plan for one in five of your train journeys ending in derailment. That’s life, it seems, as far as the IT industry is concerned. “In any portfolio you’ll always have duffers,” says Michael Cullen, a senior manager in Andersen’s technology risk consultancy. “Due to the complexity of technology and the competitive pressure to undertake projects, the issue is not whether to undertake them, but how to manage the risk.”
In the old days – up until the 1980s, say – risk management meant internal management of the IT function. The result was internal fiefdoms of white-coated technicians servicing mainframes that never seemed to produce the data the front-line business needed. Thus came the rise in outsourcing, partly spurred by the growth of giant US technology service specialists such as EDS, and UK players such as Hoskyns, which is now part of Cap Gemini Ernst & Young, as many finance directors and boards looked to solve the problem. The idea was that you could contract your IT to an external party with a strict contract and all your troubles would melt away.
Alas, they didn’t. In the main, outsourcers either provide a simple maintenance function or embark on projects on your behalf that seem just as likely as in-house alternatives to end up in the pages of the computer press under banner headlines. There’s even evidence things are getting worse, not better, through outsourcing. Last year, analyst Giga Information Group warned that, in the same period that company use of outsourcing in one form or another had risen from 80% to 92% of projects, the risk of failure had climbed as well, from 20% to 30%.
TAKE THE BLAME
Apart from a return to an agrarian economy, is there a solution? Yes, say experts – but you’re not going to like it. Firstly, it may cost more in the short term. Secondly, it turns out that a necessary first step in managing IT risk better is to face the fact that an awful lot of the time it’s not pernicious vendors or unscrupulous consultants that are to blame for installation difficulties: it’s you.
“The traditional way to define projects from the user point of view is to put a stake in the ground. Stakes, by definition, don’t move. But project requirements do, every time,” says IT expert David Marsh, head of strategy consulting at small UK consulting house Differentis.
Kumar is more forgiving. “With the best will in the world, organisations and providers try to scope projects out at the start, but both sides have different conceptions of what timescales and solutions really add up to,” he says.
Customers also suffer problems deciding what they want, argues Steve Swift, a 20-year veteran of the UK software industry, who now heads his own consultancy, New Gen. “When a project goes wrong it’s often down to internal resistance from people whose job has been changed without them being asked or informed,” he says.
Another factor is that customers want the “best toys”. “There’s always a desire, especially in government, for the latest and greatest technology,” says Chris Crane, central government partner at BT Consulting, who was previously a public sector customer himself. “A better way to proceed is to ask what capability is needed, and then to find out what can offer that.”
The major lesson is that IT is too important to handle in cavalier fashion.
Given the amounts of money involved, FDs need to be better acquainted with the risks and rewards and spend more time evaluating the business needs and the technology. John Evans, senior VP global finance and treasury for Key Equipment Finance, last year oversaw a successful implementation of a general ledger package from SunSystems. “We were very strong on our planning and tried to be as realistic as possible. We spent half a day with everyone across Europe sitting with diaries making sure it was all scheduled. Don’t be afraid to have meetings – you’ll save money later by spending a little in the early days,” he says.
In February, Computer Weekly magazine and process consultancy the Coverdale Organisation polled 800 senior IT professionals to find out why things go wrong. The most important reasons, say the techies, are non-technical: problems with communications, leadership and clarity of purpose, with the project management process topping the list.
Most experts in project management agree that the three biggest project omens of doom are a lack of commitment from senior management – so the project ends up not being seen as important; a lack of ownership – so it’s no individual’s fault if it wins or loses; and a failure to integrate project goals with larger business aims.
The National Audit Office has produced a seven-point checklist for civil servants on how to better procure IT systems. These are: cancel if the project seems over-ambitious; ask whether your in-house resources are adequate to manage the project and, if not, limit its scope; think about business complexity when setting specifications; don’t leave the detailed payment mechanism to be decided later; don’t create distorting payment incentives; satisfy yourself that sub-contractors are managed within contract requirements; and manage the risk that isn’t transferred.
But overall, small is beautiful. Analyst firm Giga Information Group has concluded that the recipe for delivering a successful IT project is to keep cost below $750,000, complete it in under six months, and have no more than six people on it – but even these guidelines only grant a better than 50% chance of success.
In November, UK car hire company Holiday Autos had to abandon a plan to implement SAP after a firm ran up a 10-month delay in installing the system; the client is now suing.
Just before Easter, Barclays said a computer fault left employees of 20,000 UK companies without their monthly pay. The bank, the UK’s third biggest by assets, was unable to pinpoint the cause of the error in its systems.
Also in March, BT inadvertently published on the web thousands of ex-directory numbers, including 5,000 phone numbers hackers could use to get access to corporate networks.
In November, UK computer giant ICL confirmed that a multi-million pound crew-scheduling system it was developing for BA had to be junked after a two-year over-run.
An estimated billion pounds of taxpayers’ money has been wasted since 1997.
Inland Revenue, March 2000. The government woke up to the fact that a 10-year contract with EDS to upgrade hardware and communications at 600 tax offices had already cost £1.04bn – the budgeted cost – and the final tally was likely to end up as £2.4bn, more than double the original amount.
Nirs2, National Insurance recording system. Government saw the original fee charged by implementers Andersen Consulting rise from 1995’s “between £45m and £76m” to 1999’s “between £70m and £144m”. The system has since been so bug-ridden that in 1997, 1.3 million pensioners were underpaid by £43m. Accenture paid £3.9m compensation to cover other problems that led to pension companies not being paid properly as well.
Swanwick. The air traffic control centre finally became operational in February 2002. The original contract was signed with IBM in 1992. Big Blue was replaced in 1994 by a new contractor that was itself replaced in 1996 by Lockheed Martin. Lockheed Martin handed in a system after £480m had been spent, but another £180m had to be spent fixing it. A Government-commissioned report found the increase in project cost had been down to changes in requirements and underestimation of the scope and timing of implementation. Meanwhile, a satellite-based air navigation system set for 2001 delivery won’t see the light of day before 2009 at the earliest.