
IT integration

Instead of re-engineering incompatible applications, or building myriad specific links between them, the latest approach to IT integration suggests you get a translating system to sit in the middle. It then takes data from each system and passes it on to others in their languages.

In an ideal world, no company would ever waste time and money on large-scale integration projects that aim to get different, incompatible systems to talk together. There would be no need because no one would be dumb enough to install systems that couldn’t share data between themselves in the first place.

Unfortunately, the history of IT has been the history of a large number of good ideas deployed in isolation. It’s as if a number of unconnected engineers all arrived at various points on a river and spontaneously began erecting different bits of a bridge – so it doesn’t take a genius to see how much better things would have been if they’d started with a common plan and worked in a structured way.

One of the more notable instances where a large integration project went wildly astray was the Post Office Counters project, Pathway. The Government gave ICL the £1bn-plus contract in the late 1990s. The idea was to bring a banking-style network and tills to the UK’s 19,000 post offices. By 1999 the project was two years late and massively over budget. It was eventually scrapped at a declared cost to the Post Office of £71m. However, a report from the House of Commons Trade and Industry Committee stated that the Post Office’s reported loss of £415m in its 1999-2000 financial year was “largely attributable” to Pathway. ICL argued that government departments and ministers who moved the goalposts were to blame for the problems.

Bob Stream, UK general manager for integration company WRQ, says that a typical large company has anything from 30 to 50 separate applications, which is to say systems that are not connected to other applications in the company. The mergers of recent years have exacerbated this difficulty, leaving companies with a wide range of incompatible systems – which they now need to connect up.

It is hard to overstate this problem. Because each system was developed by its original programming team in isolation from the others, it has its own unique way of doing things. What companies discover when they start trying to move information between these systems is that they all have unique data definitions. One puts the customer’s surname first, the other expects the first name first. One expects a product number to be a 10-digit alpha-numeric, the other expects it to be a 15-digit numeric code, and so on and on. Every minor inconsistency turns out to be a massive roadblock preventing the smooth flow of data.
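The kind of mismatch described above can be sketched in a few lines of code. This is a hedged illustration, not any vendor's actual product: the field names and formats are hypothetical stand-ins for the surname-order and product-code inconsistencies the article mentions, with each system's records translated into one canonical shape.

```python
# Illustrative sketch: two systems store the same facts in incompatible
# shapes, so each gets a translator into one canonical record format.
# All field names and formats here are hypothetical examples.

def from_system_a(record):
    # System A stores "Smith, John" and a 10-character alphanumeric code.
    surname, first = [part.strip() for part in record["customer"].split(",")]
    return {"first_name": first, "surname": surname,
            "product_code": record["product"].upper().rjust(10, "0")}

def from_system_b(record):
    # System B stores "John Smith" and a 15-digit numeric code.
    first, surname = record["customer"].split()
    return {"first_name": first, "surname": surname,
            "product_code": record["product"].zfill(15)}

a = from_system_a({"customer": "Smith, John", "product": "AB123"})
b = from_system_b({"customer": "John Smith", "product": "987654"})
assert a["surname"] == b["surname"] == "Smith"
```

Every such inconsistency needs a translator like this written, tested and maintained, which is why the problem scales so badly.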

The result, in practice, is that customers, business partners and suppliers are kept hanging on the phone while harassed staff try to access information spread across a variety of different systems to deal with their queries – a situation which is just not good enough.

TAKING THE STRAIN
However, while companies may know that the inefficiencies in their systems are strangling their business, making the pain go away is not a simple matter. There are at least three options facing companies which want to move toward a single common view of their data. They can simply scrap everything and start again with an integrated approach. They can craft point-to-point links between each of the systems that really do have to talk to each other. Or they can build a “middleware” integration layer which sits between systems, moving data and transforming it, eliminating the need for zillions of connections.

The first option, as Kevin Malone, software technical strategist at IBM’s software division argues strenuously, is a complete non-starter for the vast majority of companies. Malone points out that even when companies were deep into the preparation for the Millennium Bug and the great Year 2000 date issue, most of them didn’t rip out and replace everything.

“The reality was that when people looked into their legacy systems, they found that these were the systems that really ran the business and they just couldn’t switch them off. If they didn’t do a rip-and-replace job on their systems for Year 2000, they are certainly not going to start doing that today, in the middle of a global downturn, when the pressure on IT budgets has never been greater,” he says.

Point-to-point integration is carried out all the time. However, there are two problems with point-to-point links. First, as any “join the dots” exercise will show, linking more than a couple of different systems point to point soon gives you a hideous spider’s web of links. Second, if something changes, an application is upgraded, say, many of the joins will have to be revisited and amended. The huge cost of ownership this necessitates has helped give point-to-point integration a bad name.
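The "spider's web" can be made precise with a little arithmetic: fully meshing n systems point to point needs n(n-1)/2 links, whereas a hub-style middleware layer needs only one adapter per system. A quick sketch, using the 30-50 applications cited earlier:

```python
# Back-of-the-envelope arithmetic for the point-to-point "spider's web".

def point_to_point_links(n):
    # Every pair of systems gets its own hand-crafted link.
    return n * (n - 1) // 2

def hub_adapters(n):
    # With a middleware hub, each system needs just one adapter.
    return n

# With the 30-50 applications a typical large company runs:
assert point_to_point_links(30) == 435
assert point_to_point_links(50) == 1225
assert hub_adapters(50) == 50
```

It is this quadratic growth, and the matching quadratic cost of revisiting links when any one application changes, that gives point-to-point integration its bad name.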

THE THIRD WAY
This brings us to the third option: building some kind of middleware bridge that can span the whole enterprise and all its various applications.
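The essence of the middleware idea can be sketched as a hub where each system registers translators to and from a shared canonical format, so data can move between any two systems without either knowing about the other. This is a minimal illustration under assumed names, not a description of any particular product:

```python
# Minimal middleware-hub sketch: systems register a reader (native format
# -> canonical) and a writer (canonical -> native format); the hub routes
# data between them. All system names and formats are hypothetical.

class IntegrationHub:
    def __init__(self):
        self.to_canonical = {}
        self.from_canonical = {}

    def register(self, system, reader, writer):
        self.to_canonical[system] = reader
        self.from_canonical[system] = writer

    def transfer(self, source, target, message):
        canonical = self.to_canonical[source](message)
        return self.from_canonical[target](canonical)

hub = IntegrationHub()
# "Billing" speaks "SURNAME,FIRST"; "CRM" speaks "First Surname".
hub.register("billing",
             lambda m: tuple(m.split(",")),         # -> (surname, first)
             lambda c: f"{c[0]},{c[1]}")
hub.register("crm",
             lambda m: tuple(reversed(m.split())),  # -> (surname, first)
             lambda c: f"{c[1]} {c[0]}")

assert hub.transfer("billing", "crm", "Smith,John") == "John Smith"
```

Adding a new system to this arrangement means writing one reader and one writer, not a fresh link to every existing application.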

The major drawback here, as Stuart Hamilton, chief technology officer at e-business consultancy Axon Group points out, is that companies rarely have a budget for integration. Instead, integration work tends to be predicated on and to draw from budgets targeted at specific systems roll-outs and lines of business. “We very rarely see a budget that is specifically earmarked for integration. Certainly, companies should take a strategic view of integration, but in practice they very rarely take it seriously enough,” he says.

Hamilton’s point is that budgets tend to be allocated for specific projects that map directly to business activities. The fact that experience shows that 30% of the average IT project spend has to go on integration has not, to date, been sufficient to get integration championed consistently at board level. “It is much harder to build specific ROI cases for generalised integration projects,” he says. “The only time it gets into the spotlight is when, almost coincidentally, some part of the business initiates a project that cuts across a lot of systems. ERP implementations have integration as a theme, particularly when they impact multiple parts of the business. In general, however, integration is only dimly seen as a strategic issue.”

At Microsoft’s recent launch of its middleware XML server, BizTalk Server 2002, the company’s e-business server group general manager, David Kiker, and Jess Thompson, research director at Gartner Research, argued that companies need to establish centralised, in-house integration teams. Armed with the right middleware, they can make a huge impact on corporate performance, Kiker argued.

It is likely that during the next two to three years this idea will be taken up more and more by companies. Microsoft’s product, however, is only one among a number of tools and product suites that are aimed at enabling data flow between applications. Moreover, as is so often the case with IT systems, most integration exercises will probably be carried out by specialist external providers.

SWANN SONG
Allen Swann, president of international operations at CRM and e-business vendor Chordiant, argues, for example, that his company and others now have the capacity to bring together information from multiple sources and have it appear almost instantaneously on purpose-built screens. If companies can have this kind of single-view access, he says, it doesn’t matter what systems they are using to run their core business applications.

Once this happens, the whole point of differentiating between legacy and bleeding-edge systems drops away. Companies can basically use any system that has a proven capability to meet a specific business need.

Any information captured by that system can be integrated with all other relevant information in a new, purpose-built e-business or CRM package.

“We are doing integration between islands of operational data held on different systems. Our approach is to use industry standards such as Java and XML to deliver the information needed in real time,” Swann says.

The difference between this kind of approach and the point-to-point hand-crafted connections between systems discussed earlier, comes down to code reuse. By wrapping the data in reusable “objects” and reusable software components, the following generations of systems can reuse most of the programming effort involved in the first generation product. “One of our US banking clients found that it saved 40% on the overall project time in the second generation of its CRM system, and 70% in its third generation system,” says Swann.
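The reuse claim can be illustrated with a hedged sketch: once the legacy data is wrapped in a reusable component, later generations of front end call the same wrapper rather than re-implementing the extraction logic. The class and record format below are invented for illustration:

```python
# Sketch of code reuse via a data wrapper. The legacy row format and all
# names here are hypothetical.

class CustomerComponent:
    """Reusable wrapper around a legacy record: written once."""
    def __init__(self, legacy_row):
        surname, first = legacy_row.split("|")
        self.full_name = f"{first} {surname}"

# First-generation CRM screen uses the wrapper...
def crm_v1_greeting(row):
    return f"Dear {CustomerComponent(row).full_name}"

# ...and a later generation reuses it untouched for a new channel.
def sms_v2_greeting(row):
    return f"Hi {CustomerComponent(row).full_name}!"

assert crm_v1_greeting("Smith|John") == "Dear John Smith"
assert sms_v2_greeting("Smith|John") == "Hi John Smith!"
```

The second and third generations inherit the wrapper for free, which is the mechanism behind the project-time savings Swann describes.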

Swann makes the technical but important point that old programming bound business processes fairly tightly to data. “Typically, the data had to be structured in a certain way, and the work flow was bound to it. However, these new methodologies allow us to separate the process from the data,” he says.

A common process in a telco, for example, would be upgrading a user’s handset. Today, where the ability to spot up-selling and cross-selling opportunities is fundamental to driving increased revenues, a telco will want to modify this process. It will want, for example, to add instructions to the process that prompt an agent to cross-sell upgrade insurance when the customer orders a next-generation handset. Having a work-flow engine that sits separately from the data makes it a simple matter to add an additional step to the work-flow process.
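The point about separating process from data can be sketched very simply: if the work flow is just an editable sequence of steps, adding the cross-sell prompt is an insertion into that sequence, with no change to the underlying data or systems. The step names below are hypothetical:

```python
# Sketch of a work flow held separately from the data: the process is an
# editable list of steps. All step names are hypothetical.

upgrade_handset = [
    "verify_customer",
    "check_handset_stock",
    "place_order",
    "confirm_delivery",
]

# The telco adds the cross-sell prompt without touching the data layer:
upgrade_handset.insert(2, "offer_upgrade_insurance")

assert upgrade_handset[2] == "offer_upgrade_insurance"
assert len(upgrade_handset) == 5
```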

GOING TO MARKET
“Our view is that companies will become much more marketing-led over the next five years or so. Marketing is going to need a single view of the customer and it is going to want to be able to use its data to give the customer a more satisfying experience. This doesn’t mean changing the underlying systems. But it does mean investing in projects to separate out the business logic from the data,” Swann says. “The presentation layer is also being split off since it is now almost impossible to predict the nature of the device that a user might deploy to contact the company. It could be a PC, a handheld computer or a mobile phone.”

It seems that, given the plethora of modern integration and middleware platforms, user organisations which structure their integration projects carefully need not feel they are throwing good money after bad. The arguments against integration, proponents point out, are based on failures and cost overruns in old-fashioned point-to-point integration.

Enthusiasts for the new web- and middleware-based approach to integration, as exemplified by Chordiant and others, argue there is now no barrier to data flows that cannot be efficiently overcome. This is great news both for companies with a collection of mismatched applications and packages, and for those on the acquisition trail, which have to take their targets’ IT systems as they find them.

 

CASE STUDY – THREAD MAKER SEWS UP KEY DEAL

Coats North America, one of the largest manufacturers of thread for industrial and home sewing, was faced with a threatened “revolt” by a major apparel manufacturer – one of its key customers – which wanted a B2B integrated system that did things its way.

The problem Coats had was that the customer had its own ideas of what the stock numbers and structures should be. It did not want to use Coats’ SKUs (stock keeping units) or part numbers to order thread or check prices and availability.

“They wanted to be able to check inventory and order our goods in their language, not in the way our order processing system talks,” says Bart Austin, application development manager at Coats North America. The customer also wanted Coats to deliver this functionality in six weeks.

Coats chose to use a web-based integration product, Verastream Integration Broker, from WRQ to comply with its customer’s wishes. The point about this approach is that it provides what WRQ calls a “non-invasive framework” for combining IT systems. The integration is done at the business-process level – leaving the original systems untouched. WRQ says traditional integration methods, such as old-style screen scraping and host application re-engineering are slow, costly and sometimes risky. However, a small team working with the WRQ product was able to finish the entire project within the time-frame stipulated by Coats’s customer and at a fraction of the cost that would have been entailed by a traditional systems integration approach.

With the new application in place, users can now access information from anywhere in the world through the internet. This enables the customer to specify a “ship to” point, depending on which manufacturing contractor they are using. They can also order thread from Coats in the terminology they wish, and the program cross-references the request against Coats’ part numbers and places the order.
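The translation step at the heart of the case study can be sketched as a simple lookup from the customer's own part numbers to Coats' SKUs before the order is entered. The part numbers, SKUs and function names below are invented for illustration, not taken from the actual Coats system:

```python
# Illustrative sketch of ordering "in the customer's language": a lookup
# table maps the customer's part numbers to the supplier's SKUs. All
# numbers and names here are invented.

CUSTOMER_TO_COATS = {
    "THREAD-RED-40": "C0019938",
    "THREAD-BLU-40": "C0019940",
}

def place_order(customer_part, quantity, ship_to):
    sku = CUSTOMER_TO_COATS[customer_part]  # translate into Coats' terms
    return {"sku": sku, "quantity": quantity, "ship_to": ship_to}

order = place_order("THREAD-RED-40", 200, "Contractor A")
assert order["sku"] == "C0019938"
```

Because the translation happens at this layer, neither the customer's procurement system nor Coats' order-processing system has to change.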
