THE banks are doing it, and so are oil companies, gaming operations, telcos and systems houses. It all started well over 20 years ago, and soon they will all be doing it. Just how do you test complex systems, networks, applications and software?
Combinatorially, it is impossible to test anything that employs millions of lines of code, thousands of chips and dynamic routings of inputs and outputs spanning thousands to millions. The days of ‘a, b, c’ logic and ‘flood/stress testing’ are long gone, and the days of the in-house test team are numbered.
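To put the combinatorics in perspective, here is a back-of-the-envelope sketch (the figures are illustrative, not drawn from any particular system): exhaustive testing grows exponentially with the number of independent inputs, so even a tiny module is out of reach.

```python
# Illustrative arithmetic: exhaustive test coverage grows
# exponentially with the number of independent binary inputs.
def exhaustive_cases(n_inputs: int) -> int:
    """Number of test vectors needed to cover every combination
    of n independent on/off inputs."""
    return 2 ** n_inputs

# A module with just 32 binary inputs already needs ~4.3 billion
# vectors; at a (generous) million tests per second that is over
# an hour of solid testing -- and real systems hold vastly more
# state than 32 bits.
print(exhaustive_cases(32))                # 4294967296
print(exhaustive_cases(32) / 1e6 / 3600)   # hours at 1M tests/sec
```

At 64 inputs the same sum runs to roughly half a million years, which is the point of the argument: exhaustive testing is not merely expensive but physically impossible.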
People who design systems, build the hardware and write the code are fundamentally incapable of testing their own brainchild. A different, more naïve methodology with a greater degree of deviant thinking is required. In short, this is an ideal space and problem set for hacker minds. Here, the ‘white hats’ rule. For decades, banks have employed specialist companies to attack their systems and find bugs presenting failure potential and attack opportunities. It is better to be attacked by a friend than an enemy.
More recently, gaming companies and their back-end support suppliers have engaged in the practice of rewarding gamers and hackers alike for turning up vulnerabilities before beta software has gone live. In a slightly different twist on this practice, some oil companies have published vast databases, with equally big rewards for any person or group who can demonstrate an ability in finding oil and gas deposits.
How come this all works so well? One local gaming company I know had a 30-strong test team (isolated from all the development groups) that tried to find pre-release product flaws and security vulnerabilities. It did a reasonable job, but something always slipped through, and such slips often proved expensive. So the company posted a beta program and offered £100 per bug identified. Thousands of hackers appeared and spent weeks finding bugs at a fraction of the normal cost. The outcome was so good that the test team was disbanded, its members were reassigned to far more productive work, and this testing practice was adopted as the new norm.
So who are the hackers? They are black, white and grey hats who see an opportunity to make a lot of money fast, and they are driven to do so. The most recent convert to this highly efficient and cost-effective model is United Airlines, which is paying for bug finds in air miles.
Of course, this is not so far removed from the common practice of you and me debugging applications launched by the biggest, best, smallest and worst software houses. We discover bugs through everyday use and report the failures, and we do it for free. But this all poses the obvious question: is there a better way? The greatest successes have been chalked up by the military and aerospace sectors, which employ three (or more) software groups of widely differing ethnic, educational and experience backgrounds to write applications using three (or more) operating systems. Ditto three (or more) hardware design teams using three chip sets and design philosophies to create three base platforms. Run all three (or more) in parallel with majority voting at all output ports and/or key decision points, and reliability is dramatically enhanced because the soft and hard failure mechanisms are decorrelated.
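The majority-voting idea at the heart of this redundancy scheme can be sketched in a few lines. This is a minimal illustration of the principle, not the implementation any real avionics system uses, and the function names are my own:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value produced by a strict majority of the
    redundant channels; raise if no majority exists, so the
    system can fail safe rather than emit a disputed value."""
    value, count = Counter(outputs).most_common(1)[0]
    if count * 2 <= len(outputs):
        raise RuntimeError("no majority -- fail safe")
    return value

# Three independently developed channels compute the same output;
# one suffers a (soft or hard) fault and the voter masks it.
print(majority_vote([42, 42, 41]))  # 42
```

The point of using independently developed hardware and software for each channel is that a common design flaw cannot outvote the healthy channels: faults that are decorrelated across the three implementations are very unlikely to produce the same wrong answer at the same moment.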
While this approach works well and comes with a proven track record, its adoption is reserved for the most mission-critical applications due to the high cost. Of course, AI systems might step up to the plate in the future, but they are still too honest and lack the deviousness of the human mind. But who knows when that could change? For now, the white, black and grey hats are safe and assured of a growing customer base and income.
Peter Cochrane is an IT consultant and former chief technologist at BT