There’s no denying the buzz around Artificial Intelligence (AI). Hyped by corporates and lauded by journalists, it has now begun its rapid descent into the ‘trough of disillusionment’ in Gartner’s 2017 Hype Cycle – the stage in a new technology’s lifecycle at which the public realises it may have overestimated the technology’s potential and meets this sudden realism with a strong dose of pessimism. The net result is that AI has become easy to talk about, but hard to implement in large organisations.
For financial services, this is both a predictable outcome and an obstruction to those who wish to explore the prospects of AI. International banks and investment firms cannot afford to be swept up in such narratives, which distract from the hard reality of what needs to be done to make the most of a promising technology. Instead, firms should ask more strategic and pragmatic questions: can AI solve any of our long-standing problems? Can this technology help us offer a service more in line with customer demand? What are the pitfalls of implementing AI in a large corporate?
After a decade in which the compliance, risk and regulatory agenda has dominated company resources and boardroom discussions – enabling fixed-income dealers at U.S. and European banks to comply with MiFID II alone has cost $20bn per year – these questions have been neglected. Innovation has, in effect, been sidelined.
But to realise the competitive advantages that come from AI’s speed and insight, it is essential that individuals at financial services firms take the right kind of initiative.
In the first instance, you need to ascertain which pre-existing business problem AI can solve and, in doing so, save costs or deliver added value. Because of the risks involved in innovation, a useful rule of thumb is that the potential return on investment for any non-mandatory work should be ten times higher than normal. A compliance department can always quantify its value to the business; tech innovation needs to be able to do the same.
Then there is the issue of focus. If they are being honest, most large banks will tell you they have outdated processes and legacy systems built into the core of their operations. On top of this, decision makers are often inundated with AI-related emails, usually from regtechs offering the latest ‘magic bullet’ and focused more on the services they can provide than on what the business actually needs. To combat this, shortlist the regtechs with the most relevant experience, present each with the same problem and ask them to work on a solution independently. Then take your pick based on an honest assessment of which solution is most practical. Through this process, the large firm is more likely to develop AI projects that are business-led and profitable, and the small firms receive direction on the products the industry actually needs.
Thirdly, familiarise yourself with the internal processes required to gain access to funding. Consider both what you need to do and who you need to get on board to prevent your initiative being stifled by procedural issues. While frustrating, these processes are essential. Firms which allocate innovation funding through such procedures avoid the risk of pursuing projects born of human bias – chosen, for example, because they are easier to comprehend, or even because of personal connections – rather than on their business case; in other words, it ensures that all ideas get a fair hearing and that only the most promising are pursued. This approach also benefits leadership, since a formalised system creates clear accountability for new projects and gives management a fair chance to grant its stamp of approval.
Yet even with this systematic approach to leading innovation, there will inevitably be further pitfalls along the way, particularly with a sensitive issue such as AI. While any technological change can face opposition from tech-averse colleagues, AI can provoke extreme resistance because it is widely perceived as a way to cut jobs.
To mitigate this, honest and transparent communication is essential. Providing a clear outline of your objectives and AI’s potential to benefit the users of this technology will help to counteract the uncertainty that surrounds it.
Similarly, it is useful to include colleagues who will have to use the new system in the design phase, to secure their buy-in from the outset. Although this may be time-consuming – and, in some cases, awkward – it will yield significant long-term benefits as users will understand the relevance of the new system to their role and the suspicion around AI will be reduced.
Starting with the end in mind, though, requires one final consideration: will this AI-based innovation actually benefit your clients? Are they even on board with how their data will be treated? Business model change is most successful when it considers what the customer wants, yet in financial services this axiom is habitually overlooked. So much time is spent dealing with regulators, lawyers and other advisory bodies that the client can become a footnote in the whirlwind of day-to-day operations. But proximity to the client is crucial: only by understanding your client’s perspective on AI can you deliver a project that is truly worthwhile.