Opinion: Testing Digital Advice Algorithms
The opinion piece below responds to a recent submission by the Financial Planners Association to ASIC’s Robo Advice Consultation Paper 254.
There should be a level playing field between digital advisers and human advisers. Whether it be advice industry bodies, financial institutions, fintech businesses, or financial advice software providers, every organisation with an interest in digital advice appears to agree with this statement. Or at least they claim to. However, once we delve deeper into some of the submissions, the hypocrisy reveals itself. Does everyone really want a level playing field? Or are some organisations trying to place unreasonable hurdles in front of fintech businesses, in order to protect their own competitive position?
One of the key areas of contention is how the person responsible for the advice (known as the Responsible Manager) should satisfy themselves that their advice is being produced as intended. Some organisations have recommended that digital advisers’ algorithms should be tested and verified externally, possibly by an actuary. On the surface this might seem reasonable. What is the harm in getting a highly skilled and qualified professional to provide an additional layer of verification? Being an actuary, you might think I would support this stance. Not at all. Let’s look at the implications of this recommendation.
Firstly, requiring digital advisers to have their algorithms independently verified would be inconsistent with the treatment of human advisers. The fact is that human advisers have processes (that is, sets of rules) in place to make recommendations. In some cases, those processes are formal and documented. In other cases, the process resides in the adviser’s head. Either way, a process exists.
For example, a human adviser might classify their clients into 5 risk categories (e.g. high-growth, growth, balanced, cautious, conservative) based on responses to a questionnaire. Then they will recommend one of 5 model investment portfolios corresponding to the risk classification. Similarly, the adviser might have a process to recommend a Transition to Retirement pension to working clients aged between 60 and 64. Furthermore, human advisers frequently use software such as XPlan, Coin, Provisio and Midwinter to prepare their advice. These software packages use algorithms.
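Rule sets like these are simple enough to write down directly as code, which is the point: a digital adviser’s algorithm is just an explicit version of the same process. The sketch below is a hypothetical illustration only; the score bands, portfolio names and age thresholds are invented for the example, not drawn from any real adviser’s process.

```python
# Hypothetical, illustrative version of a human adviser's rule set.
# All thresholds, category names and portfolio labels are assumptions.

RISK_PORTFOLIOS = {
    "high-growth": "Model Portfolio 1",
    "growth": "Model Portfolio 2",
    "balanced": "Model Portfolio 3",
    "cautious": "Model Portfolio 4",
    "conservative": "Model Portfolio 5",
}

def classify_risk(questionnaire_score: int) -> str:
    """Map a questionnaire score (0-100) to one of five risk categories."""
    if questionnaire_score >= 80:
        return "high-growth"
    if questionnaire_score >= 60:
        return "growth"
    if questionnaire_score >= 40:
        return "balanced"
    if questionnaire_score >= 20:
        return "cautious"
    return "conservative"

def recommend_portfolio(questionnaire_score: int) -> str:
    """Recommend the model portfolio matching the client's risk category."""
    return RISK_PORTFOLIOS[classify_risk(questionnaire_score)]

def recommend_ttr_pension(age: int, is_working: bool) -> bool:
    """Recommend a Transition to Retirement pension to working clients aged 60-64."""
    return is_working and 60 <= age <= 64
```

Whether these rules sit in an adviser’s head or in a function like the ones above, the underlying logic is identical; the code simply applies it consistently every time.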
In all these examples, there is no requirement for the process or algorithm to be independently verified. The only requirement is that the adviser and Responsible Manager can demonstrate that the advice (that is, the output) is reasonable and in the best interests of the client. The focus is the output, not the process that was taken to get there. And there is no requirement for external verification, whether that be to verify the process or the output.
So why should the rules be any different for digital advisers? The processes and rules that a digital adviser uses are no different from the processes and rules adopted by a human adviser. The only difference is that when a computer follows these rules, the result is always applied consistently. In contrast, when a human applies the same rules, inadvertent errors can be introduced. If anyone should be verified independently, it is the human adviser who is more prone to errors. But then we wouldn’t have a level playing field.
Secondly, external verification is expensive. One of the great benefits of digital services is their ability to reduce the cost of providing financial advice, and therefore make good advice accessible to more Australians. As ASIC states in Consultation Paper 254, “We believe that digital advice has the potential to offer a convenient and low-cost advice service to clients.” Over 80% of Australians don’t seek advice, and the high cost of advice (usually thousands of dollars per year) is one of the main reasons cited. External verification will increase the cost of digital advice for consumers. What’s more, it will make many early-stage digital advice businesses non-viable, which will reduce competition and consumer choice. In addition, it will make digital advisers reluctant to improve their algorithms, which will reduce the quality of their advice.
Thirdly, external verification is slow. Every time a digital adviser wished to change their algorithm, it would need to be signed off externally. This could take weeks. The result will be less experimentation and innovation. What’s more, it will mean that digital advisers aren’t able to respond quickly to market and legislative changes, which will once again reduce the quality of their advice. For example, let’s suppose there is a sudden stock market crash. A digital advice business identifies that an algorithm change would be in their customers’ best interests. The employees work through the night to design, build and test the modifications. They are ready to implement their changes, but they have to wait for sign-off, which will take 2 weeks. What should the business do? Run with the existing sub-optimal algorithm (which is not in clients’ best interests)? Or stop providing advice altogether, which is also not a good outcome for their clients? To deliver the best possible advice, the digital advice business should have control over the advice they provide.
Finally, to require external verification would be inconsistent with situations where technology is used in the delivery of other financial services. Banks, insurance companies and fund managers deliver financial services to millions of customers every day. The services they deliver rely extensively on technology. Algorithms and software are typically used to identify customers, transfer money, charge interest and fees, calculate balances, issue statements, and contact customers.
Where technology is used in banking, insurance and funds management, it is expected that a testing framework should be in place. It is the Responsible Manager’s role to ensure that the testing framework is robust and fit-for-purpose. However, no external verification is required. Rather, it is recognised that the Responsible Manager is in the best position to determine whether testing and verification should be carried out internally or externally. Why should financial advice be treated differently?
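To make the comparison concrete, internal verification of an advice algorithm looks like ordinary, output-focused software testing: check that the outputs are reasonable across the full range of inputs, including boundary cases. The sketch below is a hypothetical illustration; the risk-scoring rule being tested is invented for the example.

```python
# Hypothetical sketch of an internal, output-focused test of the kind a
# Responsible Manager might require before sign-off. The rule under test
# (a questionnaire-score-to-risk-category mapping) is an invented example.

def classify_risk(score: int) -> str:
    """Map a questionnaire score (0-100) to a risk category."""
    bands = [(80, "high-growth"), (60, "growth"), (40, "balanced"),
             (20, "cautious"), (0, "conservative")]
    for threshold, category in bands:
        if score >= threshold:
            return category
    raise ValueError("score must be non-negative")

def test_classify_risk():
    # Every valid score maps to exactly one of the five categories...
    categories = {"high-growth", "growth", "balanced", "cautious", "conservative"}
    assert all(classify_risk(s) in categories for s in range(0, 101))
    # ...and the band boundaries behave as documented.
    assert classify_risk(80) == "high-growth"
    assert classify_risk(79) == "growth"
    assert classify_risk(0) == "conservative"

test_classify_risk()
```

Tests like these focus on the output (is the recommendation reasonable for this input?) rather than on who wrote or inspected the code, which is exactly the standard already applied to human advisers and to technology in banking, insurance and funds management.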
Anyone who genuinely believes in a level playing field, wants to create more affordable advice for Australians, and understands how digital advice works, would strongly oppose a requirement for advice algorithms to be verified by external parties.
Greg Einfeld is Co-Founder of Plenty (http://www.plenty.com.au)