When robo advice platforms first surfaced several years ago, many flesh-and-blood financial advisors were afraid that these tools would make them obsolete — or at least force them to lower their fees. (Much of that fear has subsided.) But robo advisors are susceptible to the same conflicts of interest that have affected human advisors, argues Tom Baker, a professor at the University of Pennsylvania Law School.
In a recent essay, "Regulating Robo Advice Across the Financial Services Industry," Baker and his colleague, Benedict G. C. Dellaert of Erasmus University Rotterdam, evaluate the challenges of regulating the automated advice industry in a world where existing laws are designed for living, breathing human beings. For example, while most existing robos are based on low-cost passive strategies that provide market exposure, Baker says it could become a big problem if robos start promising to beat the market.
“The challenge for regulators is to develop the expertise that they need to evaluate whether robo advisors are competent and honest,” he says.
Here, Baker talks to WealthManagement.com about where robos could go wrong, ideas for how to regulate them and the hybrid robo phenomenon.
WealthManagement.com: What new challenges does robo advice raise for regulators?
Tom Baker: One is the ‘square peg, round hole’ problem — when it comes to financial intermediaries, our regulations and laws were written with people in mind, not algorithms. That’s one problem, and it strikes me as easily fixable without a lot of heavy lifting by the regulators.
The other problem is that the people who have designed the first generation — Personal Advisor, SigFig, Wealthfront, Betterment — I would call them idealistic. They’re well-intentioned, using good research, trying to do a good job to be quite consumer-centric and operating basically to promote a passive investment strategy that has an excellent theoretical and empirical pedigree. But there’s no reason that a robo advisor couldn’t be put together incompetently or in a way that would further the interests of the entity employing the robo advisor.
I’ve had a student write a paper for me where he says there ought to be something like a driver’s license for a robo advisor. Before a robo advisor can go to the public, there would have to be a demonstration that it is competent and honest. I’m not prejudging whether that should be the regulatory approach or not, but it’s certainly one possibility.
WM: What would be some of the red flags?
TB: In the paper, I identify four different components of a good robo advisor. A robo advisor is matching a person to a financial product or set of products that’s good for them. In order to do that, the robo advisor needs good data about the relevant attributes of the people, and it needs good data about the relevant attributes of the products. When it comes to the kinds of funds that you would use for a passive investment strategy, there’s good data about them, so that’s not an issue.
The second part of the robo advisor is the algorithm or model for matching the people to the product. There, if you’re following a kind of passive investment strategy, there’s also decent research on what makes a good portfolio. Still, someone could do that incompetently. Some guy who’s great at programming but maybe hasn’t done enough work on building a decent portfolio could do a bad job of that.
If you were to move out of 'I’m developing a passive portfolio for someone that’s designed to help them be indexed to the market,' into a position where, 'I have a robo advisor that’s going to beat the market,' there I would start getting really nervous about that algorithm. If I’m doing something exotic or something that promises to beat the market, I would be more worried about whether that was good.
The third element is what we call in the paper 'the choice architecture.' In a world where I’m giving people an environment in which they’re going to make choices, that could be done badly. The decision environment could be set up in a way that leads the consumer to choose the plan that paid the best commission to the intermediary. Choice architecture could be incompetent or it could be evil.
The final piece is the security/integrity/privacy — the IT part. We don’t have a lot to say about that, but that’s obviously something financial services regulators are worried a lot about.
WM: Should advisors be afraid robos are going to take their jobs?
TB: You look at who’s getting real market share in terms of robo advisors. It’s the BlackRocks, it’s the Vanguards — the hybrid robos. It’s when they’re paired with people who are advisors.
WM: What aspects of financial advice cannot be automated?
TB: A robo advisor is not very good at coaching you. They’re not very good at explaining why you want a balanced portfolio. They’re not very good at helping people realize what they need the money for in the future. Or, 'How do I deal with the fact that I don’t have as much as I thought I had?'
It’s this hand-holding apart from the portfolio selection that people still need individuals to do.
WM: Do you think that the financial services industry will evolve so that this hybrid robo is the dominant business model?
TB: Yes. The idea that the investment advisor has to act in what is the best interest of the customer is inescapably going to become the norm. Not even the Trump administration was willing to kill the fiduciary rule in the employment context.
Imagine I’m an investment advisor. I’m not allowed to do funny business, like steering the customer to the things that are going to pay the highest commission to me. And if I do that, I get caught red-handed. So if I’m going to take seriously the idea of picking a portfolio that’s good for my customer, and I’m going to take seriously the limits of my ability as a person, then why wouldn’t I use a robo advisor? They’re simply better at building the portfolio.
WM: Do you think future regulation of robo advice could lead to a shakeout of robo advisors?
TB: Right now, it’s the Wild West. But I think if we were to develop a system, whether it was certification or standards, I think it would tend to favor the existing, big players. These big players, they’re like, ‘Fine, regulate us. Our stuff is good; we’ll prove it.’ And they have money to spend on compliance. I do think it’s going to make it harder for some Silicon Valley startups.
There is a legitimate concern about regulators developing too fixed an idea of what [makes] a good robo advisor. Then everyone uses the same model — and suppose that model is bad.
That’s why at the end of the paper we talk about this ‘contest of contests’ thing. We want there to be a diversity of ways of certifying robo advisors as being competent, so they’re not all the same model.
Any kind of certification or oversight process is going to make it harder for a group of 20-something computer science geniuses to start a company that’s going to displace the biggies.
WM: You say that you’re advising undergrads interested in helping professions like nursing or social work to consider careers in financial planning. Why do you think they’d be good at it?
TB: Think historically about the kind of person who became an investment advisor in the past: that person might have majored in economics or business. They were not learning how to be empathic or how to coach people. As it becomes increasingly clear that the software programs — well-designed by very smart people — are better at picking portfolios than humans, it seems the comparative advantage of the person over the machine is going to favor people with those softer skills.