The SEC’s proposed AI rule threatens to weaken advisors’ fiduciary duty, according to the Investment Adviser Association’s top attorney.
The danger of the new rule is the proposal of a “brand new framework for handling conflicts” in connection with technology tools, IAA General Counsel Gail Bernstein told WealthManagement.com during the association’s annual compliance conference this week.
“What’s going to be very challenging is that everyone understands what the fiduciary framework means, and by creating a new rule that overlays something on top of it, I think they’re potentially weakening the fiduciary duty,” she said. “It’s almost like you’re proposing a rule for the sake of proposing the rule, as opposed to, ‘Is there a gap and do we need to fill it?’”
SEC officials contend the proposed rule would limit conflicts of interest arising when brokerage firms or asset managers use AI tools to make investment recommendations or trading decisions. SEC Chair Gary Gensler has argued that investors desperately need the rule for a world where they can be micro-targeted with products and services.
The IAA, however, argued the proposed solution was far too broad for the problem. In an unusual step, the association recommended that the commission scrap the rule altogether.
A final version of the rule is expected to be released this spring.
William Birdthistle, in his last week as director of the SEC’s Division of Investment Management, told Bernstein during a conference discussion that regulators should not wait for a crisis to arrive before responding.
“If anyone here is a parent, you don’t wait until the child is in the street. You can act beforehand if you see what’s coming very well,” Birdthistle said. “Clairvoyance and prognostication are difficult, and no one gets it right all the time. But this is one where I think the degree of risk is very obvious.”
Bernstein countered that while the topic of generative AI was “scary” and needed thoughtful risk governance, the current proposal falls far short.
Jennifer Klass, a partner with K&L Gates, echoed previous concerns that the technology covered under the rule could extend beyond AI and large language models into well-used, long-established tools. Klass described the rule’s definitions of covered technology as “broad enough to drive trucks through,” a breadth she said was at the heart of much of the industry’s criticism.
“All we really know from the definitions is it relates to ‘investment-related behaviors or outcomes,’ which, if you’re an investment advisor, that’s pretty much all you care about,” she said. “The concern was that a covered technology could be almost anything.”
Bernstein said she believed the SEC recognized the definitions were too broad and hoped the commission was thinking through how to make them “more rational.” Even if the definitions were narrowed, however, she said the IAA would still prefer that the SEC withdraw the rule.
“The question I asked William Birdthistle this morning was, ‘What is it actually about, and what are you trying to do?’” she said. “It’s not clear that fixing the definition is going to answer that question.”
Klass questioned whether the SEC needed a new rule specifically for AI in the first place, since the existing Advisers Act rules are media neutral and an advisor’s fiduciary duty already defines what conflicts are and how advisors must address them.
“We keep coming back to that as a framework that has worked over decades for many different new technologies, and it’s not clear why there are features of AI that make this existing framework unworkable,” she said. “What’s so unique about AI that you can’t apply fiduciary duty?”
As evidence, Klass cited existing regulations and guidance that already govern advisors’ use of AI, including their fiduciary duty, the SEC’s 2017 staff guidance on robo-advisors and the marketing rule, among others.
Examiners are also looking into firms’ disclosure and marketing practices around AI, as well as their policies and procedures for compliance and conflicts. Natasha Vij Greiner, in her final week as deputy director of the IA/IC examination program in the SEC’s Division of Examinations, noted that many advisors were “getting it wrong” when it came to AI-related disclosures. (Greiner will succeed Birdthistle at the helm of the Division of Investment Management.)
Bernstein said that even if an SEC regulation focused on the actual technology of generative AI, the IAA would want to see more analysis before the commission proposed a rule. Instead, she said, the association could support guidance detailing the need for a principles-based risk governance framework.
“Our view is if this is about conflicts, you don’t need a rule,” she said. “If you feel like advisors need to understand better how to think about conflicts with certain frontier technology, think about giving guidance.”
Birdthistle acknowledged that whether the commission withdrew the rule or changed it, the underlying problem would remain. As evidence, he cited the “conundrum” he faced following meetings with AI engineers about their products.
“I ask, ‘How does it work?’” he said. “‘Stuff goes in, box does magic, stuff comes out.’ That’s not a reassuring answer.”
But while some in the industry believed disclosures could address situations like this, Birdthistle had trouble imagining that disclosure alone could solve the issue raised in that meeting.
“What are you disclosing? You can’t disclose that, that the algorithm performs in ways unknown to its engineers,” he said. “That doesn’t sound like meaningful disclosure.”