On Tuesday, Morgan Stanley Wealth Management announced it was “one of a handful” of organizations receiving early access to OpenAI’s new GPT-4 offering.
The news coincided with OpenAI’s announcement that it was releasing the next iteration of its wildly popular GPT-3 artificial intelligence model.
GPT-4 is still not widely available, and Morgan Stanley is “currently the only strategic client in wealth management” able to use it, according to the company.
“Designed specifically for and by Morgan Stanley with appropriate controls, financial advisors and their teams will use the internal capability to ask questions and contemplate large amounts of content and data, with answers delivered in an easily digestible format generated exclusively from MSWM content and with links to the source documents,” the company said in a statement.
Morgan Stanley also has other ongoing AI projects, including ones that focus on advisor workflow and client and prospect communications.
William Trout, director of wealth management for Javelin Strategy & Research, said Morgan Stanley “has a track record of leveraging digital tools and data to empower the financial advisor.”
“The incorporation of the OpenAI GPT-4 chatbot follows the implementation into the advisor workflow of 'next best action' capabilities, which had built on a years-long effort by Jeff McMillan (chief analytics and data officer) and Morgan Stanley to bring into alignment massive amounts of client, market and reference data,” said Trout in an email to Wealthmanagement.com. “The launch of this internal chatbot promises to further break down information silos at the wirehouse. Connecting the data dots will both boost the efficiency of advisors and enable them to identify new opportunities for the client. The effect will be to dramatically upskill the Morgan Stanley financial advisor.”
This is just the latest in a long string of announcements by wealthtech firms that claim to incorporate ChatGPT into their applications. Companies like FMG, Orion and Broadridge have already started integrating ChatGPT, while others are taking a more cautious approach.
“We are currently not using OpenAI in any formal manner but are in the process of exploring its possibilities in our SEO endeavors as well as our blog and newsletter content,” said Rishi Bharathan, the CEO of WiserAdvisor, a prospect referral platform for advisors. “Since we produce mostly consumer-facing content, which our users engage with while making financial decisions, we need to be extremely diligent before we publish any content.”
Meanwhile, at the Technology Tools for Today (T3) conference in Tampa, Fla., AI was, unsurprisingly, a hot topic.
Raj Madan, head of technology for wealth management platform AdvisorEngine, pointed out during a panel discussion that machine learning is a subcategory of AI but requires a lot of data to be useful.
“The whole idea of machine learning is it’s more programmatic,” said Madan. “It’s using algorithms to look at sequences and patterns and use those sequences and patterns to make some sort of predictions.”
Madan said his first ML project 13 years ago failed.
“And it gave me a great lesson,” said Madan. “Machine learning does not work without an abundance of data.”
Machine learning applications have been limited because the technique is really analyzing patterns, observing those patterns and then producing predictions, said Madan. That makes it well suited to things like credit card fraud monitoring, but of more limited use in situations where data is scarce or generated infrequently.
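The fraud-monitoring use case Madan cites boils down to learning a customer's historical spending pattern and flagging charges that fall far outside it. A minimal, purely illustrative sketch (hypothetical amounts and a hypothetical three-standard-deviation threshold, not any firm's actual model):

```python
from statistics import mean, stdev

def flag_anomalies(history, new_charges, z_threshold=3.0):
    """Flag charges that deviate sharply from the historical spending pattern."""
    mu, sigma = mean(history), stdev(history)
    return [amt for amt in new_charges if abs(amt - mu) > z_threshold * sigma]

# Hypothetical transaction amounts for one cardholder.
history = [42.0, 38.5, 55.0, 47.2, 51.3, 44.8, 49.9, 53.1]
print(flag_anomalies(history, [48.0, 950.0]))  # [950.0] — the $950 charge stands out
```

Note how the sketch also illustrates Madan's point about data: with only a handful of historical transactions, the mean and standard deviation are unreliable and the "pattern" is noise.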
The breakthrough, Madan said, was the arrival of generative AI.
“Now we’ve gone from just making small predictions to actually producing text or imagery,” said Madan. “ChatGPT is a chatbot. You give it a prompt. It gives you content back.”
ChatGPT uses the discipline of natural language processing, which draws from “a very large corpus of material,” said Madan. ChatGPT-3 was trained on around 45 TB of data, including books, articles and websites, to produce a large language model.
“(This) ultimately allows ChatGPT … to go off and create a complex sentence based on the patterns it understands from all the corpus of material it’s read,” said Madan. “Everything in the machine learning space is math. It’s all statistics. ChatGPT is using probabilities to figure out how to complete a sentence.”
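The probability-driven sentence completion Madan describes can be illustrated with a toy bigram model, a drastic simplification of what GPT actually does: count which word most often follows each word in a corpus, then pick the likeliest continuation. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def most_likely_next(follows, word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# Tiny hypothetical corpus; real models train on terabytes of text.
corpus = ("the advisor met the client the advisor called the client "
          "the advisor emailed a prospect")
model = train_bigrams(corpus)
print(most_likely_next(model, "the"))  # 'advisor' — it follows 'the' most often
```

GPT models condition on far more than the previous word and learn weights rather than raw counts, but the principle is the same: completion is a statistical prediction, not retrieval of a known fact.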
ChatGPT, beneath the covers, is built on an architecture called generative pre-trained transformers. Transformers are a type of neural network. Madan said that since 2018, when Google brought the idea to a wider audience, Microsoft, Google and OpenAI “have been in an AI race to essentially see who could produce the most sophisticated solution for that transformer algorithm.”
Madan said ChatGPT-3, which was released widely in November 2022, is an example of “the singularity.” This idea, popularized by futurist Ray Kurzweil, is defined as the moment when “technology will become so advanced the growth will become exponential,” said Madan.
Madan said another milestone will come in a few months when ChatGPT-4, which is many times more powerful than ChatGPT-3, is available to the wider public.
So, how can advisors use ChatGPT-3 in their own practices? Madan gave the examples of client communications and coding, especially with Excel, to start with.
But, he said, this doesn’t mean it will replace the roles of advisors.
“ChatGPT and all chatbots out there essentially mimic your question,” said Madan. “If your question’s not right you’re not going to have the right answer. It just knows there’s a pattern out there. That’s all it really understands. We’re not going to replace anybody with this technology. This technology is going to be an assistant to the advisor. It is going to help them get a head start or a leg up.”
There are, however, many concerns for advisors looking to use the technology in their firms, said Madan. Among them are so-called "hallucinations," where chatbots make errors and produce information that is not only untrue but absurd, defying logic.
“It’s all about probabilities and statistics,” said Madan. “You’re not going to give free rein to ChatGPT. You’re going to have some oversight. Especially if you’re in the enterprise space.”
Madan said using unfettered AI for client communications could generate content that easily runs afoul of the SEC’s marketing rule and FINRA Rule 2210.
And then there’s the problem of bias, which can be added to AI without users or programmers even realizing it.
“ChatGPT uses a bunch of data and uses a bunch of text, books, articles, newspapers and essentially digests that and produces this model,” said Madan. “The problem is, what if the content it’s producing is biased? If it’s biased, what’s going to happen to the model? Are there any humans in this room that aren’t biased? No. … And developers can actually program bias accidentally, not on purpose.”
Copyright law and plagiarism are also open questions for those who use AI for content creation.
“The fair use doctrine says you can use copyright for certain things. You can use it for research. Is ChatGPT doing research?” said Madan. “These are some of the legal issues that occur here.”
Another concern is that ChatGPT-3 stopped "training" in 2021, meaning it stopped ingesting new content into its network.
“So, it’s been two years since it stopped training,” said Madan. “What if you have some content that wasn’t copyrighted before 2021 and now is copyrighted? That’s going to be an issue.”
Madan said many of these fears will smooth out over time, though.
“It’s all about the maturity of this space,” he said.
Even so, Madan said, “ChatGPT requires oversight and cannot be blindly trusted. You cannot overstate this.”