Eric Ludwig (left) and Chet Bennetts from The American College of Financial Services. Photo by Rob Burgess.

Wealth Management EDGE: With AI Prompt Engineering, It’s ‘Garbage In, Garbage Out’

The more fine-tuned the questions, the better the responses will be.

To get helpful answers from generative AI chatbots, advisors must know what questions to pose and how to ask them.

Eric Ludwig, director of the Center for Retirement Income, and Chet Bennetts, assistant professor of financial planning and program director at The American College of Financial Services, emphasized this point during their “Role of AI in Retirement and Longevity Planning” session at Wealth Management EDGE at The Diplomat Beach Resort in Hollywood Beach, Fla.

The pair reviewed some of the most popular chatbots, including OpenAI’s ChatGPT, Anthropic’s Claude, Microsoft’s Copilot and Google’s Gemini. Ludwig said advisors who don’t currently use these services may already be employing similar ones without realizing it, including Apple’s Siri, Amazon Alexa, Netflix and autocomplete in Microsoft Word and Outlook. These AI models generate outputs based on patterns learned from their training data and from user interactions.

Bennetts said advisors should think about these models as an expanded version of the Monte Carlo method, which finds a distribution of the probability of returns given a set of circumstances.

“If we’re flipping a coin, what’s the probability that this flip will be heads versus tails? If you do it enough, you’re going to get this distribution that isn’t exactly 50-50, but there’s going to be a mean,” he said. “Now, imagine a 30-billion-sided coin where it’s all words.”
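Bennetts’ coin-flip analogy can be sketched in a few lines of code. This is an illustrative simulation only (the speakers described the idea conceptually, not as code): repeated trials of many coin flips produce a distribution of outcomes that clusters around a mean, which is the essence of the Monte Carlo method he referenced.

```python
import random

def coin_flip_distribution(num_flips: int, num_trials: int) -> list[float]:
    """Run repeated trials of fair coin flips; return the fraction of heads per trial."""
    return [
        sum(random.random() < 0.5 for _ in range(num_flips)) / num_flips
        for _ in range(num_trials)
    ]

# Each individual trial rarely lands on exactly 50% heads,
# but the distribution of trials centers on a mean near 0.5.
results = coin_flip_distribution(num_flips=100, num_trials=10_000)
mean = sum(results) / len(results)
```

A language model, in Bennetts’ framing, is the same idea scaled up: instead of two coin faces, the “sides” are every word in its vocabulary, each weighted by probability.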

Ludwig said advisors should consider employing a service like ChatGPT in their practices only if they have the expertise to verify that the output is accurate and are willing to take full responsibility for inaccuracies.

“It’s a know-it-all that’s constantly learning,” he said. Because of how it is trained, generative AI remains vulnerable to pre-existing societal biases, and it likely could not replicate qualitative analysis.

The “prompt engineering” technique rests on a simple principle: the more fine-tuned the questions, the better the responses will be.

To highlight the importance of this developing skillset, they showed a job posting for a prompt engineering manager at Anthropic, with a salary range of $320,000 to $520,000.

“It’s part art and part science,” said Bennetts. “It involves crafting or adjusting the input to improve the model’s response.”

They created an acronym to explain the steps for crafting the best prompts: RATE, which stands for role, ask or assignment, tone and extras.

Ludwig said role means who you are and who you’re writing to. After choosing the ask or assignment, the tone needs to be specified. (Is it supposed to be funny? Serious?) Then come the extras. This involves additional detail specific to that query that will narrow down the response.
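As a rough illustration of the RATE framework (the speakers described it conceptually, not as code, so the function and example values below are hypothetical), the four components could be assembled into a prompt like this:

```python
def build_rate_prompt(role: str, ask: str, tone: str, extras: str) -> str:
    """Assemble a chatbot prompt from the RATE components:
    Role, Ask/assignment, Tone, and Extras."""
    return (
        f"You are {role}.\n"
        f"Task: {ask}\n"
        f"Tone: {tone}\n"
        f"Additional details: {extras}"
    )

# Hypothetical example values for a retirement-planning query.
prompt = build_rate_prompt(
    role="a financial advisor writing to a client nearing retirement",
    ask="explain the trade-offs of delaying Social Security to age 70",
    tone="serious but reassuring",
    extras="the client is 62, married, and has a $1.2M portfolio",
)
```

The point of templating the prompt this way is that each of the four slots forces the advisor to make a decision rather than leave it to the model’s interpretation.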

“I always try to say ‘please’ and ‘thank you’ just in case, so when it overtakes the world, ChatGPT remembers me as a friend,” said Bennetts.

The important part, said Ludwig, is to be as specific as possible and not leave anything up to interpretation.

“Garbage in and garbage out,” he said.
