ChatGPT and its competitors have already achieved some impressive milestones – passing the bar exam for lawyers and helping to solve difficult medical cases.
Are these AI tools ready to replace your financial advisor?
The advantages of AI advisors are obvious at first glance. Professional financial advice is expensive and out of reach for many Americans. AI could reduce those costs and make intelligent, personalized advice available to everyone, 24/7. AI could also expand the range of financial decisions advisors cover and provide more holistic advice. Today, people don't just need help mixing ETFs into a portfolio; they also need help making difficult decisions about savings, insurance and debt management, among other things.
But while AI can do some things just as well as a financial advisor – and sometimes even better – it can't replace human advisors. At least not yet.
To understand why, let’s look at five essential qualities for effective financial advice, how AI currently performs and what it will take to get AI where it needs to go.
1. Debiasing
Let’s start with the bad news.
One of the most important things a financial advisor provides is debiasing – helping clients avoid costly mistakes caused by behavioral tendencies. Consider the tendency of people to overweight short-term losses and invest too conservatively, even when their investment horizon is 30 years or longer. In a study I conducted with Richard Thaler, people who were shown a one-year chart of investment returns invested 40% of their portfolio in stocks, while those shown long-term charts invested 90% of their portfolio in stocks – even though both groups were investing for the long term.
A good advisor helps people make financial decisions that are consistent with their long-term goals. They steer clients away from the short-term charts and latest market fluctuations that constantly pop up on their phones, and help clients choose investments that fit their actual time horizon.
Unfortunately, a working paper led by Yang Chen of Queen's University in Canada showed that ChatGPT exhibits many of the same behavioral tendencies and biases that a good advisor tries to minimize. For example, people tend to choose riskier options after suffering losses as they try to break even; in Las Vegas, this is known as doubling down. ChatGPT suffers from the same tendency, which can lead to costly errors. If an investor loses a lot of money in a crypto crash, ChatGPT might suggest buying even more crypto to double down on the risky asset.
And it gets worse: AI tools are also overconfident. It's not just that they sometimes get things wrong – it's that they are too often sure they're right. This can reinforce existing biases, because the software doesn't self-correct and can give human clients a false sense of security.
To improve the performance of AI advisors, we need to create metarules – rules that govern other rules – to help the software overcome these biases. One possible approach: Every time the AI recommends a particular financial move, it must also examine the reasons that move could be a mistake. It's like an internal audit that forces the software to consider what it might have missed.
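To make the idea concrete, here is a minimal sketch of what such a metarule might look like in an application built on top of a language model. The `ask_model` function is a hypothetical placeholder for a call to whatever model API is in use, and the three-step prompt structure is illustrative, not a tested implementation:

```python
def ask_model(prompt: str) -> str:
    """Hypothetical placeholder: wire this to a real LLM API client."""
    raise NotImplementedError("connect to your model provider here")

def advise_with_internal_audit(question: str) -> str:
    """Apply a simple metarule: every recommendation must survive
    a self-critique pass before it reaches the client."""
    # Step 1: get the model's initial recommendation.
    recommendation = ask_model(
        f"A client asks: {question}\n"
        "Recommend a course of action and briefly justify it."
    )
    # Step 2: the metarule – force the model to argue against itself.
    critique = ask_model(
        f"Here is a financial recommendation:\n{recommendation}\n"
        "List the strongest reasons this recommendation could be a "
        "mistake, including behavioral biases it might reflect, such "
        "as doubling down after losses or overconfidence."
    )
    # Step 3: the final answer must explicitly address the critique.
    return ask_model(
        f"Recommendation:\n{recommendation}\n\n"
        f"Internal audit of possible errors:\n{critique}\n\n"
        "Revise the recommendation so it addresses these concerns, "
        "or explain why they do not apply."
    )
```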
Metarules are often necessary because of the way these AI tools learn. Known as large language models, or LLMs, they are trained on huge text datasets drawn from the internet. And because the internet often presents human nature in unfiltered form, the software reflects many of our lesser impulses and tendencies.
The good news is that AIs are almost certainly easier to debias than humans through the application of metarules. While we can't directly edit the software running in our heads, we can re-engineer our AI models.
2. Empathy
The next key quality of an advisor is empathy. Imagine an investor who is nervous and worried about market volatility. Research shows that investors' background emotions can strongly influence their financial decisions: Fear leads to risk avoidance, while anger leads to more risk-taking. The role of a good advisor is to provide reassurance and support during market turmoil, so that fear and other emotions don't derail clients' long-term financial plans.
The good news is that ChatGPT excels at empathy. A recent study compared ChatGPT's and human doctors' responses to real patients' questions posted in an online forum. The responses were then rated by a panel of healthcare professionals for both quality of information and empathy.
The result was a resounding victory for the AI. The healthcare professionals were almost four times as likely to rate ChatGPT's responses as providing "good or very good" information. And ChatGPT was almost 10 times as likely to be rated empathetic: 45% of the AI's responses were judged empathetic or very empathetic, compared with only 4.6% of the physicians' responses.
These results suggest that there are some important financial-advisor tasks that AI can already perform very well. While advisors don't always have the time or bandwidth to reassure their clients during market corrections, AI can help them scale their humanity. For example, the next time there is a sharp market decline, advisors won't have to limit themselves to calling a few of their wealthiest clients. Instead, AI can deliver empathetic responses tailored to each client. If a client checks their portfolio daily, for instance, the AI can offer reassuring data about long-term market trends, as well as about the costly effects of trying to time the market.
3. Accuracy
Another important quality of an advisor is getting the facts right. Even if AI can be debiased, it still needs to base its advice on accurate representations of investing, inflation, taxes and more.
More bad news: The bots are currently very unreliable and make lots of mistakes. For example, when I asked a leading AI tool to help me choose between Vanguard and Fidelity Nasdaq index funds, I received a very impressive-sounding response focused on their long-term performance and expense ratios. The only problem: The bot based its analysis on the wrong funds, using numbers from a Vanguard S&P 500 fund and a Fidelity real-estate fund. It was both very confident and completely inaccurate.
This problem can largely be solved with plug-ins – external tools the AI uses to shore up its known weaknesses. When you ask Google a math question, a calculator appears alongside the answer; AI tools should do the same. Beyond using a calculator, AI advisors should be integrated with reliable financial databases, such as Morningstar, so that their models and recommendations are based on accurate representations of the financial world.
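One way to picture the plug-in approach: The application, not the model, fetches the fund facts, and the model reasons only over numbers that have been verified. In the sketch below, the fund table, tickers and expense ratios are illustrative placeholders; a real system would pull them from a licensed database rather than a hard-coded dictionary:

```python
# Illustrative placeholder data – a real system would query a licensed
# fund database instead of a hard-coded table.
FUND_DATA = {
    "VOO": {"name": "Vanguard S&P 500 ETF", "expense_ratio": 0.0003},
    "ONEQ": {"name": "Fidelity Nasdaq Composite ETF", "expense_ratio": 0.0021},
}

def build_grounded_prompt(tickers: list[str], question: str) -> str:
    """Assemble a prompt in which every fund figure comes from the
    verified table, so the model never has to rely on its memory."""
    facts = []
    for t in tickers:
        if t not in FUND_DATA:
            # Refuse to guess – better no answer than a confident error.
            raise KeyError(f"No verified data for {t}")
        d = FUND_DATA[t]
        facts.append(f"{t}: {d['name']}, expense ratio {d['expense_ratio']:.2%}")
    return (
        "Use ONLY the verified fund data below; do not rely on memory.\n"
        + "\n".join(facts)
        + f"\n\nQuestion: {question}"
    )
```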
“Too often, people view language models as complete solutions to every problem rather than components of intelligent applications,” says Dan Goldstein, senior principal investigator at Microsoft Research, specializing in AI and human-computer interaction. “The financial world’s optimized systems and vast data repositories will not be replaced by AI – they will be accessed by AI.”
4. Best interest
Advisors must act in the best interests of their clients. For example, an advisor cannot recommend a more expensive fund class just because it pays them more. In theory, then, AI should be less prone to conflicts of interest: Unlike people, ChatGPT isn't trying to maximize its income.
But that's just theory – we don't really know how well the AI will behave. One possibility is that it will have problems similar to humans'. For example, one study found that investors are more likely to buy mutual funds with higher marketing costs, even though those costs hurt overall performance through higher fees. Although such funds are likely inferior investments, consumers are swayed by their advertising. AI could fall into the same trap, as funds that spend more on advertising may loom larger in the AI's training data.
Given this uncertainty, it is important that AI architects review the digital advisor's recommendations. This is similar to a metarule, except the focus is not on eliminating bias but on eliminating conflicts of interest.
Fortunately, AI will likely be easier to monitor for conflicts of interest than a human advisor. If the software starts recommending investments with high fees, or mortgages with high interest rates, even though cheaper alternatives exist, the AI tools may even be able to correct themselves automatically, much as a spell checker fixes a typo.
Goldstein believes one key is transparency. "When decisions are made behind closed doors, we can only wonder about some of these problems," he says. "But if the inputs and outputs of every decision are logged, they can be subjected to scrutiny that was never possible before."
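Here is a hedged sketch of what such an automated check might look like, combining the spell-checker-style correction with the logging Goldstein describes. The `Recommendation` type, the fee threshold and the override rule are all assumptions made for illustration:

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)

@dataclass
class Recommendation:
    ticker: str
    category: str          # e.g., "US large-cap index"
    expense_ratio: float   # annual fee as a decimal; 0.0050 means 0.50%

def screen_for_conflicts(rec: Recommendation,
                         alternatives: list[Recommendation],
                         max_excess_fee: float = 0.0025) -> Recommendation:
    """If the recommended fund costs materially more than a comparable
    alternative, substitute the cheaper fund and log the override so
    the decision can be audited later."""
    peers = [a for a in alternatives if a.category == rec.category]
    if not peers:
        return rec
    cheapest = min(peers, key=lambda a: a.expense_ratio)
    if rec.expense_ratio - cheapest.expense_ratio > max_excess_fee:
        logging.info(
            "Override: %s (%.2f%%) replaced by %s (%.2f%%)",
            rec.ticker, rec.expense_ratio * 100,
            cheapest.ticker, cheapest.expense_ratio * 100,
        )
        return cheapest
    return rec
```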
5. Consistency
Good financial advice should be consistent: If the same client brings the same portfolio to different advisors, those advisors should offer similar advice grounded in the same proven principles.
However, research suggests that advisors struggle to provide advice that consistently reflects their clients' goals, circumstances and preferences. A recent study found that after an advisor dies or retires and clients are reassigned to a new, essentially random advisor, those clients tend to end up in funds with different fees and risk profiles. This is not because their investment preferences suddenly changed, but because the new advisor imposed his or her own beliefs on their portfolios. When the new advisor held risky investments or expensive funds in their personal portfolio, they assumed their clients would prefer those, too.
This should be a fixable issue. AI advisors should be able to achieve consistency by confirming that they give the same advice to clients with similar financial needs and preferences. And once the tools clear that bar, the software should be able to scale it – giving the same advice to clients in the same situation, just as Netflix recommends similar content to people with the same viewing history.
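One hedged sketch of how such a consistency check could work: Reduce each client to the attributes that should determine the advice, then flag any identical profiles that received different recommendations. The profile fields and the `get_advice` callable are hypothetical stand-ins for a real client record and AI advisor:

```python
def profile_key(client: dict) -> tuple:
    """Reduce a client to the attributes that should determine the
    advice (the fields here are illustrative assumptions)."""
    return (client["age_band"], client["horizon_years"],
            client["risk_tolerance"])

def check_consistency(clients: list[dict], get_advice) -> list[str]:
    """Flag cases where identical profiles received different advice.
    get_advice is a placeholder for the AI advisor under test."""
    seen: dict[tuple, str] = {}
    problems = []
    for client in clients:
        key = profile_key(client)
        advice = get_advice(client)
        if key in seen and seen[key] != advice:
            problems.append(f"Inconsistent advice for profile {key}")
        seen.setdefault(key, advice)
    return problems
```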
What the future could look like
Many improvements are required before AI can become an effective financial advisor. Nevertheless, it is clear that AI will play an important role in the future of financial advice.
What could this future look like?
One possible model comes from medicine, where intelligent software and doctors have worked together as hybrid teams for years. Physicians increasingly rely on AI tools to improve the quality of their care: The tools can generate a long list of possible diagnoses, which can reduce misdiagnoses and shorten the time to diagnosis.
Of course, a human doctor still needs to filter the expanded list of possible diagnoses generated by ChatGPT and select the most plausible one. This suggests that AI can help us expand our thinking, even if it can't find the answer on its own.
Although there are no studies yet on the quality of hybrid financial advice, I expect the hybrid model will prevail, provided people learn to work effectively with AI. One reason is a behavioral tendency known as algorithm aversion: People tend to reject automated software unless it is nearly perfect. This means most clients will likely prefer AI financial advice monitored by a professional, just as passengers expect a pilot to monitor the autopilot in the cockpit.
A hybrid approach is also likely to significantly improve access to advice. My hope is that human advisors will use AI to serve many more people.
What about the Americans who still can’t afford a human advisor? I believe AI can be used for 24/7 advice, provided we address the critical issues related to accuracy and bias.
And if you’re a financial advisor, I wouldn’t worry about losing your job to ChatGPT. (Autopilots didn’t put pilots out of work.) Instead, I’d focus on how you can use technology to give better advice to more people.
Shlomo Benartzi is a professor emeritus at the UCLA Anderson School of Management and a regular contributor to Journal Reports. He can be reached at [email protected].