In wealth management and financial planning, AI’s enhanced prominence comes at an industry crossroads.
Like many, I remember the heady days at the turn of this century when algorithms emerged to revolutionize industries, including high-frequency trading in investing. These quantitative, black-box PhD creations have since become mainstream in trading. But can you remember the mystery and the reactions? Confusion: it’s just a big regression analysis. Disbelief: It can’t possibly filter through billions of options that fast. Discomfort and denial: Algorithms can’t tell me how to do my job!
But the conversations were very much inside our industry. Fast forward…now the term artificial intelligence (AI) is on the tip of everyone’s lips, largely thanks to the new era of generative AI. Cue more mystery and reaction.
As a solutions marketer at FNZ, the global provider of end-to-end wealth management platforms, I need to keep up to speed in this fast-moving conversation. To do so, I reached out to some AI experts, including FNZ's internal team: Craig Allan, Head of Delivery; Brian Song, Head of Quant and AI, North America; and Wylie Smith, Head of Back Office Product.
And one very prominent external expert: ChatGPT itself! I asked it to write an article answering the question: How will AI impact wealth management?
Important discussions that ChatGPT missed
Risks are missing throughout ChatGPT’s piece, which calls up two important facts about working with generative AI:
You must know a fair amount about a topic to assess the validity of the answers. This is all plausible, but there are gaps. Unless you are an industry expert, you wouldn’t necessarily notice.
You need to consider how to ask exactly the right question. In fact, our FNZ AI experts suggested that they could provide “prompt engineering” for this article so that ChatGPT would produce an answer with more nuance. I declined. That’s the point: most people would not know about or take the time to work on “prompt engineering” to get more sophisticated results. That takes knowledge that most end investors do not have.
Which is why it’s important to look at what a basic ChatGPT answer produces, and point out a few other important areas that I would have liked to see addressed:
Hallucinations
One of the biggest areas of concern with generative AI is “hallucinations”. This is the term used to describe when generative AI provides nonsensical or outright false information. This is an odd curiosity in some situations, but when people’s savings and investments are on the line, hallucinations are frightening and have the potential for dire and long-lasting consequences.
What about user confidence?
We have learned in practice and from financial behavior research that when finances get complicated and/or markets start to fluctuate, humans trust other humans for help. This is why our recent FNZ data analysis showed such a dramatic variation between the behavior of people who had a financial advisor and those who did not.
Fiduciary concerns
The SEC recently sent a letter to the US Congress about using AI ethically for financial advice. Its position is that it has “ample authority under the Investment Advisers Act of 1940 to oversee and monitor the investment adviser industry’s use of technology to provide investment advice to investors.”
Under the Investment Advisers Act, investment advisers are fiduciaries to their clients. So, does this mean that an entity using AI for financial advice must ensure that the recommendations and/or actions will fall in line with fiduciary behavior?
Similar conversations are happening globally. There will certainly be more discussion and regulations coming in this area.
Bad actors
A recent New York Times article also highlighted the potential for bad actors to leverage generative AI to make scamming and phishing more convincing. Perhaps it goes without saying that fraud is possible with any technology, but in this case, AI could enable convincing fraud at a scale heretofore unseen. It’s something to which everyone in our industry should pay attention.
How did ChatGPT do overall?
Let us know what you think! We'd like to hear what else you see missing from this conversation.
Authors
B. Blake Howard, Group Head of Solutions Marketing
Brian Song, Head of Quant and Artificial Intelligence
Craig Allan, Head of Group Operational Efficiency
Wylie Smith, Head of Group Operational Efficiency Business Analysis
Sources
Definitions for AI and algorithms aided by Koridor article from June 2022 and McKinsey’s Economic Potential of Generative AI from June 2023
BCG, Resetting the Course, Global Wealth Report, June 2023
NY Times article AI in Financial Advice, May 2023
SEC IAC Letter on Ethical AI, April 2023
Disclosures
The information contained within this publication is not intended as and shall not be understood or construed as financial advice. The information provided is intended for educational purposes only and you should not construe any such information or other material as legal, tax, investment, regulatory, financial or other advice. Nothing in this publication constitutes investment advice, performance data or any recommendation that any security, portfolio of securities, investment product, transaction or investment strategy is suitable for any specific person. FNZ Group and any of its affiliates are not financial advisors and strongly advise you to seek the services of qualified, competent professionals prior to engaging in any investment.
All opinions expressed by the author within this publication are solely the author’s opinion and do not reflect the opinions of FNZ Group, its parent company or its affiliates. You should not treat any opinion expressed by the author as a specific inducement to make any specific investment or follow an investment strategy.