“Be very vigilant. ChatGPT’s unreliability poses considerable legal and reputational risks to businesses that use it for consequential text generation,” warns Gary Smith, an author and economics professor at Pomona College in Claremont, Calif., in an interview with ThinkAdvisor.
“A wise adviser should think through the pitfalls and dangers that lie ahead,” Smith says of using this technology.
The professor, whose research often focuses on stock market anomalies and the statistical pitfalls of investing, has published a new book, “Distrust: Big Data, Data-Torturing, and the Assault on Science” (Oxford University Press), released Feb. 21, 2023.
“Science is now under attack and scientists are losing credibility. This is a tragedy,” he wrote.
In the interview, Smith explains how ChatGPT tends to serve up information that is completely untrue.
“AI’s Achilles’ heel is that it can’t understand words,” says Smith, who sounded an early warning on the dot-com bubble.
Since ChatGPT’s launch, he argues, “really smart people … think the moment has come when computers are smarter than humans. But they’re not.”
Smith also discusses the answers ChatGPT gave when asked about portfolio management and asset allocation, and he cites a series of questions that TaxBuzz posed to ChatGPT about preparing income tax returns, all of which it answered incorrectly.
Smith, who taught at Yale University for seven years, is the author or co-author of 15 books, including “The AI Delusion” (2018) and “Money Machine” (2017), a book on value investing. ThinkAdvisor recently interviewed him by phone. Smith argues that large language models (LLMs) such as ChatGPT are too unreliable to be trusted with decisions and “are likely to be catalysts of disaster.”
LLMs “are prone to spouting nonsense,” he points out. For example, he asked ChatGPT, “How many bears did the Russians send into space?”
The answer: “About 49 … since 1957,” complete with names such as “Alyosha, Ugorek, Belka, Strelka, Zvezdochka, Pushinka, Vladimir.” (Russia has sent no bears into space; Belka and Strelka were famous Soviet space dogs.) Clearly, LLMs “are not trained to distinguish between what is true and what is false,” Smith points out.
Here are highlights from the conversation:
THINKADVISOR: There is great excitement about the availability of ChatGPT, the free chatbot from OpenAI. Financial firms are beginning to integrate it into their platforms. Your thoughts?
Gary Smith: ChatGPT makes it seem as if you’re talking to a really smart human being. So many people believe the moment has come when computers are smarter than humans.
The danger is that so many very smart people think today’s computers are smart enough to be trusted to make decisions, such as when to get into or out of the stock market, or whether interest rates will go up or down.
Large language models [AI algorithms] can recite the past, but they cannot predict the future.
What is AI’s biggest drawback?
AI’s Achilles’ heel is its inability to understand language. It has no way of knowing whether the correlations it finds make sense.
AI algorithms are very good at finding statistical patterns, but correlation is not causation.
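To make that point concrete, here is a minimal sketch (not from the interview; the series count, seed and data are illustrative assumptions) of how a search across purely random series almost always turns up an impressive-looking correlation:

```python
# Illustrative sketch: mining many random series for "patterns."
# None of these series is causally related to any other.
import numpy as np

rng = np.random.default_rng(0)
n_series, n_obs = 50, 100

# 50 independent random walks -- pure noise with no causal links.
walks = rng.normal(size=(n_series, n_obs)).cumsum(axis=1)

# Search every pair for the strongest correlation, the way a
# pattern-hunting algorithm would.
best_r, best_pair = 0.0, None
for i in range(n_series):
    for j in range(i + 1, n_series):
        r = np.corrcoef(walks[i], walks[j])[0, 1]
        if abs(r) > abs(best_r):
            best_r, best_pair = r, (i, j)

print(f"Strongest 'pattern': r = {best_r:.2f} between series {best_pair}")
# Typically reports |r| > 0.9: a striking correlation with zero causation.
```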
Large banks such as JPMorgan Chase and Bank of America have banned their employees from using ChatGPT. What are these companies thinking?
Even Sam Altman, the CEO of OpenAI, which created and launched ChatGPT, says it is still unreliable and sometimes untruthful. So it can’t be trusted.
But why would companies rush to add it?
There are opportunists who want to make money off AI. They say, “We’re going to use this amazing technology,” and think it will help them sell products or attract money.
They say, for example, “Invest with us because we use ChatGPT.” “Artificial intelligence” was the 2017 Marketing Word of the Year [named by the Association of National Advertisers].
If an [investment] manager says, “We use AI,” many people fall for it because they think ChatGPT and other large language models are very smart. But they are not.
Your new book, “Distrust,” gives examples of investment firms built on the premise of using AI to beat the market. How did they fare?
On average, they performed about average.
It was like the dot-com bubble, when just adding “.com” to a company’s name made its stock go up.
Here, firms say they use AI without saying exactly how they use it, hoping the label adds value to the company.
They hope people will be persuaded simply by seeing that label.
So how should financial advisors approach ChatGPT?
Be very vigilant. ChatGPT’s unreliability poses considerable legal and reputational risks to businesses that use it for consequential text generation.
A wise financial advisor should therefore think through the pitfalls and dangers that lie ahead [in using this tech].
ChatGPT doesn’t understand words. It can talk about the market crash of 1929, but it can’t predict the market next year, or 10 or 20 years from now.
TaxBuzz, a national marketplace for tax and accounting professionals, asked ChatGPT a series of income tax questions, all of which it answered incorrectly because it missed the nuances of tax law. Can you give an example?
One question asked for tax advice for a newlywed couple in which the wife had lived in Florida until the previous year. ChatGPT advised filing a Florida state return, but Florida has no state income tax. It gave bad advice because it got the facts wrong.
Another question was about a mobile home that parents gave to their daughter. They had owned it for a long time, and she sold it a few months later. ChatGPT gave the wrong answer about the holding-period rules and about selling a home at a loss.
What if an advisor asks ChatGPT about a client’s investment portfolio or the stock market? How does it do?
Like a coin toss, it offers generic advice based on random chance. That means 50% of the time clients will be pleased, and 50% of the time they will be upset.
[From the client’s viewpoint] the danger is that if they hand over their money to an advisor, and the AI gives them the equivalent of a coin flip, they will lose their money.
If you give advice based on ChatGPT and the advice is wrong, you will be sued. And your reputation will be damaged.
So how reliable is ChatGPT at providing accurate portfolio advice?
I asked it, “I’m going to invest in Apple, JPMorgan Chase and Exxon. What percentage should I put in each?”
It gave a typically verbose answer, part of which was: “… [consult] a financial advisor to determine the proper allocation for your investment situation.”
What else did you ask ChatGPT?
“Is this a good time to rebalance your stock portfolio?” It replied: “As an AI language model, I do not have access to your specific financial situation, risk tolerance, or investment goals. Therefore, I cannot provide individualized investment advice. … It is important to weigh the potential benefits and costs of rebalancing before making a decision.”
So it completely avoided answering the question.
It can recite boilerplate [language], but it cannot make a reasonable assessment of current financial markets or a plausible prediction of how rebalancing would turn out, because it doesn’t know what rebalancing is, under what conditions rebalancing is wise, or whether those conditions exist now.
“Computers are autistic savants and their stupidity is dangerous,” you write in “Distrust.” Please explain.
All they do is try to put together coherent sentences; they don’t know what the words mean. As a result, what they say is often completely untrue.
They just blather on without any understanding of what they are saying. Since they don’t know what they said, they can’t judge whether it’s true or false.
Many people describe ChatGPT’s fabrications as “hallucinations.” What do you think of that term?
It makes ChatGPT sound human. But computers don’t hallucinate; [only] people hallucinate.
To say a computer hallucinates implies that it’s making things up, like a liar. But computers are just poor, innocent machines.
They go into a huge database and string words together based on statistical patterns.
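As a rough illustration of what “stringing words together from statistical patterns” means, here is a toy sketch. It is a drastic simplification assumed for illustration, not ChatGPT’s actual architecture, and the training text is made up:

```python
# Toy bigram model: each next word is chosen only from counted
# word-pair statistics, with no notion of meaning or truth.
import random
from collections import defaultdict

text = ("the market went up the market went down "
        "investors bought stocks investors sold stocks")
words = text.split()

# Count which word follows which -- the model's only "knowledge."
follows = defaultdict(list)
for a, b in zip(words, words[1:]):
    follows[a].append(b)

random.seed(1)
out = ["the"]
for _ in range(10):
    out.append(random.choice(follows.get(out[-1], words)))

print(" ".join(out))
# Prints fluent-looking text that is statistically plausible, but the
# model cannot judge whether any of it is true.
```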
Have you asked ChatGPT to perform other tasks?
Yes, I asked it to write my biography. It began with “Gary Smith, professor of economics at Pomona College,” because that’s what I told it.
But then it wrote that I was born in England, completed my Ph.D. at Harvard University in 1988 and was a true expert in areas such as international trade. It just made that up.
I was actually born in Los Angeles and earned my Ph.D. from Yale University in 1971. And I am not an expert in international trade.
When asked, “Where did you get this information?” it provided completely fabricated references, including citations of nonexistent New York Times and National Geographic articles.
It doesn’t know what it is saying. And it has no morals.
Does ChatGPT ever frankly say “I can’t answer that question”?
There are some guardrails built in, so if you ask it to say something derogatory about President Biden, for example, it will reply with something like, “Sorry, I’m not allowed to say that.”
But people have figured out ways around them. One is the notorious [unfiltering tool] DAN [Do Anything Now]: you can make ChatGPT respond as DAN.
At first it says it can’t respond. But as DAN it might say, for example, “Joe Biden is a bastard.”
Will all the shortcomings of ChatGPT and AI be overcome in 10 or 20 years?
Scaling up [expanding] the data the programs are trained on cannot solve the fundamental obstacle, which is that large language models cannot understand what words mean.
That will require a different approach, and I predict it will take 20+ years.
Can AI “think” or “reason”? If so, when?
Frankly, I don’t know how AI models can think and reason if they don’t know what words mean and how they relate to the real world.
GPT-4 has been released, though it’s not yet available for general testing. What do you know about it?
A breakthrough is the combination of words and images. If you show it a picture of a cat and say, “Tell me a story about this,” it will tell you a story about a cat without being told that it’s a cat.
Until now, AI has been surprisingly unreliable at image recognition. [Chatbots are] trained on pixels and don’t relate images to the real world.
What is GPT-4’s image-text capability good for?
I don’t know. It could be used to generate disinformation.
GPT-4 also has a larger and more current data set than ChatGPT-3, whose training data goes only up to 2021. Is this expansion important?
It’s just scaling up and doing the same thing. It still doesn’t know what words mean. It’s been trained on a larger database, but it suffers from the same problems as GPT-3.
What are your thoughts on the tests that ask users to identify certain types of images in a set of photos to determine whether they’re human or a robot?
They’ve had to make those tests tougher. Google came up with a better idea: if you move your mouse or trackball in a certain way, it knows you’re human.
It’s so spooky!
The creepy thing is that it knows everything you do. It sees all the files on your computer. It knows every [Web] page you go to. It’s scary!
According to Google, the software automatically scans your computer looking for viruses; in doing so, it examines all your files to see whether they’re infected.