ChatGPT embedded itself in American culture and the lexicon as quickly as the latest TikTok trend or Taylor Swift craze, but how it will fit into everyday life over the long term remains a mystery.
Some believe it will revolutionize the way we live, learn and work. But many, including the Biden administration, believe that before we can know what the technology can do, we need to fully understand what it cannot do.
For example, one lawyer recently learned that legal research is probably not ChatGPT’s forte when the cases cited in his AI-generated brief turned out to be entirely fabricated, a discovery he then had to explain to an infuriated judge. While that episode harmed only the client’s lawsuit and the attorney’s reputation, the same kind of false or misleading information can be fatal in healthcare. Consider what happens as AI moves from its traditional, non-clinical administrative uses into more clinical settings.
Older forms of AI are already in use, reading certain scans and converting speech to text, but a recent study went further. Researchers posed 195 common patient questions to both ChatGPT and physicians to test the chatbot’s ability to respond empathetically, then asked a panel of physicians to blindly rate which answer to each question was better. The evaluators preferred the ChatGPT responses in 79 percent of cases, rating them higher in both quality and empathy; the ChatGPT responses were also about four times longer than the physicians’ answers.
But how practical is this in a clinical setting? The same problems that plagued our ill-fated lawyer arise here: ChatGPT continues to make basic mistakes in math, and its answers often contain false or misleading facts.
Still, for healthcare providers the technology offers real help, especially in the face of provider burnout and the rise in patient messaging driven by increased telemedicine use in recent years. Providers could benefit from simply running their patients’ questions through ChatGPT and editing the drafts as needed, potentially freeing them to spend more time on face-to-face patient care. Other proposed uses include summarizing a patient’s medical history and drafting clinical care notes and discharge orders. Many, however, wonder how ChatGPT will handle issues such as patient confidentiality, bias, accessibility and protected health information.
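To make that draft-and-edit workflow concrete, here is a minimal sketch assuming the OpenAI Python SDK; the model name, prompts and `draft_patient_reply` function are illustrative assumptions rather than a vetted clinical integration, and, per the confidentiality concerns above, no protected health information should be sent to an external service without appropriate safeguards.

```python
# Minimal sketch of a draft-and-edit workflow: the model produces a draft reply,
# and a clinician reviews and edits it before anything reaches the patient.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY set
# in the environment; model name and prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def draft_patient_reply(question: str) -> str:
    """Return a draft response for a clinician to review before sending.

    NOTE: In any real deployment, protected health information must not be
    sent to an external service without a compliant agreement and
    de-identification; this sketch skips those safeguards for brevity.
    """
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder; choose per institutional policy
        messages=[
            {
                "role": "system",
                "content": (
                    "Draft a courteous, empathetic reply to a patient's "
                    "message. A licensed clinician will review and edit the "
                    "draft before it is sent; do not offer a diagnosis."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    draft = draft_patient_reply(
        "I've had a mild headache for two days. Should I be worried?"
    )
    print(draft)  # the clinician edits this draft; it is never sent automatically
```

The key design choice is that the model output is only ever a draft: a human provider stays in the loop, which is what the study above measured and what current accuracy limitations demand.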
And what is the legislative and regulatory landscape surrounding this emerging technology? The federal government recently solicited public input on AI risk mitigation and hosted listening sessions with workers who use AI in their professions, including healthcare workers.
Senate Majority Leader Chuck Schumer has met with a bipartisan group of senators to craft a comprehensive bill to regulate AI, shortly after several congressional committee hearings that included testimony from the CEO of OpenAI, the company behind ChatGPT. While it is clear that legislative action is underway, specific details about what guardrails will be placed around this technology are still lacking.
When asked specifically about its role in the healthcare industry, ChatGPT described itself as “a groundbreaking development poised to reshape the healthcare landscape.” Humility may not be among ChatGPT’s hallmarks, but few would argue that the technology will not take hold, at least in some capacity.
So for now, the healthcare industry is exploring opportunities, awaiting final answers on when, how and where.