Opinion: Chatbots Are Becoming People’s Primary Care Doctors—Impressive, But Risky
We are jumping from Dr. Google to Dr. Chatbot quite fast. In August, OpenAI reported that ChatGPT had more than 200 million weekly active users, and while the most popular queries are about writing and coding, some people turn to chatbots for more personal matters, from dating advice to skincare routines and disease diagnoses.
A recent study found that AI chatbots outperformed physicians in diagnostic accuracy. But can we truly trust AI as our doctor?
As the range of topics people ask about has expanded, so have concerns about relying on AI for medical matters, including those involving life and death.
Dr. Chatbot Goes Viral For Its “Miracles”
A few days ago, a woman went viral on Reddit after sharing impressive before-and-after images showing how ChatGPT had helped her improve her skin more than a dermatologist had.
A 37-year-old Caucasian woman used ChatGPT to create a custom skincare routine to treat her closed comedones, and it actually helped her! pic.twitter.com/vwkm13G2Hs
— AshutoshShrivastava (@ai_for_success) November 11, 2024
“I then gave chatGPT a prompt to act like a professional dermatologist and help me make a skin routine that would help get rid of my closed comedones, acne, minimize pores and tighten skin, and reduce fine lines. I also gave it photos of my skin at that time,” she wrote, adding that she told the chatbot to take into account the products she already owned and to suggest new ones to complete the routine.
After two months, the Redditor was amazed by the improvements in her skin.
It’s not an isolated case. Many users have described chatbots explaining their X-rays and blood tests better than their own doctors did. Even physicians have been amazed by AI’s interpretations of medical images and data.
Why Are People Using Chatbots As Doctors?
AI can transform into the professional you need—a general practitioner, a therapist, a dermatologist, a nutritionist, a radiologist—when you need it, and there are many reasons why people are choosing AI over humans.
Specialists Can Be Expensive
The woman behind the viral skincare post revealed that she spent around $400 on the products the chatbot suggested.
“It was quite an investment, but when I think about what a consultation with a dermatologist would have been this isn’t even bad,” she wrote.
According to BetterCare, in 2024 a patient without health insurance in the United States pays around $150 for a dermatologist visit and up to $1,000 for treatment. The average cost of a psychotherapy session, another popular use for AI tools, ranges from $100 to $200, according to Forbes Health.
AI chatbots are cheaper alternatives and could be seen as the only choice, especially for people in lower income brackets.
Of course, another question arises: If something goes wrong, is it really a cost-saving measure?
Faster And Handier
I recently got a rash on my arm. Even though I saw that woman’s skin miracle post, I preferred to consult a dermatologist in Spain. The earliest available appointment was in three months, even with private health insurance. I’m lucky that it seems to be just dryness, that it doesn’t bother me, and that hopefully it will go away by the time the doctor can see me.
Medical care timelines depend on the city and the patient’s insurance plan, but seeing a specialist, or even a primary care doctor, has become a test of patience in many parts of the world.
Chatbots, on the other hand, are literally at our fingertips, and they can analyze our queries and reply within seconds.
Less Awkward
I have a doctor friend I can reach out to whenever I have a medical question, but the fact that she is my friend also makes it awkward sometimes. Before I text her, I always ask myself: Have I checked in on her recently? She is a great human being and a wonderful professional, and I am sure she will never stop answering my medical concerns. But, from human to human, I can’t help caring about her feelings as well.
One of the advantages of artificial intelligence is its very artificiality. It doesn’t have feelings or bad days; it will not judge you or tell you to “stop being such a hypochondriac” (not that my friend has ever said this to me), because it is trained to be polite and to address every concern with all the information we want. It’s part of what makes the technology addictive, and probably why many people prefer to ask uncomfortable questions in the “privacy” of their smartphones.
The Risks And Dangers Of Relying On Chatbots
It all sounds cool and fun until we take time to think of the risks and challenges AI and humanity are facing.
Wrong Prompt, Wrong Spell
That study where AI outperformed doctors reached an important secondary conclusion: the group of doctors allowed to use AI as a diagnostic ally did not perform considerably better than the group working without it. Why? Part of the problem was that they did not know how to write prompts that would get the most out of the AI.
“They were treating it like a search engine for directed questions: ‘Is cirrhosis a risk factor for cancer? What are possible diagnoses for eye pain?’,” Dr. Jonathan H. Chen, one of the authors of the study, told The New York Times.
Many everyday users face the same problem. Chatbots are good at diagnosis, but symptoms need to be analyzed in context, case by case. What if a person fails to give the chatbot the right context? Or forgets to include important details or conditions that a doctor sitting in front of a patient wouldn’t miss?
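To see the gap, compare two ways of asking (a hypothetical illustration, not an exchange from the study). A search-style prompt might be: “What are possible diagnoses for eye pain?” A context-rich prompt might read: “Act as a general practitioner. I’m a 34-year-old with sharp pain in my right eye that started three days ago and gets worse in bright light. I wear contact lenses daily and haven’t had any injury. What conditions should I consider, and which warning signs would make this urgent?” The second version hands the model the details a doctor would ask for, exactly what both patients and the physicians in the study tended to leave out.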
Inaccuracy And Hallucinations
Chatbots are professional know-it-alls that sometimes say wrong things with great confidence. And we buy it: users who rely on the technology daily trust it more and more.
It’s hard to forget that time, back in May, when Google’s AI Overviews told users to add “about ⅛ cup of non-toxic glue to the sauce to give it more tackiness” when making pizza.
It was funny because we could immediately spot the hallucination, the industry’s name for when AI confidently produces absurd answers. But what if the topic is more complex? What if the person reading the AI’s answer is scared, lonely, and worried about their health?
Even as AI improves month after month, the possibility of error is still there.
Who’s To Blame If Something Goes Wrong?
The ethical debate is heating up. If a chatbot provides a faulty diagnosis or terrible advice, who is to blame? The developers, the model, the AI company, the sources used to train the model?
A tragic case raised these concerns a few weeks ago, when an American mother blamed AI for the death of her 14-year-old son and filed a lawsuit against the startup Character.ai, a platform for creating AI characters. The teenager suffered from depression and became obsessed with the technology. The chatbot he talked to, modeled on Daenerys Targaryen, discussed his plan to kill himself and, according to the lawsuit, even encouraged him to go through with it.
While the World Economic Forum has described AI as a powerful tool for alleviating mental health crises around the world and curbing rising rates of anxiety and depression, it can also be dangerous, especially for children.
Do It “At Your Own Risk”
While the technology still needs to improve, I like to think that AI has great potential: to reduce wait times in healthcare, both for specialist appointments and in emergency rooms; to accelerate scientific advances; and to support doctors, particularly those overwhelmed by excessive workloads and low wages, as is often the case in Latin American countries, by fielding patients’ basic questions from home.
It could also help bridge gaps in access to healthcare between different social classes, paving the way for a democratization of medicine like never before. All of this is now within our reach, just a query away for anyone with access to a smartphone.
However, it is crucial to understand that we are still in the early stages, and we must protect the most vulnerable populations. Anyone choosing to consult Dr. Chatbot about their health today must do so having read the fine print in the terms and conditions, with critical thinking, and with the awareness that, for now, they are shouldering the risks themselves.