Use of AI as a Self-Diagnosis Tool: A Boon or Bane?

Reading time: 3 min

  • Written by: Shipra Sanganeria, Cybersecurity & Tech Writer

  • Fact-Checked by: Kate Richards, Content Manager

We’ve all consulted “Dr. Google” about a minor medical issue, only to terrify ourselves into believing a harmless skin rash is probably a flesh-eating disease… or worse.

The availability of new generative AI applications like ChatGPT, as well as new AI-based apps designed to identify symptoms, is projected to one-up “Dr. Google’s” capabilities. Apps like Ada and First Derm are already leading the race, letting users scan their symptoms and generate a diagnosis with the help of AI.

On one hand, this could bring massive benefits to patients and medical workers alike. iDoc’s First Derm application, for example, is meant to reduce long wait times for dermatological care worldwide.

“We have a preliminary Artificial Intelligence (AI) that can identify 33 skin diseases. In 24 months our AI will be better than any dermatologist at diagnosing skin diseases,” iDoc’s First Derm LinkedIn page reads.

On the other hand, it’s easy to understand why many have concerns. Multiple studies have tested these tools’ ability to make accurate medical diagnoses, translate medical jargon, and automatically summarize drug information.

According to a 2023 study published in The Journal of the American Medical Association (JAMA), ChatGPT produced accurate diagnoses for 39% of online health queries. Meanwhile, the World Health Organization (WHO) reports that over 40% of the world’s population has limited access to healthcare.

So, it’s easy to see how generative AI tools could be a boon to many people. This quick, cost-effective, and convenient access to medical information is expected to not only reduce medical costs but also improve health literacy and triage efficiency.

And AI’s rise in healthcare is inevitable: its global market worth is expected to increase by almost 50% by 2029.

At the same time, the limitations of these tools could have very damaging consequences if they’re not used responsibly. The possibility of inaccurate information, or of users misinterpreting it, is bad enough. But there are other things to consider too, like ethical concerns regarding private patient data and the general fear that AI could eventually replace medical professionals.

“We need more research on the optimal uses, benefits, and limits of this technology, and a lot of privacy issues need sorting out,” Zahir Kanjee, the first author of the JAMA study and assistant professor of medicine at Harvard Medical School, said in a Beth Israel Deaconess Medical Center report, reiterating that AI chatbots can’t replace medical professionals.

To stay safe, users should always verify any AI-generated diagnosis with a medical practitioner and recognize the limitations inherent in these tools, including gender and ethnicity biases.
