Asking a chatbot for health advice? After the recent launch of OpenAI's ChatGPT Health service, there are several important factors to consider.

Irina Orlonskaya | Exclusive
The author of the article is K-News. Permission from the K-News editorial team is required for copying or partial use of the material.

With the launch of ChatGPT Health by OpenAI, many questions have arisen about the appropriateness of turning to AI chatbots for medical recommendations.

As the number of users seeking advice from chatbots increases, it was only a matter of time before companies began offering solutions dedicated to medical issues.

OpenAI has released ChatGPT Health — an enhanced version of its chatbot that, according to the developers, is capable of analyzing users' medical data, including health records and data from wearable devices, to provide health information.

Access to the program is currently available through a waiting list. Anthropic, a competitor, offers similar features in its Claude chatbot to a select group of users.

Both companies emphasize that their technologies are not intended to replace professional medical care and should not be used for diagnosis. They can be useful for summarizing and clarifying complex medical test results, preparing for a doctor's visit, or spotting significant health trends that may be buried in medical records and statistics.

But how reliable and safe are they in analyzing health data? And should we rely on them?

Here are some important points to consider before discussing your health with a chatbot:

Chatbots can offer more personalized information than traditional internet searches

Some medical professionals and researchers who have tested ChatGPT Health and similar platforms believe that this is a step forward compared to existing methods.

Although AI systems are not perfect and can sometimes make mistakes or provide inadequate advice, the information they provide is often more tailored to the specific situation than what can be found through Google.

As Dr. Robert Wachter from the University of California, San Francisco noted: “Often the alternative is a lack of information or guessing. Therefore, if these tools are used responsibly, they can provide really useful insights.”

In countries like the UK and the US, where waiting times for doctor appointments can stretch for weeks and time in emergency departments can be lengthy, chatbots can help reduce anxiety and save time.

Moreover, they can indicate the need for urgent medical attention in the case of serious symptoms.

One of the advantages of the new chatbots is that they can tailor their responses based on the user's medical history, including medications taken, age, and previous doctor visits.

Even if you do not provide the AI with access to your medical data, Wachter and other experts recommend sharing as much information as possible to make the responses more accurate.

Do not turn to AI for alarming symptoms

Experts, including Wachter, emphasize that in some situations it is best to seek help from a doctor rather than a chatbot. Symptoms such as shortness of breath, chest pain, or severe headaches require immediate medical attention.

Even in less critical situations, patients and doctors should be cautious with AI programs, says Dr. Lloyd Minor from Stanford University.

“When making significant medical decisions or even for less important health issues, one should not rely solely on the conclusions provided by large language models,” adds Minor, dean of Stanford's medical school.

Even for conditions like polycystic ovary syndrome (PCOS), whose symptoms can vary widely, it is better to consult a doctor, since the precise diagnosis will shape treatment choices.

Be mindful of privacy before uploading medical data

Many of the benefits offered by AI bots depend on how willingly users share personal medical information. It is important to remember that data transmitted to AI developers is typically not protected by federal privacy laws in the US that govern the handling of sensitive medical information.

HIPAA, the US law that protects medical information, imposes penalties on medical institutions for disclosing it. It does not, however, apply to the companies that build chatbots.

“When a person uploads their medical record to a large language model, it is not the same as handing it over to a new doctor,” explains Minor. “Consumers should be aware that the privacy standards here are completely different.”

OpenAI and Anthropic claim that user data is stored separately and protected by additional security measures. The companies do not use medical data to train their models. Users must separately consent to the transfer of such information and can withdraw their consent at any time.

Despite the growing interest in AI, independent research on such technologies is still in its early stages. Initial results show that programs like ChatGPT perform well on medical exams but do not always interact effectively with live people.

A University of Oxford study involving 1,300 participants found that people who used AI chatbots to research hypothetical medical conditions made no better decisions than those relying on traditional online searches or their own judgment.

When the chatbots were given written medical scenarios directly, they correctly identified the underlying condition 95 percent of the time.

“There were no problems with that,” says lead author Adam Mahdi from the University of Oxford. “All the difficulties arose during communication with real participants.”

Mahdi and his team identified numerous communication issues. People often did not provide chatbots with enough information for accurate diagnosis. In turn, AI systems frequently gave mixed responses, making it difficult for users to distinguish between correct and incorrect information.

The study, conducted in 2024, did not test the latest versions of the chatbots, including ChatGPT Health.

A second opinion from AI may be helpful

The ability of chatbots to ask clarifying questions and extract important details from users is an area where, according to Wachter, there is still room for improvement.

“I believe they will become really effective when their approach to communicating with patients is more 'medical,' and the dialogue resembles a real conversation,” says Wachter.

Nevertheless, one way to increase confidence in the information received is to use multiple chatbots, just as patients do when seeking a second opinion from another doctor.

“Sometimes I enter the same data into ChatGPT and Gemini,” shares Wachter, referring to the AI tool from Google. “And when their responses match, I feel more confident that it is the correct answer.”

The article "Turning to a chatbot for medical advice: is it worth it?" first appeared in K-News.