Doctors caution as Saudis turn to AI for medical answers first

Increasingly, people are turning to AI tools before visiting clinics, using them for everything from symptom checking to diet planning.
Updated 01 May 2026 15:55

  • Experts warn of risks from misinterpretation, anxiety and loss of clinical context

RIYADH, ALKHOBAR: Artificial intelligence is moving beyond hospitals and research settings into everyday health decisions across Saudi Arabia. 

People are increasingly consulting AI tools before visiting clinics, using them for everything from symptom checking to diet planning. What was once a simple online search has evolved into ongoing conversations with chatbots that generate tailored responses within seconds.

“In many cases, they come in treating the physician almost as a second opinion after what they’ve already seen through AI. It used to be doctor first, then search. Now for many, it’s AI first, then doctor,” family physician Dr. Mohannad Al-Qarni told Arab News.

Patients are now arriving with AI-generated interpretations of symptoms, medications, and lifestyle concerns, reshaping how consultations unfold.

“You sometimes hear very direct questions shaped by AI use, like: ‘Do I need a colonoscopy in my case?’ Many patients use it as a starting point and come in with a baseline understanding, looking for confirmation. The challenge is making sure that information is accurate, contextual, and doesn’t create unnecessary anxiety,” he said.

Physicians in Saudi Arabia are increasingly encountering patients who consult AI tools before visiting clinics. (Creative Commons)

For some users, AI has become a daily health tool. Reem Al-Harbi, 27, who lives with polycystic ovary syndrome (PCOS) and irritable bowel syndrome (IBS), began using ChatGPT to manage persistent digestive issues.

“ChatGPT suggested starting with laxatives as an initial step and put me on a low-FODMAP diet for eight weeks, keeping to easily digestible foods to figure out which ones trigger my IBS. Then I gradually went back to my normal diet, and it worked.”

“It turned out onions and gluten were the cause. Now, even when I get symptoms, I know what triggered them and how to adjust my diet so my stomach goes back to normal,” she said.

She now relies on it for structured meal planning and tracking dietary impact on blood sugar levels.

Others use it more cautiously. Hayat Hasan, 32, said AI can be helpful for recipes and emotional support but should never replace medical professionals.

DID YOU KNOW?

• Some patients arrive at clinics with AI-generated questions.

• Families are using AI to interpret unclear or conflicting medical diagnoses.

• Some users upload lab reports to AI tools to help interpret results.

• Mental health experts warn AI can sound convincing even when it misinterprets symptoms.

“ChatGPT is customized around us, so it’s unhealthy to just go with its advice; it’s not an experienced professional, and (we) shouldn’t trust it 100 percent in the first place. It’s a computer that doesn’t know your medical history, and it can’t be sure that the actions or dosages it suggests are 100 percent proper,” she said.

Still, she noted its value in managing anxiety. “Instead of me basing my actions on panic-based feelings, it helps me talk through my emotions until I’m calmer,” she added.

AI is also being used within families for early interpretation of symptoms. “I tried it with my mom, who is 45 and has pain and a burning sensation in her stomach. She went to several doctors, and each one gave her a different diagnosis,” said 25-year-old Noura Al-Harbi.

“I explained what she was feeling, and told (the chatbot) to ask me questions. It went through the symptoms in detail until we got close to the most likely diagnosis.” The chatbot suggested acid reflux and gastritis, later confirmed by a doctor.

Another common use is decoding medical reports. Sarah Al-Zahrani, 29, used AI while her mother was hospitalized with pneumonia.

AI tools are being used by patients to interpret lab results before discussing them with medical professionals. (File)

“I would scan those documents, take copies, and upload them to ChatGPT to get interpretations of the results. This helped me formulate questions to discuss with the doctor, especially when something seemed off. Interestingly, the doctor even asked me if I was a doctor or a fellow colleague!” she said.

However, doctors warn that interpretation without clinical context can be misleading. “AI can over-interpret minor abnormalities or miss the clinical context, so patients end up worried about results that are not actually significant,” Al-Qarni said.

The risks are more serious in mental health. Psychiatrist Dr. Heba AlSaad said the main concern is not just incorrect answers, but confident-sounding misinformation.

“The biggest concern is misdiagnosis, because AI may confuse normal stress reactions with psychiatric disorders, or miss serious conditions like bipolar disorder, psychosis, substance use, or suicidality,” she said.

“Another major risk is a wrong treatment plan, where someone delays proper care, stops medication, or follows generic advice that is unsafe for their condition.”

She added that AI cannot interpret non-verbal cues or assess risk levels — both essential in psychiatric care.

Patterns of overuse are already emerging. “We sometimes see patients using AI reassurance repetitively, which can worsen anxiety or obsessive thinking rather than relieve it,” she said.

“Others may receive oversimplified advice such as ‘just manage stress,’ leading to delays in seeking help for depression, ADHD, trauma-related disorders, or psychosis.”

Still, she acknowledges its controlled use in supportive roles. “AI can be useful as a supportive tool for psychoeducation, mood tracking, journaling prompts, reminders for coping skills, and helping patients organize questions before appointments,” she said.

But she is clear about limits. “AI should not replace clinical diagnosis, suicide risk assessment, psychotherapy, medication decisions, or management of complex psychiatric cases.”

AI-powered clinical systems are being developed to support doctors with real-time, evidence-based medical insights. (File)

The popularity of AI tools also reflects deeper systemic gaps.

“Many people turn to AI because it is free, immediate, private, and available anytime. This highlights that mental health systems still need better accessibility, affordability, and public trust,” she said.

Doctors are already adapting. “Patients are already bringing AI into the consultation process, and many physicians are also starting to use it as a support tool in their workflow,” Al-Qarni said.

“The issue is not that AI is always wrong — it’s that it can sound convincing even when it lacks context,” he continued.

Users also recognize limitations. “Wrong macro estimations can happen, so it’s good to have some knowledge of nutrition beforehand and read the macros on the food you consume,” said Mazyar Ali Nicolas Javadi, 26.

“I would never use it for mental health or other topics that are more ‘important’ and where a wrong answer could have dangerous consequences.”

Beyond consumer use, AI is now entering clinical infrastructure. One example is HakeemDx.

The Saudi-based platform, founded by Bilal Adi, is designed as a clinical decision support system rather than a public chatbot.

“HakeemDx is designed as a clinical decision support engine that combines advanced language models with a system that retrieves trusted medical guidelines, research, and drug data before generating answers,” Adi said.

Integrated into hospital systems, it processes patient data and produces structured, evidence-based outputs for clinicians.

Adi said the platform was built to earn clinicians’ trust over time: “We addressed this by positioning HakeemDx as an assistant, not a replacement, with transparent, evidence-backed answers … allowing clinicians to validate it in real settings and build confidence over time.”