Larry Magid: Dr. AI? Not quite but still helpful

Mercury News

Let me begin by saying that you should never make an important decision based solely on information you get from an AI chatbot like ChatGPT, Google Gemini or Meta AI. They can “hallucinate,” take things out of context and fail to understand a wide range of variables. That’s especially true when it comes to medical advice. I would never take a pill or submit to a medical procedure purely on the advice of an AI chatbot.


Having said that, I do use AI tools to help with many of my decisions, including medical ones. But if it’s an important decision, I always do lateral research, either by visiting the website of a reputable expert organization or by consulting a knowledgeable professional.

Where it’s helped me

I’ll give you some examples. Last summer, while I was in Eastern Europe, my thumb started showing signs of infection. I first decided to wait until I got home to seek treatment, but it became very painful in the middle of the night. I didn’t have easy access to medical care, so I turned to ChatGPT for advice. It told me that one of the remedies is Keflex, a common antibiotic. As it turned out, I had that and a couple of other antibiotics with me, thanks to a prescription from a physician who had suggested I carry them in case I ever needed them while overseas. I didn’t take the pill right away but instead consulted websites from the Mayo Clinic, the UK’s National Health Service and other highly reputable medical sources, which confirmed the advice. I took the Keflex but also messaged my doctor, who got back to me the next day and confirmed that this was a good choice. Later, I met with a dermatologist who prescribed an additional course, just to be sure I would fully recover.

A few months ago, a doctor prescribed a drug to help clear up a sinus inflammation without explaining the side effects. Even with a doctor’s prescription, I don’t take any drug – not even an over-the-counter medication – without doing my own research. I consulted a generative AI tool, which told me that the drug was mostly safe but highlighted potentially dangerous side effects. Based on that, I consulted another doctor, who advised me on how to minimize those side effects, and I had a good outcome from the course of treatment.

An explainer but not for diagnosis

I do not use AI, Google or any other online tool to self-diagnose the cause of symptoms, because symptoms can be associated with a wide variety of possible causes ranging from totally benign to life-threatening. If I have a symptom that concerns me, I consult a doctor, not the internet.

But AI can be useful in interpreting medical test results if you don’t have immediate access to professional advice. Like many medical facilities, the clinic I use posts blood work and scans to an online portal, sometimes before they’re seen by the doctor. Reading these reports can be confusing and even stressful, especially if you see an abnormal result that you don’t understand. AI can help bridge the time between when you get the results and when you hear from your doctor.

For example, I once got a radiology report with a finding that I didn’t understand. It was on a weekend when I couldn’t immediately reach my doctor, so I sent him a note via the online portal but also imported the report into ChatGPT, which explained in plain English that it wasn’t a significant health risk. That relieved my immediate anxiety, and sure enough, my doctor confirmed that it was nothing to worry about.

I admit I was reluctant to do this research on my own, because if the chatbot had reported that the finding was significant, I would have been anxious until I heard more from my doctor. But seeing the finding on the radiology report had already made me anxious, and that eased only once ChatGPT told me it was likely not a big deal.

Interpreting reports

I also use generative AI to help me understand the reports I get from my Fitbit, Apple Watch and RingConn smart ring. These devices provide information such as pulse rate through the day and overnight, resting heart rate, heart rate variability, oxygen saturation, cardio recovery, walking heart rate, average cardio fitness, breathing rate, sleep stages, skin temperature variation and more. I understand the meaning of some of these metrics, but not all of them. And even though Apple and Fitbit do a pretty good job explaining them, they don’t put your results into any context. So, if I’m concerned or just confused, I take a screenshot of the results, load it into a generative AI service and get an interpretation. So far, I have been reassured that I’m in pretty good shape. I once loaded all of the results into ChatGPT, which gave me an overview of my fitness levels that none of the devices’ apps could provide individually.

I’m not suggesting that others should wear multiple fitness trackers or even a single tracker. I do so as part of my work as a tech columnist and out of curiosity about these devices.

AI can be wrong or interpret information out of context. At least for the foreseeable future, it will not replace doctors and other health professionals. And even if it could, I would miss the compassionate and personalized service you can get from good and caring health professionals. Technology has its place, but it can’t replace human kindness.


Larry Magid is a tech journalist and internet safety activist. Contact him at [email protected].
