The growing role of artificial intelligence in daily life has brought new scrutiny to the threat machines and automation pose to the jobs we perform and rely on. For all of the discussion that's taken place, much of it still remains in the realm of speculation. What we can safely say at this point is that artificial intelligence won't replace the role that human physicians play in medical care. While devices like Alexa and Google Home can be great for accessing information quickly, there's much more to any medical practice than just knowing the right answers and approaches. So much of what a patient gets from a visit to the doctor's office is a personable human experience that the limitations of artificial intelligence can't replicate.
What doctors should worry about, or at least be made aware of, are the risks their patients face by becoming overly reliant on artificial intelligence when it comes to tracking their personal health. The imagined scenario above may seem absurd to you, but in reality, it isn't that far-fetched. Before smart devices became a staple of modern living, many of us deferred to online resources like WebMD to self-diagnose our symptoms and get a quick answer to what was wrong and how to fix it. Smart devices can provide the same results with even greater speed and less effort, appealing to those same people who are more eager to turn to technology for help before turning to a physician. They may see it as a means to save money, or they may feel confident enough to treat their ailments once they have an idea of what they are. Regardless of their reasoning, these people are unknowingly putting themselves at greater risk for the sake of an advantage that doesn't really exist.
Most people understand that when they ask their smart devices a question, the device runs an internet search for relevant results and reads the top answer back to them. While this function can work well for simple questions about health and medicine (such as "Does suboxone block fentanyl?" or "What are the side effects of methadone?"), it becomes a much more complicated issue when people use the same technology to try to pin their symptoms down to a specific disease. That's because any one collection of symptoms could point to several different diagnoses, ranging from the mild to the severe, with no effective way for a layperson to determine which one is right. At best, the uncertainty could be enough to convince them to visit a doctor. At worst, it could leave them guessing which diagnosis is correct, leaving them open to worsening symptoms or to spreading their illness to others.
Doctors should make it a point to ask patients how they're using smart technology in regard to their personal health. They can make sure their patients have a good understanding of how the technology works, why the results it gives may not be reliable, and why the answers it provides are no substitute for a proper diagnosis from a professional. Patients can integrate AI into their personal health, but they need to know that it shouldn't be more than a small part of their overall routine.
Even if medical practitioners aren't using smart devices in their work, it's in their best interest to stay informed of how they are being used by patients and how they may complicate matters in healthcare. AI doesn't pose a threat of replacing doctors, but it can threaten the health of their patients and make their jobs more complicated in the process.