The year 2054 serves as the backdrop for Steven Spielberg’s 2002 film Minority Report. It featured a bespoke eye-surgery robot controlled by a backstreet surgeon using visualisation technology. John Anderton, the protagonist, underwent a double eye transplant. Was this vision too futuristic for 2054?
As generative artificial intelligence (AI) reaches an inflection point, life is increasingly imitating science fiction, with AI insinuating itself into every corner of our lives and lifestyles. The use of AI in medical science, namely in the diagnosis and treatment of human patients, has the potential to be a game-changer. It’s like having a top-notch physician in your pocket.
Ahmedabad-based cardiologist Tejas Patel made history in 2018 when he used a vascular robotic system to perform the first in-human telerobotic coronary intervention, on a patient nearly 32 kilometres away. AI in medicine is older than that, however: Stanford University developed what was possibly the first such tool, MYCIN, in the 1970s, intended to help doctors diagnose and treat bacterial blood infections and meningitis. Today, our lifestyle is dominated by an array of AI-controlled tools. In fact, investments in health care AI doubled globally in 2021, compared to the previous year.
AI is demonstrating promise in the diagnosis of infectious diseases, cancer, diabetes, and ocular and cardiovascular problems. Applications for predictive AI are even more varied, including assessing a person’s propensity for Type-2 diabetes, heart disease, Alzheimer’s, and kidney disease. As a result, a paradigm shift is underway: health care is ceasing to be the exclusive domain of doctors. While the internet made information more accessible, generative AI has made it possible to assimilate knowledge. Thanks to AI, health care can already be delivered in far more efficient ways. Some big tech companies are also aiming to merge health care with AI, bringing big data analytics, deep learning, and iterative practices into play.
Surgery is now routinely performed by robots. An Alien prequel even featured a surgical robot performing major operations without human control!
But how easy is it for human patients to accept “Doctor AI”? According to a 2019 Harvard Business Review article, patients are reluctant to use health care provided by medical AI as they believe that their medical needs are unique and cannot be adequately addressed by algorithms. This mindset continues. According to a recent Pew Research Centre survey, 60 per cent of US respondents said they were “uncomfortable” with AI being used to make medical diagnoses or treatment recommendations. The empathetic bedside manners of a doctor cannot be replicated by AI, at least not yet.
In the 2002 movie Die Another Day, James Bond is scanned and undergoes a blood test by an autonomously operating robot to confirm his identity. While “Doctor AI” has demonstrated promise in the diagnosis, prognosis, and perhaps even treatment of a variety of medical problems today, there have been, and will continue to be, hiccups. Consider Google Flu Trends, released in 2008. The goal was to aggregate search queries in order to generate accurate flu forecasts ahead of the official estimates from America’s Centres for Disease Control and Prevention. However, it suffered from significant data flaws. Most people cannot tell influenza from other illnesses, so they search Google for any “flu-like” episode, even when it is nothing of the sort. This demonstrates the risk of adopting AI that is trained on “wrong” data.
The Covid pandemic also triggered an explosion of AI tools; however, for various reasons, the majority of AI-powered programmes for disease diagnosis and treatment, as well as AI platforms for forecasting the spread of Covid, were largely useless.
In addition, we are aware that AI itself can make mistakes. One might ask whether “Doctor AI” can likewise “hallucinate” or outright “lie,” since generative AIs have been observed to do so occasionally. The effects on health care could be very damaging. Who would be held responsible? Doctor AI itself? The human doctors involved? The company that built Doctor AI? Can these risks be reduced, if not entirely eliminated, by optimising Doctor AI’s design? That is not easy. There are thus significant ethical, legal, and regulatory concerns with deploying AI in medicine, and appropriate regulatory frameworks must be created.
The writer is professor of statistics, Indian Statistical Institute, Kolkata