Like it or not, artificial intelligence (AI) is here to stay. Before analyzing the pros and cons, it’s helpful to have a brief historical perspective. The term Artificial Intelligence was coined in 1956 by John McCarthy. In 1969, Shakey became the first general-purpose mobile robot; it could act with a purpose rather than simply follow a list of instructions. In 1997, IBM’s supercomputer Deep Blue defeated the reigning world chess champion. In 2002, the first commercially successful robotic vacuum cleaner was introduced. Between 2005 and 2019, we saw advances spanning speech recognition to smart homes. And most recently, in 2020, Baidu released the LinearFold AI algorithm to medical and scientific teams developing vaccines for COVID-19.
Google’s Oxford Languages defines AI as “the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.” Applying this definition to optometry, visual perception and decision-making are the two components that most closely align with our daily clinical tasks and objectives. For example, when we encounter a patient who reports distorted central vision (metamorphopsia), we would likely perform a dilated fundus exam and optical coherence tomography (OCT). Based on our “visual perception,” we then make a diagnosis and formulate a treatment and management plan; this is “decision-making”.
Now, what if our imaging devices not only allowed us to pinpoint the involved retinal layer(s) but also gave us a list of differential diagnoses and “highlighted” the most likely one? This clinician would be totally on board! Clinic time saved, greater confidence in my diagnoses, and timelier referral to a subspecialist when indicated are all potential benefits of AI that I would eagerly welcome. Apparently, I am in good company. A recent study published in the Journal of Optometry, examining optometrists’ perspectives on artificial intelligence in eye care, reported that 72% of the 400 optometrists who completed a 17-item survey believed AI would improve the practice of optometry.1 Along with conceptual acceptance, however, the willingness to implement any new technology is contingent upon that technology meeting precision, accuracy, and medicolegal standards. Indeed, in the same study, just over half of respondents (53.0%) had concerns about the diagnostic accuracy of AI.
Taking a broader view of the potential benefits of AI in the healthcare delivery system, we can envision greater reliance on and confidence in telemedicine, improved access to specialists’ guidance in rural communities via primary care providers, and the formation of robust databases leading to better disease and treatment analytics.
But along with AI’s potential value come potential risks. Poorly designed systems can lead to misdiagnoses. While medical errors are not rare in our current system of care, AI errors are potentially different: patients and providers may react differently to injuries caused by software than to those caused by human error. Also, an underlying flaw in a single AI system could result in injuries to thousands of patients. Training AI systems requires large amounts of data from sources such as electronic health records, pharmacy records, insurance claims, and consumer-generated information. That data, currently fragmented across many different systems, further increases the risk of error. Breach of individual privacy is yet another concern; AI can predict private information about patients even though the algorithm never received that information. Cultural biases can arise if software is trained on data sets drawn from limited and homogeneous populations. And, of course, inadequately designed software can increase the cost of healthcare delivery through unintended consequences. Finally, will AI reduce the need for human workers? Will it lead to providers losing the ability to further develop their medical knowledge? Questions such as these remain to be answered.
I believe that effective collaboration among software developers, public health experts and epidemiologists, healthcare providers, and third-party entities must be defined by a measured, ethical, and methodical approach. Such a strategy will, I hope, address and resolve the potential pitfalls of AI. We must also recognize that our current system is not free of many of these same deficiencies. Moving forward with new technology will set the stage for a “fresh start” with even better outcomes for patient care.