Written by: Theresa Sudaria
What is artificial intelligence (AI)? AI is described by some as a system with three essential qualities.
Recent advances in AI technologies have enabled their broader use across many areas of medicine and have improved their accuracy as well. Keep reading to learn about some of the types of AI currently available, as well as some of the ways they are improving medicine today.
Historically, AI consisted of rule-based expert systems built from collections of "if-then" statements. These systems were in common use from the 1980s onward.
Rule-based systems require both human subject experts and knowledge engineers to work together to construct a series of rules in a specific knowledge domain. These systems work well up to a point, but when the number of rules gets too large, individual rules begin to conflict with each other and the system tends to break down.
In addition, if the knowledge domain changes, the rules need to change, which can be a difficult and time-consuming process. Rule-based expert systems can be hard to maintain because medical knowledge changes so rapidly that keeping the system current may require more maintenance than is practical.
Rule-based expert systems are still widely used today to aid decision-making when using electronic health records (EHRs). The rule-based system applies the rules that seem appropriate to an EHR entry and uses the output to predict that the patient has a particular disease or condition.
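A rule-based system of this kind can be sketched in a few lines of Python. The patient fields, thresholds, and conclusions below are invented for illustration and are not clinical guidance:

```python
def evaluate(record, rules):
    """Apply every matching if-then rule and collect its conclusion."""
    findings = []
    for condition, conclusion in rules:
        if condition(record):
            findings.append(conclusion)
    return findings

# Each rule pairs an "if" test with a "then" conclusion.
# Thresholds here are illustrative only.
RULES = [
    (lambda r: r["fasting_glucose"] >= 126, "possible diabetes"),
    (lambda r: 100 <= r["fasting_glucose"] < 126, "possible prediabetes"),
    (lambda r: r["bmi"] >= 30, "obesity"),
]

patient = {"fasting_glucose": 118, "bmi": 31.2}
print(evaluate(patient, RULES))  # -> ['possible prediabetes', 'obesity']
```

Adding rules is easy at first, but as the rule set grows, overlapping conditions begin to interact in hard-to-predict ways, which is exactly the maintenance problem described above.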
Machine learning (ML) refers to the ability of a computer or algorithm to learn and adapt without following explicit instructions. ML requires access to a large, high-quality dataset. ML can be either "supervised" or "unsupervised": supervised learning uses labeled datasets to predict outcomes, whereas unsupervised learning finds patterns in unlabeled datasets.
Supervised machine learning is designed to predict a desired output (e.g., a disease) based on the input data (e.g., photographs).
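As a toy illustration of supervised learning, here is a one-nearest-neighbor classifier built only from the standard library. The feature vectors and labels are hypothetical stand-ins for image-derived measurements and diagnoses; real clinical systems use far more sophisticated models:

```python
import math

def nearest_neighbor(train, query):
    """Predict the label of the closest labeled training example."""
    features, label = min(train, key=lambda ex: math.dist(ex[0], query))
    return label

# Labeled training set: (feature vector, label) -- values are invented.
labeled = [
    ((0.1, 0.2), "healthy"),
    ((0.9, 0.8), "disease"),
    ((0.2, 0.1), "healthy"),
]

print(nearest_neighbor(labeled, (0.85, 0.9)))  # -> disease
```

The key supervised-learning ingredient is visible even in this sketch: every training example carries a known label, and the model's job is to map new inputs onto those labels.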
For example, ophthalmologists and computer scientists worked together to test and deploy an automated image classification system to provide diagnostic support for early-stage diabetic retinopathy (DR) in patients with diabetes. DR affects more than 90 million people worldwide and is the leading cause of blindness in adults. Fundus photography is an effective method of monitoring the extent of DR in individuals and identifying patients who will benefit from early treatments. The problem is that, in many parts of the world, there are too few ophthalmologists available to read the fundus photographs and follow up with individual diabetic patients.
The ML system screens millions of retinal photographs of patients with diabetes. Teams of researchers and collaborating institutions showed that an AI system trained on thousands of images can reach physician-level sensitivity and specificity in diagnosing DR. The AI system was also able to identify previously unrecognized associations between image patterns in the fundus photograph and cardiovascular risk factors.
AI systems have reached specialist-level accuracy in many other diagnostic tasks as well; in some settings they can predict patient prognosis better than clinicians and can assist in surgical interventions. For instance, AI-assisted, medical image-based diagnosis is operating successfully across multiple medical specialties.
Patient engagement and patient adherence are two additional areas where AI is gaining a foothold. These areas offer a great deal of potential value because the more patients proactively participate in their own care and well-being, the better their healthcare outcomes will be.
One current emphasis in research is using ML and business rules engines to drive nuanced interventions along the care continuum. Another promising research field is the use of messaging alerts and relevant, targeted content that provokes action at the moments that matter. Research is also increasingly focused on designing "choice architecture" that nudges patient behavior in a desired direction based on real-world evidence.
ML may also help with claims and payment administration. It can perform probabilistic matching of records across different databases, allowing an algorithm to catch incorrect claims that would otherwise slip through the cracks, a capability with tremendous financial potential.
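Probabilistic matching can be sketched with simple string similarity from the standard library. The names, record lists, and 0.8 threshold below are illustrative assumptions; production systems would use trained match models over many fields (name, date of birth, address, and so on):

```python
from difflib import SequenceMatcher

def match_records(claims, members, threshold=0.8):
    """Pair each claim name with the most similar member name above a threshold."""
    matches = []
    for claim in claims:
        # Score every candidate and keep the best one.
        best = max(members, key=lambda m: SequenceMatcher(None, claim, m).ratio())
        score = SequenceMatcher(None, claim, best).ratio()
        if score >= threshold:
            matches.append((claim, best, round(score, 2)))
    return matches

claims = ["Jon Smyth", "Maria Garcia"]
members = ["John Smith", "Maria Garcia", "Wei Chen"]
print(match_records(claims, members))
```

Even this toy version shows the idea: a claim that does not exactly match any member record can still be linked (or flagged) when its similarity score clears a tunable threshold.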
Understanding human language has been one of the primary goals of AI since its beginning, and natural language processing (NLP) is the field of AI devoted to accomplishing this goal. NLP includes operations such as speech recognition, translation, text analysis, and other language-related tasks. NLP-based systems are needed in healthcare because, for example, the average U.S. nurse spends 25% of work time on regulatory and administrative activities.
There are two basic approaches to NLP: statistical and semantic. Statistical NLP is based on machine learning and has recently contributed to increased accuracy in speech and text recognition. Semantic NLP systems can analyze unstructured datasets, such as patients' unstructured clinical notes; they can also transcribe patient interactions, prepare reports, and support conversational AI.
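As a toy stand-in for this kind of text analysis, the sketch below matches a small vocabulary of surface phrases against an unstructured note to produce structured concept codes. The note text, phrases, and codes are all invented for illustration:

```python
import re

# Hypothetical mapping from surface phrases to coded clinical concepts.
VOCAB = {
    "shortness of breath": "dyspnea",
    "high blood pressure": "hypertension",
    "trouble sleeping": "insomnia",
}

def extract_concepts(note):
    """Return the coded concepts whose surface phrases appear in the note."""
    text = note.lower()
    return sorted({code for phrase, code in VOCAB.items()
                   if re.search(re.escape(phrase), text)})

note = "Patient reports trouble sleeping and a history of high blood pressure."
print(extract_concepts(note))  # -> ['hypertension', 'insomnia']
```

Real NLP pipelines handle negation, misspellings, abbreviations, and context that simple phrase matching cannot, but the goal is the same: turning free text into structured, queryable data.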
Veradigm has applied AI to EHR data to create research-ready real-world data that is more diverse than the data generated by most clinical research. Veradigm EHR Data is one of the largest real-world databases. It is derived directly from community-based physician EHRs, which is where most of the patient record generation occurs in the United States. Use of multiple EHR data sources means this database is populated with an extensive, nationally distributed patient population.
Because Veradigm has direct access to clinicians through their EHRs, we have access to the unstructured data as well as the structured fields: unstructured data that includes clinical notes, attachments, images, and more. Using machine learning and natural language processing, we can extract information from the unstructured clinical notes, providing a more complete picture of a more diverse population of patients.
In 2021, researchers from Veradigm were able to derive insights around the social determinants of health through our NLP and ML capabilities. Creation of a Mapped, Machine-Readable Taxonomy to Facilitate Extraction of Social Determinants of Health Data from Electronic Health Records was published at the American Medical Informatics Association (AMIA)’s 2021 Symposium and demonstrated how Veradigm created a framework for extracting and reporting social risk factors from ambulatory EHR data. This research provided an open-source Python dictionary to facilitate reporting and extraction for social risk factors that anyone can download.
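To illustrate the general idea (this is not the published Veradigm dictionary), a machine-readable taxonomy can be as simple as a Python dictionary mapping social-risk categories to keywords, plus a small reporting function. All categories, keywords, and note text below are invented:

```python
from collections import Counter

# Hypothetical taxonomy: social-risk category -> trigger keywords.
SDOH_TAXONOMY = {
    "housing_instability": ["homeless", "eviction", "shelter"],
    "food_insecurity": ["food pantry", "skipped meals"],
    "transportation": ["no transportation", "missed appointment due to ride"],
}

def report_risk_factors(notes):
    """Count how many notes mention each social-risk category."""
    counts = Counter()
    for note in notes:
        text = note.lower()
        for category, keywords in SDOH_TAXONOMY.items():
            if any(keyword in text for keyword in keywords):
                counts[category] += 1
    return dict(counts)

notes = [
    "Pt recently faced eviction; currently staying in a shelter.",
    "Reports skipped meals this month; referred to food pantry.",
]
print(report_risk_factors(notes))
```

Because the taxonomy is just data, it can be versioned, extended, and shared, which is what makes a mapped, machine-readable dictionary useful for reporting across institutions.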
In another example, Veradigm's researchers worked with a biopharma client to analyze unstructured qualitative provider notes recorded in the EHR during visits with patients with atopic dermatitis (AD). Through ML and NLP capabilities, they were able to demonstrate that these encounters focused primarily on patients' symptoms and, to a lesser extent, on symptom relief. However, these encounters rarely documented AD's burden on daily functioning and quality of life. The study demonstrates how combining defined data from EHR structured fields with NLP-extracted information from provider notes offers the potential to broaden understanding of AD impact and management.
If you would like to learn more about Veradigm EHR Data—or how Veradigm can help you to expand your clinical research to a broader, more diverse population—contact us.