An Introduction to AI in Clinical Laboratories

March 2023 - Vol. 12 No. 3

The Artificial Intelligence (AI) revolution infiltrated the health care sector on the heels of Big Data innovation. Poised to disrupt everything it touches, health care AI research and development amassed 11 billion US dollars globally in 2021.1 For the medical laboratory, AI adoption offers a unique opportunity: a traceable contribution, commonly cited at 70% or greater, to medical decisions. Optimized medical laboratory data streams that augment clinical practice open a pathway for leadership to pinpoint the laboratory's direct value in patient care and its contribution to downstream cost savings.2 Trailblazing in the digital arena should excite the clinical laboratory community at large. However, recent polling has captured a substantial deficit in AI-specific education for medical laboratory professionals.3

Collaboration is Key

The complexity of data technology has given rise to its own specialties. Because data literacy education and training remain scarce in health professional schools, new silos have formed within hospital walls in the form of business intelligence analysts. Likewise, data scientists and information technology experts are not traditionally trained in any medical specialty, but they are skilled in mathematics, programming languages, and computer science. As such, they are essential collaborative extensions of health care practice in an AI world. For medical laboratory leaders, the challenge becomes liaising with data teams to promote standards of best practice as AI tools emerge.

The grand promise of laboratory AI integration struggles to offset substantial trepidation among my medical laboratory colleagues. After all, the utility of AI algorithms hinges on medical laboratory data being findable, accessible, interoperable, and reusable (FAIR).4 These remain challenges in even the sleekest of electronic health records. Further, for ethical integration into clinical practice, establishing principles for patient autonomy, bias mitigation, algorithmic transparency, and explainability is paramount.2

AI’s Place in the Lab

Thus far, discussions on the best use of AI have largely focused on anatomic pathology imaging, "-omics" implications, and inventory management. Debates are ongoing as to whether AI is meant to augment or replace personnel, which steps require laboratory involvement, and how validation expectations should be set. Furthermore, regulatory considerations for AI embedded in laboratory instrumentation, the availability of FDA-approved software as a medical device (SaMD), and institutionally developed algorithms remain open questions.

While controversy is expected, it is important for medical laboratory leadership to recognize that the use of clinical and anatomic data in AI modeling is not on hold until the professional community reaches consensus. Innovation will continue regardless of whether the medical laboratory establishes an AI knowledge base and dispenses expertise to technology partners. Therefore, it is advantageous for laboratory leaders to begin developing an internal policy with acceptability thresholds for AI solutions. These criteria allow for standardized investigation of any industry product before dedicated project resources are assigned. They also can assist in discovering the clinical need and appropriate patient applications for an AI solution.

To date, little guidance exists for medical laboratory leaders approaching AI integration in the workplace. The following are practical starting points for those interested in AI adoption or facing an impending integration.

Acceptability Checkpoints

The first checkpoint for AI acceptability in the lab must involve ethical development and application. Project leaders should insist on transparency regarding which specific medical laboratory data points will be utilized within the algorithm. Likewise, instrumentation vendors should be able to provide a detailed explanation of how the AI algorithm reaches recommendations or influences decision making. If the proposed AI algorithm is considered a “black box” and cannot be satisfactorily explained, this should prompt caution.

Accomplishing these first two checkpoints (data utilization transparency and vendor-supplied education) will inform laboratory leadership's ability to explain AI-based decision making to providers, who in turn should be prepared to relay key points to patients in clear, basic terms. Patient and provider autonomy, and provider trust, should not be undermined by AI integration. Additionally, laboratory data used to augment clinical decision making should be provided with appropriate context and interpretive support. Where the AI algorithm resides in the chart is also important; if, for example, a diagnostic output attaches directly to a laboratory result in a patient portal, could that cause harm?

Lab leaders must be clear as to what level of access vendors or research groups will have to patient data, with access limited to what is necessary for the AI solution to fire properly at the appropriate time. Full-access data streams should be avoided. Moreover, leadership may want to consider an informed consent process for any patient whose results may be utilized in AI solutions.

Bias and Outcomes

A third checkpoint for acceptability involves algorithmic bias and discriminatory outcomes. While these are subsets of ethical AI development, they can be harder to identify in an initial review. Lab leaders should query the population statistics (eg, age, race, gender) of the cohort used to train the AI solution, and whether the algorithm demonstrated bias along these factors during the vendor's model development or evaluation stages. Further, it can be advantageous to determine whether the vendor has set an acceptable bias percentage for the AI solution and whether a predetermined change protocol exists for escalating bias observed during the laboratory's validation phase. A minimal bias check of this kind is sketched below.
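For illustration, the Python sketch below shows what a minimal per-group performance check might look like during vendor evaluation or local validation. The column names ("prediction", "label", "age_band", "race", "sex") and the 5% disparity threshold are hypothetical placeholders, not regulatory standards.

```python
# A minimal per-group bias check, assuming the vendor can supply model
# predictions and reference labels for its evaluation set.
import pandas as pd

def group_accuracy(df: pd.DataFrame, group_col: str) -> pd.Series:
    """Fraction of correct calls within each demographic group."""
    correct = df["prediction"] == df["label"]
    return correct.groupby(df[group_col]).mean()

def flag_bias(df: pd.DataFrame, group_cols: list[str], max_gap: float = 0.05) -> dict:
    """Flag any grouping whose best-to-worst accuracy gap exceeds max_gap."""
    flags = {}
    for col in group_cols:
        acc = group_accuracy(df, col)
        gap = acc.max() - acc.min()
        if gap > max_gap:
            flags[col] = round(float(gap), 3)
    return flags

# eval_df = pd.read_csv("vendor_evaluation_set.csv")  # hypothetical export
# print(flag_bias(eval_df, ["age_band", "race", "sex"]))
```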

The fourth checkpoint for acceptability covers operational considerations. It is necessary to factor in whether the AI solution is FDA approved or homegrown, and whether it is scalable across various practice modalities. Depending on the laboratory tests the algorithm requires, leaders may have to consider changes to their instrumentation, reporting units, or test menu, and be ready to articulate expected cost impacts to organizational leadership. For example, suppose an algorithm utilizes high-sensitivity troponin I, but the analyzer in use only runs high-sensitivity troponin T. The decision then becomes whether to replace or supplement the instrumentation or to adapt the algorithm to support troponin T in the same way.

Likewise, the algorithm could be either locked or learning. A locked AI algorithm will provide the same result any time the same inputs are provided.5 A learning AI algorithm can change behavior when new inputs are added.5 This type of functionality impacts initial validation planning as well as future-state upgrade adoption.
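The toy sketch below illustrates the locked-versus-learning distinction using scikit-learn's SGDClassifier as a stand-in for a vendor model. The data are synthetic; the example demonstrates only the validation concern, not any particular product's behavior.

```python
# Contrast a locked model (frozen after training) with a learning model
# (continues to update on new inputs), using synthetic data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)
probe = X[:5]  # fixed inputs used to probe model behavior

# "Locked": train once, then freeze; identical inputs give identical outputs.
locked = SGDClassifier(random_state=0).fit(X, y)
baseline = locked.predict(probe)

# "Learning": an identically trained copy that keeps updating on new inputs.
learning = SGDClassifier(random_state=0).fit(X, y)
learning.partial_fit(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50))

print("Locked output unchanged:  ", np.array_equal(baseline, locked.predict(probe)))
print("Learning output may shift:", not np.array_equal(baseline, learning.predict(probe)))
```

For validation planning, the practical consequence is that a locked algorithm can be characterized once per version, while a learning algorithm needs a monitoring schedule, because its outputs for the same probe inputs can drift between reviews.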

Algorithm Development Evaluation

In general, AI is created via a five-step life cycle:6

  1. Data creation
  2. Data acquisition
  3. Model development
  4. Model evaluation
  5. Model deployment

Leaders should evaluate the origins of the data vendors used to train the AI model, including where the data were gathered or purchased. Studies have found that regionally crafted AI solutions often do not translate well when implemented in new regions or countries.7 It is also notable that FDA clinical validation can be performed by referencing existing data sources from studies conducted for a different intended use or by utilizing pilot programs.5

It is necessary to understand whether the AI solution was developed and evaluated under "model" conditions or whether it has expected limitations and interferences. Medical laboratory leaders are quite familiar with how hemolysis, icterus, lipemia, poor bench technique, dilution calculations, and shifts or trends in quality control can impact patient results. Thus, it is important to know whether the AI program appropriately distinguishes instances of junk data or identifies points the laboratory may need to annotate if the AI outputs are deemed unreliable.
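As an illustration of the laboratory-side half of that question, the sketch below flags results that might need to be withheld from, or annotated for, an AI feed. The HIL (hemolysis/icterus/lipemia) index cutoffs and field names are hypothetical and would be set per assay and instrument.

```python
# Pre-screening sketch: flag results an AI consumer may need to exclude
# or annotate before they reach the algorithm.
from dataclasses import dataclass

@dataclass
class Result:
    analyte: str
    value: float
    hemolysis_index: int
    icterus_index: int
    lipemia_index: int
    qc_in_control: bool

def flag_for_annotation(r: Result) -> list[str]:
    """Return reasons a result should be withheld from or annotated for the AI feed."""
    reasons = []
    if r.hemolysis_index > 2:   # illustrative cutoff, not assay-specific
        reasons.append("hemolysis")
    if r.icterus_index > 3:
        reasons.append("icterus")
    if r.lipemia_index > 3:
        reasons.append("lipemia")
    if not r.qc_in_control:
        reasons.append("QC shift/trend")
    return reasons

sample = Result("potassium", 6.1, hemolysis_index=4, icterus_index=0,
                lipemia_index=0, qc_in_control=True)
print(flag_for_annotation(sample))  # ['hemolysis']
```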

Project leaders would benefit from assessing the laboratory's level of liability and responsibility if the AI solution is installed within the medical laboratory. This could take the form of the International Medical Device Regulators Forum (IMDRF) structure, in which the state of the health care situation (ie, critical, serious, or non-serious) is weighed against the significance of the AI solution's contribution to the health care decision.5 Alternatively, unique structural approaches classify the AI according to intent: diagnosis, augmentation of clinical management, or informing clinical teams.5
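For readers unfamiliar with the IMDRF structure, the sketch below encodes its published SaMD categorization table as a simple lookup: the state of the health care situation crossed with the significance of the information the software provides, with category IV carrying the highest risk. The key names are shorthand for the IMDRF terms, not official labels.

```python
# Lookup sketch of the IMDRF SaMD risk categorization:
# (state of health care situation, significance of information) -> category.
IMDRF_CATEGORY = {
    ("critical",    "treat_or_diagnose"): "IV",
    ("critical",    "drive_management"):  "III",
    ("critical",    "inform_management"): "II",
    ("serious",     "treat_or_diagnose"): "III",
    ("serious",     "drive_management"):  "II",
    ("serious",     "inform_management"): "I",
    ("non-serious", "treat_or_diagnose"): "II",
    ("non-serious", "drive_management"):  "I",
    ("non-serious", "inform_management"): "I",
}

# Example: an algorithm that informs (but does not drive) management of a
# serious condition lands in the lowest-risk category.
print(IMDRF_CATEGORY[("serious", "inform_management")])  # "I"
```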

AI Validation

It is possible for AI to grade or classify medical laboratory data correctly by the algorithm's own standard yet produce clinically incorrect output.7 Regulatory guidance specific to AI in medical laboratories is lacking,8 in both the anatomic and clinical pathology sectors. However, the Clinical Laboratory Improvement Amendments (CLIA) do require that any high-complexity new test, device, or diagnostic aid undergo validation before patient results are reported.9

If AI solutions are deemed to be tests or diagnostic aids, laboratory leaders drafting validation plans would need to account for the true clinical needs and output expectations of the AI algorithm. These should be statistically quantifiable for all population groups listed in the vendor's claims. A validation protocol could include sections for accuracy, precision, and any potential source of population bias (eg, age, race, gender) or developer bias. If visual AI is involved, labs may consider including a side-by-side manual evaluation. Peer-based decision making can also be incorporated as a comparison study to determine whether the AI solution coincides with laboratory professionals' expertise.

As is standard for medical laboratories, special attention to analytical sensitivity and specificity is required to guard against false-negative and false-positive classifications. Analytical stability should also be considered, to confirm that AI diagnostic outputs or classifications do not change due to specimen degradation. When AI solutions involve instrumentation, discussions with the medical directorship regarding acceptable specimen types, allowable error, and how to address results exceeding the analytical measurement range would be necessary. As previously mentioned, interferences with potential downstream impacts on AI outputs should be included in the validation plan.
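In practice, a validation workbook often reduces to calculations like the following sketch, which computes sensitivity and specificity per population group from binary AI outputs against a reference method. The column names ("ai", "ref", "age_band") are placeholders; acceptance limits would be set with the medical director.

```python
# Per-group sensitivity/specificity for a binary AI classification
# validated against a reference method.
import pandas as pd

def sens_spec(df: pd.DataFrame) -> tuple[float, float]:
    tp = ((df["ai"] == 1) & (df["ref"] == 1)).sum()
    fn = ((df["ai"] == 0) & (df["ref"] == 1)).sum()
    tn = ((df["ai"] == 0) & (df["ref"] == 0)).sum()
    fp = ((df["ai"] == 1) & (df["ref"] == 0)).sum()
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity

def validate_by_group(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    rows = []
    for group, sub in df.groupby(group_col):
        sens, spec = sens_spec(sub)
        rows.append({"group": group, "n": len(sub),
                     "sensitivity": sens, "specificity": spec})
    return pd.DataFrame(rows)

# validation_df = pd.read_csv("validation_results.csv")  # hypothetical export
# print(validate_by_group(validation_df, "age_band"))
```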

Conversely, if AI algorithms utilizing medical laboratory data are determined to be SaMDs under the FDA, adherence to intended use statements, with verification of performance specifications under CLIA paired with a bias analysis, may be deemed permissible. This has not been decided by governing entities and may not be for some time.

Implementation

Operational AI solutions within the clinical laboratory should be incorporated into existing quality assurance programs. Leaders should investigate when and how upgrades to the AI program will occur. Some vendors may provide quarterly releases; others, particularly with learning algorithms, may release at shorter or longer intervals. The frequency should be well documented. Additionally, when an update occurs at the designated cadence, any intended-use changes should be evaluated for inclusion in the laboratory's workflows. For example, an upgrade may introduce decision making for a new population or age group, such as Pacific Islanders or adolescents.

Quality teams will need to determine how frequently the laboratory should monitor changes in how the algorithm learns, if applicable, and when review is appropriate for locked AI. When new updates to the AI solution occur, laboratory leadership will have to determine the degree to which the laboratory is responsible for ensuring the updates did not alter previous AI outputs; one simple regression check is sketched below. Leaders will also need a reporting and mitigation strategy if resulting errors or discrepancies are identified. Lastly, developing a training program and establishing AI competency may involve interdisciplinary partnerships with technology teams.
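One simple form such a regression check could take is rerunning a fixed, previously adjudicated specimen set through the updated algorithm and listing any outputs that changed. The file layout and field names below are hypothetical.

```python
# Post-update regression check: compare AI outputs on a fixed specimen set
# captured before and after an algorithm update.
import json

def compare_outputs(before_path: str, after_path: str) -> list[str]:
    """Return specimen IDs whose AI output changed across an update."""
    with open(before_path) as f:
        before = {r["specimen_id"]: r["output"] for r in json.load(f)}
    with open(after_path) as f:
        after = {r["specimen_id"]: r["output"] for r in json.load(f)}
    return [sid for sid, out in before.items() if after.get(sid) != out]

# changed = compare_outputs("baseline_outputs.json", "post_update_outputs.json")
# if changed:
#     ...  # trigger the laboratory's reporting and mitigation workflow
```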

Conclusion

Now is the time for laboratory leaders to step outside their practice comfort zones when it comes to artificial intelligence. Laboratory expertise is crucial for ethical AI development and successful AI integrations. While much is unknown regarding AI's ultimate impact on health care, by familiarizing ourselves with emerging technology, we can educate and empower our teams to advocate for future integrative practice and hold a seat at the table for innovation within our institutions. The promises of AI, such as personalized reference ranges and automated bone marrow cytology, are exciting. However, much work is yet to be done.


Melody Boudreaux Nelson, MS, MLS(ASCP)CM, is the LIS manager for the laboratory at the University of Kansas Health System. Over the past 5 years, she has served in laboratory management and oversight roles spanning core laboratories, operations, and informatics. Melody is currently a resident in the Doctor of Clinical Laboratory Science (DCLS) program through the University of Kansas Medical Center. She sits on the Council of Laboratory Professionals and the Continuous Quality Improvement Steering Committee through the American Society for Clinical Pathology.


References

  1. Statista. Artificial intelligence (AI) in healthcare market size worldwide 2021-2023. www.statista.com. Accessed February 14, 2023.
  2. American Association for Clinical Chemistry (AACC). Data Analytics and Laboratory Medicine. AACC Position Statement. July 29, 2021. www.aacc.org/advocacy-and-outreach/position-statements/2021/data-analytics-and-laboratory-medicine. Accessed March 7, 2023.
  3. Paranjape K, Schinkel M, Hammer RD, et al. The Value of Artificial Intelligence in Laboratory Medicine. Am J Clin Pathol. 2021;155(6):823-831. doi:10.1093/ajcp/aqaa170
  4. Blatter TU, Witte H, Nakas CT, Leichtle AB. Big Data in Laboratory Medicine-FAIR Quality for AI? Diagnostics (Basel). 2022;12(8):1923. doi:10.3390/diagnostics12081923
  5. U.S. Food and Drug Administration. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan. www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device. Accessed March 7, 2023.
  6. Ng MY, Kapur S, Blizinsky KD, et al. The AI life cycle: a holistic approach to creating ethical AI for health decisions. Nat Med. 2022;28:2247-2249. doi:10.1038/s41591-022-01993-y
  7. Gundersen T, Bærøe K. The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models. Sci Eng Ethics. 2022;28(2):17. doi:10.1007/s11948-022-00369-2
  8. Yoshida H, Kiyuna T. Requirements for implementation of artificial intelligence in the practice of gastrointestinal pathology. World J Gastroenterol. 2021;27(21):2818-2833. doi:10.3748/wjg.v27.i21.2818
  9. Centers for Medicare & Medicaid Services. Clinical Laboratory Improvement Amendments (CLIA). LDT and CLIA FAQs. cms.gov/regulations-and-guidance/legislation/clia. Accessed March 7, 2023.