Perhaps the most common laboratory procedure performed for hospital patients and outpatients is the complete blood count (CBC), with or without differential. The CBC serves as a screening and diagnostic test for a wide range of conditions and diseases, as well as a monitoring tool for treatment and disease status. Given its foundational nature, and despite its relative simplicity, the accuracy of this basic blood test is essential. Therefore, thorough validation testing must be performed on all new hematology analyzers to ensure patient safety.
It is reasonable to assume that a newly acquired piece of diagnostic equipment would run as intended, as manufacturers perform their own validation testing to demonstrate intended use and to fulfill regulatory requirements prior to launching a product in the market. However, the ultimate responsibility for verifying instrument performance specifications and characteristics prior to patient testing falls to the end-user laboratory.
Some labs may overlook certain necessary components of new instrument validation under the common misconception that if the new analyzer is from the same vendor as the lab’s previous model, then all specifications can be transferred to the new device without being verified. For hematology analyzers, the verification or validation process breaks down into several essential elements: accuracy, precision, reportable range, reference intervals, and evaluation of instrument flags. In general, the laboratory should follow manufacturer-recommended protocols for conducting the studies (at the discretion of the laboratory medical director), while incorporating recommendations and requirements from accreditation and regulatory agencies.
Hematology Technical Validation
With most new analyzer acquisitions, the vendor will provide on-site technical specialists to assist in device setup, validation, and operator training. The technical specialist, along with the service engineer, will perform basic system verifications, such as electronic and function checks, part alignments, calibration, running QC materials, and simple precision checks. Prior to conducting the various validation studies, it is important to discuss the validation plan and set clear expectations with the vendor support specialists. For instance, the vendor and the laboratory might call for different required sample sizes and types of specimens to be included in the comparability study.
During this process, it is crucial to include lab personnel who have already received key operator training to work with the specialist during the validation. The key operator(s) should be familiar with analyzer functions, operation, and capabilities, and be able to review preliminary validation data and reports, and investigate outlying results. While it is accepted as the industry norm to utilize resources from the vendor for equipment validation, it is the laboratory’s responsibility to ensure the device produces results that are proper and satisfactory to that laboratory’s practice before initiating patient testing.
Essential Elements of Validation
Assessment of analyzer accuracy is achieved in part by comparing the performance of a new instrument with a reference method—most commonly the laboratory’s existing validated instrument. To the degree possible, the laboratory should use fresh patient specimens with test results that span the entire analytical measurement range. The sample size should be proportional to the routine workload and the sample type should reflect the laboratory’s patient population. For instance, if the laboratory generally receives numerous samples from the bone marrow transplant unit, it is essential to include severely pancytopenic specimens in the validation study.
Statistical analysis should be performed on the results from the new instrument and the existing instrument (or other reference method). In general, Passing-Bablok regression is preferable to simple least-squares linear regression because it is based on the median of pairwise slopes, making it robust to outliers.1 For assessing the results of the comparability study, target limits are available in published references or listed in the instrument’s specifications. Ultimately, it is up to the laboratory’s medical director to determine the acceptance criteria.
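To illustrate why the median-based fit resists outliers, here is a minimal sketch of Passing-Bablok slope and intercept estimation in Python. The function name and the example values are hypothetical, and the sketch omits the confidence-interval calculations a real method-comparison package would provide.

```python
import statistics

def passing_bablok(x, y):
    """Estimate slope and intercept by the Passing-Bablok method:
    the slope is a shifted median of all pairwise slopes, which makes
    the fit robust to outliers (unlike least-squares regression)."""
    slopes = []
    k = 0  # count of slopes below -1, used to shift the median
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = x[j] - x[i], y[j] - y[i]
            if dx == 0:
                continue  # vertical pairs contribute no slope
            s = dy / dx
            if s == -1:
                continue  # slopes of exactly -1 are excluded
            if s < -1:
                k += 1
            slopes.append(s)
    slopes.sort()
    m = len(slopes)
    idx = (m - 1) // 2 + k  # shifted median position
    if m % 2 == 1:
        slope = slopes[idx]
    else:
        slope = 0.5 * (slopes[idx] + slopes[idx + 1])
    intercept = statistics.median(yi - slope * xi for xi, yi in zip(x, y))
    return slope, intercept

# one gross outlier (50) barely perturbs the median-based fit
print(passing_bablok([1, 2, 3, 4, 5], [3, 5, 7, 9, 50]))
```

With least-squares regression, the single outlier would pull the slope well away from 2; the median of pairwise slopes ignores it.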
Proficiency testing (PT) material serves as another independent method for evaluating the accuracy of the test method. It is a good practice to obtain additional proficiency testing material specific to the method being validated. Results can be submitted to the PT program sponsor for a formal evaluation with comments indicating the submission is for validation purposes only, or they can be evaluated internally, if the event is already closed.
The laboratory must verify short-term (between-run) and long-term (between-day) reproducibility of the device.1 Short-term reproducibility should be verified using normal specimens as well as specimens with results at medical decision thresholds (eg, low hemoglobin, low platelet count). Each patient specimen is analyzed on the instrument multiple times, with the standard deviation (SD) and coefficient of variation (CV) calculated and verified to be within manufacturer-stated precision limits.
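The replicate calculation can be sketched as follows. The function name, the hemoglobin values, and the 1.5% CV limit are all illustrative, not actual manufacturer claims.

```python
import statistics

def precision_check(replicates, cv_limit_pct):
    """Within-run precision from repeat analyses of one specimen:
    compute SD and CV% and compare against a stated limit
    (cv_limit_pct stands in for a manufacturer precision claim)."""
    mean = statistics.mean(replicates)
    sd = statistics.stdev(replicates)  # sample SD (n - 1 denominator)
    cv = 100.0 * sd / mean             # coefficient of variation, %
    return {"mean": round(mean, 3), "sd": round(sd, 3),
            "cv_pct": round(cv, 2), "pass": cv <= cv_limit_pct}

# ten hypothetical repeat hemoglobin results (g/dL) on a low specimen
hgb = [7.1, 7.0, 7.2, 7.1, 7.0, 7.1, 7.2, 7.1, 7.0, 7.1]
print(precision_check(hgb, cv_limit_pct=1.5))
```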
For long-term reproducibility, commercial control materials are usually utilized, as most hematology specimen parameters are stable for only 1-2 days (if refrigerated). CLSI guidelines define a specific method for estimating repeatability and long-term within-device imprecision. Commercial controls are tested in duplicate at each level (low, normal, high) twice daily for at least 25 days.1 The SD and CV of the results are verified against the manufacturer claims printed in the control package insert.
Analytical Measurement Range and Clinical Reportable Range
The analytical measurement range (AMR) is the range of analyte values that the instrument can directly measure. The clinical reportable range is determined by the laboratory director and is based on the AMR. Usually, the lower limit of the reportable range is the lower limit of the AMR. If the upper limit of the reportable range is higher than the upper limit of the AMR, the specimen is commonly diluted to bring the result into the AMR and then reanalyzed, with the final result calculated using the dilution factor. CAP standard COM.40600 requires verification of the reportable range for each analytical procedure before instrument implementation.2
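The dilution arithmetic is simple but worth making explicit. A minimal sketch, with a hypothetical function name and made-up WBC values (an assumed AMR upper limit of 400 ×10⁹/L and a 1:5 dilution):

```python
def corrected_result(diluted_result, dilution_factor):
    """Final reported value for an over-range specimen: the value
    measured on the diluted aliquot multiplied by the dilution factor."""
    return diluted_result * dilution_factor

# hypothetical: WBC exceeds an assumed AMR upper limit of 400 x10^9/L,
# so the specimen is rerun at a 1:5 dilution and reads 96.0
print(corrected_result(96.0, 5))  # -> 480.0, reported in x10^9/L
```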
There are commercially available linearity kits designed for AMR verification. These kits usually contain different levels of concentration for each designated analyte of interest. If recovered results meet the acceptance criteria of the manufacturer, the results are said to be linear from point X to point Y, which represents the AMR.
Another approach is to use fresh patient samples that have pathologically elevated (high end of the claimed AMR) and reduced results (low end of the AMR), and correlate the results generated by the existing and new methodologies, given that the AMR has been properly validated on the existing instrumentation.
Per CAP All Common Checklist requirements COM.50000 and COM.50100, the laboratory must establish/verify reference intervals and evaluate the reference range during the introduction of a new testing methodology. According to CLSI guidelines,3 the laboratory can choose to validate reference ranges provided by the manufacturer (or other laboratory) by testing 20 healthy representative individuals. However, it is usually easier for the laboratory to validate its existing reference ranges given that the original validated reference ranges were established using a larger sample size.
If no more than 2 out of the 20 samples fall outside of the proposed reference ranges (ie, central 95% of the population studied), the reference intervals are considered verified and can be adapted for the new instrument. In addition, there are several statistical-analysis software types available that have reference interval verification functions.
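The 2-out-of-20 transference check is straightforward to express in code. A minimal sketch, assuming a hypothetical hemoglobin interval of 12.0-16.0 g/dL and fabricated donor results for illustration only:

```python
def verify_reference_interval(results, low, high, max_outside=2):
    """CLSI-style transference check: test 20 healthy individuals and
    accept the proposed interval (central 95% of the reference
    population) if no more than 2 results fall outside it."""
    outside = sum(1 for r in results if r < low or r > high)
    return outside, outside <= max_outside

# hypothetical hemoglobin results (g/dL) from 20 healthy donors,
# checked against an assumed proposed interval of 12.0-16.0
donors = [13.1, 14.2, 15.0, 12.5, 13.8, 14.9, 15.8, 12.1, 13.3, 14.0,
          15.2, 13.7, 14.4, 12.9, 16.3, 13.5, 14.8, 15.5, 11.8, 14.1]
print(verify_reference_interval(donors, 12.0, 16.0))  # -> (2, True)
```

Here two results (16.3 and 11.8) fall outside the proposed interval, so the interval is still considered verified.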
Instrument Flags (Sensitivity and Specificity)
In addition to verifying the accuracy of the automated differential by comparing the automated cell counts to manual differential cell counts, the laboratory also should verify the performance of the analyzer flags. Most hematology analyzers have numerous built-in algorithms that trigger flags instructing the technologist to perform further intervention; namely, smear review or manual differential cell count. Typical flags include those for immature granulocytes, blasts, atypical lymphocytes, certain abnormal RBC morphologies (eg, fragments), and spurious platelet counts (eg, platelet clumps).
When accurate, these alerts are quite valuable, as most large hematology laboratories use instrument flags as the basis for setting up result autoverification rules or protocols for performing smear confirmation. Thus, it is important to verify that the instrument identifies important abnormal morphologies, but that the flags are not oversensitive resulting in numerous unnecessary slide reviews.
CLSI guideline H20-A2,4 provides detailed methods for performing verification studies of the automated differential cell count and instrument flags. Essentially, it indicates using 100 normal and 100 abnormal specimens in the study (abnormal specimens refer to both abnormal distributions of cell types, such as neutropenia and lymphocytosis, and abnormal morphology, such as the presence of blasts or platelet clumps). For each sample, a 400-cell reference count should be performed, with two technologists each performing a 200-cell differential. The manual differential cell count and other morphologic observations are then tabulated along with the specific instrument flags. Based on the data, our lab prepares what we refer to as a Truth Table to summarize the findings (see FIGURE 1).
Based on the Truth Table, the laboratory can easily evaluate the performance of an instrument flag and verify whether the performance meets manufacturer claims. Obviously, if either the false negative rate or false positive rate is high, the laboratory needs to seek resolution from the vendor.
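The summary statistics behind such a truth table reduce to four counts. A minimal sketch with a hypothetical function name and made-up blast-flag counts from an assumed 200-specimen study:

```python
def flag_performance(tp, fp, fn, tn):
    """Summarize a flag truth table (instrument flag vs 400-cell
    manual reference differential) as sensitivity, specificity,
    and false negative/positive rates."""
    return {
        "sensitivity": tp / (tp + fn),      # flagged among truly abnormal
        "specificity": tn / (tn + fp),      # unflagged among truly normal
        "false_neg_rate": fn / (tp + fn),   # missed abnormals
        "false_pos_rate": fp / (tn + fp),   # unnecessary slide reviews
    }

# hypothetical blast-flag counts: 50 specimens with blasts on the
# reference differential, 150 without
perf = flag_performance(tp=45, fp=12, fn=5, tn=138)
print(perf)  # sensitivity 0.90, specificity 0.92
```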
Additional Validation Studies
The purpose of a carryover study is to determine whether the analysis of a sample with a high analyte concentration affects the analysis of a subsequent cytopenic specimen. Different vendors have different protocols for conducting carryover studies, but most protocols involve running a known high-concentration sample followed by a known low-concentration sample, and then performing a calculation that measures the percent carryover. This calculated result is then compared with the manufacturer’s claim. CLSI guidelines specify the use of fresh or manipulated whole blood for this purpose.4 Saline, diluents, and commercial controls are not acceptable substitutes for high- or low-concentration samples due to matrix effects.
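One common protocol (a sketch, not any particular vendor's required formula) runs the high specimen three times, then the low specimen three times, and compares the first low result against the third:

```python
def carryover_pct(h1, h2, h3, l1, l2, l3):
    """Percent carryover from three high-specimen runs (h1-h3) followed
    by three low-specimen runs (l1-l3): the excess in the first low
    result, scaled by the high-low difference. The acceptance limit
    comes from the manufacturer's claim."""
    return 100.0 * (l1 - l3) / (h3 - l3)

# hypothetical WBC results (x10^9/L): high specimen, then low specimen
print(round(carryover_pct(350.1, 349.8, 350.0, 1.32, 1.21, 1.20), 3))
# -> 0.034 (percent carryover)
```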
Mode-to-mode comparison should be performed if there is a difference in the aspiration fluidic pathway, sample aspiration volume, or analytical cycle. The laboratory verification process should be based on manufacturer specifications and acceptance criteria.
Per CAP requirement HEM.22000,5 the laboratory must have records that demonstrate adequate mixing of blood samples immediately before analysis. The intent is to ensure sufficient automated mixing time to homogeneously disperse the cellular elements in a settled specimen, thereby ensuring proportionate aspiration sampling of the specimen. An easy way to assess adequate mixing time is by performing a comparability study between the analysis of thoroughly mixed specimens and a reanalysis of the same specimens after letting them sit undisturbed for a few hours.
There are well known interfering substances that have the potential to cause the analyzers to generate erroneous or spurious results. Some common interferences that affect the different analytes in varying degrees include extremely high WBC count, cold agglutinins, EDTA platelet clumping, hemolysis, icterus, and lipemia. The laboratory should make every effort to include these samples in the validation studies and develop procedures to address such scenarios.
Depending on the staff resources that can be dedicated to validation testing, the entire validation process could take months. Lending to this timeframe is the duration of activities that inherently require more time, such as long-term precision studies or comparability studies (ie, the time necessary to collect and study a wide range of specimens under varying conditions), and interface validation, which is another highly detailed process.
Once the instrument has been fully validated and all of the above elements have been performed and documented (including raw data, statistical analysis, and outcomes for each step), a written evaluation of the test methods must be produced and signed by the laboratory medical director prior to patient testing. This evaluation must include a written assessment and acceptance criteria for each component of the validation study, records of investigation for any discordant results, and an approval statement that includes the following suggested verbiage: “I have reviewed the verification/validation data for accuracy, precision, reportable range, reference range studies, (include all applicable studies), for (name of instrument or test) and the performance of the method is considered acceptable for patient testing.” (2016 CAP All Common Checklist standard COM.40000)2
Remember, it is the laboratory’s responsibility to ensure proper instrument analysis and validity of results, and following a comprehensive program such as this will provide the peace of mind that your hematology instrumentation is performing as expected.
Su-Chieh Pamela Sun, MPA, MT(ASCP), is the program manager of the Central Laboratory and Specialty Labs at New York Presbyterian Weill Cornell Medical Center in New York. She received a BS in clinical laboratory sciences from SUNY at Stony Brook and an MPA from Robert F. Wagner Graduate School of Public Administration at New York University. Pam is a senior consultant for Healthcare and Laboratory Advisory (HCLA) consulting firm, as well as a voluntary laboratory inspector for the College of American Pathologists (CAP).