Middleware Is a Powerful Director's Tool

April 2017 - Vol.6 No. 3 - Page #6
Category: Middleware

As automated systems grow in complexity and laboratory departments consolidate into core laboratories, there is a persistent need to integrate technologies and control data flow in ways that classical laboratory information systems (LIS) cannot always accommodate. When advanced diagnostic analyzers are combined with the process leveling provided by line automation (ie, tracks), the complicated interactions between analyzers and automation components can produce unnecessary obstacles.

Middleware (MW) is a type of software that serves as a conduit (and sometimes translator) for data flowing between diagnostic hardware and the LIS (see FIGURE 1). Although a basic MW communication bridge between an analyzer and an LIS may be simple, the scenario can become geometrically complex when factoring in the components of an entire clinical laboratory operation: a system of devices that often requires both independent and interdependent relationships with one another, as well as with the LIS or other information databases (see FIGURE 2). In order to maintain control over these interactions, MW establishes a system of rules to govern and analyze data flow.

MW rules are generally designed to perform the following functions, among others:

  • Route specimens to analyzers as they become available
  • Trigger specimen recall from storage via online modules
  • Create and route error alerts
  • Add or suppress test orders
  • Initiate appropriate repeat testing and dilutions
  • Process, hold for verification, and auto-verify results
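As an illustrative sketch, rules of this kind can be thought of as simple condition-action logic. The example below is hypothetical (the `Specimen` structure, thresholds, and disposition names are assumptions for illustration, not any vendor's actual MW API); it shows how a result-handling rule might decide between auto-verification, holding, and repeat testing:

```python
# Hypothetical sketch of an MW-style result-handling rule.
# Names and thresholds are illustrative, not a real vendor API.

from dataclasses import dataclass, field

@dataclass
class Specimen:
    specimen_id: str
    test: str
    result: float
    flags: list = field(default_factory=list)  # analyzer error flags, if any

def route_result(s: Specimen, low: float, high: float) -> str:
    """Auto-verify results inside the verification range; hold or repeat the rest."""
    if s.flags:                      # any analyzer error flag forces manual review
        return "hold_for_verification"
    if low <= s.result <= high:
        return "auto_verify"
    return "repeat_with_dilution" if s.result > high else "hold_for_verification"

print(route_result(Specimen("S001", "GLU", 95.0), low=60, high=110))   # auto_verify
print(route_result(Specimen("S002", "GLU", 480.0), low=60, high=110))  # repeat_with_dilution
```

Real MW systems express such rules through configuration screens rather than code, but the underlying logic of each rule reduces to conditions and dispositions like these.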

The advances in MW in recent years have been impressive and include interfaces to and interaction with real-time quality control software applications, moving-averages modules, and improved database and reporting features. While all of these tools help the lab function in a symbiotic manner, they also can be leveraged to provide the lab director with a clear view of lab operations.

The Director’s Role in Middleware Projects

Ideally, the laboratory director connects provider requests to the creation of MW rules, reviews and approves MW rules, and suggests updates to workflows that interact with rules, as needed. In practice, however, the technical staff that interacts most with testing and rules (eg, department supervisors, bench techs, and information technology analysts) often gains expertise in MW optimization, while the lab director, less well-versed in MW functionality, takes a back seat during MW configuration. The potential pitfalls created by this scenario can be far-reaching. Unfamiliarity with MW rules and tools limits the director’s understanding of the true capabilities of MW and impedes the ability to troubleshoot. Most MW rules involve specimen movement as well as data communication, and a rules update that works on paper may not translate to proper function. Thus, a well-intended effort to fix a reported issue, without proper investigation, understanding, and assessment, can result in unnecessary changes to rules that could undermine the design of the MW rule structure.

Non-involvement in the initial phases of a MW build can lead to frustration when issues do arise, and can prompt rejection of a MW system that is actually well-designed and functioning properly. Therefore, it is optimal for lab directors to be purposely involved in MW projects. This includes undergoing the same training that IT analysts and technical staff receive, establishing a role in rule design and validation, authoring processes for system backup and track-based job aids, configuring and validating aliquot tubes, providing and/or attending live track training with technical staff, participating in troubleshooting during go-live, and giving post-go-live training at the automation track. These actions position the director as a resource who not only understands how the MW is configured, but also knows how it contributes to client workflows, results delivery, and client and patient satisfaction.

Creating a Middleware Command Central

Following training and education on the MW, it is important to provide the director with a set of tools for investigation and analysis. An important feature of practically all prominent MW systems is a specimen management (SM) interface window (see FIGURE 3). Such interfaces tend to be highly configurable and can contain as much information as is desired.

Directors can create copies of SM windows, turn off the refresh feature, and remove panes or unneeded elements, rendering them director-specific windows that allow quick access to current global information for investigations. Further, SM windows can be filtered to create basic reports in the form of a filtered test worksheet containing focused information such as:

  • Ad-hoc turnaround times (TATs) for selected tests
  • Test run investigation data
  • Reference interval data
  • Trend reports
  • Shift-specific data

Copy-and-paste and/or export functions allow rapid data transfer to Excel or other spreadsheet-based data analysis tools.

Case Examples of Middleware as a Solution

Automated Detection of Incorrectly Loaded Whole Blood on a Track

An ambulatory patient has blood drawn for routine laboratory testing and the results are abnormal enough to be suspicious, but not abnormal enough to escape auto-verification. Per the initial MW configuration, hemolysis and lipemia index testing is performed, but only for tests that are usually impacted by the index. In this particular case, both the hemolysis and lipemia are tested and graded “gross,” but minimally impact the tests ordered, so results are auto-verified and escape the review of laboratory staff. Once the ordering provider calls in the issue to the laboratory, lab staff members investigate the instrument, run quality controls, and unsuccessfully search for other possible causes of failure before bringing the issue to the director. The director, using only MW-based tools, quickly concludes that the specimen should be pulled and inspected for centrifugation, as it may be whole blood. The specimen is pulled from storage, and inspection quickly determines that it was loaded into the wrong 96-position rack, which subsequently distributed it to a lane intended for specimens that had already been centrifuged. This simple mistake caused the specimen to bypass separation and proceed directly to the analyzer.

What the director saw that the technical staff did not were the raw scores from the index testing. The audit trail (seen at the top of the SM window in FIGURE 3) contained detailed information not usually needed to perform testing, but which revealed raw index data and how rules interact with them; this is a MW feature that laboratory staff virtually never use, but which the director uses daily for investigations. A modified version of the SM window, tailored to be a turnaround-time report, helped the director quickly locate and assess recent index testing. From the index data, the director was able to find patterns in the index results that revealed possible whole blood specimens. These data enabled the director to expand existing index-handling rules in the MW to catch these specimens and prevent any tests or calculations from being released pending manual intervention, specimen review, and repeat testing. In this example, the existing rules structure did not require significant alteration to achieve the process change, which involves testing indices for every serum- and plasma-based test on the track, and preventing release of all results associated with specimens that have a signature pattern of “gross” hemolysis and lipemia. The additional testing does not affect throughput on the track, uses a minimal amount of specimen, and has enabled laboratory staff to detect and correct mistakes before results are reported.
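The signature pattern described above can be sketched as a simple rule. The cutoffs below are hypothetical placeholders for whatever raw-index values a given platform grades as “gross,” and the function and disposition names are assumptions for illustration:

```python
# Illustrative rule for flagging suspected whole-blood specimens from
# serum index data. Cutoffs for a "gross" grade are assumed, not real
# platform values.

GROSS_HEMOLYSIS = 500   # hypothetical raw hemolysis-index cutoff
GROSS_LIPEMIA = 400     # hypothetical raw lipemia-index cutoff

def suspect_whole_blood(hemolysis_index: float, lipemia_index: float) -> bool:
    """The signature pattern: simultaneously 'gross' hemolysis AND lipemia."""
    return hemolysis_index >= GROSS_HEMOLYSIS and lipemia_index >= GROSS_LIPEMIA

def disposition(hemolysis_index: float, lipemia_index: float) -> str:
    """Hold all results for review when the whole-blood signature is present."""
    if suspect_whole_blood(hemolysis_index, lipemia_index):
        return "hold_all_results"   # manual intervention, review, repeat testing
    return "release_per_usual_rules"
```

The key design point is that the rule keys on the combination of both gross indices, since either index alone can be gross for legitimate reasons.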

Indication of Test Interference by Staff

A complaint is received in the laboratory from the Emergency Department (ED) indicating a significant delay in cardiac troponin testing on the previous evening shift. Further discussion with the ED suggests that significant delays in needlestick testing tended to occur on the evening shift. In this case, the lab director can run a TAT report that is filtered by test and contains the specimen number, order time, result time, and test result. The resulting lists are easily exported to Excel for filtering by ordering location and result time and for calculation of TAT, enabling the director to compare testing records with shift-performance data. True to the ED’s report, a pattern emerges indicating the evening shift has a longer average TAT for troponin testing compared to other shifts, largely due to a greater number of TAT outliers.
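The shift comparison above amounts to grouping exported TAT rows by the shift in which each result was released and averaging. A minimal sketch, assuming hypothetical shift boundaries and a simple (order time, result time) export format rather than any MW's actual layout:

```python
# Minimal sketch of a TAT-by-shift comparison over exported report rows.
# Shift boundaries and the row format are assumptions for illustration.
from datetime import datetime
from statistics import mean

def shift_of(result_time: datetime) -> str:
    """Assign a result to a shift (assumed 8-hour shift boundaries)."""
    h = result_time.hour
    if 7 <= h < 15:
        return "day"
    if 15 <= h < 23:
        return "evening"
    return "night"

def tat_by_shift(rows):
    """rows: (order_time, result_time) tuples; returns mean TAT in minutes per shift."""
    shifts = {}
    for ordered, resulted in rows:
        tat_min = (resulted - ordered).total_seconds() / 60
        shifts.setdefault(shift_of(resulted), []).append(tat_min)
    return {shift: round(mean(tats), 1) for shift, tats in shifts.items()}

rows = [
    (datetime(2017, 4, 1, 8, 0), datetime(2017, 4, 1, 8, 45)),    # day shift
    (datetime(2017, 4, 1, 18, 0), datetime(2017, 4, 1, 19, 30)),  # evening shift
]
print(tat_by_shift(rows))  # {'day': 45.0, 'evening': 90.0}
```

In practice this is exactly the kind of calculation the Excel export supports; the point is that the director can perform it from MW data alone, without an LIS report request.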

Audit trails of testing flow patterns show that one particular clinical laboratory specialist (who always works the evening shift) produces longer TATs on average than anyone else. Review of the data with the specialist reveals markedly more frequent manual downloading than by any other tech. (The manual download function resends order information from the LIS to the MW, thus initiating another [duplicate] test within a specific order. It is intended to correct rare errors in communication between an analyzer and the MW by “refreshing” the order; typically, this is only necessary when the initial testing attempt creates an error flag that cannot be reconciled by the MW.) The tech claims the testing takes too long, and that initiating a manual download produces results more quickly. What the tech is unaware of is that inappropriate use of the manual download feature creates new tests, and hence more results to evaluate, as well as wasted reagents and time. By reviewing the data summary with the specialist, it is easy to clarify that the testing is not unnecessarily slow, and that the tech’s actions are actually creating unnecessary work. Thus, unbiased data derived from the MW can be key to exposing misperceptions about the testing process and enabling targeted intervention and education.
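Surfacing this pattern is a simple tally of manual-download events per tech over the audit trail. A sketch, assuming a hypothetical (tech, action) event format rather than any MW's actual log schema:

```python
# Hypothetical per-tech tally of manual-download events from an audit trail.
# The (tech_id, action) event format is an assumption for illustration.
from collections import Counter

def manual_download_counts(audit_events):
    """audit_events: (tech_id, action) pairs parsed from the MW audit trail."""
    return Counter(tech for tech, action in audit_events
                   if action == "manual_download")

events = [
    ("tech_a", "manual_download"),
    ("tech_b", "result_review"),
    ("tech_a", "manual_download"),
    ("tech_c", "manual_download"),
]
print(manual_download_counts(events))  # Counter({'tech_a': 2, 'tech_c': 1})
```

An outsized count for one tech is the data summary the director can review with the specialist, replacing a subjective conversation with an objective one.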


Indication of Improper Test Results Handling

A complaint claiming “broken” hepatitis testing rules is relayed to the laboratory supervisor via shift log. Reactive results (which require duplicate repeats to confirm the result) are not being processed and released consistently, and the techs are manually entering results in the LIS. The supervisor inquires with IT staff about an investigation, but with ticket backlogs and short staffing due to illness, the supervisor is told it will be 2 days before the issue can be reviewed.

The director is informed of both the issue and the service ticket delay, and uses the TAT report window in the MW to pull all hepatitis tests performed in the last 30 days. Within a few minutes and without further processing, a pattern emerges identifying two specimen ID types. The first type is clearly generated by the LIS, whereas the second type contains shorthand IDs assigned to the specimen by techs manually ordering and loading repeat tests at the analyzer. By observing staff, the issue is confirmed. When LIS-based specimen IDs are used, rules are triggered and turnaround is rapid; when shorthand IDs are used, the rules cannot recognize the specimen and fail to function properly.
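The distinction the director spotted can be captured with a trivial pattern check. The ID format below is a hypothetical example, not the actual LIS numbering scheme:

```python
# Sketch of distinguishing LIS-generated specimen IDs from tech-assigned
# shorthand IDs. The LIS ID format shown is an assumed example.
import re

LIS_ID = re.compile(r"^\d{2}-\d{7}$")   # e.g., "17-0031245" (assumed format)

def id_source(specimen_id: str) -> str:
    """Classify an ID as LIS-generated or manually assigned shorthand."""
    return "lis" if LIS_ID.match(specimen_id) else "shorthand"

print(id_source("17-0031245"))  # lis
print(id_source("HEP-RPT-3"))   # shorthand
```

Because MW rules match on the LIS-generated ID, any specimen carrying a shorthand ID simply never meets a rule's conditions, which is exactly why the rules appeared “broken” only for manually reordered repeats.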

Within 10 minutes, it is clear the rules are in fact working properly, but what is needed is specific technical staff training on how to handle reactive hepatitis test results. Within days, the retraining is complete and job aids are produced to keep at the bench. No rule changes are necessary, and the problem is identified and solved rapidly without the need for an IT analyst’s time or effort.


Examples such as these demonstrate that a MW-savvy lab director is more than a technical specialist; with MW tools at hand, the director can help answer and solve informatics questions and issues by viewing problems from a broad perspective and applying informatics-based solutions. This enables the lab director and the technical staff to develop a more collaborative relationship. Techs will come to rely on the director for the investigative depth that reveals the true causative elements of technological problems.

A deep and wide view into the mechanisms of lab automation brings a few practical lessons to light. First, not every complaint requires an alteration of MW rules. Second, when a rule change is needed, its structure must be carefully studied with input from IT specialists to achieve the correct action prior to implementation. With MW tools at hand, the laboratory director can ask more pointed questions about processes, zero in on training gaps and missing job aids that would prove useful at the bench, and truly monitor the entire lab workflow. This enables the director to be in tune with rhythms in the lab, observe trends and patterns that arise, and predict potential problems when new tests or changes to workflow are introduced.

When the lab director has such insight, an internal layer of support is created for IT specialists, allowing them to focus on service requests that truly require an analyst’s work. Furthermore, most of the requests that are escalated to IT specialists already have root-cause identification and a proposed solution included in the request.

As with all automation in the clinical setting, MW is intended to facilitate the transmission of information between systems, but it also provides a view of this transmission, which can shed light on many technology-related questions. Human error is far more prevalent than computer error, and today’s MW options provide the lab director with the necessary tools to balance the field.

Danyel Tacker, PhD, is director of clinical chemistry at West Virginia University Hospital and a member of the pathology faculty in the West Virginia University School of Medicine. Her sections of the laboratory include automated and special chemistry and mass spectrometry. Danyel also assists with specimen processing, outreach, point-of-care, and send-out functions of the WVUH laboratory.
