As hospital laboratories continue converting from profit centers to cost centers, senior administrative staff have begun shifting their attention. Amid this change, and as speculation over the impact of repealing and/or replacing the Affordable Care Act adds further uncertainty, many organizations face workforce reductions, added steps in position approval processes, and hiring freezes that effectively prevent recruiting. In this heightened, cost-conscious health care environment, in which organizations use benchmarking agencies to identify additional cost-cutting opportunities, it is critical to understand the lab’s labor costs in relation to its productivity, not only to justify additional full-time equivalents (FTEs), but also to defend current FTE allocations.
Labor costs represent the single largest item in the lab’s budget. Effective lab management therefore requires an in-depth understanding of the components of productivity measures and how they factor into benchmarking applications. Health system finance directors and CFOs rarely have intimate knowledge of the laboratory business, so laboratory managers and directors must partner with their finance departments to bridge these knowledge gaps. Otherwise, labs often face directives to reduce FTEs further to meet budget shortfalls.
Determining Labor Costs
There are three broad approaches to analyzing labor costs:
- Evaluate institutional labor costs in terms of employment relationships
- Conduct a comprehensive review of the technical workflow functions and tasks performed
- Use an accounting and budgeting analysis that evaluates the labor costs involved in delivering a billed test and/or service
Institutional Labor Costs
Institutional laboratory costs involve all four parts of the employment cycle:
- Recruitment and acquisition costs include position advertising and pre-screening activity costs. Certain pre-recruitment costs also should be factored in, as position replacement requests require multiple documents and the time to collate productivity trends, position justifications, formal request forms, job descriptions, and other supporting documentation.
- Training, development, and competency costs are initiated once an applicant begins work. Depending on the organization, the lab, and the position, the time required to mentor new hires can be extensive and costly.
- Individuals become operational and productive in the third stage of employment, and labor costs are ideally outpaced by productivity gains as individuals execute the functions they were hired to perform.
- Termination or other employee separation, either voluntary or involuntary, can incur high costs. Performance-related terminations often require substantial counseling, remedial training, and review.
Technical Evaluation
Technical evaluation is the second approach to analyzing labor costs. Most labs already use this process to establish lab fees when onboarding new tests or altering existing test methods. This approach identifies and quantifies the labor component of a test method by determining how much time it takes to perform each task. The review must incorporate the total time from request and collection of the specimen, through test performance, to release of the final result.
The Clinical & Laboratory Standards Institute (CLSI), the American Society for Clinical Laboratory Science (ASCLS), and other accreditation and regulatory bodies suggest breaking the testing process into three basic phases: pre-analytical, analytical, and post-analytical. By separating the review into parts, managers can focus their attention on the tasks requiring the greatest effort, which are also likely driving the costs.1
Accounting and Budget Analysis
An accounting and budget analysis is the third approach to labor analysis. In the finance model, salary and wage costs can be categorized into total hours paid, actual hours worked, and hours paid as benefit. These indicators (or ratios) can be used to monitor staffing levels, productivity, and management performance against budget targets.2 Payroll departments categorize hours into groups to assess various operational measures. Definitions vary somewhat; for example, institutions may not agree on whether hours related to education, training, and competency assessment are productive. In general, total paid hours encompass all paid hours, productive and non-productive, including education and paid time off. Productive hours are actual hours worked, including overtime; non-productive hours are typically those paid for sick time, vacation, or jury duty, ie, paid hours when the employee is not actively working in the lab.
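The relationships among these paid-hour categories can be sketched as follows. All category names and hour figures below are hypothetical, and, as noted, institutions differ on how education hours are classified.

```python
# Hypothetical paid-hour breakdown for one biweekly pay period (illustrative only).
worked_hours = 72.0      # actual hours worked, including overtime (productive)
education_hours = 4.0    # training/competency -- classified differently by institution
pto_hours = 8.0          # vacation, sick time, jury duty (non-productive)

# Here, education is treated as non-productive; some institutions count it as productive.
productive_hours = worked_hours
non_productive_hours = education_hours + pto_hours
total_paid_hours = productive_hours + non_productive_hours

# A common monitoring ratio: the share of all paid hours that are productive.
productive_ratio = productive_hours / total_paid_hours
print(f"Total paid: {total_paid_hours} h; productive ratio: {productive_ratio:.1%}")
```

Reclassifying the education hours as productive would raise the ratio, which is exactly why the definitional agreement described above must precede any cross-facility comparison.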
Although somewhat controversial, a productivity index can be useful when comparing operations against other facilities and labs. Productivity metrics require a structured model incorporating billable procedures, workload units, and workload systems, and these models often omit measures of creativity, task timeliness, and service quality. However, the overall function of a productivity metric is not to be an all-encompassing measure.
Additionally, all labs struggle with analyzing value-added tasks that do not receive credit for productivity, such as retyping blood products, quality control, maintenance functions, etc. Regardless, it is worth noting that every lab will have similar constraints in gaining credit for these tasks.
To determine productivity measurements, the lab must define units of work and units of labor in terms of inputs and outputs. A unit of labor is defined in units of time, typically minutes or hours. A unit of work can be difficult to determine, but managers often use time studies, relative value units (RVUs), or task effort estimates. In hospitals, the most frequently used measurements are:
- Hours paid per patient day or daily census
- Hours paid per outpatient/ER visit
- Hours paid per number of billable tests
- Hours paid per weighted workload unit2
While none of these options is ideal, most labs use either hours per billed test or hours per weighted workload unit. Admittedly, not all tests require the same amount of effort. For example, the effort and time for a body fluid cell count will vary depending on whether the lab uses an automated method or a manual-count method. The definition of what constitutes a test can also vary: is a CBC considered one test or seven? Here again, credit is not given for quality-related tasks, researching test questions for physicians or clients, instrument malfunctions, phone calls, or other related work.
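As a minimal sketch of the most common ratio, hours paid per billed test (all volumes and hour counts below are hypothetical):

```python
# Hypothetical monthly figures for one lab section (illustrative only).
paid_hours = 2800.0     # total paid hours for the section
billed_tests = 21000    # standard billed tests over the same period

hours_per_billed_test = paid_hours / billed_tests
# The inverse -- billed tests per paid hour -- is often easier to discuss.
tests_per_paid_hour = billed_tests / paid_hours

print(f"{hours_per_billed_test:.3f} paid hours per billed test")
print(f"{tests_per_paid_hour:.1f} billed tests per paid hour")
```

Note that whichever definitional choices are made (eg, whether billed phlebotomy procedures count toward `billed_tests`) must be held constant across every lab in the comparison, or the ratio is meaningless as a benchmark.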
The predominant feature to consider when selecting a productivity metric is sensitivity to volume changes: as workload volumes fluctuate up or down, the metric must move in the same direction. Using billable tests per hour has advantages and disadvantages. The advantages include the ease of pulling data from finance reports, the best (although not perfect) basis for comparison, and the use of standard billed tests as established through CMS, insurance payers, and CPT coding. However, billed tests are open to interpretation. For example, not all organizations include billed phlebotomy procedures in their data. By definition, “billed tests” do not include quality checks, test research efforts, specimen manipulation, or any other lab practice necessary to ensure test quality. Additionally, tests can be double-counted if one lab refers testing to another lab within the same system and there is a cost allocation from the testing lab to the original lab.1
Once productivity metrics are established, laboratories can make comparisons against other laboratories to identify potential process improvement opportunities or document processes in which the laboratory is more efficient than its external peers. Many benchmarking systems provide services to clinical laboratories, with benchmarking data available at a cost. As with every vendor, selection should be based on overall objectives, suitability of peer group sizes, and service and support beyond the comparative data.
Organizations strive to rank in the top quartile or higher. Of course, because this goal is widely shared, the target becomes more difficult to achieve: as the ranges tighten, operational efficiency changes have less impact on movement across percentile rankings. Ultimately, comparisons are made within peer groups of similar size, complexity, and key features such as the number of clinic locations supported, reference lab work, and courier service.
For benchmarking purposes, consistency in definition is a critical but often missed step. Labs must understand what is “in” and what is “out” in the total numbers. While the standard billed test (SBT) is likely the best comparative option for most labs, it requires that the tests are ordered by a physician or provider; are associated with a CPT code; generate a result, product, or billed phlebotomy procedure; and are performed by lab personnel.3
Weighted Workload Estimate Benchmarking
Using weighted workload estimates (WWLs) is another option for benchmarking. Essentially, the WWL assigns a value to a specific task in a standardized manner; typically, each unit is one minute. There are multiple methods for calculating the WWL, which can be arranged in a sequence of good/better/best. In one method, experts estimate the time needed to perform a task. In another, techs perform tests in a simulation to determine completion times. A third method uses logbooks in which techs record each step of the process over designated periods. The fourth, and likely best, method uses time-motion studies in which observers watch techs perform the tests. Each option requires time, effort, and energy to complete, with higher accuracy requiring more time and effort.3
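Once unit values are assigned, the WWL arithmetic itself is simple: multiply each task's volume by its unit value, sum, and convert to hours and FTEs. The task names, unit values, volumes, and the productive-hours-per-FTE assumption below are all hypothetical.

```python
# Hypothetical WWL table: minutes per procedure (one workload unit = one minute).
wwl_minutes = {
    "cbc_automated": 3.0,       # automated method
    "body_fluid_manual": 15.0,  # manual count takes far longer
    "retic_miller_disk": 10.0,  # manual Miller disk reticulocyte count
}
monthly_volume = {"cbc_automated": 4000, "body_fluid_manual": 120, "retic_miller_disk": 60}

total_units = sum(monthly_volume[t] * wwl_minutes[t] for t in wwl_minutes)  # in minutes
total_hours = total_units / 60.0

# Rough FTE requirement, assuming ~160 productive hours per FTE per month (hypothetical).
ftes_needed = total_hours / 160.0
print(f"{total_units:.0f} workload units = {total_hours:.1f} h = {ftes_needed:.2f} FTEs")
```

The table also illustrates why WWLs must be re-validated whenever a method changes: automating the body fluid count would shrink its unit value and shift the FTE estimate.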
While used heavily in the past, WWLs are difficult to maintain and lack the flexibility to keep up with ever-changing technologies. WWL measurements also need to be adjusted for manual versus automated functions; for example, reticulocyte counts performed on a cell counter versus a Miller disk require different amounts of time to complete. While WWL estimates may be difficult to maintain for ever-changing technology, they have utility in staffing models. For example, in a hospital where phlebotomists are deployed to the floors, Phleb A may be assigned an OB floor with a requirement to collect samples from 5-7 patients, while Phleb B may be assigned to the NICU to collect samples from 2-3 babies.
Acceptable productivity levels are often arbitrarily defined. Initial targets tend to be based on perceived units-per-labor standards and then adjusted over time. Many variables, such as whether labs are computerized or whether lab services support other functions of the hospital, contribute to the final “score” of a section or lab. Many leaders rely on general indicators, such as the number of billed tests per FTE to determine efficiency. Benchmarking and percentile rankings provide direction regarding whether additional FTEs are needed or reductions may be warranted.
Understanding how department FTEs are calculated, whether productive or non-productive, is vital to evaluating the productivity and benchmarking of the lab. Using billed units supports the productive side of the coin, but each lab should ask how the non-productive side is captured. One strategy quantifies the hours consumed by non-productive allocations by considering what takes productive techs or staff members away from producing a unit: How many mandatory hospital staff meetings per year? How many department and/or section meetings per year? What is the impact on productivity when students are in the department? Are team members working on projects that will remove them from the production line?3 Asking these questions and quantifying the hours per employee will assist in measuring non-productive time, which can help to defend an FTE targeted for removal. If the organization or hospital does not account for these non-productive functions, it does not mean they do not exist.
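Answering those questions produces a per-employee tally that can be converted into an FTE fraction. The categories and hour figures below are hypothetical placeholders for one technologist's year.

```python
# Hypothetical annual non-productive hours for one technologist (illustrative only).
non_productive = {
    "mandatory_hospital_meetings": 6 * 1.0,   # 6 meetings x 1 h each
    "department_section_meetings": 12 * 0.5,  # 12 meetings x 30 min each
    "competency_assessment": 8.0,
    "student_mentoring": 20.0,                # slowdown while students are in the department
    "special_projects": 16.0,                 # time off the production line
}
annual_hours = sum(non_productive.values())

# Fraction of a standard 2,080-hour FTE year spent away from producing units.
fte_fraction = annual_hours / 2080.0
print(f"{annual_hours:.0f} non-productive h/yr = {fte_fraction:.1%} of an FTE")
```

Summed across a whole department, even a few percent per employee can amount to a meaningful share of an FTE, which is precisely the figure needed when defending a position targeted for removal.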
Two tools can be used to monitor department FTEs and justify existing positions. The first, a Position Control Chart (see FIGURE 1), uses an Excel file to track positions. Applicable at budget time or during the year to assist in position replacement management, it provides visual control of the positions approved for lab operations, the FTEs allocated by position, and whether each individual is functioning as a productive employee or a trainee. The chart illustrates the positions needed, the shifts assigned, and whether positions require replacement.
In FIGURE 2, the top portion shows the number of FTEs assigned by position type, as provided by the finance department; adjustments were made to correct inaccuracies in the finance data. The bottom portion depicts a typical schedule using the FTEs provided by finance. In detail, the chart shows the number of FTEs by position type for the three shifts and weekends, and the FTEs needed for vacation coverage. Ideally, the chart should also include FTEs for competency, training, special projects, and other functions to account for non-productive time.
Comparing the top and bottom sections of FIGURE 2 creates a clear picture of the FTEs assigned and how they are used in the laboratory. In the event of a workforce reduction, this information provides a foundation for conversations among laboratory leadership, finance, and senior system administrators, and it is more likely to favor the lab than anecdotal claims of need.
Health care system finance managers and senior leaders often are unaware of the complexity of laboratory operations and all that is required to maintain good laboratory practices. Therefore, laboratory directors and managers are obligated to educate management and demonstrate how the lab functions at the FTE level. Understanding productivity metrics, benchmarking, and the productive and non-productive sides of FTEs are critical to any lab management efforts to add or even defend FTEs. To enable this, laboratory directors must embrace benchmarking applications that provide directional information to improve laboratory operations. Failing to do so will place the lab in a position for increased scrutiny, which is a challenging place to be. After all, laboratorians are comfortable looking into the microscope, not being under one.
1. Nowicki M. The Financial Management of Hospitals and Healthcare Organizations. 4th ed. Washington, DC: Health Administration Press; 2008:257-274.
2. Travers EM. Methods and models for measuring laboratory productivity, costs, and staffing. In: Clinical Laboratory Management. Baltimore, MD: Williams & Wilkins; 1997:411-485.
3. Patton WD, Witt S, Lovrich N, et al. Salary and wage management. In: Human Resource Management: The Public Service Perspective. Boston, MA: Houghton Mifflin; 2002:223-234.
Ron Purkapile, FACHE, MSc, DLM(ASCP)CM, SSGB, is the regional director for laboratory services of Ascension Wisconsin. Prior positions include administrative laboratory director at St. Mary’s Hospital in Madison, administrative laboratory director at St. Michael’s Hospital in Stevens Point, and supervisor positions within Dean Health System, in Madison and Janesville, Wisconsin. Ron is a member of CLMA and is the current President for the Wisconsin Chapter. He was recently elected to the International CLMA Board of Directors and is serving as Vice Chair to the Legislative, Compliance, and Regulatory Committee. Ron’s professional passion revolves around quality-driven, systematic approaches to improvement.