Authors: Ahmed Taher, Joseph Choi
You are attending your first Quality Committee meeting in your ED. The team is discussing a project to decrease unnecessary urine cultures at the urgent care centre associated with the ED. The team has already created a project charter with an appropriate aim statement (1). They have engaged the appropriate stakeholders and identified best practices for ordering urine cultures in uncomplicated urinary tract infections (UTIs) (2). The team settled on a feasible intervention: a forcing function in the computerized order entry form that allows urine cultures for uncomplicated UTIs only for certain indications. The form will be part of the standing orders for the nursing staff when patients are triaged in the urgent care centre. The intervention was piloted, and the team lead presented the results of the first PDSA cycle on a run chart. You wonder: why is this data being presented on a run chart, and how can these charts be used effectively?
Welcome to another HiQuiPs post where we discuss run charts and their use in QI projects.
What is the purpose of a Run Chart?
In our previous post we discussed variation in QI projects and the difference between common cause (i.e. random) and special cause (i.e. non-random) variation (3). A run chart is a useful tool that aids in analyzing data to uncover non-random variation. There are several important uses of run charts, such as (4):
To visualize data on process performance
To determine if the changes tested resulted in an improvement in the process
To determine if the gains made by the improvement are sustained
To allow for better data analysis
The power of the run chart is that it preserves the order of events (5). This enables visualization of changes in the process over time, beyond what classical statistics comparing means can show. Using run charts with analytic samples also enables probability-based rules that assist in distinguishing random from non-random variation (more on this later!).
Run Chart Construction
A run chart is a graphical display of data plotted in some type of order. This order is usually in time units but can be other sequential units such as patients, procedures, and specific events. This represents the horizontal or X-axis on the chart. The vertical or Y-axis of the chart is the quality indicator that is being measured, such as the number of urine cultures in our example.
A median is calculated as part of a run chart (e.g., the median of 9 in Figure 1 above), and this becomes the centreline (CL). The median is the value above which half of the data points fall and below which the other half fall. It is used for the probability-based rules that assist in identifying non-random variation (the “run chart rules”).
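For readers who like to explore their data programmatically, the following is a minimal Python sketch of a run chart, assuming the freely available matplotlib library; the weekly counts are made-up illustrative values, not data from the project above.

```python
import statistics
import matplotlib.pyplot as plt

# Hypothetical weekly urine culture counts (illustrative values only)
weeks = list(range(1, 13))
cultures = [11, 9, 12, 10, 9, 11, 7, 6, 8, 5, 7, 6]

# The median of the plotted values becomes the centreline (CL)
centreline = statistics.median(cultures)

plt.plot(weeks, cultures, marker="o", label="Urine cultures")
plt.axhline(centreline, linestyle="--", label=f"Median (CL) = {centreline}")
plt.xlabel("Week (time order preserved)")
plt.ylabel("Number of urine cultures")
plt.title("Run chart: urine cultures per week")
plt.legend()
plt.show()
```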
Run Chart Displays
To illustrate the power of run charts, we will go through a few examples below (Figures 2-4). They depict a hypothetical scenario of an ED wanting to decrease inappropriate blood cultures, with an intervention introduced between weeks 6 and 7. Each chart contains exactly the same data values, but as you will see, when plotted against time they tell very different stories.
In the figure below, it seems that the intervention may have produced a positive change. Prior to the intervention, all the data points were above the median line; after the intervention, the rates were all below it. To more rigorously assess whether the intervention was responsible for the change, we will discuss probability-based run chart rules a bit later in this article.
In the next figure, we also have a situation where all the data points before the intervention are above the median and all those after are below it. However, the chart also suggests that there was a downward trend even before the intervention was introduced. This would lead you to think that there may be other factors beyond the intervention that are worth investigating to determine their impact on blood culture rates.
Finally, in this last figure, there was seemingly an upward trend in the rates of blood cultures. After the intervention, there was a sudden and dramatic drop in the blood culture rate, but the drop was not sustained and the rate returned to its steady upward trend. This again warrants further investigation into why there is an upward trend (even before the intervention) and why the effect of the intervention was not sustained.
As we can see, run charts are very helpful for seeing what is happening to a process over time (which a simple before-and-after comparison would not show).
Probability-based rules in Run Charts
Looking at Figure 2.c, it may not be clear whether the intervention was a source of non-random variation (i.e. a real change). To sort this out, we can use probability-based rules to determine whether the intervention led to a statistically significant change or whether the effect we see is due to chance alone, similar to what is accepted in classical research (5,6). With this method, the median (rather than the mean) is used in the probability-based calculations.
There are four main rules that suggest the presence of non-random variation (4,5):
Shift: Six or more consecutive points that are all either above or below the median. Points on the median do not count and are skipped when counting consecutive points.
Trend: Six consecutive points that are all continuously increasing or decreasing. Points that repeat (i.e. have the same value) do not break the trend but are not counted; you only count the first of repeated consecutive points (see the sketch after this list).
Number of Runs: A run is a series of consecutive points on one side of the median. Points that fall on the median itself can make the runs difficult to count directly. A practical way to count the runs is to count how many of the lines connecting the data points cross the median and add one to this number. Non-random variation is present when there are too few or too many runs for the number of data points on the run chart. The minimum and maximum values beyond which variation is non-random are found in tables such as the one adapted from Swed and Eisenhart, 1943 - see the table below (7).
Astronomical Data Point: This is the only non-probability-based rule and relies on visual inspection. It is a point that is noticeably higher or lower than the median relative to the other data points and appears to be an outlier. This is not the same as the maximum or minimum point of the run chart, which is always present and may be entirely appropriate. This rule also highlights the importance of having engaged stakeholders who know the process and can note what is clearly not in keeping with its usual behaviour.
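To make the trend rule concrete, here is a rough Python sketch assuming the six-point threshold described above; the helper function name and the weekly counts are hypothetical and for illustration only.

```python
def has_trend(values, run_length=6):
    """Return True if `run_length` or more consecutive points are all
    increasing or all decreasing. Repeated (equal) consecutive values do
    not break the trend, but only the first of them is counted."""
    # Keep only the first of any run of repeated values
    deduped = [values[0]]
    for v in values[1:]:
        if v != deduped[-1]:
            deduped.append(v)

    streak, direction = 1, 0
    for prev, curr in zip(deduped, deduped[1:]):
        step = 1 if curr > prev else -1
        streak = streak + 1 if step == direction else 2
        direction = step
        if streak >= run_length:
            return True
    return False

# Illustrative weekly counts with a downward trend (made-up values)
print("Trend detected:", has_trend([14, 13, 13, 12, 10, 9, 7]))
```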
An adaptation of the Swed and Eisenhart table referenced in Rule 3 is as follows:
For the example below in Rule 3: there are 10 data points and 2 runs (1 line crosses the median; add one, as discussed above). The table above shows that with 10 data points, the lower limit of runs is 3. Since there are only 2 runs, non-random variation may be present.
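As a rough illustration of Rule 3, the Python sketch below counts runs the practical way described above (median crossings plus one, skipping points that sit on the median) and checks the count against the lower limit of 3 quoted in the example; the data values are made up to reproduce the 10-point, 2-run scenario.

```python
import statistics

def count_runs(values):
    """Count runs as the number of median crossings between successive
    off-median points, plus one. Points on the median are skipped."""
    median = statistics.median(values)
    off_median = [v for v in values if v != median]
    crossings = sum(
        (prev > median) != (curr > median)
        for prev, curr in zip(off_median, off_median[1:])
    )
    return crossings + 1

# Made-up data reproducing the worked example: 10 points forming 2 runs
data = [12, 11, 13, 12, 14, 6, 5, 7, 6, 5]
runs = count_runs(data)
print(f"Runs observed: {runs}")

# Lower limit of 3 for 10 data points, as quoted from the adapted table
# above; a full check would also compare against the table's upper limit.
if runs < 3:
    print("Too few runs: non-random variation may be present")
```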
Applying these rules to our own urine culture QI project data, we find the following:
Beyond visually inspecting the data and seeing that there appears to be a change after implementation, there is also a shift after the intervention, with six or more consecutive points below the median (in this case, 10 points, with one additional point on the median). With this observation, we are more confident in concluding that the intervention caused special cause variation in our process (i.e. our intervention was responsible for the drop in urine culture ordering rates).
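As an illustration of how the shift rule could be checked programmatically, here is a small Python sketch; the function and the counts are hypothetical, loosely mirroring the pattern described above (a sustained run below the median after the intervention, with one point on the median).

```python
import statistics

def has_shift(values, run_length=6):
    """Return True if `run_length` or more consecutive points fall on the
    same side of the median. Points exactly on the median are skipped and
    neither break nor extend the count."""
    median = statistics.median(values)
    streak, last_side = 0, None
    for v in values:
        if v == median:
            continue  # on the median: skipped when counting
        side = v > median  # True = above, False = below
        streak = streak + 1 if side == last_side else 1
        last_side = side
        if streak >= run_length:
            return True
    return False

# Made-up counts loosely mirroring the project's chart: baseline values
# above the median, then a sustained run below it after the intervention
# (one post-intervention point lands on the median and is skipped).
baseline = [12, 11, 13, 12, 14, 11, 12, 13, 11, 12]
post_intervention = [9, 6, 5, 7, 6, 5, 6, 4, 5, 6, 5]
print("Shift detected:", has_shift(baseline + post_intervention))
```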
Summary:
Run charts are powerful tools to visualize data in QI projects and, in combination with probability-based rules, can assist in identifying non-random variation in the processes we are investigating.
Join us on our next HiQuiPs post, where we will continue our discussion about run charts with important pearls, expanded uses, and common pitfalls.
Senior Editor: Lucas Chartier
Junior Editor and Copyeditor: Mark Hewitt
References
Taher A, Trevidi S, Chartier L, Vaillancourt S, Atlin C. HiQuiPs: Preparation Part 1 – General Considerations for ED Quality Improvement Projects. CanadiEM. https://canadiem.org/general-considerations-for-ed-quality-improvement-projects. Published August 1, 2018. Accessed October 20, 2019.
Taher A, Atlin C, Mondoux S, Dowling S. HiQuiPs: Preparation Part 2 – Stakeholder Engagement and Behavior Change. CanadiEM. https://canadiem.org/hiquips-preparation-part-2-stakeholder-engagement-and-behaviour-change/. Published September 1, 2018. Accessed October 20, 2019.
Taher A. HiQuiPs: Variation and Quality Improvement Processes. CanadiEM. https://canadiem.org/hiquips-variation-and-quality-improvement-processes/. Published October 3, 2019. Accessed October 20, 2019.
Provost L, Murray S. The Health Care Data Guide: Learning from Data for Improvement. San Francisco: Jossey-Bass; 2011.
Perla R, Provost L, Murray S. The run chart: a simple analytical tool for learning from variation in healthcare processes. BMJ Qual Saf. 2011;20(1):46-51. https://www.ncbi.nlm.nih.gov/pubmed/21228075.
Langley G, Moen R, Nolan K, Nolan T, Norman C, Provost L. The Improvement Guide: A Practical Approach To Enhancing Organizational Performance. San Francisco: Jossey-Bass; 2009.
Swed F, Eisenhart C. Tables for testing randomness of grouping in a sequence of alternatives. Ann Math Stat. 1943;14:66-87.