Healthcare Visualizations: Are You Getting the Entire Story?
The emergence of powerful and user-friendly data visualization programs (e.g., Tableau, QlikView, Spotfire, Power BI) has transformed analytical reporting. The amount of information conveyed by graphs, symbols, sizes, and colors is staggering, and the ability to “drill down” on the fly to increasing levels of granularity allows for all manner of analyses. The power and ease of creating these visualizations, combined with the increased emphasis on evidence-based decisions, puts pressure on leaders to request large amounts of data and graphics in order to make the most informed decisions possible.
The downside of this data hunger is that it leads to the creation of simplified, context-free visualizations that may inadvertently invite misinterpretation. Cramming as many visualizations as possible into a report or dashboard results in dumbed-down graphs with critical information missing. It’s like reading a story with key details left out: the reader is forced to mentally fill in the blanks, and without all the relevant details, it’s impossible to understand the full story.
Telling an incomplete story can lead to misinterpretation of the visualizations, most often in the form of a false positive (believing a change has occurred when it really hasn’t). Worse, the reader feels confident in that interpretation because it was built upon data, albeit data that is incomplete and lacking context. These misinterpretations are often met with knee-jerk reactions to correct the “change,” leading to unnecessary actions that waste time, effort, and money. Employees focus on low-priority issues, and incentives may be mistakenly awarded based on a faulty interpretation of the data.
The importance of providing context for visualizations is illustrated below within the domain of patient satisfaction, although the lessons apply to any area of data-driven decision making. We share tips for overcoming common pitfalls to ensure that the entire story is being told, and to illustrate our points, we show how data simulated from “in control” processes can be made to appear as if change is occurring when it really isn’t.
Include a Comparative Frame of Reference
When a change does occur or a true trend appears, it may be difficult to know whether the change is expected or unplanned. There may be seasonal (e.g., day of week, month, quarter) or market-based (e.g., political/regulatory, economic, or social/cultural) effects influencing the metrics. Having a frame of reference with regard to past performance, as well as the performance of peers, enables the organization to accurately and confidently evaluate current performance.
Seasonal effects should be accounted for via inclusion of trend lines of performance over a comparable time period (e.g., last year). This serves as a frame of reference to which the current performance can be compared. If a consistent dip or upturn during a comparable time period is noticed, it can be inferred that there is the possibility of a seasonal effect above and beyond any true change to the metric of interest. It is also helpful to include integrated peer performance graphs to better understand performance. A downward trend may not be of concern if the rest of the industry is heading in the same direction.
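The prior-year comparison described above can be sketched in a few lines of Python. All of the monthly scores below are made up for illustration, not real survey results:

```python
# Compare current-year monthly satisfaction to the same months last year,
# so seasonal swings aren't mistaken for real change.
# All numbers are illustrative, not real survey data.
prior_year   = [72, 71, 73, 75, 78, 81, 83, 80, 77, 74, 72, 71]
current_year = [73, 72, 74, 76, 79, 82, 84, 81, 78, 75, 73, 72]

# Year-over-year difference per month: if the seasonal shape simply
# repeats, the differences stay small and roughly constant.
yoy_diff = [c - p for c, p in zip(current_year, prior_year)]
print(yoy_diff)
```

Here the current year peaks at midyear just as the prior year did, so the midyear rise reflects seasonality rather than improvement; the month-by-month differences tell the real story.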
The inclusion of vertical reference lines along the time axis is helpful in documenting known changes in the healthcare market. For example, if a major facility is undergoing renovation or a new healthcare law goes into effect, it may be helpful to mark the beginning (and potentially the end) on the graph to identify and isolate periods of time that may have been affected. This also accounts for changes in the variable of interest based on the event’s timing.
Figure 1 appears to show an upward trend in patient satisfaction up through the midpoint of the year, followed by a decline. However, before drawing any conclusions, the performance should be compared to a similar time frame in the past and/or relative to peers.
Figure 1: One year of data with upward trend
Figure 2 shows the same line graph along with the prior year’s data (orange line), showing what appears to be a seasonal effect where satisfaction peaks at the midpoint of the year. Consequently, the increases in the middle of the current year do not represent actual improvement because they are expected to happen based on what was observed the prior year. What would be worrisome is if the organization didn’t show increases.
A vertical reference line is placed at month 4 to mark the beginning of a facility renovation which could potentially affect patient satisfaction due to aesthetics, noise, inconvenience, etc. If we looked at the current year’s data in isolation, we may assume that the facility renovation had an unexpected positive effect on satisfaction. However, as discussed above, the inclusion of the prior year’s data provides evidence that the increase is merely a seasonal effect.
Figure 2: One year of data from current year (blue line) and prior year (orange line)
Account for Natural Variability
If average patient satisfaction is higher during the current time period compared to the previous, can we say that satisfaction increased? Not necessarily. The averages for any given time period are based on samples of a patient population. Even if patient satisfaction has not changed, variation in sample averages from one time period to the next is to be expected due simply to the luck of the draw of the participants included in your sample. To definitively say whether a change has occurred, the natural variability of the data must be taken into account.
The best way to account for natural variability in a graph is to include upper and lower control limits: horizontal lines across the chart that depict where the data are expected to fall, assuming the current average is the same as the historical average. The inclusion of control limits effectively turns the graph into a control chart, a popular tool within quality control. As long as the values stay within the control limits and don’t meet any of the criteria listed here, assume no change has occurred.
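One common way to compute such limits is the individuals (XmR) chart, where sigma is estimated from the average moving range between consecutive points. A minimal sketch, with made-up satisfaction scores:

```python
from statistics import mean

def control_limits(values):
    """Center line and 3-sigma limits for an individuals (XmR) chart.
    Sigma is estimated as (average moving range) / 1.128, the standard
    XmR constant; other chart types use other estimators."""
    center = mean(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    sigma = mean(moving_ranges) / 1.128
    return center - 3 * sigma, center, center + 3 * sigma

# Illustrative monthly satisfaction scores (not real data)
scores = [74, 76, 73, 77, 75, 74, 78, 72, 75, 76, 74, 73]
lcl, center, ucl = control_limits(scores)
print(all(lcl <= s <= ucl for s in scores))  # True: every point within limits
```

Because every point sits inside the limits, the month-to-month wiggle in these scores is exactly the kind of variation the process produces on its own.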
Figure 3 shows monthly satisfaction data that rises sharply, only to fall and rise again, then gradually decrease with an increase at the very end. There’s a lot going on in that story … except that nothing is happening. Keep in mind that the data used here was simulated around a known average, so there is no pattern despite what you think you see.
Figure 3: Monthly data
When looking at the same data in the format of a control chart in Figure 4, it becomes clear that the ups and downs are just random variability, or white noise, because the data fit comfortably within the control limits. Until values appear outside of the control limits or meet the criteria referenced above, assume that no change has taken place. Of course, it would also be great to include a comparative line or two in this graph, but for the sake of visual parsimony, one was not included.
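A simulation along these lines can be reproduced in a few lines; the mean, spread, and seed below are arbitrary choices for illustration, not the values behind the actual figures:

```python
import random
from statistics import mean

random.seed(0)  # any fixed seed makes the example reproducible
# Simulate 24 months from a stable ("in control") process: the mean and
# spread never change, so any visible "trend" is pure sampling noise.
scores = [random.gauss(75, 2) for _ in range(24)]

# 3-sigma limits with sigma estimated from the average moving range
# (2.66 = 3 / 1.128, a common XmR-chart formulation)
mr_bar = mean(abs(b - a) for a, b in zip(scores, scores[1:]))
center = mean(scores)
lcl, ucl = center - 2.66 * mr_bar, center + 2.66 * mr_bar

in_limits = sum(lcl <= s <= ucl for s in scores)
print(f"{in_limits} of {len(scores)} points within limits")
```

Eyeballing a line chart of `scores` will suggest runs, dips, and rebounds, yet by construction nothing changed; the control limits are what keep that illusion in check.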
Figure 4: Monthly data with center line and control limits
The inclusion of control limits also mitigates the problem of misleading scales, where the graph zooms in on a narrow range of the vertical axis, making even minor changes appear large, as is common with sparklines. The control chart uses the data’s past variability to size the axis so that it displays the range within which over 99 percent of the sample averages are expected to fall.
Data visualization is a form of story-telling, so it’s important the visualizations make the story clear through the graphics and details. All graphics need to have a proper frame of reference, whether it be a properly sized axis, visual cues like control limits that show the typical range of data values, data from comparable time periods, or peer performance.
The ability to enact meaningful changes within healthcare is predicated on the ability to accurately describe the healthcare environment and detect trends. Therefore, much care must be put into how this information is presented to and interpreted by decision-makers within healthcare organizations. Otherwise, resources may be misallocated and opportunities to conduct meaningful change are squandered.
* This topic is so important that Health Catalyst University recently launched its Accelerated Practices (AP) program to train leaders in the appropriate use of data in decision-making.
Would you like to use or share these concepts? Download this quality improvement presentation highlighting the key points.