HealthManagement, Volume 17 - Issue 4, 2017

What data analysis tools can healthcare implement to streamline operations and improve efficiency?

Data analysis is essential to running many organisations, and its importance has only grown over the last several years. With cheap computing power and storage it has become possible to analyse vast amounts and varieties of data to improve business performance. The term “Big Data” was coined and overhyped, and in TV commercials everybody could see companies claiming their Big Data technology could improve retail performance or jet engine maintenance. The truth is that all of this is possible and is actually done in many industry verticals, but healthcare is, as usual, slow to adopt these potentially “game changing” technologies – but change is coming. Some simple tools I use in my BI lectures could be used to improve the running of a healthcare institution without a big investment, while other tools require more due diligence and a good partner.


A useful distinction in business intelligence is “operational analytics” versus “advanced analytics”. Operational analytics has been around for some time, for example to analyse coding and look for discrepancies that would allow re-coding and up-coding. But there are many more aspects of operational analysis that a healthcare institution should implement.


A simple improvement can be achieved by mapping referrals by referring physician and referring zip code. This can be easily accomplished with a free tool called Google Fusion Tables, which supports geo-coding and mapping. In this way, a heat map can be generated to identify whether a certain disease is prevalent in a certain zip code, or whether some referral regions are more active than others. While the disease heat map can be used for epidemiological research, the referral map can be used for marketing and outreach purposes.
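The aggregation step behind such a referral heat map can be sketched in a few lines. The following is a minimal illustration with made-up referral records; the physician names, zip codes and bucket thresholds are all hypothetical, and the resulting counts are what you would feed into a geo-coding tool such as Google Fusion Tables to colour the map.

```python
from collections import Counter

# Hypothetical referral records: (referring physician, zip code)
referrals = [
    ("Dr. Adams", "10001"), ("Dr. Baker", "10001"),
    ("Dr. Adams", "10002"), ("Dr. Cole", "10003"),
    ("Dr. Baker", "10001"),
]

# Count referrals per zip code; these counts drive the map colouring.
by_zip = Counter(zip_code for _, zip_code in referrals)

def heat_level(count, low=1, high=3):
    """Bucket a raw referral count into a heat-map intensity.
    Thresholds are illustrative, not clinical standards."""
    if count >= high:
        return "hot"
    if count > low:
        return "warm"
    return "cool"

heat = {z: heat_level(c) for z, c in by_zip.items()}
```

The same two-step pattern (count per region, then bucket into intensities) works whether the metric is referrals per zip code or disease incidence per region.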


Another common tool in operational analytics is the dashboard. Dashboards can easily display values over time and against a goal. In a clinical environment, the goal could be to reduce hospital-acquired infections.


Organisations that have worked successfully in this area measured their infection rate over time and set reduction goals, so the entire team worked to keep all dials “in the green”. In class, I use an old classic for dashboards, MicroStrategy, which now offers a free desktop product. Such a MicroStrategy dashboard could of course also be used to show complex financial data, and how each department is doing against its budget. Ideally, this is coupled with a strategy of activity-based cost accounting, which is the ideal foundation for bundled payments and drill-downs into cost overruns. Another management method that can be deployed here is the balanced scorecard methodology, in which different goals are managed together to meet strategic objectives. Instead of focusing only on cost or only on process, financial goals, customer goals, patient outcomes, process and capacity are managed together through a system of key performance indicators (KPIs). The aforementioned dashboard can be used to measure and display actual performance against this system of goals.
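The “dials in the green” logic of such a KPI dashboard is essentially a classification of actual values against targets. Here is a minimal sketch: the KPI names, target values and tolerance band are hypothetical examples, not MicroStrategy configuration or real clinical benchmarks.

```python
# Hypothetical KPIs: (name, actual value, target, lower_is_better)
kpis = [
    ("infections_per_1000_bed_days", 2.8, 3.0, True),
    ("cost_vs_budget_ratio", 1.05, 1.00, True),
    ("patient_satisfaction_score", 86, 80, False),
]

def dial_colour(actual, target, lower_is_better, tolerance=0.10):
    """Classify one dashboard dial: green = on target,
    amber = within a 10% tolerance band, red = off target."""
    if lower_is_better:
        if actual <= target:
            return "green"
        return "amber" if actual <= target * (1 + tolerance) else "red"
    if actual >= target:
        return "green"
    return "amber" if actual >= target * (1 - tolerance) else "red"

dashboard = {name: dial_colour(a, t, low) for name, a, t, low in kpis}
```

A balanced scorecard simply groups such KPIs under financial, customer, process and capacity perspectives, so the same dial logic applies across all of them.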


In this context, it can be useful to introduce a more detailed planning process, for example for quality improvement. Many studies have shown that quality improvement and cost reduction are often directly related, meaning higher-quality processes lead to better outcomes and lower costs. If you are planning such a quality improvement strategy, it can be useful to build a model and simulate how changes in one or more variables impact progress toward your quality goals. Nowadays there are, of course, very powerful model-based simulation tools and methods available, but in class I use a very simple yet useful tool that is also totally free to use: Plannerslab. Plannerslab makes it easy to enter the different equations to build a model, and then use intuitive goal-seek and what-if methods to find an optimal improvement path. With the tools and methods mentioned so far it is possible to improve the efficiency of the organisation, and even to build the foundation for medical quality improvement initiatives; however, data analytics offers many more opportunities in healthcare.
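To make the goal-seek idea concrete, here is a minimal sketch of what such a tool does under the hood: given a model equation, it searches for the input value that hits a target output. The model below (readmission rate falling with staff training hours, with diminishing returns) is entirely invented for illustration and has nothing to do with Plannerslab's actual internals.

```python
def readmission_rate(training_hours):
    """Toy model: 12% baseline readmission rate, reduced with
    diminishing returns as staff training hours increase."""
    return 12.0 - 6.0 * (training_hours / (training_hours + 40.0))

def goal_seek(f, target, lo, hi, tol=1e-6):
    """Find x in [lo, hi] where f(x) reaches the target, by bisection.
    Assumes f is monotonically decreasing on the interval."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if f(mid) > target:
            lo = mid          # still above target: need more hours
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# What-if question: how many training hours to reach a 9% rate?
hours_needed = goal_seek(readmission_rate, target=9.0, lo=0.0, hi=400.0)
```

A what-if analysis is the same model run in the forward direction: plug in a candidate value and read off the result, instead of searching for the input.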


The problem is that data in healthcare is often complex – it makes sense in context, but not necessarily to a machine. In the U.S. we tried to address this problem by forcing physicians to enter structured and coded data, which has caused much dissatisfaction with the data entry process. But beyond usability, structured and coded data still do not cover everything that is captured. There is still a vast amount of unstructured (and un-coded) data, such as images, radiology and pathology reports, progress notes and so on. To unlock the potential of this data, other methodologies need to be used.


In class, we use SAS Data Miner, which also has a text analysis feature. It is fine for teaching, especially because you gain insight into all the statistics required to analyse unstructured data, but for healthcare institutions there are better options with ready-made medical ontologies.
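The first step any such text-analysis tool performs on free-text reports is term extraction: tokenise, drop filler words, and count clinically interesting terms. Here is a minimal sketch of that step; the report snippets and stopword list are invented examples, and a real system would map terms to a medical ontology rather than count raw words.

```python
import re
from collections import Counter

# Hypothetical snippets of free-text radiology reports.
reports = [
    "No acute infiltrate. Mild cardiomegaly noted.",
    "Right lower lobe infiltrate suspicious for pneumonia.",
    "Clear lungs. No infiltrate or effusion.",
]

STOPWORDS = {"no", "or", "for", "noted", "mild", "acute", "right"}

def term_counts(texts):
    """Sketch of the term-extraction step in text mining:
    lowercase, tokenise, drop stopwords, count occurrences."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts

counts = term_counts(reports)
```

Note the pitfall visible even in this toy example: “no infiltrate” and “infiltrate” count the same, which is why real clinical text mining needs negation handling and ontology lookup on top of simple counting.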


This brings us to advanced analytics, where the goal is not only to determine whether you are on the right path of process improvement or cost accounting, but to find better ways to diagnose diseases and choose care pathways, to identify at-risk patients before an emergency, and so on. It has been known for a while that 3M developed a number of clinical risk groupers, which can predict the risk that a particular patient will develop a severe chronic disease or suffer complications during hospitalisation. These risk groupers are heavily based on science and decision trees, and are fed with coded health data. And herein lies the problem – the algorithm needs to be fed with nicely-groomed data in order to produce useful results. The so-called “Big Data” technologies like IBM Watson, but also Google TensorFlow, rely on probabilistic matching. This means that the algorithms ingest thousands of annotated data sets to analyse similarities. Once trained, they predict a certain result. Unlike the 3M decision trees, which are based on underlying science, the results of these machine-learning engines can change every day. The more they learn and the more data they use, the more they might change their diagnoses or the confidence level of a previous diagnosis. This changing environment is difficult to digest in the context of the CE/FDA process. Nevertheless, several U.S.-based hospitals are already building large annotated data sets based on medical images and annotated, coded reports that can be used as training sets, and we have seen successful applications of this approach to diagnose tuberculosis, lung cancer and other diseases.
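The contrast with machine learning is easiest to see when the decision-tree side is written out explicitly: a fixed set of rules over coded inputs, which always produces the same answer for the same patient. The sketch below is purely illustrative – the variables and thresholds are invented and bear no relation to 3M's actual risk-grouper rules.

```python
def chronic_risk_group(age, hba1c, prior_admissions):
    """Hand-written decision tree in the spirit of rule-based risk
    groupers: fixed, expert-derived thresholds over coded data.
    All thresholds here are hypothetical, for illustration only."""
    if hba1c >= 9.0:                      # poorly controlled diabetes
        return "high" if prior_admissions >= 2 else "moderate"
    if age >= 65 and prior_admissions >= 1:
        return "moderate"
    return "low"
```

Because the rules are explicit and static, the output is reproducible and auditable – exactly the property that a trained probabilistic model, whose answers shift as it ingests more data, does not give you, and the reason regulators find the latter harder to certify.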
Although it is an advanced research topic, the Google TensorFlow platform and Research Cloud are good starting points to investigate this branch of advanced analytics, and are therefore the fifth analytics tool with the potential to improve the running of a healthcare institution today, albeit with a longer runway than Google Fusion Tables, MicroStrategy, the balanced scorecard, and Plannerslab.