Analytics in Big Data is becoming increasingly sophisticated, according to speakers at the recent European Society of Cardiology congress in Rome. Advancing from a ‘rear-view mirror’ perspective, Big Data’s next stage will be predictive, using real-time data. The payback will come with prescriptive applications: moving towards personalised healthcare, population health risk models and optimised care systems, using natural language processing, machine learning and cognitive computing.
There were conflicting opinions on data quality. Harry Hemingway, University College London, said that data quality at this scale is not an issue, because multiple sources can be combined: if electronic health records include mistakes and undiagnosed diseases, biobanks may provide subclinical measures of disease to validate diagnoses. John Rumsfeld, American College of Cardiology & University of Colorado, noted that preliminary studies using Big Data have shown that the higher the quality of the data, the better the performance.
Big Data applications in health care include:
- prediction of risk and resource use, e.g. readmission and costs of care
- population management - monitoring and case finding, potentially detecting even earlier who will develop, for example, diabetes or heart failure
- drug and medical device surveillance - doing a better job of detecting safety signals
- disease and treatment heterogeneity - phenomapping of patient subgroups
- precision medicine and decision support - ‘-omics’ data and prescriptive decision support, matching specific drugs to specific patients
- quality of care and performance measurement - casemix adjustment in real time
- public health - health behaviours
- research applications - better methods and approaches are still needed, and the importance of validation cannot be overstated
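The phenomapping application above amounts to unsupervised clustering of patients into phenotype subgroups. A minimal sketch of the idea, using entirely synthetic patient features (the variables and cohort below are illustrative assumptions, not any speaker's actual dataset):

```python
# Hypothetical phenomapping sketch: cluster patients into phenotype
# subgroups from routine measurements. Features and data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic cohort columns: [age, ejection fraction %, BMI, systolic BP]
cohort = np.vstack([
    rng.normal([55, 60, 24, 120], [5, 5, 2, 8], size=(100, 4)),   # group A
    rng.normal([70, 35, 30, 140], [5, 5, 3, 10], size=(100, 4)),  # group B
])

# Standardise so no single unit (e.g. blood pressure) dominates distances
X = StandardScaler().fit_transform(cohort)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("patients per phenotype:", np.bincount(labels))
```

In practice the number of subgroups is not known in advance and would be chosen with validation methods, which is one reason the speakers stress that methods work remains to be done.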
Dr. Rumsfeld pointed out that the literature on Big Data in healthcare is preliminary; there is not a large amount on Big Data analytics in cardiology. Analytics will not tell doctors how to intervene, and Big Data is not needed to tell us that older, sicker patients will visit the emergency department again. Big Data solutions are not yet connected to care, and without considering social use and clinical practice they will fail to cure the health system. Big Data is already used by the Veterans Administration in the U.S. to predict risk of readmission; the information is fed back to primary care providers so they can intervene with those patients.
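A readmission-risk model of the kind described is, at its simplest, a classifier over routinely collected features whose output score is fed back to the care provider. The sketch below is an illustrative assumption, not the Veterans Administration's actual model; the features and data are synthetic:

```python
# Illustrative readmission-risk sketch: logistic regression over two
# routinely collected features. All names and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
age = rng.normal(65, 12, n)
prior_admissions = rng.poisson(1.2, n)

# Synthetic ground truth: risk rises with age and prior admissions
logit = 0.04 * (age - 65) + 0.8 * prior_admissions - 1.5
readmitted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, prior_admissions])
model = LogisticRegression().fit(X, readmitted)

# Score an 80-year-old with 3 prior admissions, to flag for follow-up
risk = model.predict_proba([[80, 3]])[0, 1]
print(f"predicted readmission risk: {risk:.2f}")
```

The model itself only produces a score; as Rumsfeld notes, the value comes from connecting that score to an intervention pathway.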
Rahul Potluri, founder of the Algorithm for Comorbidities, Associations, Length of Stay and Mortality (ACALM) study unit, Aston University, UK, discussed Big Data analytics for population analysis.
Big Data will deliver useful analytics for cardiology, he suggested. Clinical trials in cardiology are expensive, subject to increasing regulation, and advancing disappointingly slowly. Less expensive, registry-based clinical trials are being implemented, with less stringent endpoints and shorter follow-up periods to lower costs. Trials often do not reflect real-world data because of their exclusion criteria, he noted. If registries already provide Big Data, why bother with more? Because routinely collected healthcare data on large populations make existing cardiology datasets pale into insignificance, provided that the data can be used, developed and applied.
The ACALM project is a fully functional, cross-sectional and longitudinal research database with real-life outcomes. The data come from fully anonymised, routinely collected healthcare information on 1.2 million patients, rising to 4 million, and are used for health services research. The group found, for example, that heart attack patients are more likely to die if admitted to hospital at the weekend, have 14% better survival if married, and have better outcomes if discharged home at the weekend.
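Findings such as the weekend-admission effect rest on simple cohort comparisons across very large numbers of patients. A toy illustration of the arithmetic involved (the counts below are made up for demonstration, not ACALM figures):

```python
# Toy cohort comparison of the kind behind a weekend-admission finding.
# All counts are hypothetical, for illustration only.
def mortality_rate(deaths, admissions):
    """In-hospital mortality as a simple proportion."""
    return deaths / admissions

weekday = mortality_rate(700, 10_000)   # hypothetical weekday cohort
weekend = mortality_rate(330, 4_000)    # hypothetical weekend cohort

relative_risk = weekend / weekday
print(f"weekday mortality: {weekday:.1%}")
print(f"weekend mortality: {weekend:.1%}")
print(f"relative risk (weekend vs weekday): {relative_risk:.2f}")
```

Real analyses at this scale would additionally adjust for casemix and comorbidities, which is exactly what a database like ACALM enables.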
Amalgamated datasets such as ACALM can potentially answer questions that could not be answered otherwise. They have the power of millions of patients, and offer defined time periods, follow-up, real-world data and real-world patients, said Potluri. Currently it is usual to perform literature reviews and to develop and test hypotheses with carefully planned studies. Big Data offers the opportunity to analyse data when the research question is not yet known, and it may surface findings that would not seem plausible given our existing knowledge.
Large datasets generated in routine clinical settings, such as ACALM, can already be mined using sophisticated algorithms to accurately predict the onset of septic shock, optimise patient-specific anticoagulation regimens, and assess the prospective risk of myocardial infarction.
Potluri emphasised: “Machine learning offers an opportunity to identify concepts rather than correlations in clinical data, thus promising to become an invaluable tool for data-aided decision making.”
Potluri did acknowledge the limitations of Big Data: the quality of data collection, the practicality of resourcing and running large datasets, and non-uniform data. Still, there are significant cost advantages to using routinely collected data, he said. “Power of the data versus the accuracy will be an important debate going forward,” he said.
“Big Data analytics will delineate a paradigm shift in cardiovascular medicine,” concluded Potluri. Big Data has the potential to enhance understanding of disease and outcomes, improve clinical care by predicting patient outcomes, generate ideas that can be complemented by basic science research, predict disease patterns and interactions, and streamline healthcare services, allowing appropriate allocation of resources to where they are needed.
Dr. Rumsfeld agreed that cardiology can lead the way in Big Data. It is already a world leader in having large registry programmes, he noted. In the future there will be more biometric and patient-reported data.
Image credit: Wikimedia Commons