When patients with acute, life-threatening diseases are treated in Danish intensive-care departments, physicians have few means of predicting the patients’ chances of survival, whether they risk a very long course of treatment, or how well they will be able to function once discharged from hospital. Predicting this is extremely complex because so many parameters in a patient’s history and current status can affect the overall result of the treatment. Big data could provide many of the answers, according to Professor Anders Perner, Consultant at the Department of Intensive Care at the Abdominal Centre at Rigshospitalet.
Need for new models
“As things stand today, we cannot adequately predict the outcome of a patient’s treatment. Therefore, working with the data scientists at the University of Copenhagen, we’re going to try to develop new models so that we can better predict patient survival and prospects, including after patients have been discharged.
“Patients are monitored very closely by healthcare professionals at intensive-care departments 24 hours a day, and a lot of data is recorded in a very short time. However, despite this mass of data to build on, we don’t yet have the tools to exploit it. Physicians often have to cast their nets very wide because the existing prediction models are imprecise.
“It’s not as if we are not using the data at the clinic already; on the contrary. We collect a mass of data systematically as part of treatment and we have a good idea about what works. But with new models at hand, I hope that we can take a quantum leap forward in diagnosing and treating critically ill patients. We’ll be able to distinguish between thousands of patterns rather than just a few. And we’ll be able to fill in the gaps in our knowledge about patients’ condition and treatment,” said Anders Perner.
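To illustrate the kind of prediction model discussed above, here is a minimal, purely hypothetical sketch: a logistic function that maps a few intensive-care parameters to a survival probability. The feature names and weights are invented for illustration and have no clinical validity; the actual models developed in the project are not described in this article.

```python
import math

# Hypothetical weights for illustration only -- NOT clinical values.
WEIGHTS = {"age": -0.03, "mean_arterial_pressure": 0.04, "lactate": -0.5}
BIAS = 1.0

def survival_probability(patient):
    """Map a weighted sum of patient features to a probability in (0, 1)."""
    score = BIAS + sum(WEIGHTS[k] * patient[k] for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-score))

# Example patient record with the three illustrative features.
patient = {"age": 70, "mean_arterial_pressure": 65, "lactate": 2.0}
p = survival_probability(patient)
```

In practice such weights would be fitted to thousands of historical patient pathways rather than chosen by hand; the sketch only shows the shape of the computation.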
The life-long perspective
A new element is that researchers will be looking at life-long perspectives in the context of 7,000 acute intensive-care pathways. Among other things, this will involve analysing patient records using ‘text mining’, a method that can skim texts and couple relevant wordings to the overall data analysis. Researchers will also be looking at historical patient data from the national patient registry going back 20-30 years. The aim is to obtain the full medical history, including for periods in which data registration was not as frequent as during the time spent at the intensive-care department.
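As a rough idea of what coupling ‘relevant wordings’ to a data analysis can mean, here is a minimal sketch: scanning free-text notes for a small vocabulary of terms and counting matches, so the counts can later be joined with structured patient data. The vocabulary and the example note are invented; real clinical text mining is far more sophisticated.

```python
import re

# Invented vocabulary for illustration -- a real system would use a
# curated clinical terminology, not three hand-picked words.
TERMS = ["sepsis", "ventilator", "dialysis"]

def extract_terms(note):
    """Count whole-word occurrences of each term in a free-text note."""
    note_lower = note.lower()
    return {t: len(re.findall(r"\b" + t + r"\b", note_lower)) for t in TERMS}

note = "Patient admitted with sepsis; placed on ventilator. Sepsis resolving."
counts = extract_terms(note)
```

The resulting term counts form one extra column per term that can be merged with registry or laboratory data for the overall analysis.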
The project has just started at the Department of Intensive Care at Rigshospitalet and will run over a five-year period with funding of DKK 15 million from Innovation Fund Denmark. Professor Søren Brunak and his team from the Center for Protein Research at the University of Copenhagen are heading the data analyses. The analyses will be run on a supercomputer at the Technical University of Denmark, making it possible to dive into millions of patient data points across time scales.
Standard-bearer for bioinformatics: mission big data has started
Professor Søren Brunak, affiliated with the Center for Protein Research at the University of Copenhagen and with Rigshospitalet, has a mission. Working with health researchers, he will use supercomputers to find patterns in health data that can improve treatment and our understanding of how diseases develop.
If supermarkets like Coop and search engines like Google can work with enormous amounts of data and get something valuable out of the behaviour of millions of consumers and billions of internet users, the huge amount of health data on Danish patients must also contain a treasure trove of knowledge that could benefit patients and society. This is what Søren Brunak is working on.
“Denmark is bulging with health data. The sources of data are numerous and far from uniform: patient records, historical data from patient registers, blood tests, diagnostic imaging and text extracts. My job is to find system and meaning in all this data,” said Søren Brunak.
Supercomputer required
There are many different types of data, and it is often only possible to process them meaningfully using very powerful computing resources: a supercomputer. With a supercomputer, all types of data can contribute to painting a very detailed picture of symptoms and disease development over a long period of time, and this is an important point, as Søren Brunak explained:
“For me, big data is not about a lot of data giving value in itself. It’s about the knowledge and the statistical relationships we can extract by comparing different types of data and sending them through a big machine. This can support research into treatment of diseases. We have to use the large amounts of data available to see relevant patterns and trends in patient pathways. If we can find new relationships that lead to new and improved clinical practice, then I’ll consider my mission accomplished.”
The big-data research area has grown in recent years, and Søren Brunak has been one of the powerhouses behind the development.
“In contrast to the traditional hypothesis-driven approach to research, the advantage of the data-driven method is that we don’t have to know what we’re looking for in advance. Instead, we have masses of data in which we try to find relationships. This is a good asset in the hunt for new hypotheses and in identifying treatment targets,” said Søren Brunak.
Sparring partner for clinics
In addition to the University of Copenhagen, Søren Brunak is also linked to Persimune (Centre for Personalised Medicine), a Global Excellence Centre at Rigshospitalet. As a sparring partner for clinical research communities, he is constantly forming new research alliances and launching projects in which big data can make a difference for the health services. At Rigshospitalet, new projects have been launched involving intensive-care patients as well as cancer and HIV patients, whose treatments are now undergoing big-data analyses.
Editor: Jesper Sloth Møller