The arrival of artificial intelligence (AI) has generated considerable anticipation that difficult problems in medical science will finally be solved. AI, in which computer algorithms are trained to learn from data (machine learning), holds great promise for science in general and for healthcare in particular.
The expression “Big Data” is now the umbrella term for AI, predictive analytics, deep neural networks, and other systems designed to process vast amounts of information. Today, computers can locate and flag proverbial needles of insight in haystacks of bewildering ones and zeros. Big Data can help identify which patients are likely to be readmitted to the hospital within 30 days, and it can help target services to patients receiving care at home.
But even computer algorithms need to be pushed in the right direction. A student of history needs to be directed toward the right source material, and usually benefits from a tutor who can help navigate the library stacks.
It is the same with Big Data. The saying in computer science is “garbage in, garbage out”: even the best-designed programs are useless if their inputs are nonsensical or otherwise inappropriate. If you speak gibberish to a language-learning algorithm, it will learn to speak gibberish.
There is a famous example of “garbage in, garbage out” in health care. A study in 2015 reported the performance of an algorithm to predict which hospitalized patients were at risk for developing pneumonia, a common complication of hospitalization [1]. In most situations, the algorithm worked well: hospital staff were able to intervene early and prevent some complications. However, the algorithm consistently made one serious error: it would instruct physicians to discharge patients with asthma, even though they were in the high-risk group for developing pneumonia. The problem was that asthma patients were not appropriate inputs for the algorithm in the first place; they were supposed to be routed to an entirely different algorithm.
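The underlying fix is data curation: patients who belong to a different protocol should be routed away from a model before it ever trains on or scores them. A minimal sketch of that kind of input filter, with entirely illustrative field names and exclusion rules:

```python
# Hypothetical sketch: screening patients out of a risk model's input
# stream when they belong to a different care pathway -- the "garbage in,
# garbage out" lesson from the pneumonia example. Field names are invented.

def eligible_for_pneumonia_model(patient: dict) -> bool:
    """Return False for patients whose conditions are handled by a
    separate protocol (here, asthma) rather than this model."""
    excluded_conditions = {"asthma"}
    return not (set(patient.get("conditions", [])) & excluded_conditions)

patients = [
    {"id": 1, "conditions": ["copd"]},
    {"id": 2, "conditions": ["asthma"]},  # belongs to another pathway
    {"id": 3, "conditions": []},
]

# Only eligible patients ever reach the model.
model_inputs = [p for p in patients if eligible_for_pneumonia_model(p)]
print([p["id"] for p in model_inputs])  # -> [1, 3]
```

The point is not the three-line filter itself but where it sits: upstream of the model, so the algorithm never learns from patients it was never meant to see.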
Haystack or Gold Mine?
It is true that big data sets can be messy, and that messiness is precisely what gave rise to failures like the asthma snafu. But messy data sets do not have to be unusable. On the contrary, thoughtfully constructed algorithms can learn just the way we want them to. The key is starting with small problems and scaling up from there.
For example, a recent study described a machine learning algorithm designed solely to predict which hospitalized patients were at risk of developing sepsis [2]. The investigators showed that use of the algorithm reduced in-hospital mortality from sepsis by over 60%. As added benefits, the algorithm also cut sepsis-related hospital stays by 10% and cut sepsis-related readmissions in half.
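The study’s model is far more sophisticated than anything that fits here, but the basic shape of such a risk predictor can be sketched as a toy logistic regression trained on synthetic vital signs. Every number below is invented for illustration; a real system would use validated clinical features and rigorous evaluation.

```python
import math
import random

random.seed(0)

# Synthetic training data: (heart_rate, temperature_C) with label
# 1 = developed sepsis. Distributions are entirely made up.
def make_patient(septic):
    hr = random.gauss(110 if septic else 80, 10)
    temp = random.gauss(38.8 if septic else 36.9, 0.4)
    return (hr, temp), septic

data = [make_patient(i % 2) for i in range(200)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def normalize(hr, temp):
    # Crude hand-picked scaling so both features are comparable.
    return ((hr - 80) / 10, (temp - 37) / 0.5)

# Logistic regression fit by plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.01
for _ in range(500):
    for (hr, temp), y in data:
        x = normalize(hr, temp)
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def risk(hr, temp):
    x = normalize(hr, temp)
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

print(risk(115, 39.0))  # high risk: tachycardic and febrile
print(risk(72, 36.8))   # low risk: normal vitals
```

The essential idea carries over to the real systems: learn weights from labeled historical patients, then score current patients continuously so staff can intervene before sepsis takes hold.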
Smart Algorithms Plus Smart Humans
It turns out that smart humans are indispensable to successful implementation of smart algorithms. That was the case for Mission Health in North Carolina. A combination of smart algorithms and competent human stewards produced a 58 percent increase in the detection of serious bacterial infections, a 32 percent decrease in mortality from sepsis, and 20 percent more patients getting to surgery on time [3]. Mission’s Big Data-driven algorithm accomplished this in a fraction of the time it would have taken humans to achieve the same results.
Bringing Big Data Home
Big Data’s early small successes promise to extend to many other areas of the healthcare industry, including the home care segment. Hospitals often discharge patients to high-cost settings inappropriately, including patients who might do better at home. AI promises to identify those patients and direct discharge teams toward setting up home care services.
Connective technology platforms also promise to help patients receive appropriate care at home. These platforms connect patients and their families with post-acute care providers and their managers, ensuring seamless delivery of care. Big Data can also help reduce fraud, waste and abuse in home care delivery by recognizing patterns of fraudulent resource utilization that might be overlooked by human managers.
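One simple pattern-recognition idea behind such fraud-and-waste screens is outlier detection: flag providers whose utilization sits far outside the group’s typical range. A toy sketch using a median-based rule on invented visit counts (real programs use far richer features and review flagged cases by hand):

```python
import statistics

# Hypothetical monthly billed visit counts per home-care provider.
visits = {"A": 41, "B": 38, "C": 44, "D": 39, "E": 120}

# Median and median absolute deviation (MAD) resist being skewed
# by the very outliers we are trying to catch.
med = statistics.median(visits.values())
mad = statistics.median(abs(v - med) for v in visits.values())

# Flag providers whose counts deviate from the median by more than 5 MADs.
flagged = [p for p, v in visits.items() if abs(v - med) > 5 * mad]
print(flagged)  # -> ['E']
```

The median-based threshold matters: a plain mean-and-standard-deviation rule can miss a single extreme biller, because that biller inflates the standard deviation used to judge it.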
A good starting point for use of Big Data in home care is reduction in hospital readmissions, many of which are avoidable. Successful implementation of a Big Data-driven hospital readmission reduction program would effectively kill three birds with one stone: it could improve the patient experience (higher satisfaction), increase the quality of care (better outcomes), and reduce costs (less waste). As Big Data’s early victories have shown, success requires coordination between good algorithms and dedicated professionals.
1. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission | Association for Computing Machinery
2. Reducing patient mortality, length of stay and readmissions through machine learning-based sepsis prediction in the emergency department, intensive care unit and hospital floor units | NIH
3. What does a successful AI and analytics program look like? | Healthcare IT News