Evaluating the Data on Digital Health
Cohen leads the Health Technologies program at the Johns Hopkins University Applied Physics Lab, which develops health technologies for clinical needs.
He discusses a recent report by the Geneva Association, “Digital Health: Is the euphoria justified?” The report, to which he contributed, found that digital health has proliferated rapidly while research on its benefits has lagged.
The Geneva Association report found that digital health technologies work best when they incorporate a wide range of what is known as behavioral change techniques, including goal setting, feedback, habit formation, peer support, information and incentives.
Digital health hasn’t been tested in a rigorous way in many areas, and a majority of companies aren’t targeting a particular condition. The areas where it does appear to be panning out are spaces like diabetes, heart conditions and behavioral health.
Some conditions are more amenable to digital health because one doesn’t have to have a physical encounter to get data and produce an intervention. When care can be unidirectional, it tends to work well; mental health, for example, is particularly amenable because it can succeed without an in-person visit.
[Other] major conditions where we could piggyback might be cardiology, neurology and gynecology.
We have a great hammer, and we need to find the shiniest nails we can hit with it. An organization has to identify its own nails, the places where it needs help, and figure out where telemedicine can have the most impact. I’m not just talking about market opportunity but clinical impact as well.
I’ve produced a few reports on that notion. One was a study that looked closely at the impact of digital health apps. There have been other assessments, too, of apps by specialty. We were trying to understand the evidence behind mental health apps. We selected the companies we thought had the best chance of making it work, such as highly funded digital health companies, and examined their scientific reports to see whether they demonstrated a positive impact on patient health. At the time, those companies weren’t necessarily producing the highest level of evidence showing clinical impact.
We are suggesting that impact through rigorous study and evaluation is really the way to go here. The field is rife with claims of benefit without actual demonstration of it. I do think that is changing a little bit though.
We have reviewed companies that were in the top 20 of those funded in their industry, based on our database. We reviewed their company material, mostly from their website, to get a sense of what their claims were. It might have been a wearable that said it created some kind of functional benefit in breast cancer or some other wellness-based wearable. We looked at the literature for those products or companies to see what was produced.
If you were to go to any big wearable company, their website will likely have tons of pages and subpages, and you can find all sorts of information about the potential conditions it can address, like what it would do for someone who smokes or has heart disease and so forth. Payers can look at this kind of information to get an idea of whether, or how, it will impact patients.
What we found when we looked at the peer-reviewed literature is there weren’t a ton of studies. And of the studies that did exist, most weren’t clinical effectiveness studies. There weren’t a lot of studies looking at clinical outcomes or cost of care.
Most studies were about validation and use, which essentially ask whether a particular technology can be used in some population. We wondered: if there was a wearable to monitor activity, how does it measure against the medical-grade technology that’s available? Because of the sheer volume of validation and use studies we found, we think most of the research on digital technology is still in this phase overall.
The sweet spot for an insurer is going to be to find out where their costs are the greatest now and which digital health products they can employ in some way, such as offering them to patients directly. They will need to find products proven to lower costs and improve patients’ health.
They can prioritize problems and see where there are digital health solutions for their problem list. Going back to the hammer-and-nail analogy, they might determine they need something to manage heart failure or to track cancer outcomes.
It needs to be an impact-focused, priority-driven approach. I think the way it happens now is someone sees a new app and says, “Here’s a new app. Let’s try it.” Instead, they need to ask how they could deploy a certain technology in their population. Even with telemedicine, there’s no one-size-fits-all approach. Trying to analyze telemedicine in general to see if it’s cost effective or improves health outcomes misses the point. They need to have a specific problem they want to solve and then select the tool for that problem.
I would have them unleash the digital health community by providing [vendors] with the problems they want solved and the measures vendors can help them move. It could be quality or access or something else. They should look for products developed against requirements tied to the measures they have already defined.
That helps them avoid launching a blunt tool across a population that won’t help them move their metrics in the way they want to move them.
In 2019, we did some research and devised an evaluation framework for digital technologies. Many of us in the research group are part of engineering organizations, so we like to think about what the requirements are for a technology.
Requirements essentially specify what the technology is trying to accomplish. For instance, if you are trying to improve access to care for oncology patients, there needs to be some measure of access. It’s like a set of requirements for building an automobile or an airplane that might say the vehicle shall seat 100 people. That’s its requirement.
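As a hypothetical illustration of the requirements idea (the measure names and thresholds below are invented for this sketch, not taken from the report or the framework), a requirement can be written as a measurable target that a product’s reported evidence is then checked against:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """A measurable target for a digital health technology (hypothetical)."""
    measure: str           # what is measured, e.g. days to an oncology appointment
    target: float          # the value the technology shall achieve
    higher_is_better: bool # direction of improvement for this measure

def meets(req: Requirement, observed: float) -> bool:
    """Check a product's reported result against the requirement."""
    if req.higher_is_better:
        return observed >= req.target
    return observed <= req.target

# Invented example: "the technology shall reduce the median wait for an
# oncology appointment to 7 days or fewer."
access_req = Requirement(measure="median days to oncology appointment",
                         target=7.0, higher_is_better=False)

print(meets(access_req, 5.0))   # a product reporting 5 days meets the requirement
print(meets(access_req, 12.0))  # a product reporting 12 days does not
```

The point of the structure is simply that each requirement names a measure and a testable threshold, so a payer can evaluate vendor claims against it rather than against marketing copy.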
Employers and payers should want to be part of the creation and evaluation of these technologies. There needs to be some kind of consortium on the approach to evaluating digital health, because there isn’t one yet. There is no standard requirements generator for digital health.