What’s Behind All These Digital Health Assessments? – The Health Care Blog

By MATTHEW HOLT

A lot of time has been spent in recent weeks trying to sort out the fight over data. Who can access it? Who can use it, and for what? What do the new AI tools and analytics let us do with it? Ostensibly, this is all about using data to improve patient care. Everyone who cares about this, from John Halamka at the Mayo Clinic to the two guys and a dog in a garage building clinical workflows on ChatGPT, believes they can use AI to improve the patient experience and outcomes at lower cost.

However, when we look at recent changes in patient care, particularly those brought about by digital health companies founded in the last decade and a half, the answer is not so clear. Several of these companies, whether trying to reinvent primary care (Oak, Iora, One Medical) or change the way diabetes care is delivered (Livongo, Vida, Virta, et al.), now have significant numbers of users, and their impact is starting to be assessed.

A cottage industry of organizations dedicated to assessing these interventions is emerging. Of course, the companies in question have their own studies, some going back several years. Their logic always runs something like this: “XY% of patients have used our solution, most like it, and after using it, hospitalizations and emergency room visits go down and clinical metrics get better.” But organizations like the Validation Institute, ICER, RAND, and more recently the Peterson Health Technology Institute have declared themselves neutral arbiters and have begun conducting their own studies or meta-analyses. (FD: I was on the advisory board of the Validation Institute for a short time.) In general, the answer they come up with is that digital health solutions are not all they claim to be.

Of course, there is a longer story here. Since the 1970s, policy experts have tried to determine whether new health care technologies are cost-effective. The discipline is called Health Technology Assessment and even has its own journal and society, at whose 1996 meeting I gave a keynote on the impact of the Internet on health care. I ended my talk by telling them that the Internet would have little impact on health care and would mostly be used for downloading color video clips, and that I would show them one. I think the audience was relieved when I pulled up a clip of Alan Shearer scoring for England against the Netherlands at Euro 96, rather than certain other videos the Internet was used for then (and now)!

The point, however, is that evaluating the cost-effectiveness of new technologies in health care has always been a sideshow, particularly in the United States. So much so that when the Congressional Office of Technology Assessment was shut down by Gingrich’s Republicans in 1995, hardly anyone noticed. In general, we have run clinical trials designed to show whether drugs work, but we have never really bothered to find out whether they work better than the drugs we already had, or whether they were worth the huge increase in costs that came with them. That doesn’t seem to stop Ozempic from making Denmark rich.

Likewise, new surgical procedures are introduced and taken up long before anyone figures out whether we should be performing them systematically or not. My favorite story here is that of general surgeon Eddie Joe Reddick, who discovered French surgeons performing laparoscopic gallbladder removal in the 1980s and imported the technique to the United States. He traveled around the country charging a pretty penny to teach other surgeons how to do it (and how to charge more for it than for the standard open surgical technique). It’s not as if there was a big NIH-funded study behind it. Instead, an entrepreneurial surgeon changed an entire, very common procedure in less than five years. The end of the story is that Reddick made so much money teaching surgeons the lap chole that he retired and became a country and western singer.

Eric Bricker also points out in his very entertaining video that we do more than twice as much imaging as is typical in European countries. In 2008, Shannon Brownlee spent much of her great book Overtreated explaining how imaging rates had skyrocketed while our diagnosis and outcome rates had not improved. By the way, Shannon has since declared defeat and left health care as well, although she became a potter rather than a country singer.

One can look at virtually any aspect of health care and find uses of technology that do not appear to be cost-effective and yet are widely available and costly.

So why are the knives out specifically for digital health?

And they are out. ICER helped kill the digital therapeutics movement by declaring several solutions for opioid use disorder ineffective, giving several health insurers an excuse not to pay for them. Now Peterson, using a framework derived from ICER’s, has said basically the same thing about diabetes solutions and is moving on to MSK, with presumably more categories to be debunked on deck.

One of the most colorful players in this entire arena is Al Lewis, who is the worst type of true believer – a convert. In the 1990s, Al Lewis was the chief cheerleader for something called disease management, which was roughly “Digital Health 0.5.” In the mid-2000s, CMS put a number of these disease management programs into a study called Medicare Health Support. The unpleasant answer was that disease management didn’t work and cost more than it saved. The biggest problem was that these programs were largely telephone-based and were not integrated into the patient’s medical care. Meanwhile, Al Lewis (I’m using his full name so you don’t think Al is an AI!) has turned his analytical sword on disease management, prevention and wellness programs, and now several digital health companies, showing that many of them don’t save the money they claim. He usually does this in a very funny way, along with lots of $100,000 bets that he never pays out on (and never wins)!

Which leads me to another skeptic who looks at this from a slightly different angle. Brian Dolan, in his excellent Exits & Outcomes newsletter, pointed out that there was something pretty strange about the Peterson study. Dolan noted that Peterson selected one study of Livongo’s impact on A1c reduction (not the one the company had conducted itself, which Al Lewis had thoroughly criticized) and extrapolated the clinical impact of that single study as if it were the same for all the companies’ solutions – even though Livongo had carried out very few studies compared to, say, Omada Health.

Peterson then used another random study from the literature to extrapolate the financial impact of that A1c reduction. What it did not do was pull the claims data of patients actually using these solutions, even though Peterson’s advisory board is a who’s-who of health insurers. Of course we could get better real-world data, but why bother when we can effectively guess and extrapolate? It’s also worth noting that many of these insurers, including Aetna and United, offer competing diabetes products of their own.

So one might think that the very well-funded Peterson Institute could and should have done a little more, and certainly should have included some of the solutions marketed by the health insurers on its advisory board.

That’s not to say that digital health companies have done great studies. Like everyone else in health care, their studies and reporting are all over the map, and many of them make claims that stretch the truth, obviously because they have commercial reasons for doing so.

But it’s also true that many didn’t need these studies to grow commercially. The poster child here is Livongo, which grew from zero employer customers and members in 2015 to over 500 employers and 350,000 patients by its IPO in 2019 – all while publishing just one study, at the end of that period. The reason for this growth was that Livongo cost about what employers were already paying for diabetes test strips (which it included as a loss leader), that it made favorable commercial arrangements with Mercer and CVS to get to employers, and that patients generally liked it. Al Lewis disputes that last part (and points to some bad Amazon reviews), but Peterson actually noted many positive user reviews of the diabetes solutions in its “Patient Perspective” section – which had no bearing on the overall negative rating.

My assessment is that, although the individual health services researchers at Peterson et al. mean well, what we are seeing is another power struggle. The current incumbents have done things one way. Several of these new digital health companies offer more continuous and comprehensive approaches to patient care – which some patients seem to like. Of course, established providers and insurers could have tried these approaches over the decades. It’s not as if we have data showing that everything has been fine for the last 40 years. But America’s hospitals, doctors and insurers did what they always did and kept getting rich.

Now there is a new group of technology-enabled players, and there is a decision to be made. Should we move to a system of comprehensive, continuous monitoring of chronically ill patients and figure out how to improve it? Or should we let the incumbents dictate the pace of that change? I think we all know what the incumbents’ answer would be, and to me that puts all of this digital health assessment into perspective.

After all, would these incumbents be happy if their current activities were similarly judged?
