This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work. Persons with disabilities having difficulty accessing this information should contact us at: https://info.ahrq.gov. Let us know the nature of the problem, the Web address of what you want, and your contact information.
Please go to www.ahrq.gov for current information.
Remarks by Dan Varga, M.D.
Town Hall Meeting at the AHRQ 2007 Annual Meeting
September 27, 2007
Thank you all for having me. Thanks, Carolyn, for the invitation. Since I'm the last person before Q&A, and Q&A is the last thing between us and lunch, I'll try to overcome my Kentucky environmental conditioning and talk a little faster than I normally do.
I'm going to talk to you about what Carolyn asked me to cover. I'm currently the Chief Medical Officer at SSM Healthcare in St. Louis, and I was previously the Chief Medical Officer at Norton Healthcare in Louisville, Kentucky. I'll talk a little bit about the two approaches we've taken to clinical accountability, which is really where I've spent the bulk of my time in this arena, and about what's different and what's the same. And I think what you'll find is that, unfortunately, a lot of it's very much the same.
Just by way of introduction: in 2004 at Norton Healthcare, we began an aggressive effort to come up with a way to become clinically transparent. This was in an era when we didn't have a gun to our head. Hospital Compare hadn't even been launched at that point, and the questions were: How do you get that going? How do you do it? And the big question always starts off with: Why would you do it in the first place? At that particular point in time, it was a fundamental question of stewardship. You have stewardship of your mission, so you always put out these community reports talking about all the wonderful things you do for your community, and if you're a not-for-profit you have to file your 990s and tell everybody what you're doing with your assets and your money and how much you pay your executives, and so on. But there was no real vehicle for talking about the stewardship of your work, which is our widget, right? I mean, it's health care. So that tool really wasn't there, and what we hoped to build was the clinical equivalent of the financial statements that people look at every day and that we live by in health care, particularly at the hospital and health system level.
And one of the big resistance points early on was, I think, a fundamental assumption that clinicians were less intellectually dexterous than accountants, because everybody argued that you've got to give them something small and simple to look at. I used to get a 400-page book every month with all of our financial statements and operating statements. The notion that clinicians couldn't deal with a big clinical report seemed fundamentally flawed as an argument.
So we started with stewardship and with building something that could actually capture your environment of care, because even as Hospital Compare started to roll out as a public report, it was a small set of indicators across three diagnoses, and you can kind of tweak yourself to glory on that one. It didn't really capture the environment of care you dealt with.
A lot of health systems have ambulatory systems. How are you going to account for those? Very broadly distributed networks and very broadly distributed components of care. So how do you do that? As we tried to build this, one of the questions was, as I just mentioned, how big would it be? But beyond that, what would be your risk? Are you going to expose yourself to undue risk as you try to do this? This was 2004, so that question was real; hopefully you wouldn't be asking it today. The only way I could really answer it was with "Animal House." I do this when I make this presentation. In "Animal House," the Deltas are about to get kicked off campus for good, and John Belushi goes, "What's required is that a really stupid gesture be done on somebody's part, and we're just the guys to do it." We actually convinced the organization that that was what we needed to do, which leaves me with the other quote I usually use in my presentation. Coming from Kentucky, we appreciate blue collar comedy, and Ron White talks about being drunk in public and being interrogated by the police, and he says, "At that moment in time, I had the right to remain silent, but I didn't have the ability." (Laughter.) To a certain extent, that's kind of where we left ourselves.

And so, once we made the decision to do the report, to have a clinical accountability mechanism that was there for all to see, the questions became: What should it look like? How do you populate it? Where do you get your indicators? As difficult as that may have seemed in 2004, it was really very easy. Which, again, makes some of you say, "Well, then why aren't more people doing it?" But it really was very easy. All you had to do, and this is at Norton in 2004, was use what we were already collecting: the STS database that you just heard about.
We were already collecting the American College of Cardiology database. We were already doing core measures, and we were also doing Vermont Oxford and NACRE for the pediatric hospital. Add the AHRQ patient safety indicators, pediatric indicators, and inpatient quality indicators, which you can pull from administrative data, and you had a 200-indicator report that basically captured your entire environment of care without adding any burden of reporting. That's not to say the reporting burden for STS, ACC, core measures, and the rest isn't significant, but there wasn't a big barrier to doing it. We did, though, want to create some clarity, and this is something we've tried to adopt as we roll this out at SSM. As people look at public accountability, consumers, employers, the public in general, and particularly the media always suspect, for good reason to a large extent, that those of us in the hospital and health system world are reporting only the stuff we look good on, or some little niche component of what we actually do.
So our argument in this was very simple. We weren't going to report any indicator that we invented; it was going to be somebody else's indicator, an evidence-based consensus indicator endorsed by somebody else. It had to have transparent rules of evidence, open source, so anybody could come in and calculate the data. It had to be something we actually did. And the other piece was that, once we identified this universe of indicators, as that universe changed, we would change with it without voting on it. So board policy arose at that time that said when NQF endorses a new body of indicators, or when AHRQ expands their indicator set, if we do it, it becomes part of the public report. We don't take it to the board and go, "Let's vote on whether we're going to include this or not." So there's no cherry picking in that environment.
Where we ended up with that particular initiative, to keep this relatively brief, is that we went public with about a 200-indicator report. On the initial release, we were gloriously mediocre. On the red-yellow-green scale, we were kind of lemon yellow all over, with a little red and green at either end.
The public, looking at that, basically gave us a whole lot of credit for being palms up: "You guys must have nothing to hide. We don't really know whether you're good or not, but at least you have nothing to hide." The big advantage for us, at that juncture, was internal. All of a sudden, we now knew what we didn't know before. And it motivated people, because, as anybody who knows clinical people can tell you, anybody who's ever gotten a call from a nurse in the middle of the night because a hemodynamically stable patient has a hemoglobin of nine, the mindset is, "It's not within normal limits, so I want it to be within normal limits." That's the way clinicians are hard-wired. So clinical people started working on this right away.

But I would tell you that the real dilemma is the public report itself, and I'll transition now to my time at SSM. We're about to launch the same type of quality report; it's going to be a clinical accountability tool for us. There's probably a little more external pressure to be transparent and clinically accountable than there was in 2004, but the same sort of resistance still lives. I keep a pretty close eye on quality reports, and I can tell you it's even easier to build one today than it was before. It can be even more comprehensive today than it was before. There's a lot more to compare it to, given the regional coalitions, national reporting initiatives, and State reporting initiatives we've heard about. It should be easier. Yet not very many people are really doing it. The Norton report is still out there, and SSM's report will come out by the end of the year.
But the thing people always point to in terms of resistance is, first of all, that the doctors won't live with this, the clinicians won't live with this information. And that is true to a certain extent. I always mention that doctors and other clinicians go through this Kübler-Ross deal with data. First they'll tell you that you have no data. Then they'll tell you your data stinks. Then they'll tell you that you have data, your data doesn't stink, but it doesn't apply to me. Then they'll tell you that you have data, your data doesn't stink, it does apply to me, but the reason the numbers are bad is because you stink. And then finally, once you get past that stage, you get to the place where, as you've seen in Virginia, you actually get really robust and productive collaboration. I think clinicians get past that obstacle a lot faster than people give them credit for, so I don't really see that as a big obstacle.
What I do see as the big obstacle is this, and I think it's why Carolyn mentioned earlier that if you were a CEO with an IT initiative, you should get your CV out there. I don't want to leave you with the impression that if you're a Chief Medical Officer starting a clinical transparency initiative you should get your CV out there, but becoming clinically accountable is a very culturally disruptive enterprise. And here's the big gig. The thing that's really problematic in all this is that when you become transparent, what you really do, both internally and externally, is expose just how huge the gap is between what you internally expect of yourself, what the public externally expects of you, and what you're delivering today. And the cost of closing that gap, in terms of workforce, competencies, technology, infrastructure, whatever it happens to be, is enormous.
But my argument in all of this is that clinicians are really smart people, and they'll figure out how to bridge the gap if they just know what the gap is. I think we've proven that over and over again. And I'm not talking about Hawthorne-Effecting ourselves to glory here; I'm talking about actually letting people see what the gap is and being able to problem-solve for that gap, because that's what we do.
But I think that, fundamentally, that's still a major barrier, probably the major barrier, to why you don't see more hospitals and health systems proactively out in the marketplace talking about clinical accountability. It's easier than it's ever been to do. The real dilemma is simply that once you know what you didn't know before, you're going to find very quickly that it's a big gap, and you have to have a very collaborative, very efficient way of bridging it. You've heard some great examples today of how you can actually do that. More than anything, though, I think what we all really need to do is get to a place where we actually know more than we know right now. Thanks.
Current as of July 2008