
July 22, 2009: Morning Session (continued)

Transcript: First Meeting of the Subcommittee on Quality Measures for Children in Medicaid and Children's Health Insurance Programs

Jeffrey Schiff: Do you want to comment? [Cross-talking]

Barbara Dailey: Just very briefly—as we mentioned with our CHIP experience, States basically had to take different approaches in how they reported their measures. They were not always necessarily using the exact specifications that have been outlined, particularly through the National Committee for Quality Assurance (NCQA). Similarly, in our experience working with States, particularly on the managed care quality reporting requirements, we see the same thing. They are basically using—I love the term that was just provided to me in working with some of the NCQA staff—"SHMEDIS." Depending on their health information systems' needs, or a population-specific need, or an eligibility criteria change, States are changing specifications in order to get information that will help them identify where they need to address improvement opportunities. It has not been consistent; we do not have a national understanding of exactly what States are doing and which ones are actually completely following specifications. We know it is a mixture at this point.

Jeffrey Schiff: Let's do Mary, Paul, Doreen, and then these guys over here.

Mary McIntyre: From the standpoint of a State trying to do this—we actually started with the National Clearinghouse. We are trying to identify specifications, and the problem we ran into is the detail needed to get consistency in how we were pulling them. A lot of times, if you click on the specific measure for the specs, it will take you to HEDIS [Health Plan Employer Data and Information Set], which is how we ended up ordering HEDIS and trying to get specifications that were specific. We use the American Medical Association (AMA) Consortium. We use DOQ-IT (Doctor's Office Quality Information Technology), the DOQ-IT analytical narrative, to try to get some consistency. But the reality is, when we looked at some of this material, we could not identify specifically how it was being pulled as far as variations in population age and other areas, so that is the issue.

And one of the things—Tricia MacTaggart, I think they are sending out a survey to States now—one of the things that we put down and emphasized was the need to have technical specifications so that we could get consistency. Even if we have the so-called same measure, it is not really the same. Take hemoglobin A1c measures, for instance: it seems pretty specific, but just a change in the population age that is included changes the measure.

So those are things that you need to think about. Something like asthma medications: what medications are involved, what is on the list, consistency as far as coverage—those are all issues that need to be addressed. So we really need specs in order to get consistent measures across the board. If we do not, we are still going to be comparing apples to oranges. It will not be consistent.
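
To make Dr. McIntyre's point concrete: two programs can report what is nominally the same hemoglobin A1c measure and still produce non-comparable rates if the specification's age band differs. The sketch below is a hypothetical illustration, not anything presented at the meeting; the patient data, the age bands, and the hba1c_testing_rate function are all invented for the example.

```python
# Hypothetical illustration: the "same" HbA1c testing measure computed
# under two different age specifications yields different rates.
# All data and age bands below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Patient:
    age: int            # age in years at the end of the measurement year
    has_diabetes: bool  # meets the (hypothetical) diabetes criteria
    hba1c_tested: bool  # had at least one HbA1c test during the year

def hba1c_testing_rate(patients, min_age, max_age):
    """Share of diabetic patients in [min_age, max_age] with an HbA1c test."""
    denominator = [p for p in patients
                   if p.has_diabetes and min_age <= p.age <= max_age]
    if not denominator:
        return None
    numerator = [p for p in denominator if p.hba1c_tested]
    return len(numerator) / len(denominator)

patients = [
    Patient(6, True, False), Patient(9, True, True),
    Patient(12, True, True), Patient(15, True, False),
    Patient(17, True, True), Patient(16, False, False),
]

# "Same" measure, two specifications that differ only in the age band:
print(hba1c_testing_rate(patients, 5, 17))   # ages 5-17  -> 0.6
print(hba1c_testing_rate(patients, 10, 17))  # ages 10-17 -> ~0.67
```

From identical data, the two specifications yield rates of 60 percent and about 67 percent, which is exactly the apples-to-oranges comparison that precise technical specifications are meant to prevent.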

Male Voice: A comment and a question. It seems to me this is an evolutionary process. Even though we would like there to be great specificity as to how measures are assessed, I do not think we are going to be there at the end of 2 days. I mean, I think we are going to have to accept the fact that we may come up with some measures that we think are valid and feasible but that still need work on specificity. There still needs to be some work done on how to make them more specific.

The question I have is about differentiating between validity and feasibility. We have reliability listed under validity, but in fact we usually put it under feasibility. I think there ought to be some agreement as to how we score these: that reliability is really not part of validity, even though I think it is extremely important; it is part of feasibility.

Rita Mangione-Smith: I mean that is normally the way I think of it, too, but I'm happy to—

Male Voice: I mean we could do it either way. I just think there needs to be some agreement [cross-talking] amongst all of us because when I—following the directions that were sent out initially to do the Delphi process, I put reliability under the feasibility category.

Rita Mangione-Smith: Just related to that comment—we will get to the others; people have their cards up—but related to that comment, Cathy, do you want to respond?

Cathy Caldwell: Well, it is sort of like, if we come to agreement, fine. The measurement community absolutely puts reliability separate from validity. It is neither of these things, and what it means is very carefully defined, certainly in traditional measurement. Reliability has a very specific meaning. If we do not want to go there, that is probably okay, as long as everybody understands that whether it sits under feasibility or validity, reliability is controlling: if you cannot show that a measure is reliable, it does not matter what you imagine about feasibility or validity, it just—

Male Voice: I agree.

Cathy Caldwell:—puts up a ceiling on what you can say about a measure.

Rita Mangione-Smith: Cathy, can we put into words what that formal measurement community's definition of reliability would be, just so that it is something we can all agree on?

Cathy Caldwell: I'm not going to come up with one in words, to be honest. But if you were using traditional measurement, psychometric techniques for things like scales and so forth, you would have to be able to show the measure was reproducible at some percentage. For making comparisons across groups, it might be 70 percent; for saying anything about individual people, it is closer to 90 percent. Now, I can come back to it, or I can send everybody where we come from, either in traditional measurement or in what is more contemporary, which does not rely completely on the standard (I will not say old-fashioned) psychometric techniques, and maybe that would be helpful. I can see if I can find it.

Denise Dougherty: If you can find the link, we can print it out from the link.
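
For readers who want the arithmetic behind the reproducibility thresholds Ms. Caldwell cites (roughly 70 percent for group-level comparisons, closer to 90 percent for statements about individuals), the following is a minimal, hypothetical sketch of one simple reliability statistic: raw percent agreement between two independent abstractions of the same charts. The data are invented, and in practice a chance-corrected statistic such as Cohen's kappa or an intraclass correlation would usually be reported alongside raw agreement.

```python
# Hypothetical sketch: percent agreement between two independent
# abstractors scoring the same charts on a binary quality measure,
# checked against the rough 70%/90% rules of thumb from the discussion.

def percent_agreement(ratings_a, ratings_b):
    """Fraction of charts on which the two abstractions agree."""
    assert len(ratings_a) == len(ratings_b), "ratings must be paired"
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# 1 = measure criterion met, 0 = not met, for ten (invented) charts
abstractor_1 = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
abstractor_2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

agreement = percent_agreement(abstractor_1, abstractor_2)
print(f"Agreement: {agreement:.0%}")                         # 80%
print("OK for group-level comparisons:", agreement >= 0.70)  # True
print("OK for individual-level claims:", agreement >= 0.90)  # False
```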

Jeffrey Schiff: Would it be possible, as we score this, to call this feasibility and reliability? Would that make people feel more comfortable? I mean we would be scoring for the minimum because I guess I'm sort of worried about the process of—

Cathy Caldwell: Sure, fine with me.

Jeffrey Schiff:—basically saying we need to score another core set as another filter.

Rita Mangione-Smith: Yeah, so one thing, and I'm totally sensitive to what you are saying. When you are looking at scales, at things like CAHPS® [Consumer Assessment of Healthcare Providers & Systems] measures or PedsQL scores, the type of reliability you are talking about is very, very salient: scale psychometrics. But I think there is another piece to reliability, which is, can I take these measure specifications, go from one health plan to another, and reliably get the same—?

Female Voice: Or reproducible.

Rita Mangione-Smith: Reproducible, exactly. So—

Female Voice: Fair enough.

Rita Mangione-Smith: Yeah. So I think in the HEDIS world, reliability is pulled out as a separate thing. In the Delphi process, it is usually folded into feasibility. I think it is semantics, frankly. I think we all believe that a measure has to have high reliability. So can we just agree that when you grade feasibility, with the recognition that reliability kind of sits in the middle, you are going to be thinking about reliability? Can we agree to that, just sticking it in the feasibility bin so that we do not need to do the Delphi on yet another criterion? That would be great. Okay, thank you.

Jeffrey Schiff: Okay, I think Doreen was next and then we will try to finish up here.

Doreen Cavanaugh: I just wanted to support having the specifications for the measures. It is very difficult for me to conceptualize putting a measure forward without knowing what the specifications are. That is really where you finally know what you are all talking about and can see whether you all agree on it. I just cannot imagine moving forward without doing that. We certainly cannot do it by the end of tomorrow, but we also have September; maybe we could say that the measures we finally send forward will each have a set of specifications.

Jeffrey Schiff: Lynn?

Lynn Olson: Hearing very clearly what the States were saying about their very limited capacity as they face the next year, a practical question: our list here includes different methodologies, from administrative records and claims data to chart audits to surveys, and I noticed there were certainly some comments made along the lines of, well, that would be a chart audit kind of measure. So I'm putting on the table here: is this something we need to consider as we—?

Rita Mangione-Smith: The source?

Doreen Cavanaugh: Yes, as we drill further down in terms of making our choices.

Rita Mangione-Smith: Yes. We said the data have to be available, but at this point, for this core set, do we say we are not going to include measures that need [cross-talking] chart review? Do we say that we are not going to include measures that require surveying populations?

Jeffrey Schiff: Or is this a transparency issue that we recommend things but with the acknowledgement—?

Rita Mangione-Smith: But the fact that this will be harder to collect.

Jeffrey Schiff: Right.

Female Voice: I think we just speak to [indiscernible]. I mean [indiscernible] is not just a data source.

Rita Mangione-Smith: Right. Are people comfortable with that: including all three kinds of data, but just being transparent that some of these measures will be harder to collect than others?

Jeffrey Schiff: We are kind of into the lunch hour here, so I just want to make sure that we get to the folks whose cards are supposed to be up. Mary, is your card up on purpose? So I got just Marlene and George, and then I think we will wrap up.

Marlene Miller: I was just going to say quickly this is a very rich discussion. Can we have a working lunch and continue it? Do we have to cut it short? That is all.

Rita Mangione-Smith: That is not lunch, and there are tables in the back so you can continue with [inaudible].

Jeffrey Schiff: Why don't we give ourselves a chance to huddle about that? Maybe we could take some of the syntheses over lunch for a couple of minutes, but we will have a lot to do.

Rita Mangione-Smith: So I'm going to just quickly summarize feasibility the same way we did with validity so make sure again [cross-talking].

Jeffrey Schiff: We had one more quick—

Rita Mangione-Smith: We had one more, sorry.

George Oestreich: The only thing that I was going to add is that we are bumping up against another issue that we are going to be concerned about. Is the feasibility related to right now, today, or is it feasibility related to the capacity that we gain with some of the health information exchange [HIE] issues that are pending? So I'm not sure how we weight that, but we need to—

Rita Mangione-Smith: To me, that is part of the structure piece. You know, you put some measures up there that States may not be able to do January 1st, but knowing that the whole health IT piece is coming, they might be able to do them within a year or two.

George Oestreich: Yeah, that is especially related to clinical data. Certainly we want to focus on clinical data, and the transmission and integration of clinical data in any of the reporting is limited by the lack of an HIE opportunity.

Rita Mangione-Smith: So just to summarize, I heard very strongly from the group that we are going to require specifications for the measures we recommend for the core. We will take data or information we get from the environmental scan that is going on and go back to the clearinghouse if we need to. If we are going to recommend a measure, it has to have specifications that we as a group have looked at and said, yes, these are the specifications we recommend using with this measure. Is that right? Okay.

The data to score the measure obviously have to be available, and we will include administrative data, medical records data, and survey-based data. We will include measures that require data from any of these sources, and we recognize the burden and will be transparent about which ones are more burdensome. And we all agreed that reliability is important in assessing whether a measure is valid, but it is also important in assessing whether a measure is feasible, so it sits in the middle; when we do our Delphi process, we will think of it when we are scoring feasibility.

Female Voice: Can I ask a question just for clarification?

Rita Mangione-Smith: Yes.

Cathy Caldwell: Now, on the available data: what I was ranking for feasibility in the first round, I thought in terms of, could we get this stuff? Because not all of them [indiscernible], but they are out there. I mean, we could do this [indiscernible] survey if we need it. So is that an okay [sounds like] framework [cross-talking]?

Rita Mangione-Smith: Yeah, so any corrections, additions, or subtractions to that summary?

Male Voice: It seems to me that there were kind of two opinions about specificity. One was that we needed to have it, and one was that we could not have it, so I'm not sure there was agreement that, to go forward, we have to have specificity with regard to the measure. I think what we agreed upon was that it would be desirable to have specificity, but we still may go forward with measures for which the exact specifications need to be developed. That was my understanding, but—

Female Voice: [inaudible]

Rita Mangione-Smith: I would like to ask for a show of hands. People who feel that we can go forward with the idea that specifications may not be present but could be developed or—

Female Voice: Before we vote: available when? I mean, I do not think we can get the specifications today or tomorrow, so we can work and go forward over these 2 days. Are you saying that at some point before we send the—?

Rita Mangione-Smith: I think before September 30th, through the environmental scanning, going to the Clearinghouse, and looking at adult measures, we do our best; for the measures that we think are really important, valid, and feasible, to actually score them as feasible we have to find the specifications. And for the final assessment, the final core, everything that goes in there would have a specification.

Female Voice: I do not know how—

Rita Mangione-Smith: Everybody is nodding, so I think we are [inaudible], not by tomorrow.

Male Voice: Yeah.

Rita Mangione-Smith: You can sleep tonight [inaudible] it is okay. All right, so should we have lunch?

Jeffrey Schiff: Okay. [Adjourned for lunch.]
