July 23, 2009: Morning Session (continued)

Transcript: First Meeting of the Subcommittee on Quality Measures for Children in Medicaid and Children's Health Insurance Programs

Helen Burstin: Sure. Unfortunately, I do not have my list of measures in front of me, but I think the first and most logical choice would be the AHRQ composite; the newly endorsed AHRQ composite around patient safety for kids seems like a logical one. It is completely based on administrative data. If you are going to pick one, it is robust; it captures a lot of different elements. It gets at the safety dimension on the hospital side. And then if you look at the two major inpatient conditions, certainly there is a set of measures for inpatient asthma. I would also argue, given the broader landscape around health care-associated infections for adults, that the issue is not unique to adults by any means, I assume.

Marlene, correct me if I'm wrong, but that certainly would seem logical if you think about prioritizing. I would go for the areas where we know we have good HAI (healthcare-associated infection) measures, CLABSI or others, just to pick a couple. And then, maybe not surprisingly given your orientation, Marina, I think picking some of the perinatal measures would make sense as well, just to get that breadth of the population, and we could certainly share those as well.

Nikki Highsmith: Maybe I'll actually take it from a process perspective and not a measurement perspective; my colleagues do that. But I do wonder whether, once you get to the point where you have a preliminary list of measures, there is a way that AHRQ, the Centers for Medicare & Medicaid Services (CMS), and the National Association of State Medicaid Directors (NASMD) could ask the States what they are currently collecting, so that you have a little more information. You certainly have the data from the National Committee for Quality Assurance (NCQA), which is at the health plan level, but there may be a way to go back with a preliminary set of measures and really understand from the States what they are currently collecting to inform the process. That may help you with the measures that are outside of HEDIS [Health Plan Employer Data and Information Set].

Helen Burstin: Nikki, if I could add one point to that: in my job, CMS did fund us to build a database of HEDIS measures collected from all the Medicaid programs.

Rita Mangione-Smith: Thank you all very much.

Jeffrey Schiff: One quick process thing—our public comment today is at 11:00, and there should be a signup sheet outside at the registration table. We are asking public commenters to sign up. We will ask Denise to explain the logic and scope of the AHRQ-supported papers before we get started. So—sorry.

Denise Dougherty: [Cross-talking] seeing if we can get the eight authors who are going to be talking around the table somehow.

Jeffrey Schiff: So these will be very brief presentations, but one of the reasons we all started at 8 o'clock instead of 9 o'clock is so that we would have more time for this discussion afterwards. So thanks for getting up early; our reward will be more time to discuss with the authors.

Denise Dougherty: Great. So I was asked to provide a little overview of why AHRQ and CMS asked for these papers to be done, as opposed to something else we could do. We have a limited amount of money and a limited amount of time to get work done so that it can be useful for this identification effort. Everybody here has been asked to get their final, or almost final, drafts of their papers in no later than September 1st or September 15th, even though they only started work about a week ago, and some have not yet had a chance to start. That is quite a challenge.

So here is the rationale. As you know from looking at Title IV, the first four papers on the agenda, by Len Bickman, Charles Homer, Jenny Kenney, and Scot Sternberg, are about specific measures or measure types that are mentioned in the legislation: for example, duration of enrollment measures and most integrated health care setting measures. Those two are mentioned as quality measures, but they have not been thought about as quality measures before, so we are asking people with expertise in those areas to develop a kind of conceptual framework, a logic model for what you might include in those measures, and to look to see whether there are any measures in use now that are reliable and valid and should be considered by this group and the Secretary. So that is those measures.

There are two other kinds of specific measures—mental and behavioral health care measures and family experience of care measures—that I thought would be among the most controversial and would lead to a lot of discussion around this table about whether they should be in or out, whether they are valid, and so forth. I was wrong about the family experience of care measures, because everybody seems to want them in, but there are issues about who gets asked what questions, what you really want to collect about family experiences of care, whether you should be collecting from kids themselves and at what age, and what the reliability, feasibility, and validity of these measures are. So those two measures are being addressed.

There is also another paper on a specific measure topic that Karen Kuhlthau at the Harvard Child and Adolescent Policy Center is going to be taking on; she just agreed to this on Monday, and it is on availability of care. Right now, what we have in use are utilization measures, which do not tell you what is available, and we discussed this yesterday. So those are the first four that you are going to hear about.

The other papers are by people who had data, and we asked them to identify data on what measures are in use. People have done surveys, interviews, and so forth. NASHP has done it, Health Management Associates (HMA) has done these things, and everybody asked the questions of all the Medicaid and CHIP directors in a slightly different way and got maybe a slightly different set of measures. So this is part of our effort to find out what is really being used out there.

And then Rich Hilton is going to talk about his project to do an environmental scan, starting with Web sites and informal interviews and maybe going to the NASHP meeting and that kind of thing. And Trish MacTaggart is going to talk about the Medicaid Statistical Information System (MSIS), which is a data set that sits somewhere; she is going to talk about it. But it is a nationally aggregated dataset of Medicaid data from which we think we might be able to get some quality measures, and you can actually get race and ethnicity data from it for some quality measures. So she is going to look at that to see how far we can go, and that is very important because racial and ethnic disparities and SES disparities are a critical part of this work.

So with the time we have available, the expertise that is out there, these controversial or possibly controversial topics, and the need for more information about what is being used, that was the concept behind these papers. I would love to have ideas for additional papers, but they pulled the plug; we cannot ask for any more contract papers. So anyway, let's hear from the folks about what they are doing.

Jeffrey Schiff: All right, so this will be fairly rapid fire. Okay, so I think we are going in order, and Leonard is right there, perfect.

Leonard Bickman: My experience in the last 20 years has been in developing measurement and feedback systems at the client level, so this level is rather new to me. The work I have done is not administrative; we did do one medical records audit of practitioner offices in a prior project, and it was one too many, so I do not think that is a successful way to go.

Let me say that not only do I not see a desire for more and better measures and increased accountability, although somebody here thought they saw it at some level, but I see outright resistance to any measurement in mental health. And certainly there are no volunteers, or very few volunteers, stepping up and saying, "I want to be held more accountable; I can show you how effective we are." That is what my experience has been. I do not know if anybody else feels differently, but it seems to me that leadership at the national and State levels, along with coercion and rewards, is absolutely necessary, and that this will not happen spontaneously out of the goodwill of our commission's hearts.

There is excellent work being done in the field, but I'm going to focus narrowly on the mental health measures, which are very few: for example, the 7- and 30-day followup after hospitalization, depression medication (which may be for adults or children; I cannot figure it out), and ADHD followup. This is a small area, but it is a very costly area, and it is an area where the evidence base is poor to begin with.

Now, I was very frustrated before I came here, and actually I'm more frustrated after coming here, although now I feel I have compatriots who share my frustration, so I feel better about it overall, if that makes anybody happy. The quality chasm idea struck me as interesting, and when I heard that term, it occurred to me that I had read somewhere a quote I thought was neat: "you cannot jump over a chasm in two small jumps." It was not Evel Knievel who said that; it was Lloyd George, the British prime minister. And so rather than radical incrementalism, I'm in favor of just radical movement here, because I have not seen any in the last dozen years, to be quite honest.

So I see quality measurement as a major priority for mental health because it is one of the few services in which, as far as I can tell, there is zero accountability for performance. There is not any. We do not even know what the service is; we cannot describe it. So we have this service that we are spending billions of dollars on, and there is no evidence for it; the few studies that have been done in the field do not show much to support it. There are not even correlational studies showing that the things we build in to improve quality, such as accreditation, education, training, and years of experience, are related to outcomes. Some of the things that I heard here gave me a little bit of concern about the concept of validity and—thank you very much.

Well, let me skip over to my last point. I think what we have in front of us is a strategic issue, not a science one, in some sense. First, is the inclusion of some measure, even one for which there are no validity data, better than no measure? Follow that logic along. Second, are crude, nonspecific measures, just a screening or a followup and that is it, better than no measure, even though we know that the type of screening or followup is critical in determining its validity? And is a valid, specific screening or followup measure worth collecting if there are no services available, or if there are services available but they are ineffective? Why go through the whole process? So are we starting at the wrong end in some ways? That is one of my concerns.

The bottom line is, well, I'm also hearing that States cannot afford the current measures, let alone new ones. Is it ethical, because I think there is an ethical and moral issue involved in measurement, to ask them to continue to collect, at some cost, information that we cannot defend as a good measure of quality? I think we should tell them to stop on those measures that we think are not valid measures of quality.

Is it not cynical to ask them to continue to collect data that have no validity and no use? The HEDIS measures for mental health have been around for 12 years, and I do not hear anybody saying, "Oh, we have evidence that they improve outcomes." So even the process is somewhat suspect. Can we show that current systems have improved quality? Are accredited organizations delivering more effective services than nonaccredited organizations? I'll leave you with those questions.

Jeffrey Schiff: Dr. Homer, are you speaking about the family experience of care measures?

Charles Homer: Yeah. Okay, so this will be quite brief. Again, I'm actually channeling John Patrick Co, an investigator at Massachusetts General, who is working with me, and with Scot Sternberg, who will be presenting the most integrated care setting work, on this paper on measuring patient and family experience of health care for children.

As you know, we were asked to develop a conceptual framework, to identify what is valid and reliable, to describe how things are being measured now and what alternatives could be used, and to make recommendations going forward. We just started, as Denise said, about 2 weeks ago. To some extent, in this area, as you discussed yesterday, things are really clearer than I think we maybe thought when we chartered this, and so the question is: what do you really want from this report?

On the conceptual framework: from our perspective, we think, as people have for the last decade or so, that patient and family reports provide a critical perspective on care. The two areas I would focus on most are, first, family centeredness, which is one of the IOM's six dimensions of quality (they say patient centeredness, but we say family centeredness in pediatrics), and, second, outcomes and experiences that parents or patients are uniquely able to report on. For example, if pain is an important criterion, there is really no other way to assess whether something causes pain than asking the patient whether there was pain; the same is true of satisfaction or ratings of care and understanding.

There has been oodles of work already, and we think it would be foolish to replicate or expand on it, about what families care about, what is important, and what families are able to report on. I mean, AHRQ spent gazillions of dollars (not enough, but lots of money) supporting the CAHPS® [Consumer Assessment of Healthcare Providers & Systems] work. That built on lots of wonderful work that other people, such as the Picker Group, had done, and I think it really is wonderful foundational work.

And then the last point is whether there is anything different, the distinctive elements of child health, and everyone in this room knows them because you all wrote most of the papers about them: child versus parent report; the pace of development; disadvantage and diversity in the pediatric population; the different epidemiology of child health; and, something that has been talked about a lot today and yesterday, the importance of community issues and prevention for child health compared with our adult colleagues. Not that any of these are absolute distinctions; they are relative distinctions.

So given that, and assuming that people here are generally familiar with what the CAHPS® tools are, I think the question is how specific we need to be in child health: should we be looking at specific issues across the different elements of the child health spectrum, that is, the neonatal period, early childhood, and adolescence? Are we well covered in both the ambulatory and community areas and in the inpatient areas? For the H-CAHPS® concept, do we need to be thinking about any condition-specific areas in addition to the general CAHPS-type measures that are out there? Do the existing measures address some of those unique elements of child health that I mentioned, particularly things like prevention and community?

Those are the sort of conceptual issues we should think about in considering whether the measures that are out there are adequate. And then there are the issues that fall broadly under usability and feasibility: lots of questions about mode of administration, lots of questions about feasibility. I certainly heard, particularly from the Medicaid leaders yesterday, a fair amount of groaning, quite appropriately, about the costs associated with administering surveys and any idea that we might expand them.

And then there is a concern, maybe echoing some of Paul's comments on this: some States and many plans, both private and public, have been doing CAHPS, but do people use those data to actually drive change and improvement? I can speak from my previous career at Boston Children's Hospital, where I spent 10 years doing the hospital surveys. Finally, toward the end of that stay, when I presented my data, some clever board member said, "Charlie, do you not get depressed when you look at the data, and they are flat?" The reality was that nobody at the hospital was actually held accountable for changing the results; we just surveyed people, who were not terribly happy with various aspects, happy with other aspects, and it never changed. So the question here is, are we holding States or plans accountable for patient experience, and how will that be used?

Quickly, on current measures: the CAHPS® measures are there, particularly at the plan level. There is, as Helen Burstin mentioned, an ambulatory CAHPS® measure that has been approved by NQF. I do not believe it is actually in substantial, wide use by any Medicaid agencies, so I was actually surprised to see it was NQF-endorsed. When we spoke with Paul Cleary, one of its developers, he said, "Yeah, we have a paper coming out on it," but he did not really say it was in wide use.

There is the children with special health care needs supplement to CAHPS, which includes both the screener element and the additional questions. There are also, as you know and as Helen mentioned, already-approved measures developed by the Child and Adolescent Health Measurement Initiative, which focus on early childhood and developmental issues: the PHDS (Promoting Healthy Development Survey) and a variety of variations of that survey, and something called the YACHS (Young Adult Healthcare Survey), which is reported by teens and is used in New York State, as we learned yesterday.

So those are the CAHPS® measures that are out there; they cover the spectrum. There are other measures in wide use, and the most widely known commercial one is the Press Ganey survey. Obviously, that has a cost associated with its administration, but I thought it was important to mention for completeness. Barriers to use—I see the stop. Thank you. I did not see the 1-minute one.

The cost of administering the survey is an issue; we heard yesterday about a 20 percent response rate in Medicaid populations, and that is a concern. There is a gap: there is not, as far as I know, a hospital CAHPS® in the pediatric market. There is a clear need for that, and there is interest in what is being done. Is there a behavioral health CAHPS® applied to pediatrics? There is not, as far as I know, a child database allowing comparable data as there is on the adult side, and I think there needs to be a link. On improvement, the questions I would ask you for later discussion are: do we go beyond CAHPS, and how do we encourage use for improvement? Thank you very much.

Denise Dougherty: Jenny Kenney?

Genevieve Kenney: As Denise said, we are going to be writing a background piece on duration. No, wrong Jenny. Okay, in our background paper, we are going to be focusing on what I would call a new quality measurement area: duration of coverage. We are just at the beginning of this work, so we have more questions than answers.

Well, why duration of coverage? Why did the legislation explicitly require that you include coverage duration as a quality measure? First, I think it is, fundamentally, a critical measure of program performance: how well are Medicaid and CHIP programs doing at enrolling eligible children, keeping them enrolled, and keeping them from being uninsured? And we have a lot of evidence that program characteristics related to enrollment systems, retention systems, and outreach actually do affect both take-up and renewal. At the same time, as Cathy Caldwell said yesterday, take-up and renewal are not exclusively in the hands of programs. Family decisions affect whether families take up coverage on behalf of their children or maintain that coverage, and whether they are willing to pay premiums. And their economic and family circumstances change in ways that introduce a dynamic dimension to who is eligible and who needs coverage.

Second, and this was mentioned at several points yesterday, coverage duration often affects—it seems like it always affects—the population that is used as the basis for devising all the other access and quality measures that you are going to be considering, which means it is really critical to look at the share of the kids in the programs to whom you can generalize based on those access and quality measures. You may be doing a bang-up job, but only for a very small share of all the kids who are touched by the program, so it is a really important dimension from that point of view. And ultimately, I think we have a very strong research base suggesting that coverage increases timely access to needed care for children and, as Marina suggested yesterday, that timely enrollment in Medicaid affects timely access to prenatal care for pregnant women, although I'm not going to talk about that today.

What do we know about how well programs are doing nationally? Our best estimate is that, despite increases in participation in Medicaid and CHIP over the last 10 years, about 5 million children are uninsured at any point in time despite being eligible. Two really important facts to keep in mind about that: first, 70 percent of those who are uninsured but eligible are actually eligible for Medicaid, so the Medicaid measurement piece is critical here. Second, it is not just a matter of finding all the eligible kids and getting them in; retention is a fundamental issue here, and some analysis we did suggests that if you look at a slice of uninsured kids at a point in time, over a third had been enrolled in Medicaid or CHIP in the prior year or 2 years, which speaks to retention and churning issues.

What would we do ideally? In the ideal world, we would have a longitudinal household survey that interviewed families at regular intervals over the course of the year, asked them detailed questions about their health insurance coverage, and did so using a State-representative sample frame with large State samples. But we are not in that world; we do not have such a survey. I do want to point out, though, that we have a new survey vehicle, the American Community Survey, which included health insurance questions for the first time in 2008. It is substantially larger than the Current Population Survey and offers the potential to track uninsurance rates and coverage rates for key segments of the population at the State level in a consistent way across the country, which we have not been able to do before.

Administrative data, our enrollment files, can also be used to track the narrower but important question of how well States are doing at retaining the kids they enroll. As Sarah deLone from NASHP is going to describe for you this morning, it turns out that a number of CHIP programs are actually measuring coverage duration and retention, and some are actually trying to look at churning. So on the whole feasibility question that seemed to be in doubt yesterday: I think these measures really are in play.

And Leighton Ku and colleagues from George Washington University recently put out a report that came up with a duration measure for all 50 States using the MSIS data that were mentioned earlier today; I think Trish will talk more about this. So I think we are at a point where we have the beginnings of a basis for these measures. But let me list some of the problems I can see already. We do not seem to have common specifications for how States are defining these different measures that are in play. The MSIS data are really coming along beautifully, but the most recent data available are from 2006; that is not really timely. The data systems for Medicaid and CHIP are not integrated in many States, so you do not get a complete picture of what is going on. We do not know whether the kids who are disenrolling are gaining coverage, and fundamentally, we do not know from the administrative data how well States are doing at reaching the target population.

So, bottom line: while there may be a strong relationship between these duration, retention, and churning measures and the extent to which uninsurance rates among the eligible population are changing, that research has not been done. As I said, we have to really think about the denominator for your measures and who you are leaving out. And in closing, since what is available in this area has not really been assessed for validity or reliability, we will be following up with some key States to find out what specifications they are using and how useful they are finding their measures to be for program measurement, and with key researchers around the country who have been dealing with administrative data systems. I would love input from you on how to take the criteria you are using for your other measures and adapt them to this very different domain, and I want to emphasize that this may be an area where we really do need to support some methodological work so that we have a stronger base. Thank you.

Denise Dougherty: Scot, I think you are next.

Scot Sternberg: Thank you for this opportunity. NICHQ (National Initiative for Children's Healthcare Quality) has been asked to develop a paper on identifying valid, reliable, and feasible quality measures of the most integrated health care setting, with the goals of establishing a conceptual framework; identifying current measures and evaluating their reliability, validity, and feasibility; and recommending a set for the core measure group.

It is very interesting to think about the definition of the most integrated health care setting for children. In speaking with folks, in dealing with key informants, and in the literature review, there is no single definition or unitary concept. A variety of terms are in use: integrated care, care coordination, collaborative care, collocation, integration of physical health and mental health, patient-centered medical home, the chronic care model. If you think about looking at integration, you need to look at the levels of integration—a number of papers talk about that—organizational, financial, functional, individual, and population.

In thinking about a framework within which to look at this, I thought about it in terms of two key dimensions: one, the health care delivery setting, and two, the broader context of well child and family orientation, as with other areas and other measures that have been talked about.

In terms of the health care delivery setting, Leif Solberg has done work around this, looking at organizational integration, financial arrangements, and functional systems that support integration: registries, decision support, clinical reminders, test and referral tracking, care coordination, and information exchange.

In terms of the well child and family orientation, we are looking at the comprehensiveness of care; prevention and developmental screening; integration of mental health and physical health; family involvement; family-centered care, education, and self-management; integration with community and linkages with community and social systems; and, obviously, what we have talked about, continuity of care longitudinally.

In terms of the current measures that the initial environmental scan turned up, CAHPS® for chronic conditions is really the measure being used by a number of States. Thinking about measures of comprehensiveness, there are the PHDS and the YACHS; and then, thinking about followup measures that speak to some continuity of care, there are the HEDIS measures that were spoken about earlier: followup after hospitalization and treatment engagement.

There are a variety of alternative measures out there being used, in both the National Survey of Children with Special Healthcare Needs and the National Child Health Survey, that include elements and components of medical home, family functioning, neighborhood and community characteristics, and transition care. There is also NCQA's work around PPC-PCMH (Physician Practice Connections-Patient-Centered Medical Home) in certifying physician practices as medical homes. In that context, we could take the step of looking at the percentage of pediatricians within a State who are certified, or the percentage of Medicaid and CHIP children who have a pediatrician who is certified, particularly as more and more pediatricians go through this and as the initiatives within the States continue. There are the Medical Home Index and the Medical Home Family Index, and Barbara Starfield's primary care assessment tool. IPRO has a measure of whether people with asthma have a management plan. And Sarah talked about a lot of the exciting work going on at NCQA around measures that are in development but not ready for prime time, including care coordination measures looking at the individualized care plan, referral coordination and timeliness, and the new well-child visits.

Questions on which we would greatly welcome input from the subcommittee are: should we address the most integrated health care setting as a two-dimensional concept, health care delivery and whole child orientation? Within health care delivery, should we focus on the broader concepts or narrow in on medical home and care coordination? And are survey methods to be considered?
