July 23, 2009: Morning Session (continued)
Transcript: First Meeting of the Subcommittee on Quality Measures for Children in Medicaid and Children's Health Insurance Programs
Jeffrey Schiff: We are going to go a little bit into our discussion time, so we are rounding up here. Thanks. We appreciate this.
Sarah deLone: Thanks. Good morning. We like to keep you on your toes at NASHP, so yesterday Cathy was not Alan, and today I am not Cathy, who is identified on your agenda. My name is Sarah deLone. I'm a program director at the National Academy for State Health Policy. I'm also really technologically impaired, but there we go. I do not really want to spend my precious time on who we are—Alan is sitting on the subcommittee—but I will just emphasize that we are by and for legislative and executive branch State policymakers and administrators who work in health care. We derive our expertise from them, and they are a prime audience for much of our work.
We have been asked to produce four papers for this process that will draw on a national survey we fielded in December, which I'll talk about in a second, to address four different areas. The papers are really to provide that overarching context about where States are in terms of the activities that are important to the work you all are considering, in particular where the State programs whose almost exclusive focus is coverage of children and youth stand.
The first paper will look at the activities in which States are currently engaged to improve the quality of care in their programs. What kinds of data do they collect, and how do they use it? Do they produce performance measures? Do they use the HEDIS [Health Plan Employer Data and Information Set] measures, or have they developed their own performance measures? Enrollee surveys—do they do them, and if so, which do they use, et cetera? The second paper will look at current State efforts—Jenny referenced this—to track data on duration and continuity of coverage. The third paper will look at how States—and there are not a lot of them doing this, according to our survey—are attempting to use the data they collect to look at health disparities. And the final paper will present the States' own perspectives on the issues of provider capacity and access that they face.
In each case, we will describe the range and frequency of activities that States are engaged in, the barriers they perceive to doing a better job at both the collection and the use of the data and the measures, and what sorts of data and measures they think would be most useful to them in moving forward with their quality programs.
Just to give you a little bit of a sense of the survey that we will be drawing on: NASHP has served as a de facto home for CHIP directors since the program's inception, and every couple of years we have fielded a very comprehensive survey looking at virtually all aspects of States' programs. We survey all 50 States and the District of Columbia. We look at both Medicaid expansion and separate programs. As I mentioned, the most recent survey was fielded this past December, and to date we have received 42 responses from 42 States, including the District of Columbia. In the past, we have had a 100 percent participation rate, and we are still optimistic that we will get close to, if not all the way to, that goal this year. There are some extra challenging circumstances for States this year in terms of having the resources to respond to surveys.
Also, I want to just mention that, where relevant for each of these papers, we will look at differences in what States are doing based on delivery system—fee for service versus managed care or PCCM. Jenny and I thought we would do a little bit of a tag team in terms of what preliminary data to share from our survey, because duration of enrollment, which is specifically mentioned in the legislation as being important, is one area in which there are currently no identified measures. We thought this would be the most useful to share with you.
And as Jenny mentioned, you can see that some States are beginning to collect and look at this kind of data. The specific measure mentioned in the CHIPRA statute has to do with duration of enrollment, and you can see that 10 separate CHIP programs among the States responding to our survey, and 12 Medicaid expansion programs, track the average length of enrollment; fewer track the absolute length of enrollment, and, as you can imagine, there is considerable overlap among those States.
As I think a number of people—Cathy and Jenny and Cindy yesterday—mentioned, the CHIPRA legislation speaks to duration of enrollment, but really, what we are interested in is continuity of coverage, which is definitely different. So we also asked in our survey about some of the other enrollment and retention measures that States are looking at. You can see that 12 and 3 States, 14 and 3, and then 3 and 1, respectively, look first at the absolute rates of retention at the point of renewal, which for most States is 12 months after the initial eligibility determination was made.
I also think it is important, as Cathy mentioned yesterday, that there are good reasons and bad reasons for children to lose coverage at the point of renewal, so we thought it was important to look at which States are tracking the reasons that children lose eligibility. If you are going to develop a measure, you want to take out the good reasons—family income goes up or, in the case of the separate programs, it goes down; the children move out of State, et cetera. You want to know why the rate is what it is. And finally, a really important measure, but one that very few States so far are looking at, is churning: children who have become newly enrolled but who had been enrolled within the previous X months, and States choose a different X for that criterion.
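The churning measure described here can be sketched directly from enrollment records. This is a minimal illustration, not any State's actual method; the per-child enrollment spans and the 90-day lookback window (the "X months" each State chooses differently) are made up for the example.

```python
from datetime import date

def churn_rate(spans_by_child, lookback_days=90):
    """Share of new enrollments that are re-enrollments within the lookback window.

    spans_by_child: dict mapping child id -> list of (start, end) date tuples.
    lookback_days: the "X months" window; States choose different values.
    """
    new_enrollments = 0
    churners = 0
    for spans in spans_by_child.values():
        spans = sorted(spans)
        for i, (start, _end) in enumerate(spans):
            new_enrollments += 1
            if i > 0:
                # Gap between this enrollment and the end of the prior span.
                gap = (start - spans[i - 1][1]).days
                if 0 < gap <= lookback_days:
                    churners += 1
    return churners / new_enrollments if new_enrollments else 0.0

# Toy example: child_a re-enrolls after a 60-day gap (a churner);
# child_b stays continuously enrolled.
spans = {
    "child_a": [(date(2008, 1, 1), date(2008, 6, 30)),
                (date(2008, 8, 29), date(2009, 3, 1))],
    "child_b": [(date(2008, 1, 1), date(2009, 1, 1))],
}
print(churn_rate(spans, lookback_days=90))  # 1 churner out of 3 enrollments
```

Note that the chosen X directly drives the rate: a 30-day window would miss child_a's 60-day gap entirely, which is why cross-State comparison needs a common specification.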
Just a couple of overarching points. These are preliminary results, and I think it is important to note that we do not capture all of what Medicaid does. For States that have a Medicaid expansion program, our findings are probably reflective of what they do in Medicaid generally, but our survey does not cover the Medicaid programs of States that use their CHIP dollars only in a separate CHIP program, and I think that is an important limitation to note.
And before you put up a stop, do I get extra time? No? Okay. In that case I will leave it at that and just—well, no, I will not. I'll mention one more thing. NASHP is the program office for the Maximizing Enrollment program, and we are working very intensively with States that are attempting to maximize enrollment and retention of coverage for children who are eligible for CHIP or Medicaid but not enrolled. There is a large data collection aspect to that program, and we can also draw on what we have learned from what those States are doing. So as you are thinking about these measures, we would be happy to engage with you in a conversation based on what we have learned. And Cathy or Alan are signaling me, so I think they may have something to say.
Female Voice: [inaudible]
Sarah deLone: Oh, our contact information, I'm sorry. So here is our contact information. In terms of what we will draw out of our data—and we will also be doing some followup interviews with States—hearing this discussion has been enormously helpful to us in figuring out what we should most emphasize, but we would welcome additional thoughts, and we would certainly welcome a dialogue and conversation with you as you consider what types of measures to recommend in this area. Okay, thank you very much for your time.
Jeffrey Schiff: Jenny?
Jennifer Edwards: So we have some overlapping names, and we have some overlapping papers. What I'm going to present is a survey that was funded by the Commonwealth Fund and carried out by Health Management Associates to find out what Medicaid and CHIP programs are doing, or were doing as of earlier this year, to measure quality for kids. And in case you do not know HMA, we are a national consulting firm with 10 very small offices around the country. I'm from the New York office. My collaborators on this are from the Lansing office—Vernon Smith, Esther Reagan, and Dennis Roberts.
So our survey was designed to identify what current practices are, and what is most relevant to you all is that it describes the breadth of current use of quality measures. Part of the Commonwealth Fund's goal in doing this survey was to find out what else States would need in order to achieve their goals around quality for kids. So we, too, mailed a survey to those poor Medicaid and CHIP directors—a 50-State survey plus the District of Columbia. We were in the field in February and March of 2009. We received 53 responses because States could reply separately for their Medicaid and CHIP programs. Then, for purposes of this subcommittee, we went back and tried to link some of our responses from this survey to an annual survey that HMA does for Kaiser on program characteristics, because we know it is important that you understand which States are doing this for managed care plans, fee for service, and PCCM. We did not realize exactly how hard that would be, but we tried, and we have gotten some information about that.
I guess that there are two things I want to tell you about: one is what the data actually show States are doing now, but also as you think later on about the term "importance," I think the directors that responded to our survey really could provide some useful information.
So what you see here is that 91 percent of States—of programs, because this is combining CHIP and Medicaid programs—are currently measuring access with HEDIS for at least part of their population; 35 States use managed care organizations to do that; 89 percent are using the effectiveness-of-care measures; and 70 percent are using State-developed measures, which tie more closely to their goals for improvement in any given year when they are working on something specific. Sixty-six percent are using the CAHPS® [Consumer Assessment of Healthcare Providers & Systems] patient experience survey—some of which are dental CAHPS®, some of which are children-with-special-health-care-needs CAHPS®—and then about 15 percent are using additional NQF-endorsed measures.
We split this out here—I'm not going to go into it, but you have the data; Denise has the data and can share it with you by plan type. As you know, it is easier to ask your managed care organizations to collect data than it is to dedicate limited State staff to collecting the data for a fee-for-service or PCCM population. So CHIP programs tend to use more managed care and tend to have more of the mandated measures, and States with stronger PCCM and fee for service tend to use more of the State-developed measures; but all the specifics are provided to you.
Then we tried to weight it: are these the big States or the small States? If you were to use some of these measures, what percentage of kids in the country would be covered by that measure? So we weighted the data here by program size. It is an imprecise process, but what you see is that the larger States are the ones doing more measurement, so these measures get you more kids than if you just count the number of States doing them. We also looked at high managed care-penetration States versus low managed care-penetration States, and the higher-penetration States are obviously more hooked into the HEDIS-type measures, though there is still a good number doing the State-developed measures.
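The weighting described here—counting children rather than programs—can be sketched as follows. The program sizes and the "uses HEDIS" flags are entirely made up for illustration; the point is just that a measure's share of programs and its share of children can diverge sharply when large States do more measurement.

```python
# Hypothetical program records: enrollment size and whether the program
# reports a given measure (here, a HEDIS access measure).
programs = [
    {"state": "A", "children": 2_000_000, "uses_hedis": True},
    {"state": "B", "children": 150_000, "uses_hedis": False},
    {"state": "C", "children": 850_000, "uses_hedis": True},
]

total = sum(p["children"] for p in programs)
covered = sum(p["children"] for p in programs if p["uses_hedis"])

# Unweighted: share of programs reporting the measure.
share_of_programs = sum(p["uses_hedis"] for p in programs) / len(programs)
# Weighted by program size: share of children covered by the measure.
share_of_children = covered / total

print(f"{share_of_programs:.0%} of programs, {share_of_children:.0%} of children")
```

With these toy numbers, two of three programs report the measure, but because the two reporters are the large ones, 95 percent of the children are covered.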
We asked States to tell us what those measures were, and we have provided Denise the table of each measure by State. The problem is that this was a write-in, so people only wrote in the ones they thought of; they did not try to be comprehensive. If we were to do it again, we would go back and ask, which of these measures do you do? Maybe in your environmental scans, Denise, you could use this and ask them specifically about each one, because I feel as if this is very likely underreporting of each of these measures.
So we asked these respondents how it is going. We asked in the survey, and then we held team meetings with them to actually discuss how it is going. And honestly, as much as they are doing to use these measures right now, 75 percent of respondents said that Medicaid and CHIP could do a better job improving care if there were additional, better measures. So they are not loving what they are doing. They are doing it because it is sanctioned, because it has an evidence base, because it is the best thing out there—but does it meet their needs? No. So when you think about importance, there are other things that they say would also be important. Coordination of care came up very high—and these were prompted categories—so coordination of care, mental health, dental, adolescent access to family planning; not so much patient safety. States want measures that are aligned with their priorities, so I think it will be very important, when you think about importance, to make sure you have your finger on the pulse of what the States' priorities are.
They did talk at length about measures that focus on outcomes rather than process of care. They do not want to know whether the kid got the right drug; they want to know whether kids stayed in school because they got the right drug—much harder to measure than what we are talking about here. They also were sympathetic to the fact that this is costly to measure, but many of them have research partners—access to universities or other groups that can help them with this—and they did note the methodological challenges.
Overall, when we asked them a series of questions about what else they would find helpful—this gets to Barbara's job—they really want that national benchmarking database for quality improvement. They want information on what other States are doing. They want to know about disparities, and they want technical assistance in doing this.
So just to sum up, there is a full report available online at the Commonwealth Fund Web site if you want to read all the details, and there are these lengthy tables that Denise will probably share with you if she has not done so already. Thank you.
Denise Dougherty: Trish MacTaggart is going to go inside MSIS, but I hope with not too much detail.
Patricia MacTaggart: Yes, that is me. First is the framework, second is the limitations, and third is the process for doing it.
MSIS is the Medicaid Statistical Information System. It is what the States submit from claims data. They have been doing it by mandate since 1999 for the fee-for-service world. It is optional for encounter data, but most States are moving toward doing that as well. The reason for the discussion started with: what could States do with claims data? Not all the performance measures you want, but what are some core sets we could do tomorrow that do not cost States more money and could actually be statistically sound? Because, as you talked about yesterday, it is the specifications: What is the data definition? What is the format it is going to come in? That is all defined in the MSIS data.
It also shows how States draw down their Federal dollars, so we know it is accurate without the separate audit issues you otherwise are going to get. And it is reported quarterly, though even there a delay is built in: if you do not get it in by the due date, you have up to 2 months later. Yes, it is true that the statistical reports that are out are for 2006; they are 2 years old. But it does not have to be that way, because the data is at the Centers for Medicare & Medicaid Services (CMS) on a quarterly basis. There is a difference between an analysis for a research project and the fact that this could be a rolling year, and the process for doing that.
My main point is that it is already out there. It is a standardized national dataset. States already have it; States would not have to do more work and could spend their time on where we all want to go, which is the performance measures of the future. It is claims data, but it also has an eligibility dataset that I think people seem to forget about. That does not answer the question of why, but it does deal with some of the churning questions, so it really does allow you to deal with things we have all struggled with. When you do a HEDIS measure with managed care, if your managed care contract years do not match your measurement years, you have a delay and a matching problem. And if a plan goes in and out, it does not work. Many States put children on a fee-for-service basis for 1 month before they go into managed care; those children get lost. Some of the special needs kids are not in managed care. All of that comes in in the MSIS data. So that is the basis of what it is.
As to why we are looking at it: the States could each look at all their claims data and do this themselves; however, here we have all sources coming into a national dataset that actually has the data definitions, which means we have consistency between States from day 1 for a subset of a subset of information. CMS could take this data quarterly, do the analysis, and feed it back to the States within the quarter. We could have a rolling year, or at least a quarterly or 6-month analysis, of performance measures built on the specifications of whichever claims- and eligibility-based measures you decide on.
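The rolling-year idea described here—CMS aggregating the latest four quarters of a claims-based measure and feeding it back each quarter—could be sketched like this. The numerator and denominator counts are hypothetical, and this is not CMS's actual process, just an illustration of the mechanics.

```python
from collections import deque

class RollingYear:
    """Maintain a rolling four-quarter window of a claims-based rate measure."""

    def __init__(self):
        # deque with maxlen=4 automatically drops the oldest quarter.
        self.quarters = deque(maxlen=4)

    def add_quarter(self, numerator, denominator):
        self.quarters.append((numerator, denominator))

    def rate(self):
        """Pooled rate over the quarters currently in the window."""
        num = sum(n for n, _ in self.quarters)
        den = sum(d for _, d in self.quarters)
        return num / den if den else None

ry = RollingYear()
# Five quarterly submissions; the window keeps only the most recent four.
for n, d in [(80, 100), (85, 100), (90, 100), (95, 100), (70, 100)]:
    ry.add_quarter(n, d)
# Window now holds quarters 2-5: (85 + 90 + 95 + 70) / 400
print(ry.rate())  # 0.85
```

Each quarterly refresh updates the trailing-year rate, which is the sense in which the analysis could stay current instead of lagging 2 years behind.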
And again, the States would win; nationally, we would win; and we would get it done tomorrow. The limitations: there is the issue of encounter data, because we pay on fee-for-service data, not on encounter data. There is the issue of how clean or dirty it is; most States say they have their own methods to validate that. It is also important to know that because it is claims data, bundled payments give us some limitations, but that is going to be true in anything you are doing. There has to be agreement that whatever this first round produces is indeed the baseline moving forward. And then there is the limitation of resources at CMS and at the Federal level, because we cannot do it this way and then wait 2 years without adequate resources for doing it. All doable, all very quickly said, but I can give you a sense of why it is worth having a conversation. And the paper will get into all of this in detail.
I actually sent out e-mails already to all the State people. I'm a former Medicaid director, so I know at least somebody in almost every State. It is amazing how quickly it is coming back. There are limitations, but almost everybody said this is doable, which is the most important thing to know. That is all I'll say for now. We will come to questions later.
Denise Dougherty: Do we have one more? Rich Hilton? Thank you. While he is walking to the mic, I do want to thank all of these authors who not only responded so quickly to this request, but this was an iterative process, so I want to thank people who came to me and said, "Hey, here is what I have, maybe you can use this in your CHIP work." So really, it is great and you can see that they are talking to each other, so I think the papers will be terrific and helpful. Thank you all.
Richard A. Hilton: I'm Rich Hilton; I'm with Econometrica, and I'll be brief. Our task is basically to build on some of the original work here, and it probably relates most to one of your selection criteria, which is availability. There was a question earlier in the session about what States are actually doing now, and our task, between now and September, is to get the best current information on what the State programs are actually collecting. We will look for possible quality measures in all the categories of interest that were mentioned in the legislation and have been mentioned previously—duration, outpatient care, preventive care, integration of care, family-centered care—and we will obviously have a variety of subcategories within each of those categories of interest.
If you want to think of what our end product should be, you can probably think of a series of spreadsheets where we start with the different categories and subcategories of interest and ask: which States, and within each State which programs, are collecting data on this? What are the specifications, to the greatest degree possible? How are the data aggregated? What are the population strata for which you can get data? And there is another item we have been asked to do, which has been mentioned a lot: we also want to find out whether, for any of these data, they are collecting race, ethnicity, socioeconomic status, or special needs data, all of which will eventually be required under subsequent CHIPRA requirements in a year or two.
The reason I passed this out is that the most important piece of information is the contact information, because we have a short timeframe, and any assistance and suggestions for getting information will be greatly appreciated. We have three steps. We are going to do another environmental scan, as complete as possible, of all the publicly accessible Web sites and other data sources to see what the States are doing. We will also obviously be going to some of the previous research and surveys and looking at that data. But then, because you want the most complete information on current usage at the State level, we are also going to try to go to program contacts in each State. The most likely way we will handle it is to go to all the publicly accessible sites for a given State, take what that information gives us as a baseline, then go to the contact and say, "Is this correct? Is this complete? Are there other categories of interest for which you may be collecting some type of data?"
Our job is not going to be to prioritize or recommend; it is simply going to be to document. So when you come back, the result might be, say, that 42 States are using a certain quality measure with some variation in definitions and specifications, and you could refine that in the recommendations. We just want to give you a tool at your next meeting that is helpful in your decisions in that area. So that is our task, and I'll be open to any questions, and particularly any suggestions on how we might get this information within a very short timeframe. Thank you.
Rita Mangione-Smith: Thank you all very much. I think we are going to ask some of the people here to move over to the side so we can get the authors all up here for our question-and-answer period. It has become really clear, watching these great presentations, that we may be fed a lot more measures to look at, which is great. But with that in mind, we are going to set a deadline of August 24th for feeding us those new measures, because for us to have time to evaluate them based on our criteria and have them ready for discussion at the September meeting, we have to have at least that much time. So I just wanted to put that out there.
Male Voice: This question probably should be addressed to one of our earlier folks, but I'm interested in the interplay between this denominator issue around churning and enrollment and the 11-month continuous-enrollment requirement around managed care data collection. The question to me is: do any of you know of work that moves us beyond how many States, or how many plans, or how many people in States and plans are using these metrics, to what percentage of that population is actually represented in the denominator of the measures being used—if that made any sense. Has anyone even tried to answer that question, that you know of?
Female Voice: Sarah has an answer back there.
Sarah Hudson Scholle: I know that there is research that has looked at this. In California, there is a group that has been doing research to look at measures and the percentage of kids eligible for measures and then how that relates to performance. What they found is that a lot of people are not represented, and that performance on the measures does vary so that children who are enrolled for a shorter period of time have lower performance, and that the enrollment varies by race and ethnicity as well so I can link you to those.
Rita Mangione-Smith: We are going to just go around.
Female Voice: A followup on churning and continuity of care. We actually looked at this, and there is a piece that I have not heard mentioned, and that is continuity of provider. We know about churning on and off Medicaid, moving between Medicaid and the standalone CHIP program, and gaps in coverage. But even within a single payer source like Medicaid, when there is a change in location, or for whatever reason a temporary loss of coverage followed by re-enrollment in a mandatory-assignment State, what happens is that the child may actually get enrolled with another plan or another provider, and there is a discontinuity of providers. This is particularly problematic for disadvantaged populations, particularly children in foster care.
Female Voice: A couple of questions that might be particularly helpful for Econometrica. Obviously, we have gotten a lot of information from various efforts about what State Medicaid and CHIP programs are doing, but those are not the only entities in States collecting information. So I hope the Econometrica paper can capture what State hospital reporting initiatives are going on. One other group of State activities that we have never talked about, and never looked at, that might be relevant is State employee programs, because they cover a lot of children as well. What do State employee programs do in terms of measurement? So those are two other State-driven reporting initiatives that might be worth looking at.
The final, small comment, which is relevant to yesterday, is that we have been focusing, as we should, very much on quality measures, but the legislation, as Denise has pointed out, also talks about use of services. When looking at these other inventories, like the hospital ones, several States are reporting utilization of hospital services more than quality measures, so I did not know if that would be useful as well, because the legislation does get at that. And certainly, when we worked on utilization of hospital services in Florida, we found big differences between community hospitals and children's hospitals. That, again, helps inform parents: when they see that a hospital has an X because it is not reporting—because it only has 25 cases of pediatric tumors—that might tell a parent they might not want to take their child with a brain tumor to that hospital.
Male Voice: Our mandate from AHRQ was to be as comprehensive as possible, within the limits of time, in seeing what measures are out there. We might find some measures that only a few States or organizations are using that might be very attractive. Probably within a week, we will give out our draft search specifications, and I'm sure she will share those, and we are going to try to be as comprehensive as possible, certainly.
Female Voice: And public health, too, because they have the BRFSS (Behavioral Risk Factor Surveillance System).
Female Voice: Thank you.
Female Voice: I can [indiscernible] that we also administered surveys [inaudible] State agencies a few years ago around a number of things they were doing to improve health systems. So we did ask about measurement, and one of the agencies was taking [indiscernible]. It is not at the level of detail that is going to help measure distribution, but I think we can get back to you with some information from that.
Jeffrey Schiff: Xavier?
Xavier Sevilla: Okay, this is a question for Scot and Charlie about the medical home. I was just curious if when you were looking at different measures you found any existing measures that actually rate how the medical home relates to the community, not just the internal process of the medical home and care coordination with other health care providers but actually how it relates to the community.
Scot Sternberg: The only thing that I have really seen that specifically addresses that has been in the National Survey of Children's Health and the National Survey of Children with Special Health Care Needs.
Charles Homer: The Medical Home Index, as you know, which is part of the self-assessment tool from Carl Cooley and Jeanne McAllister, does talk about the extent to which the practice engages with the community at multiple levels. So there are dimensions within that, though it is not in wide use in States.
Male Voice: Are there any surveys that have looked at patients' experience of turnover of care throughout a year? I know that rural practices are kind of blind to what payment systems people are on, because you have to provide the care, and so there are all these gaps and changes that occur that do not necessarily result in gaps in care; sometimes the practices pick those up. Then, just to comment: it is a really strange system, if you look at a place like Norway, how much time and effort we are spending on this issue. If we had a single plan for kids, how many more kids could we cover with the amount of time and energy we are spending on this issue?
Lisa Simpson: Three points, so I'll be brief. First of all, I completely agree with this issue about continuity of provider. There are clearly issues there. There is not a lot of data, but we know for example continuity with one person might be very different than continuity of place, so big important issues there.
On the duration measure, I'm a big proponent of that. It also seems to me that this is really not a choice; it is clearly outlined in the legislation that this is a measure that is to be there. And also, I think it will be important to look at Section 402. I cannot really interpret all the legislative language and what it all means, but there are clear further specifications in there. They talk about adding things that will be required, such as State reports related to eligibility, continuity of coverage, and the like, and then, of course, the MSIS and that $5 million to improve it. I'm sure it does not go far enough, but maybe that will help move us in that direction.
And then the third point: I think the whole issue of family-centered care and patient report is really very important, and Charlie outlined a number of good points. One that was not mentioned, I think, was translation issues. When we are doing survey research, we can see this in other national surveys; it is a little bit of a pink elephant in the room. There may be some translation issues with, for example, Spanish-speaking households; we end up with some pretty counterintuitive findings sometimes, so—
Female Voice: I just wanted to echo Lisa's comment but also take it further, predominantly for Econometrica. In the product description, you are sort of limiting yourself in that first sentence to just Medicaid and SCHIP measures. Not only are there lots of State-based efforts that need to be catalogued—it is a great opportunity for this Medicaid quality effort not to have to replicate, but to use what States are already collecting—but there are also national efforts. The Joint Commission pediatric asthma measures are mandated across the country, so these need to be rolled in, because there are opportunities for cost savings in measuring things in Medicaid.
Female Voice: Rich, I do not know if you want to talk about it, but the Medicaid and CHIP focus is essentially [indiscernible] of the larger contract that you are doing to look basically at what quality measures are out there and who is using them. The focus on Medicaid and CHIP is because Carolyn has guidance on that, and they are doing all of these every month. So if people have ideas—that is okay [inaudible]—about reaching out to send us the data you are talking about, it would be terrific, because the data [inaudible] and we all have a problem, so—okay? Is that [inaudible]? It is not that we are not using this information; it is data [inaudible].

