Session 2: Safety Net Data Collection Strategies (continued)
Cindy DiBiasi: OK, Vickie, thank you. We will be back to you.
In a moment we are going to open up the discussion to questions from our listening audience. There are two ways you can send in your questions. We encourage you to ask your question by phone. If you are already listening by phone, press "*1" to indicate that you have a question. If you are listening through your computer and want to call in with questions, dial 1-888-469-5316 and then press "*1". Again, that number is 1-888-469-5316. When asking your question on the air, please do not use a speakerphone. If you are listening to the audio through your computer, please turn down your computer volume after speaking with the operator. There is a significant time delay between the Web and telephone audio.
If you want to send a question via the Internet, simply click on the button marked "Q&A" on the event window on your computer screen, type in your question and then click the "Send" button. One important thing, if you prefer not to use your name when you communicate with us, that is fine, but we would like to know what state you are from and the name of your department or organization. So please provide those details regardless of the way in which you transmit your question.
As you are formulating your questions or queuing up on the phone lines, I would like to say a few words about our sponsors. The mission of AHRQ is to support and conduct health services research designed to improve the outcomes and quality of healthcare, reduce its cost, address patient safety and medical errors and broaden access to effective services. AHRQ's User Liaison Program serves as a bridge between researchers and state and local policymakers. ULP not only brings research-based information to policymakers so you are better informed, but we also take your questions back to AHRQ's researchers so they are aware of priorities at the state and local levels. Hundreds of state and local officials participate in ULP workshops every year.
The audio conferences are being co-sponsored by the Center for Health Services Financing and Managed Care of the Health Resources and Services Administration, or HRSA. HRSA is the Department of Health and Human Services' access agency. It assures the availability of quality healthcare to low-income, uninsured, isolated, vulnerable and special needs populations. Its mission is to improve and expand access to quality healthcare for all Americans.
I'd like to take a quick moment to thank Rhoda Abrams, the director of HRSA's Center for Health Services Financing and Managed Care. Rhoda has been instrumental in helping to develop and produce these safety net products.
We'd appreciate any feedback you have on this Web-assisted audio conference and at the end of today's broadcast a brief evaluation form will appear on your screen. There are easy-to-follow instructions included on how to fill it out. Be sure to take the time to complete the form. For those of you who have been listening by telephone only and not using your computer, we ask that you stay on the line. The operator will ask you to respond to the same evaluation questions using the telephone keypad.
Your comments on this audio conference will provide us with a valuable tool in planning future events that better suit your needs. You could also E-mail your comment to the AHRQ User Liaison Program at firstname.lastname@example.org.
Now for our first question. This one comes from David Keys. He would like to know, "What is considered a good response rate for mail surveys?"
Joel Cantor: Unfortunately my answer is going to be "it depends." I wish I could tell you better than that. It depends on many things. It depends on the population you are surveying. It depends on the degree of scientific strength you want in your estimates. I know I am not satisfying you. Typically, if you are doing a mail survey of a general population and they are not expecting to get the survey, a single mailing will get you a very low response rate. If you are lucky you will get 10%. It is virtually impossible to say you represent anything when your response rate is that low. A rule of thumb is that each successive mailing adds roughly half again the initial rate. So with the example I gave, a second mailing will get you up to about 15%. Still not good enough. To really get into the range of a representative sample, typically in mail surveys of general populations, you would also need to do some telephone follow-up, or perhaps offer financial or other kinds of incentives to get folks to respond. I think you would want to get your response rate at or above 50%. There is no general consensus in the field on what is adequate. One standard is what a scientific peer-reviewed journal would accept. I rarely see anything published under 50%; response rates of 60% or 65% are more common. I hope that helps; at least that is the best I can do.
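Joel's rule of thumb can be sketched as a quick back-of-envelope calculation. This is only an illustration: the 10% base rate and the half-again increment come from the example above and are not fixed constants.

```python
# Illustrative sketch of the mail-survey rule of thumb described above:
# a single unannounced mailing might yield ~10%, and each follow-up
# mailing adds roughly half again the initial rate.

def projected_response_rate(base_rate, mailings):
    """Cumulative response rate after a given number of mailings."""
    rate = base_rate
    for _ in range(mailings - 1):
        rate += base_rate / 2  # each follow-up adds ~half the initial yield
    return rate

print(round(projected_response_rate(0.10, 1), 2))  # 0.1  (first mailing alone)
print(round(projected_response_rate(0.10, 2), 2))  # 0.15 (with one follow-up)
```

Even several mailings leave the projection well below the 50% threshold Joel mentions, which is why telephone follow-up or incentives are usually needed on top of repeated mailings.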
Cindy DiBiasi: We have a caller. Brenda from Illinois is on the phone. Hello?
Brenda: Yes. Hi. My question is there was mention made of the importance of receiving training for interviewing skills. How important is it for individuals to acquire training on analyzing data prior to attempting to do so?
Joel Cantor: That is a great question. There are software packages available that require very little training to analyze data. Those present grave risks for the novice data analyst, because it is fairly easy to do the analysis, get the wrong answer, and not know it. I would strongly recommend that survey developers use experienced analysts. That doesn't mean you need to find a Ph.D. statistician. But go to your local university; there may be graduate students with sufficient experience, and they generally will work fairly cheaply. It does take some training and experience to do the data analysis.
Brenda: Thank you.
Cindy DiBiasi: Tim, do you want to add something?
Timothy Clouse: Yes. It has been my experience that the packages are easy to use and will give you all sorts of wonderful numbers. The difficulty arises in trying to explain to somebody else what those numbers mean. Like Joel said, that is where your graduate student intern becomes very useful: they need this for their own training, so they become a very good source of labor.
Vickie Gates: Cindy, I would also like to say that we really do encourage folks to use the unique resources that are available, and Lynn Blewett's State Health Access Data Assistance Center is certainly one of those resources. For all of the states that we have worked with in the HRSA program, one of our first recommendations is to take advantage of this unique resource. Lynn may want to comment on some of the things they have been able to do in helping people do a higher level of analysis and know that they have a much more credible product.
Lynn Blewett: The State Health Access Data Assistance Center was funded by the Robert Wood Johnson Foundation and we are funded to provide technical assistance and policy analysis to states who are looking at surveys and developing surveys. So we can help you with your survey design or your sampling strategy and provide some technical input as you are developing your plans.
Cindy DiBiasi: Lynn, how do they get in touch with you?
Lynn Blewett: Well, they can, I think the contact information is on my slides, but it is www.shadac.org.
Cindy DiBiasi: OK. A caller wants to know, "Can anyone in the group discuss trends in the area of the underinsured? Are you starting to monitor underinsurance and would you consider Medicare's lack of drug coverage an underinsurance issue?"
Joel Cantor: Lynn? (Laughter.) There have been a couple of good studies of underinsurance, one by Pamela Short; I guess she has updated it at least once. It is a difficult thing to define. It can get technically complex. To my knowledge, and maybe some of the other panelists know differently, there is no standard, generally accepted way of measuring underinsurance. It is in the eye of the beholder in some ways. To answer the last part of the question I think probably I will speak for myself. Certainly lack of prescription drug coverage for a big segment of the elderly population is a huge underinsurance problem.
Lynn Blewett: I think there is more interest in trying to define and measure underinsurance. There are three different ways to look at it. One is what we call the economic approach: how much do you pay out of pocket for your health insurance coverage? As Joel said, that is what Pamela Short has looked at, and the threshold there is almost subjective in terms of what value you select. But if you pay 10% or 20% or more of your income out of pocket for coverage or services, that may be considered underinsurance. So that is one way to look at it, if you have income information and out-of-pocket cost information. Again, it is very difficult to get that level of detail in a survey.
Another approach is to look at what benefits are covered. So Medicare's lack of prescription drug coverage could be considered underinsurance. A third approach is to ask people: is there something you needed coverage for that you weren't able to get? That is more of an attitudinal measure of people's perceptions: do you have coverage for needed care or not? So there are three different ways to approach underinsurance. As Joel said, it is very difficult to measure and monitor, but as employers shift more costs to employees in terms of out-of-pocket co-pays, deductibles, and premiums, I think there is going to be more research interest in developing more refined measurements.
Cindy DiBiasi: And a question on the uninsured. Shari Isaac from Denver Health wants to know, "What is the standard definition being used for uninsured? Is it not having insurance for six months or twelve months? Where do we draw the line on uninsured?"
Joel Cantor: Well, if this were a videoconference you would see us all rolling our eyes. This is a perennial problem. Different surveys measure uninsurance in different ways, and it gets technical quickly, although I think the way most people think about it and refer to it is at a given point in time, last month or today, how many people lack coverage. The main national survey that tracks this annually, the Current Population Survey, measures it a little bit differently. But if you are doing your own survey, I would recommend using the point-in-time measure. You might then also ask questions about when you last had coverage, to be able to measure duration, because obviously it is a bigger problem for people who are uninsured for longer.
Cindy DiBiasi: Tim?
Timothy Clouse: A second option, which is used by national health examination surveys, is actually to ask two questions. First, do you currently have health insurance? And second, did you have health insurance at any time within the preceding twelve months?
Cindy DiBiasi: Lynn?
Lynn Blewett: SHADAC has a tool that might be useful, which summarizes the different ways of asking about health insurance coverage; we look at the CPS and the different ways to do that and make recommendations. We call them our survey guidelines. We have one on how to ask income questions, one on how to ask about health insurance coverage and, I can't think of the third one. Those are available online and I think might be useful as you are exploring these different tools and the pros and cons of asking different questions.
Cindy DiBiasi: I am just going to repeat your Website since I know what is going to happen is we are going to be inundated with calls saying how to get in touch with her. It is www.shadac.org.
A question from Laurie Olson. She said, "Lynn mentioned that 37 states have done surveys. Is Nevada one of them?"
Lynn Blewett: I think Nevada is not one of them. I don't have the list in front of me, but I am pretty sure Nevada is not one of those states.
Cindy DiBiasi: OK. A question from William Mogg. "Safety net providers have limited ability to cost shift to the uninsured population. What do you think about modifying financial statistics to measure safety net providers' ability to cost shift?"
Timothy Clouse: Well, cost shift is not really a good term in this context, because it implies that you are basically jacking up your billing rates to people who have insurance, which is really not what you should be doing. In theory, what you should be doing is getting access to a federal or state grant program that would pay for that. In fact, there is a HRSA third-party reimbursement Website which covers some of this; the focus there is on trying to make sure that HRSA-funded grantees are maximizing use of their grants. The Website for that is www.hrsa.gov/tpr.
Cindy DiBiasi: Joel?
Joel Cantor: One thing I would add is that it may be useful to present some of what I will call the markers of the ability to cover one's costs. I won't call it cost shifting, for the reasons Tim mentioned, but payer mix, simply put, can be very useful. What proportion of a given provider's revenue is from private sources? How much from Medicaid? How much from Medicare, and how much from self-pay or the uninsured? That speaks volumes about the ability of the provider to cover its costs with its revenue stream, which I think is what you are trying to get at.
Cindy DiBiasi: Question from Nancy Wilbur. She wants to know, "What are some ways to address the issue of people excluded from direct-dial phone surveys due to a lack of phones or use of cell phones only if you are primarily conducting your survey by phone?"
Joel Cantor: Good question. The number of households without landline phones has grown and is becoming a problem. There is also the problem of people using technology such as answering machines and caller ID to screen out calls, so response rates are going down. It is very difficult. To deal with the problem of undercoverage by phone, there are really two techniques. One is to supplement the sample with in-person interviews. Typically what is done is to select census tracts with low telephone coverage, that is, tracts where perhaps 10% or 15% of households don't have phones. Those would be considered high non-phone tracts. Interviewers are sent in to enumerate the households, draw a sample of them, knock on doors, and conduct interviews.
We are conducting a survey right now in New Brunswick, New Jersey where we are using that technique. We are actually enumerating the households, recruiting the families to participate and then handing them our own cell phone so that they can do the interview by phone like the other 90% of the sample.
Cindy DiBiasi: What is the advantage of that?
Joel Cantor: The advantage is you are using the same methodology to collect the data so you don't end up with a method-effect bias.
Cindy DiBiasi: OK.
Joel Cantor: You also can use the computerized interviewing technology without carrying laptops around in very poor neighborhoods with high crime rates. So there are some real advantages to using the cell phone method.
The second methodology involves conducting a telephone-only sample, but then asking, as part of your questionnaire, for a telephone coverage history. In other words, measure it for each household in your sample. Obviously they have a phone on the day of the survey, but did they not have a phone last month? For how many months in the last year did they not have a phone? Then statisticians can use weighting strategies, effectively up-weighting households that have a history of lacking phones, to balance the survey and reduce the telephone coverage bias. It turns out that households that lacked a phone last month but have a phone this month are a lot like households that lack a phone on the day of the survey. So the weighting strategy is effective as a bias reduction strategy.
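The up-weighting idea can be sketched in simplified form. This is a hypothetical illustration, not the actual procedure a survey statistician would use: the 8% benchmark, the sample counts, and the `adjusted_weights` helper are invented for the example, and real surveys would use more refined post-stratification.

```python
# Minimal sketch of the up-weighting idea described above. Suppose an
# external benchmark says 8% of households lack phone service at a point
# in time, but a phone-only sample observes 0% by construction. Households
# reporting a recent interruption in phone service serve as proxies for
# non-phone households and are weighted up so their weighted share matches
# the benchmark. All figures here are hypothetical.

def adjusted_weights(had_interruption, target_share):
    """Give interruption-history households a larger weight so their
    weighted share equals the benchmark share of non-phone households."""
    n_int = sum(had_interruption)
    n_other = len(had_interruption) - n_int
    # Solve (n_int * w) / (n_int * w + n_other) = target_share for w:
    w = target_share * n_other / ((1 - target_share) * n_int)
    return [w if flag else 1.0 for flag in had_interruption]

# 100 households, 5 of which report a recent gap in phone service;
# benchmark says 8% of households lack phones at a point in time.
flags = [True] * 5 + [False] * 95
weights = adjusted_weights(flags, 0.08)
share = sum(w for w, f in zip(weights, flags) if f) / sum(weights)
print(round(share, 2))  # 0.08 — weighted share now matches the benchmark
```

The point of the sketch is only the mechanism: households with an interruption history carry a weight above 1.0, which pulls the weighted estimates toward what a sample including non-phone households would look like.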
Cindy DiBiasi: From a pure consumer point of view here, I can't imagine, with this overabundance of telemarketers and everything else, that as soon as somebody picks up the phone and hears a voice they don't recognize, they don't either hang up or say "I'm sorry," or they are on the new Do Not Call Registry, which is blocking all of this. It must be very difficult now.
Joel Cantor: The Do Not Call Registries don't block legitimate surveys. They block telemarketing. So that is not a problem. But you are right, there is a problem of people screening calls and being very reluctant to take calls from strangers. This is especially true in the big media market. Response rates in places like New York, Philadelphia, L.A. are really abysmal. They are much better in places like North Dakota and so on where there is just not the same inundation of telemarketers. Perhaps folks in North Dakota would disagree with me, I'm not sure.
But there are a number of things that survey sponsors can do to minimize this problem. One is if the survey is sponsored by a government agency, that can help or if you can get a letter of endorsement from a government agency that legitimizes the survey. A university-based study generally gets better response rates than a no-name commercial vendor survey or even a brand name commercial vendor survey. Sending out letters in advance of calling can help. As I mentioned earlier, paying people even a modest incentive tells them that you are serious and you take them seriously. That can help your response rate.
Cindy DiBiasi: Tim?
Timothy Clouse: Just a comment on sending it out on government letterhead: depending upon the population you are surveying, they may view the government, regardless of level, as all being the same, and if they see something on government letterhead, that may have some negative connotations.
Cindy DiBiasi: I was going to ask you, does that increase your chances that it is going to be answered or decrease them?
Timothy Clouse: It depends on the population you are surveying. In my experience, I have had it both ways. Like Joel said, I think for most people, if it comes from the University of South Florida or whatever, that is a little more neutral, a little less threatening, and might work better. That said, my experience is that public sponsorship actually helps. There are populations, for example immigrants who may not be here legally, who would really be concerned, but they are going to be concerned regardless. In general, government endorsement helps, I think, as a general rule.
Cindy DiBiasi: Lynn, why haven't we heard more about the State and Local Area Integrated Telephone Survey called SLAITS? When is that data going to be released?
Lynn Blewett: Well, that survey was initially a survey of children with special health care needs, funded by the Maternal and Child Health Bureau in the Department of Health and Human Services. Another federal agency then funded an additional part of that survey to look at health insurance coverage for children in general. So it was targeted at one population, and they added a different funding source to cover children without special health needs, so I think it has been a complicated survey that has taken a while. We expect the data to be released within the next month or so. SHADAC is working with the SLAITS people to try to make it more visible and give people access to the data and information, because it will provide some new estimates of uninsured children for state and local areas.
It is a very interesting mechanism because it is not a survey that has ongoing, consistent funding every year like the CPS or MEPS. It is what they call a survey module: if they can find agencies that are interested in funding it, they will be able to field the survey. But it is not an ongoing (unclear), which may be related to why more people don't know about it. This is the first time that this information will be released.
Cindy DiBiasi: OK. A question from Beth Baliot. "We were reminded again how important data quality is as you present it to the community to gain interest and support. What factors should we pay attention to in order to protect ourselves from folks questioning quality?"
Vickie Gates: Well, I think we could all make a couple of comments about that, but one of the things that is really key is the source of the data: thinking through who actually is going to be responsible for the data, the quality of the provider you have worked with to produce it, and your ability to present it as objective and neutral. So if you are in fact an advocate on an issue, it is even more important that you work with a contractor who has credibility and who can bring that objectivity and neutrality to the issue.
It also helps sometimes, when you are working with policymakers, to think about who among their advisors has the time and energy, and is worth the investment of your own time and energy, for really going through where the data came from, how it was put together, and what it was designed to do.
The other thing that I think I want to reiterate is worry about many of these issues on the front end, not the back end. So as you begin to contemplate what the questions are and what the issues are that you are trying to design data for, that is when in fact you want to go and think very carefully about who will be part of that process, how do they understand the issue, how do you get them involved? It is sometimes more difficult. It will take you a little longer, but when it comes to credibility issues I believe it will pay off for you.
Cindy DiBiasi: When it comes to being realistic about surveys and Joel I have a feeling I am going to get a "that depends" answer to this question, but is there a range of how much time you should expect it to take and how much money you should expect it to cost? Just to be realistic, you are saying think it through before you start, but experience is often our best teacher and often we don't know what is going to happen once we are in the process.
Joel Cantor: Cindy, the answer is it depends. (Laughter.) Another obnoxious answer is make your best estimate and then double it. The truth is, of course, it depends on many factors. If you are, for example, trying to conduct a survey in a local area, say 600 interviews at 30 minutes each, these are the kinds of parameters we are talking about in these sorts of surveys, you are talking about spending perhaps $100,000 just to have a vendor collect the data for you. That doesn't count the time you put in developing the questionnaire, which should be counted in months, not weeks, or the time you spend testing and fielding. One of the best ways to get a decent response rate is to keep your survey in the field for a long time, so that you don't go back to people repeatedly in one week; if you do, they will turn you down. If they are reluctant to participate because they are busy, don't call them again for three weeks; then maybe they won't be so busy. So our surveys generally stay in the field between four and six months. Then you need several months to clean and prepare the data, another several months to do the analysis, and then spend as much time presenting and publishing the findings as you possibly can. So it is a cobbled-together answer, but those are the general parameters.
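Tallying the rough phase lengths Joel mentions gives a back-of-envelope project length. The specific month values below are illustrative midpoints, not his exact figures:

```python
# Back-of-envelope tally of the survey project phases described above
# (all durations in months; illustrative midpoints only).
phases = {
    "questionnaire development": 3,   # "months, not weeks"
    "testing and fielding": 5,        # surveys stay in the field 4-6 months
    "data cleaning and preparation": 3,
    "analysis": 3,
    "presentation and publication": 4,  # as much time as you can give it
}
total = sum(phases.values())
print(f"Estimated project length: ~{total} months")  # ~18 months
```

Even with optimistic assumptions, the tally lands around a year and a half, which is consistent with the "make your best estimate and then double it" advice.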
Cindy DiBiasi: Robin?
Robin Weinick: I want to ask a follow-up question for both Joel and Lynn. Given the long time horizon for collecting data, and particularly when you look at some of these national surveys you have talked about, uninsurance estimates can often come from data that are a year or two or even three old. Given that, how do we make these data really relevant to policymakers today, since they need the most current information they can possibly get?
Lynn Blewett: That is a very good question. The Current Population Survey is released every fall and is probably the most up to date, and that is a one-year lag. So that is kind of what you are having to deal with. The HRSA state planning grant states that have conducted surveys have been under a timeframe requiring them to get their surveys in the field and the analysis done within a year, so that data is really timely and relevant; having that sort of time pressure has actually helped make the data more relevant. So if you are able to do your own survey, and you have a tight timeframe, enough people helping you to get it done, and your existing resources put to use, you can get more timely information. But most of the national data is 18 months out. It is just something that you sort of have to live with.
Joel Cantor: I think that is the best you can do. Survey collections such as the Current Population Survey and the Behavioral Risk Factor Surveillance System surveys, which are sponsored by the Centers for Disease Control and Prevention and conducted by the states, are done routinely. They are about as well-oiled a machine as there is going to be, so you are never going to beat their timeframes. Don't think that doing your own survey is a way of shortcutting the timeframes for available data.
Cindy DiBiasi: You are possibly sacrificing the quality of the results, right?
Joel Cantor: That is right.
Cindy DiBiasi: Tim?
Timothy Clouse: I will just put on my economist hat for a moment and say that when you are dealing with these types of issues, you are dealing with time series data, and certainly they do change over time. But for policymaking purposes, they don't change very much. Government is basically kind of a blunt instrument, so the fact that the data is a year old is probably, unfortunately, close enough for government work.
Cindy DiBiasi: So that is where that phrase came from. (Laughs.) Joel, a question for you. "Are there good sources of information about the care that is available in the local safety net?"
Joel Cantor: Sure. We have talked a lot about general population surveys, family surveys, household surveys. Tim talked about financial data from healthcare providers; of course that is one important piece of understanding the safety net. There are some data sources available about what resources exist in a local community. For the HRSA-funded community health centers, there is the UDS data system about the federally qualified health (unclear). The American Hospital Association conducts an annual survey which collects information about services available through hospitals, including emergency departments and outpatient departments. In the data book, some of these measures are summarized, aggregated up to the local level. There is also physician data available through the Area Resource File and through the AMA. So there are some existing sources.
Again, my rule of thumb is to use existing data whenever possible. But when that is not possible, you can construct a survey of providers. Individual private practice physicians are notoriously difficult to survey; it is very hard to get a good response rate. They are very busy folks, and they are inundated by pharmaceutical companies and others with marketing surveys and other things.
Institutional providers, if you can motivate them to work with you, might be a more ready source of information for the safety nets since many safety net services are delivered through institutions and my chapter in the tool kit does go into a little bit about what kind of information you can get from surveys of institutional providers. Things like understanding enabling services, physical plant capacity, and other sorts of measures that can be very valuable in assessing capacity.
Cindy DiBiasi: We talk so much about people being busy and everything else. How long does a typical survey, if there is such a thing as a typical survey, take to actually fill out, or to answer the questions posed to you?
Joel Cantor: You can collect an enormous amount of information in a 15-minute interview. You would be really surprised how much data you can collect. One rule of thumb is you can ask six or seven questions in a minute. It depends on the question, of course. It depends on the audience and so on. So do the math. In 15 minutes you can ask a lot of questions. Some surveys, such as the National Health Interview Survey, such as the Medical Expenditure Panel Survey, use much longer instruments. There is a real depth of information there.
For telephone surveys, typically they run between 20 and 40 minutes. After that, people can't stand to be on the phone and you really lose data quality.
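Joel's "do the math" works out as follows. This is just his per-minute rule of thumb applied to the interview lengths mentioned; both figures are rough:

```python
# At roughly 6-7 questions per minute, even a short interview covers a
# lot of ground. Minutes and per-minute rates are the rough figures
# mentioned above, not hard limits.
for minutes in (15, 20, 40):
    low, high = 6 * minutes, 7 * minutes
    print(f"{minutes}-minute interview: roughly {low}-{high} questions")
```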
Cindy DiBiasi: That is a long time.
Joel Cantor: That is a long time, but people generally seem willing. In our surveys, which run that long, people don't hang up on us. They complete the interview. If you can motivate them, it is an important topic, a subject they want to talk about, people like to talk about themselves, and they are willing to stay on.
Cindy DiBiasi: Do you explain upfront why you are doing this? The importance of it and putting it into some context?
Joel Cantor: Yes and I think as researchers we are ethically bound to tell people what we are doing, why we are doing it, how long it will take and then ask their permission to proceed.
Cindy DiBiasi: Robin?
Robin Weinick: I actually wanted to come back to a slightly bigger picture. We have been talking a lot about the technical details of data collection and how to really get into the nitty-gritty. I wanted to pose a question for Vickie, which is: the policymaking process is so complex. How can people actually start to use data-based information, collected from the kinds of sources Tim and Lynn and Joel have been talking about today, to really inform that process?
Vickie Gates: Part of what you do, I think, is think about where your policy levers are in a given environment. What people often forget is that your levers are not simply the political leadership; they are often also the administrative leadership that is part of problem solving. They are, in many cases, groups that may have a specific interest in an issue and may be a good voice. So one of the classic things is: if you have good data, if you have information that you feel will add value to the policy process, then begin to catalog the environment, the players, and the way they interact. Then begin to think about what they need and how they like to access information, because it is going to vary. There are certain types of tools that you will want to develop for legislators, other types for administrative leaders, and other types for key stakeholder groups around an issue.
I think Joel made a comment about, when you think about putting out information on a survey, thinking about white papers and one-page recaps. It is the same situation. Think about your audience, segment it into groups, think about the way they like to access information and how to get that information across, and be willing to use a diversity of tools.
Cindy DiBiasi: A question from Kim Barrick, who wants to know how we can access a copy of the full worksheet. I believe, Tim, that is your worksheet.
Robin Weinick: Yes it is, but let me answer that question. (Laughter.) All three of the chapters we have been discussing today from the tool kit are available on our Website, which is www.ahrq.gov/data/safetynet. That is www.ahrq.gov/data/safetynet. What you will find there right now are copies of these three papers as well as a few others. You will also find the interactive version of the worksheet that Tim described.
Cindy DiBiasi: We have Rene from Arizona on the telephone. Hello?
Rene: Hi. We have a question. What advice can you give about connecting with human subjects committees and institutional review boards before doing surveys?
Joel Cantor: You have to. (Laughter.) Of course it depends on your institutional setting. But typically each IRB will have a formal process that you have to go through and send them your research protocol. They will often want to look at your questionnaire. Am I answering your question?
Lynn Blewett: Yes. I was thinking this is why it is good to partner with a university that has that infrastructure already in place. Most local universities will have an institutional review board. Many hospitals will have an institutional review board or health plans who do research will have a research arm. If you are not familiar with those, you need to find one. I think there are actually some health departments that have their own review structures as well. It is an important part of the process and can provide you with a lot of information that you will need to make sure that you meet the privacy and consent requirements.