This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work. Persons with disabilities having difficulty accessing this information should contact us at: https://info.ahrq.gov. Let us know the nature of the problem, the Web address of what you want, and your contact information.
Please go to www.ahrq.gov for current information.
Local Data Collection Strategies for Safety Net Assessment (continued)
How to Ask It
Principles for Writing and Asking Questions
Preparing questionnaires is labor intensive, but the extra effort will pay off in the long run. Box 8 outlines some basic rules of thumb for questionnaire writing. Questionnaire writing is a complex skill, and this chapter can outline only general principles. If many new questions are needed for a local survey, it is advisable for its developers to refer to more detailed sources of guidance (e.g., Fink, 1995b or Sudman and Bradburn, 1988). Many of the rules in Box 8 are intuitive, and some have been discussed throughout the chapter (e.g., using previously tested questions and sticking to your survey objectives). Some of the other rules require more explanation.
Box 8. Twelve Rules for Question Writing
1. Use previously tested questions whenever possible.
2. Avoid asking unnecessary questions.
3. Ask only concrete questions.
4. Avoid asking about more than one concept at a time.
5. Avoid using double negatives.
6. Keep questions succinct.
7. Avoid biased wording.
8. Use question formats that are appropriate and consistent.
9. If items use the same response categories, group them together.
10. Use open-ended questions sparingly.
11. Avoid long recall periods.
12. Use language appropriate for the survey population.
Asking about multiple concepts in survey questions (rule 4) is a common pitfall. An example can illustrate this point:
- Do you have difficulty understanding or speaking to your doctor?
Somewhat better questions:
- Does your doctor communicate to you clearly and in ways you can understand?
- Do you feel comfortable talking to your doctor and asking your doctor questions
about your health or medical condition?
Rule 6 asks you to keep question wording to the point. Long, wordy questions can fatigue or confuse respondents, potentially reducing the quality of the data collected. The questions in the example above suffer from wordiness; here is improved wording, with the redundant phrasing removed:
Still better questions:
- Does your doctor communicate to you in ways you can understand?
- Do you feel comfortable talking to your doctor about your health or medical condition?
Rule 7 can be applied to these questions as well. Note that they are one-sided, which may signal to respondents the answer you are expecting. A simple fix might be:
Even better questions:
- Does your doctor communicate to you in ways you can understand, or are there problems in
the way he or she communicates to you?
- Do you feel uncomfortable talking to your doctor about your health or medical condition,
or are you comfortable talking to him or her?
As Rule 8 implies, the format of a question can affect its value. Simple yes/no questions may be easy to answer, but they do not permit respondents to express degrees of feeling and may limit the variability in answers. Many respondents will be reluctant to criticize their doctor, so most would likely answer "yes" to the questions in the example in this section. Allowing respondents to answer on a graded or ordinal scale is likely to yield much more useful information:
- Would you say that your doctor communicates with you very clearly, somewhat clearly,
somewhat unclearly, or very unclearly?
- When talking with your doctor about your health or medical condition, do you feel
very comfortable, somewhat comfortable, somewhat uncomfortable, or very uncomfortable?
The last three rules outline some other key principles. Open-ended questions can be of great value, as they can reveal what respondents are thinking without presupposing their answers. However, it takes a great deal of interviewing skill to elicit valuable responses to open-ended questions without leading respondents to particular answers. Moreover, except where sample sizes are very small, classifying or coding open-ended responses into useful categories can be very time consuming. If survey developers are unsure what closed categories to use for a given question, it may be useful to refine the question using small pre-test samples or focus groups, then develop closed-ended responses for the larger survey.
The problems with long recall periods are intuitive: respondents may over- or underreport their experiences or may simply refuse to answer if they cannot accurately recall their experiences. When deciding how long a recall period to use when asking about experiences with care (e.g., number of doctor visits or asthma attacks), consider the saliency of the events. For example, the recall period for something as serious as a broken bone can be a good deal longer than for visits to the school nurse.
Following Rule 12 can be the most difficult, especially if a survey population is diverse. Making questions understandable means using common language instead of health system or medical jargon, and it means asking questions in the languages used by non-English-speaking populations. Keep in mind that even populations that share a common language may use it differently. For instance, Mexican, Puerto Rican, and Central American populations may interpret a given Spanish-language question very differently. Likewise, different subgroups of English-speaking populations may use different idioms. The key to getting the language right is careful testing. Also, when translating questionnaires into other languages, it is helpful to employ professional translators and to have a second person back-translate the questionnaire into English to test its understandability and its comparability with the intent of the English version.
Question Development and Pre-testing
As discussed, pre-testing a questionnaire is an essential part of survey development. Even if a local survey has adopted a previously used and tested questionnaire, a great deal can be learned from pre-testing. Such testing can reveal errors in question wording and question skip patterns. It can also document the understandability of the questionnaire for local populations, and measure the survey length. Pre-tests need not involve large samples, as a huge amount can be learned from 10 or 20 interviews. More important than conducting a large number of pre-test cases is conducting several pre-tests, especially when many survey questions are new, the questionnaire is complex, or revisions are made following earlier pre-tests. Three or even four waves of pre-testing are not uncommon in survey efforts.
Before even getting to a testable questionnaire, it is often advisable to do focus groups or laboratory or cognitive testing of questions. These techniques can be more or less intensive, but when surveys involve new or complex questions, this sort of preliminary research can be of great value. Describing these techniques is beyond the scope of this chapter, but good guidance is available from other sources (e.g., Morgan and Krueger, 1998).
Writing a good questionnaire is only the first step in assuring successful interviews. Other important steps include preparing materials for respondents and training and rehearsing interviewers. Exactly what materials are needed and how to best prepare interviewers depend on the survey mode, i.e., whether it is to be conducted via a mailed questionnaire, in-person, or over the telephone. This section provides some general guidance for conducting interviews, but it would be wise for most survey developers to consult other, more detailed sources as they proceed with survey planning (e.g., Frey, 1995).
The initial approach to a potential respondent is perhaps the most important determinant of whether he or she will agree to participate. It is becoming increasingly difficult to rise above the noise of telephone and direct mail marketing in conducting research surveys, but a few simple measures can help:
- The survey should be sponsored or endorsed by a trusted entity. People are more likely to respond to surveys sponsored by government agencies, universities, or other well-known and trusted organizations. If direct sponsorship or endorsement by such an agency is not possible, mentioning that the sponsor is a not-for-profit entity (if appropriate) may help.
- Early on, make it clear that nothing is being sold and that the survey is not part of a solicitation.
- Make it clear that the survey is confidential (i.e., the sponsor will not reveal respondents' identities) or that participants will be anonymous (i.e., even the sponsor will not know the identity of respondents). It may help to inform respondents that only group results will be published, not individual responses, and that their decision whether or not to participate will not affect their health services or program enrollment. Always be truthful.
- Let the respondent know what is involved in participation, such as the expected length of the survey. Be sure to tell respondents that their participation is voluntary but that it will be a major contribution to an important cause. This can help establish rapport with the respondent, and it meets the sponsor's ethical obligation to be honest.
- If possible, mail a letter in advance of the survey visit or call, explaining the purpose and context of the survey. Provide a telephone number to call for information about the survey.
- Also, if possible, provide response incentives, such as a small cash payment (e.g., $10) or a chance at winning a valued prize such as a gift certificate.
- At the outset of the interview, clearly describe the purpose of the study, provide the information recommended in the bullets above, and ask the respondent's permission to proceed. Be willing to schedule an appointment, call back, or revisit for the interview if the respondent so wishes.
- Whenever possible, offer the survey in the native language of the study population.
- Prepare answers for expected questions (e.g., purpose of the survey, nature of the sponsor, how confidentiality will be assured, who will see the responses, and how the results will be used).
- If the respondent clearly states that he or she does not want to participate, say thank you, end the contact, and do not call back. This is your ethical obligation.
- If the respondent is reluctant to participate, but does not outright refuse, then wait a week or two and contact the respondent again. Perhaps you caught them at a bad time. It is a good idea to use your most effective interviewer for such refusal conversions.
In addition to good preparation for the initial approach to potential respondents, solid interviewer preparation can pay dividends in high response rates and good data completeness and quality. Good interviewer preparation starts with the selection of high quality interviewers and involves careful training and supervision.
If a professional survey firm is hired, it will have a staff of trained interviewers, and the firm will take responsibility for study-specific training. When selecting a firm, be sure to ask about the qualifications of the interviewer staff and the length and content of the training. If hiring professionals is not possible, it is vital to approach interviewer selection and training in a businesslike manner, even if the interviewers are volunteers. First, create a job description or written list of interviewer expectations; this should include guidelines for confidentiality and respect for respondents as well as a description of the skills and tasks involved.
Plan a formal interviewer training session. The session should run at least a half-day (longer for complex surveys) and should include:
- A detailed description of the survey and its purposes.
- A detailed walk-through of the questionnaire. Define terms. Answer interviewer questions. It is a good idea to give the interviewers an annotated version of the questionnaire stating the purpose and providing definitions for each question.
- Guidelines for effective interviewing, including how to gently persuade reluctant respondents to participate, how to obtain responses to questions without leading the respondent, how to keep the interview moving without being too pushy, confidentiality, and so forth.
- Have interviewers observe mock interviews and critique them.
- Have interviewers practice interviewing each other.
- Walk through interview scheduling and logistics.
- Demonstrate and practice post-interviewing questionnaire editing, such as cleaning up errors. Provide guidelines for when to re-contact respondents to fill in missing or vague information.
- Discuss supervision.
Finally, experienced interviewer supervisors should be employed if possible. Supervisors should work with interviewers one-on-one to help them hone their skills. They can also sit in on interviews, or listen to taped ones, to monitor interview quality and then make any mid-course corrections that may be needed.
Other Considerations for Survey Development
Mode of Administration
Throughout, this chapter has referred to telephone, in-person, and mailed surveys. Which modality should be adopted depends on the specific demands of a given survey as well as the resources that are available. Box 9 provides some guidance for selecting the right survey mode; other sources can provide more detailed guidance (e.g., Fowler, 1993).
Box 9. Selecting the Survey Modality
- Few resources are available.
- The survey must be done quickly.
    Suggested mode: Telephone or mail
- Only a few people are available to help.
    Suggested mode: Telephone or mail
- The respondent will have to look up answers to many of the questions.
    Suggested mode: Mail or in-person
- The survey is short, exploratory, and response rate is not key.
- The questionnaire is long, includes many open-ended questions, or it is important to probe for good responses.
    Suggested mode: In-person or telephone
- It is important to achieve a high response rate.
- The questionnaire includes many sensitive questions for which respondents may give "socially desirable" responses.
    Suggested mode: Telephone or mail
- The questionnaire is complex, and standardizing interviewing will be difficult.
- Visual aids, such as lists of potential responses, will be very helpful.
Preparing Survey Data, Analyzing Results, and Presenting Findings
Full discussions of data entry, preparation, analysis, and presentation of findings are beyond the scope of this chapter; ample guidance for these tasks can be found in other publications (for example, Fink, 1995a, and Fowler, 1993). A few key points for preparing findings for community-based audiences, however, are worthy of emphasis here.
For the survey novice, the excitement that accompanies the long-awaited end of data collection often gives way to feelings of being overwhelmed at the first look at the raw survey data. Where to start? Today, data analysis software packages, such as SPSS or SAS, are widely available and fairly easy to use. Most analysts would not think of analyzing data without such tools. Gaining access to a computer and analysis software is, then, the place to start. Even small local groups may be able to find a graduate student with access to analysis software who is willing to volunteer some analysis time. Once the data are in the computer, work should proceed in several steps, including data cleaning, variable construction, and documentation; data analysis; and reporting on findings. Each of these topics is touched on below.
Data Cleaning and Editing
Data right out of the interview are not ready to use because interviewers sometimes make errors and respondents sometimes give illogical or conflicting responses. Examining simple frequency distributions of all variables is a good place to start to prepare the data. Examine missing cases and look for mistakes in coding. Sometimes verbatim responses to "Other, Specify" categories of closed-ended questions can be easily recoded into logical groupings (and often into the closed-ended categories in the questionnaire). Other times, missing data can be filled in from other information in the survey. For example, if data on insurance coverage for a child are missing, but a parent holds family coverage, then it is safe to fill in the missing information for the child. Also, if employment data are missing, but a respondent reported having employer-sponsored health coverage in his or her own name, employment can be inferred or imputed.
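The two cleaning steps just described, frequency distributions followed by logical fill-ins, can be sketched in a few lines of code. In practice a package such as SPSS, SAS, or pandas would be used, but the logic is the same; the field names and records below are hypothetical.

```python
from collections import Counter

# Hypothetical raw records: one dict per household member.
records = [
    {"id": 1, "role": "parent", "coverage": "family policy"},
    {"id": 2, "role": "child",  "coverage": None},        # missing response
    {"id": 3, "role": "child",  "coverage": "Medicaid"},
]

# Step 1: a simple frequency distribution of each variable reveals
# missing cases and coding mistakes.
freq = Counter(r["coverage"] for r in records)
print(freq)

# Step 2: logical edit -- if a parent holds family coverage, a child's
# missing coverage can be filled in from that information.
parent_has_family = any(
    r["role"] == "parent" and r["coverage"] == "family policy" for r in records
)
for r in records:
    if r["role"] == "child" and r["coverage"] is None and parent_has_family:
        r["coverage"] = "family policy (imputed from parent)"
```

The same pattern extends to any logical edit: compute the supporting fact first, then fill in only the cases that are truly missing.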
Data editing and cleaning can be quite time consuming, but it can pay dividends by reducing missing data and correcting errors. A few pieces of advice are in order. First, be systematic; clean one variable (or related group of variables) at a time. Second, do not become a fiction writer. It is tempting to invent scenarios in which certain configurations of answers would be possible and to fill in data to match those scenarios. This is dangerous and can lead to results that are not honest. Finally, keep good records, such as a logbook of data edits that includes computer edits or listings of cases changed. This audit trail can be invaluable should analysts want to "undo" edits or replicate them in future surveys.
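One simple way to keep the recommended audit trail is to record every change as it is made, so edits can be reviewed, undone, or replicated later. The sketch below is only illustrative; the logbook layout and case fields are hypothetical.

```python
edit_log = []  # the "logbook": one entry per data edit

def apply_edit(record, field, new_value, reason):
    """Change one field, logging the old value so the edit can be undone."""
    edit_log.append({
        "case_id": record["id"],
        "field": field,
        "old": record[field],
        "new": new_value,
        "reason": reason,
    })
    record[field] = new_value

def undo_last(record):
    """Reverse the most recent edit to this record using the logbook."""
    entry = edit_log.pop()
    record[entry["field"]] = entry["old"]

# Hypothetical case: employment missing, but employer coverage held in
# the respondent's own name, so employment can be inferred (see text).
case = {"id": 17, "employment": None, "coverage": "employer-sponsored (own name)"}
apply_edit(case, "employment", "employed",
           "inferred from employer-sponsored coverage in own name")
```

Because each entry stores the old value and the reason, the log doubles as documentation of the editing rules for future surveys.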
Answers to survey questions do not always produce analysis-ready variables. Insurance coverage is an example of such a variable. Recall that the best way to ask about coverage is to ask about each possible source. This might lead to a half-dozen survey questions or more, covering each possible source. Also, many batteries of questions about coverage ask in whose name the policy is held (the policyholder) and who else in the family is covered by that policy (dependents), leading to still more survey questions. Despite all of these questions, the variable of interest might be a simple three-level categorical variable indicating whether each household member has public coverage (e.g., Medicare or Medicaid), private coverage, or no coverage at all. A set of decision rules is needed to turn the responses to multiple questions into the single variable. For example, many analysts classify individuals with both public and private coverage in the "private" category. Examining publications from national surveys can be very helpful in making decisions in the creation of analysis variables.
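The decision rules described above can be written down explicitly. The sketch below assumes a hypothetical set of yes/no responses, one per coverage source, and collapses them into the three-level variable, placing people with both public and private coverage in the "private" category (the common convention mentioned in the text; a local survey might decide differently).

```python
def coverage_category(responses):
    """Collapse multiple source-of-coverage questions into one variable.

    `responses` maps each asked-about source to True/False. The source
    names are hypothetical; a real questionnaire would have one item
    per possible source, plus policyholder/dependent items.
    """
    public_sources = ("medicare", "medicaid")
    private_sources = ("employer_own_name", "employer_dependent", "direct_purchase")

    has_private = any(responses.get(src, False) for src in private_sources)
    has_public = any(responses.get(src, False) for src in public_sources)

    if has_private:   # both public and private -> "private" under this rule
        return "private"
    if has_public:
        return "public"
    return "uninsured"

print(coverage_category({"medicaid": True, "employer_dependent": True}))  # private
```

Writing the rules as a single function keeps them consistent across the whole file and makes them easy to document in the codebook.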
At each step of the data editing, cleaning, and variable construction process, it is vital to keep detailed documentation. One part of that documentation is the codebook that lists each variable and defines each possible response category (including "don't know," refused, and skipped).
As with every phase of the local survey effort, referring back to the articulated survey objectives is the first step in data analysis. Based on these objectives, the population subgroups that are of greatest interest will be clarified. These will likely include socio-economic and demographic subgroups, defined by, for example, race/ethnicity, sex, age, immigration status, and employment and income groupings. To create classifications of these variables, first observe simple frequency distributions of each (e.g., the number and percentage of persons with the characteristics), then decide on groupings or cut-off points that are both logical (for example, it is wise to have an age group break at 65 years old, the age of eligibility for Medicare for most people) and assure adequate numbers of respondents in each category. As mentioned in the sample size discussion above, there are no hard-and-fast minimum group sizes, but not permitting the smallest subgroup to drop below about 50 cases would be prudent.
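As an illustration of choosing cut-off points and checking group sizes, the sketch below groups hypothetical ages with a deliberate break at 65 and flags any category that falls below the suggested 50-case floor. The cut-offs and the made-up sample are for illustration only.

```python
from collections import Counter

# Hypothetical respondents: five at every age from 0 through 90.
ages = [a for a in range(91) for _ in range(5)]

def age_group(age):
    # Illustrative cut-offs; note the break at 65 (Medicare eligibility).
    if age < 18:
        return "0-17"
    if age < 45:
        return "18-44"
    if age < 65:
        return "45-64"
    return "65+"

counts = Counter(age_group(a) for a in ages)

# Prudent check from the text: flag any category below about 50 cases.
too_small = [group for group, n in counts.items() if n < 50]
print(counts)
print("Groups under 50 cases:", too_small)
```

If a group comes up short, the remedy is usually to merge adjacent categories rather than to report estimates from a thin cell.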
Once the set of "standard" classification variables is created, the next step is to array other population characteristics according to these variables. Categorical variables (e.g., respondent reports of fair or poor health status, respondents with a usual source of care) should be expressed as proportions or percentages, and continuous variables (e.g., number of doctor visits in the past year) can be expressed as means or medians. It is a good idea to create a complete set of tables showing the percentage or average (as the case may be) of each population characteristic for each subgroup defined by the classification variables.
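The table-building step is, at bottom, just counts and percentages; a package such as SPSS, SAS, or pandas would normally produce the cross-tabulations. A minimal sketch, with hypothetical variables and values:

```python
from collections import defaultdict

# Hypothetical analysis file: one classification variable plus one
# categorical characteristic per person.
people = [
    {"race_eth": "Hispanic",           "usual_source_of_care": True},
    {"race_eth": "Hispanic",           "usual_source_of_care": False},
    {"race_eth": "Non-Hispanic black", "usual_source_of_care": True},
    {"race_eth": "Non-Hispanic black", "usual_source_of_care": True},
]

totals = defaultdict(int)
with_usoc = defaultdict(int)
for p in people:
    totals[p["race_eth"]] += 1
    if p["usual_source_of_care"]:
        with_usoc[p["race_eth"]] += 1

# Percent with a usual source of care, by subgroup.
pct = {group: 100 * with_usoc[group] / totals[group] for group in totals}
print(pct)  # {'Hispanic': 50.0, 'Non-Hispanic black': 100.0}
```

Repeating this for each characteristic and each classification variable yields the complete set of tables the text recommends.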
Examining the initial basic tabulations generally gives rise to other questions (e.g., why does the low-income Hispanic population have a higher rate on a given measure than the low-income non-Hispanic black population?). Indeed, the number of questions can proliferate quickly. Control the impulse to dredge your data! Once again, use the original survey project objectives to help set analysis priorities. Also, sample size limitations should be taken into account when deciding which secondary analysis questions to pursue.
Any report of survey findings, whether in oral or written form, should include a few essentials:
- Survey objectives.
- Mode of data collection.
- Sampling strategy.
- Number of respondents and response rate.
- Questionnaire length (in time or number of questions).
- Topics covered and basic demographics of respondents (compared to the population represented, if possible).
Beyond these basics, reports present and interpret the findings.
For oral reports, it is useful to prepare transparencies or slides (with apologies to Kodak, presentations these days generally use computer LCD projectors and presentation graphics software such as Microsoft® PowerPoint®). Box 10 summarizes a few rules of thumb for creating presentations. Beyond these basic tips, make use of the huge array of publications on health access available on the Internet. (Visit, for example, the Kaiser Commission on Medicaid and the Uninsured at http://www.kff.org/ or any of the survey Web sites listed in Tables 1 and 2.)
Box 10. Tips for Successful Survey Presentations
- Keep text density low and white space high; brief text bullets work well.
- Keep text brief; there is no need to use complete sentences.
- Use a large, simple typeface.
- Avoid using clashing patterns in charts; they can be dizzying.
- When presenting tables, do not include more than two or three columns or rows per slide.
- Use colors creatively to convey concepts (e.g., show the same population group in the same color on all slides), not just to make slides pretty.
- Avoid graphic noise. Software packages today come with all manner of background graphics. These can be distracting. Keep it simple but attractive.
- Convey one concept per graphic or table slide.
- Allow one to two minutes per slide for a presentation.
- In addition to projecting the presentation slides on a screen, give each member of the audience a copy so they can follow along and take notes.
Using data for stimulating discussions among local groups does not require advanced statistical analysis. However, questions will unavoidably come up about whether differences among groups are statistically significant. Since surveys are based on samples, summary statistics (e.g., proportions or means) are estimates of these measures in the full population, and estimates are subject to variability. Statistical significance is an indication of whether differences found in samples can be safely extrapolated to entire populations. While it is advisable to conduct statistical tests, not all local groups will have the expertise to do so. In the absence of statistical testing (and even in the presence of such testing!), common sense goes a long way. Take care, for instance, not to extrapolate from small subgroup sizes or from variables with a high proportion of missing data. Being conservative about what findings are presented and highlighted is advisable, as presenting findings supported by thin data will only call the credibility of the analyst into question.
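For groups that do want a statistical test, a standard two-proportion z-test is one common choice for comparing percentages between two subgroups. The sketch below applies the textbook pooled-proportion formula to hypothetical counts; it is a general-purpose illustration, not a method prescribed by this chapter.

```python
import math

def two_proportion_z(success1, n1, success2, n2):
    """z statistic for the difference between two sample proportions,
    using the pooled estimate of the common proportion."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical: 120 of 300 in one subgroup vs. 90 of 300 in another
# report having a usual source of care.
z = two_proportion_z(120, 300, 90, 300)
print(round(z, 2))  # about 2.57; |z| > 1.96 suggests significance at the 5% level
```

Note that this simple formula assumes simple random sampling; complex sample designs (clustering, weighting) inflate variances, which is one more reason to be conservative when interpreting borderline differences.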
The purpose of this chapter is to provide realistic guidance to local groups interested in conducting surveys to assess their local safety-net resources. Survey research methods are highly developed, and a professional discipline has grown up around designing and managing surveys. An expansive survey research industry is accessible to potential survey sponsors that have the resources to purchase professional services. But the level of funding needed to engage the professionals is rarely available to local health access coalitions and others interested in the health care safety net. Thus, this chapter outlines what local groups need to know to get their efforts off on the right foot and avoid common pitfalls. Anyone sponsoring a health access survey will need to use materials other than those provided here. Throughout this chapter, additional sources are provided. If a survey firm can be engaged, sponsors may not need to learn a great deal more than is provided here (although more knowledge is always better!), but groups that plan to conduct their own survey will most certainly need to consult other materials. This chapter focuses on issues that are specific to surveys related to the health care safety net, and among these, local population surveys are emphasized. General survey research methods, not specific to health access surveys, are discussed only briefly, so the rich resources on survey methods available from other publications should be consulted.
This chapter also focuses on helping local survey sponsors create valid and valuable surveys. As discussed, there are many possible sources of bias and inaccuracy in any survey, and great care is needed at every stage to minimize bias and error. A few basic steps can help maximize the value of locally developed surveys. Most importantly, carefully develop a clear set of survey objectives, and use those objectives as a guide throughout the survey process. Second, do not reinvent the wheel; draw from the many excellent sources of survey materials available in the public domain. Third, follow the basic common sense guide to carrying out each step of the survey process provided in this chapter, and draw on the wealth of other resources available in the field as your survey progresses. Surveys can be valuable tools for enhancing the health care safety net, particularly if their sponsors are ready to make the substantial effort required to do their survey right.
The valuable comments of Susan Brownlee, Ph.D., are gratefully acknowledged. The author is solely responsible for any errors or omissions.
Aday LA. Designing and conducting health surveys: a comprehensive guide. San Francisco (CA): Jossey-Bass; 1996.
Berk ML, Albers LA, Schur CL. The growth in the U.S. uninsured population: trends in Hispanic subgroups, 1977 to 1992. Am J Public Health 1996 Apr;86(4):572-6.
Berk ML, Schur CL. A review of national access-to-care surveys. In: Isaacs SL, Knickman JR, editors. To improve health and health care 1997: The Robert Wood Johnson Foundation anthology. San Francisco (CA): Jossey-Bass; 1997. p. 53-77.
Berk ML, Schur CL, Cantor JC. Ability to obtain health care: recent estimates from the Robert Wood Johnson Foundation National Access to Care Survey. Health Aff (Millwood) 1995 Fall;14(3):139-46.
Cantor JC. Asking about access: Challenges for surveys in a changing healthcare environment, by Kasper JD (Discussion). Health Serv Res 1998 Aug;33(3 Part II):761-2.
Fink A. How to analyze survey data. Vol. 8. In: Fink A, editor. The survey kit. Thousand Oaks (CA): Sage Publications; 1995a.
Fink A. How to ask survey questions. Vol. 2. In: Fink A, editor. The survey kit. Thousand Oaks (CA): Sage Publications; 1995b.
Fowler FJ. Survey research methods. Thousand Oaks (CA): Sage; 1993.
Frey JH, Oishi SM. How to conduct interviews by telephone and in person. Vol. 4. In: Fink A, editor. The survey kit. Thousand Oaks (CA): Sage Publications; 1995.
Hayward RA, Bernard AM, Freeman HE, et al. Regular source of ambulatory care and access to health services. Am J Public Health 1991 Apr;81(4):434-8.
Idler EL, Benyamini Y. Self-rated health and mortality: a review of twenty-seven community studies. J Health Soc Behav 1997 Mar;38(1):21-37.
Idler EL, Kasl SV. Self-ratings of health: do they also predict change in functional ability? J Gerontol B Psychol Sci Soc Sci 1995 Nov;50(6):S344-53.
Kuder JM, Levitz GS. Visits to the physician: an evaluation of the usual-source effect. Health Serv Res 1985 Dec;20(5):579-96.
Lewin EL, Altman S, editors. America's health care safety net: intact but endangered. Committee on the Changing Market, Managed Care, and the Future Viability of Safety Net Providers. Washington (DC): National Academy Press; 2000.
Millman ML, editor. Access to health care in America. Washington (DC): National Academy Press; 1993.
Morgan DL, Krueger RA, editors. The focus group kit. Vols. 1-6. Thousand Oaks (CA): Sage; 1998.
National Center for Health Statistics. Clearinghouse on Health Indexes. Available at:
http://www.cdc.gov/nchs/products/pubs/pubd/other/clrhouse/clrhouse.htm. Accessed April 8, 2003.
State Health Access Data Center. Questions to ask your survey vendor. Minneapolis: University of Minnesota. Available at: http://www.shadac.org/publications/docs/q_survy_v.pdf. Accessed April 8, 2003.
Sudman S, Bradburn NM. Asking questions: a practical guide to questionnaire design. San Francisco (CA): Jossey-Bass; 1988.
Swartz K. Interpreting the estimates from four national surveys of the number of people without health insurance. J Econ Social Measurement 1986 Oct;14(3):233-42.
Ware JE. The status of health assessment 1994. Ann Rev Public Health 1995;16:327-54.
Ware JE, Brook RH, Davies AR, et al. Choosing measures of health status for individuals in general populations. Am J Public Health 1981 June;71(6):620-5.
Zuvekas SH, Weinick RM. Changes in access to care, 1977-1996: the role of health insurance. Health Serv Res 1999 Apr;34(1):271-9.
Current as of September 2003