U.S. Department of Health and Human Services www.hhs.gov
Agency for Healthcare Research and Quality www.ahrq.gov

CAHPS® Hospital Survey

This information is for reference purposes only. It was current when produced and may now be outdated. Archive material is no longer maintained, and some links may not work. Persons with disabilities having difficulty accessing this information should contact us at: https://info.ahrq.gov. Let us know the nature of the problem, the Web address of what you want, and your contact information.

Please go to www.ahrq.gov for current information.

Researchers describe the development and testing of the CAHPS® Hospital Survey

The Consumer Assessment of Healthcare Providers and Systems (CAHPS®) program was first launched by the Agency for Healthcare Research and Quality in October 1995. The CAHPS® program is funded and administered by AHRQ (HS00092), which works closely with the public and private organizations that comprise the CAHPS® Consortium. The program develops and supports the use of a comprehensive and evolving family of standardized surveys that ask consumers and patients to report on and evaluate their experiences with health care. For more information, please visit the CAHPS® Web site at https://www.cahps.ahrq.gov

The CAHPS® Hospital survey (H-CAHPS) was designed to enable patients, physicians, and payers to compare quality among hospitals and to facilitate quality improvement in hospitals. A special December 2005 issue of Health Services Research 40(6, part 2) includes several papers that detail the development and testing of H-CAHPS, as well as decisions that shaped the final product. Following are brief summaries of the editorial and articles that appear in the issue.

Darby, C., Hays, R.D., and Kletke, P. "Development and evaluation of the CAHPS® hospital survey," pp. 1973-1976.

The authors of this editorial point out that the development of the CAHPS® surveys must incorporate input from a wide range of stakeholders to ensure the final product will meet the needs of those decisionmakers. In addition, rigorous scientific methods need to be applied in the development and evaluation of CAHPS® data to ensure that the products are credible and useful.

Goldstein, E., Farquhar, M., Crofton, C., and others. "Measuring hospital care from the patients' perspective: An overview of the CAHPS® hospital survey development process," pp. 1977-1995.

This article provides an overview of the solicitation of input on the content of the survey and of the methods for sampling, data collection, and analysis. Input from stakeholders was obtained through cognitive interviews and focus groups with recently hospitalized patients, stakeholder meetings, multiple Federal Register notices, and a call for measures. After conducting a pilot test of the survey in three States, the researchers conclude that H-CAHPS can be administered as a standalone survey or integrated with existing hospital surveys, and that it provides sufficient standardization to ensure valid comparisons among hospitals.

Castle, N.G., Brown, J., Hepner, K.A., and Hays, R.D. "Review of the literature on survey instruments used to collect data on hospital patients' perceptions of care," pp. 1996-2017.

This paper describes the review of scientific literature on hospital patient surveys, which began the survey development process. The researchers supplemented the review with a call for measures from survey vendors and others. The many diverse hospital survey instruments found underscore the benefit of using a standardized survey, along with standardization of the sampling, administration protocol, and mode of administration.

Levine, R.F., Fowler Jr., F.J., and Brown, J.A. "Role of cognitive testing in the development of the CAHPS® hospital survey," pp. 2037-2056.

Following development of an early draft of the survey, but prior to field testing, the team conducted a series of one-on-one interviews with recently hospitalized patients to assess how well the draft items measured what the team had intended and to direct revisions to the survey. Many survey items required modification because respondents lacked the information required to answer them, were asked to make distinctions too fine for them to make, or were answering items that did not measure the intended constructs, among other reasons.

Sofaer, S., Crofton, C., Goldstein, E., and others. "What do consumers want to know about the quality of care in hospitals?", pp. 2018-2036.

Researchers conducted 16 focus groups in 4 cities with people who had recently been hospitalized or who had a close loved one recently hospitalized, and found that consumers and patients have a high degree of interest in hospital quality. Participants were most interested in survey items relating to doctor communication with patients, nurse and hospital staff communication with patients, responsiveness to patient needs, and cleanliness of the hospital room and bathroom. These findings were consistent across focus groups and participant characteristics.

Keller, S., O'Malley, A.J., Hays, R.D., and others. "Methods used to streamline the CAHPS® hospital survey," pp. 2057-2077.

To streamline the survey, the researchers used standard psychometric methods to assess the reliability and construct validity of the survey's 33 items, in conjunction with the importance assigned to each item by focus group participants. Sixteen questions (half the original survey) that measured seven aspects of hospital care (communication with nurses, communication with doctors, responsiveness to patient needs, physical environment, pain control, communication about medication, and discharge information) demonstrated excellent fit to the data.
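The reliability assessment described above can be illustrated with Cronbach's alpha, a standard internal-consistency statistic used in this kind of psychometric work. The sketch below uses invented responses and is not the study's actual analysis code:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses from 6 patients to 3 related items (invented data)
responses = np.array([
    [4, 4, 3],
    [3, 3, 3],
    [2, 2, 1],
    [4, 3, 4],
    [1, 1, 2],
    [3, 4, 3],
])
alpha = cronbach_alpha(responses)  # close to 1 when items move together
```

Items that dragged a composite's alpha down without adding information, and items rated unimportant by focus group participants, are the kind of candidates such an analysis flags for removal.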

O'Malley, A.J., Zaslavsky, A.M., Hays, R.D., and others. "Exploratory factor analyses of the CAHPS® hospital pilot survey responses across and within medical, surgical, and obstetric services," pp. 2078-2095.

This paper details the results of factor analyses used to evaluate survey data from 132 hospitals in 3 States for hospital-level, hospital-service-level, and patient-level differences. Three factors best described hospital-level differences: physician factors, nurse factors, and environment. Three factors explained much of the inter-unit variability: pain control, medication, and discharge information. Six factors best described inter-item differences at the patient level, varying somewhat from those found at the hospital level.
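Exploratory factor analyses of this kind start from the correlation matrix of the survey items. A minimal numpy-only sketch (simulated data and invented factor names, not the study's procedure) shows how a small number of eigenvalues can account for most of the correlation among items:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # simulated respondents

# Two hypothetical latent factors, e.g. "doctor" and "nurse" communication
doctor = rng.normal(size=n)
nurse = rng.normal(size=n)

def noise():
    return rng.normal(scale=0.5, size=n)

# Four items: two loading on each factor, plus item-specific noise
items = np.column_stack([
    doctor + noise(), doctor + noise(),   # doctor-communication items
    nurse + noise(), nurse + noise(),     # nurse-communication items
])

corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # eigenvalues, largest first
explained = eigvals / eigvals.sum()
# The first two components dominate, mirroring the two-factor structure
```

In the actual study the same logic was applied separately at the hospital, service, and patient levels, which is why the number and content of the factors differ across levels.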

Elliott, M.N., Edwards, C., Angeles, J., and others. "Patterns of unit and item nonresponse in the CAHPS® hospital survey," pp. 2096-2119.

In this study, the researchers used a common set of 11 administrative variables to predict unit nonresponse (by the person sampled) and the rate of item nonresponse to the survey. Unit response was highest for younger patients and for patients other than non-Hispanic whites; item nonresponse increased steadily with age. Nonresponse weights did not improve overall precision at sample sizes below 300-1,000 and are unlikely to improve the precision of hospital comparisons.
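Nonresponse weighting of the kind evaluated here typically weights each respondent by the inverse of the response rate in his or her stratum, so the respondent pool mirrors the sampled pool. The strata and counts below are hypothetical:

```python
# Hypothetical age strata: number sampled and number who responded
sampled   = {"age_18_44": 400, "age_45_64": 350, "age_65_plus": 250}
responded = {"age_18_44": 120, "age_45_64": 175, "age_65_plus": 150}

# Weight = 1 / stratum response rate = sampled / responded
weights = {s: sampled[s] / responded[s] for s in sampled}

# Weighted respondent counts reproduce the original sample composition
weighted_total = sum(weights[s] * responded[s] for s in sampled)
```

The article's finding is that, for this survey, such weights buy little precision at realistic hospital sample sizes, which argues for keeping the adjustment machinery simple.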

DeVries, H., Elliott, M.N., Hepner, K.A., and others. "Equivalence of mail and telephone responses to the CAHPS® hospital survey," pp. 2120-2139.

To estimate the effect of mail versus telephone survey methods on reports and ratings of hospital care, the researchers collected CAHPS® data in 2003 by mail and telephone from 9,504 patients; 39 percent responded by telephone and 61 percent by mail. They found significant mode effects for 13 of the 21 survey questions examined in the study. Telephone respondents were more likely to rate care positively and health status negatively compared with mail respondents. This suggests that mode of survey administration should be standardized or carefully adjusted for.
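A simple way to handle such mode effects is to estimate the telephone-versus-mail difference by regression and subtract it from the ratings. The sketch below uses simulated ratings; only the roughly 39-percent telephone share comes from the article, and the 0.5-point mode effect is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
phone = rng.random(n) < 0.39   # ~39% telephone respondents, as in the study

# Simulated 0-10 ratings with an assumed +0.5 telephone mode effect
rating = 8.0 + 0.5 * phone + rng.normal(scale=1.0, size=n)

# Estimate the mode effect by least squares: rating ~ intercept + phone
X = np.column_stack([np.ones(n), phone.astype(float)])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)

# Remove the estimated mode effect so modes are comparable
adjusted = rating - beta[1] * phone
```

After adjustment the telephone and mail group means coincide, which is the practical meaning of "carefully adjusted for" when a single survey mode cannot be standardized.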

Hurtado, M.P., Angeles, J., Blahut, S.A., and Hays, R.D. "Assessment of the equivalence of the Spanish and English versions of the CAHPS® hospital survey on the quality of inpatient care," pp. 2140-2161.

The H-CAHPS was developed and tested in both English and Spanish. A forward-backward translation procedure followed by committee review and cognitive testing was used to ensure a translation that was both culturally and linguistically appropriate. The researchers compared responses to the two language versions to evaluate equivalence and assess the reliability and validity of both versions. The results provide preliminary evidence of the equivalence between the Spanish and English versions of H-CAHPS.

O'Malley, A.J., Zaslavsky, A.M., Elliott, M.N., and others. "Case-mix adjustment of the CAHPS® hospital survey," pp. 2162-2181.

These authors developed a model to adjust for patient case mix on H-CAHPS responses, and to assess the impact of adjustment on comparisons of hospital quality. The most important case-mix variables were hospital service (surgery, obstetric, medical), age, race, education, general health status, speaking Spanish at home, having a circulatory disorder, and interactions of each of these variables with hospital service area. The authors conclude that case-mix adjustment has a small impact on hospital ratings, but can reduce bias in comparisons between hospitals.
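Case-mix adjustment of this general kind can be sketched as a regression of ratings on patient characteristics, with hospitals compared on the residuals rather than the raw means. The hospitals, effect sizes, and the single case-mix variable (age) below are invented for illustration and are not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600
hospital = rng.integers(0, 2, size=n)   # two hypothetical hospitals
age = rng.normal(65, 10, size=n)        # case-mix variable: patient age
age = age + 5 * hospital                # hospital 1 admits older patients

# Older patients rate care higher; the true hospital effect is 0.3
rating = 7.0 + 0.05 * (age - 65) + 0.3 * hospital \
    + rng.normal(scale=1.0, size=n)

# Regress ratings on the case-mix variable, then compare residuals
X = np.column_stack([np.ones(n), age - 65])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)
residual = rating - X @ beta

raw_gap = rating[hospital == 1].mean() - rating[hospital == 0].mean()
adj_gap = residual[hospital == 1].mean() - residual[hospital == 0].mean()
# raw_gap overstates the true hospital effect because of case mix;
# adj_gap moves the comparison back toward it
```

This mirrors the article's conclusion: the adjustment usually shifts hospital scores only slightly, but it removes a systematic bias when hospitals serve different patient populations.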
