Data Issues for Measuring Quality for Persons with Disabilities
Developing Quality of Care Measures for People with Disabilities: Summary of Expert Meeting
As noted previously, because of time constraints the meeting's discussion concentrated on quality measures applicable to individual health care providers (e.g., measures to help them improve or track their performance). The data discussion therefore also concentrated on information that could be aggregated for individual health care providers. Federal surveys provide data that support population-based quality measurement for persons with disabilities, but the group did not discuss this data source.
Administrative Data Sets
Administrative data include claims, encounter records, and other information generated during routine administration of health care services. Major sources of administrative data include Medicare, Medicaid, Veterans Affairs (VA), and private health insurance plans. As discussed earlier, participants agreed that all research involving Medicare administrative data should include persons who qualify because of disability, unless there are strong and compelling reasons to exclude them. Given current eligibility requirements (i.e., before implementation of eligibility changes mandated by the Patient Protection and Affordable Care Act16), Medicaid databases contain substantial numbers of individuals with disabilities.
Typically, administrative data include routine demographic information (e.g., age, sex, race and ethnicity, indicators of eligibility status) and basic information relating to a particular claim or health care encounter, including International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM)17 diagnosis codes, codes for procedures, and service dates. Some administrative data may contain information pertinent to impairment-related services, such as Medicare data generated by inpatient rehabilitation facilities (IRF), skilled nursing facilities (SNF), and home health agencies (HHA). (Beyond administrative information, VA facilities also generate considerable electronic health information relevant to the discussion below.)
The group agreed that ICD-9-CM codes provide extremely limited information relating to disability. Thus, administrative databases that rely on ICD-9-CM codes for most of their clinical insight have minimal utility for disability-related research. Although it is possible to identify diseases and disorders that can be disabling, administrative data do not capture the additional, multidimensional information that characterizes disability. Health care providers are scheduled to move to the updated ICD-10-CM for morbidity reporting and other administrative purposes on October 1, 2013.
Research Question (RQ): Will ICD-10-CM provide any additional insight into disability beyond the limited information in ICD-9-CM? How could ICD-10-CM be used to its maximum advantage in disability-related research?
Despite bleak assessments of the value of standard administrative data, it is possible that data generated by IRFs, SNFs, and HHAs might hold greater promise for research addressing persons with disabilities. The obvious problem involves tying patient populations to care provided within specific settings (IRF, SNF, or HHA), and potential biases related to that selection process. Section 723 of the Medicare Prescription Drug, Improvement, and Modernization Act of 200318 required HHS to make Medicare data about beneficiaries with chronic conditions readily available to researchers. The resultant database, the Chronic Condition Warehouse (CCW),19 selected its longitudinal cohort using the 5 percent national Medicare sample from 1999-2004, with all beneficiaries within that cohort tracked continually over time. From 2005 forward, CCW contains information for 100 percent of enrolled Medicare beneficiaries with the targeted conditions. CCW uses diagnosis and procedure data on Medicare claims to identify 21 chronic conditions (e.g., acute myocardial infarction, Alzheimer's disease, breast cancer, depression, diabetes, glaucoma, heart failure, hip fracture, osteoporosis, and stroke). Most importantly, all information gathered by IRFs, SNFs, and HHAs about these beneficiaries is merged onto the CCW data. Although this database offers a rich source of functional information, all of these data are derived during provision of specific services, raising the potential for bias relating to differences in service availability or use by individual patients.
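The claims-based case finding that the CCW performs can be illustrated with a minimal sketch: a beneficiary is flagged when any claim carries a diagnosis code from a condition's code set. The diagnosis codes and claims below are invented for illustration; the CCW's actual algorithms are more involved (reference periods, qualifying claim types, required code counts).

```python
# Minimal sketch of claims-based case finding in the spirit of the CCW:
# flag beneficiaries whose claims carry any diagnosis code from a
# condition's code set. Codes and claims are placeholders, not the
# actual CCW specifications.
DIABETES_CODES = {"250.00", "250.02"}  # hypothetical ICD-9-CM code set

claims = [
    {"beneficiary": "A", "dx": ["401.9"]},
    {"beneficiary": "B", "dx": ["250.00", "401.9"]},
]

# A beneficiary qualifies if any listed diagnosis code is in the code set.
flagged = {c["beneficiary"] for c in claims if DIABETES_CODES.intersection(c["dx"])}
print(flagged)  # → {'B'}
```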
RQ: What quality measurement research can be done with Medicare IRF, SNF, and HHA data specifically for persons with disabilities? How can the CCW be used to examine quality of care for persons with disabilities? Which quality dimensions relating to persons with disabilities do these Medicare data sources address?
The potential of using prescription drug data to examine quality of care for people with disabilities was also raised. Dr. Weinrich raised the possibility of using Medicare Part D data, and Dr. Himmelstein described the value of pharmacy data within Medicaid data sets. In particular, Dr. Himmelstein noted the utility of prescription data for case finding (i.e., identifying individuals with specific conditions). The VA also has prescription medication data online.
RQ: What quality measurement research can be done with prescription drug data specifically for persons with disabilities? Do different data sets containing medication information present different strengths and weaknesses for this purpose? Which quality dimensions relating to persons with disabilities might prescription medication data address?
Electronic Medical Records and Health Information Technology
The potential value of EMRs and health IT had come up throughout the day, as noted above. During the afternoon session, the discussion became more specific. With some exceptions (e.g., the VA health system), information on disability is largely lacking within EMRs, just as it is in paper records. "The electronic medical record is only as good as the information that the clinicians and providers are putting in there," observed Dr. Sandel, "and they don't routinely assess functional status. It's just not part of the routine assessment." Other dimensions of disability are even less likely to be documented in EMRs.
Participants suggested approaches for enriching the disability data within EMRs. Dr. Sandel suggested identifying tools designed specifically for different conditions that capture relevant functional status and disability information. These tools could be embedded within EMRs, facilitating collection of these data about patients with the conditions for clinical uses, quality measurement, and research. More generic (i.e., diagnosis-independent) tools could also be useful, such as the method developed by Alan Jette and colleagues at Boston University, the Activity Measure for Post-Acute Care (AM-PAC).20 Dr. Himmelstein speculated about creating an electronic disability record that would be longitudinal and track patient status over time. Dr. Campbell described developing health IT systems for specialized clinical settings focusing on rare conditions, such as the hemophilia clinic network. He is also working with other stakeholders interested in rare conditions (e.g., spina bifida, Duchenne muscular dystrophy) to develop health IT tools to address specific goals.
Recalling the multidimensional aspects of disability, Dr. Clark recommended that any such systematic data gathering approach include a full range of information, including information about emotional health, pain, and fatigue. CMS is apparently working on new data gathering tools for hospital discharge evaluations and for post-acute care settings. EMR disability information templates could be structured to integrate seamlessly with CMS's data tools.
RQ: What data collection tools to gather multidimensional information about disability could be embedded within EMRs? How should these tools integrate with data gathering tools CMS is developing? How would the data generated by these tools be used for research and for quality measurement? What would be the feasibility and cost implications of routinely gathering disability data using such tools? Which clinicians would collect the data, how would they be trained, and how would data gathering costs be best compensated to provide incentives for complete and accurate reporting?
Dr. Saliba raised concerns about some negative consequences of the practice of "copying and pasting" information within an EMR. Clinician participants described situations where the extent or severity of functional impairments or other disability dimensions changes over time, necessitating continuous reassessment and updating of records to reflect those changes. "We see problem lists getting perpetuated from record to record," recounted Dr. Saliba. "People support a goal of auto-populating a lot of the fields in electronic health records based on prior diagnoses or prior history. But that earlier information may no longer be current."
RQ: How do "copy and paste" and "auto-populate" practices of clinicians using EMRs affect the accuracy of information in EMRs concerning patients' level of disability? How can accuracy be tracked and errors resulting from these EMR practices be identified? How often should entirely new data be entered into EMRs to ensure that information about disability is up-to-date and accurate?
Dr. Sandel also raised a general concern about conducting research or generating quality measures using EMRs: much of the crucial information is contained in narrative text. The data are therefore difficult to extract and put into an analyzable format. Defining specific data fields for information about disability dimensions could assist the process of extracting relevant data. Dr. Himmelstein suggested that AHRQ is well-positioned to address this issue of specifying the format and content of disability-related information in EMRs to ensure that the data are easily identified for analysis.
RQ: How should data concerning disability be entered into EMRs to ensure the data are easy to identify and can be extracted for analysis? How can disability data considerations be included in current AHRQ initiatives surrounding the design, content, and technological specifications of EMRs?
John Hough, DrPH, suggested alternative approaches, with implications for research and quality measurement data, for resolving concerns about extracting information from medical records. Extracts of his comments are as follows:
Hough: Between 2004 and 2006, the National Committee on Vital and Health Statistics (NCVHS), which advises the [HHS] Secretary, received recommendations on a variety of standards for setting up EMRs. NCVHS received those recommendations from various domain-specific working groups operating under the auspices of the Consolidated Health Informatics (CHI) Initiative, which had been an "eGov" initiative throughout all departments. The functions of the CHI Initiative, including continuing responsiveness to all its recommendations, are now primarily carried out by the HHS Office of the National Coordinator for Health Information Technology (ONC).
One of the CHI domain-specific groups had been assigned to work in the Functioning and Disability Domain. The CHI Functioning and Disability Working Group recommended to NCVHS that ICF be the language in which terms reflecting functional status or disability should be transmitted. Their recommendations, though quite valuable, primarily referred to ICF coding at the so-called "code stem level," namely, one letter representing one of the four ICF domains followed by up to five digits before a decimal point. The ICF coding scheme involves a straightforward but nevertheless complicated system of post-decimal modifying digits, which reflect the judged or assessed level of severity associated with the impairment represented by the ICF code stem.
According to the World Health Organization (WHO), ICF codes have no inherent meaning without qualifiers, and by default, WHO interprets incomplete codes as signifying the absence of a problem. Thus, the qualifiers impart most of the meaningful interpretation in any ICF code. Therefore they are valuable and should be components of the standards associated with moving data represented by discrete ICF codes: all codes should be qualifier-modified.
As a secondary component of their recommendations, the CHI Working Group and NCVHS invoked the usefulness of "LOINC coding" for "question and answer sets." LOINC stands for "Logical Observation Identifiers Names and Codes," and LOINC codes are designed to accompany the codes or "answers" from other nomenclatures for easy and reusable transmittal of such discrete codes, for example, pathology laboratory values. This means that a reproducible LOINC code associated with a qualifier-modified ICF code could impart information representing any answer to any question, such as a scored value on a conventional functional assessment instrument like the Functional Independence Measure.
There are more than 100,000 qualifier modifications to those code stems, although the full complement of qualifier-modified ICF codes has not yet been calculated. This represents a shortcoming in existing ICF coding. But once catalogued, the full complement of qualifier-modified ICF codes would constitute an enormously useful tool for associating ICF-oriented concepts with ICF codes, which in turn would be associated with discrete, unique cases. In that approach, quite granular cases of functional impairment, activity limitation, or participation restriction could be identified in a standardized manner using the modified code stems. That is what the CHI Working Group on Functioning and Disability recommended, what the NCVHS advised, and now, since February 2007, what the Secretary of Health and Human Services has approved.
Therefore, one formative task still awaiting work by health services researchers would be to explicate the full universe of qualifier-modified ICF codes. The goal would be not only to establish a catalog of codes, but also to associate them in a reproducible, reusable format with LOINC codes, which are designed to accomplish this parallel purpose. Any model of disability can be accommodated by these code stems; an investigator would not need to dispense with their preferred model of disability in order to utilize ICF coding and to associate a particular outcome within that model with a related, qualifier-modified ICF code.
The NCVHS recommendations called for us to develop this catalog of LOINC codes associated with qualifier-modified ICF codes. We call this process the "LOINC-ification" of ICF codes. A good start has been made by Regenstrief Institute scientists at Indiana University School of Medicine, who have already assigned LOINC codes to a number of what are called "government forms." For example, the CMS Minimum Data Set for skilled nursing facility services, Version 3.0, is now in the electronic LOINC environment, waiting to be matched to any set of questions, and various ICF codes representing functional status among nursing home residents could be one set of such questions for which LOINC codes would be the answers. Other government forms that have already been LOINC-ified include the Social Security Administration's Residual Functional Capacity form for both physical and mental functioning and OASIS [Outcome and Assessment Information Set], used in home health care. A small collection of very widely [available] assessment instruments, like the Geriatric Depression Scale, are already in this LOINC environment.
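As a rough sketch of the coding structure Dr. Hough describes, the following illustrates parsing a qualifier-modified ICF code (one component letter, b/s/d/e, plus up to five stem digits and optional post-decimal qualifier digits) and pairing it with a LOINC question code. The parsing rules are simplified, and the specific codes are placeholders, not official LOINC or ICF assignments.

```python
import re
from dataclasses import dataclass

# Simplified pattern for an ICF code: one component letter (b, s, d, e),
# up to five stem digits, and optional qualifier digits after the decimal.
ICF_PATTERN = re.compile(
    r"^(?P<component>[bsde])(?P<stem>\d{1,5})(?:\.(?P<qualifier>\d+))?$"
)

def parse_icf_code(code: str) -> dict:
    """Split an ICF code into its component, stem, and qualifier parts."""
    match = ICF_PATTERN.match(code)
    if match is None:
        raise ValueError(f"not a recognizable ICF code: {code!r}")
    parts = match.groupdict()
    # Per the WHO convention noted above, a code without qualifiers is
    # interpreted by default as signifying the absence of a problem.
    parts["qualified"] = parts["qualifier"] is not None
    return parts

# A "question and answer" pairing: a LOINC code identifies the assessment
# item, and a qualifier-modified ICF code records the standardized answer.
@dataclass(frozen=True)
class FunctionalObservation:
    loinc_question: str  # placeholder LOINC code for the assessment item
    icf_answer: str      # qualifier-modified ICF code recording the result

obs = FunctionalObservation(loinc_question="00000-0", icf_answer="d4500.3")
print(parse_icf_code(obs.icf_answer))
```

The key design point is the one the Working Group emphasized: the qualifier digits carry the interpretive content, so a cataloguing or data-exchange tool should treat unqualified code stems as incomplete rather than as observations in their own right.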
RQ: How should ICF codes be used, to maximum advantage, within health IT systems to facilitate flexible but detailed definitions of disability? What algorithms could be designed to assist researchers with using ICF codes in an electronic data environment? How could these data resources be used for quality measurement?
Data Provided by Patients
Information in medical records has historically been generated by clinicians. However, the advent of EMRs and secure patient portals for access to online information within health care settings has raised the possibility of patients contributing information directly to EMRs. For instance, the AM-PAC tool20 mentioned above was designed for patients to self-administer. As Dr. Sandel suggested, "You could envision a world in which, for example, at Kaiser, a patient logs on to KP.org and [fills] out their health assessment form… embedded in that could be the AM-PAC." Dr. Saliba described an evaluation of obtaining information directly from nursing home residents for Medicare's SNF Minimum Data Set items. The assessment found that:
Saliba: Direct patient self-reporting was feasible. It actually saved time over staff trying to eyeball the person and guess what was going on. And it was more accurate [comparing it] to independent 'gold standard' measures of what was going on with the individual.
We began with the hypothesis that there was an absolute cognitive cut point beyond which people could not self-report.… We found that we were wrong: there is no absolute cognitive cut point below which people are not able to answer these questions and give more valid responses than staff observations or proxy observations.
RQ: How could data about various disability dimensions be gathered routinely directly from patients? What validity, reliability, and feasibility issues are raised by disability data gathering directly from patients? How might demographic characteristics—including age, sex, race, ethnicity, socioeconomic status—and cultural factors affect data gathered through these self-reports? What are the costs of obtaining self-reported information?
Dr. Saliba described one nursing home resident whom the staff had assumed, from the time of her admission 1 year previously, was cognitively impaired and incapable of meaningful communication. During the study about gathering data directly from patients, research staff provided a hearing amplification device to the woman, who "then sat up, began responding to their questions, and pointed to answers on a card that they were using to guide their interview with her. She was reading and giving answers to the questions." The woman was not cognitively limited; she had hearing loss.
Dr. Stineman raised concerns about "people in long-term care that have been misdiagnosed or mis-assumed not to have the ability to be autonomous and in control of their lives." Dr. Kirschner raised specific concerns about persons with cerebral palsy who are dysarthric and assumed not to be cognitively capable. Assistive technologies can accommodate communication needs and allow these patients to self-report information. Harvey Schwartz, PhD, MBA, raised concerns about people with manual dexterity difficulties typing information into EMRs (e.g., as in the AM-PAC example above).
RQ: How might sensory and communication disabilities affect patients' self-reports? How can communication needs be accommodated so that patients can provide their own responses to questions (e.g., concerning disability dimensions, perceptions of care)? For entering data directly online into EMRs, what types of accommodations are required to ensure full participation of all patients (e.g., including those who cannot use a keyboard)?
Dr. Clark raised the point that certain types of information have inherent validity and accuracy when reported directly by patients. Examples include patients' perceptions of communication or their experiences during clinical encounters. For quality measurement purposes, it is important to recognize that whatever items are gathered from patients will then likely become the targets of efforts to improve care. Therefore, it is critical to identify topics that reflect aspects of care that patients view as important.
Although there was general enthusiasm for the Consumer Assessment of Healthcare Providers and Systems (CAHPS®) set of survey instruments, there was some concern about the implications of the questionnaire that was developed specifically for individuals with impaired mobility. The intentions were good, given widespread physical barriers throughout the health care delivery system. But given the diversity of disabling conditions, proceeding disability-by-disability to design separate CAHPS instruments seems inefficient. It might be more appropriate to take a "universal design" approach toward measurement that would consider the full range of persons with and without disabilities in constructing question banks and use technologies (e.g., computer assistance) and methodological techniques (e.g., item response theory) to streamline administration.
It will be critically important to conduct extensive cognitive testing of different word choices and question formulations to capture critical concepts for different broad categories of disabilities. For example, in designing the CAHPS survey for persons with mobility impairments, the researchers found they could not use the word "barrier" in asking about the physical impediments that individuals encounter. They did not find an alternative way of capturing that issue, which meant that a major area of concern was not included in the survey. Dr. Andresen suggested starting with ICF concepts and code categories to ensure that the broad range of disability dimensions is captured in a universally designed survey.
RQ: What aspects of care are of particular importance to individuals with disabilities? How do these topics vary across different disabling conditions? What tools could be developed for patients to report on these important aspects of care? How could a "universal design" approach be implemented in designing and administering surveys about health care experiences?