Basic Concepts and Terminology of Quality-of-Life Research

Quantitative strategies for assaying patients’ ‘quality-of-life’ evolved as part of the medical outcomes and quality assessment initiatives that began three decades ago. Subsequent work has produced a well-defined set of measurement tools for collecting health-related quality-of-life (HRQOL) data, although the basic concepts of health that these tools measure remain perplexingly overlapping and amorphous. Rigorous development steps during the creation and validation of these HRQOL ‘instruments’ distinguish them from ad hoc survey questionnaires. Methodological issues nevertheless remain about how to deploy these data collection tools within the study designs of clinical trials.

Origins of quantitative ‘quality-of-life’ assessment

During the late 1960s and early 1970s the medical literature presented various philosophical discussions about ‘quality-of-life’, typically addressing end-of-life contexts (such as hospice care) or the level of professional satisfaction experienced by physicians in training and practice. By the late 1970s and 1980s, healthcare quality assessment efforts—spearheaded in large measure by the prescient work of Avedis Donabedian (1)—spawned new attempts to objectify and quantify health outcomes. Assays of patient experience, obtained through self-reported data, form a central part of the patient-oriented outcomes section of Donabedian’s ‘structure–process–outcomes’ triptych, and HRQOL information occupies this position within the larger context of healthcare delivery.

Efforts to craft reliable means of measuring patient outcomes accelerated during the late 1980s as a result of the Medical Outcomes Study (MOS) (2). This large-scale, multi-year observational survey studied patients with prevalent and treatable conditions such as hypertension, heart disease, diabetes and depression. It focused on patients’ personal evaluation of their functional status, sense of well-being, treatment preferences and values through standardized patient surveys, and sought to correlate these evaluations with conventional clinical measures. The MOS produced a number of survey instruments, the best known of which is the Short Form 36 (SF-36), a tool that remains in wide use today.

Subsequent published work falls into two categories: studies that apply existing instruments to new clinical populations and disease categories, and studies that propose and validate new HRQOL measures.

HRQOL terminology

Despite its many successes, the HRQOL research community has not yet arrived at canonical definitions of the phenomena it studies (3). HRQOL constructs are implicitly personal and subjective because they derive from patients’ philosophical and ethical assumptions (4). Different investigators prefer different conceptual taxonomies. Consequently, the overlapping nature of these entities defeats attempts to make sharp distinctions, as the list of commonly used terms below reveals:

Concepts related to ‘quality-of-life’

  • Quality-of-life: all personal and environmental factors related to the subject’s life; may or may not include health issues
  • Health-related quality-of-life: physical, psychological and social aspects related to health
  • Health status: physical and symptomatic factors
  • Functional status: ability to perform desired activities, including level of symptoms while doing those activities; external and observable
  • Well-being: psychological factors and sense of life satisfaction; internal and self-reported
  • Satisfaction: patient attitudes toward health and degree of approval of current status

However, this epistemological uncertainty is less of a handicap for clinical research than it might seem. A study design that carefully matches the components of these concepts, whatever they are called, to the specific clinical contexts of interest and uses tools validated in relevant settings can achieve valuable insights (5).

A taxonomy of HRQOL measurement tools

Fortunately, the tools devised to collect HRQOL data divide into a clear taxonomy: psychometric instruments (general or disease-specific, static or dynamic) and preference-based (utility) measures.

The majority of HRQOL studies use psychometric instruments, which query patients through multiple-choice questionnaires. Studies examining patient preferences use utility measures, which elicit patients’ views of their health status by interactively presenting them with alternative health scenarios. These two strategies yield different but complementary insights into patients’ quality-of-life characteristics (6,7).

Psychometric instruments

Using testing techniques from the field of psychology, these instruments consist of multiple-choice questions that are constructed and externally validated to measure patients’ understandings, feelings, attitudes, symptoms and capabilities. Two main types of psychometric measures are employed in clinical research: general health surveys and disease-specific instruments.

General health surveys

These probe broadly into patients’ experience, including symptoms, level of functioning, interpersonal relationships, feelings and mental status. Questions do not include factors specific to any particular condition or diagnosis. By remaining broadly applicable to all subjects, general health surveys permit universal comparisons across large populations of individuals and are more readily used to address health policy and organizational management questions. Two of the best-known general health surveys are the SF-36 (8) and the Sickness Impact Profile (9).

Disease-specific instruments

These, in contrast, focus primarily on issues related to a particular health condition, although they may include some general questions. These tools apply to subsets of patient populations and yield results pertinent to other groups with similar clinical profiles. By providing detailed insights into patients’ clinical courses, disease-specific measures can be used in therapeutic management decisions much more easily than general health surveys. Examples include the Seattle Angina Questionnaire (10) and the Asthma Quality-of-Life Questionnaire (11). Investigators commonly further specialize these instruments for their research aims. Thus, there is an Adolescent Asthma Quality-of-Life Questionnaire (12), a Living with Asthma Quality-of-Life-Questionnaire (13), an Integrated Therapeutics Group Asthma Short Form (14), a Pediatric Asthma Caregiver’s Quality-of-Life Questionnaire (15) and an Asthma Quality-of-Life Questionnaire for Native American Adults (16), among others.

Debate about the relative value of general health surveys versus disease-specific measures usually resolves to the obvious recognition that they offer complementary types of information whose usefulness depends on the study’s specific research questions. HRQOL investigations commonly employ a general survey along with one or more disease-specific questionnaires.

Static and dynamic instruments

Psychometric HRQOL instruments also divide along an orthogonal axis: whether the tool has a static or dynamic structure. Nearly all current questionnaires are static and consist of a fixed set of questions presented to the patient in a prescribed order and uniform format. All patients answer the same questions each time they complete the survey.

In contrast, some investigators are evaluating dynamic questionnaires similar to the ‘computer adaptive testing’ now widely employed in educational settings. These systems present patients with questions drawn from a pool of candidate questions. Algorithms determine the question selection and order through calculations made on the responses already given by the patient. Depending on the patient’s responses, he/she may see very different questions from one session to the next. Similarly, two patients undergoing the same test may see completely different questions. Advocates of such systems claim that these dynamic instruments can increase the precision of testing, reduce the response burden (by shortening the test length) and handle missing values better (17). However, evaluation of these strategies is at an early stage and the algorithms for these tools remain undisclosed proprietary knowledge, making outside evaluation of this strategy rather difficult.
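
The algorithms behind the commercial tools cited above are not public, but the general idea of adaptive item selection can be illustrated with a toy sketch. Everything in it is hypothetical: the item pool, the difficulty values, the update rule and the stopping criterion are illustrative stand-ins, not any published instrument’s algorithm.

```python
# Toy sketch of adaptive item selection; items, difficulties and the update
# rule are hypothetical illustrations, not a published instrument's algorithm.

ITEM_POOL = {                     # hypothetical physical-function items
    "walk_one_block": 1.0,        # easiest
    "climb_stairs": 2.0,
    "walk_one_mile": 3.0,
    "run_short_distance": 4.0,
    "vigorous_sports": 5.0,       # hardest
}

def next_item(estimate, asked):
    """Pick the unanswered item whose difficulty is closest to the current estimate."""
    remaining = {k: v for k, v in ITEM_POOL.items() if k not in asked}
    if not remaining:
        return None
    return min(remaining, key=lambda k: abs(remaining[k] - estimate))

def adaptive_session(responses, max_items=3):
    """Ask up to max_items questions, steering by the answers already given."""
    estimate, asked = 3.0, []                  # start from a mid-range estimate
    for _ in range(max_items):
        item = next_item(estimate, asked)
        if item is None:
            break
        asked.append(item)
        step = 1.0 / len(asked)                # smaller adjustments as answers accumulate
        estimate += step if responses[item] else -step
    return estimate, asked

# Two patients taking the "same" test see different questions.
high_function = {"walk_one_block": True, "climb_stairs": True, "walk_one_mile": True,
                 "run_short_distance": True, "vigorous_sports": False}
low_function = {"walk_one_block": True, "climb_stairs": False, "walk_one_mile": False,
                "run_short_distance": False, "vigorous_sports": False}
print(adaptive_session(high_function))   # higher estimate; harder items presented
print(adaptive_session(low_function))    # lower estimate; easier items presented
```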

Preference-based (utility) measures

Health utility measures assay not only a patient’s quality-of-life, but also his/her attitudes towards that quality-of-life, by measuring the patient’s willingness to take on risks in order to change it. Finding a patient’s preferences is particularly useful in evaluating alternative therapeutic courses, given his/her current health status. The utility value is determined by presenting the patient with a series of alternative scenarios that change with each choice the patient makes. When the patient reaches a point where the alternatives seem equally acceptable, a utility value between 0 and 1 is assigned based on that indifference point. Utilities thus reduce complex patient self-assessments to a simple numerical measure of the patient’s preference-adjusted health status.

Utilities probe patients’ preferences through three approaches (a brief computational sketch of the standard gamble follows the list):

  • Time tradeoff, which asks how much of his/her lifespan a patient would trade in order to obtain a given health outcome
  • Standard gamble, which asks a patient the highest risk of a lethal outcome he/she would accept in order to obtain a given degree of improvement in his/her current condition
  • Willingness to pay, which asks how much cash a patient would pay to obtain a given health outcome
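
As an illustration of how an indifference point might be located, the sketch below bisects on the mortality risk offered in a standard gamble. The fixed ‘patient threshold’ stands in for a live patient’s answers and the iteration count is arbitrary; real elicitation software personalizes the scenarios and their presentation.

```python
# Hypothetical sketch of a standard-gamble search for an indifference point.
# A real session would present each scenario to the patient; here a fixed
# threshold stands in for the patient's answers.

def accepts_gamble(risk_of_death, patient_threshold=0.25):
    """Stand-in for asking: would you accept a treatment with this mortality risk?"""
    return risk_of_death <= patient_threshold

def standard_gamble_utility(iterations=10):
    """Bisect on the offered risk until the patient is approximately indifferent."""
    low, high = 0.0, 1.0
    for _ in range(iterations):
        offered_risk = (low + high) / 2
        if accepts_gamble(offered_risk):
            low = offered_risk        # risk tolerated; probe a higher risk
        else:
            high = offered_risk       # too risky; back off
    indifference_risk = (low + high) / 2
    return 1.0 - indifference_risk    # utility of current state = 1 - accepted risk

print(f"Estimated utility of current health state: {standard_gamble_utility():.3f}")
```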

Making such judgments is a complex mental undertaking, and patients frequently need graphical assistance to aid their understanding of the alternatives. Visual analog scales (18) and interactive computer interfaces (19) have been shown to assist patient decision-making. Computer-guided sessions are particularly helpful because the alternatives posed to the patient must be personalized to his/her particular state, and this interactive procedure cannot readily be carried out on paper.

The primary value of utilities is that they yield continuous, interval-scale data that are easy to convert into other values, notably Quality Adjusted Life Years (QALYs) for use in cost-effectiveness and cost-utility analyses (20,21). One year of perfect health is a QALY of 1.0; n years of life at a utility of r yields a QALY of n × r. Such calculations then allow arithmetic comparisons between health states. Coupling utilities with cost data generates cost-per-QALY comparisons.
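
A worked example of that arithmetic, using made-up utilities, durations and costs:

```python
# QALY arithmetic with illustrative, made-up numbers.

def qalys(years, utility):
    """n years of life at utility r yields n * r QALYs."""
    return years * utility

qaly_a = qalys(years=10, utility=0.70)   # hypothetical strategy A: 7.0 QALYs
qaly_b = qalys(years=8, utility=0.95)    # hypothetical strategy B: 7.6 QALYs
cost_a, cost_b = 20_000, 50_000          # hypothetical total costs

# Incremental cost per QALY gained by choosing B over A.
cost_per_qaly = (cost_b - cost_a) / (qaly_b - qaly_a)
print(f"A: {qaly_a} QALYs, B: {qaly_b} QALYs, incremental cost per QALY: {cost_per_qaly:,.0f}")
```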

Despite their appealing numerical simplicity, utilities are sensitive to the search method used during preference elicitation (22) as well as the cognitive abilities, numerical skills, emotions and prejudices of the patients (23,24). They remain quite controversial within the HRQOL community, but their usefulness in cost-effectiveness assessments will likely increase their use over time.

HRQOL instrument development and deployment

When designing HRQOL studies, investigators frequently select tools from the large and accumulating set of published, validated instruments. The Medical Outcomes Trust (www.outcomestrust.org) maintains a repository of HRQOL measures that have met its stringent validation tests (25). However, investigators interested in clinical domains for which adequate tools are not yet available, or those in the business of creating new instruments, must undertake a long development and testing process.

Instrument creation

Authoring a new HRQOL instrument consists of multiple steps:

  • Define the clinical context of interest
  • Create a conceptual model of the clinical context against which ‘scales’ or ‘domains’ can be mapped
  • Select items (questions) to assay the domain scales (by writing questions de novo or selecting them from existing item pools (26))
  • Determine the appropriate length of the instrument (adjudicating between sensitivity and response burden—both increased by more questions (27))
  • Map (numerical) values to item responses (choosing between discrete/conditional values or continuous values such as with visual analog scales (28))
  • Design the questionnaire format (29) (including decisions about testing procedures such as whether subjects are shown their prior responses (30))
  • Create the statistical model linking item responses to concept traits (a minimal scoring sketch follows this list)
  • Translate the instrument for international use, if needed (a complex and subtle process (31))
  • Select a mode of data collection (e.g. paper forms, computerized system, automated telephone system)
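
Instruments differ in how they link item responses to scale scores, and each published tool documents its own algorithm. The sketch below shows one common pattern, with hypothetical items and scales: average the Likert responses belonging to a scale and rescale the result to 0–100.

```python
# Illustrative scale scoring with hypothetical items and scales; published
# instruments each define their own (documented) scoring algorithm.

SCALE_ITEMS = {
    "physical_function": ["walk_one_block", "climb_stairs", "carry_groceries"],
    "emotional_wellbeing": ["felt_calm", "felt_energetic"],
}
RESPONSE_RANGE = (1, 5)   # every item answered on a 1 (worst) to 5 (best) Likert scale

def scale_score(item_responses, scale):
    """Average a scale's item responses and rescale to 0 (worst) - 100 (best)."""
    lo, hi = RESPONSE_RANGE
    values = [item_responses[item] for item in SCALE_ITEMS[scale]]
    mean = sum(values) / len(values)
    return 100 * (mean - lo) / (hi - lo)

answers = {"walk_one_block": 5, "climb_stairs": 4, "carry_groceries": 3,
           "felt_calm": 2, "felt_energetic": 4}
print(scale_score(answers, "physical_function"))    # 75.0
print(scale_score(answers, "emotional_wellbeing"))  # 50.0
```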

Instrument validation

What distinguishes HRQOL instruments from arbitrary, ad hoc questionnaires is the process of rigorous validation against external metrics performed during instrument creation (32). For disease-specific measures, external validation is straightforward since, for instance, shortness-of-breath questions for asthma can be correlated with spirometry results. Validating general health survey questions about interpersonal relationships, attitude and other emotional matters requires various psychometric techniques typically used in psychology research.

Validation studies for instruments produce data about the performance attributes listed below:

  • Validity: extent to which an HRQOL instrument measures what it is intended to measure
  • Reliability: extent to which repeated measures by the HRQOL instrument yield consistent values for patients who are stable
  • Responsiveness: extent to which any change in a patient’s status produces a detectable change in the HRQOL measure
  • Interpretability: extent to which HRQOL results are meaningful and applicable to questions in the domain of interest
  • Minimally important clinical difference: smallest change in the HRQOL measure that correlates with a change in patient status considered significant; ‘significance’ is domain-specific
  • Response burden: amount of cognitive effort, time and inconvenience required of the subject to complete the HRQOL instrument
  • Administrative burden: amount of logistical effort, time and expense required of staff to deploy, collect and process the HRQOL instrument
  • Generalizability: extent to which results from the study group can be applied to other groups

Of these, the most important are validity, reliability, responsiveness and interpretability (33,34). Reliability is crucial in cross-sectional studies where patients with equivalent status should produce similar HRQOL values. Responsiveness is crucial in longitudinal studies where changes in patient status should produce consistent, proportional changes in the HRQOL data. Ascertaining the value of the ‘minimally important clinical difference’ (35) for an HRQOL instrument requires further fine-tuned validation studies to determine the change in test result that correlates with the clinical change of interest. This value is essential for instruments intended to monitor patient responses to therapeutic interventions.
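
As a small illustration of how two of these attributes are commonly quantified during validation (the scores below are fabricated): test-retest reliability can be summarized by the agreement between repeated administrations in stable patients (a simple correlation is shown here; intraclass correlation is more usual in practice), and responsiveness by the standardized response mean in patients known to have changed.

```python
# Fabricated scores illustrating two validation statistics.
from math import sqrt

def pearson(xs, ys):
    """Correlation between paired scores (test-retest agreement in stable patients)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def standardized_response_mean(changes):
    """Mean score change divided by the standard deviation of the changes."""
    n = len(changes)
    mean = sum(changes) / n
    sd = sqrt(sum((c - mean) ** 2 for c in changes) / (n - 1))
    return mean / sd

first = [62, 75, 48, 88, 70]      # stable patients, first administration
second = [60, 78, 50, 85, 72]     # same patients two weeks later
print(f"Test-retest correlation: {pearson(first, second):.2f}")

changes = [12, 8, 15, 5, 10]      # score changes after an effective intervention
print(f"Standardized response mean: {standardized_response_mean(changes):.2f}")
```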

Instrument deployment

HRQOL studies employ the same types of study design used in clinical research generally, such as longitudinal versus cross-sectional and randomized versus observational. Important considerations in defining trial protocols include:

  • Coherent sampling strategies
  • Techniques for case-mix measurement and statistical control—risk adjustment is particularly crucial for scaling functional status measures
  • Meaningful assessment schedules for different diagnostic groups (including frequency of repeated measures and timing of test administration (36))
  • Proxy administration protocols when the primary subject is unable to complete the HRQOL instrument directly (e.g. in some pediatric, neurologic and psychiatric studies (4,37,38))

Data analysis issues

HRQOL data undergo the same types of biostatistical processing as other clinical research data. Typically though, the analyzed data are not the primary patient responses but rather the calculated scale scores defined by the conceptual model of the instrument. Investigators must exert considerable effort to clean HRQOL data during this process.

The self-reported nature of HRQOL data means that missing and incorrect values occur much more frequently than in other types of studies. When several questions map to the same scale, imputation of missing responses usually uses the arithmetic mean of the non-missing responses; if the patient provides fewer responses than some minimally required number, the scale cannot be scored. Similarly, techniques must be defined for handling inappropriate duplicate responses (common when paper forms are used), either by imputing the patient’s presumably intended response from other answers or by invalidating the scale result.
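
A minimal sketch of that convention (sometimes called the half-scale rule); the items and the minimum-response threshold are illustrative.

```python
# Illustrative within-scale imputation: if at least half of a scale's items are
# answered, missing items take the mean of the answered ones; otherwise the
# scale is left unscored. Items and threshold are hypothetical.

SCALE_ITEMS = ["q1", "q2", "q3", "q4"]
MIN_ANSWERED_FRACTION = 0.5

def score_scale(responses):
    answered = [responses[i] for i in SCALE_ITEMS if responses.get(i) is not None]
    if len(answered) < MIN_ANSWERED_FRACTION * len(SCALE_ITEMS):
        return None                                   # too much missing data
    item_mean = sum(answered) / len(answered)
    filled = [responses[i] if responses.get(i) is not None else item_mean
              for i in SCALE_ITEMS]
    return sum(filled) / len(filled)                  # scale score = mean of items

print(score_scale({"q1": 4, "q2": 3, "q3": None, "q4": 5}))        # 4.0
print(score_scale({"q1": 2, "q2": None, "q3": None, "q4": None}))  # None
```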

A different imputation problem confronts longitudinal studies with between-group comparisons. When patients drop out of a study for any reason (e.g. death, withdrawal), several different techniques can be used to handle the patients’ missing values through the duration of the study protocol (39,40). These include:

  • ‘Last value carried forward’: continue the final ‘prior to dropout’ values for all subsequent data points (sketched after this list)
  • ‘Arbitrary substitution’: decrease by a selected, arbitrary amount the final ‘prior to dropout’ values for all subsequent data points
  • ‘Empirical Bayes’: estimate missing values from weighted slopes of the individual subject’s non-missing values and of aggregated group values
  • ‘Within-subject modeling’: estimate missing values from non-missing values by ordinary least-squares regression
  • ‘Multiple imputation’ models
  • ‘Mixed-model analysis of variance’ approaches
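
The first of these is simple enough to sketch; the visit schedule and scores below are hypothetical.

```python
# 'Last value carried forward' for a patient who drops out (hypothetical data).
# None marks scheduled visits with no observation after dropout.

def last_value_carried_forward(series):
    """Replace missing values with the most recent preceding observation."""
    filled, last = [], None
    for value in series:
        if value is not None:
            last = value
        filled.append(last)
    return filled

observed = [68, 72, 75, None, None, None]    # patient withdrew after visit 3
print(last_value_carried_forward(observed))  # [68, 72, 75, 75, 75, 75]
```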

After creating clean, usable data, the investigator’s next challenge is to determine the data’s meaning. Appropriate norms—particularly with suitable risk adjustment—are still being derived for HRQOL data, in contrast to conventional clinical data types, for which norms are well established. Meaningful ‘bins’ for patient results also remain to be established for HRQOL instruments in various clinical domains. Since HRQOL instruments are frequently combined in study protocols, techniques for determining the meaning of conjoined values from these measures must also be established (41). Similarly, how HRQOL data are combined with other clinical data, and what the combination means, must be addressed (42). Finally, the investigator must determine how to present and report HRQOL data (43). Various data visualization ideas have been proposed, but this remains an emerging field (44).

Conclusions

Despite resting on rather indistinct definitional footing, HRQOL research has evolved a robust set of measurement tools to assay patient experience and thus the outcomes of healthcare interventions. General health surveys can yield reliable, broad-based, population-wide data, useful for health policy questions, while disease-specific measures can ‘biopsy’ patient status in a non-invasive, repeated, responsive fashion. Utilities can reveal patient preferences comparable across domains and treatment strategies, and they form the basis for cost-utility and cost-effectiveness analyses. Creation and validation of new HRQOL instruments involve complex steps that slow the proliferation of available tools, although an expanding set of measures is now available. Issues concerning the proper analysis and interpretation of HRQOL information frame the major thrust for future HRQOL research.

References

  1. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q 1966;44(Suppl):166–206.
  2. Tarlov A, Ware J, Greenfield S et al. The medical outcomes study: An application of methods for monitoring the results of medical care. JAMA 1989;262:925–30.
  3. Gill TM, Feinstein AR. A critical appraisal of the quality of quality-of-life measurements. JAMA 1994;272:619–26.
  4. Dijkers M. Measuring quality of life: Methodological issues. Am J Phys Med Rehabil 1999;78:286–300.
  5. Patrick DL, Chiang YP. Measurement of health outcomes in treatment effectiveness evaluations: Conceptual and methodological challenges. Med Care 2000;38(Suppl):II14–25.
  6. Revicki D, Kaplan R. Relationship between psychometric and utility-based approaches to the measurement of health-related quality of life. Qual Life Res 1993;2:477–87.
  7. Lalonde L, Clarke AE, Joseph L et al. Comparing the psychometric properties of preference-based and nonpreference-based health-related quality of life in coronary heart disease. Canadian Collaborative Cardiac Assessment Group. Qual Life Res 1999;8:399–409.
  8. Ware J, Sherbourne C. The MOS 36-item short-form health survey (SF-36): Conceptual framework and item selection. Med Care 1992;30:473–83.
  9. Bergner M, Bobbitt RA, Carter WB et al. The Sickness Impact Profile: Development and final revision of a health status measure. Med Care 1981;19:787–805.
  10. Spertus JA, Winder JA, Dewhurst TA et al. Development and evaluation of the Seattle Angina Questionnaire: A new functional status measure for coronary artery disease. J Am Coll Cardiol 1995;25:333–41.
  11. Leidy NK, Coughlin C. Psychometric performance of the Asthma Quality of Life Questionnaire in a US sample. Qual Life Res 1998;7:127–34.
  12. Rutishauser C, Sawyer SM, Bond L et al. Development and validation of the Adolescent Asthma Quality of Life Questionnaire (AAQOL). Eur Respir J 2001;17:52–8.
  13. van der Molen T, Postma DS, Schreurs AJ et al. Discriminative aspects of two generic and two asthma-specific instruments: Relation with symptoms, bronchodilator use and lung function in patients with mild asthma. Qual Life Res 1997;6:353–61.
  14. Bayliss MS, Espindle DM, Buchner D et al. A new tool for monitoring asthma outcomes: The ITG Asthma Short Form. Qual Life Res 2000;9:451–66.
  15. Reichenberg K, Broberg AG. The Paediatric Asthma Caregiver’s Quality of Life Questionnaire in Swedish parents. Acta Paediatr 2001;90:45–50.
  16. Gupchup GV, Hubbard JH, Teel MA et al. Developing a community-specific health-related quality of life (HRQOL) questionnaire for asthma: The Asthma-Specific Quality of Life Questionnaire for Native American Adults (AQLQ-NAA). J Asthma 2001;38:169–78.
  17. DynHA (Dynamic Health Assessments). QualityMetric Inc. Available from: URL: http://www.qmetric.com/innohome/index.shtml. Date accessed 5/23/2001.
  18. Badia X, Monserrat S, Roset M et al. Feasibility, validity and test-retest reliability of scaling methods for health states: The visual analogue scale and the time trade-off. Qual Life Res 1999;8:303–10.
  19. Lenert LA, Soetikno RM. Automated computer interviews to elicit utilities: Potential applications in the treatment of deep venous thrombosis. J Am Med Inform Assoc 1997;4:49–56.
  20. Ganiats TG, Browner DK, Kaplan RM. Comparison of two methods of calculating quality-adjusted life years. Qual Life Res 1996;6:162–4.
  21. Saint S, Fendrick AM. Economic Evaluation: A Brief Overview. Clinical Researcher 2001;1:36–9.
  22. Lenert LA, Cher DJ, Goldstein MK et al. The effect of search procedures on utility elicitations. Med Decis Making 1998;18:76–83.
  23. Kaplan R, Feeny D, Revicki D. Methods for assessing relative importance in preference based outcome measures. Qual Life Res 1993;2:467–75.
  24. Lenert L, Kaplan RM. Validity and interpretation of preference-based measures of health-related quality of life. Med Care 2000;38(Suppl):II138–50.
  25. Lohr KN, Aaronson NK, Alonso J et al. Evaluating quality-of-life and health status instruments: Development of scientific review criteria. Clin Ther 1996;18:979–92.
  26. Revicki DA, Cella DF. Health status assessment for the twenty-first century: Item response theory, item banking and computer adaptive testing. Qual Life Res 1997;6:595–600.
  27. Katz JN, Larson MG, Phillips CB et al. Comparative measurement sensitivity of short and longer health status instruments. Med Care 1992;30:917–25.
  28. Grunberg SM, Groshen S, Steingass S et al. Comparison of conditional quality of life terminology and visual analogue scale measurements. Qual Life Res 1996;5:65–72.
  29. Mullin PA, Lohr KN, Bresnahan BW et al. Applying cognitive design principles to formatting HRQOL instruments. Qual Life Res 2000;9:13–27.
  30. Guyatt GH, Townsend M, Keller JL et al. Should study subjects see their previous responses: Data from a randomized control trial. J Clin Epidemiol 1989;42:913–20.
  31. Herdman M, Fox-Rushby J, Badia X. ‘Equivalence’ and the translation and adaptation of health-related quality of life questionnaires. Qual Life Res 1997;6:237–47.
  32. Hanita M. Self-report measures of patient utility: Should we trust them? J Clin Epidemiol 2000;53:469–76.
  33. Guyatt GH, Feeny DH, Patrick DL. Measuring health-related quality of life. Ann Intern Med 1993;118:622–9.
  34. Elasy TA, Gaddy G. Measuring subjective outcomes: Rethinking reliability and validity. J Gen Intern Med 1998;13:757–61.
  35. Juniper EF, Guyatt GH, Willan A et al. Determining a minimal important change in a disease-specific quality of life questionnaire. J Clin Epidemiol 1994;47:81–7.
  36. Pater J, Osoba D, Zee B et al. Effects of altering the time of administration and the time frame of quality of life assessments in clinical trials: An example using the EORTC QLQ-C30 in a large anti-emetic trial. Qual Life Res 1998;7:273–8.
  37. Novella J, Ankri J, Morrone I et al. Evaluation of the quality of life in dementia with a generic quality of life questionnaire: The Duke Health Profile. Dement Geriatr Cogn Disord 2001;12:158–66.
  38. Theunissen NC, Vogels TG, Koopman HM et al. The proxy problem: Child report versus parent report in health-related quality of life research. Qual Life Res 1998;7:387–97.
  39. Diehr P, Patrick D, Hedrick S et al. Including deaths when measuring health status over time. Med Care 1995;33(Suppl):AS164–72.
  40. Revicki DA, Gold K, Buckman D et al. Imputing physical health status scores missing owing to mortality: Results of a simulation comparing multiple techniques. Med Care 2001;39:61–71.
  41. Patrick DL, Deyo RA. Generic and disease-specific measures in assessing health status and quality of life. Med Care 1989;27:S217–31.
  42. Austin PC, Escobar M, Kopec JA. The use of the Tobit model for analyzing measures of health status. Qual Life Res 2000;9:901–10.
  43. Staquet M, Berzon R, Osoba D et al. Guidelines for reporting results of quality of life assessments in clinical trials. Qual Life Res 1996;5:496–502.
  44. Klee M, Groenvold M, Machin D. Using data from studies of health-related quality of life to describe clinical issues examples from a longitudinal study of patients with advanced stages of cervical cancer. Qual Life Res 1999;8:733–42.
  45. Spertus J, Tooley J, Poston W et al. Expanding the outcomes in clinical trials of heart failure: The quality of life and economic components of EPHESUS (Eplerenone’s Neurohormonal Efficacy and Survival Study). Am Heart J 2001 (in press).
