Here, we address some of the common questions about measuring decision quality and Decision Quality Instruments (DQIs).
Why is it important to measure decision quality?
“I really would rather have had the breast off, to be honest. But that wasn’t an option I was given. They told me that they would do the lumpectomy. I am very nervous. It’s still a worry.” – Breast cancer patient reflecting on her surgical treatment decision
Too often patients are not given a voice in their medical decisions. Patient-centered care has been defined as “healthcare that establishes a partnership among practitioners, patients and their families (when appropriate) to ensure that decisions reflect patients’ wants, needs and preferences and that patients have the education and support they need to make decisions and participate in their own care” (Hurtado 2001). Despite increasing emphasis on this concept, there is still a lack of available measures to assess whether this is happening. In the absence of measures, there is no way for clinicians, consumers or payers to determine the quality of decisions made about most common medical tests and treatments.
To fill this gap, investigators at the HDSC have been leading efforts to develop and evaluate survey instruments that measure decision quality.
How are Decision Quality Instruments developed?
Over the past several years, a team of researchers, led by Dr. Sepucha at Mass General, has developed and tested an approach to measuring decision quality for common medical tests and treatment decisions. The work has been supported by and conducted in collaboration with investigators at the Informed Medical Decision Foundation, now the Informed Medical Decisions Program at MGH.
The development process followed best practices for survey research development and included the following key steps:
- Task 1: Produce a parsimonious set of candidate facts and goals for the decision and evaluate them for accuracy, importance, and completeness.
- Task 2: Draft questions and conduct cognitive interviews to ensure the questions are understandable to diverse populations.
- Task 3: Conduct field tests to examine psychometric properties (such as retest reliability, construct and discriminant validity) as well as clinical sensibility (such as acceptability and feasibility) of the survey instruments.
- Task 4: Conduct studies to establish benchmarks and standards for knowledge, concordance, and decision quality across settings and key populations.
What are some key findings regarding decision quality?
Insight 1: Patients and providers disagree on key facts and goals
A surprising finding from the early development work was the amount of disagreement between patients and providers on the prioritization of the key facts and the goals. For example, we asked patients and providers to select their top three facts and goals across a range of conditions and found some significant differences such as:
Examples of facts and goals that patients and providers prioritized differently:

- Fact: Continuing usual activities won't make a herniated disc worse.
- Fact: Most patients have less pain when walking after knee or hip replacement.
- Goal: Importance of keeping your breast when considering breast cancer surgery.
- Goal: Importance of living as long as possible when considering chemotherapy.
Across the different conditions, at least one patient or provider selected each fact and goal in their top three; however, no fact or goal was selected in the top three by 100% of participants. These results emphasized the importance of engaging patients early in the survey development process and also highlighted the potential problem of leaving decision making up to providers, who may not accurately diagnose their patients’ goals and preferences.
Insight 2: In general, patients have major gaps in knowledge.
The purpose of the knowledge questions that are included in the decision quality instruments is to cover content that every patient should understand before undergoing a test or treatment. Some knowledge questions get at specific facts, such as how many patients might have a serious complication if they have surgery. Other items get at the gist, such as whether patients understand that surgery will provide faster relief than non-surgical options. We have found that there are significant gaps in patients’ understanding of both the specific and the gist information regarding many common tests and treatments. For example, only about half of breast cancer patients were well informed about the tests and treatments that they had.
Insight 3: In a few decisions, we found no evidence that treatments reflected patients’ goals.
Most conditions had a few goals and concerns that significantly distinguished between patients who received different treatments. For example, patients who were very concerned about having surgery were more likely to choose non-surgical approaches for knee osteoarthritis, and patients who felt strongly about keeping their breast were more likely to have breast-conserving surgery. Notable exceptions included treatment of coronary artery disease and systemic therapy for breast cancer, where none of the patients’ goals discriminated between those who received treatment and those who did not. For example, patients who felt very strongly about not being cut open and about avoiding cognitive problems were just as likely to have bypass surgery as those who did not feel as strongly. Additional studies are planned to examine whether these findings indicate a problem with the measurement approach or appropriately reflect a lack of patient input in these decisions.
Insight 4: Decision quality is associated with less regret and more confidence.
For total joint replacement, patients who met the criteria for high decision quality (i.e., were well informed and received treatments that matched their goals) had less regret and were more confident in their decisions. Further, patients who were more involved were more likely to have high decision quality. These findings provide good evidence of the validity of the decision quality measures. We are continuing studies to examine whether this holds across a broader range of topics.
How are the Decision Quality Instruments (DQIs) scored?
Each instrument has its own scoring guide that includes the correct answers to the knowledge items and the parameter estimates for the concordance model. Please contact the research team at firstname.lastname@example.org with any questions about scoring the instruments.
DQI-Knowledge Score: A knowledge score is standardized by dividing the number of correct responses by the number of items, resulting in scores from 0% to 100%. A threshold for considering patients to be “well-informed” is often set using the mean knowledge score for a group of patients who have viewed a decision aid.
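As a concrete illustration, the standardization described above can be sketched in a few lines of Python (the function name and example answer key are ours, not part of any published DQI scoring guide):

```python
def knowledge_score(responses, answer_key):
    """Percent of knowledge items answered correctly, scaled 0-100."""
    correct = sum(resp == key for resp, key in zip(responses, answer_key))
    return 100.0 * correct / len(answer_key)

# A patient answering 3 of 4 items correctly scores 75%.
patient_answers = ["a", "c", "b", "d"]
answer_key = ["a", "c", "c", "d"]
score = knowledge_score(patient_answers, answer_key)  # 75.0
```

A cohort would then be classified as "well-informed" or not by comparing each patient's score to the decision-aid group's mean, as described above.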
DQI-Concordance Score: To generate the concordance score, we first develop a multivariable logistic regression model with treatment received (e.g. surgery vs. non-surgical) as the dependent variable. The goals and concerns are the independent variables, and other factors that should influence treatment (e.g. stage of disease for breast cancer surgery) are included as needed. The regression model generates a predicted probability of treatment for each patient. Patients with a predicted probability >0.5 who had the treatment, or with a predicted probability ≤0.5 who did not have the treatment, are classified as having treatments that “match” their goals. Dividing the number who matched by the total number in the sample then yields a summary concordance score (0-100%). Higher scores indicate that more patients are receiving treatments that match their goals.
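A minimal sketch of the matching step, assuming the predicted probabilities have already been obtained from the fitted logistic regression model (function and variable names are illustrative):

```python
def concordance_score(predicted_probs, had_treatment):
    """Percent of patients whose actual treatment matches the prediction
    from their goals: predicted probability > 0.5 and treated, or
    probability <= 0.5 and not treated."""
    matches = sum(
        (prob > 0.5) == treated
        for prob, treated in zip(predicted_probs, had_treatment)
    )
    return 100.0 * matches / len(had_treatment)

# Three of these four patients received the treatment their goals predicted
# (the third had a predicted probability of 0.6 but did not have surgery).
probs = [0.8, 0.3, 0.6, 0.2]
treated = [True, False, False, False]
concordance_score(probs, treated)  # 75.0
```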
Decision Quality Composite Score: A binary variable can be created with a score of 1 for patients who were well-informed and received treatments that matched their preferences, and 0 for all others. As mentioned earlier, the threshold for patients to be considered “informed” is set (if available) at the mean for the group of patients who viewed a decision aid.
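The composite combines the two scores above; a minimal sketch, with an invented 60% threshold standing in for the decision-aid group mean:

```python
def decision_quality(knowledge, matched, threshold):
    """1 if the patient is well-informed (knowledge score at or above the
    threshold) AND received treatment matching their goals; 0 otherwise."""
    return int(knowledge >= threshold and matched)

# Hypothetical threshold of 60.0; in practice this is the mean knowledge
# score of patients who viewed a decision aid.
decision_quality(75.0, True, 60.0)   # 1 - informed and matched
decision_quality(75.0, False, 60.0)  # 0 - informed but treatment mismatched
decision_quality(40.0, True, 60.0)   # 0 - matched but not well-informed
```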