TORONTO, Jan. 16 -- The analytic method used to correct for selection bias in observational studies can mean the difference between a 50% treatment effect and a 16% treatment effect, researchers here reported.
Moreover, in a study that assessed the benefit of invasive versus medical therapy for acute myocardial infarction, the 16% came closer to mirroring the 8% to 21% effect reported in randomized controlled trials, said Thérèse A. Stukel, Ph.D., of the Institute for Clinical Evaluative Sciences in Toronto.
When compared with standard modeling such as multivariable risk adjustment and propensity scoring, a technique called instrumental variable analysis did the best job of eliminating both observed and hidden biases, Dr. Stukel and colleagues reported in the Journal of the American Medical Association.
But while instrumental variable analyses are effective tools for answering policy questions, such as how resources can best be deployed, they are not helpful for clinicians looking for information to guide specific clinical decisions, she said.
Dr. Stukel and colleagues studied 122,124 Medicare patients ages 65 to 84 who were hospitalized with an acute MI in 1994 and 1995. A total of 73,238 patients--usually younger individuals with less severe disease--were referred for cardiac catheterization. All patients were followed for seven years to assess the association between long-term survival and cardiac catheterization within 30 days of hospital admission.
She noted that randomized controlled trials that compared percutaneous coronary interventions to medical management found an 8% to 21% improved relative survival favoring percutaneous coronary interventions.
She and her colleagues compared multivariable model risk adjustment, propensity score risk adjustment, propensity-based matching, and instrumental variable analysis.
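Propensity-based adjustment, one of the standard approaches the authors compared, models each patient's probability of receiving treatment from observed characteristics and then reweights or matches on that probability. The following is a minimal sketch of the idea using a hypothetical simulated dataset, not the study's data: a single observed "severity" confounder makes treated patients healthier, so a naive comparison overstates the benefit, while inverse-probability weighting on a fitted propensity score recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical simulation (not the study's data): sicker patients are
# less likely to be referred for the invasive treatment.
severity = rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(-0.5 - 1.5 * severity)))  # referral depends on severity
treat = rng.binomial(1, p_treat)
# True treatment effect on a continuous outcome is +1.0; severity hurts it.
outcome = 1.0 * treat - 2.0 * severity + rng.normal(size=n)

# Naive comparison is biased upward: treated patients are healthier.
naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()

# Fit a logistic propensity model P(treat | severity) by Newton's method.
X = np.column_stack([np.ones(n), severity])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (treat - p)                       # score
    hess = -(X * (p * (1 - p))[:, None]).T @ X     # observed information (negated)
    beta -= np.linalg.solve(hess, grad)

ps = 1 / (1 + np.exp(-X @ beta))
# Inverse-probability weighting: reweight each arm to the full population.
w = treat / ps + (1 - treat) / (1 - ps)
ipw = (np.sum(w * treat * outcome) / np.sum(w * treat)
       - np.sum(w * (1 - treat) * outcome) / np.sum(w * (1 - treat)))

print(f"naive: {naive:.2f}, propensity-weighted: {ipw:.2f} (true effect: 1.00)")
```

The limitation the article turns to next is visible in the setup: the correction works only because "severity" was measured. A confounder that is never recorded leaves the weighted estimate just as biased as the naive one.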
Among the findings, standard risk-adjustment models estimated a roughly 50% relative survival benefit with cardiac catheterization, while instrumental variable analysis estimated 16%, close to the range reported in randomized trials.
Instrumental variable analysis requires the selection of an instrumental variable that is "related to the decision to have the treatment, but not related to the outcome of the study," said Ralph B. D'Agostino, Jr., Ph.D. and Ralph B. D'Agostino, Sr., Ph.D., in an accompanying editorial.
The instrument "is used to reduce or remove the bias due to unobserved baseline characteristics."
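The editorial's two requirements, an instrument that shifts treatment but has no direct path to the outcome, are exactly what two-stage least squares exploits. Here is a minimal sketch on hypothetical simulated data (not the study's): an unmeasured confounder stands in for unrecorded frailty, and the instrument plays the role of the regional catheterization rate. Ordinary regression is badly biased by the hidden confounder; the two-stage estimate is not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical simulation: u is an unmeasured confounder (e.g., frailty),
# z is the instrument (e.g., regional procedure rate) -- it shifts
# treatment but has no direct effect on the outcome.
u = rng.normal(size=n)
z = rng.normal(size=n)
treat = (z - u + rng.normal(size=n) > 0).astype(float)
outcome = 1.0 * treat + 2.0 * u + rng.normal(size=n)  # true effect: 1.0

# Ordinary regression of outcome on treatment is biased by u.
X = np.column_stack([np.ones(n), treat])
ols = np.linalg.lstsq(X, outcome, rcond=None)[0][1]

# Two-stage least squares: stage 1 predicts treatment from the
# instrument alone; stage 2 regresses the outcome on that prediction.
Z = np.column_stack([np.ones(n), z])
treat_hat = Z @ np.linalg.lstsq(Z, treat, rcond=None)[0]
X2 = np.column_stack([np.ones(n), treat_hat])
tsls = np.linalg.lstsq(X2, outcome, rcond=None)[0][1]

print(f"OLS: {ols:.2f}, 2SLS: {tsls:.2f} (true effect: 1.00)")
```

The D'Agostinos' objection maps onto the simulation directly: if z also fed into `outcome`, the second-stage estimate would no longer be consistent, which is why the "no relation to the outcome" condition matters.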
In the analysis by Dr. Stukel and colleagues, the instrument chosen was the regional catheterization rate, which the D'Agostinos said was a useful variable because it correlated with the decision to treat. It also relates to a successful outcome, however, because it is a system variable indicative of, as the authors stated, "more high-volume hospitals with specialized staff and equipment and coronary care units."
For that reason, the "status of this variable as an instrumental variable is not completely obvious," the editorialists wrote.
Moreover, Drs. D'Agostino faulted the authors for not including the regional variable in their propensity analysis and failing to explain this omission, since inclusion might have impacted the propensity score.
Nonetheless, the editorial writers concluded that Dr. Stukel and colleagues provided an important reminder: because "final inferences appear different depending on the method chosen, investigators must be cautious when conducting observational data analyses and must ensure that they have available what they consider to be the most important patient characteristics measured before treatment assignment."
And finally, "the analytic method for comparing treatments must be shown to properly balance these characteristics."