
Empirically Evaluating Decision-Analytic Models

2010

To strengthen model credibility, evaluation against independent empirical studies is recommended. The authors developed a structured reporting format for model evaluation and conducted a structured literature review to characterize current recommendations and practices for evaluating models.

As an illustration, they applied the reporting format to evaluate a microsimulation of human papillomavirus and cervical cancer. The model's outputs and uncertainty ranges were compared with multiple outcomes from a study of long-term progression from high-grade cervical precancer (cervical intraepithelial neoplasia [CIN]) to cancer. Outcomes included 5- to 30-year cumulative cancer risk among women with and without appropriate CIN treatment. Consistency was defined as the model's uncertainty range overlapping the study's confidence interval.

The structured reporting format included matching baseline characteristics and follow-up, reporting both model and study uncertainty, and stating metrics of consistency between model and study results. Structured searches yielded 2963 articles, of which 67 met inclusion criteria, and revealed variation in how current model evaluations are reported. Evaluation of the cervical cancer microsimulation, reported with the proposed format, showed a modeled 30-year cumulative risk of invasive cancer of 39.6% (30.9-49.7) for inadequately treated women, compared with 37.5% (28.4-48.3) in the study. For appropriately treated women, the modeled 30-year risk was 1.0% (0.7-1.3), compared with 1.5% (0.4-3.3) in the study.
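The consistency criterion described above (a model's uncertainty range overlapping the study's confidence interval) can be sketched as a simple interval check. This is an illustrative implementation, not code from the paper; the function name and structure are assumptions, and the intervals are the 30-year figures reported in the abstract:

```python
def ranges_overlap(model_range, study_ci):
    """Return True if two (low, high) intervals overlap."""
    m_lo, m_hi = model_range
    s_lo, s_hi = study_ci
    return m_lo <= s_hi and s_lo <= m_hi

# 30-year cumulative invasive-cancer risk (%), model vs. study:
# inadequately treated: model 39.6 (30.9-49.7), study 37.5 (28.4-48.3)
# appropriately treated: model 1.0 (0.7-1.3), study 1.5 (0.4-3.3)
print(ranges_overlap((30.9, 49.7), (28.4, 48.3)))  # True -> consistent
print(ranges_overlap((0.7, 1.3), (0.4, 3.3)))      # True -> consistent
```

Under this criterion, both modeled outcomes reported in the abstract would be judged consistent with the study, since each model range overlaps the corresponding study confidence interval.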


Source:

Goldhaber-Fiebert JD, Stout NK, Goldie SJ. Empirically Evaluating Decision-Analytic Models. Value in Health 2010; 13 (5): 667-674.  https://doi.org/10.1111/j.1524-4733.2010.00698.x