Bayesian Methods for Non-inferiority Tests and Sample Size Determinations

November 18, 2011

Under the classical statistical framework, sample size calculations for a hypothesis test of interest maintain pre-specified Type I and Type II error rates. These methods often suffer from several practical limitations. For instance, positing suitable parameter values under a given hypothesis becomes more difficult when the null hypothesis is composite (e.g., hypotheses for non-inferiority tests). Additionally, classical methods often rely heavily on asymptotic (normal) approximations when testing two composite hypotheses (e.g., testing non-inferiority, bioequivalence, etc.), which may be questionable in many common situations.

This talk presents (i) a general framework for hypothesis testing and sample size determination using Bayesian average errors that does not suffer from the same limitations as methods developed under the classical framework and provides a general approach to handling both simple and complex hypotheses; and (ii) various applications of the methodology to designs common in medical studies, including but not limited to non-inferiority tests.

Through several simulation studies it will be demonstrated that these recently developed Bayesian procedures control the Type I error rate competitively with the default frequentist test, while the newly developed criterion improves statistical power, especially in small samples. The methods are further illustrated using data from several clinical trials.
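To give a flavor of posterior-probability-based non-inferiority testing with binary endpoints, the following is a minimal conjugate Beta-Binomial sketch in Python with entirely hypothetical data (the counts, margin, and cutoff are illustrative assumptions, and this is not the semiparametric procedure or the average-error criterion developed in the articles below, which are implemented in R):

```python
import random

random.seed(42)

# Hypothetical binary-endpoint data (illustrative only)
x_new, n_new = 82, 100   # responders / patients, new treatment
x_ctl, n_ctl = 85, 100   # responders / patients, active control
margin = 0.10            # non-inferiority margin
threshold = 0.975        # posterior probability cutoff (assumed)

# Independent Beta(1, 1) priors give Beta posteriors by conjugacy;
# estimate P(p_new - p_ctl > -margin | data) by Monte Carlo
draws = 100_000
count = 0
for _ in range(draws):
    p_new = random.betavariate(1 + x_new, 1 + n_new - x_new)
    p_ctl = random.betavariate(1 + x_ctl, 1 + n_ctl - x_ctl)
    if p_new - p_ctl > -margin:
        count += 1

post_prob = count / draws
noninferior = post_prob > threshold
print(f"P(p_new - p_ctl > -{margin}) = {post_prob:.3f}; "
      f"declare non-inferiority: {noninferior}")
```

Note that no normal approximation is involved: the posterior probability is exact up to Monte Carlo error, which is the kind of small-sample advantage the talk highlights.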

The presentation will mainly be based on the following recent articles:

Reyes, E. and Ghosh, S. K. (2011). Bayesian Average Error Based Approach to Sample Size Calculations for Hypothesis Testing. To appear in Journal of Biopharmaceutical Statistics. Tech report available online: http://www.stat.ncsu.edu/information/library/papers/mimeo2629.pdf (an R package for the calculation of sample size will also be demonstrated)

Osman, M. and Ghosh, S. K. (2011). Semiparametric Bayesian Testing Procedure for Noninferiority Trials with Binary Endpoints. http://www.tandfonline.com/doi/abs/10.1080/10543406.2010.544526 (R codes to implement the procedures will also be presented)

Sensitivity Analyses that Address Missing Data issues in Longitudinal Studies for Regulatory Submission

July 20, 2011

Regulatory authorities now expect analyses of outcome measures in regulatory studies to address the issue of sensitivity to missing data. How far does the early withdrawal of certain individuals from the trial limit the conclusions? A primary analysis based on some form of missing at random (MAR) assumption is now common practice. It is important to understand the question that the trial analysis is trying to answer. We will see what question such an MAR analysis addresses and suggest that there may be alternative questions. Sensitivity analyses then take two forms: first, those that address departures from MAR in terms of possible dependence of withdrawal on future outcomes, despite the conditioning inherent in MAR; and second, those that ask an Intention-to-Treat (ITT) question rather than a Per-Protocol question. Both will be addressed through the medium of pattern-mixture models using Multiple Imputation (MI).
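One common way to implement a pattern-mixture sensitivity analysis is a delta adjustment: impute dropouts under MAR, then shift the imputed values by an offset delta and scan delta to see when the conclusions tip. A minimal Python sketch with hypothetical outcome data (the values, the simple mean-plus-noise imputation model, and the delta grid are all illustrative assumptions, not the models discussed in the talk):

```python
import random
import statistics

random.seed(1)

# Hypothetical final-visit changes from baseline; None marks dropout
outcomes = [-1.2, -0.8, -2.1, None, -1.5, None, -0.4, -1.9, None, -1.1]

observed = [y for y in outcomes if y is not None]
mu, sd = statistics.mean(observed), statistics.stdev(observed)

def mi_mean(delta, m=200):
    """Delta-adjusted MI: impute under a crude MAR model, then shift
    each imputed value by delta, and pool the completed-data means."""
    means = []
    for _ in range(m):
        completed = [y if y is not None else random.gauss(mu, sd) + delta
                     for y in outcomes]
        means.append(statistics.mean(completed))
    return statistics.mean(means)

# Tipping-point scan: how much worse must dropouts fare than similar
# completers before the estimated effect is attenuated toward zero?
for delta in (0.0, 0.5, 1.0, 1.5):
    print(f"delta = {delta:4.1f} -> pooled mean change = {mi_mean(delta):6.3f}")
```

Setting delta = 0 recovers the MAR analysis; increasing delta expresses the suspicion that withdrawals would have done systematically worse, which is one route to the departure-from-MAR sensitivity analyses described above.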

Handling Missing Data in Clinical Trials

May 18, 2011

The focus will be on dropout and withdrawal in longitudinal studies, and the approach to the topic will be a practical one. I will address conceptual issues, an area where there has been much debate. Particular consideration will be given to alternative trial aims, especially in terms of intention to treat and per protocol type estimands, and to the implication of these aims for the subsequent handling of missing data. The key role of the assumptions underlying the analysis will be emphasized. Alternative statistical analyses will then be considered in the light of the points raised, and one particularly promising route to sensitivity analysis will be introduced.

Bayesian Model-based Approaches for Single and Combination Dose Finding

April 27, 2011

Statistical contributions to phase I clinical trials are sparse. A notable exception is oncology, where statistical methods abound. We present compelling reasons for the use of Bayesian approaches within phase I cancer trials and discuss experiences with implementing these designs in industry. We highlight the three individual components of study design, (a) statistical model, (b) inference, and (c) decision making, and show that on-study decisions require both good statistics and clinical experience.

A focus is placed on both practical and methodological issues, covering a wide range of phase I studies. Critical aspects regarding the statistical model and an appropriate specification of prior information are discussed, along with the need to reflect the uncertainty of estimated rates of toxicity, allowing us to monitor patient risk (overdose control). Since phase I cancer trials are typically small and information is updated sequentially, the importance of and sensitivity to prior input needs special attention. Examples covering a range of prior specifications (non-informative, animal data, historical data from humans) are presented, and a case for the use of mixture priors is made.
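The overdose-control idea can be sketched concretely: fit a dose-toxicity model, and only allow doses whose posterior probability of excessive toxicity stays below a cap. Below is a deliberately crude Python illustration using a two-parameter logistic model with a discrete grid posterior; the data, dose grid, prior ranges, toxicity target (0.33), and overdose cap (0.25) are all hypothetical assumptions, and real implementations use proper priors and MCMC rather than this grid:

```python
import math

# Hypothetical dose-toxicity data: (dose in mg, n treated, n with DLT)
data = [(10, 3, 0), (20, 3, 0), (40, 3, 1)]
doses = [10, 20, 40, 80]
d_ref = 40.0  # reference dose for standardization

def p_tox(a, b, d):
    """Two-parameter logistic dose-toxicity model."""
    eta = a + b * math.log(d / d_ref)
    return 1.0 / (1.0 + math.exp(-eta))

# Crude flat prior over a grid of intercepts and positive slopes
grid = [(a / 10.0, b / 10.0)
        for a in range(-40, 11)   # intercept a in [-4, 1]
        for b in range(1, 31)]    # slope b in (0, 3]

# Unnormalized posterior = flat prior x binomial likelihood
post = []
for a, b in grid:
    like = 1.0
    for d, n, x in data:
        p = p_tox(a, b, d)
        like *= p ** x * (1 - p) ** (n - x)
    post.append(like)
total = sum(post)
post = [w / total for w in post]

def overdose_prob(d):
    """Posterior probability that the toxicity rate at d exceeds 0.33."""
    return sum(w for (a, b), w in zip(grid, post) if p_tox(a, b, d) > 0.33)

# Overdose control: admit only doses with P(tox rate > 0.33 | data) < 0.25
admissible = [d for d in doses if overdose_prob(d) < 0.25]
recommended = max(admissible) if admissible else None
print("admissible doses:", admissible, "-> recommend", recommended)
```

Because the posterior is updated after each cohort, the admissible set and the recommended dose evolve with the accruing data, which is exactly why the sequential sensitivity to the prior emphasized above needs care.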

Within oncology, the standard of care is moving towards combinations of drug therapies. As such it is important to understand the impact of combining two or more compounds on patient safety. In the second part of the presentation, we show the extension of the dose finding approach into this setting. The approach allows us to assess potential synergistic or antagonistic effects on safety profiles due to the combination treatment and adjust dosing recommendations accordingly. We reflect on the use of single-agent data to support starting combinations, and discuss experiences with implementation of such a design in real studies.

While the focus of the presentation is centered on oncology phase I, these designs are binary outcome designs and may be tailored to fit many other therapeutic area needs.

Key words: Bayesian approach, oncology, historical data, combination, dose-escalation, prior

Selected References:

  • Neuenschwander, Branson, Gsponer (2008)
    Critical aspects of the Bayesian approach to Phase I cancer trials. Statistics in Medicine, 27:2420-2439
  • Thall, Millikan, Mueller, Lee (2003)
    Dose-finding with two agents in phase I oncology trials. Biometrics 59:487-496
  • Babb, Rogatko, Zacks (1998).
    Cancer Phase I clinical trials: efficient dose escalation with overdose control. Statistics in Medicine, 17:1103-1120
  • Bailey, Neuenschwander, Laird, Branson (2009).
    A Bayesian case study in oncology phase I combination dose-finding using logistic regression with covariates. Journal of Biopharmaceutical Statistics, 19:369-484

Introduction to Bayesian Statistics

March 24, 2011

In this webinar, we will give a brief introduction to Bayesian methods.

Topics include the Bayesian paradigm, Bayes' theorem, prior, posterior, and predictive distributions, advantages of Bayesian methods, elicitation of priors, and Bayesian computation in SAS. Bayesian analysis of generalized linear models and survival models, together with their implementation in SAS, will also be discussed.
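As a small taste of the prior-to-posterior mechanics covered in the webinar, here is the conjugate Beta-Binomial update in a few lines of Python (the data values are hypothetical, and the webinar itself works in SAS):

```python
# Hypothetical data: x successes in n Bernoulli trials
x, n = 7, 20
a0, b0 = 1.0, 1.0  # Beta(1, 1), i.e. uniform, prior on the success rate

# Conjugacy: Beta prior + binomial likelihood -> Beta posterior
a1, b1 = a0 + x, b0 + n - x
post_mean = a1 / (a1 + b1)  # shrinks the MLE x/n toward the prior mean 1/2

# Under this model the posterior predictive probability of success
# on the next trial equals the posterior mean
pred_next = a1 / (a1 + b1)

print(f"posterior: Beta({a1:.0f}, {b1:.0f}), mean = {post_mean:.3f}; "
      f"P(next success) = {pred_next:.3f}")
```

With x = 7 and n = 20 the posterior is Beta(8, 14), so the posterior mean 8/22 sits between the sample proportion 7/20 and the prior mean 1/2, illustrating how the prior and the data combine.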

Group Sequential Design Basics with Application using the gsDesign R Package and its GUI

February 23, 2011

Group sequential design is a well-accepted method of adaptive design. This course covers several essential technical aspects of the asymptotic theory and application of group sequential design. The gsDesign R package and its GUI, gsDesignExplorer, will be used to demonstrate these principles. Topics covered include control of Type I and Type II error and boundary-family and spending-function approaches to design. Aspects of interim analysis to be discussed include confidence intervals, B-values, and conditional and predictive power. Applications to trials with binomial and time-to-event outcomes will be used for illustration.
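For intuition on the spending-function approach, the Lan-DeMets O'Brien-Fleming-type spending function can be evaluated in a few lines. The sketch below is stdlib Python rather than gsDesign (which is what the course actually uses), and the one-sided alpha and information fractions are illustrative assumptions:

```python
import math
from statistics import NormalDist

nd = NormalDist()
alpha = 0.025                     # one-sided overall Type I error (assumed)
times = [0.25, 0.50, 0.75, 1.00]  # information fractions at the analyses

def sf_ldof(t, alpha):
    """Lan-DeMets O'Brien-Fleming-type spending function: cumulative
    alpha spent by information fraction t."""
    return 2.0 * (1.0 - nd.cdf(nd.inv_cdf(1.0 - alpha / 2.0) / math.sqrt(t)))

spent = [sf_ldof(t, alpha) for t in times]
for t, s in zip(times, spent):
    print(f"t = {t:.2f}: cumulative alpha spent = {s:.6f}")
# By construction, the full alpha is spent at t = 1
```

The function spends almost no alpha at early interim looks and the full alpha by the final analysis, which is what makes O'Brien-Fleming-type boundaries so conservative early on.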
