Scott Patterson (Wyeth), Byron Jones (Pfizer)
December 10, 2008

This introductory course will focus on the design and analysis of bioequivalence studies for orally administered drug products. It provides a detailed overview of the most well-established method of demonstrating bioequivalence. The following topics will be covered:

- Drug development, clinical pharmacology, and statistics.
- History and international bioequivalence regulations.
- 2x2 cross-over designs and average bioequivalence, with examples.

The course offers a well-balanced mix of theory and applications, including regulatory considerations. Examples from real trials are used throughout to illustrate the statistical approaches discussed in the course.
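The standard average bioequivalence analysis for a 2x2 cross-over can be sketched in a few lines: the pharmacokinetic endpoint (AUC or Cmax) is log-transformed, the formulation effect is estimated from within-subject period differences, and bioequivalence is concluded if the 90% confidence interval for the test/reference geometric mean ratio lies within 0.80-1.25. The function below is an illustrative sketch of that calculation, not code from the book; the function name and data layout (one row per subject, one column per period) are my own assumptions.

```python
import numpy as np
from scipy import stats

def average_be_2x2(log_tr, log_rt):
    """Average bioequivalence for a 2x2 (TR/RT) cross-over.

    log_tr : (n1, 2) array of log-scale responses, rows = subjects in
             sequence TR (test in period 1), columns = periods 1 and 2.
    log_rt : (n2, 2) array for sequence RT (reference in period 1).
    Returns (ratio, ci_lo, ci_hi, be_concluded) on the original scale.
    """
    # Per-subject half-differences remove the subject effect; the period
    # effect cancels when the two sequence means are subtracted.
    d1 = (log_tr[:, 0] - log_tr[:, 1]) / 2.0
    d2 = (log_rt[:, 0] - log_rt[:, 1]) / 2.0
    n1, n2 = len(d1), len(d2)
    est = d1.mean() - d2.mean()            # estimate of mu_T - mu_R
    sp2 = ((n1 - 1) * d1.var(ddof=1) + (n2 - 1) * d2.var(ddof=1)) / (n1 + n2 - 2)
    se = np.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))
    # 90% two-sided CI, equivalent to two one-sided tests at the 5% level.
    tcrit = stats.t.ppf(0.95, n1 + n2 - 2)
    lo, hi = np.exp(est - tcrit * se), np.exp(est + tcrit * se)
    return np.exp(est), lo, hi, bool(0.80 < lo and hi < 1.25)
```

The 0.80-1.25 acceptance range is the usual regulatory limit for the geometric mean ratio of log-transformed AUC and Cmax.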

### Text book

Patterson S, Jones B. (2006). Bioequivalence and Statistics in Clinical Pharmacology (Chapters 1-2). Chapman and Hall, CRC Press, London.

### Software implementation

Software implementation of the described statistical methods may be found on the book's web site.

Devan V. Mehrotra (Merck & Co. Inc.)
October 21, 2008

The Mantel-Haenszel test and the van Elteren test, both implemented in SAS PROC FREQ, are widely used for stratified analyses of binary and ranked data, respectively. Both methods have good power properties, but only under certain restrictive assumptions; when the assumptions are violated, there can be a notable loss in power. In this tutorial, we will describe some alternatives to these popular methods, including the "minimum risk" weighting strategy for stratified binary data and an "adaptive" testing strategy for stratified rank-based analyses. We will use simulations to provide guidance on which methods might be more appropriate under given conditions. Numerical examples will be used throughout for illustration, and to reinforce the key points.
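For reference, the classical Cochran-Mantel-Haenszel statistic for stratified 2x2 tables (the baseline method the alternatives are compared against, not the "minimum risk" strategy itself) can be computed directly from the stratum margins. This is an illustrative sketch; the function name and input layout are my own choices, and the statistic shown omits the continuity correction.

```python
import numpy as np
from scipy import stats

def cmh_test(tables):
    """Cochran-Mantel-Haenszel chi-square test (1 df, no continuity
    correction) for a list of 2x2 tables [[a, b], [c, d]], one per stratum."""
    a = np.array([t[0][0] for t in tables], float)   # top-left cell counts
    n1 = np.array([t[0][0] + t[0][1] for t in tables], float)  # row 1 totals
    n2 = np.array([t[1][0] + t[1][1] for t in tables], float)  # row 2 totals
    m1 = np.array([t[0][0] + t[1][0] for t in tables], float)  # col 1 totals
    N = n1 + n2
    m2 = N - m1
    expected = n1 * m1 / N                       # E(a) under the null
    var = n1 * n2 * m1 * m2 / (N ** 2 * (N - 1)) # hypergeometric variance
    chi2 = (a - expected).sum() ** 2 / var.sum()
    return chi2, stats.chi2.sf(chi2, df=1)
```

Because the statistic pools `a - E(a)` across strata before squaring, its power depends on the stratum weights being roughly right, which is exactly the assumption the tutorial's alternatives relax.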

May 22, 2008

Assessment of cardiac liability of new compounds, particularly with respect to life-threatening ventricular tachydysrhythmias, e.g., Torsades de Pointes (TdP), is becoming an increasingly important component of clinical drug development. Lengthening of the QTc interval is commonly used as a surrogate biomarker for an increased risk of TdP. The International Conference on Harmonization (ICH) published a guidance document (ICH E14) to describe strategies for the evaluation of cardiac safety of drugs in clinical development. This document introduced a new approach to the assessment of the proarrhythmic potential of new drugs (the thorough QT/QTc study). Thorough QT/QTc studies are now required for virtually all non-cardiac drugs with systemic bioavailability, and design and analysis considerations in thorough QT/QTc studies have received much attention in the clinical trials literature.

This webinar will focus on statistical issues arising in the design and analysis of thorough QT/QTc studies, including

- Key design issues (single-dose and steady-state designs, time points for ECG acquisition, number of replicate ECG recordings, etc.).
- Common approaches to the analysis of QTc interval data in thorough QT/QTc studies (e.g., QT correction methods, multiplicity issues, QTc-exposure analysis).
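The QT correction methods mentioned above adjust the QT interval for heart rate, since QT shortens as heart rate increases. The two fixed-exponent corrections most often reported in thorough QT/QTc studies are Bazett's and Fridericia's; a minimal sketch (function names are my own):

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett correction: QTc = QT / RR^(1/2), with QT in ms and the
    RR interval in seconds. Known to over-correct at fast heart rates."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia correction: QTc = QT / RR^(1/3), generally preferred
    over Bazett when heart rate changes on treatment."""
    return qt_ms / rr_s ** (1.0 / 3.0)
```

At a heart rate of 60 bpm (RR = 1 s) both corrections leave QT unchanged; they diverge as heart rate moves away from 60 bpm, which is why the choice of correction is itself a design-stage decision.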

Geert Molenberghs (Center for Statistics, Universiteit Hasselt, Diepenbeek, Belgium)
April 24, 2008

The following topics will be discussed:

- Surrogacy in psychiatry
- Longitudinal endpoints
- A suite of measures
- Information-theoretic unification
- Surrogate threshold effect
- Substantive conclusions and outlook
- Methodological conclusions and outlook

Geert Molenberghs (Center for Statistics, Universiteit Hasselt, Diepenbeek, Belgium)
March 27, 2008

The following topics will be discussed:

- The framework for continuous outcomes
- Issues in parameter estimation
- Prediction
- Binary endpoints
- Survival endpoints
- An ordinal surrogate for a survival true endpoint
- A longitudinal surrogate for a survival true endpoint

Geert Molenberghs (Center for Statistics, Universiteit Hasselt, Diepenbeek, Belgium)
February 21, 2008

Both humanitarian and commercial considerations have spurred an intensive search for methods to reduce the time and cost required to develop new therapies. The identification and use of surrogate endpoints, i.e., measures that can replace or supplement other endpoints in evaluations of experimental treatments or other interventions, is a general strategy that has stimulated much enthusiasm. Surrogate endpoints are useful when they can be measured earlier, more conveniently, or more frequently than the "true" endpoints of primary interest (Ellenberg and Hamilton, Statistics in Medicine, 1989). Regulatory agencies around the globe, particularly in the United States, Europe, and Japan, are introducing provisions and policies relating to the use of surrogate endpoints in registration studies. But how can one establish the adequacy of a surrogate, in the sense that treatment effectiveness on the surrogate will accurately predict the treatment effect on the intended, and more important, true outcome? What kind of evidence is needed, and what statistical methods portray that evidence most appropriately?

The validation of surrogate endpoints has been studied by Prentice (Statistics in Medicine, 1989), who presented a definition of validity as well as a formal set of criteria that are equivalent if both the surrogate and true endpoints are binary. Freedman, Graubard and Schatzkin (Statistics in Medicine, 1992) supplemented these criteria with the proportion explained which, conceptually, is the fraction of the treatment effect mediated by the surrogate. Noting operational difficulties with the proportion explained, Buyse and Molenberghs (Biometrics, 1998) proposed instead to use jointly the within-treatment partial association of true and surrogate responses, and the treatment effect on the surrogate relative to that on the true outcome. In a multi-center setting, these quantities can be generalized to individual-level and trial-level measures of surrogacy.
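Freedman's proportion explained can be illustrated with two regressions: the treatment effect on the true endpoint is estimated with and without adjustment for the surrogate, and the proportion explained is the relative reduction in that effect. The sketch below assumes continuous endpoints and ordinary least squares; the function name and data layout are my own, and this is a didactic simplification of the method discussed in the references above.

```python
import numpy as np

def proportion_explained(z, s, t):
    """Proportion explained (Freedman, Graubard and Schatzkin, 1992):
    PE = 1 - beta_S / beta, where beta is the treatment effect on the
    true endpoint t from the model t ~ z, and beta_S is the same
    coefficient after adjusting for the surrogate s (model t ~ z + s).

    z : treatment indicator, s : surrogate endpoint, t : true endpoint.
    """
    ones = np.ones_like(z, dtype=float)
    # Unadjusted treatment effect: t ~ 1 + z
    beta = np.linalg.lstsq(np.column_stack([ones, z]), t, rcond=None)[0][1]
    # Adjusted treatment effect: t ~ 1 + z + s (coefficient on z)
    beta_s = np.linalg.lstsq(np.column_stack([ones, z, s]), t, rcond=None)[0][1]
    return 1.0 - beta_s / beta
```

When the surrogate fully mediates the treatment effect, the adjusted coefficient vanishes and PE equals 1; the operational difficulty noted by Buyse and Molenberghs is that the estimate is a ratio and need not lie in [0, 1] in finite samples.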

Buyse et al. (Biostatistics, 2000) therefore proposed a meta-analytic framework to study surrogacy at both the trial and individual-patient levels. A number of variations on the theme have been developed, depending on the type of endpoint for the true and surrogate endpoint, respectively, and depending on the focus of the evaluation exercise. At the same time, efforts have been made to converge to a common framework, encompassing the wide variety of settings one can encounter. This includes a so-called variance reduction factor and an information-theoretic approach. Further, work has been done to convert the evaluation methodology to sample size assessment methodology, leading to the surrogate threshold effect. These recent developments will be introduced briefly.

This course will present an overview of these developments, with illustrations predominantly from the fields of ophthalmology, oncology, and mental health.

The following topics will be discussed:

- The concept of surrogacy
- Basic taxonomy
- Key examples
- Prentice's definition
- Proportion of treatment effect explained
- Adjusted association and relative effect
- The need for trial-level replication