A predictive approach to process optimization with applications to pharmaceutical development and manufacturing

June 23, 2010

From a univariate perspective, quality improvement can be thought of as the continuous reduction of variation about a target. However, with regard to multiple quality responses we may need a somewhat broader perspective. Such a perspective can be achieved by considering predictions in the form of conformance reliability of multiple quality responses with respect to their quality specifications. This perspective is easy for non-statisticians to understand and it provides an absolute figure of merit for decision making with regard to process optimization or validation. Historically, classical desirability function and quadratic loss function approaches have been more difficult to interpret, have only provided a relative figure of merit, and most importantly, have not done a good job of taking into account uncertainties involving unknown model parameters and multivariate predictive distributions for quality responses. This webinar will review multiple-response process optimization with particular attention to recent Bayesian predictive approaches. In addition, some possible frequentist predictive approaches will be briefly discussed as well. Applications of these approaches to pharmaceutical quality improvement issues, such as the ICH Q8 design space problem, will be addressed.
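As a rough illustration of the predictive idea, the sketch below estimates a conformance reliability, the posterior predictive probability that two quality responses jointly meet their specifications, by Monte Carlo. The posterior draws, covariance matrix, and specification limits are all invented for illustration and stand in for the output of an actual Bayesian model fit.

```r
## Minimal sketch of Bayesian "conformance reliability": the posterior
## predictive probability that all quality responses meet specification.
## All numbers below are illustrative assumptions, not real process data.
library(MASS)  # mvrnorm for multivariate normal draws
set.seed(42)

n_draws <- 5000
## Pretend posterior draws of the mean vector of two quality responses
post_mu <- cbind(rnorm(n_draws, 50, 0.5), rnorm(n_draws, 3.0, 0.05))
## Response covariance, held fixed here for simplicity
Sigma <- matrix(c(1.0, 0.1, 0.1, 0.04), nrow = 2)

## Posterior predictive draws for a future batch
y_pred <- t(vapply(seq_len(n_draws),
                   function(i) mvrnorm(1, mu = post_mu[i, ], Sigma = Sigma),
                   numeric(2)))

## Specification limits (illustrative) and estimated conformance reliability
in_spec <- y_pred[, 1] >= 48 & y_pred[, 1] <= 52 &
           y_pred[, 2] >= 2.8 & y_pred[, 2] <= 3.2
mean(in_spec)  # P(all responses within spec | data), by Monte Carlo
```

Unlike a desirability score, this quantity is an absolute probability on a 0-1 scale, which is what makes it directly interpretable for optimization or validation decisions.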

Statistical analysis of short-term studies in regulatory toxicology using R

May 20, 2010

Surprisingly, the statistical analysis of toxicological studies is described in rather noncommittal terms; the OECD 408 guideline, for example, states only: "When applicable, numerical results should be evaluated by an appropriate and generally acceptable statistical method". Fortunately, the U.S. National Toxicology Program (NTP) provides a more detailed description: "Two approaches are employed to assess the significance of pairwise comparisons between exposed and control groups in the analysis of continuous variables. Organ and body weight data, which historically have approximately normal distributions, are analyzed with the parametric multiple comparison procedures of Dunnett (1955) and Williams (1971, 1972). Hematology, clinical chemistry, urinalysis... data, which have typically skewed distributions, are analyzed using the nonparametric multiple comparison methods of Shirley (1977) ... and Dunn (1964). Jonckheere's test (1954) is used to assess the significance of the dose-related trends and to determine whether a trend-sensitive test (Williams or Shirley test) is more appropriate for pairwise comparisons than a test that does not assume a monotonic dose-related trend (Dunnett's or Dunn's test)..."

That is, based on the characterization of historical data, body and organ weights are assumed to be normally distributed, and hence the Dunnett/Williams approach, which controls the familywise error rate (FWER), is recommended. All other continuous endpoints should be analyzed nonparametrically by the Dunn/Shirley procedure (again controlling the FWER).
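To make the gatekeeping step of this decision rule concrete, here is a small sketch of the trend test using jonckheere.test from the clinfun package; the body-weight data and dose levels are simulated purely for illustration.

```r
## Sketch of the NTP gatekeeping step: Jonckheere's trend test decides
## whether a trend-sensitive procedure (Williams/Shirley) or an
## unordered one (Dunnett/Dunn) is used for pairwise comparisons.
library(clinfun)
set.seed(2)
resp <- c(rnorm(10, 100, 5), rnorm(10, 98, 5),
          rnorm(10, 95, 5),  rnorm(10, 91, 5))  # simulated body weights
dose <- rep(c(0, 10, 50, 250), each = 10)       # illustrative dose levels
jonckheere.test(resp, dose, alternative = "decreasing")
## A small p-value would point towards the trend-sensitive tests.
```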

In the webinar, the theory behind the Dunnett/Williams and Dunn/Shirley procedures will be explained using the general concept of multiple contrast tests. Real data examples will be evaluated by means of the R packages multcomp, mratios and MCPAN, along the lines sketched below. Variance heterogeneity is not mentioned in the NTP methods, but it may seriously bias the analysis. Therefore, recent simultaneous confidence interval approaches that allow for variance heterogeneity, particularly in inverse unbalanced designs, will be demonstrated.
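For instance, a minimal multcomp sketch of the parametric side might look as follows; the body-weight data are simulated and the dose labels are placeholders.

```r
## Dunnett- and Williams-type multiple contrast tests with multcomp,
## on simulated body-weight data (all values illustrative).
library(multcomp)
set.seed(1)
d <- data.frame(
  dose   = factor(rep(c("0", "low", "mid", "high"), each = 10),
                  levels = c("0", "low", "mid", "high")),
  weight = c(rnorm(10, 100, 5), rnorm(10, 98, 5),
             rnorm(10, 95, 5),  rnorm(10, 90, 5))
)
fit <- aov(weight ~ dose, data = d)

## Many-to-one comparisons against control, with FWER-controlling
## simultaneous confidence intervals
dunnett <- glht(fit, linfct = mcp(dose = "Dunnett"))
summary(dunnett)
confint(dunnett)

## Williams-type contrasts, powerful under a monotone dose-response
williams <- glht(fit, linfct = mcp(dose = "Williams"))
summary(williams)
```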

Although the NTP recommends the arcsine transformation to normality for the analysis of proportions, more direct Dunnett/Williams-type procedures for proportions, allowing for extra-binomial variability between the animals, will be demonstrated.
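One way such a direct analysis might look is sketched below for the simple binomial case (without the extra-variability adjustment), assuming the binomRDci interface of the MCPAN package; the counts are invented.

```r
## Dunnett-type simultaneous confidence intervals for differences of
## proportions versus control with MCPAN (counts illustrative only).
library(MCPAN)
x <- c(2, 4, 6, 9)       # affected animals per group (control first)
n <- c(50, 50, 50, 50)   # animals per group
binomRDci(x = x, n = n, names = c("0", "low", "mid", "high"),
          type = "Dunnett")
```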

Finally, the proof-of-hazard approach favored above will be compared with a proof-of-safety approach, for both two-sample and k-sample designs and for both one-sided (non-inferiority) and two-sided (equivalence) hypotheses.
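For the two-sample case, a base-R sketch of the proof-of-safety idea via two one-sided tests (TOST) is shown below; the data and the equivalence margin are assumptions made for illustration.

```r
## Two-sample proof of safety via two one-sided t-tests (TOST).
## Data and equivalence margin delta are purely illustrative.
set.seed(3)
control <- rnorm(12, 100, 5)
treated <- rnorm(12,  99, 5)
delta <- 5  # assumed equivalence margin on the difference of means

lower <- t.test(treated, control, mu = -delta, alternative = "greater")
upper <- t.test(treated, control, mu =  delta, alternative = "less")

## Declare equivalence at level alpha if both one-sided p-values < alpha
c(lower = lower$p.value, upper = upper$p.value)
```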

An introduction to the nonlinear mixed effects model and pharmacokinetic/pharmacodynamic analysis

April 29, 2010

In applications in the biopharmaceutical and biological sciences, the time course of a continuous response for an individual may be well-described by a function that is nonlinear in one or more parameters, where an "individual" may be a human or animal subject, a laboratory sample, or some other observational unit. These functions often arise from theoretical, mechanistic considerations; for example, compartmental models for pharmacokinetic behavior lead to systems of differential equations whose solution yields nonlinear functions describing the relationship between concentration and time for a single individual. Similarly, mechanistic models for disease progression, such as those characterizing within-host HIV dynamics, give rise to such nonlinear representations. In these settings, the model parameters may directly reflect individual-specific phenomena of interest, such as drug clearance or viral replication and death rates, and one key objective is to understand how such meaningful parameters vary across the population of individuals and the extent to which this variation is systematically associated with individual characteristics. For instance, knowledge of the form of the association between drug clearance and covariates such as renal function, weight, and age assists pharmacokineticists in developing appropriate dosing regimens.

The nonlinear mixed effects model is a natural statistical framework in which to address such objectives. This webinar will introduce the conceptual foundations and formulation of the nonlinear mixed effects model, review popular approaches to inference under the model and widely used software implementations, and describe its use in pharmacokinetic/pharmacodynamic analysis.
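As a concrete illustration, the classic theophylline example from Pinheiro and Bates can be fitted in a few lines with the nlme package, using the self-starting first-order one-compartment model SSfol; the diagonal random-effects structure below is one common modelling choice, not the only one.

```r
## One-compartment PK model with first-order absorption, fitted as a
## nonlinear mixed effects model to the theophylline data in nlme.
library(nlme)
fm <- nlme(conc ~ SSfol(Dose, Time, lKe, lKa, lCl),
           data   = Theoph,
           fixed  = lKe + lKa + lCl ~ 1,          # population parameters
           random = pdDiag(lKe + lKa + lCl ~ 1))  # subject-level variation
summary(fm)  # log elimination rate, log absorption rate, log clearance
ranef(fm)    # subject-specific deviations, e.g. in log clearance
```

Covariate relationships, such as the clearance-renal function association mentioned above, would enter by modelling the fixed effects as functions of subject-level covariates.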

Data monitoring in practice: Making your data monitoring committee effective

March 10, 2010

Many randomized clinical trials include a data monitoring committee (DMC) that is responsible for reviewing accruing data, monitoring performance of the trial, assuring safety of the participants in the trial, and assessing the efficacy of treatment. The DMC often makes recommendations about continuation of the trial or alteration of the protocol. This webinar will provide a practical overview of the role and functioning of DMCs. Specifically, we will discuss the committee's responsibilities: when a DMC is needed, what its charter should include, and how its meetings should be run. We will also focus on some controversies surrounding DMCs: for example, how to define 'independence', who should present data to the DMC, whether the DMC should be masked or unmasked, and how the DMC should communicate its recommendations. Finally, we will address some practical issues faced by industry in setting up and managing a DMC, such as how to schedule meetings, who should program the tables, what the DMC's report should include, and how to ensure that data are sufficiently current to allow the DMC to make timely, relevant recommendations.

Non-parametric dose response models in Bayesian adaptive designs

February 18, 2010

There has been considerable recent interest in pharmaceutical R&D in the use of flexible, non-monotonic dose-response models. Two examples of this are the ASTIN study (Krams et al, Stroke, 2003; 34: 2543-2548) and a proof-of-concept study in neuropathic pain (Smith et al, Pharmaceutical Statistics, 2006; 5: 39-50), both of which used a flexible dose-response function based on the Normal Dynamic Linear Model (NDLM), a model whose origins are in time series. This model is potentially very important since the White Paper of the PhRMA PISC working group on adaptive dose-ranging designs (Bornkamp et al, Journal of Biopharmaceutical Statistics, 2007; 17: 965-995) indicated that a Bayesian adaptive design utilising the model outperformed alternative approaches. There are, however, other possibilities, including splines and kernel regression. Up to now, the use of this type of model has required tailor-made programs or suites of programs. In this talk I will show how the different approaches are related to one another, where there are particular advantages in using one form over another, and how recent research opens up different ways of fitting these models.
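As a toy illustration of the NDLM idea (not the ASTIN implementation), the sketch below smooths a set of mean responses across ordered doses with a second-order polynomial DLM from the dlm package; the responses and variances are invented, and a full Bayesian adaptive design would additionally place priors on the variances and update the fit as data accrue.

```r
## Toy NDLM smoothing of a dose-response curve with the dlm package.
## Doses play the role of the "time" index; all numbers are illustrative.
library(dlm)
y <- c(0.1, 0.8, 1.6, 1.5, 0.9)  # mean responses at five ordered doses

## Local linear trend model: level and slope evolving across doses
mod <- dlmModPoly(order = 2, dV = 0.10, dW = c(0, 0.05))
sm  <- dlmSmooth(y, mod)

## Smoothed, possibly non-monotonic, dose-response estimate
dropFirst(sm$s[, 1])
```

Because the NDLM borrows strength only between neighbouring doses, it can track plateaus and downturns that monotone models such as Emax cannot, which is precisely its appeal in adaptive dose-ranging.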