
W. J. Youden Award in Interlaboratory Testing

2015 W. J. Youden Award Winners

Alexander Franks, Harvard University
Gábor Csárdi, Harvard University
D. Allan Drummond, The University of Chicago
Edoardo M. Airoldi, Harvard University

For their paper "Estimating a Structured Covariance Matrix from Multilab Measurements in High-Throughput Biology." The authors propose a multilevel generalized linear model, with full uncertainty quantification, for a meta-analysis of 27 high-throughput, multi-platform data sets, yielding a credible estimate of the degree of coordination between transcription and translation in yeast. They demonstrate that the strength of the relationship between mRNA levels and steady-state protein levels was underestimated in previous work because the analyses relied on overly simple statistical models that did not account for collinearity, systematically missing data, and high-noise conditions. The paper also develops new theoretical results characterizing the impact of such misspecifications on the covariance or correlation between high-dimensional responses in high-throughput biology. It is an outstanding piece of research that elucidates the importance of careful statistical analysis in accurately assessing findings from multi-lab experimental testing.
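The attenuation effect the citation describes can be illustrated with a toy simulation. This is not the authors' model, just a minimal sketch of the underlying idea: measurement noise shrinks a naive sample correlation toward zero, and the classical disattenuation correction (dividing by the reliability, assumed known here) recovers the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 50_000, 0.8

# Latent "true" mRNA and protein levels with correlation rho.
cov = [[1.0, rho], [rho, 1.0]]
mrna, protein = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Each quantity is observed with independent measurement noise
# (variance 0.5 here, an arbitrary illustrative choice).
noise_var = 0.5
mrna_obs = mrna + rng.normal(0.0, np.sqrt(noise_var), n)
prot_obs = protein + rng.normal(0.0, np.sqrt(noise_var), n)

# Naive correlation of the noisy observations is attenuated toward zero.
naive = np.corrcoef(mrna_obs, prot_obs)[0, 1]

# Classical disattenuation: divide by the reliability
# (true variance / observed variance) of each measurement.
reliability = 1.0 / (1.0 + noise_var)
corrected = naive / reliability

print(f"true rho       : {rho:.2f}")
print(f"naive estimate : {naive:.2f}")
print(f"corrected      : {corrected:.2f}")
```

With noise variance 0.5 on each axis, the naive estimate lands near rho times the reliability (about two thirds of the true value), while the corrected estimate is close to rho.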

The Award

The W. J. Youden Award in Interlaboratory Testing was established in 1985 to recognize the authors of publications that make outstanding contributions to the design and/or analysis of interlaboratory tests or describe ingenious approaches to the planning and evaluation of data from such tests.

Each recipient receives an engraved award and a cash prize that are presented at the awards ceremony during the Joint Statistical Meetings. The current amount of the cash prize is $1,000, which is divided evenly among the winners. The award is given every year if, in the opinion of the awards committee, an eligible and worthy publication is nominated.


Interlaboratory tests, sometimes called collaborative studies or round robins, constitute a broad field of statistical activity in commercial, regulatory, and industrial practice. They are used in many disciplines for comparing results produced in different laboratories, for determining consensus values, and for assessing and developing test methods. Such studies are often interactive, with the goal of reducing discrepancies among results. Determining consensus values may involve reconciling dissimilarities among several contributing data-generating processes, as in the analysis of multicenter clinical trials.

The W. J. Youden Award in Interlaboratory Testing is bestowed upon distinguished individual(s) whose publications have appeared or been accepted for publication in English-language, professionally refereed journals or monograph series during the given year or the two preceding years.

Nomination and Award Dates

Nominations are due by February 15 each year. The award is presented at the Joint Statistical Meetings in August.

Responsibilities of the Award Recipient

The award recipient is responsible for providing a current photograph and general personal information the year the award is presented. The American Statistical Association uses this information to publicize the award and prepare for its presentation.

How to Submit a Nomination

Nomination letters, along with a copy of the publication(s), should be sent to the ASA office or via mail to 732 N. Washington Street, Alexandria, VA 22314, attention: Award Nominations.

Past Winners

2014: Ying Huang, Yunda Huang, Shuying Sue Li, Felicity Zoe Moodie, and Steven Self, all of the Fred Hutchinson Cancer Research Center, for their paper, "Comparing and Combining Data Across Multiple Sources via Integration of Paired-Sample Data to Correct for Measurement Error." The authors present an original method for adjusting data obtained from different laboratories when true values of the test materials are not observed. The adjustment is based on materials that are split for assay by each laboratory. The authors describe the method, provide sample size calculations, and illustrate the two-laboratory case using simulated data and actual data from two HIV vaccine laboratories.

2013: Lane F. Burgette, RAND Corporation, and Jerome P. Reiter, Duke University. Changing measurement procedures during the data-collection process is not unusual, but such procedural modifications do result in less-than-ideal data sets. Burgette and Reiter propose three approaches for handling such scenarios and illustrate their use through an example. In addition, the authors demonstrate, via simulation, the conditions under which each approach is preferred. This allows the authors to recommend a decision-making process for practitioners.

2012: David Dunson of Duke University and Garritt L. Page of Pontifica Universidad Católica de Chile for their paper, "Bayesian Local Contamination Models for Multivariate Outliers." The authors developed a simple, but flexible, hierarchical model to address multi-analyte, multi-laboratory studies with outlying observations and tested the model using both simulated and real interlaboratory study data.

2011: Ryan Browne, University of Guelph, Jock MacKay, University of Waterloo, and Stefan Steiner, University of Waterloo, in recognition of their paper, "Leveraged Gauge R&R Studies," published in Technometrics, August 2010, Vol. 52, No. 3.

2010: No award.

2009: Blaza Toman, National Institute of Standards and Technology, in recognition of her paper, "Bayesian Approaches to Calculating a Reference Value in Key Comparison Experiments," published in Technometrics, Vol. 49, No. 1.

2008: Andrew L. Rukhin, University of Maryland and William E. Strawderman, Rutgers University

2007: Rafael A. Irizarry of The Johns Hopkins University, Shyam Biswal, Bryan C. Frank, Edward Gabrielson, Joe G.N. Garcia, Joel Geoghegan, Gregory Germino, Constance Griffin, Sara C. Hilmer, Eric Hoffman, Anne E. Jedlicka, Ernest Kawasaki, Hannah Lee, Francisco Martinez-Murillo, Laura Morsberger, Irene F. Kim, David Petersen, John Quackenbush, Alan Scott, Forest Spencer, Daniel Warren, Michael Wilson, Yanquin Yang, Shui Qing Ye, and Wayne Yu, in recognition of their paper, "Multiple Laboratory Comparison of Microarray Platforms," published in Nature Methods, May 2005, Vol. 2, No. 5.

2006: Dulal K. Bhaumik, Center for Health Statistics, and Robert D. Gibbons, University of Illinois at Chicago, in recognition of their paper, "Confidence Regions for Random-Effects Calibration Curves with Heteroscedastic Errors," published in Technometrics, Vol. 47, No. 2 (May 2005): pp. 223-230.

2005: Hariharan K. Iyer, Colorado State University, Chih-Ming (Jack) Wang, National Institute of Standards and Technology, and Thomas Mathew, University of Maryland-Baltimore, in recognition of their paper, "Models and Confidence Intervals for True Values in Interlaboratory Trials," published in the Journal of the American Statistical Association, December 2004, Vol. 99, No. 468, Theory and Methods.

2004: Pierre Pestiaux, CReG, TOTAL FRANCE, and Christine Pic, LSTA, Pierre and Marie Curie University (Paris VI), "The Limitations of the Cochran and Grubbs Outlier Tests in Round Robin Testing," published as SAE Paper No. 2004-01-1894, June 2004

2003: Lin, Zhichao, Food and Drug Administration, and Kenneth G.W. Inn and James J. Filliben, National Institute of Standards and Technology, "An Alternative Statistical Approach for Interlaboratory Comparison Data Evaluation," published in the Journal of Radioanalytical & Nuclear Chemistry, Vol. 248, No. 1 (2001): pp. 163-173

2002: Gibbons, Robert D., University of Illinois at Chicago, and Dulal K. Bhaumik, University of South Alabama, "Weighted Random-Effects Regression Models with Application to Interlaboratory Calibration," published in Technometrics, Vol. 43, No. 2 (May 2001): pp.192-198.

2001: Tan, Charles Y., Merck & Company, Inc., and Boris Iglewicz, Temple University, "Measurement-Methods Comparisons and Linear Statistical Relationship," Technometrics, Vol. 41, No. 3 (August 1999): pp. 192-201.

2000: Duewer, David L., Margaret C. Kline, Katherine E. Sharpless, Jeanice Brown Thomas, and Kenneth T. Gray, National Institute of Standards and Technology, and Anne Sowell, Centers for Disease Control and Prevention, "Micronutrients Measurement Quality Assurance Program: Helping Participants Use Interlaboratory Comparison Exercise Results to Improve Their Long-Term Measurement Performance," Analytical Chemistry, Vol. 71, No. 9 (May 1, 1999): pp. 1870-1878

1999: No award.

1998: Rukhin, Andrew L., University of Maryland at Baltimore, and Mark G. Vangel, National Institute of Standards and Technology, "Estimation of a Common Mean and Weighted Mean Statistics," Journal of the American Statistical Association, Vol. 93, No. 441 (March 1998): pp.303-308, and "Maximum Likelihood Analysis for Heteroscedastic One-Way Random Effects ANOVA in Interlaboratory Studies," Biometrics, (March 1999).

1997: No award.

1996: Mandel, John, National Institute of Standards and Technology, "Structure and Outliers in Interlaboratory Studies," Journal of Testing and Evaluation, Vol. 25 (1995): pp. 364-369

1996: Stangl, Dalene K., Duke University, "Prediction and Decision Making Using Bayesian Hierarchical Models," Statistics in Medicine, Vol. 14 (1995): pp. 2173-2190

1995: Lischer, Peter, Constat Consulting, "Robust Statistical Methods in Interlaboratory Analytical Studies," in Helmut Rieder, ed., Robust Statistics, Data Analysis, and Computer Intensive Methods, Lecture Notes in Statistics, Springer Verlag, New York, (1996): pp. 251-265

1994: Veith, Alan G., Technical Development Associates, "A New Approach to Evaluating Inter-Laboratory Testing Precision," Polymer Testing, Vol. 12, pp. 113-183

1993: Deutler, Tilmann, Universität Mannheim, "Grubbs-Type Estimators for Reproducibility Variances in an Interlaboratory Test Study," Journal of Quality Technology, Vol. 23 (1991): pp. 324-335

1992: Crowder, Martin, Imperial College of Science, Technology, and Medicine, "Interlaboratory Comparisons: Round Robins with Random Effects," Applied Statistics, Vol. 41 (1992): pp. 409-425

1991: Eberhardt, Keith R., Charles P. Reeve, and Clifford H. Spiegelman, National Institute of Standards and Technology, "A Minimax Approach to Combining Means, with Practical Examples," Chemometrics and Intelligent Laboratory Systems, Vol. 5 (1989): pp. 129-154

1990: Davies, Paul L., University of Birmingham, "Statistical Evaluation of Interlaboratory Tests," Fresenius Zeitschrift für Analytische Chemie, Vol. 331 (1988): pp. 513-519

1989: Elder, Robert S., U.S. Department of Agriculture, "An Evaluation of the Laboratory Ranking Test," Journal of Quality Technology, Vol. 19 (1987): pp. 197-203

1988: Mandel, John and Theodore W. Lashof, National Institute of Standards and Technology, "The Nature of Repeatability and Reproducibility," Journal of Quality Technology, Vol. 19 (1987): pp. 29-36

1987: Wernimont, Grant T., "Use of Statistics to Develop and Evaluate Analytical Methods," edited by William Spendley, Association of Official Analytical Chemists, Arlington, Virginia, xi+183 pp., 1985

1986: Margolin, Barry H., University of North Carolina, Chapel Hill, and Kenneth J. Risko, Michael D. Shelby, and Errol Zeiger, National Institute of Environmental Health Sciences, "Sources of Variability in Ames Salmonella Typhimurium Tester Strains: Analysis of the International Collaborative Study on 'Genetic Drift'," Mutation Research, Vol. 130 (1984): pp. 11-25

1985: Rocke, David M., University of California-Davis, "Robust Statistical Analysis of Interlaboratory Studies," Biometrika, Vol. 70 (1983): pp. 421-431