Survey Research Methods Section / American Association for Public Opinion Research
PRACTICAL TOOLS FOR NONRESPONSE BIAS ANALYSIS
Kristen Olson and Jill M. Montaquila
DATE AND TIME
Tuesday, April 24, 1-3 p.m. Eastern time
This webinar gives an overview of methods that may be used to address the OMB guidelines for conducting nonresponse bias studies when survey response rates fall below 80 percent or there is reason to suspect that estimates are biased by nonresponse. Practical tools are described, and examples are used to illustrate these methods. The advantages and disadvantages of these methods are presented, and the value of having multiple approaches is highlighted. The need to devise strategies for nonresponse and for its analysis in the planning stage, before the survey is completed, is emphasized.
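One simple diagnostic of the kind such studies use (a sketch with hypothetical numbers, not material from the webinar) compares respondents with the full sample on a variable known for everyone on the frame, such as age from an administrative list:

```python
def nonresponse_bias(full_sample, respondents):
    """Estimated bias of the respondent mean: mean(respondents) - mean(full sample)."""
    mean_full = sum(full_sample) / len(full_sample)
    mean_resp = sum(respondents) / len(respondents)
    return mean_resp - mean_full

# Hypothetical frame values for 10 sampled cases; 6 responded,
# and older cases responded more often, so the respondent mean is biased upward.
ages_full = [34, 51, 29, 62, 45, 38, 57, 41, 30, 66]
ages_resp = [51, 62, 45, 57, 41, 66]

print(round(nonresponse_bias(ages_full, ages_resp), 2))  # 8.37
```

A bias estimate near zero on frame variables is reassuring but not conclusive: the survey variables of interest may still be biased, which is one reason the webinar stresses using multiple approaches.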
Kristen Olson is an Assistant Professor of Survey Research and Methodology and Sociology at the University of Nebraska-Lincoln. She has been at UNL since 2007. Her areas of research include nonresponse bias and nonresponse adjustments, the relationship between nonresponse and measurement errors, and interviewer effects. Kristen's research has appeared in Public Opinion Quarterly, the Journal of the Royal Statistical Society Series A, Sociological Methods and Research, Field Methods, Social Science Research, and Survey Research Methods. She is currently serving as Conference Chair for MAPOR, and has taught short courses on nonresponse bias studies for AAPOR, DC-AAPOR, SAPOR, and JPSM. Kristen is also editor of the Research Synthesis section of Public Opinion Quarterly. She earned her B.A. in Mathematical Methods in the Social Sciences and Sociology from Northwestern University, her M.S. in Survey Methodology from the Joint Program in Survey Methodology at the University of Maryland, and her Ph.D. in Survey Methodology from the University of Michigan.
Jill Montaquila is an Associate Director of the Statistical Staff and Senior Statistician at Westat, and a Research Associate Professor in the Joint Program in Survey Methodology at the University of Maryland. She is a Fellow of the American Statistical Association. Her research interests include various methods for evaluation of nonresponse bias, random digit dialing survey methodology, and address based sampling. Jill has given short courses on approaches for nonresponse bias analysis for DC-AAPOR, SAPOR, and JPSM. She has served as President of the Washington Statistical Society and is Chair-Elect of the Survey Research Methods Section of the ASA.
RECONSIDERING MAIL SURVEY METHODS IN AN INTERNET WORLD
Washington State University
DATE AND TIME
Wednesday, April 13, 2011, 1-3pm Eastern time
Coverage and response rate concerns make telephone surveys unacceptable for some uses. However, switching to the Internet is limited by the inability to use email contact for some populations and the reluctance of certain people to respond over the web. In this perplexing environment, the development of postal address-based sampling, which now provides better coverage than either telephone or the Internet, has generated renewed interest in mail survey methods.
Mail can be used effectively as a stand-alone data-collection mode. In fact, research has shown that mail-only surveys may now produce response rates higher than can be achieved by any other survey mode. Alternatively, mail can be used to encourage response over the Internet in mixed-mode surveys. Research also has shown that using mail to encourage response in certain surveys that use email contacts, but only allow Internet responses, may produce dramatic improvements in response rates.
Achieving positive results with mail requires thinking about it differently than in the past. Dillman will discuss why mail contact methods are effective in today's survey environment and provide examples of how they can be used in situations in which email contact is not feasible (e.g., household surveys of address-based samples) or in which a prior relationship exists (e.g., clients or students). Emphasis will be on recent tests of these new implementation concepts for mail-only and mail+web surveys. In addition, research questions in need of answers will be articulated.
Dillman is a Regents Professor and the Thomas S. Foley Distinguished Professor of Government and Public Policy in the departments of sociology and community and rural sociology at Washington State University. He also serves as deputy director for research and development in the Social and Economic Sciences Research Center. Dillman is recognized internationally as a major contributor to the development of modern mail, telephone, and Internet survey methods.
INTRODUCTION TO SAMPLING FOR NON-STATISTICIANS
Safaa R. Amer
Senior Statistician, NORC
DATE AND TIME
Tuesday, February 8, 2011, 1-3pm EST
Many researchers, journalists, policy makers, and educators encounter sample surveys in their research, work, reading, or everyday experience. This course will uncover the logic behind sampling. It will explain the different types of samples and the terminology used by statisticians and survey researchers. It will outline and illustrate the steps needed before, during, and after selecting a sample. It will describe the types of errors faced when conducting a survey and whether or not they are sampling related. The goal of the course is to expose non-statisticians to sampling so that they are able to read and understand articles or documents describing sampling designs and communicate with statisticians about their research needs. The course may even motivate participants to design and select simple samples to illustrate concepts and procedures. The webinar also will be of interest to students taking introductory statistics courses, and to their instructors who want to learn more about sample surveys. Some references for easy reading will be provided. The content of the course will include the difference between a sample and a census, probability versus non-probability sampling methods, the meaning of a sampling frame or list, illustrations of sampling versus non-sampling errors, random sampling techniques, sample size considerations, and post-sampling steps.
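As a minimal illustration of the probability-sampling ideas listed above (hypothetical frame and sample sizes, not course material), a simple random sample gives every unit on the frame the same chance of selection, and therefore the same base weight:

```python
import random

random.seed(42)  # fixed seed so the draw is reproducible

frame = list(range(1, 1001))          # sampling frame of N = 1,000 unit IDs
sample = random.sample(frame, k=50)   # simple random sample of n = 50, without replacement

# With equal selection probability n/N, each sampled unit represents N/n frame units.
base_weight = len(frame) / len(sample)
print(base_weight)  # 20.0
```

Unequal-probability and stratified designs complicate the weights but rest on the same logic: each respondent stands in for a known number of units on the frame.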
Safaa Amer is a multi-lingual Senior Statistician and Project Director at NORC with wide-ranging experience in data analysis, survey sampling, missing data, and data mining. She has been involved in survey design; analyzing survey operations problems; conducting literature reviews and research to adapt surveys to international contexts; developing new sampling techniques and definitions for multi-cultural settings; developing and refining training materials; and training and building international survey capacity. She has offered consulting to researchers from different fields on complex sampling problems, providing practical information on the types of analyses, limitations of the data, and strengths/weaknesses of various sampling strategies.
In addition, Dr. Amer has held several academic positions, most recently on the faculty of the Survey Design & Data Analysis Graduate Certificate program at George Washington University. She has offered statistics and survey research lectures in Arabic, French, and English. Dr. Amer has an economic and political sciences background with a special interest in international work, human rights, and geographic information systems. She is a member of several national and international statistical associations, has refereed several papers for international journals, and has contributed to several graduate-level theses.
ADDRESS BASED SAMPLING: WHAT DO WE KNOW SO FAR?
Michael W. Link, Ph.D.
Center of Excellence at The Nielsen Company
DATE AND TIME
Tuesday, November 30, 2010, 1-3pm EST
Address Based Sampling (ABS), the use of a comprehensive address database for sampling residential listings, has been the subject of intensive research efforts in recent years. The promise of ABS is that it provides high coverage of residential homes using a nearly complete sampling frame based on the U.S. Postal Service Delivery Sequence File. Because the frame is based on addresses and not landline telephone numbers, cell-phone-only households are included in the frame in proportion to their penetration within the sampled geography. Additionally, telephone numbers and other sample frame indicators – such as geocoded information from Census block groups or commercial databases – can be appended to the frame, providing more information for sample stratification and targeted sample treatments. While ABS provides a sample frame with high coverage, it does present other issues and challenges for researchers – some methodological, others operational. This webinar will provide participants with background on the ABS frame and potential survey design considerations that accompany its use; highlight areas where research has been conducted and where it is needed; and provide an initial assessment of potential best practices when using an ABS approach. The course draws upon both the growing body of research in this area and many of the operational lessons learned from utilizing ABS survey designs.
Michael W. Link, Ph.D., is Chief Methodologist/VP for the Research Methods Center of Excellence at The Nielsen Company. He has a broad base of experience in survey research, having worked in academia (University of South Carolina, 1989-1999), not-for-profit research (RTI International, 1999-2004), and government (Centers for Disease Control and Prevention, 2004-2007) before joining Nielsen. Dr. Link’s research efforts focus on developing methodologies for confronting the most pressing issues facing measurement science, including improving participation and data quality, use of multiple modes in data collection, obtaining participation from hard-to-survey populations, and developing electronic measurement methodologies to supplement or replace self-reports. His numerous research articles have appeared in leading scientific journals, such as Public Opinion Quarterly, International Journal of Public Opinion Research, and Journal of Official Statistics.
SMALL AREA ESTIMATION
Partha Lahiri, PhD
Joint Program in Survey Methodology (JPSM) at the University of Maryland
DATE AND TIME
Tuesday, October 19, 2010, 1-3pm EST
Direct survey estimates of various socio-economic, agricultural, and health statistics for small geographic areas and small domains are generally highly imprecise due to the small sample sizes in those areas. To improve the precision of the direct survey estimates, small area estimation techniques are often employed to borrow strength from related information that can be extracted from one or more existing administrative and/or census databases. In this talk, I will first discuss the main concepts and issues in small area estimation and then illustrate the effectiveness of small area estimation techniques in different applications. The talk will be presented at a level appropriate for individuals who are new to small area estimation, but will also include discussion of research topics of interest to more experienced researchers.
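The "borrowing strength" idea in the abstract is commonly formalized with an area-level model; the Fay-Herriot setup sketched below is the standard example (an illustration, not necessarily the model presented in the talk):

```latex
% Fay-Herriot area-level model:
%   sampling model:  \hat{\theta}_i = \theta_i + e_i, \quad e_i \sim N(0, \psi_i)
%   linking model:   \theta_i = x_i^\top \beta + v_i, \quad v_i \sim N(0, \sigma_v^2)
% The small area estimate shrinks the direct estimate toward a regression
% prediction built from auxiliary (administrative or census) data x_i:
\tilde{\theta}_i = \gamma_i \, \hat{\theta}_i + (1 - \gamma_i) \, x_i^\top \hat{\beta},
\qquad \gamma_i = \frac{\sigma_v^2}{\sigma_v^2 + \psi_i}
```

Areas whose direct estimates are imprecise (large sampling variance \psi_i) get a small \gamma_i and so borrow more strength from the auxiliary data.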
Partha Lahiri is a Professor of the Joint Program in Survey Methodology (JPSM) at the University of Maryland, College Park, and an Adjunct Research Professor of the Institute of Social Research, University of Michigan, Ann Arbor. Professor Lahiri’s research on small-area estimation has been widely published in leading journals such as Biometrika, the Journal of the American Statistical Association, the Annals of Statistics and Survey Methodology. Professor Lahiri has served as member, advisor, or consultant to many organizations, including the U.S. Census Advisory committee, a National Academy of Science panel, the United Nations, the World Bank, and the Gallup Organization. He has served on the Editorial Board of many international journals, including the Journal of the American Statistical Association and Survey Methodology. Dr. Lahiri has been honored by being made a Fellow of the American Statistical Association and the Institute of Mathematical Statistics and an elected member of the International Statistical Institute.
Human Resources in Science and Technology: Surveys, Data, and Indicators from the National Science Foundation
DATE AND TIME
Tuesday, April 6, 2010, 1:00 PM - 3:00 PM Eastern time
The Division of Science Resources Statistics (SRS) is a federal statistical agency housed at the National Science Foundation (NSF). SRS's role within NSF is to "provide a central clearinghouse for the collection, interpretation, and analysis of data on scientific and engineering resources, and to provide a source of information for policy formulation by other agencies of the Federal Government..." Within this mandate SRS is involved in collecting and disseminating information on R&D expenditures and activities and on human capital issues. The United States is unique among major industrialized nations in that it has directly invested in collecting detailed data from a variety of sources on the entire science and engineering pipeline. Each of the data sources came about from U.S. federal administrative needs. The sources have evolved into important elements for the study of higher education and the scientific workforce. In this webinar, these surveys and data sources are described. Key indicators regarding trends in U.S. science and engineering degree production, enrollments, and workforce are defined and described. The Science and Engineering Indicators: 2010 and Women, Minorities and Persons with Disabilities in Science and Engineering reports will be used as examples for these indicators. At the end of the webinar participants should be aware of data sources and how data are collected, indicators and reports from the NSF, and where to find more information from the NSF.
Dr. Nirmala Kannankutty is a senior analyst in the Division of Science Resources Statistics at the National Science Foundation. During her tenure at NSF, she was responsible for the coordination of NSF's three science and engineering workforce surveys, collectively known as SESTAT (Scientists and Engineers Statistical Data System). Also while at NSF, she has completed special projects at the White House Office of Science and Technology Policy on scientific workforce issues, and the White House Office of Management and Budget on the federal R&D budget. Her areas of expertise include S&T workforce, graduate education, and S&T research and development, with extensive experience in survey research techniques and the use of survey results for policy analysis. She is currently Senior Social Scientist/Senior Advisor in SRS, with responsibility for outreach and dissemination. Dr. Kannankutty earned a doctorate in Engineering and Policy from Washington University in St. Louis in 1996.
The Psychology of Survey Response
DATE AND TIME
Tuesday, February 9, 2010, 1:00 PM - 3:00 PM Eastern time
ABSTRACT
This two-hour course examines survey questions from a psychological perspective. It covers the basics of how respondents answer survey questions and how problems in this response process can produce reporting errors. The class will focus on behavioral questions. The course is intended as an introduction for researchers who develop survey questionnaires or who use the data from surveys and want to understand some of the potential problems with survey data. It describes the major psychological components of the response process, including comprehension of the questions, retrieval of information from memory, combining and supplementing information from memory through judgment and inference, and the reporting of an answer. The course has no specific prerequisites, though familiarity with survey methodology or questionnaire design would be helpful.
INSTRUCTOR BIO
Roger Tourangeau is a Research Professor at the University of Michigan's Survey Research Center and the Director of the Joint Program in Survey Methodology (JPSM) at the University of Maryland. He has been a survey methodologist for nearly 30 years, with extensive experience in a wide range of surveys. Tourangeau is well-known for his methodological research on the impact of different modes of data collection and on the cognitive processes underlying survey responses. He is the lead author of a book on this last topic (The Psychology of Survey Response, co-authored with Lance Rips and Kenneth Rasinski and published by Cambridge University Press in 2000); this book received the 2006 Book Award from the American Association for Public Opinion Research (AAPOR). He is also one of the co-editors of a collection of papers (Cognition and Survey Research, published by Wiley in 1999) from a conference on cognitive aspects of survey response. In addition, he has published a number of papers on mode effects (including a very widely cited paper on audio-CASI with Tom Smith) and on forgetting and telescoping in surveys.
In 2002, Tourangeau received the Helen Dinerman Award, the highest honor given by the World Association for Public Opinion Research, for his work on the cognitive aspects of survey methodology. In 2005, he received the AAPOR Innovators Award (along with Tom Jabine, Miron Straf, and Judy Tanur). He was elected a Fellow of the American Statistical Association in 1999 for his work on survey measurement error and his contributions to federal surveys as a sampling statistician. In 2006, he served as the chair of the Survey Research Methods Section of the American Statistical Association. He has a Ph.D. in Psychology from Yale University.
Dual Frame Theory Applied to Landline and Cell Phone Surveys
J. Michael Brick
DATE AND TIME
Tuesday, November 10, 2009, 1:00 PM - 3:00 PM Eastern time
ABSTRACT
As the number of households that have only cell phones has increased dramatically over the past five years, telephone surveys have responded by sampling from both landline and cell phone numbers. One of the issues emerging from these dual frame surveys is that their theoretical foundation largely ignores nonsampling errors. Because these errors may be large and result in biases, they must be considered in dual frame telephone surveys. This webinar begins with a review of dual frame theory, with particular attention to surveys that sample landline and cell phone numbers. It then examines the effect of nonsampling errors when surveys are conducted without considering these errors. In particular, we describe the potential effect of nonresponse and measurement error using data from surveys of landline and cell phone numbers. We discuss practical sample design issues, such as whether to screen for cell-only households, as well as weighting methods to reduce the effects of the errors. The advantages and disadvantages of different sample designs and estimation methods are discussed. The examples are from actual dual frame telephone surveys.
INSTRUCTOR BIO
Dr. J. Michael Brick is a Vice President and Director of the Survey Methods Unit at Westat. He is also a research professor in the Joint Program in Survey Methodology at the University of Maryland, and an adjunct research professor at the University of Michigan. Dr. Brick has over 30 years of experience in sample design and estimation for large surveys, survey quality control, nonresponse and bias evaluation, and survey methodology. Dr. Brick has a Ph.D. in Statistics from the American University, is a Fellow of the American Statistical Association, and is an elected member of the International Statistical Institute.
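The estimation problem described in the abstract above can be sketched with a classic Hartley-style composite estimator, in which the overlap domain (households reachable from both frames) is estimated twice and the two estimates are blended. The numbers below are hypothetical, not data from the webinar:

```python
def dual_frame_total(y_ll_only, y_olap_from_ll, y_olap_from_cell,
                     y_cell_only, lam):
    """Composite estimate of a population total from a landline and a cell frame.

    The overlap domain is estimated from both frames; lam in [0, 1] mixes
    the two overlap estimates (lam = 0.5 weights them equally).
    """
    return (y_ll_only
            + lam * y_olap_from_ll
            + (1 - lam) * y_olap_from_cell
            + y_cell_only)

# Hypothetical weighted domain totals (in thousands) from each frame's sample:
est = dual_frame_total(y_ll_only=120.0, y_olap_from_ll=640.0,
                       y_olap_from_cell=610.0, y_cell_only=230.0, lam=0.5)
print(est)  # 975.0
```

In the pure sampling-theory treatment, lam is chosen to minimize variance; the webinar's point is that when the two frames' overlap estimates differ because of nonresponse or measurement error, the choice of lam also trades off bias, so nonsampling errors cannot be ignored in the design.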