[CANCELLED] IASS Webinar 41: Comparing Data Quality and Sources of Error in Probability-based Online Panels and Online Opt-in Samples


[This event was CANCELLED]



Speaker:  Andrew Mercer

Pew Research Center


26 June 2024, 2:00–3:30 pm (CET)


All are invited to the webinar, organised by the International Association of Survey Statisticians.


Please register for the IASS Webinar at:



After registering, you will receive a confirmation email containing information about joining the webinar. There will be time for questions. The webinar will be recorded and made available on the IASS and ISI websites. See below for the abstract and biography of the speaker.


Webinar Abstract


Over the past decade, online surveys, both those recruited using traditional probability-based methods and those drawn from online opt-in sources, have grown to become the most common means of conducting public opinion research in the United States. During this same period, the Pew Research Center has conducted several studies examining data quality and sources of error in online probability-based and opt-in samples. In this webinar, Andrew Mercer will review the findings from this line of research and discuss the Center’s most recent such study comparing the accuracy of six online surveys of U.S. adults – three from probability-based panels and three from opt-in sources. This is the first such study to include samples from multiple probability-based panels, allowing for their side-by-side comparison. The study was also designed to permit an in-depth comparison of accuracy not only for full-sample estimates, but also for estimates within key demographic subgroups. Consistent with previous studies, it found that probability-based samples generally yielded more accurate estimates. While previous studies have tended to assume that differences between probability-based and opt-in samples are due to differences in the selection mechanism, the findings from this study suggest that, in fact, many of the large biases found in online opt-in samples are instead due to measurement error stemming from the presence of “bogus respondents” who make no effort to answer questions truthfully. Critically, the study finds that errors from bogus respondents are especially large for subgroup estimates, particularly for 18- to 29-year-olds and Hispanic adults.



Andrew Mercer is a senior research methodologist at Pew Research Center. He is an expert on probability-based online panels, nonprobability survey methods, survey nonresponse, and statistical analysis. His research focuses on methods of identifying and correcting bias in survey samples. He leads the Center’s research on nonprobability samples and has co-authored several reports and publications on the subject. He also served on the American Association for Public Opinion Research’s task force on Data Quality Metrics for Online Samples. He has authored blog posts and analyses making methodological concepts such as margin of error and oversampling accessible to a general audience. Prior to joining the Center, Mercer was a senior survey methodologist at Westat. He received a bachelor’s degree in political science from Carleton College and master’s and doctoral degrees in survey methodology from the University of Maryland. His research has been published in Public Opinion Quarterly and the Journal of Survey Statistics and Methodology.