ASA Newsroom

Best Practices Guide for Use of Statistics in Public Relations

Prepared by the Public Relations Society of America and the American Statistical Association


Public relations professionals have to deal with statistics frequently, but few have had the technical training to do so with confidence. This document provides a guide to promote best practices in the use of statistics by public relations professionals. It also provides a contact lifeline for public relations professionals who need urgent statistical- or research-based help.

Today's public relations professionals must be prepared to deal with the increasing use of statistics in communications. Understanding how to present and interpret statistics is often critical to writing a good press release, showing the results of a communications program, forecasting the business, or talking about the ROI of a campaign. To assist in this goal, the Public Relations Society of America (PRSA) and the American Statistical Association (ASA) have developed this cheat sheet to assist practitioners who must understand, interpret and communicate statistical issues.

Statisticians and public relations professionals are natural allies. Both wish to communicate useful and accurate information. Statisticians focus on the complexity of drawing sound conclusions from masses of data. The process requires strong mathematics, careful evaluation of assumptions and clear statements about inherent uncertainty. Public relations professionals often have the responsibility of communicating results to their employers, clients and other audiences. This cheat sheet is designed to assist public relations professionals to better transmit reliable and meaningful information to the public.

This best practices guide provides guidance in three major areas: describing and understanding projects, describing and understanding results, and drawing inferences and generalizing results.


Describing and Understanding Projects

  • Disclose who paid for the work, and specifically who did the project. Results from a pharmaceutical company's analysis and an FDA analysis may differ, and the source of the information can and will affect how the results are perceived.
  • Be clear about how the information or research was gathered. For example, was it a focus group or a survey? If it was a survey, among whom was it conducted, and how were the data collected: in person, online, by telephone or by mail?
  • Describe the survey sample: how many people were in it, how results were projected to the target population, and how representative the sample is of that population. The results, and the ability to make projections from the data, are directly tied to the quality of the sampling. In general, random samples are preferred to non-random samples because they are more representative of the population. Similarly, with random samples, as sample size increases, sampling error decreases, which means you can be more confident in your results. Finally, sampling can be complex, so contact a statistician if you are unsure how to determine the appropriate sampling method or sample size.
  • Know, and report, whether the survey was truly random, meaning every person in the target population had an equal chance of being selected. The ability to generalize results from the sample to a larger population is compromised with a non-random sample. For example, people who respond to a news outlet's request to "call in or text in with their opinions" are not a random sample.
  • Almost any survey can suffer from bias. There are many kinds of bias, but a major one is nonresponse bias: the people who decline to take the survey are not like those who do. For example, younger people might be less likely to respond to a particular survey, and their views may differ greatly from those of the older people who did respond. Question order is another source of bias; one example is asking about awareness of school shootings before asking about the need for strict gun control in the U.S.
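The relationship noted above, that sampling error shrinks as a random sample grows, can be sketched with the standard margin-of-error formula for a proportion. The 50/50 split and the sample sizes below are illustrative assumptions, not figures from this guide:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 split (p = 0.5) is the worst case, so it is a common planning assumption.
for n in (100, 400, 1600):
    print(f"n = {n:4d}: margin of error = +/- {margin_of_error(0.5, n):.3f}")
```

Note that quadrupling the sample size only halves the margin of error, which is why gains in precision become increasingly expensive as samples grow.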

Describing and Understanding Results

  • Commonly used and reported statistics are referred to as descriptive statistics because they describe the distribution of responses or results. To accurately understand and communicate results, it is important to look at the underlying data (the frequencies and percentages of responses in each category) as well as the statistics that describe them. The key descriptive statistics to know are:
    • Mean - the arithmetic average.
    • Median - the midpoint of the data.
    • Mode - the most common response.
    • Standard Deviation - how tightly the responses cluster around the mean.
    • Range - the spread of responses, from the lowest to the highest.
  • Always compare the data with the statistics to ensure the statistics accurately describe the distribution of responses or numbers. Many statistics are sensitive to violations of the assumptions that underlie them. For example, Means are sensitive to extreme ratings or responses (outliers) and describe the data well only when the distribution is roughly "normal," or bell-shaped. Also, if respondents are split between the high and low ends of a rating scale, the Mean will sit in the middle of the scale and can be misleading (e.g., the Median income in Omaha is perhaps $50,000, but the Mean, or average, may be misleadingly higher because Warren Buffett is an extreme outlier earning billions of dollars per year).
  • All statistical research includes some level of uncertainty. Statistical significance tests help assess whether the obtained results could plausibly be due to chance. Be sure to report that uncertainty, usually through a confidence interval, and be mindful that confidence intervals based upon surveys do not typically capture all the uncertainty in the problem. When a statistical significance test is used, be sure to report the "P-value."
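The outlier effect described above is easy to demonstrate. In this sketch the income figures are invented for illustration, in the spirit of the Omaha example; only the last value is an extreme outlier:

```python
import statistics

# Hypothetical incomes in dollars; the last value is an extreme outlier.
incomes = [42_000, 48_000, 50_000, 55_000, 61_000, 2_000_000_000]

mean = statistics.mean(incomes)      # pulled far above what anyone typical earns
median = statistics.median(incomes)  # still describes the typical person

print(f"Mean:   ${mean:,.0f}")
print(f"Median: ${median:,.0f}")
```

Here the Mean exceeds $333 million while the Median stays at $52,500, which is why the Median is usually the better summary of a skewed distribution such as income.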

Drawing Inferences and Generalizing Results

  • Clearly describe trends and effects. Read results carefully, recognizing differences such as those between a percent increase and a percentage point increase. For example, if awareness of a product increases from 10 percent to 20 percent, it is a 100-percent increase, and a 10-percentage-point increase.
  • Trends do not continue indefinitely. Avoid making claims about the future based on recent history.
  • Causal statements are very hard to prove. There is a distinct difference between correlation and causation. Correlation describes the strength of the relationship between two factors or variables. Causation means one thing causes another to happen. Make sure to not use these terms interchangeably and question results that make such claims without support.
  • Using graphics to communicate results often helps make them easier to understand. It is best, however, to make sure they are clear in terms of the main points you are making about the statistics. Remember, it is less about the numbers, and more about what they mean to the audience.
  • Run your insights from the data by the person who actually did the research, to be sure the data support your interpretation and use of them.
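The distinction between a percent increase and a percentage-point increase can be checked with the guide's own awareness example, where awareness rises from 10 percent to 20 percent:

```python
old, new = 0.10, 0.20  # awareness rises from 10% to 20%

percent_increase = (new - old) / old * 100  # relative change: 100%
point_increase = (new - old) * 100          # absolute change: 10 points

print(f"Percent increase: {percent_increase:.0f}%")
print(f"Percentage-point increase: {point_increase:.0f} points")
```

The same absolute change reads very differently in the two units, so always say which one you mean.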

Need a lifeline?

The American Statistical Association will put you in touch with experts who can provide help. Need help deciding what you should be measuring, and how? The measurement advice and tools provided by PRSA's "Business Case for Public Relations™" can serve as resources.