Journalists receive a flood of emails, as many as a hundred a day, from sources, PR representatives, and the like, each pitching what it claims is the latest world-changing innovation. A good portion of them are fluff, dressed in curated language to lend them legitimacy.
Information about the newest polls and surveys is a routine part of the unsolicited email journalists receive. Some surveys provide excellent, in-depth insight into a particular trend or movement in a market. For example, the US Department of Labor’s Bureau of Labor Statistics releases a monthly employment report that looks at how the US jobs market performed in the month prior. It also gives background on the jobs market over the previous three months for comparison. The role of the journalist is to look at these surveys, digest the information, and translate it into text that readers and the general public can easily understand.
Not all surveys, however, reveal genuine insights into markets; some are deliberately geared toward a specific outcome. Recently, the New York Times and CBS were criticized by the American Association for Public Opinion Research for publishing a poll on a US Senate race. The criticism leveled at the Times and CBS was that the survey used an unproven methodology and was not transparent.
Misuse of poll and survey results by journalists not only misleads readers but also affects lives and damages reputations. The internet has made polling far easier than in the past, and many companies and organizations, especially technology companies, now rely on polls for publicity. To avoid errors when working with survey results, here are 10 questions all journalists should ask before delving into the findings. A general rule of thumb: if the results appear too good to be true, they probably are.
1. Always ask for the raw data file
When given survey results by an organization, journalists should always request the raw data file in .xls or .csv format. This way, they can examine the findings themselves, run charts and analyses in Excel or another tool, and present the results most relevant to their readership, instead of what appears in the press release.
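As a sketch of what a first pass over such a file might look like, the snippet below uses Python's standard `csv` module. The column names and the inline sample data are hypothetical stand-ins; with a real file you would pass an open file handle to `csv.DictReader` instead.

```python
import csv
import io
from collections import Counter

# Hypothetical raw export for illustration; in practice you would read the
# actual file the organization sent, e.g. csv.DictReader(open("survey.csv")).
raw = io.StringIO(
    "respondent_id,age_group,answer\n"
    "1,18-29,yes\n"
    "2,30-44,no\n"
    "3,18-29,yes\n"
    "4,45-64,no\n"
)

rows = list(csv.DictReader(raw))
print(len(rows), "respondents")               # verify the real sample size
print(Counter(r["answer"] for r in rows))     # tally the answers yourself
```

Counting the rows and tallying the answers yourself is a quick way to confirm that the headline numbers in the press release actually match the underlying data.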
2. Look at the methodology
Surveys can be conducted by a variety of means: over the telephone, via the internet, in person, through the mail, even on street corners. The method reveals a lot about the demographic that was surveyed. For example, a survey conducted solely over the internet includes only people with computer access, an internet connection, and enough computer literacy to participate. Knowing the methodology helps journalists determine whether the final results and findings are skewed toward a specific outcome.
3. What is the sample size?
The sample size in any given survey is crucial. A large sample of about 1,000 people will have a comparatively smaller margin of error than a small sample of 100 respondents. The margin of error indicates how reliably the findings of a survey describe the larger population it claims to represent.
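The relationship between sample size and margin of error can be made concrete with the textbook formula for a proportion, sketched below. It assumes a simple random sample and the worst-case split (p = 0.5); real polls with weighting or quota sampling will have a larger effective error.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a proportion at 95% confidence.

    Assumes a simple random sample and, by default, the worst case p = 0.5.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 gives roughly a 3-point margin; 100 gives nearly 10.
print(round(margin_of_error(1000) * 100, 1))  # ~3.1 points
print(round(margin_of_error(100) * 100, 1))   # ~9.8 points
```

This is why a headline finding from a 100-person poll deserves far more caution than the same finding from a 1,000-person poll.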
4. Demographic matters
The demographic makeup of a survey is important because it determines the diversity within the sample. If too many people from the same group are surveyed for a national poll, the results may not be fully representative, especially of under-represented groups. For example, if a survey on the state of New York City's public schools polled 100 people from Manhattan, 10 from the Bronx, 10 from Brooklyn, 10 from Queens, and none from Staten Island, the results would be skewed toward the schooling experiences of public school students in Manhattan.
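One quick way to spot this kind of skew is to compare each group's share of the sample with its share of the population. The sketch below uses the borough sample from the example above; the population shares are rough approximations included only for illustration, not authoritative figures.

```python
# Sample counts from the example above; population shares are approximate
# (roughly 2020-era proportions of New York City's population), used here
# purely to illustrate the comparison.
sample = {"Manhattan": 100, "Bronx": 10, "Brooklyn": 10,
          "Queens": 10, "Staten Island": 0}
population_share = {"Manhattan": 0.19, "Bronx": 0.17, "Brooklyn": 0.31,
                    "Queens": 0.27, "Staten Island": 0.06}

total = sum(sample.values())
for borough, count in sample.items():
    share = count / total
    print(f"{borough}: {share:.0%} of sample vs "
          f"~{population_share[borough]:.0%} of population")
```

Manhattan makes up over three-quarters of this sample but only about a fifth of the city, which is exactly the kind of mismatch a journalist should flag before trusting citywide conclusions.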
5. Who paid for the survey?
A survey paid for by an organization will likely carry a skewed message designed to promote and publicize the brand's image. The PR spin for such surveys is usually called "thought leadership," and many technology companies commission them. For example, vendors and analyst firms in the cloud market, such as SAP and Ovum, regularly conduct surveys on how that market is performing; the results can be useful in a story, but it should always be disclosed to readers that the survey was sponsored by a party with a specific interest in the market.
6. Clarity of the questions
Clarity is important because the way a question is phrased can affect how a person responds to it. Different wordings can carry different meanings and contexts for participants, so the final outcome may not be representative of those polled.
7. Cross-examine what’s already available
With more polls being conducted today, it is likely a similar study has already been run by multiple sources. Journalists should research what each of those surveys says about the issue and compare it with the one at hand. If the results are drastically different, go back to points 2 through 6 to find out why.
8. Always call an expert
Journalists are not expert pollsters. They may have experience dealing with polls and surveys, but it is always safest to call experts who work with polls for a living. They can offer insight into how the survey was likely conducted and whether the results are sound enough to publish.
9. Re-check your analysis
Even when a survey is authentic, a journalist's own error can result in the publication of misleading information. This is particularly true when journalists obtain the raw survey results to analyze themselves. These raw databases are huge, often running to 100,000 samples to parse through; a single error can distort the entire analysis. Being methodical and following a set procedure goes a long way toward ensuring accuracy.
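A "proper procedure" can start with a few automated sanity checks before any analysis begins. The sketch below, over hypothetical rows with made-up column names, catches duplicate respondents and impossible values early, since a single bad row can distort every downstream figure.

```python
# Hypothetical raw rows; in practice these would be loaded from the data file.
rows = [
    {"id": 1, "age": 34, "answer": "yes"},
    {"id": 2, "age": 51, "answer": "no"},
    {"id": 3, "age": 27, "answer": "yes"},
]

# Basic pre-analysis checks: each will raise immediately if the data is off.
ids = [r["id"] for r in rows]
assert len(ids) == len(set(ids)), "duplicate respondent IDs"
assert all(0 < r["age"] < 120 for r in rows), "implausible age value"
assert all(r["answer"] in {"yes", "no"} for r in rows), "unexpected answer code"

print("checks passed on", len(rows), "rows")
```

Running checks like these every time, rather than eyeballing a spreadsheet, is what makes the procedure repeatable at 100,000-row scale.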
10. Timeliness of the results
Always take into account when the survey was conducted, when the results were released, and when the story will go to print. If too much time has passed between the first and the last of these, people's opinions may have changed in the interim, and the results will no longer reflect them.