Specious Surveys and Bogus Data: A Cornerstone of Media Relations

As the volume and quality of media coverage generated by specious surveys indicate, there is a market out there for bogus data, and public relations people have historically been all too willing to exploit that market.

Paul Holmes, CEO, Holmes Report

If you watched the U.S. news at all around Mother’s Day, you probably saw at least one of these stories: according to the makers of Wal-Mart Stores’ Parent’s Choice Organic Infant Formula, 81 percent of new moms said they wanted organic formula; according to a leading automotive website, what mothers most desire in a new car is safety; and according to Adecco Staffing, 68 percent of working moms feel their bosses appreciate their efforts to balance work and motherhood.

But one survey generated more coverage than any of these, a study devised by compensation consulting firm Salary.com estimating that the work done by the average stay-at-home mom was worth a staggering $134,121 a year. That’s more, in some cases five or six times more, than their spouses make working a 40-hour week in a factory, mine or office. It’s more than four times as much as the median annual income of women in the workforce.

Those numbers should have raised questions about the study’s methodology, but the majority of news outlets carried the story without comment (except for a few obligatory paeans to the hardworking moms of America). There was a substantial segment on CNN, another on NBC. It wasn’t until some time later that The Wall Street Journal’s “numbers guy” Carl Bialik looked at the assumptions behind the data that Salary.com provided to reporters.

Salary.com had conducted focus groups among mothers to identify about 20 of the job functions they performed in the home. Then the company emailed 30,000 users of its website, asking them to estimate how many hours they spent on each job function each day. Based on 400 responses, the company then focused on the 10 most popular job functions: housekeeper, day-care-center teacher, cook, computer operator, laundry-machine operator, janitor, facilities manager, van driver, household CEO and psychologist.

It then multiplied the average number of hours spent on each task by a representative salary for each—“household CEO” was calculated at a rate of $600,000 a year—to arrive at its total. Any time worked beyond 40 hours (and the moms estimated they worked almost 92 hours a week) was counted as overtime and paid at time-and-a-half.
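The mechanics are easy to reproduce. Below is a minimal sketch of that arithmetic in Python, with hypothetical hours and hourly rates standing in for Salary.com’s actual inputs:

```python
# A sketch of the Salary.com-style calculation, using hypothetical hours
# and hourly rates -- not the firm's actual inputs. The method: price each
# reported job function at a "representative" salary, then pay everything
# beyond 40 hours a week at time-and-a-half.

roles = [  # (job function, hours per week, hourly rate) -- illustrative only
    ("housekeeper", 14, 10.00),
    ("day-care-center teacher", 14, 13.00),
    ("cook", 13, 13.50),
    ("van driver", 9, 14.00),
    ("household CEO", 4, 288.00),  # roughly $600,000 over a 2,080-hour year
    ("psychologist", 8, 38.00),
    # ...remaining job functions omitted for brevity
]

total_hours = sum(hours for _, hours, _ in roles)
# Blend the roles into a single average hourly rate, weighted by hours.
blended_rate = sum(hours * rate for _, hours, rate in roles) / total_hours

base_hours = min(total_hours, 40)
overtime_hours = max(total_hours - 40, 0)
weekly_pay = base_hours * blended_rate + overtime_hours * blended_rate * 1.5
print(f"{total_hours} hours/week -> ${weekly_pay * 52:,.0f} a year")
```

Even with invented numbers and only six of the ten job functions, the recipe lands in six figures: a blended wage inflated by the “household CEO” rate, applied to more than 20 hours a week of time-and-a-half, does most of the work.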

“What we’re getting at is trying to help people understand that the role of a mother is an important role,” said Bill Coleman, Salary.com’s senior vice president of compensation. “I don’t want to make this sound like this is Nobel Prize research here.”

Claudia Goldin, a professor of economics at Harvard, agreed. “The calculation isn’t for what anyone would pay an individual. Nor is it for exactly what the individual does. It is for what the person claims they are doing during a long day—CEO, psychologist, etc. And what exactly is the salary for the CEO of a business that shows no profits and sells no services or goods?”

Such questions were irrelevant to the volume and quality of coverage generated by the survey, however. That coverage illustrated what most public relations people know: that reporters rarely reject an interesting survey or study, perhaps because they don’t want to raise questions that might ruin a good story or because they find numbers inherently confusing and don’t feel confident questioning the validity of statistics they don’t really understand.

A few weeks before Mother’s Day, for example, the mainstream media had given vast amounts of coverage to a survey produced by executive outplacement firm Challenger Gray & Christmas, which suggested that employees distracted by the annual college basketball playoffs known as March Madness could cost U.S. employers a staggering $3.8 billion in lost productivity.

The Challenger press release was reported more or less uncritically in many major newspapers, including the New York Times, Washington Post, Boston Globe, Baltimore Sun, San Jose Mercury News and Miami Herald. It also showed up on the CBS Evening News. Headlines like “During NCAA Tourney, Bet on a Loss in Productivity” and “Will Tourney Hurt Businesses? You Bet” made it clear that reporters and editors had fallen for the study hook, line and sinker.

It took Slate’s Jeff Merron and Jack Shafer to debunk the study. Challenger arrived at its estimate based on 58 million college basketball fans spending 13.5 minutes online at an average wage of $18 an hour on each of the 16 business days from March 13 through April 3. Shafer raised several objections: the study “misjudges the size of the dedicated college hoops audience. In 2005, for instance, the NCAA championship game drew 23.1 million households…. Also, many non-fans and casual fans who participate in office pools experience reduced interest in the tournament as it proceeds and the teams they bet on get knocked out.”
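For what it is worth, those inputs do multiply out to the headline figure:

58,000,000 fans × (13.5 ÷ 60) hours per day × $18 per hour × 16 days ≈ $3.76 billion

The arithmetic is sound; as Shafer’s objections make clear, it is the inputs that are dubious.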

Moreover, Challenger failed to take into account the fact that some downtime is built into every workday. “Workers routinely shop during office hours, take extended coffee breaks, talk to friends on the phone, enjoy long lunches, or gossip around the water cooler. It’s likely that NCAA tourney fans merely reallocate to the games the time they ordinarily waste elsewhere.”

But as the volume and quality of media coverage generated by these and other specious surveys indicate, there is a market out there for bogus data, and public relations people have historically been all too willing to exploit that market.

One reason is that statistics—despite the widespread assumption that you can make them say anything you like—have a real power.

“Statistics are the chemical weapons of persuasion,” says Jamie Whyte, a lecturer in philosophy at Cambridge University and author of Crimes against Logic: Exposing the Bogus Arguments of Politicians, Priests, Journalists and Other Serial Offenders. “All good politicians and businesspeople know this. Release a few statistics into the discussion and the effects will be visible within moments: eyes glaze over, jaws slacken, and soon everyone will be nodding in agreement. You can’t argue with the numbers.”

Another reason is that surveys—at their best—provide readers with useful information in an easily digestible format.

“The form and format perfectly suit our on-the-go society,” says Julie Winskie, marketing discipline leader at international public relations firm Porter Novelli. “In the U.S., the rise of the USA Today ‘snapshot’ in the 90s really gave surveys a boost. With a glimpse, you get the story instantly. For longer content, it’s the perfect ‘grabber’ that also meets media business needs: it grabs ears and eyeballs and can force the viewer, reader or participant to focus on a particular space based on some provocative or compelling fact contained in the data.”

Robert Wheatley, a principal at Chicago branding and media relations firm Wheatley & Timmons, agrees: “We as a society are fascinated and continually engaged with trends—and feel more comfortable when we think our behaviors are shared. It validates our opinions and worldview, while also fulfilling a need for the security that comes from knowing others share similar interests.

“If the subject is engaging, revealing and relevant, then the outcomes will probably make an appealing story. Storytelling is at the heart of what we do. If our claims, assertions, opinions, statements are corroborated by survey statistics that show we’re in sync with validated consumer interests or concerns, then the credibility of what’s being conveyed rises tenfold.”

And the bottom line, of course, is that they work.

“Surveys do get attention,” says Carol Cone, president of Boston-based communications firm Cone, although she says they may get less today than in the past “because of clutter, weak content or their self-serving nature.”

But the self-serving nature of surveys is causing an increasing amount of concern.

“The people who bring social statistics to our attention have reasons for doing so,” says Joel Best, author of Damned Lies and Statistics: Untangling Numbers from the Media, Politicians and Activists. “They inevitably want something, just as reporters and the other media figures who repeat and publicize statistics have their own goals. Statistics are tools, used for political purposes.”

If you spot the words “sponsored by” in a news release or article about a survey, “beware,” says Carl Bialik. “They often indicate that some company or industry group has paid for the research. Someone is likely trying to sell something, and that means the results should be greeted with a healthy dose of skepticism.”

He cites a survey from an Internet-security firm claiming that half of U.S. companies don’t have a policy on the use of instant-message software, even though it poses a security risk, and another from a corporate travel company suggesting that a third of business travelers want the ability to change their flight itineraries from mobile devices.

“Some appear to be done responsibly, while others are little more than advertisements,” he says. “But all sponsored studies start out with a credibility debt because of the funding source, and most don’t fully pay off that debt.” Nevertheless, many of these surveys continue to “receive undeserved attention—at least enough to spur companies to keep funding more studies.”

One example he cites is a survey of computer users by InsightExpress on behalf of Microsoft, which found that 25 percent spent eight hours a day at the computer and that Microsoft was the brand they were most likely to associate with quality keyboard and mouse products.

According to Bialik, the research firm “screened respondents to ask if they spent at least four hours a day at the computer. It’s thus not surprising, nor particularly informative, that two-thirds of those who said yes also spend more than six hours a day at the computer. As for Microsoft being named most often, the other choices weren’t household names…. Had the question been flipped to ask which company was least associated with reliable peripherals, Microsoft might well have won that one, too.”

Other flaws: the sample size was just 200, and the survey results were kept private, allowing Microsoft to pick which results to publicize.

Much of the criticism of self-serving research has focused on the researchers themselves. There is an assumption, it seems, that not much can be done about the expedient nature of corporations, activists, and politicians, but that research professionals can provide checks and balances to eliminate the worst of the excesses.

“Pollsters can set up surveys that deliberately shade the truth,” says Brad Edmondson, former editor of American Demographics magazine. “They do this by acting like trial lawyers: they ask leading questions, or they restrict their questions to people likely to give the desired response. In fact, pollsters can use dozens of obscure tricks to intentionally push the results of a survey in the desired direction.”

But some observers see a long-term drift away from rigorous standards toward a more relativist view of the data produced by surveys.

“The psychology of researchers themselves has undergone a profound change over the past decade,” writes journalist Cynthia Crossen in her book Tainted Truth. “And many researchers’ ethical standards have drifted from the scientist’s toward the lobbyist’s. Researchers have almost given up on the quaint notion that there is any such thing as ‘fact’ or ‘objectivity.’ Although their tools have never been faster, more efficient or more accurate, the path to the truth is blocked by a financial obstacle—the escalating funding power of private interests—and an intellectual one: their growing understanding that the main subjects of their research, human beings, are even more unpredictable than they had known.”

The American Association for Public Opinion Research, which represents research professionals from academia, government and industry, is concerned about the credibility of the research business, citing a mix of factors—the low cost of online opinion research, increased corporate sponsorship of surveys, and reporters eager for numbers-driven stories and charts—that has led to the publication of more polling numbers, some robust and some spurious.

“Our ability to conduct good public opinion and survey research is under attack from many sides,” the group’s planning committee wrote in a May report. AAPOR plans to hire a staffer to spot and respond publicly to faulty polls. The group says it will issue press releases about particularly egregious survey data, regardless of whether they come from members or non-members.

For example, the association responded to a survey of 644 college women and graduates, sponsored by the American Medical Association, which reported on risky behavior during spring break. Among the findings: 92 percent of respondents said it was easy to get alcohol on such trips, and “one in five respondents regretted the sexual activity they engaged in during spring break, and 12 percent felt forced or pressured into sex.” But the press release didn’t make clear that only a quarter of its panel—fewer than 200 respondents—had actually gone on spring-break trips. The 92 percent figure was based on all respondents, but the “one in five” statistic applied only to the subgroup that had taken a trip.

Moreover, the press release said respondents were a “random sample” of women, claiming a margin of error of 4 percent for the survey. “But polling experts say that it’s incorrect to give a margin of sampling error for most online polls, because it’s impossible to get a representative sampling of all Americans online,” says Bialik. “That’s because respondents generally are selected from a panel of Internet users who joined the polling firm’s ranks after being attracted by an ad.”
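The 4 percent figure is at least consistent with the textbook margin of sampling error for a simple random sample of 644 people, which is presumably how it was produced:

MoE ≈ 1.96 × √(p(1 − p) ÷ n) ≤ 1.96 × √(0.25 ÷ 644) ≈ 3.9 percent

But that formula is valid only when every member of the population has a known chance of being selected, which is exactly what an opt-in panel of Internet users cannot offer.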

AAPOR’s president Professor Cliff Zukin wrote to the AMA pointing out the flaws in the study, and then forwarded his complaints to the Mystery Pollster blog (www.mysterypollster.com) run by Mark Blumenthal. After Blumenthal ran a story, the wording of the release was corrected.

“We used the poll mostly to bring national attention to this issue,” says Richard Yoast, director of the AMA’s office of alcohol, tobacco and other drug abuse. “We also felt that people should hear what women think about it.” He drew a distinction between “the kind of effort we would make for an article in JAMA” (the Journal of the American Medical Association) and a “quick snapshot” of opinion on a topical issue.

Indeed, many public relations people insist that surveys they send out on topical issues are not intended to be taken too seriously.

Says one U.S. public relations professional: “I don’t think we need to be as concerned about polls that are designed purely for fun: questions about whether people prefer one product to another, say. Does it matter whether 60 percent of men would rather watch the Super Bowl than spend the night with Jessica Alba? That tells you something about how much men love football, but it’s obviously just for amusement.”

But such innocuous research is not as common as research that is designed to influence decision-making: either prompting consumers to buy a particular product or seeking to influence public policy. And that research often contains serious flaws.

“Faulty survey data takes many forms,” says Bialik. “Sometimes the questions are loaded…. Other surveys have very low response rates… or pollsters don’t disclose all of their questions or results, raising fears they’ve cherry-picked those responses that reflect best on the polls’ sponsors. Also, many polls you may read about have been conducted online, usually among a panel of volunteers lured by online ads—considered a less-representative sample by most pollsters than respondents who are found by random-digit telephone dialing….

“Often they are accepted automatically by the press and rendered indistinguishable from polls conducted by more standard means.”

Perhaps the easiest way to distort the findings of a survey is to manipulate the size or composition of the sample.

In 1936, for example, the editors of Literary Digest conducted a Presidential preference poll of more than two million Americans. The poll predicted that the Republican candidate, Alf Landon, would defeat Franklin Roosevelt. The Digest mailed more than 10 million ballots to households listed in telephone books and automobile registration records. In 1936, that biased the sample toward those affluent enough to own cars and phones.

“The most amazing thing about this story is that some journalists and businesses in the 1990s still make the mistakes the Literary Digest made 60 years ago,” says Edmondson. “Any journalist with half a pencil knows that only a scientifically chosen survey sample will represent the country’s opinions. But the temptation to take a biased poll is great if you have a tight deadline and a small budget, as many news organizations do.”

Yet newspapers continue to make the same mistake. In 2003, for example, the Times of London reported the sensational news that “nearly a quarter of young drug users have smoked cannabis with a parent.” That number was based on a survey completed by 493 readers of rave magazine Mixmag. Says Jamie Whyte: “Even if those who complete surveys in Mixmag can be relied upon to tell the truth about their drug-taking habits, they are hardly a representative sample of young drug users… they are enthusiasts, the trend-spotters of the drug world.”

The British media also gave wide coverage to the claim that 40 percent of British women who go on holiday in Spain have sex with someone they had not previously met within five hours of arriving in the country. The sample for this survey consisted of women who had responded to a magazine’s request for stories about its readers’ interesting holiday sex experiences.

Poll results based on so-called “convenience” samples can be “wildly misleading,” Edmondson says, “even if the sample sizes are huge.” He cites a call-in survey conducted by an American television network asking whether the United Nations should continue to be based in the United States. About 185,000 callers responded, with two-thirds of them favoring relocation of the U.N. But at the same time, the network also conducted a random sample poll of 1,000 people, only 28 percent of whom said the U.N. should move.
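A toy simulation shows why size doesn’t help. In the sketch below (Python, with invented call-in propensities), the true share favoring relocation is 28 percent, matching the random poll, but supporters are assumed to be five times as likely to phone in:

```python
# Toy model of self-selection bias. Assumptions (invented for illustration):
# 28% of the population favors relocating the U.N. (matching the random
# poll), and supporters are five times as likely to call in as others.
import random

random.seed(1)
POPULATION = 1_000_000
P_FAVOR = 0.28          # true share favoring relocation (assumed)
CALL_IF_FAVOR = 0.010   # call-in propensity among supporters (assumed)
CALL_IF_OTHER = 0.002   # call-in propensity among everyone else (assumed)

favor_calls = other_calls = 0
for _ in range(POPULATION):
    favors = random.random() < P_FAVOR
    propensity = CALL_IF_FAVOR if favors else CALL_IF_OTHER
    if random.random() < propensity:
        if favors:
            favor_calls += 1
        else:
            other_calls += 1

total_calls = favor_calls + other_calls
print(f"{total_calls:,} callers, {favor_calls / total_calls:.0%} in favor")
# Roughly two-thirds of the thousands of callers end up "in favor,"
# even though the true share is 28% -- size cannot cure self-selection.
```

Collect ten times as many calls and the distortion stays exactly the same; only the sampling mechanism, not the volume, fixes it.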

More recently, many news outlets—including the New York Times, USA Today, and the NBC Evening News—ran scare stories based on a survey by the National Association of Counties (NACo) claiming a dramatic increase in meth abuse.

The group surveyed only county public hospitals or regional hospital emergency rooms, and 80 percent of them were in rural areas, even though 58 percent of all emergency departments are in metropolitan areas and account for 82 percent of all annual usage. The questions, meanwhile, asked respondents for their opinions and best estimates, without any supporting data. The result was almost certainly an exaggeration of the scale of the problem.

The other obvious way to influence the outcome of a survey is to ask the question in such a way that it is likely to elicit the preferred response.

(If you want to understand how the phrasing of questions can influence the response, consider the following: Would you rather ask a minister whether it is okay to smoke while you pray, or whether you can pray while you smoke?)

For example, a survey finding that 90 percent of college students agreed that Levi’s 501 jeans were “in” on campus sounds pretty good, until one learns that the students were asked to choose from a list that included 1960s-inspired clothing; Lycra and spandex; overalls; patriotic-themed clothing; and neon-colored clothing. There was no way to vote for blue jeans except to vote for Levi’s 501s. That didn’t stop the company from hailing the results of “a fall fashion survey conducted annually on 100 U.S. campuses.”

The technique is even more common when organizations are dealing with public policy issues.

An Internet gaming company that wanted to demonstrate the public’s apathy about regulation of its industry phrased its question this way: “Many gambling experts believe that Internet gambling will continue no matter what the government does to try to stop it. Do you agree or disagree that the federal government should allocate government resources and spend taxpayer money trying to stop adult Americans from gambling online?” Another: “More than 80 percent of Americans believe that gambling is a question of personal choice that should not be interfered with by the government. Do you agree or disagree that the federal government should stop adult Americans from gambling with licensed and regulated online sports books and casinos based in other countries?”

Not surprisingly, overwhelming majorities responded precisely the way the sponsoring company wanted them to.

Some survey questions are even more explicit in telling respondents how they are supposed to respond. Consider this, from the environmental activist group Greenpeace: “Depletion of earth’s protective ozone layer leads to skin cancers and numerous other health and environmental problems. Do you support Greenpeace’s demand that DuPont, the world’s largest producer of ozone-destroying chemicals, stop making unneeded ozone-destroying chemicals immediately?”

Other groups just keep on asking the same questions until they get the answer they are looking for. The National Right to Life Committee, delighted by a Supreme Court decision allowing the government to prohibit the discussion of abortion in family planning clinics that received federal funding, asked people whether they supported the decision, explaining that it would mean that “the federal government is not required to use taxpayer funds for family planning programs to perform, counsel, or refer for abortion as a method of family planning.”

Even with such a distorted description of the ruling, only 48 percent said they supported it. So the survey asked a follow-up: “If you knew that any government funds not used for family-planning programs that provide abortion will be given to other family-planning programs that provide contraception and other preventive methods of family-planning, would you then favor or oppose the Supreme Court’s ruling?” At this point, 69 percent said they supported the decision.

The poll was later cited in a House of Representatives debate as evidence that people supported the Supreme Court decision.

Survey sponsors don’t only get to decide the wording of the questions; they even get to decide how the results should be interpreted. Several years ago, for example, newspapers reported on a survey finding that 2 percent of adult Americans—about four million people—had been abducted by aliens.

But the researchers had not asked people whether they had been abducted, fearing that many abductees would not remember the experience, or that they were unwilling to acknowledge the trauma they had suffered. So they asked several other questions—did people wake up feeling paralyzed, with a vague sense of other people in the room?—and then concluded that anyone who answered yes to four or more of those questions had probably been abducted.

Michael Wheeler, author of Lies, Damn Lies and Statistics: The Manipulation of Public Opinion in America, has the following advice: to test whether a question provides an accurate reflection of public opinion, ask yourself whether you would be comfortable answering it with a simple yes or no. If you find yourself answering “yes, but” or “no, but,” then the poll results should be disregarded.

One additional factor—and one that tends to get overlooked—is the propensity of survey participants to provide self-serving answers. 

In Tainted Truth, Crossen also writes about the distorting power of the ego on survey results. She cites a survey in which two groups of people were asked how much television they watched on an average day. One group was given a scale that started at less than 30 minutes a day and rose to more than two-and-a-half hours. The other was given a scale that started at less than two-and-a-half hours and rose to more than four-and-a-half hours. In the first group, only 17 percent said they watched more than two-and-a-half hours; in the second group, more than twice as many admitted to watching at least that much.

Says Crossen, “Whatever the truth, people did not want to be on the high end of the scale.”

Accepting consumer statements about intended behavior as an accurate predictor of actual behavior is a risky proposition.

Earlier this year, for example, a study conducted by Green Press Initiative and American Co-op found that 80 percent of consumers who had purchased a book or magazine in the past six months or who currently have a magazine subscription said they would be willing to pay more for a book or magazine printed on recycled paper. That translated into an opening paragraph claiming that “a new independent study shows that four out of five consumers are willing to pay more for books and magazines printed on recycled paper.”

I would suggest that any publisher making decisions about paper selection and pricing based on the findings of that survey would run the risk of losing a lot of money.

Meanwhile, LRN, a provider of legal, compliance, ethics management and corporate governance solutions, reported that “corporate ethical reputations have a clear impact on the purchasing and investment decisions of Americans.” The basis of that assertion: 72 percent of respondents to a survey by the company said they preferred to purchase products and services from a company with ethical business practices and higher prices, rather than from one with questionable business practices and lower prices.

But how many of those respondents even know enough about the companies with which they do business to make informed decisions?

“Journalists frequently criticise public relations people for using ‘bogus’ surveys and research in order to secure media coverage,” wrote British PR man Stuart Bruce at his blog earlier this year. “Many of these so-called surveys and research projects are constructed on very weak foundations. The problem is that they work. Despite the criticisms the media has an insatiable appetite for this type of material…. I for one will keep producing them for as long as journalists keep running them!”

That’s likely to be the attitude of many public relations professionals, even if they are not honest enough to say so out loud.

There’s a widespread belief that reporters—as gatekeepers—should be responsible for filtering out inaccurate data. But as Michael Wheeler says, asking the media to protect people from bad polls “is as unrealistic as expecting that prostitution will die because of lack of customer interest.” Says Crossen: “The media are willing victims of bad information…. They take information from self-interested parties and add to it another layer of self-interest: the desire to sell information.”

In May of 2000, the Times of London reported, based on data provided by the British Medical Association, that “anorexia nervosa affects about 2 percent of all women and kills a fifth of sufferers.”

There are 3.5 million British women between the ages of 15 and 25. If 2 percent of them suffer from anorexia, that’s 70,000 sufferers. If a fifth die as a result, that’s 14,000 deaths. Obviously, anorexia is a serious problem. But perhaps not as serious as the BMA and the Times would have us believe, since in 1999 the total number of women between the ages of 15 and 25 who died—from all causes—was 855.

In reality, according to the Office for National Statistics, the number of women who died as a result of anorexia in 1999 was 13. The BMA and the Times had thus overstated the problem by a factor of more than 1,000 (14,000 implied deaths against 13 actual). Jamie Whyte wrote to the editor of the Times. Neither his letter nor a correction was published, and he received no explanation of the error.

His response: “My suspicion is that Helen Rumbelow, who wrote the article, suffers from an ailment that afflicts 25 percent of journalists and makes a fifth of them talk nonsense. She has no sense of scale. When numbers get very small or very big, those afflicted lose all sense of whether or not they are reasonable.”

The problem is that “the survey offers benefits for everyone,” writes Crossen. “For a company competing in a marketing-driven industry… surveys can be incorporated into advertising or, even better, shopped to the media as news. The media themselves have become drunk on surveys of their constituents… And for consumers, a survey pronouncing a car America’s most popular offers the comfort of the majority.”

Brad Edmondson agrees: “Politicians use opinion polls as verbal weapons in campaign ads. Journalists use them as props to liven up infotainment shows. But this isn’t how polls are meant to be used. Opinion polls can be a good way to learn about the views Americans hold on important subjects, but only if you know how to cut through the contradictions and confusion.”

Still, many public relations people report that journalists are beginning to ask tougher questions about surveys.

“Reporters often want to know who was involved in conducting the study, who was surveyed and how big the sample size was,” says Wheatley. “But in the end, we should care about these issues as much as they do—whether or not they ask the probing questions. You have to have confidence in what you’re putting out there. The credibility of your firm, the client, the brand and the material itself is at stake. If the sample is not projectable to the population, then you can’t use words that imply this is what Americans think generally.”

“The popularity and prevalence of the survey and how data is used has rightly caused many media to be skeptical,” says Winskie. “So we are very clear that it’s not necessarily an automatic hit with reporters any longer. Increasingly, reporters are questioning survey methodology and even the development of questions to assess validity and determine how self-serving the survey findings are. Quirky and head-twisting findings still grab media attention, but surveys on the whole are under a lot more media scrutiny.”

Winskie says her firm is “increasingly getting requests from reporters not only to see the complete survey and the survey methodology, but also to speak to the researcher who conducted the survey. Not all media operate at the same level of demand, but with today’s transparency and disclosure requirements, more and more are.” She says Porter Novelli counsels both staff and clients to adhere to certain standards, including full disclosure about sample size and make-up, and sponsor information.

Meanwhile, “more serious topics, such as those related to healthcare, present unique challenges for the PR firm, in that the standards adhered to must reflect those employed within the medical and academic establishments,” Winskie says. “Therefore, we work with our clients to make sure we achieve alignment both with their internal guidelines as well as with those we know to be the industry standards.”

But even without outside scrutiny, public relations people have an obligation to ensure that the information they provide—to reporters and to consumers—is accurate.

Says Wheatley, “The PR business already carries enough baggage with reporters about overly commercial material. Self-serving stories serve no purpose other than to irritate reporters and confirm their suspicions that we’re all spin-meisters who have little regard for editorial sensibility and appropriateness. We have a two-way relationship with the media that is precious to the reputation of the field we are in and should be carefully guarded. Bad surveys are one example of stepping over the line that in no way serves the client’s interests or the delicate relationship we work so hard to cultivate with the fourth estate.”

Reputable public relations practitioners should agree to abide by several principles when it comes to releases and reports that include survey results. At the very least, there should be transparency surrounding the following (a hypothetical sketch of how such disclosures might be structured appears after the list):
• The exact wording of the questions, including any interviewer directions and visual exhibits;
• The order in which questions were asked, and whether the order was rotated to filter out order bias;
• How the sample was selected and what efforts were made to ensure that the sample is representative of a wider population;
• The total number of people contacted, the number not reached, the number who refused to answer, and the number of completed interviews.
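To make the checklist concrete, here is a purely hypothetical sketch (in Python) of how a firm might capture those disclosure items as a structured record; every name and number in it is invented:

```python
# A purely hypothetical sketch: one way a firm might structure the
# disclosure items listed above so that no release goes out without them.
from dataclasses import dataclass

@dataclass
class SurveyDisclosure:
    sponsor: str              # who paid for the research
    questions: list[str]      # exact wording, in the order asked
    order_rotated: bool       # whether question order was rotated
    sampling_method: str      # how the sample was selected
    population: str           # population the sample claims to represent
    contacted: int            # total number of people contacted
    not_reached: int          # number who could not be reached
    refused: int              # number who refused to answer
    completed: int            # number of completed interviews

    def response_rate(self) -> float:
        """Completed interviews as a share of everyone contacted."""
        return self.completed / self.contacted

# Example with invented numbers:
disclosure = SurveyDisclosure(
    sponsor="Example Corp. (hypothetical)",
    questions=["Do you favor or oppose ...?"],
    order_rotated=True,
    sampling_method="random-digit telephone dialing",
    population="U.S. adults",
    contacted=5_000,
    not_reached=3_200,
    refused=800,
    completed=1_000,
)
print(f"Response rate: {disclosure.response_rate():.0%}")  # -> Response rate: 20%
```

Nothing about this structure is an industry standard; the point is simply that each field corresponds to something a reporter is entitled to ask for.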

Even within those standards, it is still possible to produce valuable research that helps a company achieve its objectives.

“A good survey is one that works with and serves the objectives of our greater PR plan,” Winskie says. “It starts with an interesting topic or question that speaks to a relevant or meaningful issue rather than an individual product, brand or company. It allows usable responses whatever the answer and has an ‘ah-ha’ factor that surprises. And when the subject matter is appropriate, at its best, it entertains.”

Winskie cites a survey that explored whether breast cancer patients were discriminated against in the workplace upon revealing their illness. “The findings brought core campaign messages to life, reaching influential targets through extensive national print and broadcast coverage along with regional and niche media coverage over the span of six months.” Similarly, a survey for the propane industry found a gap between consumers’ understanding of the importance of safety when grilling and their knowledge of actual safety measures, creating a hook on which the industry’s “safety tips” story could be hung.

Wheatley cites a similar national consumer survey on home safety knowledge and opinions for First Alert home safety products, conducted in conjunction with a campaign to elevate awareness of residential carbon monoxide poisoning and the need for alarms in households. “The survey confirmed an array of misconceptions about the hazard as well as an alarming lack of concern over the potential dangers from this invisible airborne poison,” he says. “The survey was a critical part of educating reporters and producers not only about the issue of CO poisoning but dramatically confirmed that education was desperately needed if we were going to save more lives.”

The ensuing media coverage helped the category expand from zero to $300 million in sales within three years, and Wheatley’s client—positioned as the thought leader—captured a 70 percent share of the market. “The survey gave us power in media outreach and made the story even more compelling.”
