The Top 10 PR Measurement Atrocities and How You Can Avoid Them

P+ Measurement Service · Jul 7, 2021

By: Katie Delahaye Paine

Katie Delahaye Paine, the queen of measurement, examines the 10 biggest PR measurement atrocities, including improper search terms, multipliers, untested data, B.S. benchmarks, and unrealistic measurement budgets.

1. Making up Metrics

I was recently on a conference call with a respected international measurement organization and a new member of the group was explaining how she’d been pressured by her CEO to put a dollar value on her efforts. “So what we’ve done,” she said with some pride, “is equate the reach of posts to the reach of a banner ad and value them according to what it would have cost to purchase the banner ad.” For the record: There is no evidence that banner ads have an even remotely similar impact on a customer’s path to purchase as PR. As John Oliver and Bob Garfield will tell you — the chance of anyone intentionally clicking on a banner ad is lower than surviving a plane crash. There should be a special place in measurement hell for people who make up bad metrics.

2. Torturing Numbers Until They Say What You Want

We once delivered a launch report that initially included data from around 250 social and traditional media items. It showed that the desired messages appeared in about 7% of the coverage — not a bad number at all. But it wasn’t what the client expected or wanted to see.

So first they eliminated all social media, arguing that their outreach efforts had been focused entirely on traditional media. That cut the data set roughly in half, but 150 items is still an acceptable amount to analyze under the circumstances. As it happened, the data didn’t change all that much: the percentage of items containing one or more key messages rose slightly, to 10%.

But apparently that still wasn’t an acceptable number. So we were asked to include ONLY the top-tier media at which the outreach was directed. That narrowed the data set to 10 items, but it produced the desired percentage, and the agency was able to declare that one third of all coverage contained a key message. The problem: percentages lie. For all the effort and energy put into the campaign, only three media mentions contained a key message. That data point probably won’t get much praise from the corner office.
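To make the arithmetic concrete, here is a quick sketch. The counts are approximate reconstructions from the story above, not exact figures from the report:

```python
# How the percentage "improved" while the evidence shrank. The counts are
# approximate reconstructions from the story above, not exact report figures.
cuts = [
    ("All 250 items", 250, 17),           # roughly 7% carried a key message
    ("Traditional media only", 150, 15),  # roughly 10%
    ("Top tier only", 10, 3),             # 30%, but still only 3 mentions
]
for label, total, hits in cuts:
    print(f"{label}: {hits}/{total} = {hits / total:.0%}")
```

The percentage quadruples while the absolute count of on-message mentions barely moves. That is the tell-tale sign of a shrinking denominator, not a better campaign.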

If you are given a bunch of data collected from agencies that may or may not have been clipping consistently with your methodology, you are treading in dangerous waters. If you are told to just report the “good news,” you have crossed to the dark side.

3. Using Multipliers

The reality is that when someone tells you that you have “reached 200 billion eyeballs,” chances are pretty good that you haven’t. Inflating impressions is a common problem in organizations that still believe impressions count. For some reason people want to “add weight” for specific media outlets or certain types of stories. The right way to do this is to develop a custom index that you can track over time. The wrong way is to multiply impression counts (which are arguably flawed to begin with) for a top-tier publication by 2 or 3 or whatever number someone dreams up. The IPR has published a great paper on why you should never use multipliers. Your top-tier list should reflect the degree to which it reaches your target audience; that’s why it’s called top tier. Why do you need multipliers if you are already reaching a high percentage of your target audience?
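For illustration, a custom index might look something like the sketch below. The factors and weights are my own assumptions, not a standard; the point is that the scoring rules are explicit, documented, and applied consistently over time:

```python
# A minimal sketch of a custom media index, as an alternative to multipliers.
# The factors and weights are illustrative assumptions, not a standard; the
# point is that the scoring rules are explicit and applied consistently.

def item_score(reaches_target_audience: bool,
               contains_key_message: bool,
               positive_tone: bool) -> float:
    """Score a single media item on a 0-10 scale."""
    return (4.0 * reaches_target_audience
            + 4.0 * contains_key_message
            + 2.0 * positive_tone)

# Track the average score over time instead of inflating impression counts.
january = [(True, True, True), (True, False, False), (False, True, True)]
avg = sum(item_score(*item) for item in january) / len(january)
print(f"January index: {avg:.1f} / 10")  # 6.7 / 10
```

Because the rules are written down, anyone can audit the score, and month-over-month movement means something. A multiplier, by contrast, is just a number someone dreamed up.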

Most outlets and reporting companies like Compete and Alexa use average unique visitors and calculate data on a monthly basis. The number you see is the total visitors to the website, not the visitors to the URL of your story. The reported number represents opportunities-to-see (OTS), not actual readership. The actual readership of any given article is unknown, but it is far lower than the reported OTS or unique total visitors.

Then there’s broadcast. Most services use Nielsen or ComScore to report viewers, but God help you if you have data from both because their numbers often vary dramatically.

4. Using BS Benchmarks

Comparing results to the competition is a powerful and persuasive way to benchmark your results. However, to be truly comparable, you need to make sure you are comparing the same media outlets, in a similar time frame, and in the same geography. If the competition launched its program the same day Princess Diana died and was lucky to get any coverage at all (true story), it’s hardly a fair comparison. If you do not have the full data set for the competition because their launch or activity extends beyond the time frame of your data, it’s not a fair comparison. If you are launching into a mature market but the competition was first to market and had to educate the market as well as promote its products, it’s not a fair comparison.

5. Failing to Get Agreement on What “Good” Is

Too often, PR people define success as a big pile of clips, or a lot of neutral coverage, when in fact senior leadership measures success in terms of leads or messages communicated. When the PR person reports only the number of media mentions, the report is considered worthless.

The same problem plagues RFP processes. Companies decide they want a measurement vendor and start calling in salespeople from various measurement firms. If you hire a measurement vendor before you agree on what success looks like, it’s very possible you’ll end up with a vendor that doesn’t measure what you need measured. Start with a solid list of metrics that have to be delivered, then write up a list of criteria and measurement specifications. A good place to start is the Vendor Questionnaire/Transparency Table.

6. Not Having a Test in Place to Judge the Completeness of the Data

Reporting results based on lousy data is like building a high-rise on a foundation of bad concrete — it will look good just long enough to get everyone to buy into it, and then it will collapse.

There is nothing more important to measurement than the accuracy of your data. So how do you check to make sure that your news and social media data are accurate? Test, test, and test again.

Take at least a month’s worth of data and test it for completeness. Double-check that your key media outlets are included in the media data. Next, check for duplicates and spam. With some media monitoring suppliers, chances are about 50% of the raw data will fall into one of those categories of bad data. A good vendor will have systems in place to screen for duplicates, spam, and inappropriate content (wedding announcements, incorrect names or references, police blotters). If your media monitoring service doesn’t have good duplicate and spam filters, you’ll have to sort through the clips yourself and discard the bad ones. If you don’t, the bad clips will corrupt your measurement results.
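If you do end up screening clips yourself, even a simple automated pass helps. Here is a minimal sketch of a duplicate screen, assuming each clip arrives as a record with a headline and an outlet name (an illustrative structure, not any vendor’s actual format):

```python
# A minimal sketch of a duplicate screen, assuming each clip is a record
# with "headline" and "outlet" fields (an illustrative structure, not any
# vendor's actual format).

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical clips match."""
    return " ".join(text.lower().split())

def dedupe(clips):
    seen, unique = set(), []
    for clip in clips:
        key = (normalize(clip["headline"]), normalize(clip["outlet"]))
        if key not in seen:
            seen.add(key)
            unique.append(clip)
    return unique

clips = [
    {"headline": "Acme launches widget", "outlet": "Daily News"},
    {"headline": "ACME  launches widget", "outlet": "Daily News"},  # duplicate
]
print(len(dedupe(clips)))  # 1
```

A real screen would also compare publication dates and body text, but even headline-plus-outlet matching catches the bulk of syndicated repeats.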

Finally, check the accuracy of the way incoming data is tagged or coded. Chances are you need data in specific buckets — e.g., corporate, product, customer service. Most systems rely on some sort of algorithm to identify these subjects automatically. Check a random selection of at least 50 items to see if they are correctly tagged and coded.

If your vendor is providing sentiment or tonality, you will need to conduct a separate test to validate their coding. Select a sample of 50 items and have them read by an intern, or by someone who didn’t have anything to do with “placing” the stories. Compare the results of the human analysis to the automated results. If you don’t find agreement on at least 80% of the items, your supplier may have to adjust its sentiment algorithm. Make sure they can handle this level of customization, or you’ll be in trouble down the line.
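The comparison itself is simple to automate. A minimal sketch, assuming you have the vendor’s codes and the human reviewer’s codes keyed by item ID (all the data below is placeholder, purely illustrative):

```python
import random

# A minimal sketch of the validation step: compare automated sentiment codes
# to a human reviewer's codes on a random sample of 50 items. All the data
# below is placeholder; in practice it comes from the vendor's export and
# the reviewer's coding sheet.
item_ids = list(range(1000))
sample = random.sample(item_ids, 50)

labels = ["positive", "neutral", "negative"]
automated = {i: random.choice(labels) for i in sample}  # placeholder codes
human = {i: random.choice(labels) for i in sample}      # placeholder codes

matches = sum(automated[i] == human[i] for i in sample)
agreement = matches / len(sample)
print(f"Agreement: {agreement:.0%}")
if agreement < 0.80:
    print("Below the 80% threshold; ask the vendor to tune its algorithm.")
```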

7. Not Having a System in Place to Deal with the Data When It Comes In

In today’s torrent of media, you will be surprised at just how many “alerts” you get from your monitoring system. So many, in fact, that the system will soon be like the boy who cried wolf, and you will find yourself skipping over the emails. But the reason you have a monitoring system is to ensure that you aren’t the next Domino’s Pizza or Kenneth Cole, so you need a process to stay on top of the alerts and route them to the people who need to handle them. Do NOT foist that task onto your summer intern — you need to identify your own internal Olivia Pope, someone with the judgment and background to know how and when to respond.

8. Not Having Clear Definitions and Search Terms

Today’s monitoring services are a lot like the early days of Match.com, which matched people on a handful of simple criteria. Then eHarmony came along, and its “search” factored in a slew of other desired characteristics. Sure, it took hours to complete the questionnaire, but in the end it was worth it.

Today’s media search is similar: you get what you search for. Search strings tell the monitoring systems what to include and, even more importantly, what to exclude. Like eHarmony, more detailed search queries produce better results. If your search query simply tracks a brand name like “Visa,” that big spike in coverage in June may be the result of your PR efforts on behalf of the credit card company, or it could be that there were new visa requirements to enter China. Boolean search queries can eliminate most of the extraneous articles. Example: Visa AND (“credit card” OR bank OR purchase) AND NOT (passport OR immigration).
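For instance, here is a rough sketch of how a monitoring system might apply that query to incoming headlines. The headlines and the keyword lists are invented for illustration:

```python
# A rough sketch of applying the Boolean query above to incoming headlines.
# The headlines and keyword lists are invented for illustration.
headlines = [
    "Visa launches a new credit card rewards program",
    "China announces new visa requirements for travelers",
]

def matches(text: str) -> bool:
    t = text.lower()
    return ("visa" in t
            and any(k in t for k in ("credit card", "bank", "purchase"))
            and not any(k in t for k in ("passport", "immigration")))

for h in headlines:
    print(matches(h), "-", h)
# True - Visa launches a new credit card rewards program
# False - China announces new visa requirements for travelers
```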

Regular expressions (regex) can refine search results even more. A regex pattern can impose requirements on the individual characters of a keyword — such as requiring a capital V in Visa. Only a few media monitoring services — CyberAlert among them — offer regular expressions in their search queries.
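To see why the capital letter matters, here is a minimal, hypothetical example in Python (any tool with regex support behaves the same way):

```python
import re

# Case-sensitive pattern: matches the word "Visa" with a capital V, so
# lowercase "visa" (the travel document) is ignored. Note this still can't
# tell the brand from "Visa" at the start of a travel-related sentence.
brand = re.compile(r"\bVisa\b")

print(bool(brand.search("Visa reported strong quarterly earnings")))  # True
print(bool(brand.search("New visa requirements to enter China")))     # False
```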

Here’s another example: If you are the PR manager for the city of Philadelphia, you will want to make sure your monitoring excludes the Philadelphia Eagles, Sixers, and Flyers. You’d also want to exclude “Philadelphia cheesesteaks” and traffic reports for Philadelphia Avenue in King of Prussia.

Even more problematic are search strings that aren’t periodically updated with new products and new brand names. Chances are you won’t realize you’re missing coverage until a product manager announces at an important meeting that the results are incomplete and therefore invalid.

Whether you are relying on human coding, machine coding, or a hybrid approach that combines the two, you still need good, clear definitions. Any abbreviations, acronyms, or internal code words need to be clearly defined, and those definitions should be provided to each vendor.

9. Having Unrealistic Expectations — Budget-Wise

If you’ve been using Google Alerts and doing your own collection, you know how much staff time the process consumes. So don’t expect to outsource the task to a service that does the same thing better, faster, and more efficiently and assume it will cost peanuts. The reality in the monitoring world is that you get what you pay for. You can pay as little as $5,000 a year, but you’ll have to do most of the work yourself. A monitoring and measurement service that comes with account management can range from $20,000 to $500,000, depending on the number of items collected and analyzed, the degree of customization, and the frequency of reports. Do NOT be the Fortune 500 client I once had who asked me to prepare a proposal to monitor, measure, and evaluate its media coverage in 10 different countries, against 5 different competitors, and then admitted three months later that the budget was only $25,000.

10. Trying to Compare Apples to Fish

Given that there are 450 different vendors providing some form of media monitoring and measurement, the easiest mistake to make is to believe that they all do the same thing. They do not. Some are designed around vertical markets, some are designed for large enterprises, and most do a few things well. Some are great at news monitoring; others are great at social media analysis and managing your conversations. Some may be great at local coverage but lack the resources to conduct national or international monitoring or measurement. Make sure the vendors you are talking to have the necessary resources and experience in your industry, and are really good at what you need them to do.

Bottom Line: Of all the PR measurement atrocities, the most egregious may still be not measuring at all or merely counting clips.
