Measuring PR: What Comes After AVE?
By Hannah Baker.
For years the communications industry has debated the use of advertising value equivalency (AVE), which persists despite obvious deficiencies. But will a new and interactive measurement tool finally kill off AVE? Or is chasing a single, catch-all solution a fool’s errand? And what does the future hold for a sector set on proving its worth?
In 2011, US PR strategist Shonali Burke began working on a campaign for USA for UNHCR — a non-profit organisation helping people displaced by violence, conflict and persecution. The organisation wanted to increase awareness of the plight of tens of millions of refugees across the globe and raise funds by selling small blue keys to highlight that displaced people do not have a key to their own home.
Burke’s first steps were to work out the campaign’s measurable objectives, so she asked how many keys USA for UNHCR wanted to sell; the goal was 6,000 in 12 months. Burke then agreed on a strategy that involved recruiting and developing an online community of digital influencers, dubbed ‘blue key champions’, to promote the issue and write blog posts. She also organised a 12-hour ‘tweetathon’ in the lead-up to World Refugee Day, tracking how many tweets included #bluekey.
As a result, more than 6,000 keys were sold; 113 blue key champions were recruited; the initial ‘tweetathon’ increased traffic to the campaign microsite by 169% on just one day (there were more than 1,000 tweets); and the organisation was provided with a list of prospective donors. In short, the campaign was a success. There’s one thing Burke is quite clear about: at no point in evaluating her campaign did she use advertising value equivalency (AVE), which calculates what editorial coverage would cost if it were a piece of advertising.
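AVE has no single standard formula, but the naive version critics object to simply prices editorial space at the advertising rate card, sometimes inflated by an arbitrary 'PR multiplier'. A hypothetical sketch, with illustrative figures that are not from the article:

```python
def ave(coverage_column_inches: float, ad_rate_per_inch: float,
        multiplier: float = 1.0) -> float:
    """Advertising value equivalency: what the editorial space would have
    cost if bought as advertising, optionally inflated by an arbitrary
    'PR multiplier' -- one of the reasons the metric is criticised."""
    return coverage_column_inches * ad_rate_per_inch * multiplier

# A 20-column-inch article in a paper charging $150 per column inch:
print(ave(20, 150))        # → 3000.0
print(ave(20, 150, 3.0))   # → 9000.0 (with a common but baseless 3x multiplier)
```

The arithmetic is trivial, which is part of the metric's appeal: it produces a pound or dollar figure at no cost, regardless of whether the coverage moved anyone to act.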
“How would AVE have helped?” asks Burke. “The client didn’t ask for it, but even if they had, we wouldn’t have used it. Donald Trump is supported by a large number of Republicans saying that it’s okay to build walls around the USA — does that make that right? No. It’s the same with AVE.”
Like many PR professionals, Burke is tired of debating the issue that hovers like a persistent dark cloud over the profession, occasionally unleashing a monumental downpour.
PR professional and Forbes contributor Robert Wynne caused a Twitter storm after writing a book chapter on the benefits of AVE earlier this year: “I obviously touched a nerve… only in the PR industry can you write a chapter demonstrating PR is at least five times more valuable than advertising and get criticised.”
Daring to counter the view that AVE is a tired, out-of-date measure, Wynne believes that practitioners should listen to clients, not committees: “If [clients] want to pay someone to figure out an outcome based on suggestions and recommendations, great. If they want to benchmark versus the cost of advertisements, that’s fine as well.”
Data from media-measurement company Kantar Media reveals that Wynne is not alone. Of the company’s 1,000 analysis clients (95% of Kantar Media’s evaluation clients are in-house professionals), 25% are still requesting AVE figures.
“It ain’t dead yet,” says Marcus Gault, managing director of Kantar Media. “The metric is flawed, but it’s delivered at no cost and it’s understood by non-communications people. You get a more valid metric by valuing coverage according to the quality, but clients often demand AVE for budgetary reasons or continuity within their organisation,” he explains.
Stephen Waddington, chief engagement officer at global PR agency Ketchum, is not impressed: “People who are addicted to drugs will be after drugs, but it doesn’t mean we should supply them.”
In recent years, the industry’s leading bodies — including CIPR, the International Association for Measurement and Evaluation of Communication (AMEC) and the Public Relations Consultants Association — have made great strides in dissuading practitioners from using the AVE metric, from penalising awards submissions that include AVE to developing the Barcelona Principles.
These principles, drawn up at AMEC’s 2010 summit in Barcelona and revised in 2015, are guidelines for measuring PR campaigns. They are used, says Ketchum chief executive David Rockland, who led their development, by the likes of General Motors and the UK Cabinet Office. “It’s not a formula; PR measurement is complicated. This is not a one-size-fits-all, simple thing to do,” he says.
One of the principles states that AVE figures “are not the value of communications”. So what is?
The Government Communication Service Evaluation Framework — a PDF guide that can be used when planning activity and setting metrics to track success — is, arguably, a catch-all measurement solution. The document sets out the different disciplines within communications — internal, stakeholder, marketing, media and digital — and asks users what metrics they want to track in each area.
“These are just typical metrics, but, in almost every case, you would be able to select some metrics from those that will be applicable to your campaign that will provide good outcome figures,” explains Elayne Phillips, head of insight and evaluation, Prime Minister’s Office and Cabinet Office Communications, who was involved in leading the development of the framework.
An early demonstration of its application has been in measuring the effectiveness of the government’s 5p carrier bag campaign, launched in October 2015, which aims to reduce consumption of plastic bags in English supermarkets by 80% in 12 months. There was a 78% reduction in single-use carrier bags in Tesco stores in the first two months alone.
“The focus has to be about what you want to actually achieve and making sure communications is contributing to those results,” adds Phillips.
Sandra Macleod, director of Reputation Dividend, instinctively mistrusts the one-size-fits-all concept: “I have yet to see one universal solution for measuring PR’s contribution. Understanding where you are at the start of the campaign and how things have changed because of professional PR is what people need to think about.”
All good businesses have a planning process to reach their end goals — and it should be the same with PR, says Waddington. It is crucial to set out objectives at the start of a campaign and align metrics to those goals.
Campaigning to get a planning application through a local authority is just one example: “You need to persuade the local community of the value of the application. How do you measure if your campaign is successful? You look at the number of objections you receive and the support you get and, ultimately, whether the application is accepted or not,” he explains.
The most compelling way to persuade clients to ditch AVE, according to Waddington, is to ensure the contribution made through PR corresponds to the goals of the business in financial terms. Burke agrees: “Talk to clients about how they can make money for the business.”
New industry framework
In June 2016, at its international summit in London, AMEC will launch a new interactive measurement tool. Richard Bagnall, chief executive of Prime Research and board director of AMEC, who is leading the project, says the tool is a development of the social media monitoring frameworks that the organisation created two years ago.
“We were telling people to think about their objectives and the channels they were planning to use, then set targets for each channel and measure across that. But we realised that practitioners shouldn’t just be looking at social media; campaigns need to be measured across integrated media, which is paid, earned, shared and owned.”
The integrated communications measurement framework (ICMF), which will be free to use, allows users to fill in their own campaign details, set objectives and build a measurement plan in advance, offering tips and support along the way.
The framework has been through an iterative and exhaustive testing process, explains Bagnall, who has been working with a group of international experts, including in-house and agency professionals, measurement vendors, academics from Henley Business School and the University of Technology Sydney, and members of the Cabinet Office.
“It’s a way of putting in different data points from across your organisation and your suppliers in one consistent story,” explains Bagnall. “We’ve had input from as broad a church as we can, because if people keep reinventing the wheel and coming up with different approaches, all you end up with is a marketplace that’s confused.”
But will AMEC’s framework finally kill off AVE? “AVE won’t go away overnight,” says Bagnall, who would like to see more education within the sector around meaningful measurement approaches. The framework, he says, is “better but not an AVE killer on its own”, because it isn’t an instant answer to the success of a measurement programme, it’s not a single number and it still involves users having to plan, tailor and measure against bespoke objectives.
Jon White, a consultant and professor at Henley Business School and Cardiff University, doesn’t think it matters much whether the industry waves goodbye to AVE or not, saying the measure is too flawed to be taken seriously.
“In our attempts to understand the value of what we are doing, [AVE] is a calculation that can be done, but, of the calculations that you can carry out, this is really not at all useful, because of the shaky assumptions on which it is based.”
He is more interested in looking at results in terms of behaviour: “You can do a much more thorough statistical analysis to find out from people who have changed their behaviour what the influences were on the change of behaviour reported or observed.”
He admits he is bemused by the Barcelona Principles and is sceptical of AMEC’s work, believing the organisation has a “vested interest” in tying the PR industry to a particular approach to measurement.
“Because [AMEC] is all about measurement and evaluation of communication,” he says, “it is not going to be so helpful when it comes to understanding how PR works to bring about behavioural change. [AMEC] sees PR as a communication practice, rather than as a practice that makes use of communication to bring about changes in relationships. It is only one contributor to the debate on measurement and evaluation and is too narrowly focused on communication.”
Orin Puniello had never worked in PR when he joined Ketchum’s New York City office in 2015. A specialist in research, statistics and analytics, he’d previously been employed at the Bloustein Center for Survey Research at Rutgers University in New Jersey — and was brought in to ‘beef up’ Ketchum’s statistical-analysis offering.
Responsible for the design and execution of predictive analytic modelling, Puniello applies behavioural science — the study of human behaviour — to PR through the use of sophisticated statistical analysis.
The idea is to understand how someone’s purchasing habits, reputation or behaviour are affected by different combinations of variables: channels (such as Twitter or print media), messengers (the people doing the talking) and messages (what’s being said). “My job is to look at causal relationships and find out why something is happening,” explains Puniello.
A large US healthcare institution engaged Ketchum to track brand reputation and optimise communications strategy. Applying statistical analysis to data he gathered through surveys, Puniello was able to work out that, if the client’s visibility in daily newspapers increased by 10 percentage points from 11% to 21%, the likelihood of travelling to one of the client’s healthcare centres could increase from 7.95 to 8.05 on a 10-point scale.
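The reported numbers imply a simple marginal effect: a 10-point visibility gain moving likelihood by 0.1 on a 10-point scale. A minimal sketch of that arithmetic, assuming (our simplification, not Puniello's stated method) a linear relationship between the two:

```python
# Figures below are the ones reported in the article; treating them as a
# single linear coefficient is an illustrative assumption on our part.
baseline_visibility = 11.0   # % visibility in daily newspapers
new_visibility = 21.0
baseline_likelihood = 7.95   # likelihood to travel, on a 10-point scale
new_likelihood = 8.05

# Implied marginal effect per percentage point of visibility:
coef = (new_likelihood - baseline_likelihood) / (new_visibility - baseline_visibility)
print(round(coef, 3))  # → 0.01

# Under a straight-line extrapolation (a strong assumption), a further
# 5-point visibility gain, to 26%, would predict:
predicted = baseline_likelihood + coef * (26.0 - baseline_visibility)
print(round(predicted, 2))  # → 8.1
```

In practice a model like Puniello's would estimate such coefficients jointly across many channels, messengers and messages from survey data, rather than from a single before/after pair.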
“I was able to show the client which channels were most positively impacting their reputation and explain what people’s key behaviours were as a result of viewing the client via different channels,” says Puniello.
“[Behavioural science] allows you to say to your client: ‘if you want the best bang for your buck, here’s who you need to say what, in what place, to get that to occur,’” explains Rockland, who believes behavioural science is one of the greatest and most rapidly growing aspects of PR measurement — and far more valuable than AVE.
The examination of human behaviour has been carried out in other industries for several decades, but was only brought to the fore in communications in 2008 with the publication of Nudge: Improving Decisions about Health, Wealth, and Happiness by US academics Cass Sunstein and Richard Thaler.
Simon Maule, director of communications agency Linstock, is a fan of the book and passionate about using behavioural science to measure communications, believing it provides a much richer understanding of how people think and act: “It’s about trying to get inside people’s heads. If you want [people] to do something, what are the barriers preventing them doing that?”
Maule believes the public sector is particularly adept at using behavioural science, pointing to how HMRC played with different ways of asking people to pay them: “In the past [HMRC] was very officious — ‘you must pay or you will be fined’ — but now it’s more personal. It uses friendlier language and talks about easier ways people can pay. The result? People are paying more quickly.”
Behavioural science may be a highly effective form of measurement, but most industry professionals do not have a background in statistics (Puniello is studying for a PhD in survey research alongside his day job). So is it realistic to expect agencies and clients to develop behavioural change-induced targets?
There are simple ways to relate behavioural science and PR activity, says Maule. One of the easiest is to observe people’s behaviour before and after a campaign. “Do some social listening, because then you’re listening to actual conversations, and conduct interviews with people afterwards to try to gauge what’s happened,” he advises.
It’s also just one tool in the measurement arsenal. “[Behavioural science] should be used with other approaches. I’d never recommend that it’s just used on its own,” adds Puniello.
AVE may not be dead, but it is not the answer to evaluating PR. The industry is working hard to develop new, effective tools to evidence the value of comms. The ICMF, and others that will inevitably follow, may not be the last word in the evaluation debate, but they are useful milestones on the profession’s journey towards leaving AVE behind.