Do PR people practise what they preach on evaluation? Yes, No or Maybe?
Thirty-five years ago, public relations scholar James Grunig made this cri de coeur in the context of PR practitioner attitudes to measurement and evaluation:
'I have begun to feel more and more like a fundamentalist preacher railing against sin; the difference being that I have railed for evaluation in public relations practice; just as everyone is against sin, so most people I talk to are for evaluation. People keep on sinning, however, and PR people continue not to do evaluation research.'
Do we need to continue to preach the benefits of evaluation purity or have all public relations practitioners seen the light? Well, it’s a ‘Maybe’ from me. Significant progress has been made in recent years, but in the secrecy of the confessional I suspect that more practitioners than we expect would admit to succumbing to temptation and deviating from the path of best practice.
Let’s start by accentuating the positive with a quick summary of what’s gone well. It took a while for momentum to build following Grunig’s plea: until the mid-1990s the consensus was that PR practitioners and their employers/clients were not taking evaluation seriously. Attitudes changed with the dawn of the 21st century, culminating in the seminal event that was the AMEC Measurement Summit in Barcelona, June 2010.
The importance of the Barcelona Principles adopted at this summit (and updated in 2015) was not so much the content – being very unfair, the principles could be described as statements of the glaringly obvious. What was important was, first, that the measurement and evaluation industry had come together to agree and commit to them; and second, that their adoption was just the springboard for the initiative.
A lot of excellent work has gone into ‘operationalising’ – putting into practice, to you and me – the Barcelona Principles. First came the launch by AMEC in 2016 of the now well-established Integrated Evaluation Framework (IEF), an excellent tool that is both useful and free. Second, the Measurement Maturity Mapper (M3), launched this Measurement Month and discussed elsewhere.
On the surface, all this excellent work by AMEC and others suggests the answer ‘Yes’ to the question of whether PR people do valid evaluation research. And certainly, at one level all seems well in the evaluation firmament: the Framework underpins and brings coherence to strategic communications planning, while the Mapper takes things further by benchmarking measurement and evaluation by market, sector and organisation type/size – warming the cockles of my researcher heart.
But under the surface, all is not well. Sitting in the confessional, we find technicians paying lip service to levels of practice they do not then implement. Two areas are particularly sinful: horrible objectives and the curse of quantification.
If you look at case studies published as examples of good practice and/or campaigns submitted as award entries, however reputable the originator or prestigious the awards, more likely than not the objectives will be problematic. For example, I would like to assume that all these ‘good practice’ cases feature objectives that are SMART (Specific, Measurable, Achievable, Relevant, Time-Bound). But even this basic tenet of good practice in objective setting is honoured more in the breach than the observance.
Below are three further all-too-common problems found in objective setting, together with examples in italics. The examples are taken from a published case study, with changes made to protect the guilty. It will not take you long to conclude that none of the three objectives can be described as SMART.
- Sinful objective #1: objectives that describe part of the process rather than impact
Test the possibility of engaging school children as a way of increasing purchase intent among families
- Sinful objective #2: objectives that play the substitute game
Work with [random celebrity] to secure 12 national media placements
- Sinful objective #3: objectives that incorporate social media for the sake of it
Use campaign activities to drive 20,000 incremental views on the website
The curse of quantification first reared its ugly head with the advent of Advertising Value Equivalents (AVEs), which are now no longer acceptable in measurement and evaluation.
Quantification can be useful: measuring outcomes and establishing baselines, for example. It is also tempting for any corporate function to quantify what it does, as numbers may be more easily understood by the rest of the organisation – provided they make some sort of sense.
But addiction to quantification can be harmful and the Barcelona Principles call for measurement and evaluation to use both qualitative and quantitative methods. A prime example that many in the measurement and evaluation sector have seized upon is Opportunities to See (OTS): a measure of how widely and often content has been distributed. (Note that OTS are known as impressions in some parts of the world – particularly North America.)
The drawbacks of OTS/impressions are best illustrated by traditional media. With print, they are calculated from readership figures; with broadcast, from viewer or listener numbers. The main problem lies with the word ‘opportunity’: without further research, it is not possible to establish, for example, that a newspaper reader has attended to, remembered, absorbed and digested any particular item of content.
The seductiveness of OTS leads to sinning on a significant scale. An assumption is made that all readers have read and reflected on our earned content. This leads to metrics such as ‘cost per contact’ – campaign budget divided by the number of OTS/impressions. Here we have the curse of quantification flavoured by the substitute game: no contact between the reader and our content or message can be guaranteed.
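To see how this substitution flatters a campaign, here is a minimal sketch of the ‘cost per contact’ arithmetic. All figures are hypothetical, invented for illustration – including the attention rate, which in practice would need to come from further audience research.

```python
# Hypothetical campaign figures for illustration only.
budget = 50_000.0   # campaign budget in GBP (assumed)
ots = 2_000_000     # opportunities to see / impressions (assumed)

# The 'cost per contact' metric treats every OTS as a real contact.
naive_cost_per_contact = budget / ots
print(f"Naive cost per contact: £{naive_cost_per_contact:.4f}")

# Suppose audience research found that only 5% of readers actually
# attended to the content (an assumed figure, not a real finding).
attention_rate = 0.05
actual_contacts = ots * attention_rate
true_cost_per_contact = budget / actual_contacts
print(f"Cost per actual contact: £{true_cost_per_contact:.2f}")
```

Under these assumptions the naive metric reports £0.025 per ‘contact’, while the research-adjusted figure is twenty times higher – the gap between an opportunity to see and a genuine contact.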
A sin indeed, so now the answer is a ‘No’ from me.
Paul Noble leads PR Academy's AMEC International Certificate in Measurement and Evaluation.
For more on objective setting, see the PR Place Guide to Developing and writing a communication strategy.