The idea of giving a clever man a desk in Whitehall is outdated, argues Geoff Mulgan in the third of our series on scientific advice. We need to take seriously the evidence about evidence
Governments should want and even crave the best possible scientific advice. With reliable knowledge come better decisions, fewer mistakes and more results achieved for each pound spent. In many respects it’s remarkable that only now are governments setting up and funding centres dedicated to assessing and communicating “what works”.
But knowledge is not always easy to use or to digest. The most authoritative advice may be uncomfortable, or at odds with what the public want. What’s recommended may appear inordinately expensive, with uncertain benefits to be reaped in the distant future. Harassed ministers with a low tolerance for ambiguity may be frustrated when they’re told how uncertain the knowledge is. And experts, too, can be fallible, perhaps more than they would like to admit.
We should certainly want evidence to be more visible, more influential and better used. The drive to create institutions to promote evidence, such as the Alliance for Useful Evidence, and the new What Works centres, is a vital part of making government more competent, and more deserving of trust. But anyone concerned to promote evidence needs also to be attuned to the subtleties, and to the many reasons why the pure ideal of government guided by wise elders is neither possible nor desirable.
Evidence about evidence
To help us understand what kinds of evidence are most useful, and most likely to be used, there is, luckily, a science as well as a craft of scientific advice itself. Yet much of the commentary on scientific advice appears unaware of the extensive research into why certain kinds of knowledge and advice are acted on, and others are not.
I’m one of the guilty, in that I was for many years a champion of rational advice in government, but only belatedly caught up with the evidence about evidence. As a civil servant in charge of the government’s Strategy Unit, I brought in many people from outside government, including academia and science, to work in the unit, dissecting and solving complex problems from GM crops to alcohol, nuclear proliferation to schools reform. In our work we promoted rigorous analysis, and wherever possible published surveys of evidence. We encouraged better skills of modelling and quantitative analysis. And we prompted departments to undertake more rigorous mapping of future possibilities.
All of these were valuable counters to the influence of spin doctors and tacticians. Some were well ahead of their time – including work on topics such as behaviour change, happiness and systemic change. But I also learned that it’s not enough to bring clever people into government, or for advice to be rigorous and rational. Methods of this kind survive only as long as there is a political appetite for them, and the conditions in which they thrive may even be quite unnatural.
That prompted me to ask what was known about the role of knowledge in government. It’s true that the science of scientific advice is patchy. There have been a few randomised controlled trials (RCTs) to test how knowledge is taken up within professions (and why even apparently compelling evidence is often ignored). But the study of high-level advice has fallen more to historians and political scientists, and to experts in the burgeoning study of knowledge itself. From their work, a reasonably coherent picture has built up of how knowledge is formed, exchanged and used in practice, both on the frontline and within policy.
What their research shows is not definitive, but it is clear, and its consistent message is that the effectiveness of advice doesn’t depend greatly on the cleverness of the person giving the advice or even the logical cogency of their arguments. Instead it matters a lot who gives the advice – and whether they are trusted and reputable. It matters how advice is given, and in particular how it is framed – preferably fitting the cognitive style of the receiver, and with a tone that is neither hectoring nor patronising.
It matters when the advice is given – either in the heat of a crisis or emergency, or when an issue is salient. And it matters where the advice is given – the most influential scientists have usually installed their offices close to those with the greatest power, or ensured plenty of physical interaction (for example at conferences or on study trips).
Perhaps the most important finding of almost all research on this topic is that demand matters as much as supply. The most brilliant advice may go wholly unheeded if it’s not fitted to the social context of decision makers, the psychology of people making decisions in a hurry and under pressure, and the economics of organisations often strapped for cash. What works for whom and in what circumstances are crucial factors; and evidence and advice have to make themselves useful if they are to be used.
That demand is as likely to happen on the frontline as in Whitehall. Evidence-based practice tends to matter more than evidence-based policy, which may be why the National Institute for Health and Care Excellence focuses exclusively on what doctors do and prescribe. For a field like policing, it probably matters even more that police officers learn about evidence from the start, and engage with it throughout their careers, than that Home Office officials draw on evidence before they shape new laws. And for all professions it matters that there are opportunities – for example in study circles – for professionals to engage with recent research and discuss its relevance.
So how should advisers raise the odds of having impact – and of being useful? In my experience, the successful ones understand two fundamental aspects of the context in which their advice will be heard, both of which are radically different from the cultures they are likely to have experienced for most of their careers outside government.
The first is that they are operating in a context where there are often multiple goals and conflicting values. As a result, there may often not be a single right answer (though there may be any number of demonstrably wrong answers). Instead there will be right answers that are more or less aligned to the priorities of government (and of the public). The better the providers of advice understand decision-makers’ perspectives and needs the more likely they are to be influential.
Take energy. I twice had to oversee reviews of energy policy, and in each case the scientific analysis of such things as potential energy sources, current and future renewables, or carbon scenarios had to be linked to the very different goals of ensuring affordable energy, securing supply, and protecting the world from catastrophic climate change. Scientific method cannot tell us which of these goals is more important. This is a matter for judgement and wisdom – and as the study of wisdom tells us, wisdom tends to be context-specific, rather than universal like natural science.
The second vital, but not always obvious, point is that governments have to deal with multiple types of knowledge. A minister making decisions on a topic such as the regulation of pesticides or badger culls may need to take account of many different types of knowledge each of which is provided by a different group of experts. These include: evidence about policy, such as evaluations of public health programmes; knowledge about public opinion, and what it may or may not support; knowledge about politics, and the likely dynamics of party or parliamentary mood; intelligence, whether human or signals; statistics; economics; history; knowledge about civil service capacities; and performance data, for example on how hospitals or police forces are doing.
Trump cards and clever chaps
Formal scientific knowledge sits alongside these other types of knowledge, but does not automatically trump them. Indeed, a politician or civil servant who acted as if there were a hierarchy of knowledge with science sitting unambiguously at the top would not last long. The consequence is that a scientist who can mobilise other types of knowledge on his or her side is likely to be more effective than one who cannot; for example, by highlighting the economic cost of future floods and their potential effect on political legitimacy, as well as their probability.
These points help to explain why the role of a chief scientific adviser can be frustrating. Simply putting an eminent scientist into a department may have little effect if they don’t also know how to work the system, or how to mobilise a large network of contacts. Not surprisingly, many who aren’t well prepared for their roles as brokers feel that they rattle around without much impact.
For similar reasons, some of the other solutions that have been used to raise the visibility and status of scientific advice have tended to disappoint. Occasional seminars for ministers or permanent secretaries to acclimatise them to new thinking in nanotechnology or genomics are useful but hardly sufficient, when most of the real work of government is done at a far more junior level. This is why some advocate other, more systematic, approaches to complement what could be characterised as the “clever chap” theory of scientific advice.
First, these focus on depth and breadth: acclimatising officials and politicians at multiple levels, and from early on, to understanding science, data and evidence through training courses, secondments and simulations; influencing the media environment as much as insider decision making (since in practice this will often be decisive in determining whether advice is heeded); embedding scientists at more junior levels in policy teams; linking scientific champions in mutually supportive networks; and opening up more broadly the world of evidence and data so that it becomes as much part of the lifeblood of decision making as manifestos.
Here the crucial point is that the target should not just be the very top of institutions: the middle and lower layers will often be more important. A common error of perspective among eminent people in London is to overestimate the importance of the formal relative to the informal, of the codified relative to the craft.
Second, it’s vital to recognise that the key role of a scientific adviser is to act as an intermediary and broker rather than simply an adviser, and that consequently their skills need to be ones of translation, aggregation and synthesis as much as deep expertise. So if asked to assess the potential commercial implications of a new discovery such as graphene, the potential impact of a pandemic, or the potential harms associated with a new illegal drug, they need to mobilise diverse forms of expertise.
Their greatest influence may come if – dare I say it – they are good at empathising with ministers who never have enough time to understand or analyse before making decisions. Advisers who think that they are very clever while all around them are a bit thick, and that all the problems of the world would be solved if the thick listened to the clever, are liable to be disappointed.
One reason that I’m optimistic is that in the coming period we may see a revolution in how evidence feeds back into decision making, thanks to the proliferation of data, new tools such as semantic analysis of social media, and the spread of sensors through the Internet of Things. At the very least it’s likely to become more natural for professions like teaching or the police to be influenced by data – whether it’s real-time personalised feedback on how individual pupils are faring, or data on crime patterns in neighbourhoods. Some schools already have journal clubs where teachers read, and discuss, the latest research. The Narayana Hrudayalaya hospital in India is famous for requiring doctors to meet weekly to discuss performance data – something that’s normal in a Japanese car factory but oddly alien to many professions.
This is also an era when the scientific method is becoming normal well beyond the confines of the university. Firms like Amazon and Google use thousands of RCTs to evaluate new services; individuals monitor their own bodies; and everyday life is being reshaped by a flood of data and feedback. In this context, scientific advice has many allies, and is going with the grain of a more reflective, more data-savvy culture.
The authority of chief scientific advisers will often depend on how well they provide useful answers at moments of crisis, when a minister or prime minister needs to know how to cope with an epidemic or natural disaster. But advisers don’t, and shouldn’t, only offer answers. I remember Margaret Thatcher’s chief scientific adviser saying that what she had really valued was better questions, more than better answers.
In optimistic moments, I hope that we are moving towards a period of more overtly experimentalist governance, where governments are willing to test their ideas out – to run RCTs and embed continuous learning and feedback into everything they do. Experimental government would certainly be better than government by instinct, government by intuition and government solely guided by ideology.
In such a context, the old model of a clever man given a desk in Whitehall, sitting in a corner writing memos, looks even more anachronistic. We will still need highly intelligent, eminent experts to guide decisions. But we need to pay more comprehensive and sophisticated attention not only to the supply of useful knowledge, but also to how that knowledge is used. By doing this, governments and advisers can make better-informed decisions, make fewer mistakes and respond better to the complex problems they face. But let’s be as serious in making use of the evidence about evidence as we are about the evidence itself.
Geoff Mulgan is chief executive of Nesta, the UK’s innovation foundation. He is on Twitter @geoffmulgan. This article is from the book Future Directions for Scientific Advice in Whitehall (edited by Robert Doubleday and James Wilsdon) which is free to download here from 18 April 2013