What works now: how can we use evidence more effectively in policymaking?

Evidence use in policymaking is nothing new. Policymakers, academics and professionals have been discussing it for the best part of ten years, and it has been a recurring theme, among other places, on this blog. Over the years various government initiatives have been set up to try to establish how best to use evidence and identify “what works” in relation to specific policy interventions, and “evidence-based” policymaking has become the catchphrase of policymakers across most sectors.

One of the newest books to be added to the Idox Information Service library reflects on the rise of “what works” as an approach to policy development. Building on discussions from the first edition, it provides a sector-by-sector breakdown of how evidence is – and could be – used in policymaking across areas such as health, the environment, education and criminal justice. It also offers some insight into appraising evidence and assessing its quality, as well as how evidence is used internationally, with examples from the USA, Australia, New Zealand and Scandinavia.

As one of our key aims is to support and facilitate the sharing and use of evidence in the public sector, this book has been a welcome addition to our collection.

Making use of research across policy

In 2013, the UK government launched the What Works Network, which is now made up of 10 independent centres committed to “supporting the creation, supply and use of evidence” in specific policy areas including crime and policing, education and economic growth. The centres aim to improve the way government and other organisations create, share and use (or ‘generate, translate and adopt’) high-quality evidence for decision-making. According to the UK government, the initiative is the first time a government has taken a national approach to prioritising the use of evidence in decision making.

What Works Now? highlights research by Weiss (1979), which suggests that there are seven types of research use:

  • Knowledge Driven – research will be developed, applied and used once it has been produced
  • Problem Solving – research will be applied directly to a particular policy problem in order to solve it
  • Interactive – research forms part of a wider web of knowledge, policy and other research which all interact with each other
  • Political – research could (and probably will) be used to retrospectively provide support for a policy decision which has already been made
  • Tactical – research can be used as a tool to delay or deflect from decision making or action around a particular issue (i.e. “more research is needed in this area”)
  • Enlightenment – research informs policy through encouraging people to think and discuss particular ideas or concepts in a different way
  • Embedded research – research production is embedded in a wider contextual system which includes political priorities, the law and the media

Building a research base to support “what works”

Creating and disseminating research effectively is cited as key to building a “what works” evidence base. A number of research institutes and think tanks contribute, alongside the real-life experiences of practitioners and other stakeholders, to efforts to establish the conditions which support effective interventions and lead to positive policy outcomes.

One of the big discussions at the moment is around the creation of academic research to support what works programmes. Exploring what sort of research is useful to practitioners and policymakers, and aligning this with the research agenda of academics and universities, can help to create an effective supply chain of evidence to inform policymaking. However, academics often do not engage with the policy process, and politicians sometimes politicise evidence, picking and choosing which findings to take notice of, which can distort the perception of what evidence is available in a particular area.

Encouraging fuller participation and a more robust appraisal of research from across the board is something which many institutions are working towards. Research impact and knowledge exchange are now integrated into research funding, and a growing number of people are working to feed research more effectively into the policy arena.

Evaluating research and evidence, and judging which to take forward to inform policy decisions, is also important. Alongside discussions around assessing and labelling evidence, the book considers how some of the main UK organisations concerned with promoting evidence-informed policy have gone about appraising evidence: weighing it up, assessing its quality and “fitness for purpose”, and taking account of non-research-based forms of knowledge and evidence, such as the personal experience of practitioners.

Applying “what works” in practice

Applying “what works” in practice can be a challenge, especially in a setting very different from the conditions of the study in which a particular intervention was shown to produce successful outcomes.

In the book, 10 guiding principles to support the use of evidence in practice are set out:

  • Translated – To be used research must be adapted and reconstructed to fit with local contexts. Simply providing findings is not enough
  • Ownership – People need to feel a sense of ownership over the research and its development if it is to be used
  • Enthusiasts – Individual “champions” can be useful in ensuring that research actually gets used
  • Local context – Local context must be taken into account, particularly in relation to specific barriers and enablers which might help or hinder change
  • Credibility – Credibility of researchers and the people who support the research is key to ensuring that the research is taken seriously
  • Leadership – Strong leadership provides motivation, authority and integrity in the implementation of evidence
  • Support to implement change – Ongoing support to implement change is important; this could include financial, technical, organisational or emotional support
  • Develop integration – Activities need to be able to be integrated with existing organisational systems and practices; changes do not happen within a bubble
  • Engage key stakeholders – To ensure effective uptake and buy-in, key stakeholders should be involved as fully as possible from the earliest possible stage
  • Capture learning/effective evaluation – Don’t forget the importance of evaluation: identifying what worked and what didn’t helps to share learning and support future projects

Final thoughts

In theory, using evidence to inform policy sounds straightforward. The reality can be quite different. What Works Now? highlights that the “what works” agenda remains dominant across the policy landscape, even if the application or approach to it differs from policy area to policy area.

What counts as evidence is still disputed; getting evidence “out there” and encouraging academics to be involved in the policy process is still hard to achieve (although there is good work being done in this area to try and combat this); and context is still key to making evidence work in a particular environment.

Understanding evidence, and how to use it effectively, has been a core aim of policymakers in the UK and across the world for many years. This book, and the supporting research outlined in it, highlight that while evidence is still at the fore of policymaking, actually identifying what works and putting it into practice remains more of a challenge.

Members of the Idox Information Service can log into our website to request a loan of “What Works Now?”

If you enjoyed this post, you may also be interested in:

A world of evidence … but can we trust that it is any good?

Follow us on Twitter to find out what topics are interesting our research team.

In support of qualitative research: the value of qualitative insight for policy formation

For many people, working in the tangible, measurable area of hard figures provides an element of certainty to decision-making processes. It is perhaps for this reason that quantitative research was, for many years, the largely uncontested preference of decision makers and of people looking for research to evidence their decisions.

Some criticisms that are often made of qualitative research are that it:

  • can be limited in scope and size and have a longer turnaround time than quantitative studies;
  • can be difficult to replicate and scale to achieve multiple results across multiple test sites;
  • can be too reliant on researcher interpretation, perception and experience, and therefore too exposed to bias and unreliability.

In contrast, quantitative research is stereotypically presented as producing results that are consistent and replicable; and therefore ‘higher quality’ and ‘more valid’.

A question of quality

Qualitative research has always suffered from a reputation for being less rigorous. Instead of dealing with numerical data, it deals with the more human side of research and the effects of a programme: people’s reasoning and understanding, and the emotional implications of an intervention. As a result, qualitative researchers approach their subject from an entirely different epistemological standpoint (i.e. they have a different view of what ‘knowledge’ is, what should be judged as evidence, and what should not).

This challenges the understanding of what is meant by “research standards”. While quantitative researchers base their understanding on demonstrable results which can be proven and replicated to the same standard, qualitative research brings to the fore questions of researcher subjectivity, the validity and reliability of results, and the ethics of research. It also stresses the importance of gaining results not only by measuring information, but by examining and interpreting experiences and social contexts. It considers the influence of these social factors on policy outcomes, rather than relying on categorical measurement against a predefined scale.

A mixed methods approach

While this traditional dichotomy between qualitative and quantitative research methods is convenient, in practice many organisations value the social dimension, especially when seeking to understand the impact of policy interventions.

In fact, many local authorities and public sector bodies, including the Greater London Authority (GLA), are increasingly looking to qualitative researchers to form part of their wider research teams. In a policy context, it is clear that qualitative research has its place alongside quantitative research as part of a mixed-methods approach to evidence-based policymaking.

Some of the things that qualitative research brings to policy research are: a flexible research method (the methods of collection and analysis can easily be changed as the research is being conducted and the data emerges); and very rich data (if done correctly, one study could provide research data for a number of research tasks).

It allows for a deeper understanding of what lies behind results – not just that something has had an impact, but why. It also allows researchers to understand social phenomena from an individual perspective and consider the specific contexts and conditions which have contributed to them (for example, the experience of stigma or discrimination).

Giving a voice to marginalised groups

Qualitative research is also finding a role in evaluative research teams, looking at the meanings and constructions which made an intervention effective or ineffective and potential steps to make it more successful in the future.

One example is in engaging with hard-to-reach or marginalised people. While quantitative research might tell you that certain groups of people engage less frequently in community groups, for example, qualitative research can help to explore what motivates people to engage, and therefore tease out potential methods to increase engagement. Qualitative research also allows bespoke research questions on niche topics to be created and explored thoroughly. This can be useful for local authorities who wish to explore issues at a local level with specific communities which would not necessarily be distinct within wider national quantitative data sets and statistics.

Supporting the personalisation agenda with bespoke research

Qualitative research is also becoming more popular as a supplementary option to hard data and raw statistics because of the increased importance which is being placed on individual experience and personalisation in public services, particularly, but not exclusively within health and social care. Qualitative data allows researchers to get an in-depth view of how people experience services.

While it will never be a replacement for the empirical data produced by quantitative research, qualitative data brings its own benefits and enhances understanding around policies in a way that hard data often cannot. It encourages professionals to think beyond figures as a benchmark for outcomes. It also allows them to gain rich data on the experiences of marginalised groups in society, who often go unrepresented in large national quantitative data sets.


Follow us on Twitter to see what developments in public and social policy are interesting our research team.

 

Implementation science: why using evidence doesn’t guarantee success


Using evidence in policy making is not a new concept. In recent years it has become commonplace across all areas of policy in the UK, with the introduction of the What Works centres being just one example of this. Policy makers also use evidence to defend the rationale of their initiatives and programmes. But a large evidence base does not necessarily guarantee a successful outcome for a programme or initiative. Without an effective implementation strategy, evidence might as well not exist.

Linking evidence use to implementation within policy is one of the key challenges for policy-makers and those on the frontline of service delivery. Implementation science is an emerging discipline which looks at the nature of implementation, and how it can affect the success of a programme or policy.

Introducing the Hexagon Tool

This tool was developed by the National Implementation Research Network. It outlines six broad factors that should be considered to promote effective implementation of programmes. Although it was designed in a US context for application at state and district levels, many of its ideas about what makes for good implementation are relevant more broadly.

  1. Needs (of service users) – consider how well the programme or practice being implemented might meet identified needs
  2. Fit – with current and pre-existing initiatives, priorities, structures, support, and local community values and context
  3. Resource availability – for training, staffing, technology supports, data systems, and administration
  4. Evidence – indicating the outcomes that might be expected if the programme or practice is implemented well (assessment criteria)
  5. Readiness for replication – including any expert assistance available, the number of existing replications, examples of best practice for observation, and how well the programme is operationalised
  6. Capacity to implement – as intended, and to sustain and improve implementation over time

The Hexagon Tool
Blase, K., Kiser, L. and Van Dyke, M. (2013) The Hexagon Tool: Exploring context. Chapel Hill, NC: National Implementation Research Network, FPG Child Development Institute, University of North Carolina.

In addition to the hexagon, other useful frameworks for implementation exist. Some are more practical and others are more conceptual. These may link to theories underpinning the practice of implementation of programme strategies, or discuss the idea of values within systems.

However, frameworks only provide some of the knowledge and infrastructure for implementation. They do not take account of the skills, abilities, values and existing experience of “implementers”. All of these can have a significant impact on how a programme or strategy is implemented.

Systems change and innovation

Implementation science has previously focused on changing the behaviour of individual practitioners. However, unless you change the understanding of the wider structures and systems, and implement whole system change, you won’t achieve practitioner change.

Alignment is important both within organisations, in a hierarchical sense, and across systems, in order to create coherence across services. Many service users receive support simultaneously from a number of different organisations. Implementation scientists stress that it is important to align the funding, outcomes, compliance requirements and overall goals of parallel organisations in order to implement programmes effectively. This can be a major challenge.

One reason why this can be so challenging is the difference in values and experiences of the individual front line workers implementing a new programme on the ground. Teachers have a very different understanding, training and set of experiences relating to children than those of social workers, or those who work in youth criminal justice. The inherent and fundamental philosophical beliefs which drive the practice of different professionals will have an impact on how they implement a programme, regardless of how thorough guidelines are.

This, implementation scientists suggest, needs to be taken into account, with steps taken to align the thinking of different professionals and agencies more closely (interagency working) in order to implement new programmes effectively and coherently.

Evidence is contextual

Implementation science raises some interesting points about how to facilitate change and implement new initiatives. It reminds us that no intervention – no matter how much evidence has been produced in support of its effectiveness elsewhere – is guaranteed to be a success. It highlights the often overlooked elements of intervention strategies, such as the need to be aware of context, of the values of the people implementing the changes, and of those affected by them.

Finally, it highlights the need to encourage wider structural and systems change, rather than just changing the behaviour of individual practitioners. This is the way to ensure lasting, sustainable and successful implementation of evidence-informed policy interventions and programmes.


Read some of our other blogs on evidence use in policy: