Different strokes for different folks


Welcome to the second blog in our series on Demystifying Evaluation from Karl McGrath and Dearbháile Slane. Here we look at some key concepts in evaluation, examine different types of evaluations and consider when and why they each might be most useful.

How do you choose which type of evaluation you need?

The questions that an evaluation is tasked with answering are central to virtually all the methodological decisions that an evaluator will make.  

Some types of evaluations we’ll look at are better suited to some types of questions than others. However, different types of evaluation can be mixed and matched together, depending on the questions to be answered and the purpose and circumstances of the evaluation.

Decisions on the specific form and scope of an evaluation will depend on:

  1. Its purpose(s) and audience
  2. Capacity and resource availability
  3. The nature and scope of the ‘intervention’ being evaluated
  4. The complexity of the context within which the intervention takes place
  5. The stage of project implementation.

What are some of the different types of evaluation?


1. PURPOSE - What do you want to know and why do you want to know it?

Formative evaluations are intended to help improve the design, implementation, and outcomes of an ongoing intervention. The lessons learnt through formative evaluations can also be used to improve future interventions.

In contrast, a summative evaluation is intended to assess the impacts, effects and overall merit or worth of a completed intervention, but not usually to improve its implementation. Summative evaluations are sometimes used to help decide whether the intervention should be continued and/or replicated in other settings or on a larger scale.

Case study

Between 2021 and 2024, CES conducted an evaluation of 3 Local Community Safety Partnership (LCSP) pilots for the Department of Justice (DoJ). An important purpose of the evaluation was to inform the wider roll-out of LCSPs nationally after the pilots ended. The evaluation tracked the pilot sites at regular intervals over 2 years and reported findings both while the pilots were ongoing and at the end. The kinds of questions the evaluation tried to answer included:

  1. How did the LCSPs conduct their work?
  2. What was the impact on local communities?

The evaluation was both formative and summative. It was formative because findings were reported while the pilots were ongoing, to help improve their implementation and operation in real time. It was summative because it also tried to assess the overall effects of the LCSPs by the end of the pilots.

Process evaluations (sometimes called ‘implementation' evaluations) look at questions about the processes involved in a programme or innovation, answering questions about what, how, by whom and for whom an intervention was developed, implemented or conducted. In this way, process evaluations share many similarities with formative evaluations. The types of questions that a process evaluation usually tries to answer include:

  • How did [intervention X] conduct its work?
  • What processes were used to implement [intervention X]?  
  • What are the barriers and enablers to implementation of [intervention X]?

A process evaluation can be used to better understand the factors affecting implementation, to develop and improve implementation and, in some cases, to inform decision making and planning for scaling up an intervention.

Outcome evaluations (sometimes called ‘impact’ evaluations) look at questions about the effects and effectiveness of an intervention, to establish if (and sometimes by how much) it made a difference for the recipients or ‘beneficiaries’ of the intervention. The types of questions that an outcome evaluation usually tries to answer include:

  • What was the impact of [intervention X] on [population Y]?
  • What differences did the implementation of [intervention X] make? What were the outcomes for [populations X, Y and Z]?

In the sphere that CES works in, it’s often the case that commissioners of an evaluation ask questions about processes and outcomes, and many of our evaluations could be considered both process and outcome evaluations.

Case study

Between 2019 and 2023, CES conducted an evaluation of 9 Community Healthcare Network (CHN) Learning Sites for the Health Service Executive (HSE). CHNs had many goals, one of which was to improve the coordination and integration of community healthcare services.

The HSE wanted the evaluation to answer a number of questions, such as:

1. What processes were used to implement change, and what were the barriers and enablers to implementation?

2. What difference did the changes make for service users, staff and services?

To answer these questions, the evaluation team conducted a process evaluation for the first set of questions and an outcome evaluation for the second set of questions.  

2. TIMING - When do you want to know it and why?

Whether an evaluation is retrospective or prospective depends less on the questions being asked and more on when the evaluation is being carried out in the lifecycle of an intervention.  

Retrospective evaluation (sometimes called an ‘ex post’ evaluation) is an evaluation that looks back in time and usually takes place at the end of an intervention. The types of questions that a retrospective evaluation tries to answer can include any of the types of questions asked of other kinds of evaluations, but the questions are more likely to be framed in the past tense. For example, instead of asking ‘what is the impact of [intervention X]?’, the question might be framed as ‘what was the impact of [intervention X]?’.

Prospective evaluation (sometimes called an ‘ex ante’ evaluation) is an evaluation that looks at the present and moves forward in time with the intervention being evaluated. Again, the questions that a prospective evaluation tries to answer can include any of the types of questions asked of other kinds of evaluations, but the questions are more likely to be framed in the present tense. For example, instead of asking ‘what was the impact of [intervention X]?’, the question might be framed as ‘what is the impact of [intervention X]?’.

Case study

CES is currently concluding an evaluation of the Better Start Quality Development Service (QDS) for the Department of Children, Equality, Disability, Integration and Youth (DCEDIY). The QDS works in partnership with early learning and care (ELC) providers in Ireland to build high-quality practice, based on the national early childhood frameworks.

One of the evaluation questions was ‘does the QDS contribute to improved quality of practice in ELC settings?’. The timing of the evaluation (amongst other considerations, like the nature of the question and the nature of the QDS service) led us to conduct both a prospective and a retrospective evaluation to answer this question.

The prospective part of the evaluation involved surveying ELC settings at two different timepoints to measure change over time: once around the beginning of their engagement with QDS and then a second time at the end.

Due to the limited timeframe in which to collect data from newly engaged settings, and the likelihood of only a small sample of services, we also conducted a retrospective evaluation of ELC settings that had engaged with the QDS in the past.

3. CAPACITY and CONTEXT - the time and resources available to undertake your evaluation.

Pragmatic evaluations try to apply, for each evaluation question, the best evaluation method that can feasibly be conducted within the real-world constraints of time and resources, and which provides practical knowledge for stakeholders.

They can be helpful when there are well-defined evaluation questions that can be answered without the need for theory, and when there is limited time and budget to conduct an evaluation. They can also be applied to lots of different types of evaluation questions.

Case study

Our previous example of the evaluation of the Better Start Quality Development Service (QDS) for the Department of Children, Equality, Disability, Integration and Youth (DCEDIY) is also a good example of a pragmatic evaluation.  

We wanted to provide findings that were as direct and practical as possible, which reduced the need for theory. We also considered using more elaborate evaluation methods to answer some questions, but these would have required large increases in time and budget for small increases in rigour. We had to be ‘pragmatic’ about what was feasible, but also what would still be able to provide useful findings about the QDS for DCEDIY. In the end, we used a mixture of qualitative and quantitative methods that could be applied efficiently to generate practical insights into the QDS.

Theory-based evaluations place emphasis on developing a theory of how and why an intervention is intended to work and produce outcomes. The theory can then be used to guide the development of questions, data collection and data analysis methods.  

Theory-based evaluations can be helpful for questions about ‘how, when and for whom’ an initiative works. They’re also helpful for questions about ‘if’ an intervention works when experimental methods are not feasible or appropriate. But they cannot answer ‘how much’ of a difference an intervention made.

A Realist Evaluation approach is a type of theory-based evaluation. CES has developed a short guide to Realist Evaluations – read it here.
