The CES Guide to: Reporting on the Progress of Policy Implementation

The purpose of reporting on any large-scale programme of work is to communicate progress against key milestones, provide assurance and insight to key stakeholders, and facilitate timely, informed decision making. Policy implementation brings specific challenges: long timeframes, a high level of complexity, a multitude of interdependencies, and the diversity and number of stakeholders involved.

Reporting is initiated during programme set-up in close consultation with key stakeholders, and must be carefully managed, regularly reviewed, and embedded as a critical element of any large-scale programme plan.

This document outlines key considerations to help assure a robust reporting process throughout the lifecycle of policy implementation. These can be adapted to the size and scale of the programme.

We have grouped the key considerations into THREE PHASES:

  • PREPARE: what needs to be in place before you start the reporting process
  • DEFINE: what needs to be developed and created to have the reporting process ready to go
  • PRODUCE AND REFLECT: what to think about after rolling out the process.

Benefits of Reporting

  1. Governance oversight for decision makers: relevant, tailored reports provide the necessary insights and data to help steer and guide programme delivery. This offers reassurance on the direction of the programme to key stakeholders.
  2. 360-degree view: quality reporting gives visibility of cross-programme interdependencies and transparency on issues, risks, and changes right from the start to ensure that all the necessary information is understood in a timely, accurate, and comprehensive manner.
  3. Tracking and monitoring: accurate and comprehensive reporting provides clarity on the progress of work in terms of budget, status of deliverables and schedule.
  4. Early risk detection: quality reporting can surface early warning signs of potential problems with programme delivery, so that they can be highlighted to senior decision makers for action.
  5. Accountability and objectivity in decision-making: producing a regular and consistent account of progress provides a record of risks and actions that have been escalated for review, so that decision making is easily tracked at crucial points during implementation.
  6. Check point: preparing reports as a team can act as a review point to ensure everyone is still aligned on goals and objectives. Production of the report can be used to hold a programme health check conversation.
  7. Communication tool: informative and clear reports can be used as a starting point for communication with your stakeholders. Consistent reporting mechanisms across workstreams or delivery groups facilitate periodic, data-informed programme reviews with stakeholders in an open, transparent, and objective way.
  8. Control and trust: clear visibility helps promote a feeling of control and strengthens trust among team members as well as programme governance groups.

Phase 1: Prepare

A. Know your audience

Develop a Stakeholder Engagement Matrix: Identify all your stakeholders and position them within the Power Interest Matrix (Appendix 1). This will help inform the reporting needs of those impacted by the programme.
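
For illustration, a stakeholder register can be held as structured data and mapped onto the matrix quadrants automatically. The Python sketch below is a minimal, hypothetical example: the stakeholder names, the 1–5 scoring, and the threshold are illustrative assumptions; the quadrant labels are the ones commonly used with power interest matrices.

```python
# Minimal sketch: classify stakeholders into Power Interest Matrix quadrants.
# Stakeholder names and scores are illustrative assumptions.

def quadrant(power: int, interest: int, threshold: int = 3) -> str:
    """Map 1-5 power/interest scores onto the four common quadrants."""
    if power >= threshold and interest >= threshold:
        return "Manage closely"      # high power, high interest
    if power >= threshold:
        return "Keep satisfied"      # high power, low interest
    if interest >= threshold:
        return "Keep informed"       # low power, high interest
    return "Monitor"                 # low power, low interest

stakeholders = [
    {"name": "Programme sponsor",   "power": 5, "interest": 5},
    {"name": "Finance department",  "power": 4, "interest": 2},
    {"name": "Local delivery team", "power": 2, "interest": 5},
    {"name": "General public",      "power": 1, "interest": 2},
]

for s in stakeholders:
    print(f'{s["name"]}: {quadrant(s["power"], s["interest"])}')
```

Grouping stakeholders this way makes it easier to decide who should receive which report, and at what level of detail.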

| KEY QUESTION: Who may expect or request a report from you? |

B. Review governance structures

The governance structures for the programme should clearly describe the decision-making processes. This will dictate the level of information needed in your reports. It will also ensure there is full understanding of reporting lines and accountability so that risks and issues highlighted within reports can be immediately escalated to appropriate stakeholders.

| KEY QUESTION: Who will make decisions based on the information provided by the reports you are producing? |

C. Have a clear definition of success

Tracking of implementation should be clearly defined at different levels to ensure that the intended policy outcomes and benefits are being delivered (as per the diagram below).

At the initial stages of implementation, the focus is on delivering the outputs (deliverables). Over time, you will need to consider how policy implementation will lead to improved outcomes across the system and to the realisation of the desired benefits of the policy. From a policy implementation perspective, our recommended reporting approach focuses not only on output measurement but also on outcome measurement.

  • Deliverable(s): What will be delivered and when? Ensure the final deliverable(s) and key milestones are captured at the levels of granularity required by stakeholders (incl. timelines for delivery and owners).

Describe what “implemented” means for each deliverable as early as possible, e.g. when a guideline is published, or when it reaches a defined adoption rate among local teams (adopted by 80% or 100% of teams in each region).

  • Outcomes are the changes or differences the deliverables will lead to (short- or long-term). These need to be realistic and measurable, and within the sphere of influence of the programme of work.
“Thinking about the difference that an intervention will make to individuals, groups, organisations, systems and populations can help to determine outcomes. Outcomes may be short term, for example, an individual may learn or develop a new skill, or long term, such as a change in organisational culture.”
  • Policy outcomes are the intended impacts and benefits that a policy will have on a service and the wider population when implemented. Make sure there is a clear connection between a deliverable, intermediate outcomes, and the policy outcomes.

A good technique for mapping outputs and outcomes is Logic Modelling: a graphic illustration of the theory of change for an intervention, service, or policy. Developing a logic model provides a systematic way of working through the connections between the different components of implementing a change and of exploring why we think it will work (situation analysis, evidence, inputs, outputs, outcomes, and indicators). It also helps to address questions about monitoring and evaluation, and to select targets and indicators that can provide signs of progress or achievement. These may be derived from standards and benchmarks.
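
A logic model can also be captured as structured data so that each outcome stays linked to its indicators. The sketch below is a minimal, hypothetical example; all component names and indicator wording are illustrative assumptions.

```python
# Minimal sketch of a logic model held as structured data.
# All content (situation, inputs, outputs, outcomes, indicators) is illustrative.

logic_model = {
    "situation": "Long waiting times for community services",
    "inputs": ["Funding", "Implementation team", "Clinical guideline"],
    "outputs": ["Guideline published", "Staff trained in new pathway"],
    "outcomes": {
        "short_term": {
            "description": "Local teams adopt the new pathway",
            "indicators": ["% of teams using the pathway"],
        },
        "long_term": {
            "description": "Reduced waiting times across the system",
            "indicators": ["Median waiting time (weeks)"],
        },
    },
}

# Tracing the chain makes the theory of change explicit and testable.
for horizon, outcome in logic_model["outcomes"].items():
    print(f'{horizon}: {outcome["description"]} -> {outcome["indicators"]}')
```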

| KEY QUESTIONS: Are deliverables and outcomes well defined and documented? Is there a clear connection between deliverables and outcomes? Is there a shared understanding of these among the team? |

For health sector policy implementation:

Implementation science, which studies the uptake of research findings and other evidence-based practices into routine practice, makes a distinction between implementation outcomes and service and client outcomes.

  • Implementation Outcomes are defined as “the effects of deliberate and purposive actions to implement new treatments, practices and services and are distinct from service and client (patient) outcome”. These can be grouped into the following categories: adoption, appropriateness, feasibility, fidelity, implementation cost, penetration (reach), and sustainability.
  • Service Outcomes are impacted by the implementation outcomes and can be grouped into the following categories: efficiency, safety, effectiveness, equity, patient-centredness, and timeliness. These answer the question of how we will know that we are making progress towards improved service delivery.
  • Client/Service User Outcomes are impacted by the implementation outcomes and can be grouped into the following categories: satisfaction, function, and symptomatology. These answer the question of how self-reported outcome measures demonstrate the impact of policy implementation.

Phase 2: Define

A. Define what needs to be measured  

Define what you need to measure, what types of metrics or indicators are available, and what they can tell you. Can they provide an early indication of progress and/or a warning of potential future problems (lead indicators), and/or measure past performance (lag indicators)? Ideally, indicators should be defined to measure (1) progress of deliverables over the entire implementation duration and (2) both the short-term and the long-term outcomes you are aiming to achieve (see Appendix 2).

Look to measure the progress of the implementation across four main areas:

  • Timelines e.g. on-time tasks/milestones, task completion percentage
  • Budget e.g. budget variance
  • Quality e.g. service user/stakeholder satisfaction
  • Effectiveness e.g. reduction in waiting times, incident rates, or number of escalated issues.

Make your metrics/indicators SMART (specific, meaningful, achievable, relevant, and timely).

Example:
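
As a sketch of what a documented indicator might look like, the illustrative record below links each metric to a deliverable or outcome, flags it as a lead or lag indicator, and names an owner and measurement cadence. All field names and values are assumptions for illustration.

```python
# Minimal sketch of an indicator register entry.
# Field names and values are illustrative assumptions.

indicators = [
    {
        "name": "Guideline adoption rate",
        "linked_to": "Deliverable: publish clinical guideline",
        "type": "lead",                      # early signal of future outcomes
        "target": "80% of teams per region",
        "measured": "quarterly",
        "owner": "Implementation lead",
    },
    {
        "name": "Median waiting time",
        "linked_to": "Outcome: reduced waiting times",
        "type": "lag",                       # measures past performance
        "target": "< 12 weeks",
        "measured": "biannually",
        "owner": "Service manager",
    },
]

for ind in indicators:
    print(f'{ind["name"]} ({ind["type"]}): target {ind["target"]}')
```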

Performance indicators will change as the programme of work progresses: focus will shift from delivering the outputs to measuring the impact (outcomes) of the entire programme.

| KEY QUESTIONS: Does the metric link to the deliverables and outcomes? Does the metric provide the right progress data to explain clearly the progress in implementing the policy? Did you consider stakeholder needs and concerns when selecting performance indicators? |

B. Define types of reports

Agree with key programme stakeholders what information needs to be included in each report type, how it will be presented, and where the reports will be made available. Ensure the depth, breadth, and format of reports meet stakeholder needs. We differentiate between two main types of reports: Status Reports and Progress Reports.

Note: outcomes/impact monitoring updates may sit outside of Status and Progress Reports.

Reports can present data on a spectrum from strategic to operational, where the former presents information at programme level, including overall progress, and the latter provides more detail on specific projects, workstreams, and tasks (see diagram below).

C. Establish a single source of truth

Choose a tool for your data capture and storage. This will be your single source of truth for programme implementation (master data). When choosing the tool, consider how the administrative burden of capturing, extracting, and processing data can be minimised (e.g. agree a core minimum data set and/or establish automated processes). Manual data entry should also be minimised by using calculated fields, drop-down options, and validity checks where possible. Benchmark your tool against the principles of data quality.
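
To make these principles concrete, the sketch below runs simple automated checks (completeness, validity, uniqueness) over a hypothetical core minimum data set. The field names and validation rules are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch: automated data quality checks on a core minimum data set.
# Field names and validation rules are illustrative assumptions.

VALID_STATUSES = {"not started", "in progress", "complete", "blocked"}

records = [
    {"id": "T-001", "deliverable": "Publish guideline", "status": "in progress", "due": "2025-09-30"},
    {"id": "T-002", "deliverable": "Train local teams",  "status": "complete",    "due": "2025-06-30"},
    {"id": "T-002", "deliverable": "",                   "status": "done",        "due": "2025-06-30"},
]

def check_quality(rows):
    """Return a list of human-readable data quality issues."""
    issues = []
    seen_ids = set()
    for row in rows:
        # Uniqueness: no duplicate record identifiers.
        if row["id"] in seen_ids:
            issues.append(f'{row["id"]}: duplicate id')
        seen_ids.add(row["id"])
        # Completeness: every field must be populated.
        for field, value in row.items():
            if not value:
                issues.append(f'{row["id"]}: missing {field}')
        # Validity: status must come from the agreed list.
        if row["status"] not in VALID_STATUSES:
            issues.append(f'{row["id"]}: invalid status "{row["status"]}"')
    return issues

for issue in check_quality(records):
    print(issue)
```

Running checks like these each reporting cycle keeps the data-cleaning burden low and the master data trustworthy.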

| KEY QUESTIONS: Does the tool allow qualitative and quantitative data to be collected? Can principles of data quality be applied to the tool (Accuracy, Completeness, Consistency, Validity, Timeliness, Uniqueness)? |

D. Define the process

Define the process for (1) data capture, (2) report generation, and (3) report presentation.
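
As one illustration of step (2), the sketch below generates a simple status summary from master data captured in step (1). The workstream names, fields, and RAG thresholds are assumptions for illustration only.

```python
# Minimal sketch: generate a status summary (step 2) from master data (step 1).
# Workstream names, fields, and RAG rules are illustrative assumptions.

from collections import Counter

master_data = [
    {"workstream": "Guideline", "milestone": "Draft published",  "on_time": True},
    {"workstream": "Guideline", "milestone": "Final published",  "on_time": False},
    {"workstream": "Training",  "milestone": "Curriculum ready", "on_time": True},
]

def rag_status(on_time_fraction: float) -> str:
    """Map the share of on-time milestones to a simple RAG rating."""
    if on_time_fraction >= 0.9:
        return "Green"
    if on_time_fraction >= 0.7:
        return "Amber"
    return "Red"

def status_report(rows):
    """Summarise each workstream for a periodic status report."""
    totals, on_time = Counter(), Counter()
    for row in rows:
        totals[row["workstream"]] += 1
        on_time[row["workstream"]] += row["on_time"]
    for ws in totals:
        fraction = on_time[ws] / totals[ws]
        print(f"{ws}: {on_time[ws]}/{totals[ws]} milestones on time ({rag_status(fraction)})")

status_report(master_data)
```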

E. Have the right people on board

Those responsible for inputting, extracting, and analysing information must have the required skills to complete these tasks to time and quality standards. Organise training sessions and seek initial feedback. Communicate across the implementation team what is expected from everybody. Encourage a shared governance and ownership approach to reporting by promoting the shared benefits of high quality, robust reports to all programme stakeholders.

| KEY QUESTION: Is the Programme Leadership clear on the resourcing requirements and expected time commitments for reporting? |

Phase 3: Produce and Reflect

A. Produce the reports

Reports should be produced faithfully according to the agreed schedule. It is also essential that every report is reviewed by the relevant stakeholders and decision makers.

B. Act on results

Programme leadership, including the project sponsor(s), is responsible for driving the programme forward and supporting the implementation team in achieving the objectives; it should therefore act promptly if reporting flags risks to programme delivery.

Good reporting involves follow-up conversations. Consistent reporting mechanisms across implementation teams facilitate periodic data-informed programme reviews with stakeholders in an open, transparent, and objective way.

C. Reflect on the process

Reporting should be iterative and adapted to remain fit for purpose. Regular feedback and lessons-learnt meetings can instigate improvements to the reporting process.

Key points to consider:

  • Are the reports useful? In which way?
  • Do the reports provide the necessary insights and data for decision makers to help steer and guide programme delivery in the right direction?
  • Do the reports provide clarity on the actual progress of work in terms of budget, status of deliverables, and schedule?
  • Do the reports provide a clear picture of the progress of the policy and its implementation against indicators?
  • Do the reports provide early warning signs of potential problems (risks)?
  • Do the reports provide a clear view on the status of the existing issues?
  • Do the reports provide a sense of trust, control and assurance to the implementation team and programme leadership?
  • Is the reporting process working well for the staff generating the reports?

Note: You can use a report review template to review the quality of the data/information (Appendix 3).

Appendices

Appendix 1

Power Interest Matrix

Appendix 2

Performance Indicators Sheet

Appendix 3

Report review template
