Reporting is initiated during programme set-up in close consultation with key stakeholders, and must be carefully managed, regularly reviewed, and embedded as a critical element of any large-scale programme plan.
This document outlines key considerations to help ensure a robust reporting process throughout the lifecycle of policy implementation. These can be adapted to the size and scale of the programme.
We have grouped the key considerations into THREE PHASES:
PHASE 1
A. Know your audience
Develop a Stakeholder Engagement Matrix: identify all your stakeholders and position them within the Power Interest Matrix (Appendix 1). This will help you identify the reporting needs of those affected by the programme.
| KEY QUESTION: Who may expect or request a report from you? |
B. Review governance structures
The governance structures for the programme should clearly describe the decision-making processes. This will dictate the level of information needed in your reports. It will also ensure there is full understanding of reporting lines and accountability so that risks and issues highlighted within reports can be immediately escalated to appropriate stakeholders.
| KEY QUESTION: Who will make decisions based on the information provided by the reports you are producing? |
C. Have a clear definition of success
Tracking of implementation should be clearly defined at different levels to ensure that the intended policy outcomes and benefits are being delivered (as per the diagram below).
At the initial stages of implementation the focus is on delivering the outputs (deliverables). Over time, you will need to consider how policy implementation will lead to improved outcomes across the system and realisation of the desired benefits of the policy. From a policy implementation perspective, our recommended reporting approach focuses not only on output measurement but also on outcome measurement.
Describe what “implemented” means for each deliverable as early as possible, e.g. when a guideline is published, or adoption rates among local teams (e.g. adopted by 80% or 100% of teams in each region).
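To make an adoption-rate definition like the one above concrete, here is a minimal sketch in Python; the regions, figures, and the 80% threshold are hypothetical, purely for illustration:

```python
# Hypothetical team-level adoption data: region -> (teams adopted, total teams)
adoption = {
    "North": (8, 10),
    "South": (12, 15),
    "East": (5, 10),
}

TARGET = 0.80  # illustrative threshold for calling a deliverable "implemented"

for region, (adopted, total) in adoption.items():
    rate = adopted / total
    status = "implemented" if rate >= TARGET else "in progress"
    print(f"{region}: {rate:.0%} adoption -> {status}")
```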
“Thinking about the difference that an intervention will make to individuals, groups, organisations, systems and populations can help to determine outcomes. Outcomes may be short term, for example, an individual may learn or develop a new skill, or long term, such as a change in organisational culture.”
A good technique for mapping outputs and outcomes is Logic Modelling: a graphic illustration of the theory of change for an intervention, service, or policy. Developing a logic model provides a systematic way to work through the connections between the different components of implementing a change and to explore why we think it will work (situation analysis, evidence, inputs, outputs, outcomes, and indicators). It also helps to address questions about monitoring and evaluation, and to select targets and indicators that can provide signs of progress or achievement. These may be derived from standards and benchmarks.
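As an illustration only, the chain a logic model captures can be sketched as a simple mapping; all entries below are hypothetical, for a guideline-publication deliverable:

```python
# Hypothetical logic model for a single intervention, sketched as a mapping
logic_model = {
    "situation": "local teams apply inconsistent clinical guidance",
    "evidence": "published guidance improves consistency of practice",
    "inputs": ["funding", "clinical expertise", "implementation team"],
    "outputs": ["guideline published", "training delivered to local teams"],
    "outcomes": ["teams adopt the guideline", "consistent practice across regions"],
    "indicators": ["% of teams trained", "% of teams adopting the guideline"],
}
```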
| KEY QUESTIONS: Are deliverables and outcomes well defined and documented? Is there a clear connection between deliverables and outcomes? Is there a shared understanding of these among the team? |
For health sector policy implementation:
Implementation science, which studies the uptake of research findings and other evidence-based practices into routine practice, distinguishes between implementation outcomes and service and client outcomes.
PHASE 2
A. Define what needs to be measured
Define what you need to measure, what type of metrics or indicators are available, and what they can tell you. Can they provide an early indication of progress and/or a warning of potential future problems (lead indicators), and/or measure past performance (lag indicators)? Ideally, indicators should be defined to measure (1) progress of deliverables over the entire implementation duration and (2) both the short-term and long-term outcomes you are aiming to achieve (see Appendix 2).
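To make the lead/lag distinction concrete, a minimal sketch follows; the indicator names, values, and targets are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    kind: str      # "lead" (early warning) or "lag" (past performance)
    value: float   # current measurement, as a proportion
    target: float  # agreed target, as a proportion

# Hypothetical indicators for a guideline roll-out
indicators = [
    Indicator("Teams enrolled in training", "lead", 0.65, 0.90),
    Indicator("Teams that completed training last quarter", "lag", 0.40, 0.50),
]

for i in indicators:
    flag = "on track" if i.value >= i.target else "at risk"
    print(f"[{i.kind}] {i.name}: {i.value:.0%} vs target {i.target:.0%} -> {flag}")
```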
Look to measure the progress of the implementation across 4 main areas:
Make your metrics/indicators SMART (specific, meaningful, achievable, relevant, and timely).
Example:
Performance indicators will change as the programme of work progresses: focus will shift from delivery of the deliverables to measuring the impact (outcomes) of the entire programme.
| KEY QUESTIONS: Does the metric link to the deliverables and outcomes? Does the metric provide the right data to clearly explain progress in implementing the policy? Did you consider stakeholder needs and concerns when selecting performance indicators? |
B. Define types of reports
Agree with key programme stakeholders what information needs to be included in each report type, how it will be presented, and where the reports will be made available. Ensure the depth, breadth, and format of reports meet stakeholder needs. We differentiate between two main types of reports: Status Reports and Progress Reports.
Note: outcomes/impact monitoring updates may sit outside of Status and Progress Reports.
Reports can present data on a spectrum from strategic to operational, where the former presents information at programme level including overall progress, and the latter provides more details on specific projects, workstreams, and tasks (see diagram below).
C. Establish a single source of truth
Choose a tool for your data capture and storage. This will be your single source of truth for the programme implementation (master data). When choosing the tool, consider how the administrative burden of capturing, extracting, and processing data can be minimised (e.g. agree a core minimum dataset and/or establish automated processes). Minimise manual data entry by using calculated fields, drop-down options, and validity checks where possible. Benchmark your tool against the principles of data quality.
| KEY QUESTIONS: Does the tool allow qualitative and quantitative data to be collected? Can principles of data quality be applied to the tool (Accuracy, Completeness, Consistency, Validity, Timeliness, Uniqueness)? |
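As a sketch only, benchmarking captured data against some of these principles might look like the following; the field names, records, and rules are hypothetical:

```python
# Hypothetical records extracted from the programme's master data
records = [
    {"id": "T-001", "region": "North", "status": "adopted"},
    {"id": "T-002", "region": "", "status": "adopted"},
    {"id": "T-001", "region": "North", "status": "unknown"},
]

VALID_STATUSES = {"adopted", "in progress", "not started"}  # illustrative validity rule

issues, seen_ids = [], set()
for r in records:
    if not all(r.values()):                 # completeness: no empty fields
        issues.append((r["id"], "missing field"))
    if r["status"] not in VALID_STATUSES:   # validity: status from the agreed list
        issues.append((r["id"], "invalid status"))
    if r["id"] in seen_ids:                 # uniqueness: no duplicate ids
        issues.append((r["id"], "duplicate id"))
    seen_ids.add(r["id"])

print(issues)  # flags the missing region, the invalid status, and the duplicate id
```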
D. Define the process
Define the process for (1) data capture, (2) report generation, and (3) report presentation.
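For step (2), a minimal sketch of an automated report-generation step, assuming status data already sits in the single source of truth as a CSV with a status column; the function and file names are hypothetical:

```python
import csv
from datetime import date

def generate_status_report(master_csv: str, out_path: str) -> None:
    """Summarise workstream statuses from the master data into a plain-text report."""
    counts: dict[str, int] = {}
    with open(master_csv, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["status"]] = counts.get(row["status"], 0) + 1
    with open(out_path, "w") as out:
        out.write(f"Status report, {date.today().isoformat()}\n")
        for status, n in sorted(counts.items()):
            out.write(f"{status}: {n} workstreams\n")

# Usage (hypothetical paths):
# generate_status_report("master_data.csv", "status_report.txt")
```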
E. Have the right people on board
Those responsible for inputting, extracting, and analysing information must have the skills required to complete these tasks to time and quality standards. Organise training sessions and seek early feedback. Communicate across the implementation team what is expected of everyone. Encourage shared governance and ownership of reporting by promoting the shared benefits of high-quality, robust reports to all programme stakeholders.
| KEY QUESTION: Is the Programme Leadership clear on the resourcing requirements and expected time commitments for reporting? |
PHASE 3
A. Produce the reports
Reports should be produced faithfully according to the agreed schedule. It is also essential that every report is reviewed by the relevant stakeholders and decision-makers.
B. Act on results
Programme leadership, including the project sponsor(s), is responsible for driving the programme forward and supporting the implementation team to achieve its objectives, and should therefore act accordingly if reporting flags risks to programme delivery.
Good reporting involves follow-up conversations. Consistent reporting mechanisms across implementation teams facilitate periodic data-informed programme reviews with stakeholders in an open, transparent, and objective way.
C. Reflect on the process
Reporting should be iterative and adapted to remain fit for purpose. Regular feedback and lessons-learnt meetings can instigate improvements to the reporting process.
Key points to consider:
Note: You can use a report review template to review the quality of the data/information (Appendix 3).
Appendix 1
Power Interest Matrix
Appendix 2
Performance Indicators Sheet
Appendix 3
Report review template