Making Evaluations Work for Equity and Inclusion in Education

I’m reposting this blog post (originally published by UNESCO) on our synthesis of evaluations of education equality and inclusion. The report will be presented next week in Rome and Oslo.

International organizations typically have well-developed evaluation units, generating large volumes of evidence about their policies, programs and practices. Yet, while synthesis of evidence on international education development has evolved considerably in recent years, synthesis of evidence from the independent evaluations undertaken by international organizations has not.


A new ‘evidence synthesis’ released this week from UNESCO’s IOS Evaluation Office and a group of international partners partly fills this gap. The study reviews 147 independent evaluations commissioned by 13 international organizations, all with a focus on measuring and assessing some aspect of education equality or gender equity. Using a rigorous search process, systematic coding and narrative analysis, the study gives a bird’s eye view of the types of interventions being evaluated by international organizations and synthesizes evaluation findings. It also proposes important recommendations to help improve evaluations commissioned by international organizations and ensure that these evaluations support country progress in achieving Sustainable Development Goal (SDG) 4.5.

What are the main findings from the study?

The volume of evidence is impressive: in an open search of the evaluation databases of 16 international organizations, we found that 147 of a total of 156 evaluations of education published between 2015 and 2019 included objectives or outcomes related to gender parity, equality and inclusion. Approximately 30 to 40 education evaluations were published each year.

There are strengths and gaps in these evaluations. Their predominant focus is on interventions to support access and participation, and very few include learning as a measured area of impact. Furthermore, while girls’ education is well covered in these evaluations, other aspects of equity, such as inclusion of learners with disabilities and disadvantage related to ethnicity and language, were less commonly studied. Geographically, the largest number of evaluations in the dataset are based in Africa, signaling an important new body of evidence on education in the continent.

Only 28 of the evaluations used rigorous quantitative methods with a counterfactual. The strongest evidence appears in evaluations of cash transfer and school feeding programs. Very few of the evaluations look at the equity impact of interventions that directly target improvements in service delivery, with a notable lack of strong evidence on what works to improve teaching practices for more equitable learning outcomes.

With a few exceptions, evaluations in this dataset are unable to show a convincing link between large-scale system-wide reform programs and improvements in learning equity and alleviation of other forms of educational inequality, in part because rigorous and consistent use of theory-based evaluation design is rare.

Furthermore, as noted in an earlier study (and also in a podcast), there is little attempt to compare and learn from system reform programs by looking across countries, or across the different organizational forms of support provided in a single country. Yet complex, multi-pronged ‘system-wide’ programs form an increasingly large share of donor-funded interventions in education. Tantalizing but incomplete findings from evaluations of system reform programs include indications that decentralization and school-based management may have negative impacts on equity and inclusion, and that results-based financing has mixed effects on implementation.

Photo: Thaneshwar Gautam

In conclusion, this new report calls for international organizations to strengthen their evaluation of SDG 4.5 in four specific ways.

First, address evidence gaps by improving evaluation of the equity impact of interventions focused on changing frontline service delivery (improving classrooms, teachers and schools), including by incorporating stronger measures of learning equity.

Second, use the evaluation enterprise to contribute to stronger, country-owned generation and use of data.

Third, strengthen evaluation methodologies. 

Finally, based on validation workshops in five countries, the report calls on international organizations to make evaluation evidence more usable and useful to national stakeholders, by ensuring they are involved in the selection and timing of evaluative studies, and by preparing evidence syntheses to support ongoing learning.

In addition to the full report, a methodological note and list of the evaluations will be available on the UNESCO IOS website in due course.

2 thoughts on “Making Evaluations Work for Equity and Inclusion in Education”

  1. Good morning, Karen,

    I attended the Making Evaluations Work seminar at Norad Thursday last week. I found the presentation of UNESCO’s SDG 4.5 synthesis report quite interesting, plus enjoyed the ensuing discussions.
    Reflecting a bit afterwards, I had some thoughts, which I pass on to you in case they are of interest.
    By way of introduction, I am the father of a differently-abled grown-up son. And I am also a logframe nerd who believes LFA’s guiding principles and analytic tools can help strengthen theory of change models through application of downwards logic-checking and risk factor-probability analyses.

    The UNESCO study and your blog cite four specific ways for international organizations to strengthen their evaluations of SDG 4.5 — (A) address evidence gaps, (B) contribute to stronger, country-owned generation and use of data, (C) strengthen evaluation methodologies, and (D) make evaluation evidence more usable and useful to national stakeholders. At the Norad seminar, it was also mentioned that the study included five in-country validation workshops. And in follow-up discussions, there were statements about (1) evaluations being extractive, and (2) non-existence of long-term objectives and strategies for SDG 4.5 evaluations.
    With the above as a starting basis, I wondered about the way forward. Can we not make better use of evaluations? Can we not sometimes think bigger than ourselves?
    I managed to land some first thoughts, which are outlined below.

    Start with this fundamental premise: Efficient and effective achievement of SDG 4.5 in Country X by 2030 requires that SDG 4.5 evaluation evidence generated during the period 2020-30 be useful and used by national and local authorities to continuously improve equality and inclusion in education for all.

    Then take this basic approach: Everybody think and act strategically and collectively to adopt and meet a desired goal of useful, usable and used SDG 4.5 evaluations in Country X.

    Then do this: In early 2020, carry out a 3-day strategic workshop in Country X involving key SDG 4.5 international and national stakeholders, with main steps of the workshop being:
    • One: Examine what overall outcome needs to be in place by 2025 vis-à-vis SDG 4.5 evaluations. (Example outcome: “SDG 4.5 evaluation evidence being used by national stakeholders to improve equality and inclusive education, as evidenced by verifiable Actions L, M and N”.)
    • Two: With the above outcome at the top, build an objectives tree showing essential co- and contributory sub-objectives. (Sub-objectives at various levels would almost certainly include such co-/contributory statements as “Necessary evidence gaps filled by 20__”, “Country-owned data collection and reporting systems in use by 20__”, “Policy for National SDG 4.5 Evaluation Methodologies adopted by 20__”, “National SDG 4.5 Evaluation Coordinator in place by 20__”, “Dissemination procedures for evaluation evidence in place by 20__”, and so on.)
    • Three: Subject each sub-objective of the developed tree to a quick Key Assumptions/Risk Analysis to ensure that wording reflects everyone’s understanding of existing/potential political will, available resources and do-ability. In the process, modify statement formulations accordingly. Then re-confirm listed verification indicators and deadlines.
    • Four: Based on the developed objectives tree, flesh out required overall and partial strategies for achieving sub-objectives. All strategies should clearly state who does what when, which effectively turns them into overall action plans as well.

    Then:
    • Disseminate the strategy developed as workshop output — ideally as an official Government document — to everyone involved with SDG 4.5 efforts in Country X,
    • Everybody: Adhere to the strategy when planning, budgeting for, and implementing SDG 4.5-related reviews and evaluations, with regular follow-up as required.

    Country X could be one of the five countries in which validation workshops were held.

    In Oslo, you mentioned that during the course of the study you mused about what you would do if you were Queen-for-a-day. Along similar lines, here’s some King-for-a-day fantasizing.

    Best regards,
    Cliff Wang

