The New Learning Targets – Redux

The past year or so has seen a rash of new initiatives and exchanges among the global education literati about “global learning targets.” Let’s take a look at what all the fuss is about, and why it is worth asking a few further questions. [And a “heads up”: in coming weeks I’ll be comparing the strategies of two international organizations – UNICEF and the World Bank – for tackling the learning crisis.]

What’s all the fuss about?  Over the course of the year, it has become increasingly clear that the global community has not moved quickly towards selecting realistic measures for some of the most important SDG 4 goals. Specifically, almost 30 years after the Jomtien Declaration, and more than 50 years since the first UN-led efforts to “eradicate illiteracy,” education still has no measure comparable to those that have elevated global health diplomacy – no guiding metric, no single data point, that can focus attention on learning (instead of schooling).

The UNESCO Institute for Statistics (UIS) has been valiantly aiming to develop such a metric, specifically for SDG 4.1. Given the low levels of enthusiasm among both developing country members and civil society for a single global test of literacy, its strategy so far has been to design a “bridge” that allows us to equate and compare learning outcomes from national and regional learning assessments. UIS, however, is in crisis: its modest funding is in no way adequate to its global role. It also faces an underlying problem: too few countries have national learning metrics to start with (though the number that do is growing). In August, a big Twitter fire erupted when a UN meeting on indicators for SDG 4.1 saw a recommendation to stop pursuing a learning target for SDG 4.1 at the primary level and instead replace it with the more traditional adult literacy indicator proposed for use with 4.6.1.

Enter stage left the World Bank.  For at least the last decade, the Bank has endorsed and advanced the idea that education system quality can be measured by learning outcomes. Yet it never quite got its act together to ensure that all its operations and loans included funding for a learning outcome measure, or used learning outcomes as a metric for its own project performance. Earlier this year the Bank signed a cooperative agreement with UIS that draws on UIS expertise to help it create a single learning metric, which it says it will use as the centrepiece for all its education sector work going forward. Now we have a new, new indicator: the Learning Poverty target. Last week at its fall meetings, the World Bank announced that it will (support countries to) “halve” the number of children aged 10 who cannot read at a basic level of proficiency (do I hear echoes of the last USAID strategy?). The Bank estimates that over 50% of children in low- and middle-income countries cannot read and understand a simple text. It also launched a new learning policy package to get all children reading by age 10.
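
For readers who haven’t dug into the methodology, here is a rough sketch of how the headline number is built – my reading of the Bank’s published definition, so worth checking against the Bank’s own technical note. Learning Poverty folds schooling and learning into a single figure:

Learning Poverty = OOS + BMP × (1 − OOS)

where OOS is the share of children who are out of school (all of whom are counted as not proficient) and BMP is the share of enrolled children who cannot read and understand a simple text by the end of primary age.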

Why hasn’t more happened already?  Let’s back up a bit and ask why education has been so slow to develop relevant global metrics. The problem can be traced to several issues, but the common root may be fragmentation in the institutional architecture, as captured in a recent article by Nick Burnett. Let’s face it: at any time in the past 20 years the international community could have provided steady and predictable investment in a process that engaged groups of countries in designing their own rigorous national and regional assessments. Such initial investments were made – largely by the World Bank – but it didn’t stay the course. Over the same period, the agencies we might have looked to for developing or supporting broadly owned learning metrics went off in different directions with their own technical fixes for measuring learning outcomes, usually working bilaterally (think USAID’s EGRA, think the World Bank’s service delivery indicators). The governments and organizations that might be expected to provide core funding for global education statistics – including the World Bank, GPE and bilateral donors – have not provided it. In the meantime, UNESCO (and its scion UIL) have lost their groove.

International organizations and donors are fickle, always chasing the next shiny “silver ball” – their main focus is on their own organizational needs and drivers, and rarely on the regional and global goods needed to shift a whole education system or the aid regime. UIS is struggling to find its footing, and may not be the best “agent” for scaffolding regional assessment regimes. PISA and the OECD may have overplayed their hand. GPE’s Strategy 2020 stood out because it was the first time an international organization promised to hold its own work to account for improving learning outcomes, and UNICEF quickly followed. But have they really changed the way their organizations fund and support learning?  Today’s World Bank seems to be a frenetic “metrics machine” – deeply fragmented internally about how and what metrics to use to monitor learning outcomes. For example, the World Bank’s new Learning Poverty measure draws on different data than last year’s big annual meetings announcement, the Human Capital Index; while the Bank’s Service Delivery Indicators framework, which the Bank promises to spread across its portfolio, draws on – yup – yet another learning outcomes measure!  Go figure.

Skepticism about the underlying “theory of change.”  So let’s unpack the theory of change behind global metrics. It might look something like this: the world community, comprised of nation states, sets a meaningful goal and identifies a target and a measure of that goal. Countries report on this target, and change their priorities to reflect the shared goal. At the global level, an advocacy and accountability “boomerang” helps to hold countries to account, and provides a significant tool for accountability politics among local and regional stakeholders (aka governments and citizens!), stimulating further incentives for policy makers committed to change. This is essentially the theory of change for the new Learning Poverty indicator, except that this initiative is not a globally set target owned by countries. Nonetheless, it is gaining wide support (see this blog from the Gates Foundation).

Is this a plausible ToC?  The record is mixed. As a recent Center for Global Development paper by Bruns et al. on the role played by regional assessments in Latin America and Africa shows, successful assessment regimes are built painstakingly, by engaging regional actors and countries over long periods of time. If these processes are managed well, countries develop along the way both a strong normative frame for thinking about learning outcomes and learning equity, and the technical and policy capacity for using data.

We could hypothesize from this that the process through which learning targets and measures are developed is as important as the metric itself: metrics work best when there is time to build a sense of ownership and belonging among a “club” of countries – where the “muscles” for constructive comparison are gradually built alongside national capacity to use data for reform. Yet this organic pathway is not at all what we see in current efforts.

We can further stress-test the theory of change behind global learning indicators by looking at the mixed impact PISA has had in developing countries. It is true that international metrics can spur change where there is a change-oriented policy entrepreneur ready to catch the boomerang (aka Jaime Saavedra in Peru), or where a robust civil society and free media demand a national response (see Germany). But few low-income countries have either of these assets – which, combined, are what we typically mean when we use the term “political will”. Instead, several low-income countries have simply pulled out of PISA altogether, finding the metric irrelevant.

We can also learn by taking the MDG-era targets and experiences seriously. The MDG global goals for education were widely rejected by developing countries because of their sole focus on primary education; GPE’s early support for the “all children reading” campaign for learning metrics suffered a similar pushback.

Do we want a quick fix, or country and regional ownership and commitment to learning?  

In short:  these new global metrics may all be great inventions for moving a single agency forward – and they certainly give an organization a reputational boost and the patina of strategic focus. But as far as I can see, none of these new metrics is built on the kind of consensus and underlying accountabilities that are likely to generate lasting global change. We should all worry when a new learning poverty metric is not organic. If I am right, global targets of this type will do little to shift policy winds that are almost entirely focused on secondary schooling and youth skills, as, for example, across Africa.

More importantly, supply-driven metrics and diagnostic tools of all kinds may have pernicious side effects, undermining rather than bolstering national will and ownership of learning goals, and reinforcing a culture in which global policy talk and national policy action continue to diverge. This is one of the lessons highlighted in GPE’s recent country-level evaluations, with regard to its emphasis on a prescribed model for education sector planning.

So what should we do? Three thoughts.  

·      First:  Pool international resources and support regional coalitions for learning. Predictable funding for collaborative policy dialogue about learning should be our first goal. Supporting country-led efforts to develop national and regional assessments (especially in Africa) can be part of this effort – but at best these should be a supporting concern, not ends in themselves. And don’t forget: for learning metrics, it’s all about process, and that takes time and steadfast commitment.

·      Second: Invest adequate resources in a simple global learning indicator module in household surveys. This strategy is cost-effective and has the advantage of drawing new focus to the learning crisis among youth and adult populations. (Recall that even teachers aren’t reading at primary level in parts of Africa.)  SDG 4.6.1 – everyone has the right to literacy – needs just as much attention as all children reading.

·      Third:   Don’t overestimate the importance of international organizations and international metrics. But do demand more information about what each international organization plans to do to increase learning outcomes, and ask whether they will be using learning metrics to evaluate the performance of their own portfolios and programs. Too few IOs have looked rigorously at the impact of their sector financing on learning outcomes; more and more of them are moving funding away from primary education and towards secondary and TVET. I suspect the new focus on learning metrics is more political ploy than conscientious effort to shift the way aid dollars are spent. Twenty years ago a group of donors conducted a joint evaluation of aid to basic education – and shortly afterwards the World Bank evaluated its support to primary education. Each found lacklustre performance, in areas consistent with USAID’s more recent evaluation of its support to basic education and with a Norwegian evaluation of GPE and UNICEF. It’s time to evaluate our sector again, answering one clear question (with thanks to Lant Pritchett): why, after 30 years and so many global commitments, have aid and international donors had such limited impact on learning?

3 thoughts on “The New Learning Targets – Redux”

  1. Thank you, Karen, for these thoughts — glad I came across them!
    Some great nuggets! I agree that ownership – and the whole process of developing it, particularly at national level – is just as important as the measurement tools and targets. The tension between the national level and the needs of international organisations (to justify their investments) is palpable and – for me – raises a question that is constantly in my mind: do we really need to invest so much effort in international comparability? Who needs it? Could rigorous national-level learning outcome assessments suffice to see children and adults benefitting from a more focused and effective learning process?
    By the way, very happy to see that your UT department includes adult education!

  2. Well done, Karen. The simple fact of how hard it has been to come up with this measure – let alone the compromises and ‘smoothing of edges’ required to produce something that all the global actors will accept – should reveal a global indicator as a fool’s quest. You hit on precisely the key question: what data and assessment process will be required and useful to move the quality needle on learning (and access, highly linked) at a system level? Not a single global indicator, for sure. Just as each country adheres fervently to its own curriculum, it can/should do so with its assessment standards and measures. And just as each country’s curriculum evolves as it is informed by regional and global experience, learning, and trends, so too can its assessment measures and methods. A global measure is too easy to dismiss or to use for political or other reasons. Home-grown measures are more difficult to manipulate. Thanks for raising this in your typically smart and blunt way.

  3. Karen Mundy asks: Why after 30 years, and so many global commitments, have aid and international donors had such limited impact on learning? Certainly not because of the lack of learning metrics! My answer is because we have been miserly and controlling. Miserly in that we have put in a pittance compared to what is needed. Education conditions in many developing countries are appalling (and, for too many, in rich countries as well). A very large influx of resources is needed. Controlling in that agencies of the Global North have been telling the Global South what is best to do for decades. A technical approach to supposed best education practices has been and will continue to be an utter failure. We need to approach improved practice through participatory debate, from the local to the global, along with providing the resources to make a real difference. That is a very different theory of change, and improved metrics have a small role to play in it.

    Steve Klees
    University of Maryland
