There has been a lot of discussion in recent months on global SDG indicator 4.1.1a on learning outcomes in early grades. As few countries have been able to report on it, its future is at risk ahead of the upcoming comprehensive review of the SDG indicators. In a recent blog, we explained the technical reasons that have delayed consensus and how we are working to overcome them. This blog attempts to sketch the outline of a sustainable solution.

An argument that has been voiced in recent days is that the perfect should not be the enemy of the good. A ‘good’ indicator must convey reliable, comparable information on learning outcome levels and trends – and be based on blocks of information that can guide policy and planning.

For indicator 4.1.1a data to be trustworthy, the following questions must be answered:

  • What content, or domains, does it include to make one assessment comparable to others?
  • What are the minimum standards per domain and how should performance in each domain be aggregated to allow estimation of the share of students achieving the minimum proficiency level?
  • What data collection procedures ensure quality?

Various cross-national and national assessments lay claim to being contenders for reporting on indicator 4.1.1a. Yet assessment tools have been designed to serve different objectives in various education contexts, and global comparability is not necessarily one of those objectives.

Cross-national assessments may not even have been designed to be comparable – and, even when comparable, they tend not to measure the SDG 4 minimum proficiency level. In 2018, there was a consensus decision to use one of their own proficiency level definitions as a proxy for the global minimum proficiency level for reporting purposes. Cross-national assessments are also not necessarily aligned with national curricula, especially in countries that were not among those that established the assessment programme in the first place.

National assessments can potentially be used as a basis for reporting, provided further work is done to align them with the minimum proficiency level and to ensure the quality of the assessment procedure. One hybrid approach is to include within a national assessment enough questions designed to measure the global minimum proficiency level. This route would be sustainable, as it respects national authority and priorities, is targeted to local contexts, and can provide evidence related to the national curriculum, pedagogy, and policy. It also addresses concerns related to content and procedural capacity, building local capacity in assessment development, analysis, and reporting, while ensuring compliance with the minimum proficiency level.
