A four-year-old in an American preschool classroom is being handed a pencil and asked to perform tasks that developmental science suggests she may not be neurologically prepared to perform. The test will take about thirty minutes. The results will be entered into a database, where a number will appear next to her name, and that number will be used to make decisions about her placement, her teacher’s evaluation, and, in certain situations, her school’s funding. She knows none of this. All she knows is that the person asking the questions isn’t her teacher and the pencil feels uncomfortable in her hand.
Long before standardized early childhood assessment grew into the industry it is today, the World Organization for Early Childhood Education (OMEP) was opposing this scene, in a variety of ways and in numerous nations. OMEP’s stance is clear: the standardized evaluation of children from birth to age eight is inaccurate because it misrepresents both the nature of early childhood development and how that development ought to be documented. The organization is not asking for a gap in accountability. It is arguing that accountability means something different when the subject is a three-year-old building with blocks, telling a story to a stuffed animal, or resolving a disagreement over who gets to use the swing. It means observation. Documentation. Portfolio assessment. The slow, methodical study of a child’s actual development in the setting where they actually live and learn, carried out by adults who know the child.
OMEP Position on Standardized Assessment in Early Childhood — Key Facts
World Organisation for Early Childhood Education | OMEP ESD Rating Scale | Assessment Policy Context | 2024–2026

| Topic | Details |
| --- | --- |
| OMEP’s core position | Strongly opposes standardized assessment of children from birth to age 8; advocates instead for contextual, ethical, child-centered evaluation aligned with developmental science and children’s rights |
| Preferred assessment methods | Play-based assessment; observation and documentation; portfolio assessment; authentic evaluation through daily classroom activities — all placing children as active agents, not passive test subjects |
| OMEP ESD Rating Scale (2nd ed.) | OMEP’s own Educational Rating Scale for Sustainable Development in Early Childhood (ERS-SDEC) — designed explicitly as a self-audit tool for practitioner reflection; not intended for comparing preschools, ranking children, or accountability labeling |
| Scale development history | Originally developed 2011–2014 across seven countries: Chile, China, England, Kenya, Korea, Sweden, and the USA; endorsed by OMEP World Assembly and Executive Committee; 2nd edition adapted 2019 |
| Scale structure | Organized around UNESCO’s three pillars of sustainability (social-cultural, environmental, economic); rated 1–7 (inadequate to excellent); uses qualitative indicators, not quantitative scoring of children |
| Why OMEP opposes standardized testing | High-stakes readiness tests are over-used and misinterpreted; many tools are developed in high-income countries with no psychometric evidence for other cultures; standardized tests narrow quality and reduce engagement; they treat early development as quantifiable in ways it fundamentally is not |
| 76th OMEP World Assembly (Bangkok, 2024) | Theme: “Right from the Start for ECCE: Step Beyond All Together”; nearly 400 participants from 60+ countries; holistic approach to ECCE assessment and decentralization were major themes; only ~35 of 399 presentations addressed human rights dimensions |
| Key OMEP figure | Mercedes Mayol Lassalle, OMEP World President — led Assembly reaffirmation that “transformation of ECCE is multidimensional and does not just end with the provision of more services and infrastructure” |
| Global ECCE data context (UNESCO, 2024) | 30% of children in countries with available data are not developmentally on track; pre-primary enrolment for one year of organized learning dropped from 75% (2020) to 72% (2023); trained teacher ratio only 57% in low-income countries |
| NASP position (aligned with OMEP) | National Association of School Psychologists supports early childhood assessment that allows accurate and fair identification of developmental needs — explicitly states standardized testing has been over-used and misinterpreted in early childhood settings |
| What OMEP advocates instead of commercial testing | Publicly funded, observation-based, holistic ECCE evaluation; pedagogies rooted in Froebel, Montessori, Steiner, Freire, and Reggio Emilia (Malaguzzi); children as competent contributors and agents of change, not recipients of assessment |
| Cultural validity problem | Most standardized early childhood tools are developed without psychometric evidence for non-Western cultures; their use in low- and middle-income countries is described as “problematic” by developmental researchers |

The commercial testing industry dislikes this position for simple reasons. Early childhood assessment is a sizable global market, and instruments created by publishers in wealthy, predominantly Western contexts have made their way into educational systems all over the world, frequently without the psychometric validation needed to determine whether they accurately measure anything in those cultural contexts. OMEP has been direct about this, stating that many standardized tools are problematic outside the context in which they were created. The National Association of School Psychologists has said much the same, noting that standardized testing has been over-used and misinterpreted, particularly in early childhood settings. These are not fringe viewpoints. They are held by developmental scientists, the people who study how children actually learn. Despite this scientific consensus, the publishing industry’s continued hold on early childhood assessment practice is one of those gaps between research findings and policy action that never receives enough attention.
OMEP’s own contribution to the field is the Educational Rating Scale for Sustainable Development in Early Childhood (ERS-SDEC), now in its second edition. It was developed across seven countries, including Chile, Kenya, Korea, and Sweden, and adapted in 2019 with the endorsement of the OMEP World Assembly. The scale is organized around the three pillars of sustainability identified by UNESCO: social-cultural, environmental, and economic. It is rated on a seven-point continuum from inadequate to excellent, using qualitative indicators rather than quantitative scoring of children. It is explicitly not intended to rank children, compare preschools to one another, or produce the kind of accountability data used to penalize programs or categorize families. It exists to help educators assess where they are, plan where they want to go, and have frank conversations with colleagues about the gap between the two. Perhaps the best way to describe how it differs from most of what the commercial assessment market produces is that it reads like something created by people who trust teachers.
The holistic approach to ECCE assessment was a recurrent theme at the 76th OMEP World Assembly in Bangkok in July 2024, which drew nearly 400 participants from more than 60 countries. Yet of the 399 presentations delivered at the conference, only about 35 explicitly addressed early childhood through a human rights lens. That gap matters, because OMEP’s assessment argument is, at bottom, a rights argument. The instruments used to assess a child’s development must treat the child as a rights-bearing subject, a complete human being with agency, rather than a vessel to be measured and sorted. That status is not honored by a timed test administered by a stranger in a room where the child has no relationship or context, producing a score that follows the child through their academic career. It is honored by a practitioner who has observed the child for months, recorded those observations, gathered evidence of growth across several domains, and can present that documentation in a professional setting.
Whether the commercial assessment sector will meaningfully change its practices in response to this kind of persistent advocacy remains an open question. Too many district contracts, too much revenue, and too much institutional inertia are working against it. But watching this debate unfold across early childhood research conferences, policy briefs, and international working groups, there is a sense that the scientific ground has already shifted. OMEP is not winning every policy dispute. It is, however, quietly and persistently forcing the other side to defend a position that developmental science no longer supports.
