PFM Results Consulting

Outcome Measurement I

(Originally published 30 July 2010)
Performance budgeting and management are, according to many of their critics, guilty of an “emphasis on unmeasurable outcomes”.* The critics claim that most government outcomes are either impossible or extremely difficult to measure, and they see this as the Achilles’ heel of results-oriented management.
This overlooks the great progress in outcome measurement that has been made by governments around the world in recent decades. Measures have been developed for outcomes which were previously thought too difficult, or even impossible, to handle. Of course, outcome measurement is not easy, and there are areas of government where good outcome measures will always remain out of reach. But the critics greatly exaggerate the difficulties. Importantly, progress has been greatest in sectors where government services play a particularly important role, such as health.
The approach to measuring the performance of education systems is a good example of the shift to more outcome-focused performance measurement. The traditional approach focused overwhelmingly on outputs and inputs. Input indicators such as the student/teacher ratio were used as measures of quality. Educational equity was measured entirely in terms of participation rates (which are output measures), such as the percentage of girls attending school and the enrolment rate of children from low-income families in the optional final years of high school. To the extent that outcome measures were used at all, they were confined to examination success and graduation rates – highly imperfect measures given the lack of a consistent yardstick either over time or between jurisdictions. Assessment-based measures of education outcomes are particularly susceptible to so-called “grade inflation”.
Today, the position is totally different in leading countries. The primary emphasis is upon standardized tests of student knowledge outcomes, including but not confined to literacy and numeracy levels. “Standardized” is the key word here. These tests are comparable over time and between different jurisdictions. In countries such as the UK, France and Australia, this means that the same tests are applied right across the country, making relative performance very clear. At the international level, the PISA tests developed by the OECD provide a tool for international benchmarking of student knowledge outcomes.
Building on these foundations, so-called “value-added” outcome measures have been developed which adjust, say, measured school performance to take into account the social characteristics of school student populations. Everyone knows that, for example, children in well-off suburbs have an educational advantage because of the greater support they tend to get at home – or that children in immigrant families where neither parent has a good command of the national language are at a great disadvantage. Value-added measures use robust statistical methodology to adjust school outcome measures for this, putting comparisons between schools on a fair basis. Sometimes the results of this adjustment have been surprising – with, for example, some supposedly strong schools in affluent areas revealed as in fact quite mediocre in their real contribution to their students’ educational improvement.
Value-added measures in education are one impressive example of the adjustment of outcome measures for the impact of so-called “external factors” (i.e. environmental factors or client characteristics beyond government control which impact on outcomes achieved).
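The basic logic of such an adjustment can be sketched in a few lines. The following is a deliberately simplified illustration, not any jurisdiction’s actual methodology: all the figures are invented, and a single linear regression on one socioeconomic index stands in for the far richer statistical models real value-added systems use. A school’s “value added” is its performance above or below what its intake characteristics alone would predict.

```python
import numpy as np

# Invented data for six schools: a socioeconomic-status (SES) index for
# each school's intake (higher = more affluent) and the school's mean
# test score. These numbers are purely illustrative.
ses_index = np.array([0.9, 0.7, 0.6, 0.4, 0.3, 0.1])
mean_score = np.array([78.0, 74.0, 69.0, 66.0, 64.0, 58.0])

# Fit a simple linear model: the score we would expect from SES alone.
slope, intercept = np.polyfit(ses_index, mean_score, 1)
expected = intercept + slope * ses_index

# "Value added" is the residual: how far each school performs above or
# below the expectation set by its student population's characteristics.
value_added = mean_score - expected

for school, va in enumerate(value_added, start=1):
    print(f"School {school}: value added = {va:+.1f}")
```

Under this kind of adjustment, a school with a modest raw score but a disadvantaged intake can show a large positive residual, while an affluent school with high raw scores can turn out to be adding little – exactly the kind of surprise described above.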
Measures of educational equity have also become more outcome-based. Instead of focusing only on school participation rates, the focus has now moved to comparative student knowledge outcomes. An example is measures of the difference between numeracy and literacy levels of disadvantaged children relative to those of the rest of the population of the same age. Such “gap” measures of outcomes are becoming increasingly widely used in many sectors.
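A “gap” measure of this kind is arithmetically simple: the difference between the mean outcome of a disadvantaged group and that of the rest of the population of the same age. A minimal sketch, using invented numeracy scores:

```python
import statistics

# Invented numeracy scores for two groups of students of the same age.
disadvantaged = [52, 55, 48, 60, 50]
rest_of_population = [68, 72, 65, 70, 75]

# The "gap" outcome measure: the difference between group means.
# Progress on equity means this number shrinking over time.
gap = statistics.mean(rest_of_population) - statistics.mean(disadvantaged)
print(f"Numeracy gap: {gap:.1f} points")  # → Numeracy gap: 17.0 points
```

Tracked over successive years of standardized testing, such a gap measure shows directly whether disadvantaged children’s outcomes are converging with those of their peers.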
All of this has been accompanied by scepticism about the value of input-based measures of educational quality, reflecting research showing a very weak relationship between class size and educational outcomes (at least below quite large class sizes). Research is similarly lukewarm about the impact of, say, the percentage of teachers with advanced qualifications on educational outcomes.
This transformation in educational performance measurement is not of merely historical interest, because the new outcome-based approach remains a minority practice internationally. This is the case, for example, in most low-income and middle-income countries. On the African continent, even the regional leader, South Africa, lacks measures of student knowledge outcomes based on standardized tests. Amongst the key performance indicators reported in the education budget under its performance budgeting system, the matriculation rate is the only outcome indicator.
Outcome measurement continues to make great strides around the world. I’ll discuss in future blog pieces some of the other key measurement techniques which have been used.
 
* Beryl Radin (2006), Challenging the Performance Movement (Georgetown University Press).