5256.0.55.001 - Information Paper: Non-Profit Institutions - A Draft Information Development Plan, Jul 2010  
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 27/08/2010  First Issue



The Productivity Commission proposes a broad measurement framework as a context for future information development. It is consistent with approaches developed to measure the contribution of the government sector, as many of the measurement issues are the same. Both governments and NPIs provide non-market services (services provided free or for an insignificant price). This contrasts with market-based production undertaken by for-profit enterprises, where the economic value (price) of a service is established in the marketplace, making it relatively straightforward to measure economic contribution.

The Productivity Commission proposes an inputs–outputs–outcomes–impacts approach. The Framework can be represented as follows.


Measurement Framework – types of statistical information needed:
  • Inputs – financial data, employment, volunteering, giving, and organisation counts
  • Outputs – data for units of output (number of clients, hours of contact, placements etc.)
  • Outcomes – social outcome indicators for activities, target individuals and groups
  • Impacts – social indicators of the wider impacts on community wellbeing

While the Productivity Commission Report provides a valuable context for measuring the contribution of the sector, the application of such a framework to the NPI sector is in its infancy. Defining appropriate measures for outcomes and impacts is problematic, and improvements in the available data are likely to come only over the longer term.

The Productivity Commission Report presents the Framework in more detail. It is reproduced below as Figure 2.


Source: Productivity Commission 2010, Contribution of the Not-for-Profit Sector, Research Report, Canberra

The framework provides for measurement (collectively or individually) at four levels:
  • Inputs – funding by source, direct economic contribution and in-kind support. This consists mainly of financial information, together with physical information on volunteering inputs and other in-kind support.
  • Outputs – direct services provided to clients and members, and indirect services – connecting the community, influence and community endowments. This is a mixture of financial and non-financial (physical and qualitative) information.
  • Outcomes – from direct services provided, and indirect services. This is mainly non-financial information, and attribution of outcomes to particular service providers is often difficult.
  • Impacts – the broader context of the wellbeing of the community. This is mainly non-financial information.

The statistical measurement framework for NPIs should also incorporate a variety of other statistical classifications and standards developed for the NPI sector and for social and economic statistics more generally. These include the International Classification of Non-Profit Organisations (ICNPO) (see appendix 5), the Australian and New Zealand Standard Industrial Classification (ANZSIC), the Australian Culture and Leisure Classifications, frameworks for the measurement of social capital and wellbeing, and a number of other social and economic statistics frameworks.

In assessing the data needs for improved evaluation of the contribution of the sector, the Productivity Commission (p.88) notes that it is important to distinguish two broad purposes of data development, as they will typically require different information:
  • Macro evaluation for the sector or for broad subsets of the sector provides information on the structure of the sector, its role in delivering services to the economy and to society and trends over time. It can also explore links with the economy and social developments such as social and human capital and volunteering.
  • Micro evaluation focuses on individual organisations or programs – performance and outcomes at the individual level are the key focus. Research at the micro level has the advantage that expected outputs and outcomes can be more precisely defined and a larger number of relevant observations are potentially available for research and to inform decisions around best practice approaches more generally.
In an ideal data environment, much of the macro view would be derived by summing the data available for micro evaluation (a bottom-up approach), or alternatively, the micro view would be disaggregated from macro-level information. However, in practice this is unlikely to be the case. For technical and cost reasons, much of the macro information is derived using representative samples rather than a full enumeration, and from a variety of high-level information taken from administrative sources.


At the macro level, populating the framework with inputs (mainly financial) data is relatively straightforward, provided data collections are in place. Much of the macro data currently available for NPIs relates to inputs.

The measurement of outputs in physical terms is also relatively straightforward for groups of organisations providing the same or similar service, such as school education. Non-financial indicator data are generally not additive across the range of activities undertaken by NPIs, so it is not possible to present a wider view of the performance of the sector. One approach that might have value for aggregating outputs is to use the costs of providing services to weight the different non-financial indicators into a composite index to measure change over time (the approach used in the Australian national accounts to measure change in a range of government services outputs). Such an approach can be envisaged for some subsets of the NPI sector, such as health, education and employment services.
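The cost-weighting idea above can be sketched as a simple Laspeyres-type volume calculation: each activity's output growth is weighted by its share of base-period service-provision costs. The sketch below is purely illustrative; the activities, quantities, costs and the helper name `composite_output_index` are hypothetical, not an ABS or national accounts specification.

```python
# Illustrative sketch of a cost-weighted composite output index.
# All activities, quantities and costs are hypothetical.

def composite_output_index(base_costs, base_qty, current_qty):
    """Laspeyres-type volume index: weight each activity's quantity
    relative (current / base) by its share of base-period costs.
    Returns an index with the base period = 100."""
    total_cost = sum(base_costs.values())
    index = 0.0
    for activity, cost in base_costs.items():
        weight = cost / total_cost                       # cost share
        relative = current_qty[activity] / base_qty[activity]
        index += weight * relative
    return 100.0 * index

# Hypothetical NPI service outputs (physical units) and base-year costs
base_costs = {"health": 600.0, "education": 300.0, "employment": 100.0}
base_qty   = {"health": 1000,  "education": 500,    "employment": 200}
curr_qty   = {"health": 1050,  "education": 520,    "employment": 190}

print(round(composite_output_index(base_costs, base_qty, curr_qty), 1))
# prints 103.7
```

Because the weights are cost shares, a 5% rise in a high-cost activity (health) moves the composite more than a 5% fall in a low-cost one (employment), which is what allows otherwise non-additive physical indicators to be combined into a single measure of change over time.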

Measurement of outcomes and impacts becomes increasingly contentious because of issues around the choice of appropriate indicators. There is a large body of work already available or underway on outcome indicators for areas of government policy interest. The COAG Reform Council (CRC) has prepared or is preparing framework papers on various aspects of government service measurement under National COAG Agreements. The Productivity Commission's annual Report on Government Services (ROGS) assembles and publishes data on broad-level indicators of public sector performance. Although these indicators mainly relate to outcomes against government policy targets, regardless of who is actually delivering the services on the ground, they are the types of broad-level indicators that could be considered for the NPI sector. The challenge is to distinguish or attribute outcomes to the services delivered by the NPI sector.

The ABS, AIHW and other Australian Government, and state and territory organisations have also been active in developing indicators of social, economic and environmental conditions. Again, the challenge is to be able to attribute outcomes to the various service providers involved, and to take account of the external factors impinging on outcomes. Care has to be taken in the interpretation of indicators. If particular indicators of wellbeing are deteriorating, it does not necessarily mean that the outcomes from NPI interventions and services have declined. The opposite may be true, as there may be significant other factors at play affecting the demand for services (such as deteriorating economic conditions). Ultimately, outcome indicators may still be regarded as useful as long as NPIs are dominant or substantial contributors, but not necessarily the only contributors. They are also useful to NPIs as indicators of need.

In terms of what is feasible, the potentially high cost of data development and collection (including the costs to individual organisations providing data) has to be weighed against the benefits of having new or improved information. Consultation with stakeholders is needed to set priorities for new data development. The task of developing an agreed set of macro level statistical indicators of outcomes and impacts should not be underestimated. Progress is likely to be slow.


Given the diversity of the NPI sector, and the difficulty of aggregating indicators not denominated in units of currency, the Productivity Commission considers outcomes measurement and analysis to be most suited to the assessment of micro-level service delivery programs rather than to macro-level measurement across program types or segments of the sector.

Contract agreements between service funders and NPI service providers include reporting requirements to account for funds spent. They are also likely to include a measure of physical units of output and, increasingly, a measure of performance against expected outcomes. Funders will also require information on the financial health and suitability of service providers before a contract is let. These data are required for administrative purposes but, if collected and stored in a suitable way, they have the potential to be captured for statistical purposes and used in analysis and evidence-based policy advice. The ABS has a role in providing advice on data frameworks and systems in order to increase the coherence of information across agencies, and to maximise the statistical potential of the data.

The Productivity Commission highlights a need for better and more standardised financial reporting and evaluation for organisations involved in service contracting. The recent adoption of the Standard Chart of Accounts for financial reporting by NPIs to governments should have significant benefits for streamlining administrative processes, and is potentially a significant boost to data that might become available for statistical use. The need for improved measurement and reporting approaches has been echoed in the stakeholder consultation for this IDP.