Australian Bureau of Statistics
1504.0 - Methodological News, Dec 2010
Released at 11:30 AM (Canberra time) 09/12/2010
Benchmarking Small Area Estimates
The demand from users for regional estimates has been growing rapidly in Australia and throughout the world over the last 20 years. This demand is driven by the increasing requirement on policy makers to formulate evidence-based policy and deliver programs that are cost-effective, responsive to a changing world and targeted to relevant areas.
Small area estimation (SAE) is concerned with developing methods for calculating reliable estimates for geographic areas or domains that are sample-deprived. In many small regions or domains, sample sizes are so small that estimates calculated using conventional design-based estimation methods (such as survey-weighted totals) are subject to very high sampling errors, making these estimates statistically meaningless. SAE can often overcome this problem by using a statistical model that relates survey data to available auxiliary data. In other words, SAE overcomes the small sample problem by borrowing strength from auxiliary information and similar units in other areas. The ABS has been involved in producing small area estimates (SAEs) for many years; completed projects have included disability, health, Indigenous health and water use, as well as SAE feasibility studies of labour force status and household net wealth.
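The idea of borrowing strength can be illustrated with a simple composite (shrinkage) estimator that blends a noisy direct survey estimate with a synthetic model-based prediction from auxiliary data. This is a minimal sketch with made-up numbers, not the ABS model (which is a logistic binomial model with random effects); all values and variances below are assumed for illustration.

```python
import numpy as np

# Hypothetical data for five small areas: a direct survey estimate of a rate
# (subject to sampling error) and a synthetic estimate from a regression on
# auxiliary data. All values are illustrative assumptions.
direct = np.array([0.62, 0.55, 0.71, 0.48, 0.66])            # direct estimates
var_direct = np.array([0.010, 0.020, 0.015, 0.030, 0.008])   # sampling variances
synthetic = np.array([0.60, 0.58, 0.68, 0.52, 0.64])         # model predictions
var_model = 0.005  # assumed between-area (random-effect) variance

# Composite estimator: weight each area's direct estimate by its reliability
# relative to the model. Areas with large sampling variance borrow more
# strength from the synthetic estimate.
gamma = var_model / (var_model + var_direct)
sae = gamma * direct + (1 - gamma) * synthetic

print(sae)
```

Each composite estimate lies between the direct and synthetic values, with the least reliable areas (largest sampling variance) pulled furthest towards the model prediction.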
Users expect the small area estimates provided to them to be both consistent and coherent with respect to published official statistics. Coherence is the seventh dimension of the ABS Data Quality Framework and is defined as "the internal consistency of a statistical collection, product or release, as well as its comparability with other sources of information, within a broad analytical framework and over time". Small area estimates that are not consistent and coherent will struggle to gain credibility with users. Benchmarking SAEs has other advantages: it can reduce the impact of model mis-specification (including poor-quality auxiliary data) and ameliorate the effects of influential or outlying data points.
Recently, the Analytical Services Unit (ASU) developed a methodology for producing SAEs that are guaranteed to sum to higher-level published estimates. Measures of accuracy for these "benchmarked" estimates were also produced. The work was carried out on Labour Force Survey data, using Centrelink and Census data as auxiliary variables in a logistic binomial model with random effects. The method involved adding a constraint to the standard log-likelihood function and then using the Lagrange multiplier method to derive a maximum penalised quasi-likelihood algorithm to estimate model parameters. Four different constraint levels were trialled: the Australian, state, state by capital city / non-capital city, and dissemination region levels. Relative root mean square errors (RRMSEs) were calculated using the parametric bootstrap.
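The mechanics of adding a benchmark constraint via a Lagrange multiplier can be shown on a deliberately simplified problem. Instead of the ASU's penalised quasi-likelihood for a logistic mixed model, the toy objective below is least squares: move the unconstrained SAEs as little as possible while forcing the population-weighted total to hit a published benchmark. All numbers are assumed for illustration.

```python
import numpy as np

# Toy benchmarking via a Lagrange multiplier (illustrative, not the ABS
# production algorithm): minimise ||theta - y||^2 subject to n . theta = B,
# where y are unconstrained small area estimates (rates), n are area
# populations and B is a published benchmark total. All inputs are assumed.
y = np.array([0.60, 0.58, 0.68, 0.52, 0.64])     # unconstrained SAEs
n = np.array([1200.0, 800.0, 1500.0, 600.0, 900.0])  # area populations
B = 2900.0                                       # published benchmark total

# Setting the gradient of the Lagrangian to zero gives theta = y + lam * n;
# substituting into the constraint pins down the multiplier lam.
lam = (B - n @ y) / (n @ n)
theta = y + lam * n

print(theta)
print(n @ theta)  # matches the benchmark B by construction
```

The adjustment each area receives is proportional to its population, which is a property of this particular quadratic objective; the ASU method instead embeds the constraint in the model's log-likelihood, so the adjustments respect the binomial error structure.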
The results showed that when benchmark constraints were set at the Australian or state levels, SAEs and their RRMSEs were indistinguishable from the corresponding unconstrained estimates. This occurred because the unconstrained model produced SAEs that came close to summing to these published estimates, and the sampling errors on these broader-level benchmarks were quite low. However, when the constraint level was set at either the sub-state or dissemination region level, benchmarked SAEs differed considerably from their unconstrained counterparts and RRMSEs were much higher. This suggests that it is not advisable to constrain SAEs to benchmark levels that are themselves subject to high sampling error.
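The parametric bootstrap used to obtain RRMSEs can be sketched as follows: treat the fitted model as the truth, repeatedly simulate survey data from it, re-estimate, and measure the spread of the replicate estimates around the simulated truth. This sketch uses raw binomial proportions as the re-estimation step for brevity (the real method would re-fit the mixed model on each replicate); rates, sample sizes and the replicate count are all assumed.

```python
import numpy as np

rng = np.random.default_rng(42)

# Parametric bootstrap for RRMSE (illustrative, not the ABS implementation).
p_fit = np.array([0.60, 0.58, 0.68, 0.52, 0.64])  # fitted area rates (assumed)
n_samp = np.array([40, 25, 60, 15, 35])           # area sample sizes (assumed)
R = 2000                                          # bootstrap replicates

# Simulate R replicate surveys from the fitted rates and re-estimate each
# area's rate as the sample proportion.
boot = rng.binomial(n_samp, p_fit, size=(R, len(p_fit))) / n_samp

# RMSE of the replicate estimates around the simulated truth, then divide
# by the point estimate to get the relative RMSE.
rmse = np.sqrt(np.mean((boot - p_fit) ** 2, axis=0))
rrmse = rmse / p_fit

print(rrmse)
```

As expected, the areas with the smallest sample sizes show the largest RRMSEs, which is the pattern that makes benchmarking to noisy low-level totals risky: the benchmark itself carries sampling error of the same order as the estimates being constrained.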
Further information can be obtained from Daniel Elazar on (02) 6252 6962 or firstname.lastname@example.org.
This page last updated 30 March 2011