Australian Bureau of Statistics

Newsletters - Methodological News - Issue 5, December 2001

A Quarterly Information Bulletin from the Methodology Division

December 2001



Demand among analysts and policy makers for information on the composition, accumulation and distribution of wealth between households has increased in recent decades. Since the mid-1990s, the ABS has produced a household balance sheet as part of the Australian System of National Accounts. The household balance sheet dissects the aggregate wealth of households and unincorporated enterprises into a number of asset and liability classes. In addition, several Australian studies have estimated particular components of household wealth using a variety of methods and data sources. Notwithstanding these developments, information on wealth at the household level remains somewhat limited.

In 2000, Analysis Branch began work to build a dataset which would expand the scope of previous work by:
  • extending the number of asset and liability categories modelled to more closely reflect the structure of the household balance sheet, and incorporate additional detail in some areas;
  • compiling data which can be dissected by different statistical unit types (households, income units), a range of household types (based on the composition of the household and ages of the people within it), and a range of other characteristics such as State and income level;
  • generating estimates for components of wealth which have not been included in previous studies, such as consumer durables; and
  • generating estimates of wealth across time.

The Survey of Income and Housing Costs and the Household Expenditure Survey are being used as the main data sources for the estimates. A large number of other data sources are also being used, including the Rental Investors Survey, the Business Longitudinal Survey, and the Survey of Employment Arrangements and Superannuation.

There are a variety of methods used to estimate the different components of aggregate wealth. These include:
  • the use of directly reported survey data on asset and liability values;
  • grafting asset and liability values from one data source to another using the maximum level of information available;
  • distributing aggregate asset values in proportion to income; and
  • modelling asset or liability values based on income unit characteristics.
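As a rough illustration of the third method, distributing an aggregate asset value in proportion to income can be sketched as follows; the function and figures are hypothetical, not the ABS implementation:

```python
def distribute_by_income(aggregate_value, incomes, weights):
    """Allocate an aggregate asset value across units in proportion to
    income, so that the weighted sum of the per-unit values equals the
    aggregate."""
    weighted_income = sum(i * w for i, w in zip(incomes, weights))
    return [aggregate_value * i / weighted_income for i in incomes]

# Three units with equal survey weights and incomes of 10, 20 and 30
# receive shares of a 60-unit aggregate in the same 1:2:3 proportion.
shares = distribute_by_income(60.0, [10.0, 20.0, 30.0], [1.0, 1.0, 1.0])
```

By construction, multiplying each allocated value by the unit's weight and summing recovers the aggregate, whatever the weights are.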

Adjustments are made to the aggregate figures in the household balance sheet to account for differences in scope and coverage between the household sector in the National Accounts and household surveys. The survey-based estimates are then benchmarked to the household balance sheet, to ensure consistency between the two at the aggregate level.
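A minimal pro-rata sketch of the benchmarking step, assuming a single scale factor per asset class (the actual adjustment is more involved):

```python
def benchmark_to_aggregate(unit_values, weights, balance_sheet_total):
    """Scale survey-based unit values so that their weighted sum matches
    the (scope-adjusted) household balance sheet aggregate."""
    survey_total = sum(v * w for v, w in zip(unit_values, weights))
    factor = balance_sheet_total / survey_total
    return [v * factor for v in unit_values]

# Survey values summing (with unit weights) to 6 are scaled up to meet
# a balance sheet aggregate of 12.
benchmarked = benchmark_to_aggregate([1.0, 2.0, 3.0], [1.0, 1.0, 1.0], 12.0)
```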

Work on compiling estimates for the years 1993-94 to 1999-00 is almost complete. The results of this study will be released to the public in early to mid 2002. In the longer term, the estimation methods developed may be used to compile wealth estimates on an ongoing basis. A number of agencies would find these data highly valuable as an input to analysis, forecasting or revenue estimation.

The project team is Kristen Northwood, Terry Rawnsley and Lujuan Chen.

For more information, please contact Kristen Northwood on (02) 6252 5854.



We have established a 'centre of excellence' for significance editing within the methodological program. A major goal of the centre is to develop and foster a more unified approach to editing within the ABS. The centre is headed by Keith Farwell in our Tasmanian office.

Recently, a paper (available on request) was presented to the Economic Statistics Strategy Co-ordination Committee which outlined the directions we would like to see for methodological research as well as practical application. Two of the aims of the paper were to present both a framework from which significance editing practices and procedures could be applied and our views on the future directions that significance editing should take within the ABS.
One of the main features of the framework proposed is the use of the terms "input significance editing" and "output statistical editing".

What is significance editing?

The term "significance editing" is used to describe a general editing approach which incorporates survey weights and estimation methodology into edits and maintains a link between individual responses and output estimates. However, it is often necessary to distinguish whether significance editing is being performed at the input stage or at the output stage of the collection cycle. When the distinction is needed, we use 'input significance editing' to describe significance editing applied at the input stage and 'output statistical editing' when it is applied at the output stage. Although each requires specific measures, both fit within the general framework. For input significance editing, a score is produced for each response which links editing effort to the likely impact it will have on estimates. For output statistical editing, a score links units and their weighted contributions to specific estimates or 'output cells'. In either case, responses can be ranked in order of score size to produce a prioritised list of units which will direct resources to those areas where editing effort is expected to have the greatest impact.

Scores can be calculated at the item level and the provider level. For example, a provider can have several item scores and one provider score. The provider score is a summary score based on the provider's item scores. It is expected that both kinds of scores will be useful.

Input Significance Editing

A basic standardised input significance score (which can be thought of as a measure of editing benefit) for a level estimate is calculated by taking the absolute difference between the reported value and an imputed value for that unit, and multiplying this difference by the unit's weight.

The input significance score can be calculated independently of the response rate thus allowing editing action to begin as soon as responses are received. A unit's previous return is often used as the imputed value, survey design weights are usually used as approximations for estimation weights, and the previous estimates are usually used to approximate the current expected estimates. Units with a score higher than a specified cut-off value can be selected and placed in a prioritised list for editing attention. Even if a cut-off is not used, units can still be ranked and prioritised for attention.
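The score and the prioritised list described above can be sketched in a few lines; the cut-off value and response data are hypothetical:

```python
def input_significance_score(reported, imputed, weight):
    """weight x |reported - imputed|: the likely impact on a level
    estimate of leaving this response unedited. The imputed value is
    often the unit's previous return."""
    return weight * abs(reported - imputed)

def prioritise(responses, cutoff=0.0):
    """Rank responses by score, keeping those above the cut-off.
    Each response is a (unit_id, reported, imputed, weight) tuple."""
    scored = [(uid, input_significance_score(r, i, w))
              for uid, r, i, w in responses]
    above_cutoff = [s for s in scored if s[1] > cutoff]
    return sorted(above_cutoff, key=lambda s: s[1], reverse=True)

# Unit A moves furthest from its previous return once weighted, so it
# heads the editing queue; unit C has no change and drops out.
queue = prioritise([("A", 110.0, 100.0, 5.0),
                    ("B", 500.0, 495.0, 1.0),
                    ("C", 100.0, 100.0, 3.0)],
                   cutoff=1.0)
```

Because the score needs only the unit's own data and its weight, it can be computed as each response arrives, without waiting for the rest of the sample.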

Output Statistical Editing

For output statistical editing, scores are based on a combination of unit contributions to estimates, unit contributions to standard errors of estimates, and unit contributions to movements in the estimates. In output editing we need to focus our attention on actual weighted contributions of units rather than on predictions of change in estimates that could be expected due to editing (as is the case with input significance editing). Output editing involves a combination of detecting outliers, detecting remaining significant reporting errors, and analysing the trends in estimates (such as movements for continuing surveys). The output editing scores will prioritise those responses which most assist with the dual objectives of controlling estimate quality and understanding the trends in the estimates.

Three separate initial output scores are created for a specific item based on contributions to the estimate, the movement, and the standard error. These scores are then combined into a single item score which can be interpreted as an average score representing a provider's overall importance to the item. Item scores can be further combined to generate provider scores.
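A minimal sketch of combining the scores, assuming equal weighting of the three components (the weighting actually used may differ):

```python
def item_score(est_contrib, movement_contrib, se_contrib):
    """Average the three initial output scores (contributions to the
    estimate, the movement and the standard error) into a single item
    score: an overall measure of the provider's importance to the item.
    Equal weighting is an illustrative assumption."""
    return (est_contrib + movement_contrib + se_contrib) / 3.0

def provider_score(item_scores):
    """Summarise a provider's item scores into one provider score;
    a simple mean is used here for illustration."""
    return sum(item_scores) / len(item_scores)

turnover = item_score(0.9, 0.6, 0.3)
wages = item_score(0.2, 0.2, 0.2)
overall = provider_score([turnover, wages])
```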

Both input and output significance scores can be calculated for selected items (e.g. turnover, wages) and for each provider. They have the advantage of using only a minimal amount of auxiliary information. For example, they use information on the current unit (including historical data if available) and a small store of common information (such as the current and previous estimates and design weights). The scores are based on simple statistical techniques and can be constructed from the output of generalised tools. The scores have a similar form regardless of the complexities of the estimation system and are consistent with the type of estimates being produced.


Significance editing has already been demonstrated in a number of ABS studies to be a cost-effective way to manage editing resources and output quality. However, studying the effect on each survey prior to implementation may prove costly. The significance editing framework outlined above allows existing methods to be mapped against it and a basic significance editing system to be developed. We believe this framework could be used for most surveys.

For more information, please contact Paul Sutcliffe on (02) 6252 6759, or Keith Farwell on (03) 6222 5889.



The Forms Consultancy Group (FCG) are stakeholders, together with sections in the Technology Services Division and the Economic Statistics Group, in the ABS' response to the Electronic Transactions Act (ETA). The ETA requires the ABS to accommodate businesses and individuals who insist on reporting to us electronically, or who need to. While other, more effective electronic forms are being developed, MS EXCEL spreadsheets were considered the most cost-effective and viable option in the short term, and will be used as the 'ETA-fallback' electronic data collection vehicle for businesses.

The FCG have conducted extensive research into electronic reporting and have an approval role for ABS electronic data collection instruments. We coordinated the testing of the ETA-fallback MS EXCEL forms to ensure the design would be reasonably acceptable to respondents. As the forms were intended to be as cheap and easy to produce as possible, we conducted a usability test in the Research and Design Centre, using ABS staff as our test subjects. Detailed scenarios allowed them to role-play reporting for a real business quite effectively.

The first round of testing primarily examined navigation, using four versions of the form each with a different method of moving through the worksheets. Each subject filled out one of the versions using their scenario and then completed an evaluation questionnaire. For comparison, the subject was then quickly shown another version and asked to explain any preference. A balanced number of version-pair combinations allowed us to examine which versions were preferred overall and why.

None of the methods emerged as more strongly preferred by users than the others. Some of the navigation results were:
  • labels on spreadsheet tabs, links or buttons that refer to page numbers in the form weren't very useful;
  • the varying positions of the above navigational devices were a confounding factor in assessing their functionality;
  • scrolling through a form that was one long page wasn't anywhere near as irritating to subjects as expected;
  • using macros to solve any of our design problems is out of the question - subjects turned them off when they opened the form almost every time; and
  • the form needed to provide much better keyboard navigation.

Some of the other results from the first round of testing included:
  • subjects had a very strong desire for automatic totals;
  • having a form in the corner of a large spreadsheet caused subjects to get lost in the surrounding white space; and
  • answer boxes that need to take varying lengths of text don't work well in MS EXCEL.

After combining what we hoped were the best navigation elements into a single form, and addressing most of the glaring or easily fixed design problems, we tested again. The new form included spreadsheet tabs labelled with sections of the form, automatic totals and instructions about keyboard navigation. Some of the previous subjects and some fresh ones went through the new form with another scenario to ensure the changes we had made were actually improvements. This was indeed the case. A few more minor improvements and the MS EXCEL 'ETA fallback' electronic form will go live.

For more information, please contact Emma Farrell on (02) 6252 7316.



An important issue for the analysis program is how it can remain targeted at ABS priorities. In particular, how can we ensure that:
  • the prototypes of analytical products that we are building will fill the most important gaps in the mix of statistical products?
  • the analytical methods that we are developing will deliver the most valuable improvements to statistical processes?

Our colleagues in some ABS workgroups are adopting a three-phase approach to scanning their environment and discovering needs for statistical development work. They are creating:
  • information models - which encapsulate the key entities and relationships in a field, and depict how they might be given statistical expression. Such models provide a systematic view of the potential demand for statistics.
  • information maps - which describe the available ABS and non-ABS datasets in a field. Such maps provide a systematic view of the supply of statistics.
  • information development plans - which spell out the activities that will be undertaken to address gaps and overlaps in statistics.

This intelligence flows into the annual strategic directions statements for social and economic statistics and into PSG and ESG work programs. And linking it to the analysis work program offers the best prospect of our discovering analytical opportunities that would deliver greatest value to the ABS.

Analysis Branch has carved up the statistical universe into "portfolios", each overseen by a member of our senior management team. Each portfolio manager is responsible for scanning the subject matter environment to discover the emerging needs for analytical work. This entails:
  • understanding our clients' information models/maps/plans, strategic directions and work programs;
  • developing a map of the demand for new analytical products or methods; and
  • assessing the possibilities for analysis projects.

We are also scanning the professional literature and talking with researchers in other statistical agencies and universities to understand analytical methods that might be useful to the ABS. An article in the next issue of this newsletter will summarise what we have discovered so far about emerging methods.

For more information, please contact Ken Tallis on (02) 6252 7290.



As part of the annual retail reanalysis (performed in August 2001) Time Series Analysis introduced an improved Easter correction method.

The date of Easter varies from one year to another. This variation may impact on the figures of a monthly survey. This impact is referred to as an Easter proximity effect.

The improved Easter correction method takes into account the proportion of activity in March and April which may be due to the date of Easter. A regression analysis is applied to test whether the Easter-related activity is significant. If the test is significant, an Easter proximity correction is applied in the seasonal adjustment process to remove the Easter proximity effect. The improved method gives a superior Easter correction when the dates for Easter fall on or around the March-April boundary (i.e. approximately 25 March to 7 April). The next Easter proximity effect will occur in 2002, when Good Friday falls on 29 March.
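As a rough sketch of the idea, the proportion of a fixed run of pre-Easter activity falling in March can be computed from the date of Good Friday; the eight-day window and the form of the regressor are assumptions for illustration, not the published ABS specification:

```python
from datetime import date, timedelta

def march_share(good_friday, window=8):
    """Proportion of an assumed `window`-day run of Easter-related
    activity, ending on Good Friday, that falls in March rather than
    April. Window length is an illustrative assumption."""
    days = [good_friday - timedelta(days=d) for d in range(window)]
    return sum(1 for d in days if d.month == 3) / window

# Good Friday on 29 March 2002 puts the whole window in March, while a
# late Easter (Good Friday 18 April 2003) puts it entirely in April.
early = march_share(date(2002, 3, 29))
late = march_share(date(2003, 4, 18))
```

An Easter falling near the month boundary splits the window between the two months, which is exactly the situation where the proximity correction matters most.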

The application of the improved Easter correction method will cause revisions to the seasonally adjusted and trend estimates when compared with previous estimates. In practice, the seasonally adjusted and trend estimates for March and April would be expected to be revised along the length of the time series. Of course there are the other "normal" or "expected" revisions associated with the seasonal adjustment process which may affect all months of the series to varying degrees.

More details on the Easter correction method can be obtained from 'Adjusting for an Easter Proximity Effect', ABS Working Papers in Econometrics and Applied Statistics, Working Paper No. 99/3, December 1999 (Catalogue number 1351.0), or the journal article 'An Easter Proximity Effect: Modelling and Adjustment' in the September 2001 issue (Volume 43, No. 3) of the Australian and New Zealand Journal of Statistics.

For more information, please contact Craig McLaren on (02) 6252 6540 or Mark Zhang (02) 6252 5132.



In commemoration of the contribution made by Mr Ken Foreman to the ABS over many years, the Australian Statistician agreed to institute an annual award for an officer of the Methodology Division who is performing at a high level, and is showing the potential for substantial further development as a methodologist in the ABS, and the ability to make a significant contribution within the ABS. The award comprises an overseas trip to an international conference or short training course.

The 2001 Ken Foreman Award has been awarded to Ruel Abello of the Analytical Services Branch, in recognition of his outstanding technical ability in modelling and analysis, and his contribution towards challenging analytical projects. In receiving the award, it is proposed that Ruel will attend the 27th General Conference of the International Association for Research in Income and Wealth (IARIW), to be held in Stockholm in August 2002. While in Europe, he also plans to visit a couple of statistical agencies to discuss work-related research issues.

In addition, Craig McLaren received a special commendation for his strong technical ability in survey methodology, and the contribution he has made towards improving the ABS's capability in time series analysis.

Previous recipients of the award have been:

1996 Robert Clark - Strong technical contribution in his work as demonstrated in the Labour Force Survey redesign.

1997 Steven Kennedy - Demonstration of good technical knowledge and skills, as well as the willingness to further develop his skills through formal study, and in the course of his work. Part of a team responsible for undertaking a significant part of the productivity analysis project. Has represented the ABS and its methodological interests and helped build some useful networks with others who were conducting research in the area.

1998 Dina Neiger - Contribution to developing the methodology for business collections, particularly through the review of the sample and frame maintenance procedures (SFMP).

1999 Richard McKenzie - Strong contribution to the development of the methods used in the Labour Cost Index and playing a lead role in the development and implementation of a novel 'rotating panel' method of selecting the sample for the Labour Cost Index. Has also worked on significance editing and significance follow up.

2000 Kristen Northwood - Her major contributions have been on the longitudinal analysis of the Growth and Performance Survey, the creation of experimental output measures for the Justice sector and her work to date on extending and refining estimates of Australian household wealth.

For further information on the Ken Foreman Award please contact Bill Allen on (02) 6252 6302.

