A more rigorous approach to case-study methodology in Annual Integrated Collections
The ABS' Annual Integrated Collections program (AIC) collects data used in the production of Australia's National Accounts. The primary form of data collection is the Economic Activity Survey (EAS), an annual probability survey that samples from most industries in Australia via self-completed mail-out/mail-back methods, augmented by secondary 'flexible' surveys.
However, these surveys have a long development time. Additionally, their broad scope and self-enumeration make them unsuitable for collecting fine-level commodities data, e.g. a breakdown between 'Computer systems, hardware and software design and development services' and 'Computer support services'.
To meet this need, AIC sometimes uses a 'case study' approach, selecting approximately 20 of the largest businesses within an industry and collecting data via personal interview. If a case study achieves near-total coverage of an industry (e.g. the top 20 petroleum and coal manufacturers account for 98.5% of industry turnover and 75% of employment), it is likely to produce good results.
It is less obvious whether case studies are suitable for industries that are not dominated by their top 20 businesses. Personal interviews and a small sample size allow for high-quality data collection, minimising respondent error. But the selection method risks introducing bias when the difference between large and small businesses goes beyond scale – e.g. large businesses may use capital-intensive production methods that give higher productivity per employee, in which case pro-rating their results to small businesses will give misleading results.
To provide guidance on this issue, the Business Survey Methodology section used a simulation approach based on recent EAS survey data. EAS estimates are based on a weighted sample covering both large and small units. These estimates were compared with estimates derived by treating the top 20 units as a mock case study and pro-rating their totals to the whole industry by turnover (which is available for all businesses).
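The pro-rating step can be sketched in a few lines. The following is a minimal simulation, not ABS code: the industry frame, variable names and distributional assumptions are all hypothetical, and the fully simulated population total stands in for the weighted EAS estimate used as the benchmark in practice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical industry frame: turnover is known for every business;
# the target variable (here, labour costs) is observed in the survey.
n = 500
turnover = rng.lognormal(mean=10, sigma=2, size=n)
labour_costs = 0.3 * turnover * rng.lognormal(mean=0.0, sigma=0.4, size=n)

# Benchmark: in practice the weighted EAS estimate; here simply the
# population total, since the data are fully simulated.
benchmark = labour_costs.sum()

# Mock case study: the top 20 businesses by turnover, pro-rated to the
# whole industry using the turnover they cover.
top20 = np.argsort(turnover)[-20:]
ratio = labour_costs[top20].sum() / turnover[top20].sum()
case_study_estimate = ratio * turnover.sum()

coverage = turnover[top20].sum() / turnover.sum()
rel_error = abs(case_study_estimate - benchmark) / benchmark
print(f"coverage: {coverage:.1%}, case-study error: {rel_error:.1%}")
```

The multiplicative noise term controls how loosely labour costs track turnover; widening it (or making the ratio depend on business size) reproduces the bias the article describes when large and small businesses differ in more than scale.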
These simulations showed that the accuracy of case studies depends not only on the level of coverage, but also on the industry and on the variable of interest.
For example, a simulated case study of the employment placement/recruitment/labour supply services industry gives coverage of 27% of total industry turnover. Comparison with AIC estimates indicates that case-study error is approximately 12% for expenditure on labour costs/wages and 5% for total income. (In this industry, total income is very closely correlated with turnover, so pro-rating case study data by turnover gives excellent results.)
By contrast, a case study of the computer systems design and related services industry achieves coverage of 36%, but the errors for these same items are 15% and 9% respectively: even though coverage is higher, the results are poorer.
Simulations in one of the mining industry groups showed that case studies on the same group of units produced errors of 11% for fuel tax reimbursement but 35% for expenditure on employment agencies, perhaps reflecting that miners have similar patterns of fuel usage but may have very different recruitment strategies.
The methods developed here can be used to assess the viability of future case studies, provided that EAS survey data contain a suitable proxy for the variables of interest.
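One simple viability signal of this kind is how strongly the pro-rating proxy (turnover) correlates with the target variable in existing survey data. The sketch below uses entirely hypothetical data; `total_income` is constructed to track turnover tightly, while `agency_spend` is made independent of it, mirroring the contrast reported above for the mining group.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical prior-cycle survey data for one industry (300 units).
turnover = rng.lognormal(mean=10, sigma=1.5, size=300)
total_income = turnover * rng.lognormal(0.0, 0.1, size=300)  # tracks turnover
agency_spend = rng.lognormal(mean=6, sigma=1.5, size=300)    # unrelated

def proxy_correlation(proxy, variable):
    """Correlation between the pro-rating proxy and a target variable.
    A weak correlation suggests pro-rating a case study by this proxy
    is likely to give large errors for that variable."""
    return np.corrcoef(proxy, variable)[0, 1]

strong = proxy_correlation(turnover, total_income)
weak = proxy_correlation(turnover, agency_spend)
print(f"total_income vs turnover: {strong:.2f}")
print(f"agency_spend vs turnover: {weak:.2f}")
```

A fuller assessment would repeat the mock case study itself on the historical data, as in the simulations above, but a correlation screen of this sort is a cheap first filter for candidate variables.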