Australian Bureau of Statistics

6227.0.30.001 - Microdata: Education and Work, Australia, May 2011
Released at 11:30 AM (CANBERRA TIME) 15/05/2012

SURVEY METHODOLOGY


SCOPE AND COVERAGE
SURVEY DESIGN
DATA COLLECTION METHODOLOGY
WEIGHTING, ESTIMATION AND BENCHMARKING
RELIABILITY OF ESTIMATES


SCOPE AND COVERAGE

Scope

The statistics in the CURF and Survey TableBuilder files were compiled from data collected in the SEW, conducted throughout Australia in May 2011 as part of the Monthly Population Survey (MPS). The MPS consists of the Labour Force Survey (LFS) and supplementary surveys.

The publication Labour Force, Australia (cat. no. 6202.0) contains information about survey design, sample redesign, scope, coverage and population benchmarks relevant to the monthly LFS, which also applies to the supplementary surveys. It also contains definitions of demographic and labour force characteristics, and information about interviewing which are relevant to both the monthly LFS and supplementary surveys.

The scope of this survey was persons aged 15-74 years, excluding the following persons:

  • members of the Australian permanent defence forces
  • certain diplomatic personnel of overseas governments, customarily excluded from the census and estimated resident population figures
  • overseas residents in Australia
  • members of non-Australian defence forces (and their dependants) stationed in Australia
  • persons permanently unable to work
  • persons aged 65-74 years who are permanently not intending to work, or who are neither in the labour force nor marginally attached to the labour force
  • special dwelling type institutionalised persons (e.g. patients in hospitals, residents of retirement homes, residents of homes for persons with disabilities, inmates of prisons) and
  • special dwelling type boarding school pupils.
Boarding school pupils have been excluded from the scope of SEW since 2005, but were included in earlier collections. The LFS in May 2011 yielded an estimated 4,400 boarding school pupils aged 15 years and over, who were excluded from the SEW.

In 2009, persons aged 65-74 who were in the labour force or were marginally attached to the labour force, were interviewed for the first time for the SEW. In May 2011 there were an estimated 323,600 persons aged 65-74 years in the labour force or marginally attached to the labour force, out of a total of 1,661,900 persons aged 65-74 years. Persons are determined to be marginally attached to the labour force if they were not in the labour force in the reference week, wanted to work and:
  • were actively looking for work but did not meet the availability criteria to be classified as unemployed, or
  • were not actively looking for work but were available to start work within four weeks or could start work within four weeks if child care was available.
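The classification rule above can be sketched as a small Boolean function. This is a simplification for illustration only; the parameter names are hypothetical and do not correspond to ABS variable names:

```python
def marginally_attached(in_labour_force: bool, wants_to_work: bool,
                        actively_looking: bool,
                        meets_unemployed_availability: bool,
                        available_within_four_weeks: bool,
                        could_start_if_childcare_available: bool) -> bool:
    """Simplified sketch of the marginal-attachment rule described above."""
    if in_labour_force or not wants_to_work:
        return False
    if actively_looking:
        # Looking for work, but not meeting the availability criteria
        # required to be classified as unemployed.
        return not meets_unemployed_availability
    # Not looking, but available to start within four weeks (or could
    # start within four weeks if child care were available).
    return available_within_four_weeks or could_start_if_childcare_available
```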
The survey was conducted in both urban and rural areas in all states and territories, but excluded persons living in Indigenous communities in very remote parts of Australia. In 2009, persons living in very remote areas that are not part of the Indigenous Community Frame (ICF) were interviewed for the first time for the SEW. Nationally, approximately 0.5% of persons in scope of the SEW in 2011 lived in very remote areas that are not part of the ICF; in the Northern Territory, the proportion was 6%.

Coverage

In the LFS, coverage rules are applied which aim to ensure that each person is associated with only one dwelling and has only one chance of selection in the survey. See Labour Force, Australia (cat. no. 6202.0) for more details.
SURVEY DESIGN

The survey was conducted as a supplement to the LFS. After sample loss, the sample included 39,838 respondents in 19,802 households.

Supplementary surveys are not conducted using the full LFS sample. The sample for the SEW was seven-eighths of the LFS sample.
DATA COLLECTION METHODOLOGY

Information was collected via face-to-face or telephone interviews. Trained interviewers asked members of each household, or a responsible adult answering on behalf of other household members, detailed questions about their educational attainment and recent participation in education.

All interviews were conducted using Computer Assisted Interviewing (CAI).
WEIGHTING, ESTIMATION AND BENCHMARKING

Weighting

Weighting is the process of adjusting results from the sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each enumerated person. The weight is a value which indicates how many population units are represented by the sample unit.

The first step in calculating weights for each person is to assign an initial weight which is equal to the inverse probability of being selected in the survey. For example, if the probability of a person being selected in the survey was one in 300, then the person would have an initial weight of 300 (that is, they represent 300 persons in the population).
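As a minimal sketch of this step (the function name is illustrative, not ABS code):

```python
# Illustrative sketch: the initial weight is the inverse of the
# probability of being selected in the survey.
def initial_weight(selection_probability: float) -> float:
    """Return the initial weight for a person sampled with this probability."""
    return 1.0 / selection_probability

# A person with a one-in-300 chance of selection represents 300 people.
weight = initial_weight(1 / 300)
```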

Estimation

Survey estimates of counts of persons are obtained by summing the weights of persons with the characteristic of interest. Estimates of non-person counts (e.g. days away from work) are obtained by multiplying the characteristic of interest by the weight of the reporting person and aggregating.
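Both estimation rules can be illustrated with a toy set of records; the weights and values below are invented for the example:

```python
# Hypothetical records: (weight, has_characteristic, days_away_from_work)
records = [
    (250.0, True, 3),
    (300.0, False, 0),
    (275.0, True, 1),
]

# Person count: sum the weights of persons with the characteristic.
person_estimate = sum(w for w, has_char, _ in records if has_char)

# Non-person count (e.g. days away from work): multiply each person's
# value by their weight, then aggregate.
days_estimate = sum(w * days for w, _, days in records)

print(person_estimate)  # 525.0
print(days_estimate)    # 1025.0
```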

Benchmarking

The weights were calibrated to align with independent estimates of the population, referred to as 'benchmarks', in designated categories of sex by age by state by area of usual residence and age by labour force status. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distributions of the population, rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over or under-enumeration of particular categories of persons which may occur due to either the random nature of sampling or non-response.

The survey was benchmarked to the estimated resident population aged 15-74 years living in private dwellings and non-institutionalised special dwellings in each state and territory. People living in Indigenous communities in very remote parts of Australia were excluded. The process of weighting ensures that the survey estimates conform to person benchmarks by state, part of state, age and sex. These benchmarks are produced from estimates of the resident population derived independently of the survey.
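A very simplified sketch of this idea is post-stratification, where the initial weights in each benchmark cell are scaled so that the weighted total matches the independent population benchmark. The actual ABS calibration is more sophisticated; the cell labels and figures below are invented:

```python
from collections import defaultdict

def poststratify(weights, cells, benchmarks):
    """Scale weights so each cell's weighted total equals its benchmark."""
    totals = defaultdict(float)
    for w, c in zip(weights, cells):
        totals[c] += w
    return [w * benchmarks[c] / totals[c] for w, c in zip(weights, cells)]

# Hypothetical sample: four persons in two sex-by-age benchmark cells.
weights = [200.0, 300.0, 250.0, 250.0]
cells = ["F15-24", "F15-24", "M15-24", "M15-24"]
benchmarks = {"F15-24": 1000.0, "M15-24": 400.0}

adjusted = poststratify(weights, cells, benchmarks)
# Each cell's adjusted weights now sum to its population benchmark.
```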

Due to differences in scope and sample size between this supplementary survey and that of the LFS, the estimation procedure may lead to some small variations between labour force estimates from this survey and those from the LFS.

For further information, see the Explanatory Notes in the publication Education and Work, Australia 2011 (cat. no. 6227.0).
RELIABILITY OF ESTIMATES

All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error.

Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured as it is calculated based on the scientific methods used to design surveys. Non-sampling error can occur at any stage throughout the survey process. For example, persons selected for the survey may not respond (non-response); survey questions may not be clearly understood by the respondent; responses may be incorrectly recorded by interviewers; or there may be errors when coding or processing the survey data.

Sampling error

One measure of the likely difference between an estimate derived from a sample of persons and the value that would have been produced if all persons in scope of the survey had been included is the Standard Error (SE). The SE indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that the sample estimate will differ by less than one SE from the number that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.
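These interval rules can be sketched as follows (illustrative only; the figures in the example are invented):

```python
def confidence_interval(estimate: float, se: float, n_se: float = 2.0):
    """Interval of +/- n_se standard errors around the estimate.

    n_se=1 gives roughly two-in-three (67%) coverage;
    n_se=2 gives roughly 19-in-20 (95%) coverage.
    """
    return (estimate - n_se * se, estimate + n_se * se)

# A hypothetical estimate of 1,000 with an SE of 50:
low, high = confidence_interval(1000.0, 50.0)  # (900.0, 1100.0)
```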

Another measure of the likely difference is the Relative Standard Error (RSE), which is obtained by expressing the SE as a percentage of the estimate.

Generally, only estimates (numbers, percentages, means and medians) with RSEs less than 25% are considered sufficiently reliable for most purposes. In ABS publications, estimates with an RSE of 25% to 50% are preceded by an asterisk (e.g. *15.7) to indicate that the estimate should be used with caution. Estimates with RSEs over 50% are indicated by a double asterisk (e.g. **2.8) and should be considered unreliable for most purposes.
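A sketch of the RSE calculation and the flagging convention just described (the function names are illustrative, not ABS code):

```python
def rse(estimate: float, se: float) -> float:
    """Relative standard error: the SE expressed as a percentage of the estimate."""
    return 100.0 * se / estimate

def reliability_flag(estimate: float, se: float) -> str:
    """Apply the ABS asterisk convention described above (illustrative)."""
    r = rse(estimate, se)
    if r > 50:
        return f"**{estimate}"   # unreliable for most purposes
    if r >= 25:
        return f"*{estimate}"    # should be used with caution
    return str(estimate)         # sufficiently reliable

print(reliability_flag(15.7, 4.7))  # *15.7  (RSE is about 30%)
```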

In addition to the main weight (as outlined earlier), each record on the CURF also contains 30 'replicate weights'. The purpose of these replicate weights is to enable the calculation of the standard error on each estimate produced. This method is known as the 30 group Jackknife variance estimator.

The basic concept behind this replication approach is to select different sub-samples repeatedly (30 times) from the whole sample. For each of these sub-samples the statistic of interest is calculated. The variance of the full sample statistics is then estimated using the variability among the replicate statistics calculated from these sub-samples. As well as enabling variances of estimates to be calculated relatively simply, replicate weights also enable unit record analyses such as chi-square and logistic regression to be conducted which take into account the sample design.
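Assuming the standard delete-a-group jackknife formula, the variance of an estimate can be computed from the replicate estimates as follows (a sketch, not ABS production code; the replicate figures are invented):

```python
def jackknife_variance(full_estimate: float, replicate_estimates) -> float:
    """Delete-a-group jackknife: estimate the variance of the full-sample
    statistic from the statistics computed under each replicate weight."""
    g = len(replicate_estimates)  # 30 replicate groups for this survey
    return (g - 1) / g * sum((r - full_estimate) ** 2
                             for r in replicate_estimates)

# Toy example with 4 replicate estimates (the real CURF carries 30).
variance = jackknife_variance(100.0, [98.0, 103.0, 99.0, 101.0])
standard_error = variance ** 0.5
```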

Further information about RSEs and how they are calculated can be found in the 'Technical Note' section of the publication relevant to this microdata: Education and Work, Australia, May 2011 (cat. no. 6227.0). RSEs for the estimates in the tables of that publication are available in spreadsheet format, as attachments to that publication.

Non-sampling error

Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. One of the main sources of non-sampling error is non-response by persons selected in the survey. Non-response occurs when persons cannot or will not cooperate, or cannot be contacted. Non-response can affect the reliability of results and can introduce bias. The magnitude of any bias depends upon the rate of non-response and the extent of the difference between the characteristics of those persons who responded to the survey and those who did not.

Every effort was made to reduce non-response and other non-sampling errors by careful design and testing of the questionnaire, training and supervision of interviewers, and undertaking extensive editing and quality control procedures at all stages of data processing.


© Commonwealth of Australia 2014

Unless otherwise noted, content on this website is licensed under a Creative Commons Attribution 2.5 Australia Licence together with any terms, conditions and exclusions as set out in the website Copyright notice. For permission to do anything beyond the scope of this licence and copyright terms contact us.