Australian Bureau of Statistics
6202.0.30.004 - Microdata: Labour Force Survey and Labour Mobility, Australia, Feb 2012 Quality Declaration
Latest issue released at 11:30 am (Canberra time) 11/12/2012
The sample for the February 2012 LFS consisted of 56,489 respondents in 36,812 households. The Labour Mobility Survey was conducted as a supplement to the LFS. After sample loss, the sample included 32,119 respondents in 28,117 households.
Supplementary surveys are not conducted using the full LFS sample; the sample for the Labour Mobility Survey was seven-eighths of the LFS sample.
DATA COLLECTION METHODOLOGY
Data were collected by trained interviewers, who conducted computer-assisted personal and telephone interviews at selected private and non-private dwellings throughout Australia. These interviews were primarily conducted during the two weeks ending Saturday 18 February 2012, with any necessary follow-up activity undertaken during the following two weeks.
The publication Labour Force, Australia (cat. no. 6202.0) contains information about survey design, sample redesign, scope, coverage and population benchmarks relevant to the monthly LFS, which also applies to the supplementary surveys. It also contains definitions of demographic and labour force characteristics, and information about interviewing, which are relevant to both the monthly LFS and supplementary surveys.
WEIGHTING, BENCHMARKING AND ESTIMATION
Weighting is the process of adjusting results from a sample survey to infer results for the total population. To do this, a 'weight' is allocated to each sample unit. The weight is a value which indicates how many population units are represented by the sample unit.
The first step in calculating weights for each person is to assign an initial weight, which is equal to the inverse of the probability of being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 300, then the person would have an initial weight of 300 (that is, they represent 300 people).
Separate weights were calculated for LFS and Labour Mobility Survey samples (as some units were in scope for LFS but not for the Labour Mobility Survey). The LFS weighting method ensures that LFS estimates conform to the benchmark distribution of the population by age, sex and geographic area, and also LFS region by sex (two sets of benchmarks). Weights are allocated to each sample respondent according to their state/territory of selection, state/territory of usual residence, part of state of usual residence, age group and sex.
The weights were calibrated to align with independent estimates of the population, referred to as benchmarks, in designated categories of sex by age by area of usual residence. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distribution of the population rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over or under-enumeration of particular categories of persons which may occur due to either the random nature of sampling or non-response.
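The calibration step can be illustrated with a minimal sketch. This is not the ABS's actual calibration method (which handles several benchmark sets simultaneously); it shows only the simplest form, post-stratification, where every weight in a benchmark cell is scaled so the weighted sample total matches the independent population benchmark. All figures and cell definitions below are hypothetical.

```python
# Minimal sketch of one calibration (post-stratification) step, using
# hypothetical figures. Within each benchmark cell (here sex x age group),
# every weight is scaled so the weighted sample total matches the
# independent population benchmark. Real ABS calibration is more general.

# (cell) -> initial weights of sampled persons in that cell
sample_weights = {
    ("female", "25-34"): [300.0, 310.0, 295.0],
    ("male", "25-34"): [305.0, 290.0],
}

# (cell) -> independently estimated population count (the benchmark)
benchmarks = {
    ("female", "25-34"): 1000.0,
    ("male", "25-34"): 550.0,
}

calibrated = {}
for cell, weights in sample_weights.items():
    factor = benchmarks[cell] / sum(weights)  # scale to the benchmark
    calibrated[cell] = [w * factor for w in weights]

# After calibration, the weighted total in each cell equals its benchmark.
for cell, weights in calibrated.items():
    print(cell, round(sum(weights), 6))
```

Because each cell's weights are scaled by a single factor, the calibrated estimates reproduce the benchmark population distribution exactly, regardless of over- or under-enumeration in the sample.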
The Labour Mobility Survey is benchmarked to LFS estimates for the following variables: state of usual residence, part of state of usual residence, sex, age group, full-time or part-time status of employment and labour force status.
Benchmarking to LFS estimates accounts for the one-eighth of the sample in which the Labour Mobility Survey is not conducted, and for non-respondents to the Labour Mobility Survey. The Labour Mobility Survey weighting excludes all residents of institutions, boarding schools and very remote areas, because the sample scope excludes these people.
Due to differences in scope and sample size between this supplementary survey and that of the LFS, the estimation procedure may lead to some small variations between labour force estimates from this survey and those from the LFS.
Survey estimates of counts of persons are obtained by summing the weights of persons with the characteristic of interest. Estimates of non-person counts (e.g. days away from work) are obtained by multiplying the characteristic of interest by the weight of the reporting person and aggregating.
RELIABILITY OF ESTIMATES
All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error.
Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured as it is calculated based on the scientific methods used to design surveys. Non-sampling error can occur at any stage throughout the survey process. For example, persons selected for the survey may not respond (non-response); survey questions may not be clearly understood by the respondent; responses may be incorrectly recorded by interviewers; or there may be errors when coding or processing the survey data.
One measure of the likely difference between an estimate derived from a sample of persons and the value that would have been produced if all persons in scope of the survey had been included is the Standard Error (SE). The SE indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that the sample estimate will differ by less than one SE from the number that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.
Another measure of the likely difference is the Relative Standard Error (RSE), which is obtained by expressing the SE as a percentage of the estimate.
Generally, only estimates (numbers, percentages, means and medians) with RSEs less than 25% are considered sufficiently reliable for most purposes. In ABS publications, estimates with an RSE of 25% to 50% are preceded by an asterisk (e.g. *15.7) to indicate that the estimate should be used with caution. Estimates with RSEs over 50% are indicated by a double asterisk (e.g. **2.8) and should be considered unreliable for most purposes.
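The RSE calculation and the annotation convention above can be sketched as a pair of small functions; the estimate/SE pairs used here are hypothetical:

```python
# Sketch of the RSE calculation and the ABS annotation convention
# described above, with hypothetical estimate/SE pairs.
def rse(estimate, se):
    """Relative standard error: the SE expressed as a percentage of
    the estimate."""
    return 100.0 * se / estimate

def annotate(estimate, se):
    """Flag an estimate per the ABS convention: * for RSE 25% to 50%
    (use with caution), ** for RSE over 50% (unreliable)."""
    r = rse(estimate, se)
    if r > 50:
        return f"**{estimate}"
    if r >= 25:
        return f"*{estimate}"
    return str(estimate)

print(annotate(100.0, 10.0))  # RSE 10%  -> "100.0"
print(annotate(15.7, 5.5))    # RSE ~35% -> "*15.7"
print(annotate(2.8, 1.6))     # RSE ~57% -> "**2.8"
```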
In addition to the main weight (as outlined earlier), each record on the CURF also contains two sets of 30 'replicate weights' (one set applicable to the Labour Mobility Survey and the other to the LFS). The purpose of these replicate weights is to enable the calculation of the standard error of each estimate produced. This method is known as the 30-group Jackknife variance estimator.
The basic concept behind this replication approach is to repeatedly select sub-samples (30 of them) from the whole sample. For each sub-sample the statistic of interest is calculated. The variance of the full-sample statistic is then estimated using the variability among the replicate statistics calculated from these sub-samples. As well as enabling variances of estimates to be calculated relatively simply, replicate weights also enable unit record analyses, such as chi-square and logistic regression, to be conducted in a way that takes the sample design into account.
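The replication approach can be sketched as follows, assuming each record already carries a main weight and 30 replicate weights. The variance formula shown is the standard delete-one-group jackknife with G = 30; the records and replicate weights below are randomly generated for illustration only and do not reflect how the ABS constructs its replicate weights.

```python
# Minimal sketch of a 30-group jackknife SE, assuming each record carries
# a main weight and 30 replicate weights. The statistic of interest here
# is a weighted count of persons with some characteristic; the variance
# formula is the standard delete-one-group jackknife:
#   var = (G - 1) / G * sum_g (theta_g - theta)^2,  with G = 30.
import math
import random

G = 30
random.seed(1)

# Hypothetical records: (main_weight, [30 replicate weights], has_characteristic)
records = []
for _ in range(200):
    w = random.uniform(200, 400)
    reps = [w * random.uniform(0.8, 1.2) for _ in range(G)]
    records.append((w, reps, random.random() < 0.6))

def weighted_count(weight_of):
    """Weighted count of persons with the characteristic, under a given
    choice of weight (the main weight or one replicate weight)."""
    return sum(weight_of(rec) for rec in records if rec[2])

theta = weighted_count(lambda rec: rec[0])  # full-sample estimate
replicates = [weighted_count(lambda rec, g=g: rec[1][g]) for g in range(G)]

variance = (G - 1) / G * sum((t - theta) ** 2 for t in replicates)
se = math.sqrt(variance)
print(round(theta, 1), round(se, 1), round(100 * se / theta, 2))  # estimate, SE, RSE%
```

The same loop over the 30 replicate estimates works for any statistic (a percentage, a regression coefficient), which is why replicate weights support design-based variance estimation for general unit record analyses.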
Further information about RSEs and how they are calculated can be found in the section on Standard Errors under File Structure in this product, and in the Technical Note of Labour Mobility, Australia, February 2012 (cat. no. 6209.0).
Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. One of the main sources of non-sampling error is non-response by persons selected in the survey. Non-response occurs when persons cannot or will not cooperate, or cannot be contacted. Non-response can affect the reliability of results and can introduce a bias. The magnitude of any bias depends upon the rate of non-response and the extent of the difference between the characteristics of those persons who responded to the survey and those that did not.
Every effort was made to reduce non-response and other non-sampling errors by careful design and testing of the questionnaire, training and supervision of interviewers, and undertaking extensive editing and quality control procedures at all stages of data processing.
Estimates are based on information collected in the survey month, and, due to seasonal factors, they may not be representative of other months of the year.
Further information on the survey methodology can be found in:
This page last updated 17 December 2012