In 2010, CORMS was conducted in both urban and rural areas in all states and territories, but excluded people living in Indigenous communities in very remote parts of Australia. Previous cycles of this survey excluded all persons living in very remote areas.
The records in this CURF relate to persons covered by the survey enumerated in November 2010. In the LFS, coverage rules are applied which aim to ensure that each person is associated with only one dwelling, and hence has only one chance of selection in the survey. See Labour Force, Australia (cat. no. 6202.0) for more details.
DATA COLLECTION METHODOLOGY
Information was collected through interviews conducted over a two-week period during November 2010. Interviews were mainly conducted over the phone, with some conducted face-to-face. Information was obtained from one responsible adult listed on each visa application in the household. For example, consider a household with three usual residents where two were listed together on one visa application and the other person was listed on a separate visa application. In this case, two people in the household would have provided information, one for each visa application.
All interviews were conducted using computer assisted interviewing (CAI).
Supplementary surveys are not conducted using the full LFS sample. The sample for the CORMS was seven-eighths of the LFS sample.
The CORMS CURF contains 47,099 fully responding confidentialised records. Of these, 2,650 records were obtained from recent migrants and temporary residents.
It should be noted that steps are taken to confidentialise the unit record data made available on the CURF. This may include deleting some records. Consequently, the number of dwellings in the sample outlined above may not correspond to the number of records included on the CURF. For further details, see 'Chapter 3 Using the CURF Data'.
WEIGHTING, ESTIMATION AND BENCHMARKING
As the survey was conducted on a sample of households in Australia, it is important to consider the method of sample selection when deriving estimates from the CURF. This is particularly important as a person's chance of selection in the survey varies depending on the state or territory of enumeration.
Weighting is the process of adjusting results from the sample survey to infer results for the total in-scope population. To do this, a weight is allocated to each sample unit, i.e. each person. The weight effectively indicates how many population units are represented by the sample unit.
The first step in calculating weights for each sample unit is to assign an initial weight which is equal to the inverse of the probability of being selected in the survey. For example, if the probability of a person being selected in the survey was one in 600, then the selected person would have an initial weight of 600 (that is, they represent 600 persons in the population).
Survey estimates of counts of persons are obtained by summing the weights of persons with the characteristic of interest.
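The estimation rule above can be sketched in a few lines. The records and weights below are fabricated for illustration only; they are not CURF data items.

```python
# Sketch: estimating a population count by summing person weights.
# Records and weights are hypothetical, for illustration only.

records = [
    {"state": "NSW", "migrant": True,  "weight": 610.0},
    {"state": "NSW", "migrant": False, "weight": 595.0},
    {"state": "VIC", "migrant": True,  "weight": 580.0},
    {"state": "VIC", "migrant": True,  "weight": 620.0},
]

# Estimated number of migrants = sum of weights, NOT a count of records.
migrant_estimate = sum(r["weight"] for r in records if r["migrant"])
migrant_count = sum(1 for r in records if r["migrant"])

print(migrant_estimate)  # 1810.0 persons represented in the population
print(migrant_count)     # 3 sample records -- not a valid population estimate
```

The contrast between the last two lines is the point made in the text: counting records rather than summing weights ignores each person's chance of selection and can produce biased estimates.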
The initial weights are calibrated to align with an independent estimate of the population of interest, referred to as 'benchmarks'. Weights are calibrated against population benchmarks to ensure that the survey estimates conform to an independently estimated distribution of the population, rather than to the distribution within the sample itself. Where estimates are derived from the CURF, it is essential that they are calculated by adding the weights of persons in each category and not just by counting the number in each category. If each person's 'weight' were to be ignored, then no account would be taken of a person's chance of selection or of different response rates across population groups, and the resulting estimates could be biased. Replicate weights have been included on the CURF which can be used to calculate sampling error. For more information, refer to the 'Standard Errors' section in Chapter 3.
The populations included in the benchmark totals correspond to the scope of the survey. For this survey, two sets of benchmarks were used, both derived from the November 2010 LFS. The first set specified the population distribution in designated categories of state or territory of usual residence by area of usual residence by sex by age group. The second set specified the population distribution in designated categories of state or territory of usual residence by migrant status.
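A minimal sketch of how calibration to benchmarks works, assuming simple post-stratification within benchmark cells (the cells, persons, and population totals below are hypothetical; actual ABS calibration is more sophisticated):

```python
# Post-stratification sketch: scale initial weights within each benchmark
# cell so the weights sum to the independent population benchmark.
# All values below are made up for illustration.

initial_weights = {"p1": 600.0, "p2": 600.0, "p3": 550.0}
cell_of = {"p1": "NSW_male_25-34", "p2": "NSW_male_25-34", "p3": "VIC_female_25-34"}
benchmarks = {"NSW_male_25-34": 1500.0, "VIC_female_25-34": 500.0}

# Sum of initial weights in each benchmark cell.
cell_totals = {}
for person, w in initial_weights.items():
    cell = cell_of[person]
    cell_totals[cell] = cell_totals.get(cell, 0.0) + w

# Calibrated weight = initial weight * (benchmark / cell weight total).
calibrated = {
    person: w * benchmarks[cell_of[person]] / cell_totals[cell_of[person]]
    for person, w in initial_weights.items()
}

print(calibrated)  # p1, p2 scaled from 600 to 750; p3 scaled from 550 to 500
```

After calibration, the weights in each cell sum exactly to the benchmark, so survey estimates conform to the independent population distribution rather than to the sample's own distribution.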
RELIABILITY OF ESTIMATES
All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error.
Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured as it is calculated based on the scientific methods used to design surveys. Non-sampling error can occur at any stage throughout the survey process. For example, persons selected for the survey may not respond (non-response); survey questions may not be clearly understood by the respondent; responses may be incorrectly recorded by interviewers; or there may be errors when coding or processing the survey data.
One measure of the likely difference between a sample estimate and the value that would have been produced if all persons in scope of the survey had been included is the Standard Error (SE). The SE indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that the sample estimate will differ by less than one SE from the number that would have been obtained if all persons had been enumerated, and about 19 chances in 20 (95%) that the difference will be less than two SEs.
Another measure of the likely difference is the Relative Standard Error (RSE), which is obtained by expressing the SE as a percentage of the estimate. Generally, only estimates (numbers, percentages, means and medians) with RSEs less than 25% are considered sufficiently reliable for most purposes. In ABS publications, estimates with an RSE of 25% to 50% are preceded by an asterisk (e.g. *15.7) to indicate that the estimate should be used with caution. Estimates with RSEs over 50% are indicated by a double asterisk (e.g. **2.8) and should be considered unreliable for most purposes. The formula for calculating the RSE of an estimate (y) is: RSE%(y) = (SE(y) / y) x 100
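The RSE calculation and the 25% / 50% reliability thresholds described above can be expressed directly (the numeric values in the usage lines are hypothetical):

```python
# RSE of an estimate y: RSE%(y) = (SE(y) / y) * 100, with ABS-style
# reliability annotation. Example values are hypothetical.

def rse(estimate: float, se: float) -> float:
    """Relative standard error as a percentage of the estimate."""
    return se / estimate * 100.0

def flag(estimate: float, se: float) -> str:
    """Annotate an estimate using the 25% and 50% RSE thresholds."""
    r = rse(estimate, se)
    if r > 50.0:
        return f"**{estimate}"   # unreliable for most purposes
    if r >= 25.0:
        return f"*{estimate}"    # should be used with caution
    return f"{estimate}"

print(rse(1000.0, 150.0))  # 15.0 -- sufficiently reliable for most purposes
print(flag(15.7, 4.9))     # *15.7 (RSE of about 31%)
```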
In addition to the main weight (as outlined earlier), each record on the CURF also contains 30 'replicate weights'. The purpose of these replicate weights is to enable the calculation of the sampling error on each estimate produced.
The basic concept behind the replication approach is to select different sub-samples repeatedly (30 times) from the whole sample. For each of these sub-samples the statistic of interest is calculated. The variance of the full sample statistic is then estimated using the variability among the replicate statistics calculated from these sub-samples. As well as enabling variances of estimates to be calculated relatively simply, replicate weights also enable unit record analyses such as chi-square and logistic regression to be conducted in a way that takes account of the sample design. Further information about RSEs and how they are calculated can be found in the 'Technical Note' section of the publication relevant to this CURF: Characteristics of Recent Migrants, Australia, Nov 2010 (cat. no. 6250.0). RSEs for estimates in the published tables are available in spreadsheet format, as attachments to that publication.
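As a sketch of the replication approach, the code below assumes the delete-a-group jackknife variance form commonly used with ABS replicate weights, Var(y) = ((G - 1) / G) * sum over g of (y_g - y)^2 with G = 30 replicate groups. The formula, data, and toy group count of 3 are assumptions for illustration; consult the Technical Note in cat. no. 6250.0 for the authoritative method.

```python
import math

# Replicate-weight variance sketch (assumed delete-a-group jackknife form):
#   Var(y) = ((G - 1) / G) * sum_g (y_g - y)^2
# On the CURF, G = 30; a toy G = 3 is used below for brevity.

def estimate(weights, values):
    """Weighted total: sum of weight * value over all persons."""
    return sum(w * v for w, v in zip(weights, values))

def jackknife_se(main_weights, replicate_weight_sets, values):
    """Standard error of a weighted total from replicate weights."""
    g = len(replicate_weight_sets)          # number of replicate groups
    full = estimate(main_weights, values)   # full-sample statistic
    var = (g - 1) / g * sum(
        (estimate(rw, values) - full) ** 2 for rw in replicate_weight_sets
    )
    return math.sqrt(var)

# Hypothetical data: 3 persons, values indicating a characteristic of interest.
main_w = [600.0, 600.0, 550.0]
vals = [1, 0, 1]
replicates = [
    [660.0, 540.0, 550.0],
    [540.0, 660.0, 550.0],
    [600.0, 600.0, 500.0],
]
se = jackknife_se(main_w, replicates, vals)
print(round(se, 1))
```

Each replicate weight set re-weights the sample as if one group of persons had been dropped, so the spread of the replicate estimates around the full-sample estimate reflects the sampling variability induced by the survey design.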
One of the main sources of non-sampling error is non-response by persons selected in the survey. Non-response occurs when persons cannot or will not co-operate, or cannot be contacted. Non-response can affect the reliability of results and can introduce a bias. The magnitude of any bias depends upon the rate of non-response and the extent of the difference between the characteristics of those persons who responded to the survey and those that did not.
Every effort was made to reduce non-response and other non-sampling errors in CORMS to a minimum by careful design and testing of the questionnaire, training and supervision of interviewers, and undertaking extensive editing and quality control procedures at all stages of data processing.
One advantage of the CAI technology used to conduct interviews is that it potentially reduces non-sampling error by enabling edits to be applied as the data are being collected. The interviewer is alerted immediately if information entered into the computer is either outside the permitted range for a particular question, or contradictory to information previously recorded during the interview. These edits allow the interviewer to query respondents and resolve issues during the interview. CAI sequencing of questions is also automated so that respondents are only asked relevant questions and in the appropriate order, thereby eliminating interviewer sequencing errors.