People living in CDs which contained Discrete Indigenous Communities were not enumerated for operational reasons.
Households where all of the residents were less than 18 years of age were excluded from the survey because the initial screening questions needed to be answered by a responsible adult (who was aged 18 years or over).
If a child aged 15-17 years was selected, they could be interviewed with the consent of a parent or responsible adult.
Multi-stage sampling techniques were used to select the sample for the survey. After sample loss, the sample included 11,532 households. After exclusions due to scope and coverage, the final sample comprised 8,600 respondents. Of these, 8,446 were fully responding or provided sufficient detail for scores to be determined. The remaining 154 respondents did not complete the survey because of literacy or language difficulties, and only their age and sex are included. In addition, three respondents did not complete the survey for other reasons and may be missing from some data items.
DATA COLLECTION METHODOLOGY
Information for this survey was collected face-to-face. Trained interviewers asked members of each household questions using Computer Assisted Interviewing (CAI).
WEIGHTING, BENCHMARKING AND ESTIMATION
Weighting is the process of adjusting results from the sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each enumerated person. The weight is a value which indicates how many persons in the population are represented by the sample person.
The first step in calculating weights for each person is to assign an initial weight which is equal to the inverse probability of being selected in the survey. For example, if the probability of a person being selected in the survey was one in 300, then the person would have an initial weight of 300 (that is, they represent 300 people).
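The inverse-probability rule above can be sketched as follows; the function name is ours, for illustration only.

```python
def initial_weight(selection_probability: float) -> float:
    """Design weight: the inverse of the probability of selection."""
    if not 0 < selection_probability <= 1:
        raise ValueError("selection probability must be in (0, 1]")
    return 1.0 / selection_probability

# A person selected with probability 1 in 300 represents 300 people,
# matching the example in the text.
print(initial_weight(1 / 300))
```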
Non-response adjustments were made to the initial person-level weights with the aim of representing those people in the population that did not respond to PIAAC. Two adjustment factors were applied.
After the non-response adjustment, the weights were adjusted to align with independent estimates of the population, referred to as 'benchmarks', in designated categories of sex by age by state by area of usual residence. This process is known as calibration. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distributions of the population described by the benchmarks, rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over- or under-enumeration of particular categories of persons, which may occur due to either the random nature of sampling or non-response.
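One common way to calibrate weights to benchmark margins is iterative proportional fitting (raking); a minimal sketch is below. The margins, categories and totals are invented for illustration and are not the survey's actual benchmarks.

```python
def rake(weights, categories, benchmarks, n_iter=50):
    """Adjust weights so weighted totals match benchmarks on each margin.

    weights    : list of current weights, one per respondent
    categories : dict margin_name -> list of category labels per respondent
    benchmarks : dict margin_name -> dict category -> benchmark total
    """
    w = list(weights)
    for _ in range(n_iter):
        for margin, labels in categories.items():
            # Weighted total currently in each category of this margin.
            totals = {}
            for wi, lab in zip(w, labels):
                totals[lab] = totals.get(lab, 0.0) + wi
            # Scale each weight so the margin matches its benchmark.
            for i, lab in enumerate(labels):
                w[i] *= benchmarks[margin][lab] / totals[lab]
    return w

# Toy data: four respondents, margins of sex and age group.
weights = [100.0, 100.0, 100.0, 100.0]
categories = {"sex": ["M", "F", "M", "F"],
              "age": ["15-34", "15-34", "35-74", "35-74"]}
benchmarks = {"sex": {"M": 250.0, "F": 150.0},
              "age": {"15-34": 180.0, "35-74": 220.0}}
calibrated = rake(weights, categories, benchmarks)
```

After raking, the weighted totals by sex and by age group agree with the benchmark totals, while the weights stay as close as the margins allow to their starting values.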
The education and labour force benchmarks were obtained from other ABS survey data. These benchmarks are considered 'pseudo-benchmarks' as they are not demographic counts and they have a non-negligible level of sample error associated with them. The 2011 Survey of Education and Work (persons aged 16-64 years) was used to provide a pseudo-benchmark for educational attainment. The monthly Labour Force Survey (aggregated data from November 2011 to March 2012) provided the pseudo-benchmark for labour force status. The sample error associated with these pseudo-benchmarks was incorporated into the standard error estimation.
The process of weighting ensures that the survey estimates conform to person benchmarks by state, part of state, age and sex. These benchmarks are produced from estimates of the resident population derived independently of the survey. Therefore the PIAAC estimates do not (and are not intended to) match estimates for the total Australian resident population (which include persons and households living in non-private dwellings, such as hotels and boarding houses, and in very remote parts of Australia) obtained from other sources.
Survey estimates of counts of persons are obtained by summing the weights of persons with the characteristic of interest.
Note that although the literacy-related non-respondent records (154 people) were given a weight, plausible values were not generated for this population.
RELIABILITY OF ESTIMATES
All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error.
Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured because it is calculated from the scientific methods used to design the survey. Non-sampling error can occur at any stage of the survey process. For example, persons selected for the survey may not respond (non-response); survey questions may not be clearly understood by the respondent; responses may be incorrectly recorded by interviewers; or there may be errors when coding or processing the survey data.
One measure of the likely difference between an estimate derived from a sample of persons and the value that would have been produced if all persons in scope of the survey had been included is the Standard Error (SE). The SE indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that the sample estimate will differ by less than one SE from the figure that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.
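The one-SE and two-SE intervals above can be written out directly; the estimate and SE below are invented numbers, not survey results.

```python
# Toy values: an estimate and its standard error.
estimate, se = 5000.0, 200.0

# About 2 chances in 3 (67%) the true figure lies within one SE.
ci_67 = (estimate - se, estimate + se)

# About 19 chances in 20 (95%) it lies within two SEs.
ci_95 = (estimate - 2 * se, estimate + 2 * se)

print(ci_67)  # (4800.0, 5200.0)
print(ci_95)  # (4600.0, 5400.0)
```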
Another measure of the likely difference is the Relative Standard Error (RSE), which is obtained by expressing the SE as a percentage of the estimate:

RSE% = (SE / estimate) x 100
Generally, only estimates (numbers, percentages, means and medians) with RSEs less than 25% are considered sufficiently reliable for most purposes. In the publication Programme for the International Assessment of Adult Competencies (PIAAC), 2011-12 (cat. no. 4228.0), estimates with RSEs between 25% and 50% are annotated to indicate they are subject to high sample variability and should be used with caution. In addition, estimates with RSEs greater than 50% have been annotated to indicate they are considered too unreliable for general use.
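The RSE and the 25%/50% annotation thresholds described above can be sketched as follows; the function names and flag wording are ours, for illustration.

```python
def rse(estimate: float, se: float) -> float:
    """Relative standard error: SE as a percentage of the estimate."""
    return 100.0 * se / estimate

def reliability_flag(estimate: float, se: float) -> str:
    """Classify an estimate using the 25% and 50% RSE thresholds."""
    r = rse(estimate, se)
    if r < 25:
        return "reliable"
    if r <= 50:
        return "use with caution"  # annotated: high sample variability
    return "too unreliable for general use"

print(reliability_flag(1000.0, 100.0))  # RSE = 10%
print(reliability_flag(1000.0, 300.0))  # RSE = 30%
print(reliability_flag(1000.0, 600.0))  # RSE = 60%
```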