4228.0.30.001 - Microdata: Programme for the International Assessment of Adult Competencies, Australia, 2011-2012 Quality Declaration 
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 09/10/2013   

The statistics in the CURFs were compiled from data collected in the Programme for the International Assessment of Adult Competencies (PIAAC) survey, conducted throughout Australia from October 2011 to March 2012.

The scope of the survey is restricted to people aged 15 to 74 years who were usual residents of private dwellings and excludes:

  • diplomatic personnel of overseas governments
  • members of non-Australian defence forces (and their dependants) stationed in Australia
  • overseas residents who have not lived in Australia, or do not intend to do so, for a period of 12 months or more
  • people living in very remote areas
  • persons living in Collection Districts (CDs) which contained Discrete Indigenous Communities (these CDs were not enumerated for operational reasons).


Households where all of the residents were less than 18 years of age were excluded from the survey because the initial screening questions needed to be answered by a responsible adult (who was aged 18 years or over).

If a child aged 15-17 years was selected, they could be interviewed with the consent of a parent or responsible adult.


Multi-stage sampling techniques were used to select the sample for the survey. After sample loss, the sample included 11,532 households. After exclusions due to scope and coverage, the final sample comprised 8,600 respondents. Of these, 8,446 were fully responding or provided sufficient detail for scores to be determined. The remaining 154 respondents did not complete the survey due to literacy or language difficulties, and only their age and sex are included. In addition, three respondents did not complete the survey for other reasons and may be missing from some data items.


Information for this survey was collected face-to-face. Trained interviewers asked members of each household questions via Computer Assisted Interviewing (CAI) using the following methods:
  • An interview with Any Responsible Adult (ARA) to collect household details.
  • A Personal Interview (PI) with a randomly selected household member in scope to collect information for the Background Questionnaire (BQ). This questionnaire asked about education and training, employment, income and skill use in literacy, numeracy and ICT.
  • A self-enumerated exercise, conducted either via a computer-delivered instrument on a laptop, by paper booklet, or a mixture of both. Respondents who had experience using a computer, as determined by the BQ, undertook an exercise on the laptop that assessed whether they had the mouse skills needed to complete a computer-based exercise. Respondents without the necessary skills were given a paper-based exercise.



Weighting is the process of adjusting results from the sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each enumerated person. The weight is a value which indicates how many persons in the population are represented by the sample person.

The first step in calculating weights for each person is to assign an initial weight which is equal to the inverse probability of being selected in the survey. For example, if the probability of a person being selected in the survey was one in 300, then the person would have an initial weight of 300 (that is, they represent 300 people).
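As a minimal sketch of this step (the function name is illustrative, not part of any ABS system), the initial weight is simply the inverse of the selection probability:

```python
def initial_weight(selection_probability):
    """Return the inverse-probability (design) weight for one person.

    A person selected with probability 1 in 300 represents 300 people,
    so their initial weight is 1 / (1/300) = 300.
    """
    return 1.0 / selection_probability
```

For example, `initial_weight(1 / 300)` returns 300 (up to floating-point rounding).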

Non-response adjustment

Non-response adjustments were made to the initial person-level weights with the aim of representing those people in the population that did not respond to PIAAC. Two adjustment factors were applied:
  • a literacy-related non-response adjustment, aimed at ensuring survey estimates represented those people in the population who had a literacy or language related problem and could not respond to the survey (these people cannot be represented by survey respondents because their reason for not completing the survey is directly related to the survey outcome; however, they are part of the PIAAC target population).
  • a non-literacy-related non-response adjustment, which was aimed at ensuring survey estimates represented those people in the population that did not have a literacy or language related problem but did not respond to the survey for some other reason.


After the non-response adjustment, the weights were adjusted to align with independent estimates of the population, referred to as 'benchmarks', in designated categories of sex by age by state by area of usual residence. This process is known as calibration. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distributions of the population described by the benchmarks, rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over- or under-enumeration of particular categories of persons which may occur due to either the random nature of sampling or non-response.
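Calibration in its simplest form, post-stratification to a single categorical benchmark, scales the weights within each benchmark cell so that they sum to the independent population count for that cell. A minimal sketch (names are illustrative; the actual PIAAC calibration used several benchmark dimensions simultaneously):

```python
def poststratify(weights, cells, benchmarks):
    """Scale each person's weight so that, within every benchmark cell,
    the weights sum to the independent population count for that cell.

    weights    : list of current person weights
    cells      : benchmark category (e.g. sex-by-age cell) of each person
    benchmarks : dict mapping each cell to its independent population count
    """
    # Current weighted total per cell.
    totals = {}
    for w, cell in zip(weights, cells):
        totals[cell] = totals.get(cell, 0.0) + w
    # Multiply each weight by (benchmark / current total) for its cell.
    return [w * benchmarks[cell] / totals[cell]
            for w, cell in zip(weights, cells)]
```

After this adjustment, summing the new weights within any cell reproduces that cell's benchmark exactly.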

The survey was benchmarked to the in-scope estimated resident population (ERP).

Further analysis was undertaken to ascertain whether benchmark variables, in addition to geography, age and sex, should be incorporated into the weighting strategy. Analysis showed that including only these variables in the weighting approach did not adequately compensate for undercoverage in the PIAAC sample for variables such as highest educational attainment and labour force status, when compared to other ABS surveys. As these variables were considered to have a possible association with adult literacy, additional benchmarks were incorporated into the weighting process.

The benchmarks used in the calibration of final weights for PIAAC were:

  • state by highest educational attainment
  • state by sex by age by labour force status
  • state by part of state by age by sex.

The education and labour force benchmarks were obtained from other ABS survey data. These benchmarks are considered 'pseudo-benchmarks' as they are not demographic counts and they have a non-negligible level of sample error associated with them. The 2011 Survey of Education and Work (persons aged 16-64 years) was used to provide a pseudo-benchmark for educational attainment. The monthly Labour Force Survey (aggregated data from November 2011 to March 2012) provided the pseudo-benchmark for labour force status. The sample error associated with these pseudo-benchmarks was incorporated into the standard error estimation.

The process of weighting ensures that the survey estimates conform to person benchmarks by state, part of state, age and sex. These benchmarks are produced from estimates of the resident population derived independently of the survey. Therefore the PIAAC estimates do not (and are not intended to) match estimates for the total Australian resident population (which include persons and households living in non-private dwellings, such as hotels and boarding houses, and in very remote parts of Australia) obtained from other sources.


Survey estimates of counts of persons are obtained by summing the weights of persons with the characteristic of interest.
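Sketched in code (hypothetical names), an estimated count is just the sum of weights over the persons who have the characteristic:

```python
def estimate_count(weights, has_characteristic):
    """Sum the weights of persons flagged with the characteristic of interest.

    weights            : list of final person weights
    has_characteristic : parallel list of booleans
    """
    return sum(w for w, flag in zip(weights, has_characteristic) if flag)
```

For example, three respondents with weights 300, 250 and 400, of whom the first and third have the characteristic, contribute an estimated count of 700 persons.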

Note that although the literacy-related non-respondent records (154 people) were given a weight, plausible values were not generated for this population.


All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error.

Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured because it can be calculated mathematically from the methods used to design the survey. Non-sampling error can occur at any stage throughout the survey process. For example, persons selected for the survey may not respond (non-response); survey questions may not be clearly understood by the respondent; responses may be incorrectly recorded by interviewers; or there may be errors when coding or processing the survey data.

Sampling error

The Standard Error (SE) is one measure of the likely difference between an estimate derived from a sample of persons and the value that would have been produced if all persons in scope of the survey had been included. It indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that the sample estimate will differ by less than one SE from the number that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.

Another measure of the likely difference is the Relative Standard Error (RSE), which is obtained by expressing the SE as a percentage of the estimate:

RSE% = (SE / estimate) × 100

Generally, only estimates (numbers, percentages, means and medians) with RSEs less than 25% are considered sufficiently reliable for most purposes. In the publication Programme for the International Assessment of Adult Competencies (PIAAC), 2011-12 (cat. no. 4228.0), estimates with RSEs between 25% and 50% are annotated to indicate they are subject to high sample variability and should be used with caution. In addition, estimates with RSEs greater than 50% have been annotated to indicate they are considered too unreliable for general use.
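The RSE formula and the publication's reliability cut-offs can be sketched as follows (function names are illustrative):

```python
def rse_percent(estimate, standard_error):
    """Relative standard error: the SE expressed as a percentage of the estimate."""
    return standard_error / estimate * 100.0


def reliability_note(rse):
    """Annotation rules described for cat. no. 4228.0 estimates."""
    if rse < 25.0:
        return "sufficiently reliable for most purposes"
    if rse <= 50.0:
        return "subject to high sample variability; use with caution"
    return "considered too unreliable for general use"
```

For example, an estimate of 1,000 with an SE of 50 has an RSE of 5%, well inside the reliable range.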

In addition to the main weight (as outlined earlier), each record on the CURFs also contains 60 'replicate weights'. The purpose of these replicate weights is to enable the calculation of the standard error of any estimate produced. This method is known as the 60-group jackknife variance estimator.

The basic concept behind this replication approach is to select different sub-samples repeatedly (60 times) from the whole sample. For each of these sub-samples the statistic of interest is calculated. The variance of the full sample statistic is then estimated using the variability among the replicate statistics calculated from these sub-samples. As well as enabling variances of estimates to be calculated relatively simply, replicate weights also enable unit record analyses such as chi-square and logistic regression to be conducted which take into account the sample design.
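The replication idea above can be sketched as a delete-a-group jackknife variance calculation. The (G − 1)/G scale factor below is the standard form for a G-group jackknife; the exact factor used for the PIAAC CURFs should be confirmed against the technical documentation in cat. no. 4228.0.

```python
def jackknife_variance(full_estimate, replicate_estimates):
    """Estimate the variance of a statistic from its replicate estimates.

    Each replicate estimate is the statistic recomputed with one of the
    G replicate weights; the variance is the scaled sum of squared
    deviations from the full-sample estimate. The standard error is the
    square root of this value.
    """
    g = len(replicate_estimates)  # 60 for the PIAAC CURFs
    return (g - 1) / g * sum((r - full_estimate) ** 2
                             for r in replicate_estimates)
```

For instance, with a full-sample estimate of 10 and replicate estimates 9, 11 and 10 (a toy three-group case), the variance is (2/3) × 2 ≈ 1.33, giving an SE of about 1.15.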

Further information about RSEs and how they are calculated can be referenced in the 'Technical Note' section of the following publication relevant to this microdata: Programme for the International Assessment of Adult Competencies (PIAAC), 2011-12 (cat. no. 4228.0). RSEs for estimates in the tables published in this publication are available in spreadsheet format, as attachments to this publication.

Non-sampling error

Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. One of the main sources of non-sampling error is non-response by persons selected in the survey. Non-response occurs when persons cannot or will not cooperate, or cannot be contacted. Non-response can affect the reliability of results and can introduce a bias. The magnitude of any bias depends upon the rate of non-response and the extent of the difference between the characteristics of those persons who responded to the survey and those that did not.

Every effort was made to reduce non-response and other non-sampling errors by careful design and testing of the questionnaire, training and supervision of interviewers, and undertaking extensive editing and quality control procedures at all stages of data processing.