Programme for the International Assessment of Adult Competencies, Australia methodology

Reference period: 2011-12
Released: 9/10/2013

Explanatory notes

Introduction

1 This publication contains results from the Australian component of the Programme for the International Assessment of Adult Competencies (PIAAC) conducted in 24 countries around the world. The PIAAC survey was enumerated throughout Australia from October 2011 to March 2012 with funding provided by the then Australian Government Department of Education, Employment and Workplace Relations.

2 PIAAC is an international survey coordinated by the Organisation for Economic Co-operation and Development (OECD). PIAAC provides information on skills and competencies for people aged 15 to 74 years in the three domains of:

  • literacy;
  • numeracy; and
  • problem solving in technology-rich environments (PSTRE).
     

3 PIAAC is the third survey of international comparisons of adult proficiency skills in specific domains conducted in Australia. Its predecessors were the Adult Literacy and Life Skills Survey (ALLS) 2006 and the Survey of Aspects of Literacy (SAL) 1996 (internationally known as the International Adult Literacy Survey (IALS)). PIAAC expands on these previous surveys by assessing skills in the domain of 'problem solving in technology-rich environments' and by asking questions specifically about skill use at work.

4 The literacy and numeracy scores previously released in the ALLS and SAL publications are not comparable with PIAAC data, for reasons listed in the Comparability of Time Series section below. Data based on remodelled literacy scores (from ALLS and SAL) and numeracy scores (from ALLS) are included in additional data cubes to allow direct comparison. Caution, however, is advised when comparing results from PIAAC with the earlier 1996 SAL and the 2006 ALLS. While the data from previous surveys have been remodelled, which should facilitate comparability over time, analysis undertaken by the ABS and internationally has shown that in some cases the observed trend is difficult to reconcile with other known factors and is not fully explained by sampling variability. Further analysis is needed to better understand the cause of these variations before drawing conclusions about the trend. For more information, see: The Survey of Adult Skills: Reader's Companion (Organisation for Economic Co-operation and Development, 2013); Technical Report for the Survey of Adult Skills - PIAAC (Organisation for Economic Co-operation and Development, 2013); and Skills in Canada: First Results from the Programme for the International Assessment of Adult Competencies - PIAAC (Statistics Canada, 2013).

5 Data from PIAAC, ALLS and SAL are used to describe the literacy, numeracy and problem solving in technology-rich environments skills of Australian adults, and the relationship between those skills and education, employment, income and demographic characteristics. The data are used for a range of purposes, including the Council of Australian Governments (COAG) National Agreement for Skills and Workforce Development.

6 To analyse the relationship between the assessed competencies and social and economic well-being, PIAAC collected information on topics including:

  • general demographic information including income;
  • participation in education and training activities;
  • participation in labour force activities;
  • self-perception of literacy, numeracy and information communication technology (ICT) skill use at work and everyday life;
  • self-perception of generic skills used at work;
  • volunteering, trust and health;
  • language background; and
  • parental background.
     

7 Twenty-four countries participated in the PIAAC survey internationally. This publication contains Australian data only. The OECD published international results on 8 October 2013 in the OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. The report is available from the OECD website at www.oecd.org.

Scope

8 The scope of the survey is restricted to people aged 15 to 74 years who were usual residents of private dwellings and excludes:

  • diplomatic personnel of overseas governments;
  • members of non-Australian defence forces (and their dependants) stationed in Australia;
  • overseas residents who have not lived in Australia, or do not intend to do so, for a period of 12 months or more;
  • people living in very remote areas; and
  • people living in Census Collection Districts (CDs) which contained Discrete Indigenous Communities.
     

Coverage

9 Households where all of the residents were less than 18 years of age were excluded from the survey because the initial screening questions needed to be answered by a responsible adult (who was aged 18 years or over).

10 If a child aged 15 to 17 years was selected, they were interviewed with the consent of a parent or responsible adult.

Data collection

11 Data was collected by trained ABS interviewers who conducted computer-assisted personal interviews.

12 A series of screening questions were asked of a responsible adult in a selected household to determine whether any of the residents of the household were in scope for the survey.

13 One resident of the household, who was in scope, was randomly selected to be interviewed. This respondent was asked a background questionnaire to obtain general information on topics including education and training, employment, income and skill use in literacy, numeracy, and ICT.

14 If a child aged 15 to 17 years was selected, the parent or responsible adult was asked questions about the household's income.

15 Where a respondent had language difficulties, an interpreter could assist with the background questionnaire if this was acceptable to the respondent; however, the self-enumerated exercise was not completed in these cases.

Self-enumerated exercise

16 After the background questionnaire was completed, the respondent undertook a self-enumerated exercise. This contained tasks to assess their literacy, numeracy or problem solving skills in technology-rich environments. The exercise tasks were based on activities that adults do in their daily lives such as following instructions on labels, interpreting charts and graphs, measuring with a ruler, using email, internet searches and navigating websites. Tasks were at varying levels of difficulty.

17 The exercise could be completed at a separate time to the background questionnaire.

18 Respondents either completed the exercise on the notebook computer (with a mouse attached) or in paper-booklets. All respondents first took a core exercise to assess their capacity to undertake the main exercise. Those who passed the core stage proceeded to the main exercise. Those who failed the core stage were directed to the Reading Components booklet, which was designed to measure basic reading skills. Refer to the appendix titled Pathways through the self-enumerated exercise for further information about this process.

19 All respondents were provided with a pencil, ruler, notepad and calculator to use during the exercise. There were no time limits, and the respondent was not allowed to receive any assistance from others.

20 The role of the interviewer during the self-enumerated exercise was to discreetly monitor the respondent's progress, and to encourage them to complete as many of the tasks as possible.

Observation module

21 When the interview was complete and the interviewer had left the household, the interviewer used the computer to answer a series of questions which collected information about the interview setting such as any events that might have interrupted or distracted the respondent during the exercise.

Scoring the exercise tasks

22 At the completion of the interview, if the respondent had a paper exercise booklet, it was forwarded to the Australian Bureau of Statistics (ABS). The core and main exercise booklets were marked by trained scorers, and the responses from the Reading Components booklets were recorded. A sample of the booklets was independently re-scored by a different scorer to assess the accuracy of the scoring.

23 The responses from the computer-based and paper-based exercises were used to calculate scores for each of the skill domains completed by the respondent. The derivation of the scores was performed by Educational Testing Service (ETS) of Princeton USA (who also performed this task for the ALLS and SAL surveys). Refer to the appendix titled Scores and skill levels for further information about the calculations of the scores.

Score imputation

24 In order to minimise respondent burden, respondents did not complete exercises in all three of the skill domains. Respondents completed exercise tasks in only one or two of these domains, depending on the assessment path they followed. To address this, PIAAC used multiple imputation methodology to obtain proficiency scores for each respondent for the skill domains for which the respondent was not required to do an exercise. Problem solving in technology-rich environments scores were not imputed for respondents who were sequenced to the paper-based Core booklet (i.e. they had no computer experience, or they did not agree to do the exercise on the computer, or they did not pass the computer-based Core Stage 1).

Sample design

25 PIAAC was designed to provide reliable estimates at the national level for five-year age groups, and for each state and territory.

26 Dwellings included in the survey in each state and territory were selected at random using a multi-stage area sample. This sample included only private dwellings from the geographic areas covered by the survey.

27 The initial sample for PIAAC consisted of 14,442 private dwellings. Of the 11,532 households that remained in the survey after sample loss, 8,446 (73%) were fully responding or provided sufficient detail for scores to be determined.

Estimation method

Weighting

28 Weighting is the process of adjusting results from a sample survey to infer results for the total population. To do this, a 'weight' is allocated to each enumerated person. The weight is a value which indicates how many people in the population are represented by the sample person.

29 The first step in calculating weights for each unit is to assign an initial weight, which is the inverse of the probability of the unit being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 300, then the person would have an initial weight of 300 (that is, they represent 300 people).
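As a minimal illustration of this step (hypothetical values only, not the actual PIAAC weighting code), the initial weight can be computed as the inverse of the selection probability:

```python
# Hypothetical selection probabilities from a multi-stage area sample.
selection_probability = {
    "person_1": 1 / 300,  # selected with probability 1 in 300
    "person_2": 1 / 450,  # selected with probability 1 in 450
}

# The initial weight is the inverse of the selection probability:
# a person selected with probability 1/300 represents about 300 people.
initial_weight = {pid: 1 / p for pid, p in selection_probability.items()}

for pid, w in initial_weight.items():
    print(pid, round(w))  # person_1 300, person_2 450
```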

Non-response adjustment

30 Non-response adjustments were made to the initial person-level weights with the aim of representing those people in the population that did not respond to PIAAC. Two adjustment factors were applied (the generic mechanics of this type of adjustment are sketched after this list):

  • a literacy-related non-response adjustment, which was aimed at ensuring survey estimates represented those people in the population that had a literacy or language related problem and could not respond to the survey (these people cannot be represented by survey respondents because their reason for not completing the survey is directly related to the survey outcome, however they are part of the PIAAC target population); and
  • a non-literacy-related non-response adjustment, which was aimed at ensuring survey estimates represented those people in the population that did not have a literacy or language related problem but did not respond to the survey for some other reason.
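The following is a minimal sketch of the generic mechanics of a non-response adjustment within weighting classes (hypothetical classes and weights; the actual PIAAC processing applied the two separate factors described above):

```python
from collections import defaultdict

# Hypothetical records: (adjustment_class, initial_weight, responded).
units = [
    ("class_A", 300.0, True),
    ("class_A", 300.0, False),  # non-respondent
    ("class_A", 320.0, True),
    ("class_B", 450.0, True),
    ("class_B", 450.0, False),  # non-respondent
]

# Within each adjustment class, respondents absorb the weight of
# non-respondents: factor = total weight of all units in the class
# divided by total weight of responding units in the class.
total = defaultdict(float)
responding = defaultdict(float)
for cls, w, resp in units:
    total[cls] += w
    if resp:
        responding[cls] += w

adjusted_weights = [
    (cls, w * total[cls] / responding[cls])
    for cls, w, resp in units
    if resp
]
print(adjusted_weights)
```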
     

Population benchmarks

31 After the non-response adjustment, the weights were adjusted to align with independent estimates of the population, referred to as 'benchmarks', in designated categories of sex by age by state by area of usual residence. This process is known as calibration. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distributions of the population described by the benchmarks, rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over-enumeration or under-enumeration of particular categories of people which may occur due to either the random nature of sampling or non-response.
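As an illustration of calibration, the following minimal sketch uses iterative proportional fitting (raking) against two hypothetical benchmark dimensions. The production method used for PIAAC is more elaborate, but the principle of adjusting weights until weighted totals match the benchmarks is the same:

```python
# Hypothetical persons with non-response-adjusted weights.
people = [
    {"sex": "M", "age": "15-24", "weight": 300.0},
    {"sex": "F", "age": "15-24", "weight": 310.0},
    {"sex": "M", "age": "25-34", "weight": 290.0},
    {"sex": "F", "age": "25-34", "weight": 305.0},
]

# Hypothetical benchmark totals for each dimension.
benchmarks = {
    "sex": {"M": 620.0, "F": 600.0},
    "age": {"15-24": 640.0, "25-34": 580.0},
}

# Repeatedly scale the weights so each margin matches its benchmark.
for _ in range(50):
    for dim, targets in benchmarks.items():
        margin = {}
        for p in people:
            margin[p[dim]] = margin.get(p[dim], 0.0) + p["weight"]
        for p in people:
            p["weight"] *= targets[p[dim]] / margin[p[dim]]

# Weighted totals now agree with both benchmark margins.
print({p["sex"] + "/" + p["age"]: round(p["weight"], 1) for p in people})
```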

32 The survey was calibrated to the in-scope estimated resident population (ERP).

33 Further analysis was undertaken to ascertain whether benchmark variables, in addition to geography, age and sex, should be incorporated into the weighting strategy. Analysis showed that including only these variables in the weighting approach did not adequately compensate for undercoverage in the PIAAC sample for variables such as highest educational attainment and labour force status, when compared to other ABS surveys. As these variables were considered to have a possible association with adult literacy, additional benchmarks were incorporated into the weighting process.

    34 The benchmarks used in the calibration of final weights for PIAAC were:

    • state by highest educational attainment;
    • state by sex by age by labour force status; and
    • state by part of state by age by sex.
       

35 The education and labour force benchmarks were obtained from other ABS survey data. These benchmarks are considered 'pseudo-benchmarks' as they are not demographic counts and they have a non-negligible level of sample error associated with them. The 2011 Survey of Education and Work (people aged 15 to 64 years) was used to provide a pseudo-benchmark for educational attainment. The monthly Labour Force Survey (aggregated data from November 2011 to March 2012) provided the pseudo-benchmark for labour force status. The sample error associated with these pseudo-benchmarks was incorporated into the standard error estimation.

36 The process of weighting ensures that the survey estimates conform to person benchmarks by state, part of state, age and sex. These benchmarks are produced from estimates of the resident population derived independently of the survey. Therefore the PIAAC estimates do not (and are not intended to) match estimates for the total Australian resident population (which include people and households living in non-private dwellings, such as hotels and boarding houses, and in very remote parts of Australia) obtained from other sources.

    Estimation

    37 Survey estimates of counts of people are obtained by summing the weights of people with the characteristic of interest.

    38 Note that although the literacy-related non-respondent records (154 people) were given a weight, plausible values were not generated for this population. This population is included in the "missing" category. These people are likely to have low levels of literacy and numeracy in English.

    Reliability of estimates

    39 All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error.

    40 Sampling error is the difference between the published estimates, derived from a sample of people, and the value that would have been produced if all people in scope of the survey had been included.

    41 Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording answers by interviewers, and errors in coding and processing data. Every effort was made to reduce the non-sampling error by careful design and testing of the questionnaire, training and supervision of interviewers, follow-up of respondents, and extensive editing and quality control procedures at all stages of data processing.

    42 In contrast to most other ABS surveys, the PIAAC estimates also include significant imputation variability, due to the use of multiple possible assessment modules and the complex proficiency scaling procedures. The effect of the plausible scoring methodology on the estimation can be reliably estimated and is included in the calculated RSEs. This is covered in more detail in the Data quality (Technical Note).
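As an indication of how the sampling and imputation components can be combined, the following sketch applies a multiple-imputation-style variance calculation over the ten plausible values (hypothetical inputs; the scaling constant that a real jackknife replication scheme applies to the replicate variance is omitted for brevity):

```python
import statistics

def estimate(weights, flags):
    """Weighted count of people with the characteristic of interest (0/1 flags)."""
    return sum(w * f for w, f in zip(weights, flags))

def total_variance(weights, replicate_weights, pv_flags):
    # weights: full-sample final weights.
    # replicate_weights: list of replicate weight sets (hypothetical).
    # pv_flags: one 0/1 indicator list per plausible value, e.g.
    #           "at Level 1 for literacy" under plausible value m.
    M = len(pv_flags)  # number of plausible values (ten in PIAAC)
    pv_estimates = [estimate(weights, f) for f in pv_flags]

    # Sampling variance: replicate variance averaged over plausible values
    # (replication-scheme scaling constant omitted in this sketch).
    sampling = statistics.mean(
        sum((estimate(rw, f) - estimate(weights, f)) ** 2
            for rw in replicate_weights)
        for f in pv_flags
    )

    # Imputation variance: between-plausible-value variance, inflated
    # by (1 + 1/M), as in multiple-imputation variance estimation.
    imputation = statistics.variance(pv_estimates)
    return sampling + (1 + 1 / M) * imputation
```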

    Seasonal factors

    43 The estimates are based on information collected from October 2011 to March 2012, and due to seasonal factors they may not be representative of other time periods in the year. For example, employment is subject to seasonal variation through the year. Therefore, the PIAAC results for employment could have differed if the survey had been conducted over the whole year or in a different part of the year.

    Data quality

44 Information recorded in this survey is essentially 'as reported' by respondents and hence may differ from that which might be obtained from other sources or via other methodologies. This factor should be considered when interpreting the estimates in this publication.

    45 Information was collected on the respondents' perception of various topics such as their employment status, health status, skill use and aspects of their job. Perceptions are influenced by a number of factors and can change quickly. Care should therefore be taken when analysing or interpreting these data.

46 For each competency, proficiency is measured on a scale ranging from 0 to 500 points. To facilitate analysis, these continuous scores have been grouped into six skill levels for the literacy and numeracy skill domains, and four skill levels for the problem solving in technology-rich environments skill domain, with Below Level 1 being the lowest measured level. The relatively small proportion of respondents assessed at Level 5 for the literacy and numeracy skill domains often results in unreliable estimates of the number of people at this level. For this reason, when results are presented by skill level, Levels 4 and 5 are usually combined. Similarly, for many problem solving in technology-rich environments tables it has been necessary to combine Levels 2 and 3.
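A minimal sketch of this grouping for literacy and numeracy, using the cut points given in the appendix Scores and skill levels:

```python
import bisect

# Literacy/numeracy cut points from the appendix "Scores and skill levels":
# Below Level 1 (<176), Level 1 (176-225), Level 2 (226-275),
# Level 3 (276-325), Level 4 (326-375), Level 5 (376+).
CUTS = [176, 226, 276, 326, 376]
LABELS = ["Below Level 1", "Level 1", "Level 2", "Level 3",
          "Level 4", "Level 5"]

def skill_level(score, combine_top=True):
    """Map a 0-500 proficiency score to a published skill level."""
    label = LABELS[bisect.bisect_right(CUTS, score)]
    # Levels 4 and 5 are usually combined in published tables.
    if combine_top and label in ("Level 4", "Level 5"):
        return "Level 4/5"
    return label

print(skill_level(240))  # Level 2
print(skill_level(380))  # Level 4/5
```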

    'Don't know', 'Refused' and 'Not stated or inferred' categories

    47 For a number of PIAAC data items, some respondents were unwilling or unable to provide the required information. When this occurred, the missing response was recorded as either 'don't know', 'refused' or 'not stated or inferred'. These categories are not explicitly shown in the publication tables, but have been included in the totals with footnotes provided to note these inclusions. Proportions shown in the tables are based on totals which include these categories.

    48 Listed below are data items where responses coded to the 'don't know' or 'refused' category were higher than 1%:

    • 'Current work earnings from wage or salary - annual gross pay' data item, 2.5% of people (approximately 424,000) had responses of 'don't know' or 'refused';
    • 'Current work earnings from business - last financial year' data item, 1.4% of people (approximately 234,000) had responses of 'don't know' or 'refused';
    • 'Level of highest educational qualification of mother or female guardian (ISCED)', 8.1% of people (1.4 million) had responses of 'don't know' or 'refused'; and
    • 'Level of highest educational qualification of father or male guardian (ISCED)', 9.7% of people (1.6 million) had responses of 'don't know' or 'refused'.
       

    49 Aside from the items listed above, the proportions of responses of 'don't know' or 'refused' did not exceed 1% for any other data item, with the vast majority being less than 0.5%.

    'Missing' category

    50 Some respondents were unable to complete the background questionnaire as they were unable to speak or read the language of the assessment, in Australia's case, English; had difficulty reading or writing; or had a learning or mental disability. In the case of the background questionnaire, there was no one present (either the interviewer or another person) to translate into the language of the respondent or answer on behalf of the respondent. In the case of these respondents, only their age, sex and geographical details are known. Non-respondents represented 2% of the total population. While the proficiency of this group is likely to vary between countries, in most cases, these people are likely to have low levels of proficiencies in the language of the country concerned.

    Level of education

    Level of highest educational attainment (ASCED)

    51 Level of highest educational attainment was derived from information on highest year of school completed and level of highest non-school qualification. The derivation process determines which of the 'non-school' or 'school' attainments will be regarded as the highest. Usually the higher ranking attainment is self-evident, but in some cases some secondary education is regarded, for the purposes of obtaining a single measure, as higher than some certificate level attainments.

    52 The following decision table is used to determine which of the responses to questions on highest year of school completed (coded to ASCED Broad Level 6) and level of highest non-school qualification (coded to ASCED Broad Level 5) is regarded as the highest. It is emphasised that this table was designed for the purpose of obtaining a single value for level of highest educational attainment and is not intended to convey any other ordinality.

Decision Table: Level of Highest Educational Attainment (ASCED level of education codes)

In the table below, rows are the highest year of school completed and columns are the level of highest non-school qualification; each cell gives the resulting level of highest educational attainment.

| Highest year of school completed | Certificate n.f.d. (500) | Certificate III or IV n.f.d. (510) | Certificate IV (511) | Certificate III (514) | Certificate I or II n.f.d. (520) | Certificate II (521) | Certificate I (524) |
|---|---|---|---|---|---|---|---|
| Secondary Education n.f.d. (600) | Secondary Education n.f.d. | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Certificate I or II n.f.d. | Certificate II | Certificate I |
| Senior Secondary Education n.f.d. (610) | Secondary Education n.f.d. | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Secondary Education n.f.d. | Secondary Education n.f.d. | Secondary Education n.f.d. |
| Year 12 (611) | Year 12 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Year 12 | Year 12 | Year 12 |
| Year 11 (613) | Year 11 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Year 11 | Year 11 | Year 11 |
| Junior Secondary Education n.f.d. (620) | Junior Secondary Education n.f.d. | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Certificate I or II n.f.d. | Certificate II | Certificate I |
| Year 10 (621) | Year 10 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Year 10 | Year 10 | Year 10 |
| Year 9 (622) | Year 9 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Certificate I or II n.f.d. | Certificate II | Certificate I |
| Year 8 (623) | Year 8 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Certificate I or II n.f.d. | Certificate II | Certificate I |
| Year 7 (624) | Year 7 | Certificate III or IV n.f.d. | Certificate IV | Certificate III | Certificate I or II n.f.d. | Certificate II | Certificate I |

    53 The decision table is also used to rank the information provided in a survey about the qualifications and attainments of a single individual. It does not represent any basis for comparison between differing qualifications. For example, a respondent whose highest year of school completed was Year 12, and whose level of highest non-school qualification was a Certificate III, would have those responses crosschecked on the decision table and would as a result have their level of highest educational attainment output as Certificate III. However, if the same respondent answered 'certificate' to the highest non-school qualification question, without any further detail, it would be crosschecked against Year 12 on the decision table as Certificate not further defined. The output would then be Year 12. The decision table, therefore, does not necessarily imply that one qualification is 'higher' than the other. For more details, see Education Variables (cat. no. 1246.0).
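The crosscheck can be thought of as a simple table lookup. The sketch below encodes a few cells of the decision table shown above (illustrative only, not the full ASCED derivation):

```python
# Illustrative encoding of part of the decision table:
# (highest year of school, highest non-school qualification) -> attainment.
DECISION = {
    ("Year 12", "Certificate III"): "Certificate III",
    ("Year 12", "Certificate n.f.d."): "Year 12",
    ("Year 10", "Certificate II"): "Year 10",
    ("Year 9", "Certificate II"): "Certificate II",
}

def highest_attainment(school, non_school):
    """Return the single measure of level of highest educational attainment."""
    return DECISION[(school, non_school)]

# The paragraph 53 example: a 'certificate' response with no further
# detail crosschecked against Year 12 outputs Year 12.
print(highest_attainment("Year 12", "Certificate n.f.d."))  # Year 12
```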

    54 Once the ASCED coding was complete, a concordance was applied to obtain the data item 'Level of highest qualification completed - ISCED' which is an international standard classification of education.

    Current study level (ASCED) and incomplete study level (ASCED)

    55 Level of education of current study was derived using the decision table displayed above, taking into account level of education of school study in current year and level of education of non-school study in current year for people who are undertaking concurrent qualifications.

    56 Once the ASCED coding was complete, a concordance was applied to obtain the data items 'Level of qualification currently studying for - ISCED' and 'Level of incomplete qualification - ISCED'.

    Labour force status

57 The international PIAAC survey's concept of labour force status is defined slightly differently from that used in the ABS Labour Force Survey. The definition of the 'Employed' category in the international and the Australian data items is essentially the same. However, there is a subtle difference in the concept of 'Unemployed', which in turn impacts on the estimates for 'Out of the labour force'. The labour force status data presented in the tables of this publication are more closely aligned with the Australian definitions used in the ABS Labour Force Survey.

    Unemployed - international data item definition

    58 People aged 15 to 74 years who were not employed, and:

    • had actively looked for full-time or part-time work at any time in the four weeks up to the end of the reference week and were available for work within two weeks; or
• were to start a job within three months and could have started within two weeks had the job been available then.
       

    Unemployed - Australian data item definition

59 People aged 15 to 74 years who were not employed, were available for work in the reference week, and at any time in the four weeks up to the end of the reference week (see the sketch following this list):

      • had actively looked for full-time or part-time work; or
      • were waiting to start a new job.
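A minimal sketch of the Australian-definition derivation described above (the field names are hypothetical, not actual PIAAC data item names):

```python
# Sketch of the Australian-definition labour force status derivation.
def labour_force_status(employed, available_in_ref_week,
                        looked_for_work_4wk, waiting_to_start):
    if employed:
        return "Employed"
    # Unemployed: not employed, available in the reference week, and in
    # the four weeks up to the end of the reference week either actively
    # looked for work or was waiting to start a new job.
    if available_in_ref_week and (looked_for_work_4wk or waiting_to_start):
        return "Unemployed"
    return "Out of the labour force"

print(labour_force_status(False, True, True, False))  # Unemployed
```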
         

      Data comparability

      Comparability of time series

      60 As noted above (paragraph 4), data previously released in the ALLS and SAL publications are not directly comparable with PIAAC data. The reasons for this are:

      • The literacy and numeracy scores previously published for ALLS and SAL have been remodelled to make them consistent with PIAAC. These scores were originally based on a model with a response probability (RP) value of 0.8 but are now based on a model with a RP value of 0.67. The latter value was used in PIAAC to achieve consistency with the OECD survey Programme for International Student Assessment (PISA), in the description of what it means to be performing at a particular level of proficiency. The new RP value does not affect the score that was calculated for a respondent. However, it does affect the interpretation of the score. Therefore, users of these data should refer to the new skill level descriptions provided in the appendix Scores and skill levels when performing time-series comparisons;
      • The prose and document literacy scales from ALLS and SAL have been combined to produce a single proficiency scale which is comparable to the PIAAC proficiency scale; and
      • The numeracy scores from ALLS have been recalculated using a model that incorporates the results of all countries that participated in ALLS. (The previous model was based only on countries that participated in the first round of ALLS.) This has resulted in some minor changes to the ALLS numeracy scores. SAL did not collect a numeracy domain which is comparable with ALLS and PIAAC.
         

However, as noted in paragraph 4, caution is advised in the use of time series, even when based on the remodelled data.

      61 Data from ALLS and SAL based on these remodelled literacy scores (from ALLS and SAL) and numeracy scores (from ALLS) are included in additional data cubes.

62 The problem solving in technology-rich environments competency is a new addition in PIAAC and is not comparable to the problem solving scale derived in ALLS.

63 PIAAC was not designed to assess health literacy, preventing any comparison with ALLS on that skill domain.

64 To ensure comparability with the previous surveys, 60% of the literacy and numeracy tasks used in the PIAAC exercise were previously used in the ALLS and SAL surveys. However, in PIAAC most respondents completed the exercises on a computer (70%) rather than on paper (30%), whereas in ALLS and SAL all respondents completed paper-based exercises. This may impact on the comparability of estimates.

      65 PIAAC includes new questions for respondents who were employed or had recent work experience about:

      • the frequency of use of a number of generic skills used in the workplace, including communication, presentation and team-working skills; and
• skill practices at work, specifically reading, writing, mathematics and ICT skill activities at work, which are considered important drivers of skills acquisition; these questions are designed to complement what is being measured in the exercise.
         

66 For each respondent in PIAAC, ten plausible values (scores) were generated for the domains measured (whereas for ALLS and SAL only five plausible values were generated). While simple population estimates for any domain can be produced by choosing at random only one of the ten plausible values, this publication uses an average of the ten values. For example, in order to report an estimate of the total number of people at Level 1 for literacy, the weighted estimate of the number of respondents at Level 1 was calculated for each of the ten plausible values for literacy individually. The ten weighted estimates were then summed. Finally, this result was divided by ten to obtain the estimate of the total number of people at Level 1 for literacy. The process was repeated for each skill level. Refer to the appendix titled Scores and skill levels for further information about the calculation of estimates using all ten plausible values in combination.
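A minimal sketch of the averaging described in paragraph 66 (hypothetical weights and indicators, not actual PIAAC data):

```python
# Each hypothetical record carries a final weight and ten 0/1 indicators
# for "at Level 1 for literacy" under each plausible value.
respondents = [
    {"weight": 300.0, "at_level1": [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]},
    {"weight": 450.0, "at_level1": [0, 0, 0, 1, 0, 0, 0, 0, 1, 0]},
]

M = 10  # number of plausible values

# One weighted estimate per plausible value ...
pv_estimates = [
    sum(r["weight"] * r["at_level1"][m] for r in respondents)
    for m in range(M)
]

# ... summed and divided by ten: the published figure is their average.
estimate = sum(pv_estimates) / M
print(estimate)
```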

      67 Changes to the scope and coverage of PIAAC from ALLS and SAL are:

      • overseas residents who have lived in Australia, or intend to do so, for a period of 12 months or more are included in the scope for PIAAC and ALLS, but were excluded for SAL;
      • people living in Collection Districts which contain Discrete Indigenous Communities were excluded from the scope of PIAAC, but were included for ALLS and SAL if they were not in a very remote area; and
      • households where all of the residents were less than 18 years of age were excluded from PIAAC coverage, but were included in ALLS and SAL.
         

68 The full and part literacy non-response records (154 people) were weighted but not given plausible scores for PIAAC. The other part non-response records (3 people) were weighted and given plausible scores for PIAAC. However, similar records were treated as non-respondents for ALLS and SAL.

      Comparability with other ABS surveys

      69 PIAAC collected data across a range of topics, some of which have been included in previous ABS surveys. Where possible, question modules from existing surveys were used in the PIAAC questionnaire to facilitate comparison with other surveys. However, given PIAAC is part of an international survey, there was a requirement to use internationally developed question modules to ensure the results are comparable with data from other countries involved in the survey.

      70 Additionally, PIAAC is a sample survey and its results are subject to sampling error. As such, PIAAC results may differ from other sample surveys, which are also subject to sampling error. Users should take account of the RSEs on PIAAC estimates and those of other survey estimates where comparisons are made.

      71 Differences in PIAAC estimates, when compared with the estimates of other surveys, may also result from:

      • differences in scope and/or coverage;
      • different reference periods reflecting seasonal variations;
      • non-seasonal events that may have impacted on one period but not another; and
      • underlying trends in the phenomena being measured.
         

72 Finally, differences can occur as a result of using different collection methodologies. This is often evident in comparisons of similar data items reported from different ABS collections where, after taking account of definition and scope differences and sampling error, residual differences remain. These differences often relate to the mode of the collections, such as whether data are collected by an interviewer or self-enumerated by the respondent, and whether the data are collected from the respondent themselves or from a proxy respondent. Differences may also result from the context in which questions are asked, that is, where in the interview the questions are asked and the nature of preceding questions. The impacts of different collection methodologies on data are difficult to quantify; however, every effort is made to minimise such differences.

      Classifications

      73 Country of birth data are classified according to the Standard Australian Classification of Countries (SACC), Second Edition, 2008 (cat. no. 1269.0).

      74 Geography data (State/territory) are classified according to the Australian Statistical Geography Standard (ASGS): Volume 1 - Main Structure and Greater Capital City Statistical Areas, July 2016 (cat. no. 1270.0.55.001).

      75 Languages data are classified according to the Australian Standard Classification of Languages (ASCL), 2005-06 (cat. no. 1267.0).

      76 Education data are classified according to the Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0). Coding was based on the level and field of education as reported by respondents and recorded by interviewers. From the ASCED coding, the level of education was also classified according to the International Standard Classification of Education (ISCED), 1997. For an example of a broad level concordance between these two classifications, see Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0).

      77 Occupation data are classified according to the ANZSCO - Australian and New Zealand Standard Classification of Occupations (cat. no. 1220.0). From the ANZSCO coding, occupation was also classified according to the International Standard Classification of Occupations (ISCO), 2008.

      78 Industry data are classified according to the Australian and New Zealand Standard Industrial Classification (ANZSIC), 2006 (Revision 1.0) (cat. no. 1292.0). From the ANZSIC, industry was also classified according to the International Standard Industrial Classification of All Economic Activities (ISIC), Rev.4, 2008.

      Products and services

      Data cubes

79 Data cubes (spreadsheets) containing tables produced for this publication are available from the Data downloads section of the publication. Estimates, proportions and the corresponding relative standard errors (RSEs) and margins of error (MOEs) are presented for each table.

80 Additional data cubes containing state and territory data are to be appended to this product in 2014. Users can subscribe to Email Notifications on the ABS website to be advised when updates are available for this product: select 4. Social Statistics, then sub-topic 42. Education, then the product 4228.0 Programme for the International Assessment of Adult Competencies.

      Microdata

      81 For users who wish to undertake more detailed analysis of the survey data, a basic confidentialised unit record data file (CURF) is available on CD-ROM from Microdata: Programme for the International Assessment of Adult Competencies (PIAAC) (cat. no. 4228.0.30.001).

      82 Further information about microdata is available from the Microdata Entry Page on the ABS web site.

      Data available on request

      83 In addition to the statistics provided in this publication, the ABS may have other relevant data available on request. Subject to confidentiality and sampling variability constraints, tabulations can be produced from the survey on a fee-for-service basis. Inquiries should be made to the National Information and Referral Service on 1300 135 070. A spreadsheet containing a complete list of the data items available from PIAAC can be accessed from the Data downloads section.

      Acknowledgements

      84 ABS publications draw extensively on information provided freely by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated; without it, the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.

      Next survey

      85 The OECD proposes to conduct the PIAAC survey internationally every ten years. The next PIAAC survey is therefore proposed to be conducted in 2021.

      Related publications

        86 The OECD published international results on 8 October 2013 in OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. The report is available from the OECD website at www.oecd.org.

        87 The OECD publication titled 'Literacy, Numeracy and Problem Solving in Technology-Rich Environments - Framework for the OECD Survey of Adult Skills' provides further information about the PIAAC survey. This publication, as well as further background and conceptual information about the PIAAC survey, is available from the OECD website at www.oecd.org.

        88 The Education and Training Topics @ a Glance page contains a wealth of information and useful references. This site can be accessed through the ABS website.

        Appendix - pathways through the self-enumerated exercise


        Figure 1 - Flowchart of pathways through the self-enumerated exercise


        Computer-based or paper-based exercise

        1 If the respondent indicated in the background questionnaire that they had experience using a computer, they were directed to do a computer-based exercise. If the respondent did not have experience using a computer, or they refused to do the exercise on a computer, they were directed to do a paper-based exercise.

        2 In the computer-based exercises, the computer calculated the respondent's scores and also collected processing information.

        3 For the core booklet in the paper-based path, the interviewer entered the respondent's scores into the computer to determine if they were to proceed to the main exercise.

        Computer-based core exercise

        4 Core Stage 1 of the computer-based path assessed if the respondent had the necessary basic computer skills (such as clicking, typing, scrolling, dragging, highlighting and using pull-down menus) to proceed with the computer-based exercises. If they did not pass this stage, they were directed to do a paper-based core exercise.

        5 Core Stage 2 of the computer-based path assessed if the respondent had the basic literacy and numeracy skills to proceed to the main exercise. If the respondent passed this stage, they then proceeded to the computer-based main exercise. If the respondent did not pass this stage, they were directed to the Reading Components booklet (see paragraph 14 below).

        Paper-based core exercise

6 The paper-based core booklet contained eight basic tasks (four literacy tasks and four numeracy tasks) which assessed whether the respondent had the basic literacy and numeracy skills to proceed to the main exercise. If the respondent passed this stage, they then proceeded to the paper-based main exercise. If the respondent did not pass the core booklet, they were directed to the Reading Components booklet (see paragraph 14 below).

        Main exercise

        7 The tasks in the main exercise were more diverse in complexity and subject matter than the core booklet, and were designed to provide an understanding of the skills of the general population.

        Computer-based main exercise

        8 The computer-based main exercise contained modules which each assessed either literacy, numeracy or problem solving in technology-rich environments. Each respondent was randomly allocated two modules. For example, a respondent could be allocated a numeracy module and a problem solving module. Each module was designed to take an average of 30 minutes.

9 The tasks within the literacy and numeracy modules varied in difficulty, and were allocated to respondents on the basis of their performance at different stages of the assessment. The literacy and numeracy modules contained 20 tasks each. Each problem solving module contained seven fixed tasks.

        10 Respondents were sequenced through an orientation session which contained instructions on how to navigate through each module before they began the module.

        11 Seventy per cent of respondents were directed to the computer-based main exercise.

        Paper-based main exercise

        12 The paper-based main exercise required the respondent to complete a booklet containing either 20 literacy tasks or 20 numeracy tasks. The computer randomly allocated which booklet to issue to the respondent.

        13 Thirty per cent of respondents were directed to the paper-based main exercise.

        Reading components

        14 The Reading Components booklet was designed to measure basic reading skills and contained three parts: word meaning, sentence processing and basic passage comprehension.

        15 All respondents who were issued a paper-based core exercise, as well as those respondents who did not pass Core Stage 2 of the computer-based exercise, were issued a Reading Components booklet.

        Examples of tasks

        16 Refer to the OECD publication 'Literacy, Numeracy and Problem Solving in Technology-Rich Environments - Framework for the OECD Survey of Adult Skills' for examples of literacy, numeracy and PSTRE tasks.

        Appendix - scores and skill levels


        Calculation of scores

        For each skill domain, proficiency scores are derived on a scale ranging from 0 to 500 points. Item Response Theory is used so that the score reflects the percentage of items in the skill domain that the respondent answered correctly, as well as the probability of the respondent (or persons with similar characteristics) successfully completing tasks with a similar level of difficulty. For PIAAC a response probability (RP) value of 0.67 was chosen, meaning that the respondent (or persons with similar characteristics) had a 67 per cent chance of successfully completing tasks with a similar level of difficulty.
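To illustrate what the RP value means in model terms, the following sketch uses a two-parameter logistic item response function (hypothetical parameters, expressed on the logit scale rather than the transformed 0-500 reporting scale) and solves for the proficiency at which the probability of success equals 0.67:

```python
import math

# Minimal two-parameter logistic (2PL) item response function sketch.
def p_correct(theta, a, b):
    """Probability of a correct response at proficiency theta,
    for an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Under RP = 0.67, an item is reported at the proficiency point where
# the model gives a 67% chance of success. Solving p_correct == rp for
# theta gives: theta = b + ln(rp / (1 - rp)) / a.
def rp_location(a, b, rp=0.67):
    return b + math.log(rp / (1 - rp)) / a

print(rp_location(a=1.0, b=0.0))  # about 0.71 logits above item difficulty
```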

For each respondent in PIAAC, ten plausible values (scores) were generated for the domains measured. While simple population estimates for any domain can be produced by choosing at random only one of the ten plausible values, this publication uses an average of the ten values. For example, in order to report an estimate of the total number of people at Level 1 for literacy, the weighted estimate of the number of respondents at Level 1 was calculated for each of the ten plausible values for literacy individually. The ten weighted estimates were then summed. Finally, this result was divided by ten to obtain the estimate of the total number of people at Level 1 for literacy. The process was repeated for each skill level.

This process must be performed for each skill level by each variable category (e.g. males) when producing estimates for other tabulations. For example, in order to report an estimate of the total number of males at Level 1 for literacy, the weighted estimate of the number of males at Level 1 was calculated for each of the ten plausible values for literacy individually. Then the ten weighted estimates were summed. Finally, this result was divided by ten to obtain the estimate of the total number of males at Level 1 for literacy. The process was then repeated for each skill level.

        All estimates presented in this publication are obtained by using all ten plausible values in combination, as described above.

In order to minimise respondent burden, respondents did not complete exercises in all three of the skill domains. Respondents completed exercise tasks in only one or two of these domains, depending on the assessment path they followed. Refer to the appendix titled Pathways through the self-enumerated exercise for further information about the possible assessment paths. To address this, PIAAC used multiple imputation methodology to obtain proficiency scores for each respondent for the skill domains for which the respondent was not required to do an exercise. Problem solving in technology-rich environments scores were not imputed for respondents who were sequenced to the paper-based Core booklet (i.e. they had no computer experience, or they did not agree to do the exercise on the computer, or they did not pass the computer-based Core Stage 1). The effect on the estimation of the significant imputation variability, due to the use of multiple possible assessment tasks and the complex scaling procedures, can be reliably estimated and is included in the calculated standard errors (SEs). See the Data quality (Technical Note) for further information about the reliability of the estimates.

In this report, proficiency levels have a descriptive purpose. They are intended to aid the interpretation and understanding of the reporting scales by describing the attributes of the tasks that adults with particular proficiency scores can typically successfully complete. In particular, they have no normative element and should not be understood as “standards” or “benchmarks” in the sense of defining levels of proficiency appropriate for particular purposes (e.g. access to post-secondary education or full participation in a modern economy) or for particular population groups.

        Levels of difficulty

        Further information to assist with the interpretation of the skill levels is available in the OECD publication, The Survey of Adult Skills: Reader's Companion. The technical manual is available from the OECD website at www.oecd.org.

        Literacy

        Literacy is defined as the ability to understand, evaluate, use and engage with written texts to participate in society, to achieve one’s goals, and to develop one’s knowledge and potential. Literacy encompasses a range of skills from the decoding of written words and sentences to the comprehension, interpretation and evaluation of complex texts. It does not, however, involve the production of text (writing). Information on the skills of adults with low levels of proficiency is provided by an assessment of reading components that covers text vocabulary, sentence comprehension and passage fluency.

        Below level 1 (lower than 176)

        The tasks at this level require the respondent to read brief texts on familiar topics to locate a single piece of specific information. There is seldom any competing information in the text and the requested information is identical in form to information in the question or directive. The respondent may be required to locate information in short continuous texts. However, in this case, the information can be located as if the text were non-continuous in format. Only basic vocabulary knowledge is required, and the reader is not required to understand the structure of sentences or paragraphs or make use of other text features. Tasks below Level 1 do not make use of any features specific to digital texts.

        Level 1 (176 to 225)

Most of the tasks at this level require the respondent to read relatively short digital or print continuous, non-continuous, or mixed texts to locate a single piece of information that is identical to or synonymous with the information given in the question or directive. Some tasks, such as those involving non-continuous texts, may require the respondent to enter personal information onto a document. Little, if any, competing information is present. Some tasks may require simple cycling through more than one piece of information. Knowledge and skill in recognising basic vocabulary, determining the meaning of sentences, and reading paragraphs of text is expected.

        Level 2 (226 to 275)

        At this level, the medium of texts may be digital or printed, and texts may comprise continuous, non-continuous, or mixed types. Tasks at this level require respondents to make matches between the text and information, and may require paraphrasing or low-level inferences. Some competing pieces of information may be present. Some tasks require the respondent to:

        • cycle through or integrate two or more pieces of information based on criteria;
        • compare and contrast or reason about information requested in the question; or
• navigate within digital texts to access and identify information from various parts of a document.
           

        Level 3 (276 to 325)

        Texts at this level are often dense or lengthy, and include continuous, non-continuous, mixed, or multiple pages of text. Understanding text and rhetorical structures become more central to successfully completing tasks, especially navigating complex digital texts. Tasks require the respondent to identify, interpret, or evaluate one or more pieces of information, and often require varying levels of inference. Many tasks require the respondent to construct meaning across larger chunks of text or perform multi-step operations in order to identify and formulate responses. Often tasks also demand that the respondent disregard irrelevant or inappropriate content to answer accurately. Competing information is often present, but it is not more prominent than the correct information.

        Level 4 (326 to 375)

        Tasks at this level often require respondents to perform multiple-step operations to integrate, interpret, or synthesise information from complex or lengthy continuous, non-continuous, mixed, or multiple type texts. Complex inferences and application of background knowledge may be needed to perform the task successfully. Many tasks require identifying and understanding one or more specific, non-central idea(s) in the text in order to interpret or evaluate subtle evidence-claim or persuasive discourse relationships. Conditional information is frequently present in tasks at this level and must be taken into consideration by the respondent. Competing information is present and sometimes seemingly as prominent as correct information.

        Level 5 (376 and higher)

        At this level, tasks may require the respondent to search for and integrate information across multiple, dense texts; construct syntheses of similar and contrasting ideas or points of view; or evaluate evidence based arguments. Application and evaluation of logical and conceptual models of ideas may be required to accomplish tasks. Evaluating reliability of evidentiary sources and selecting key information is frequently a requirement. Tasks often require respondents to be aware of subtle, rhetorical cues and to make high-level inferences or use specialised background knowledge.

        Numeracy

        Numeracy is defined as the ability to access, use, interpret and communicate mathematical information and ideas in order to engage in and manage the mathematical demands of a range of situations in adult life. To this end, numeracy involves managing a situation or solving a problem in a real context, by responding to mathematical content/information/ideas represented in multiple ways.

        Below level 1 (lower than 176)

        Tasks at this level require the respondents to carry out simple processes such as counting, sorting, performing basic arithmetic operations with whole numbers or money, or recognising common spatial representations in concrete, familiar contexts where the mathematical content is explicit with little or no text or distractors.

        Level 1 (176 to 225)

        Tasks at this level require the respondent to carry out basic mathematical processes in common, concrete contexts where the mathematical content is explicit with little text and minimal distractors. Tasks usually require one-step or simple processes involving counting, sorting, performing basic arithmetic operations, understanding simple per cents such as 50%, and locating and identifying elements of simple or common graphical or spatial representations.

        Level 2 (226 to 275)

        Tasks at this level require the respondent to identify and act on mathematical information and ideas embedded in a range of common contexts where the mathematical content is fairly explicit or visual with relatively few distractors. Tasks tend to require the application of two or more steps or processes involving calculation with whole numbers and common decimals, per cents and fractions; simple measurement and spatial representation; estimation; and interpretation of relatively simple data and statistics in texts, tables and graphs.

        Level 3 (276 to 325)

        Tasks at this level require the respondent to understand mathematical information that may be less explicit, embedded in contexts that are not always familiar and represented in more complex ways. Tasks require several steps and may involve the choice of problem-solving strategies and relevant processes. Tasks tend to require the application of number sense and spatial sense; recognising and working with mathematical relationships, patterns, and proportions expressed in verbal or numerical form; and interpretation and basic analysis of data and statistics in texts, tables and graphs.

        Level 4 (326 to 375)

        Tasks at this level require the respondent to understand a broad range of mathematical information that may be complex, abstract or embedded in unfamiliar contexts. These tasks involve undertaking multiple steps and choosing relevant problem-solving strategies and processes. Tasks tend to require analysis and more complex reasoning about quantities and data; statistics and chance; spatial relationships; and change, proportions and formulas. Tasks at this level may also require understanding arguments or communicating well-reasoned explanations for answers or choices.

        Level 5 (376 and higher)

        Tasks at this level require the respondent to understand complex representations and abstract and formal mathematical and statistical ideas, possibly embedded in complex texts. Respondents may have to integrate multiple types of mathematical information where considerable translation or interpretation is required; draw inferences; develop or work with mathematical arguments or models; and justify, evaluate and critically reflect upon solutions or choices.

        Problem solving in technology-rich environments (PSTRE)

Problem solving in technology-rich environments is defined as using digital technology, communication tools and networks to acquire and evaluate information, communicate with others and perform practical tasks. PIAAC focuses on the ability to solve problems for personal, work and civic purposes by setting up appropriate goals and plans, and accessing and making use of information through computers and computer networks.

        No computer experience

        Adults in this category reported having no prior computer experience; therefore, they did not take part in computer-based assessment but took the paper-based version of the assessment, which does not include the problem solving in technology-rich environments domain.

        Failed ICT core

        Adults in this category had prior computer experience but failed the ICT core test, which assesses basic ICT skills, such as the capacity to use a mouse or scroll through a web page, needed to take the computer-based assessment. Therefore, they did not take part in computer-based assessment, but took the paper-based version of the assessment, which does not include the problem solving in technology-rich environments domain.

        "Opted out" of taking computer-based assessment

        Adults in this category opted to take the paper-based assessment without first taking the ICT core assessment, even if they reported some prior experience with computers. They also did not take part in the computer-based assessment, but took the paper-based version of the assessment, which does not include the problem solving in technology-rich environments domain.

        Below Level 1 (lower than 241)

        Tasks are based on well-defined problems involving the use of only one function within a generic interface to meet one explicit criterion without any categorical or inferential reasoning, or transforming of information. Few steps are required and no sub-goal has to be generated.

        Level 1 (241 to 291)

        At this level, tasks typically require the use of widely available and familiar technology applications, such as e-mail software or a web browser. There is little or no navigation required to access the information or commands required to solve the problem. The problem may be solved regardless of the respondent’s awareness and use of specific tools and functions (e.g. a sort function). The tasks involve few steps and a minimal number of operators. At the cognitive level, the respondent can readily infer the goal from the task statement; problem resolution requires the respondent to apply explicit criteria; and there are few monitoring demands (e.g. the respondent does not have to check whether he or she has used the appropriate procedure or made progress towards the solution). Identifying content and operators can be done through simple match. Only simple forms of reasoning, such as assigning items to categories, are required; there is no need to contrast or integrate information.

        Level 2 (291 to 340)

        At this level, tasks typically require the use of both generic and more specific technology applications. For instance, the respondent may have to make use of a novel online form. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g. a sort function) can facilitate the resolution of the problem. The task may involve multiple steps and operators. The goal of the problem may have to be defined by the respondent, though the criteria to be met are explicit. There are higher monitoring demands. Some unexpected outcomes or impasses may appear. The task may require evaluating the relevance of a set of items to discard distractors. Some integration and inferential reasoning may be needed.

        Level 3 (340 and higher)

        At this level, tasks typically require the use of both generic and more specific technology applications. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g. a sort function) is required to make progress towards the solution. The task may involve multiple steps and operators. The goal of the problem may have to be defined by the respondent, and the criteria to be met may or may not be explicit. There are typically high monitoring demands. Unexpected outcomes and impasses are likely to occur.
        The task may require evaluating the relevance and reliability of information in order to discard distractors. Integration and inferential reasoning may be needed to a large extent.

        Comparability of time series

        Data released in the previous ALLS and SAL publications are not comparable with PIAAC data for the following reasons:

        • The literacy and numeracy scores previously published for ALLS and SAL have been remodelled to make them consistent with PIAAC. These scores were originally based on a model with a response probability (RP) value of 0.8 but are now based on a model with an RP value of 0.67. The latter value was used in PIAAC to achieve consistency with the OECD Programme for International Student Assessment (PISA) in describing what it means to perform at a particular level of proficiency. The new RP value does not affect the score that was calculated for a respondent; however, it does affect the interpretation of the score. Users of this data should therefore refer to the new skill level descriptions provided (above) in this PIAAC publication when performing time-series comparisons.
        • The prose and document literacy scales from ALLS and SAL have been combined to produce a single literacy scale which is comparable to the PIAAC literacy scale.
        • The numeracy scores from ALLS have been recalculated using a model that incorporates the results of all countries that participated in ALLS. (The previous model was based only on countries that participated in the first round of ALLS). This has resulted in some minor changes to the ALLS numeracy scores. SAL did not collect a numeracy domain which is comparable with ALLS and PIAAC.
           

        These remodelled literacy scores (from ALLS and SAL) and numeracy scores (from ALLS) are included in additional data cubes.

        Technical note - data quality

        Reliability of the estimates

        1 Two types of error are possible in an estimate based on a sample survey: sampling error and non-sampling error. Since the estimates in this publication are based on information obtained from a sample, they are subject to sampling variability. That is, due to randomness in the composition of the sample, the estimates may differ from those population values that would have been produced if all dwellings had been included in the survey. One measure of the likely difference is given by the standard error (SE). There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all dwellings had been included, and about 19 chances in 20 (95%) that the difference will be less than two SEs.

        2 In contrast to most other Australian Bureau of Statistics (ABS) surveys, PIAAC estimates also include significant imputation variability, due to the use of multiple possible assessment tasks and the complex scaling procedures. This effect can be reliably quantified and is included in the calculated SEs. An accepted procedure for estimating the imputation variance using plausible values is to measure the variance of the plausible values (with an appropriate scaling factor) as follows:

        \(\large{\mathrm{var}_{imp}\left(\hat{\theta}_{mean}\right)=\left(1+\frac{1}{M}\right) \frac{\sum_{i=1}^{M}\left(\hat{\theta}_{i}-\hat{\theta}_{mean}\right)^{2}}{M-1}}\)

        where:

        \(\large{\hat{\theta}_{mean}=\text{the mean estimate of the plausible values}}\)

        \(\large{i=1, \ldots, 10 \text{ respectively, for the plausible values } \hat{\theta}_{1} \text{ to } \hat{\theta}_{10}}\)

        \(\large{M=\text{the total number of plausible values used } (M=10 \text{ for PIAAC})}\)

        3 The sampling variance and imputation variance can be added to provide a suitable measure of the total variance, and hence the total SE. This SE indicates the extent to which an estimate might have varied by chance because only a sample of persons was included, and/or because of the significant imputation used in the literacy scaling procedures.
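        For readers who work with the plausible values directly, the calculation in paragraphs 2 and 3 can be sketched in a few lines of Python. This is a minimal sketch only: the function name and input figures are illustrative, and the sampling variance would in practice come from the Jackknife procedure described in paragraph 9.

```python
import numpy as np

def imputation_variance(pv_estimates):
    """Imputation variance of the mean estimate across M plausible values,
    applying the (1 + 1/M) scaling factor from the formula above."""
    pv_estimates = np.asarray(pv_estimates, dtype=float)
    M = pv_estimates.size          # M = 10 for PIAAC
    theta_mean = pv_estimates.mean()
    return (1 + 1 / M) * np.sum((pv_estimates - theta_mean) ** 2) / (M - 1)

# Ten hypothetical weighted estimates of the same statistic, one per
# plausible value (illustrative numbers only):
estimates = [6.41e6, 6.38e6, 6.35e6, 6.40e6, 6.37e6,
             6.39e6, 6.36e6, 6.42e6, 6.38e6, 6.34e6]

var_imp = imputation_variance(estimates)

# Paragraph 3: total variance = sampling variance + imputation variance.
sampling_var = 1.0e10              # placeholder sampling variance
total_se = (sampling_var + var_imp) ** 0.5
```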

        4 There are a number of more convenient ways of expressing the sampling variability than the SE. One is called the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate:

        \(\large{RSE\%=\left(\frac{SE}{\text{Estimate}}\right) \times 100}\)

        5 Another way of expressing the sampling variability is called the margin of error (MOE) which may be more useful for proportion estimates, in particular where the estimated proportion is large or small. MOEs are provided for all proportion estimates at the 95% confidence level. At this confidence level the MOE indicates that there are about 19 chances in 20 that the estimate will differ by less than the specified MOE from the population value. The 95% margin of error is obtained by multiplying the SE by 1.96:

        \(\large MOE=SE \times 1.96\)

        6 The estimate combined with the MOE defines a range, known as a confidence interval, which is expected to include the true population value with the given level of confidence. The confidence interval is constructed by taking the estimate plus or minus its MOE. This range should be considered when using the estimates to make assertions about the population or to inform decisions.

        7 Whilst the MOEs in this publication are calculated at the 95% confidence level, they can easily be converted to a 90% confidence level by multiplying the MOE by 1.645/1.96, or to a 99% confidence level by multiplying by a factor of 2.576/1.96.
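        As a quick arithmetic illustration of these conversion factors, assuming a hypothetical published 95% MOE of 1.8 percentage points:

```python
moe_95 = 1.8                       # hypothetical published 95% MOE (percentage points)

moe_90 = moe_95 * 1.645 / 1.96     # rescaled to a 90% confidence level
moe_99 = moe_95 * 2.576 / 1.96     # rescaled to a 99% confidence level
```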

        8 The 95% MOE can also be calculated from the RSE by:

        \(\large{MOE=\left(\frac{RSE\% \times \text{Estimate}}{100}\right) \times 1.96}\)

        9 Sampling error for estimates from PIAAC 2011-2012 has been calculated using the Jackknife method of variance estimation. This involves the calculation of 60 'replicate' estimates based on 60 different subsamples of the obtained sample. The variability of estimates obtained from these subsamples is used to estimate the sampling variability surrounding the estimate.
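        A minimal sketch of how replicate estimates are turned into a variance follows. It assumes the standard delete-one-group Jackknife scaling factor of (G - 1)/G; the publication does not state the exact factor used, so this should be confirmed against the survey's replicate-weight documentation before being relied upon.

```python
import numpy as np

def jackknife_variance(replicate_estimates, full_sample_estimate):
    """Delete-one-group Jackknife variance from G replicate estimates
    (G = 60 for PIAAC), using the assumed (G - 1) / G scaling factor."""
    reps = np.asarray(replicate_estimates, dtype=float)
    G = reps.size
    return (G - 1) / G * np.sum((reps - full_sample_estimate) ** 2)
```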

        10 A data cube (spreadsheet) containing tables produced for this publication and the calculated RSEs for each of the estimates, and MOEs for each proportion estimate, is available from the Data downloads section of the publication.

        11 Estimates with RSEs less than 25% are considered sufficiently reliable for most purposes. Estimates with RSEs between 25% and 50% have been included and are annotated to indicate they are subject to high sample variability relative to the size of the estimate and should be used with caution. In addition, estimates with RSEs greater than 50% have also been included and annotated to indicate they are usually considered unreliable for most purposes. All cells in the data cube with RSEs greater than 25% contain a comment indicating the size of the RSE. These cells can be identified by a red indicator in the corner of the cell. The comment appears when the mouse pointer hovers over the cell.

        Calculation of standard error

        12 SEs can be calculated using the estimates (counts or proportions) and the corresponding RSEs. For example, the estimated number of persons in Australia aged 15 to 74 years that have scores at Level 3 on the literacy scale is 6,379,600 and the RSE for this estimate is 1.8%. The SE is calculated by:

        \(\large{\begin{aligned} S E \ of \ estimate &=\left(\frac{R S E}{100}\right) \times estimate \\ &=\left(\frac{1.8}{100}\right) \times 6,379,600 \\ &=0.018 \times 6,379,600 \\ &=114,800 \ (\text {rounded to nearest } 100) \\ \ \end{aligned}}\)

        13 Therefore, there are about two chances in three that the actual number of persons that have scores at Level 3 on the literacy scale is in the range of 6,264,800 to 6,494,400 and about 19 chances in 20 that the value was in the range 6,150,000 to 6,609,200. This example is illustrated in the diagram below.

        Calculation of standard error example
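        The worked example in paragraphs 12 and 13 can be expressed directly in Python; the figures are those quoted above.

```python
estimate = 6_379_600          # persons at Level 3 on the literacy scale
rse = 1.8                     # per cent

se = rse / 100 * estimate     # about 114,800 (rounded to the nearest 100)

# About two chances in three the population value lies within one SE:
one_se_range = (estimate - se, estimate + se)
# About 19 chances in 20 it lies within two SEs:
two_se_range = (estimate - 2 * se, estimate + 2 * se)
```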

        Proportions and percentages

        14 Proportions and percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. The formula is only valid when the numerator is a subset of the denominator:

        \(\large{RSE\left(\frac{x}{y}\right) \approx \sqrt{[RSE(x)]^{2}-[RSE(y)]^{2}}}\)

        15 The proportion of Australians aged 15 to 74 years who have scores at Level 3 on the literacy scale is 39%, with an associated RSE of 2.4% and an associated MOE of +/- 1.8 percentage points. Hence there are about two chances in three that the true proportion of Australians aged 15 to 74 years who have scores at Level 3 on the literacy scale is between 38.1% and 39.9%, and about 19 chances in 20 that the true proportion is within 1.8 percentage points of 39%, that is, between 37.2% and 40.8%.

        16 The RSEs of proportions within the data cubes have been provided. Calculations of RSEs for other proportions using the above formula should be seen as only indicative of the true RSE.
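        The approximation in paragraph 14 as a small helper; the input figures below are hypothetical, and as paragraph 16 notes, any result computed this way should be treated as indicative only.

```python
from math import sqrt

def rse_of_proportion(rse_x, rse_y):
    """Approximate RSE of the proportion x/y, valid only when x is a
    subset of y (paragraph 14). Inputs and output are percentages."""
    return sqrt(rse_x ** 2 - rse_y ** 2)

# e.g. hypothetical inputs RSE(x) = 3.0%, RSE(y) = 1.8%:
indicative_rse = rse_of_proportion(3.0, 1.8)   # about 2.4%
```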

        Differences

        17 Published estimates may also be used to calculate the difference between two survey estimates (numbers or proportions). Such an estimate is also subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

        \(\large{SE(x-y) \approx \sqrt{[SE(x)]^{2}+[SE(y)]^{2}}}\)

        18 An approximate MOE of the difference between two estimates (x-y) may be calculated by the following formula:

        \(\large{MOE(x-y) \approx \sqrt{[MOE(x)]^{2}+[MOE(y)]^{2}}}\)

        19 These formulae will only be exact for differences between separate and uncorrelated characteristics or subpopulations, and provide only an indication for the differences likely to be of interest in this publication.
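        Both approximations in paragraphs 17 and 18, written as small helpers (a sketch, subject to the same uncorrelatedness caveat as the formulas themselves):

```python
from math import sqrt

def se_of_difference(se_x, se_y):
    """Approximate SE of (x - y) for uncorrelated estimates (paragraph 17)."""
    return sqrt(se_x ** 2 + se_y ** 2)

def moe_of_difference(moe_x, moe_y):
    """Approximate MOE of (x - y) for uncorrelated estimates (paragraph 18)."""
    return sqrt(moe_x ** 2 + moe_y ** 2)
```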

        Significance testing

        20 A statistical significance test for any comparisons between estimates can be performed to determine whether it is likely that there is a difference between two corresponding population characteristics. The approximate standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula in paragraph 17. The standard error is then used to create the following test statistic:

        \(\Large{\left(\frac{|x-y|}{S E(x-y)}\right)}\)

        21 If the value of this test statistic is greater than 1.96 then there is evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations with respect to that characteristic. Any calculations using the above formula should be seen as only indicative of a statistically significant difference.
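        The test in paragraphs 20 and 21 can be sketched as follows; the function name is illustrative, and as noted above the result is indicative only.

```python
def significantly_different(x, y, se_x, se_y, critical_value=1.96):
    """Indicative two-sided test at the 95% confidence level, using the
    approximate SE of a difference from paragraph 17."""
    se_diff = (se_x ** 2 + se_y ** 2) ** 0.5
    return abs(x - y) / se_diff > critical_value
```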

        Glossary


        Note that some common concepts that are used differently in PIAAC than in other ABS publications are indicated by the phrase "For PIAAC".

        Apprenticeship or other training scheme

        For PIAAC, an apprenticeship or other training scheme is where a person has signed a legal contract with an employer to undertake on-the-job training to become a tradesperson. An apprenticeship includes additional studies.

        Computer

        For PIAAC, a computer refers to a mainframe, desktop or laptop computer, or any other device that can be used to do such things as sending or receiving email messages, processing data or text, or finding things on the internet. This includes mobile phones and other hand-held electronic devices that are used to connect to the internet, check emails etc.

        Computer-based exercise

        Self-enumerated exercises for the respondent to complete on the computer which measured literacy, numeracy or problem solving in technology-rich environments skills. Respondents were directed to complete a computer-based exercise if they had indicated that they had prior computer experience.

        Core stage

        The core stage of the self-enumerated exercise was designed to assess the respondent's capacity to undertake the main exercise. The computer-based path contained Core Stage 1 which determined if the respondent had the necessary basic computer skills (such as clicking, typing, scrolling, dragging, highlighting and using pull-down menus) to proceed with the computer-based path, and Core Stage 2 which determined if the respondent had the basic literacy and numeracy skills to proceed to the main exercise. The paper-based path had a core booklet to determine if the respondent had the basic literacy and numeracy skills to continue to the main exercise.

        Correspondence or distance courses

        Correspondence or distance courses require communicating with a teacher or trainer by exchanging printed or electronic media, or through technology that allows communication in real time. Such a course may include periodic live-in or residential sessions. It may also be referred to as studying 'externally'.

        Document literacy

        This measure of literacy was collected in ALLS 2006 and SAL 1996. It is defined as the knowledge and skills required to locate and use information contained in various formats including job applications, payroll forms, transportation schedules, maps, tables and charts. The document literacy scores from ALLS and SAL have been combined with the prose literacy scores (from ALLS and SAL) and remodelled to produce a combined literacy scale comparable to the PIAAC literacy scale.

        Employed

        For PIAAC, people who, during the reference week:

        • did paid work for one hour or more for an employer or in their own business; or
        • were away from a job or business that they planned to return to; or
        • did unpaid work for at least one hour for a business that they or a relative owns.
           

        Equivalised gross household income

          Equivalised gross household income is household income adjusted by the application of an equivalence scale to facilitate comparison of income levels between households of differing size and composition, reflecting that a larger household would normally need more income than a smaller household to achieve the same standard of living. Equivalised gross household income is derived by calculating an equivalence factor according to the 'modified OECD' equivalence scale, and then dividing income by the factor. The equivalence factor is built up by allocating points to each person in a household (1 point to the first adult, 0.5 points to each additional person who is 15 years and over, and 0.3 to each child under the age of 15) and then summing the equivalence points of all household members. Equivalised gross household income can be viewed as an indicator of the economic resources available to a standardised household. For a lone person household it is equal to household income. For a household comprising more than one person, it is an indicator of the household income that would be needed by a lone person household to enjoy the same level of economic wellbeing.
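          The derivation described above can be sketched as follows. The function names are illustrative only, not part of any published ABS code, and the worked figures are hypothetical.

```python
def equivalence_factor(persons_15_and_over, children_under_15):
    """'Modified OECD' scale: 1 point for the first adult, 0.5 for each
    additional person aged 15 years and over, 0.3 for each child under 15."""
    return 1.0 + 0.5 * (persons_15_and_over - 1) + 0.3 * children_under_15

def equivalised_income(gross_household_income, persons_15_and_over,
                       children_under_15):
    """Gross household income divided by the household's equivalence factor."""
    return gross_household_income / equivalence_factor(persons_15_and_over,
                                                       children_under_15)

# e.g. a couple with two children under 15: factor = 1 + 0.5 + 2 * 0.3 = 2.1,
# so a gross household income of $105,000 equivalises to $50,000.
```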

          Fixed term contract

          A fixed term contract refers to a contract that has a specific end date or is for a specific event.

          Formal education / qualification

          Refers to studies that, when completed, result in formal qualifications at primary, secondary, university or post-secondary level.

          Indefinite contract

          An indefinite contract is a contract or agreement (e.g. a Collective or Enterprise Agreement) with no set end or finish date.

          Information and Communications Technology (ICT) skills

          Respondents who indicated they had prior computer experience provided information about the frequency of their computer and internet usage and, if applicable, the level of computer skills required for their current/last job.

          Job-related

          For PIAAC, activities that are job-related do not necessarily refer to one specific job, but to employment in general.

          Labour force status

          A classification into the categories of employed, unemployed and out of the labour force (international terminology) or not in labour force (Australian terminology). The tables of this publication contain labour force data which applies concepts more closely aligned with the ABS Labour Force Survey. However, there is a subtle difference in the concept of 'Unemployed', which in turn impacts on the estimates for 'Out of labour force'. See the Unemployed definitions provided below for further detail.

          Last job or business

          This is relevant for people who do not have current employment, but have recent work experience in the 12 months prior to the interview, or who left paid work within the five years prior to the interview.

          Literacy

          Literacy, as defined by the OECD for the PIAAC survey, is understanding, evaluating, using and engaging with written texts to participate in society, to achieve one's goals, and to develop one's knowledge and potential. Refer to the appendix titled Scores and skill levels for further information about literacy skill levels.

          Main English speaking countries

          The main English speaking countries (excluding Australia) are: Canada, Republic of Ireland, New Zealand, South Africa, United Kingdom and United States of America.

          Main exercise

          The main exercise was a set of self-enumerated tasks which were designed to measure the respondent's skills in the domains of literacy, numeracy or problem solving in technology-rich environments. The main exercise could be conducted by either a computer-based exercise or paper-based exercise. The problem solving in technology-rich environments domain was only assessed in the computer-based exercises. Respondents proceeded to the main exercise if they passed the core stage.

          Main job or business

          This refers to the job or business where the person was employed for the most hours during the reference week. If the person had two jobs or businesses where they worked the same amount of time, this refers to the job or business where they earned the most.

          Marginally attached to the labour force

          People who were not in the labour force in the reference week, wanted to work and:

          • were actively looking for work but did not meet the availability criteria to be classified as unemployed; or
          • were not actively looking for work but were available to start work within four weeks.
             

          The criteria for determining those in the labour force are based on activity (i.e. working or looking for work) and availability to start work during the reference week. The criteria associated with marginal attachment to the labour force, in particular the concepts of wanting to work and reasons for not actively looking for work, are more subjective. Hence, the measurement against these criteria is affected by the respondent's own interpretation of the concepts used. An individual respondent's interpretation may be affected by their work aspirations, as well as family, economic and other commitments.

          Missing

          People who did not receive a proficiency score because they were not able to answer more than five questions in the background questionnaire, as they were unable to speak or read the language of the assessment, had difficulty reading or writing, or had a learning or mental disability.

          Non-school qualification

          Non-school qualifications are awarded for educational attainments other than those of pre-primary, primary or secondary education. They include qualifications at the Postgraduate Degree level, Master Degree level, Graduate Diploma and Graduate Certificate level, Bachelor Degree level, Advanced Diploma and Diploma level, and Certificates I, II, III and IV levels. Non-school qualifications may be attained concurrently with school qualifications.

          Not in the labour force / out of labour force

          People who were not in the categories 'employed' or 'unemployed'.

          Numeracy

          Numeracy, as defined by the OECD for the PIAAC survey, is the ability to access, use, interpret, and communicate mathematical information and ideas, in order to engage in and manage the mathematical demands of a range of situations in adult life. This definition should be paired with the definition of numerate behaviour, which is managing a situation or solving a problem in a real context, by responding to mathematical content/information/ideas represented in multiple ways. Refer to the appendix titled Scores and skill levels for further information about numeracy skill levels.

          Observation module

          The observation module was a series of questions that the interviewer answered when the interview was complete and the interviewer had left the respondent's home. The questions collected information about the interview setting such as any events that might have interrupted or distracted the respondent during the exercise.

          Organisation for Economic Co-operation and Development (OECD)

          The OECD is an international organisation that works with governments to understand what drives economic, social and environmental change in order to promote policies that will improve economic and social well-being. For PIAAC, the OECD coordinated an international consortium of organisations to manage the survey across the 24 countries that participated in the survey.

          Out of the labour force / not in the labour force

          People who were not in the categories 'employed' or 'unemployed'.

          Paid work

          Paid work is any work for pay or profit, even for as little as one hour per week. Pay includes cash payments or 'payment in kind' (payment in goods or services rather than money), whether payment was received in the week the work was done or not. Also counted as working for pay is anyone who receives wages for on-the-job training that involves the production of goods or services.

          Paper-based exercise

          Self-enumerated exercises for the respondent to complete in paper booklets which measured literacy or numeracy as well as basic reading skills. Respondents were directed to complete a paper-based exercise if they did not have prior computer experience (as determined by their answers in the background questionnaire), if they did not pass the computer-based Core Stages, or they refused to take the computer-based exercise.

          Plausible value

          For each respondent in PIAAC, ten plausible values (scores) were generated for the domains measured. While simple population estimates for any domain can be produced by choosing at random only one of the ten plausible values, this publication uses an average of the ten values. For example in order to report an estimate of the total number of people at Level 1 for literacy, the weighted estimate of the number of respondents at Level 1 for each of the ten plausible values for literacy individually, was calculated. The ten weighted estimates were then summed. Finally, this result was divided by ten to obtain the estimate of the total number of people at Level 1 for literacy. The process was repeated for each skill level. Refer to the appendix titled Scores and skill levels for further information about the calculation of estimates using all ten plausible values in combination.
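          The estimation procedure described in this entry, sketched in Python under an assumed data layout (one column of skill levels per plausible value); the function name and layout are illustrative only.

```python
import numpy as np

def estimate_persons_at_level(weights, pv_levels, level):
    """Average, over the ten plausible values, of the weighted count of
    respondents at a given skill level. `pv_levels` is an (n, 10) array,
    one column per plausible value (a hypothetical layout)."""
    weights = np.asarray(weights, dtype=float)
    pv_levels = np.asarray(pv_levels)
    per_pv_counts = [(weights * (pv_levels[:, j] == level)).sum()
                     for j in range(pv_levels.shape[1])]
    return sum(per_pv_counts) / len(per_pv_counts)
```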

          Problem Solving in Technology-Rich Environments (PSTRE)

          PSTRE, as defined by the OECD for the PIAAC survey, is using digital technology, communication tools and networks to acquire and evaluate information, communicate with others and perform practical tasks. PIAAC focuses on abilities to solve problems for personal, work and civic purposes by setting up appropriate goals and plans, accessing and making use of information through computers and computer networks. Refer to the appendix titled Scores and skill levels for further information about PSTRE skill levels.

          Prose literacy

          This measure of prose literacy was collected in ALLS 2006 and SAL 1996. It is defined as the knowledge and skills needed to understand and use various kinds of information from text including editorials, news stories, brochures and instruction manuals. The prose literacy scores from ALLS and SAL have been combined with the document literacy scores (from ALLS and SAL), and have been remodelled to produce a combined literacy scale comparable to the PIAAC literacy scale.

          Reading Components

          The Reading Components booklet measured basic reading skills and contained three parts: word meaning, sentence processing and basic passage comprehension.

          Reference week

          The week preceding the week in which the interview was conducted.

          Response probability

          This is the probability of a respondent providing a correct answer to an item in the self-enumerated assessment. It is a function of two things: an item's characteristics (e.g. level of difficulty) and a respondent's characteristics (e.g. age, education). For PIAAC, a model with a response probability value of 0.67 was used. Therefore, the proficiency score in a skill domain reflects the level of difficulty of items that a respondent, and other people with a similar profile, would answer correctly 67% of the time.

          Self-employed

          Self-employed includes people who have their own business or are partners in a business as well as freelancers. A self-employed person may or may not have employees.

          Self-enumerated exercise

          The self-enumerated exercise was designed to measure skills in the areas of literacy, numeracy or problem solving in technology-rich environments. The respondents completed the exercise either on a computer or in paper booklets. The exercise consisted of a core stage and a main exercise, and some respondents also completed a Reading Components booklet. There were no time limits and no assistance was allowed. See the Explanatory Notes for further information about the self-enumerated exercise.

          Skill domain

          The three skill domains measured in PIAAC are literacy, numeracy and problem solving in technology-rich environments.

          Skill level

          To facilitate analysis, the proficiency scores for literacy and numeracy have been grouped into five skill levels, and the problem solving in technology-rich environments scores have been grouped into three levels, with Level 1 being the lowest measured level in each domain. The levels indicate specific sets of abilities, and therefore the thresholds for the levels are not equidistant. As a result, the ranges of scores in each level are not identical. Refer to the appendix titled Scores and skill levels for a detailed description of the skill levels for each skill domain.

          Unemployed - Australian data item

          People aged 15 to 74 years who were not employed, were available for work in the reference week, and at any time in the four weeks up to the end of the reference week:

          • had actively looked for full-time or part-time work; or
          • were waiting to start a new job.
             

          Unemployed - international data item

            The definition of unemployed, using the labour force concept defined for the international PIAAC survey, is people aged 15 to 74 years who were not employed, and:

            • had actively looked for full-time or part-time work at any time in the four weeks up to the end of the reference week and were available for work within two weeks, or
            • were going to start a job within three months and could have started within two weeks had the job been available then.
               

            Unpaid work

              Unpaid work is any task that contributes directly to the operation of a business for which the person did not receive, and did not expect to receive, any pay, payment in kind or profit.

              Valid skip

              Responses in this category represent respondents who did not get asked the question for the data item because the question was not relevant for that respondent.

              Vocational education and training (VET)

              VET is a program of study that is intended to develop proficiency in skills relevant to the workplace or entry to further education. These courses are typically associated with preparatory, operative, trades/skilled and para-professional education and training. VET may also be referred to as 'school based traineeship' and includes subjects that lead to a certificate or statement of attainment. Students in some schools can receive a VET qualification while still attending school.

              Quality declaration - summary

              Institutional environment

              For information on the institutional environment of the Australian Bureau of Statistics (ABS), including the legislative obligations of the ABS, financing and governance arrangements, and mechanisms for scrutiny of ABS operations, please see ABS Institutional Environment.

              Relevance

              The 2011–2012 Programme for the International Assessment of Adult Competencies (PIAAC) is an international survey coordinated by the Organisation for Economic Co-operation and Development (OECD) which aims to:

              • understand the current skills and competencies of the adult population;
              • assess the performance of current education and training systems in providing the required skill base for the economy; and
              • develop policies and programs to improve the skills adults need to participate successfully in society in the 21st century.


              PIAAC provides information on skills and competencies for people aged 15 to 74 years in the three domains of:

              • literacy;
              • numeracy; and
              • problem solving in technology-rich environments.


              PIAAC also collected information on topics including education and training, labour force activities, income, and skills used at work and everyday life.

              PIAAC is the third survey of international comparisons of adult literacy skills conducted in Australia. Its predecessors were the Adult Literacy and Life Skills Survey (ALLS) 2006 and Survey of Aspects of Literacy (SAL) 1996. Internationally, SAL was known as the International Adult Literacy Survey (IALS). Data from PIAAC, ALLS and SAL are used to inform policy matters including the Council of Australian Governments (COAG) National Agreement for Skills and Workforce Development.

              The Explanatory Notes section of this publication contains information about the scope of the survey as well as a list of the Australian and international classifications used.

              The OECD published international results on 8 October 2013 in the OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. The report is available from the OECD website at www.oecd.org.

              Timeliness

              PIAAC was conducted throughout Australia from October 2011 to March 2012.

              Additional data cubes containing state and territory data are to be appended to this product in 2014. Users can subscribe to receive Email Notifications to be advised when updates are available for this product. From the attached link, select 4. Social Statistics, sub-topic 42. Education, then select the product 4228.0 Programme for the International Assessment of Adult Competencies.

              Accuracy

              The initial sample for PIAAC consisted of 14,442 private dwellings. Of the 11,532 households that remained in the survey after sample loss, 8,446 (73%) were fully responding or provided sufficient detail for literacy scores to be determined.

              PIAAC was designed to provide reliable estimates at the national level and for each state and territory.

              Refer to the Explanatory Notes for more detailed information about PIAAC's sample design, reliability of estimates and data quality. The Data quality (Technical Note) also provides further information about the reliability of the estimates.

              Coherence

              Data previously released in the ALLS and SAL publications are not directly comparable with PIAAC data due to:

               • changes in the interpretation of the skill levels;
               • the combination of the prose and document literacy scales into a single literacy scale; and
               • the recalculation of the ALLS numeracy scores using a model that incorporates the results of all countries that participated in ALLS.
                 

              Data based on remodelled literacy scores (from ALLS and SAL) and numeracy scores (from ALLS) are included in additional data cubes to allow direct comparison.

              Caution however is advised when comparing results from ALLS and SAL with data from PIAAC. While the re-modelled data should facilitate comparability over time, analysis undertaken by the ABS and internationally has shown that in some cases the observed trend is difficult to reconcile with other known factors and is not fully explained by sampling variability. For more information see the Explanatory Notes section of Programme for the International Assessment of Adult Competencies (PIAAC), 2011-2012 (cat. no. 4228.0).

               PIAAC expands on the previous surveys by assessing skills in the domain of 'problem solving in technology-rich environments'. This domain is a new addition in PIAAC and is not comparable to the problem solving scale derived in ALLS.

              New data items in PIAAC collected information about skill use and practices at work, including:

              • the use of generic skills in the workplace including communication, presentation and team-working skills; and
              • skill practices at work, specifically reading, writing, mathematics and ICT skill activities.
                 

              The Explanatory Notes section of this publication contains more detailed information on the differences between the surveys over time, and also the comparability with other ABS surveys.

              Interpretability

              This publication contains data cubes and summary commentary of the main findings to assist with the interpretation of the results of the survey.

               Further information about the definitions, classifications and other technical aspects associated with these statistics is available from the Methodology page of this publication (including a Glossary and Appendices).

              For more comprehensive information about the background and conceptual information of PIAAC, refer to the OECD website at www.oecd.org.

              Accessibility

               Data cubes (spreadsheets) containing estimates, proportions and associated relative standard errors (RSEs) and margins of error (MOEs) are available from the Data downloads section of this publication. Additional data cubes containing state and territory data are to be appended to this product in 2014. Users can subscribe to receive Email Notifications to be advised when updates are available for this product. From the attached link, select 4. Social Statistics, sub-topic 42. Education, then select the product 4228.0 Programme for the International Assessment of Adult Competencies.

              A basic confidentialised unit record data file (CURF) is available on CD-ROM from Microdata: Programme for the International Assessment of Adult Competencies (PIAAC) (cat. no. 4228.0.30.001).

              Further information about microdata is available from the Microdata Entry Page on the ABS web site.

              The OECD published international results on 8 October 2013 in the OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. The report is available from the OECD website at www.oecd.org.

              Data are available on request. Note that detailed data can be subject to high relative standard errors which in some cases may result in data being confidentialised.

              For further information about these or related statistics, contact the National Information and Referral Service on 1300 135 070.

              Abbreviations


               ABS - Australian Bureau of Statistics
               ABSCQ - Australian Bureau of Statistics Classification of Qualifications
               ACT - Australian Capital Territory
               ALLS - Adult Literacy and Life Skills Survey
               ANZSCO - Australian and New Zealand Standard Classification of Occupations
               ANZSIC - Australian and New Zealand Standard Industrial Classification
               ASCED - Australian Standard Classification of Education
               ASCL - Australian Standard Classification of Languages
               ASGS - Australian Statistical Geography Standard
               CD - Collection Districts
               CD-ROM - Compact Disc Read-only Memory
               COAG - Council of Australian Governments
               CURF - Confidentialised Unit Record File
               DEEWR - Department of Education, Employment and Workplace Relations
               ERP - Estimated Resident Population
               ETS - Educational Testing Service
               ICT - Information and Communication Technology
               ISCED - International Standard Classification of Education
               ISIC - International Standard Industrial Classification of All Economic Activities
               ISCO - International Standard Classification of Occupations
               ISO - International Organization for Standardization
               Lit - Literacy
               MOE - Margin of Error
               n.a. - not available
               n.f.d. - not further defined
               NSW - New South Wales
               Num - Numeracy
               NT - Northern Territory
               OECD - Organisation for Economic Co-operation and Development
               PIAAC - Programme for the International Assessment of Adult Competencies
               PSTRE - Problem Solving in Technology-Rich Environments
               Qld - Queensland
               RADL - Remote Access Data Laboratory
               RP - Response probability
               RSE - Relative Standard Error
               SA - South Australia
               SACC - Standard Australian Classification of Countries
               SAL - Survey of Aspects of Literacy
               SE - Standard Error
               Tas. - Tasmania
               USA - United States of America
               VET - Vocational Education and Training
               Vic. - Victoria
               WA - Western Australia