Qualifications and work methodology

Explanatory notes


1 The statistics presented in this publication were compiled from data collected in the Australian Bureau of Statistics' (ABS) Multipurpose Household Survey (MPHS). The MPHS is conducted each month throughout Australia as a supplement to the monthly Labour Force Survey (LFS) and is designed to provide annual statistics for a number of small, self-contained topics.

2 Unlike most MPHS topics, Qualifications and Work, Australia (Q&W) was enumerated from January to December 2015, and spanned both the 2014-15 and 2015-16 MPHS cycles. Information on demographic characteristics, labour force participation and income was also collected over the same period, and together these data provide an in-depth understanding of the impact of obtaining non-school qualifications on working life.

3 Q&W was previously enumerated on the 2010-11 MPHS under the title Learning and Work, Australia, 2010-11 (cat. no. 4235.0). The title was changed for this iteration to better reflect the information collected. The collection focuses on study of 'Qualifications', whereas 'Learning' is an umbrella term that includes formal (study towards a qualification), non-formal (structured study that does not result in a qualification) and informal (non-structured, self-paced) study.

Scope and coverage


4 The scope of the LFS is restricted to people aged 15 years and over and excludes the following:

  • members of the Australian permanent defence forces;
  • certain diplomatic personnel of overseas governments usually excluded from census and estimated resident populations;
  • overseas residents in Australia;
  • members of non-Australian defence forces (and their dependants).

5 In addition, the MPHS excludes the following from scope:

  • households in Indigenous communities;
  • people living in non-private dwellings (e.g. hotels, university residences, students at boarding schools, patients in hospitals, inmates of prisons and residents of other institutions (e.g. retirement homes, homes for persons with disabilities)).


6 In the LFS, coverage rules are applied which aim to ensure that each person is associated with only one dwelling and hence has only one chance of selection in the survey. See Labour Force, Australia (cat. no. 6202.0) for more details.

Data collection

7 Qualifications and Work was a topic on the MPHS that was conducted as a supplement to the LFS each month from January to December 2015. Each month, one eighth of the dwellings in the LFS sample were rotated out of the survey. All of these dwellings were then selected for the MPHS. After the LFS had been fully completed for each person in scope and coverage, a person aged 15 years or over was selected at random (based on a computer algorithm) and asked the MPHS questions, including Q&W, in a personal interview. If the randomly selected person was aged 15 to 17 years, permission was sought from a parent or guardian before conducting the interview. If permission was not given, the parent or guardian was asked the questions on behalf of the 15 to 17 year old. Data were collected using Computer Assisted Interviewing (CAI), whereby responses were recorded directly onto an electronic questionnaire on a notebook computer, usually during a telephone interview.

8 The publication Labour Force, Australia (cat. no. 6202.0) contains information about survey design, sample redesign, scope, coverage and population benchmarks relevant to the monthly LFS, which also applies to supplementary surveys. It also contains definitions of demographic and labour force characteristics, and information about telephone interviewing relevant to both the monthly LFS and supplementary surveys.

Sample size

9 The initial sample for Q&W was 44,887 private dwellings. Of the 37,613 private dwellings that remained in the survey after sample loss (e.g. vacant or derelict dwellings, dwellings under construction and dwellings selected in the survey that had no residents in scope for the LFS), 27,846 (74%) fully responded to the survey. In 2010-11 the topic was collected on a half sample of 13,366 fully responding households. The sample size was increased for the 2015 survey to allow for more detailed data analysis.

Weighting, benchmark and estimation


10 Weighting is the process of adjusting results from a sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each covered sample unit which for the MPHS can be either a person or a household. The weight is a value which indicates how many population units are represented by the sample unit.

11 The first step in calculating weights for each unit is to assign an initial weight, which is the inverse of the probability of being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 600, then the person would have an initial weight of 600 (i.e. they represent 600 people).

Population benchmarks

12 The initial weights were then calibrated to align with independent estimates of the population of interest, referred to as 'benchmarks', in designated categories of sex by age by area of usual residence. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distribution of the population rather than the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over or under-enumeration of particular categories of persons/households which may occur due to either the random nature of sampling or non-response.
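The calibration step described above can be sketched as a simple ratio adjustment within each benchmark cell. This is a minimal illustration only; the ABS uses a more sophisticated calibration estimator, and the cell label, data structure and figures below are made-up assumptions.

```python
# Minimal sketch of benchmark calibration (post-stratification): scale each
# unit's initial weight so that, within each sex-by-age-by-area benchmark
# cell, the weights sum to the independent population estimate.
# Illustrative only -- not an ABS implementation.

def calibrate_weights(units, benchmarks):
    """Ratio-adjust initial weights to the benchmark total in each cell."""
    # Sum of initial weights within each benchmark cell
    cell_totals = {}
    for u in units:
        cell_totals[u["cell"]] = cell_totals.get(u["cell"], 0.0) + u["weight"]
    # Multiply each weight by (benchmark / weighted sample total) for its cell
    return [
        {**u, "weight": u["weight"] * benchmarks[u["cell"]] / cell_totals[u["cell"]]}
        for u in units
    ]

# Two sampled persons whose initial weights sum to 1,000, calibrated to a
# (hypothetical) benchmark cell of 1,500 people:
sample = [
    {"cell": "NSW_F_25-34", "weight": 600.0},
    {"cell": "NSW_F_25-34", "weight": 400.0},
]
calibrated = calibrate_weights(sample, {"NSW_F_25-34": 1500.0})
# Calibrated weights 900.0 and 600.0 now sum to the benchmark of 1,500
```

This illustrates why calibrated weights reproduce the benchmark distribution regardless of over- or under-enumeration in the sample itself.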

13 For person estimates, Q&W was benchmarked to the in-scope estimated resident population (ERP) in each state and territory at June 2015. The Q&W estimates do not (and are not intended to) match estimates for the total Australian population obtained from other sources.


14 Survey estimates of counts of persons are obtained by summing the weight of persons with the characteristics of interest.


15 To minimise the risk of identifying individuals in aggregate statistics, a technique called perturbation is used to randomly adjust cell values. Perturbation involves a small random adjustment of the statistics and is considered the most satisfactory technique for avoiding the release of identifiable statistics while maximising the range of information that can be released. These adjustments have a negligible impact on the underlying pattern of the statistics. After perturbation, a given published cell value will be consistent across all tables. However, adding up cell values to derive a total will not necessarily give the same result as published totals.

16 Perturbation has been applied to the 2015 Q&W data. Data from the previous cycle (2010-11) have not been perturbed.

Reliability of estimates

17 All sample surveys are subject to error which can be broadly categorised as either:

  • sampling error
  • non-sampling error.

18 Sampling error is the difference between the published estimates, derived from a sample of persons, and the value that would have been produced if the total population (as defined for the scope of the survey) had been included in the survey. For more information refer to the Technical Note.

19 Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording of answers by interviewers and errors in coding and processing data. Every effort is made to reduce non-sampling error by careful design and testing of questionnaires, training and supervision of interviewers, and extensive editing and quality control procedures at all stages of data processing.

Data quality

Interpretation of results


20 In this survey, any in-scope person born overseas is deemed a migrant. The tables about 'Adult migrants' refer to people who were aged 15–64 years at the time of the survey and were at least 15 years of age when they arrived in Australia. Consequently, these tables exclude all migrants who were under 15 years of age on arrival (regardless of their age at the time of the survey).

21 When considering the tables about adult migrants, it is important to note that they focus only on the highest non-school qualification completed at particular points in time (i.e. before arrival in Australia, after arrival and at the time of the survey). As a result, the total number of adult migrants with a non-school qualification at the time of the survey is not the addition of migrants with a non-school qualification before arrival and migrants with a qualification after arrival. For example, an adult migrant may have obtained a Certificate IV before arrival in Australia and then have attained a Bachelor Degree after arrival. Consequently, all tables about migrants must be considered separately.

22 If the year of arrival and the year the qualification was completed were reported as being the same year, then the qualification was considered to have been completed after arrival in Australia.

Data comparability

Comparability of time series

23 In 2010-11, the Learning and Work survey was conducted on half the MPHS sample. In 2015, the Qualifications and Work topic was run on the full MPHS sample, resulting in approximately twice the sample of 2010-11. In general, increasing the sample size reduces sampling error and allows for more detailed data analysis.

24 In 2011-12, people living in very remote areas that are not part of Indigenous communities were interviewed for the first time in the MPHS. These people are therefore included in the 2015 estimates but excluded from the 2010-11 estimates. Approximately 0.4% (73,116) of persons in scope for Q&W in 2015 live in very remote areas that are not part of an Indigenous community. Their inclusion has minimal impact on national, state and territory estimates (including the Northern Territory), as most people living in very remote areas are estimated to live in Indigenous communities and were therefore out of scope in both 2010-11 and 2015.

25 After each Census, population estimates are normally revised back five years to the previous Census year. As announced in the June 2012 issue of Australian Demographic Statistics (cat. no. 3101.0), intercensal error between the 2006 and 2011 Censuses was larger than normal due to improved methodologies used in the 2011 Census Post Enumeration Survey. The intercensal error analysis indicated that previous population estimates for the base Census years were over-counted. An indicative estimate of the size of the over-count is that there should have been 240,000 fewer people at June 2006 and as a result, Estimated Resident Population estimates have been revised for the last 20 years rather than the usual five. Consequently, estimates of particular populations for Q&W 2015 may be lower than those published in 2010-11 as the previous estimates have not been revised to take account of the revised ERP. Therefore, comparisons of estimates between the two surveys should not be made. However, for comparable data items, comparison of rates or proportions between years is appropriate (See Table 2).

Comparability with other ABS surveys

26 Some comparisons can be made with other selected education and training publications. Care should be taken when comparing data from different surveys due to the different scopes, definitions and methodologies used. In the interpretation of the results of this survey, consideration should be given to the representativeness of the sample. This is affected by the response rate (which is generally lower for surveys conducted as personal interview) and also the fact that the survey covers only people living in private dwellings.

27 Additionally, estimates from Q&W may differ from the estimates for the same or similar data items produced from other ABS collections for several reasons. For example, all sample surveys are subject to different sampling errors so users should take account of the relative standard errors (RSEs) on estimates where comparisons are made. Differences also exist in scope and/or coverage, reference periods reflecting seasonal variations, non-seasonal events that may have impacted on one period but not another, or because of underlying trends in the phenomena being measured.

28 Differences can occur as a result of using different collection methodologies. This is often evident in comparisons of similar data items reported from different ABS collections where, after taking account of definition and scope differences and sampling error, residual differences remain. These differences are often the result of the mode of the collections, such as whether data are collected by an interviewer or self-enumerated by the respondent and whether the data are collected from the person themselves or from a proxy respondent. Differences may also result from the context in which questions are asked, i.e. where in the interview the questions are asked and the nature of preceding questions. The impacts on data of different collection methodologies are difficult to quantify. As a result, every effort is made to minimise such differences.

Comparison to monthly LFS statistics

29 Since Q&W was conducted as a supplement to the LFS, data items collected in the LFS are also available in this publication. However, there are some important differences between the two surveys. The Q&W sample is a subset of the LFS sample (refer to the Data Collection section above) with a response rate of 74%. Also, the scope of Q&W differs from the scope of the LFS (refer to the Scope and Coverage section above). Due to these differences between the samples, Q&W data are weighted as a separate process to the weighting of LFS data. Differences may therefore be found in the estimates for those data items collected in the LFS and published as part of Q&W.

Comparison to other education surveys

30 The Survey of Education and Work (SEW) (cat. no. 6227.0) has some similarities with Q&W. Conducted annually, SEW provides a range of indicators about educational participation and attainment, and data on people's transition between education and work. Comparison of SEW and Q&W data should be undertaken with caution due to different collection methodologies, scope and sample size. SEW is based on a household interview with any responsible adult who responds on behalf of all persons aged 15-74 years in the household, whereas Q&W is conducted as a personal interview with one randomly selected person, aged 15 years or over, in the household. As a result, Q&W has a smaller sample of 27,846 completed interviews, compared with close to 40,000 completed interviews for SEW 2015.



31 Education data are coded to the Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0). The ASCED is a national standard classification which can be applied to all sectors of the Australian education system including schools, vocational education and training and higher education. The ASCED comprises two classifications: Level of Education and Field of Education.

32 Level of Education is defined as a function of the quality and quantity of learning involved in an educational activity. There are nine broad levels, 15 narrow levels and 64 detailed levels. For definitions of these levels see the Australian Standard Classification of Education, 2001 (cat. no. 1272.0).

33 Field of Education is defined as the subject matter of an educational activity. Fields of education are related to each other through the similarity of subject matter, through the broad purpose for which the education is undertaken, and through the theoretical content which underpins the subject matter. There are 12 broad fields, 71 narrow fields and 356 detailed fields. For detailed definitions of these fields see the Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0). Field of Education is only output at the broad level for the Qualifications and Work topic.

Level of highest educational attainment

34 Level of highest educational attainment was derived from information on highest year of school completed and level of highest non-school qualification. The derivation process determines which of the 'non-school' or 'school' attainments will be regarded as the highest. Usually the higher ranking attainment is self-evident, but in some cases some secondary education is regarded, for the purposes of obtaining a single measure, as higher than some certificate level attainments.

35 The following decision table is used to determine which of the responses to questions on highest year of school completed (coded to ASCED Broad Level 6) and level of highest non-school qualification (coded to ASCED Broad Level 5) is regarded as the highest. It is emphasised that this table was designed for the purpose of obtaining a single value for level of highest educational attainment and is not intended to convey any other ordinality.

Decision table: level of highest educational attainment

Rows show highest year of school completed; columns show level of highest non-school qualification.

| Highest year of school completed | Inadequately described / L.n.d. | Cert n.f.d. | Cert III & IV n.f.d. | Cert IV | Cert III | Cert I & II n.f.d. | Cert II | Cert I | N.S. |
|---|---|---|---|---|---|---|---|---|---|
| Sec. Education n.f.d. | L.n.d. | L.n.d. | Cert III & IV n.f.d. | Cert IV | Cert III | L.n.d. | L.n.d. | L.n.d. | N.S. |
| Senior Sec. Education n.f.d. | L.n.d. | L.n.d. | Cert III & IV n.f.d. | Cert IV | Cert III | Senior Sec. n.f.d. | Senior Sec. n.f.d. | Senior Sec. n.f.d. | N.S. |
| Year 12 | L.n.d. | L.n.d. | Cert III & IV n.f.d. | Cert IV | Cert III | Year 12 | Year 12 | Year 12 | N.S. |
| Year 11 | L.n.d. | L.n.d. | Cert III & IV n.f.d. | Cert IV | Cert III | Year 11 | Year 11 | Year 11 | N.S. |
| Junior Sec. Education n.f.d. | L.n.d. | L.n.d. | Cert III & IV n.f.d. | Cert IV | Cert III | L.n.d. | L.n.d. | L.n.d. | N.S. |
| Year 10 | L.n.d. | L.n.d. | Cert III & IV n.f.d. | Cert IV | Cert III | Year 10 | Year 10 | Year 10 | N.S. |
| Year 9 and below | L.n.d. | Cert n.f.d. | Cert III & IV n.f.d. | Cert IV | Cert III | Cert I & II n.f.d. | Cert II | Cert I | N.S. |
| Never attended school | L.n.d. | Cert n.f.d. | Cert III & IV n.f.d. | Cert IV | Cert III | Cert I & II n.f.d. | Cert II | Cert I | N.S. |
| N.S. | N.S. | N.S. | Cert III & IV n.f.d. | Cert IV | Cert III | N.S. | N.S. | N.S. | N.S. |

Cert = Certificate
L.n.d. = Level not determined
n.f.d. = not further defined
N.S. = Not Stated
Sec. = Secondary

36 The decision table is also used to rank the information provided in a survey about the qualifications and attainments of a single individual. It does not represent any basis for comparison between differing qualifications. For example, a person whose highest year of school completed was Year 12, and whose level of highest non-school qualification was a Certificate III, would have those responses crosschecked on the decision table and would as a result have their level of highest educational attainment output as Certificate III. However, if the same person answered 'certificate' to the highest non-school qualification question, without any further detail, it would be crosschecked against Year 12 on the decision table as Level not determined. The decision table, therefore, does not necessarily imply that one qualification is 'higher' than the other. For more information, see Education Variables, June 2014 (cat. no. 1246.0).
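The crosscheck described above amounts to a table lookup. The sketch below covers only the Year 12 row of the decision table, with cell values transcribed from the table; the function name and dictionary structure are illustrative assumptions, not an ABS implementation.

```python
# Sketch of the decision-table crosscheck for one row (Year 12).
# Keys are (highest year of school, highest non-school qualification);
# values are the level of highest educational attainment output.
# Illustrative only -- not an ABS implementation.

DECISION = {
    ("Year 12", "Certificate IV"): "Certificate IV",
    ("Year 12", "Certificate III"): "Certificate III",
    ("Year 12", "Certificate n.f.d."): "Level not determined",
    ("Year 12", "Certificate II"): "Year 12",
    ("Year 12", "Certificate I"): "Year 12",
}

def highest_attainment(school, non_school):
    """Return the single attainment value for a pair of responses."""
    return DECISION.get((school, non_school), "Not stated")

# A Year 12 completer with a Certificate III is output as Certificate III,
# but an unspecified 'certificate' crosschecks to Level not determined.
```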

Products and services


37 Tables, in Excel spreadsheet format, can be accessed from the Data downloads section. The spreadsheets present tables of estimates and proportions, and their corresponding relative standard errors (RSEs).

Microdata record file

38 In addition to the data available in the Excel spreadsheets, other tables can be produced using TableBuilder (TB), an online tool for creating tables and graphs from survey data. General information about this product, including cost, can be found on the TableBuilder page.

Data available on request

39 Special tabulations are available on request. Subject to confidentiality and sampling variability constraints, tabulations can be produced from the survey incorporating data items, populations and geographic areas selected to meet individual requirements. These can be provided in printed or electronic form. All enquiries should be made to the National Information and Referral Service on 1300 135 070.


40 ABS surveys draw extensively on information provided freely by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated. Without it the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.

Previous surveys

41 The Q&W survey was first conducted and published in 2010-11 under the title Learning and Work, Australia, 2010-11 (cat. no. 4235.0). This is the first publication under the title Qualifications and Work, Australia (Q&W).

Next survey

42 The ABS intends to conduct the Q&W survey again in 2018-19.

Related publications

43 Current publications and other products released by the ABS are available from the ABS website. The ABS also issues a daily upcoming release advice on the website that details products to be released in the week ahead.

Technical note - data quality

Reliability of the estimates

1 The estimates in this publication are based on information obtained from a sample survey. Any data collection may encounter factors, known as non-sampling error, which can impact on the reliability of the resulting statistics. In addition, the reliability of estimates based on sample surveys is also subject to sampling variability. That is, the estimates may differ from those that would have been produced had all persons in the population been included in the survey.

Non-sampling error

2 Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording of answers by interviewers and errors in coding and processing data. Every effort is made to reduce non-sampling error by careful design and testing of questionnaires, training and supervision of interviewers, and extensive editing and quality control procedures at all stages of data processing.

Sampling error

3 One measure of the likely difference is given by the standard error (SE), which indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.

4 Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate.

\(RSE\% = \left(\frac{SE}{estimate}\right) \times 100\)

5 RSEs for count estimates have been calculated using the Jackknife method of variance estimation. This involves the calculation of 30 'replicate' estimates based on 30 different subsamples of the obtained sample. The variability of estimates obtained from these subsamples is used to estimate the sample variability surrounding the count estimate.
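The replicate-variance calculation described above can be sketched as follows. The sketch assumes the standard delete-a-group Jackknife form, \((G-1)/G \sum_g (\hat{\theta}_g - \hat{\theta})^2\) with G = 30 replicate groups; the replicate estimates used in the test are made-up numbers, and the exact ABS formulation may differ.

```python
import math

# Sketch of delete-a-group Jackknife variance estimation with G = 30
# replicate groups, assuming the (G-1)/G sum-of-squared-deviations form.
# Illustrative only -- the exact ABS formulation may differ.

def jackknife_rse(full_estimate, replicate_estimates):
    """RSE% of a count estimate from its 30 replicate estimates."""
    g = len(replicate_estimates)
    variance = (g - 1) / g * sum(
        (rep - full_estimate) ** 2 for rep in replicate_estimates
    )
    se = math.sqrt(variance)
    return 100.0 * se / full_estimate  # RSE% = (SE / estimate) * 100
```

If every replicate reproduces the full-sample estimate the RSE is zero; the more the subsamples disagree, the larger the estimated sampling variability.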

6 The Excel spreadsheets in the Data downloads section contain all the tables produced for this release and the calculated RSEs for each of the estimates.

7 Only estimates (numbers or percentages) with RSEs less than 25% are considered sufficiently reliable for most analytical purposes. However, estimates with larger RSEs have been included. Estimates with an RSE in the range 25% to 50% should be used with caution while estimates with RSEs greater than 50% are considered too unreliable for general use. All cells in the Excel spreadsheets with RSEs greater than 25% contain a comment indicating the size of the RSE. These cells can be identified by a red indicator in the corner of the cell. The comment appears when the mouse pointer hovers over the cell.
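The reliability thresholds above translate directly into a classification rule. A minimal sketch, with wording of the labels chosen here for illustration:

```python
# Classify an estimate's reliability by its RSE, using the 25% and 50%
# thresholds described above. Label wording is illustrative.

def reliability_flag(rse_percent):
    if rse_percent < 25:
        return "reliable for most analytical purposes"
    if rse_percent <= 50:
        return "use with caution"
    return "too unreliable for general use"
```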

Calculation of standard errors

10 Standard errors can be calculated using the estimates (counts or percentages) and the corresponding RSEs. See What is a Standard Error and Relative Standard Error, Reliability of estimates for Labour Force data for more details.

Proportions and percentages

11 Proportions and percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. This formula is only valid when x is a subset of y:

\(RSE\left(\frac{x}{y}\right) \approx \sqrt{[RSE(x)]^{2} - [RSE(y)]^{2}}\)
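As a quick numeric illustration of this approximation (with made-up RSE values), note that because the squared denominator RSE is subtracted, the formula is only meaningful when x is a subset of y and RSE(x) exceeds RSE(y):

```python
import math

# Approximate RSE of a proportion x/y, valid only when x is a subset of y.
# Example RSE values are made-up numbers.

def rse_of_proportion(rse_x, rse_y):
    return math.sqrt(rse_x ** 2 - rse_y ** 2)

# RSE(x) = 10%, RSE(y) = 6%  ->  RSE(x/y) ~ sqrt(100 - 36) = 8%
```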


12 The difference between two survey estimates (counts or percentages) can also be calculated from published estimates. Such an estimate is also subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

\(SE(x-y) \approx \sqrt{[SE(x)]^{2} + [SE(y)]^{2}}\)

13 While this formula will only be exact for differences between separate and uncorrelated characteristics or sub populations, it provides a good approximation for the differences likely to be of interest in this publication.

Significance testing

14 A statistical significance test for a comparison between estimates can be performed to determine whether it is likely that there is a difference between the corresponding population characteristics. The standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula shown above in the Differences section. This standard error is then used to calculate the following test statistic:

\(\left(\frac{x-y}{SE(x-y)}\right)\)

15 If the value of this test statistic is greater than 1.96 then there is evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations with respect to that characteristic.
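The two steps above (SE of a difference, then the test statistic against 1.96) can be sketched together. The estimates and standard errors below are made-up numbers for illustration:

```python
import math

# Sketch of the significance test described above: compute SE(x - y) from
# the two estimates' SEs, then compare |x - y| / SE(x - y) against 1.96.
# Estimates and SEs are made-up numbers.

def se_of_difference(se_x, se_y):
    return math.sqrt(se_x ** 2 + se_y ** 2)

def is_significant(x, y, se_x, se_y):
    test_statistic = abs(x - y) / se_of_difference(se_x, se_y)
    return test_statistic > 1.96  # 95% level of confidence

# SE(x-y) = sqrt(3^2 + 4^2) = 5; |110 - 98| / 5 = 2.4 > 1.96, so the
# difference is statistically significant at the 95% level.
```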

