Education and Work, Australia methodology

Reference period
May 2019

Explanatory notes


This publication contains results from the 2019 Survey of Education and Work (SEW) conducted throughout Australia in May 2019 as a supplement to the monthly Labour Force Survey (LFS).

The SEW provides annual information on a range of key indicators of educational participation and attainment of people aged 15-74 years, along with data on people's transitions between education and work.

The annual time series allows for ongoing monitoring of the level of education of Australia's population including participation in:

  • current and previous study;
  • type of educational institution attended;
  • highest year of school completed;
  • level and field of highest non-school qualification;
  • characteristics of people's transitions between education and work; and
  • selected characteristics of apprentices and trainees.

The publication Labour Force, Australia (cat. no. 6202.0) contains information about survey design, sample redesign, scope, coverage and population benchmarks relevant to the monthly LFS, which also apply to supplementary surveys such as the SEW. It also contains definitions of demographic and labour force characteristics.

Concepts, sources, and methods

The conceptual framework used in Australia's LFS aligns closely with the standards and guidelines set out in Resolutions of the International Conference of Labour Statisticians. Descriptions of the underlying concepts and structure of Australia's labour force statistics, and the sources and methods used in compiling these estimates, are presented in Labour Statistics: Concepts, Sources and Methods (cat. no. 6102.0.55.001).

In July 2014, the LFS survey questionnaire underwent a number of developments. For further information see Information Paper: Questionnaire Used in the Labour Force Survey, July 2014.


The scope of the SEW is restricted to people aged 15-74 years who were usual residents of private dwellings and non-institutionalised special dwellings excluding:

  • members of the permanent defence forces;
  • certain diplomatic personnel of overseas governments, customarily excluded from the Census of Population and Housing and estimated resident populations;
  • overseas residents in Australia;
  • members of non-Australian defence forces (and their dependants);
  • institutionalised people (e.g. patients in hospitals, residents of retirement homes, residents of homes for people with disabilities, inmates of prisons);
  • Indigenous communities; and
  • boarding school pupils.

Boarding school pupils have been excluded from the scope of the SEW since 2005, but were included in earlier collections.

Since 2009, SEW has included people living in 'very remote' areas who are not in Indigenous Communities. Prior to SEW 2009, all people living in 'very remote' parts of Australia were excluded. Nationally, less than 1% of people in scope of SEW live in 'very remote' areas that are not Indigenous Communities. In the Northern Territory, this proportion is higher, at around 8%.

In 2013, the scope of SEW was extended to include all people aged 65-74 years for the first time. From 2009 to 2012, only those people aged 65-74 years who were in the labour force, or who were marginally attached to the labour force, were included.

People permanently unable to work were included in the scope of SEW for the first time in 2013. An estimated 456,733 people reported being permanently unable to work in May 2019.


In the LFS, coverage rules are applied which aim to ensure that each person is associated with only one dwelling and has only one chance of selection in the survey. See Labour Force, Australia (cat. no. 6202.0) for more details.

Data from the SEW are available by State, Greater Capital City Statistical Area, Section of State, Remoteness Area and Statistical Area Level 4, subject to confidentiality constraints. Geography has been classified according to the Australian Statistical Geography Standard (ASGS), July 2016. For a list of geography publications see the ABS Geography Publications page.

How the data is collected

Information was collected from respondents over a two-week period in May 2019. Data were collected through face-to-face or telephone interviews, or via a self-completed online form. Information was obtained from one person in each household aged 15 years or over, who was asked to respond on behalf of all people in the household in scope of the survey. If that responsible adult was unable to supply all of the details for another individual in the household, a personal interview was conducted with that individual.

Approximately 91% of selected households fully responded to the Monthly Population Survey in May 2019, resulting in 38,683 completed interviews.

Key education concepts

Australian Standard Classification of Education (ASCED)

Education data are coded to the Australian Standard Classification of Education, 2001 (cat. no. 1272.0). The ASCED is a national standard classification which can be applied to all sectors of the Australian education system, including schools, vocational education and training, and higher education. It includes:

  • Level of Education, defined as a function of the quality and quantity of learning involved in an educational activity. There are nine broad levels, 15 narrow levels and 64 detailed levels of education.
  • Field of Education, defined as the subject matter of an educational activity. Fields of education are related to each other through the similarity of subject matter, through the broad purpose for which the education is undertaken, and through the theoretical content which underpins the subject matter. There are 12 broad fields, 71 narrow fields and 356 detailed fields of education.

Level of education of current study

Since 2014, people identified in the Labour Force Survey as currently studying a school level qualification have been asked in the Survey of Education and Work (SEW) whether they are currently studying for any non-school qualifications. If they are still attending school, their level of study is recorded as their current year of schooling, not their non-school qualification.

Level of highest education attainment

Level of highest educational attainment identifies the highest achievement a person has attained in any area of formal study. It is derived from highest year of school completed and level of highest non-school qualification. The derivation process determines which of the 'school' or 'non-school' attainments will be regarded as the highest. Usually the higher ranking attainment is self-evident, but in some cases some secondary education is regarded, for the purposes of obtaining a single measure, as higher than some certificate level attainments.

There are two types of measures used in this publication to determine level of highest educational attainment: 'Non-School Priority' and 'Standard Education Priority'.

  • 'Non-School Priority' is where all non-school qualifications are considered of higher ranking than secondary education. For example, a person whose highest year of school completed was Year 12, and whose level of highest non-school qualification was a Certificate I, would have their level of highest education attainment output as Certificate I. This concept is used in Table 10 of this publication.
  • 'Standard education priority' is where some school qualifications are ranked higher than some non-school qualifications. For example, years 10, 11 and 12 are ranked higher than Certificates I, II and n.f.d. The standard education priority was designed for the purpose of obtaining a single value for level of highest educational attainment and is not intended to convey any other hierarchy.

The following decision table shows which responses to 'highest year of school completed' and 'level of highest non-school qualification' are regarded as the highest. For example, a person with a Year 12 Certificate and a Certificate III would have a level of highest educational attainment of 'Certificate III'. However, if the same person answered only 'certificate' to the highest non-school qualification question, their level of highest educational attainment would be output as 'Level not determined'.

Decision table - level of highest educational attainment

Rows show the highest year of school completed; columns show the level of highest non-school qualification.

Highest year of school completed | Cert IV | Cert III | Cert III & IV n.f.d. | Cert II | Cert I | Cert I & II n.f.d. | Cert n.f.d. | Inadequately described | Not Stated
Year 12 | Cert IV | Cert III | Cert III & IV n.f.d. | Year 12 | Year 12 | Year 12 | L.n.d. | L.n.d. | N.S.
Year 11 | Cert IV | Cert III | Cert III & IV n.f.d. | Year 11 | Year 11 | Year 11 | L.n.d. | L.n.d. | N.S.
Senior Sec. Education n.f.d. | Cert IV | Cert III | Cert III & IV n.f.d. | Senior Sec. n.f.d. | Senior Sec. n.f.d. | Senior Sec. n.f.d. | L.n.d. | L.n.d. | N.S.
Year 10 | Cert IV | Cert III | Cert III & IV n.f.d. | Year 10 | Year 10 | Year 10 | L.n.d. | L.n.d. | N.S.
Year 9 and below | Cert IV | Cert III | Cert III & IV n.f.d. | Cert II | Cert I | Cert I & II n.f.d. | Cert n.f.d. | L.n.d. | N.S.
Sec. Education n.f.d. | Cert IV | Cert III | Cert III & IV n.f.d. | L.n.d. | L.n.d. | L.n.d. | L.n.d. | L.n.d. | N.S.
Junior Sec. Education n.f.d. | Cert IV | Cert III | Cert III & IV n.f.d. | L.n.d. | L.n.d. | L.n.d. | L.n.d. | L.n.d. | N.S.
Not stated | Cert IV | Cert III | Cert III & IV n.f.d. | N.S. | N.S. | N.S. | N.S. | N.S. | N.S.
Never attended school | Cert IV | Cert III | Cert III & IV n.f.d. | Cert II | Cert I | Cert I & II n.f.d. | Cert n.f.d. | L.n.d. | N.S.

Cert = Certificate
L.n.d = Level not determined
n.f.d = not further defined
N.S. = Not Stated
Sec. = Secondary

For ease of interpretation, the layout of this table has been modified from Education Variables, June 2014 (cat. no. 1246.0); however, the ranking of different levels of attainment has not changed.
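As an illustration only, the decision logic in the table can be sketched as a lookup function. The category labels are shortened and the function is hypothetical; it is not the coder used in ABS processing.

```python
# Illustrative sketch of the attainment decision table (standard education
# priority). Labels are simplified and hypothetical; not ABS production code.

CERTS_ALWAYS_HIGHEST = {"Cert IV", "Cert III", "Cert III & IV n.f.d."}
SCHOOL_OUTRANKS_LOW_CERTS = {"Year 12", "Year 11", "Senior Sec. n.f.d.", "Year 10"}

def highest_attainment(school: str, non_school: str) -> str:
    if non_school == "Not Stated":
        return "Not Stated"
    if non_school in CERTS_ALWAYS_HIGHEST:
        return non_school                    # Cert III/IV outrank any school level
    if school == "Not stated":
        return "Not Stated"
    if non_school in {"Cert n.f.d.", "Inadequately described"}:
        return "Level not determined"
    # Remaining non-school levels: Cert II, Cert I, Cert I & II n.f.d.
    if school in SCHOOL_OUTRANKS_LOW_CERTS:
        return school                        # Years 10-12 outrank Cert I/II
    if school in {"Sec. Education n.f.d.", "Junior Sec. Education n.f.d."}:
        return "Level not determined"
    return non_school                        # Year 9 and below, never attended

print(highest_attainment("Year 12", "Cert III"))  # Cert III
print(highest_attainment("Year 12", "Cert I"))    # Year 12
```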

Engagement in employment and education

The term 'engagement' is used when assessing a person's level of participation in employment and education. The following table shows the ways in which people can be 'Fully engaged', 'Partially engaged', or 'Not engaged'.

Employment status | Full-time study | Part-time study | Not studying
Full-time employment | Fully engaged | Fully engaged | Fully engaged
Part-time employment | Fully engaged | Fully engaged | Partially engaged
Unemployed, looking for full-time work | Fully engaged | Partially engaged | Not engaged
Unemployed, looking for part-time work | Fully engaged | Partially engaged | Not engaged
Not in the labour force | Fully engaged | Partially engaged | Not engaged
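The engagement categories above can be sketched as a simple function. This is an illustrative, hypothetical helper with simplified labels, not ABS processing code.

```python
# Engagement classification following the table above. An illustrative sketch;
# category strings are simplified and hypothetical.

def engagement(employment: str, study: str) -> str:
    full_time_work = employment == "Full-time employment"
    part_time_work = employment == "Part-time employment"
    if study == "Full-time study" or full_time_work:
        return "Fully engaged"
    if study == "Part-time study":
        return "Fully engaged" if part_time_work else "Partially engaged"
    # Not studying
    return "Partially engaged" if part_time_work else "Not engaged"

print(engagement("Part-time employment", "Not studying"))  # Partially engaged
```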

How the data is processed

Estimation methods


Weighting is the process of adjusting results from a sample survey to estimate characteristics of the total population. To do this, a 'weight' is allocated to each enumerated person. The weight is a value which indicates how many people in the population are represented by the sample person.

The first step in calculating weights for each unit is to assign an initial weight, which is the inverse of the probability of the unit being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 300, then the person would have an initial weight of 300 (that is, they represent 300 people).

Population benchmarks

The initial weights are then calibrated to align with independent estimates of the population, referred to as benchmarks. The population included in the benchmarks is the survey scope. This calibration process ensures that the weighted data conform to the independently estimated distribution of the population described by the benchmarks rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over or under-enumeration of particular categories which may occur due to either the random nature of sampling or non-response.

The survey was benchmarked to the estimated resident population (ERP) aged 15-74 years living in private dwellings and non-institutionalised special dwellings in each state and territory. People living in remote Indigenous communities were excluded.
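As a rough illustration of benchmarking, the sketch below scales initial weights so that weighted totals match a population benchmark within each benchmark cell. This is a minimal post-stratification example with made-up cell names and numbers; the actual ABS calibration is considerably more elaborate.

```python
# Minimal post-stratification sketch (illustrative only; made-up numbers).
# Each person's initial weight is scaled so that the weighted total matches
# the population benchmark for their benchmark cell.

from collections import defaultdict

def calibrate(persons, benchmarks):
    """persons: dicts with 'cell' and 'weight'; benchmarks: cell -> population."""
    totals = defaultdict(float)
    for p in persons:
        totals[p["cell"]] += p["weight"]
    for p in persons:
        p["weight"] *= benchmarks[p["cell"]] / totals[p["cell"]]

people = [{"cell": "NSW, 15-24", "weight": 300.0},
          {"cell": "NSW, 15-24", "weight": 280.0}]
calibrate(people, {"NSW, 15-24": 650_000.0})
print(sum(p["weight"] for p in people))  # ~650000 — now matches the benchmark
```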


Survey estimates of counts of people are obtained by summing the weights of people with the characteristics of interest.
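For example, a toy sketch of this estimation step (with made-up weights and characteristics): a person selected with probability 1 in 300 carries a weight of 300, and an estimated count is the sum of weights of people with the characteristic of interest.

```python
# Toy sketch: estimating a population count by summing weights.
# Weights and characteristics are made up for illustration.

sample = [
    {"weight": 300.0, "studying": True},
    {"weight": 300.0, "studying": False},
    {"weight": 310.0, "studying": True},
]

estimate = sum(p["weight"] for p in sample if p["studying"])
print(estimate)  # 610.0 — estimated number of people studying
```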


To minimise the risk of identifying individuals in aggregate statistics, a technique called perturbation is used to randomly adjust cell values. Perturbation involves small random adjustment of the statistics which have a negligible impact on the underlying pattern. This is considered the most satisfactory technique for avoiding the release of identifiable data while maximising the range of information that can be released. After perturbation, a given published cell value will be consistent across all tables. However, adding up cell values in Data Cubes to derive a total may give a slightly different result to the published totals. The introduction of perturbation in publications ensures that these statistics are consistent with statistics released via services such as TableBuilder.

Reliability of estimates

All sample surveys are subject to error which can be broadly categorised as either sampling or non-sampling error. For more information refer to the Technical Note.

Seasonal factors

The estimates are based on information collected in May 2019, and due to seasonal factors (such as school terms, semesters, or intake periods for other qualifications), they may not be representative of other months of the year.

Data comparability

Comparability of time series

In addition to the changes in scope listed in the 'Scope' section, there are a number of other changes to be aware of with regard to how SEW has been collected and reported over time.

Size of the sample

Supplementary surveys are not always conducted on the full LFS sample. Since August 1994 the sample for supplementary surveys has been restricted to no more than seven-eighths of the LFS sample. Since it was introduced, this survey has been conducted on various proportional samples and therefore sampling errors associated with previous supplementary surveys may vary from the sampling error for this survey.

Classification changes

Since 2007, industry data in the SEW have been classified according to the Australian and New Zealand Standard Industrial Classification, 2006 (cat. no. 1292.0). Prior to this, they were classified according to the Australian and New Zealand Standard Industrial Classification, 1993 (cat. no. 1292.0) and are therefore not directly comparable to data for 2007 and subsequent years.

Since 2007, occupation data in the SEW have been classified according to the Australian and New Zealand Standard Classification of Occupations, First Edition, Revision 1 (cat. no. 1220.0). Prior to this, they were classified according to the Australian Standard Classification of Occupations, Second Edition, 1997 (cat. no. 1220.0) and are therefore not directly comparable to data for 2007 and subsequent years.

Apprenticeship/traineeship scope

Prior to 2008, only people aged 15-54 years were included in the apprenticeship/traineeship survey questions. In 2008, the age scope was extended to include people aged 55-64 years and in 2009, the scope was further extended to include people aged 65-74 years for these questions. In 2008, the definition for apprentices and trainees changed from those employed as apprentices/trainees to include only those with a formal contract under the Australian Apprenticeships scheme. Therefore data on apprentices from previous years are not directly comparable to 2008 and subsequent data.

Other comparability issues

  • The May 2013 SEW was the first supplementary survey to incorporate an online data collection method, where the option was offered to just over one-quarter of the SEW sample. Since the May 2014 SEW this option has been offered to all respondents. For more information see the article Transition to Online Collection of the Labour Force Survey.
  • Revisions are made to population benchmarks for the LFS after each five-yearly Census of Population and Housing. The latest revision based on the 2016 Census of Population and Housing has been in use since November 2018. See Labour Force, Australia (cat. no. 6202.0) for more information.
  • As announced in the June 2012 issue of Australian Demographic Statistics (cat. no. 3101.0), intercensal error between the 2006 and 2011 Censuses was larger than normal due to improved methodologies used in the 2011 Census Post Enumeration Survey. The intercensal error analysis indicated that previous population estimates for the base Census years were over-counted. An indicative estimate of the size of the over-count is that there should have been 240,000 fewer people at June 2006, 130,000 fewer in 2001 and 70,000 fewer in 1996. As a result, Estimated Resident Population estimates have been revised for the last 20 years rather than the usual five. Consequently, estimates of particular populations derived since SEW 2014 may be lower than those published for previous years as the SEW estimates have not been revised. Therefore, comparisons of SEW estimates since 2014 with previous years should not be made. However, for comparable data items, comparison of rates or proportions between years is appropriate.
  • Since 2014, data in the SEW has been randomly adjusted to avoid the release of confidential statistics. Discrepancies may occur between sums of the component items and totals. See the Confidentiality section for more information on perturbation.

Comparability with other ABS surveys

Since the SEW is conducted as a supplement to the LFS, data items collected in the LFS are also available in SEW. However, there are some important differences between the two surveys. The SEW sample is a subset of the LFS sample (see the Overview section of these Explanatory Notes) and has a response rate slightly lower than the LFS response rate for the same period. The scope of the SEW also differs slightly from the scope of the LFS (refer to the Scope section above). Due to these differences between the samples, the SEW data are weighted separately from the LFS data. Differences may therefore be found between estimates collected in the LFS and published as part of the SEW and estimates published in the May 2019 issue of Labour Force, Australia (cat. no. 6202.0).

From September 2016, the ABS has published education data from the LFS in Labour Force, Australia, Detailed, Quarterly (cat. no. 6291.0.55.003). For more information on the differences between the SEW and the LFS in relation to education data items, see the Fact Sheet: Expanded education data from the Labour Force Survey (in cat. no. 6291.0.55.003).

Estimates from the SEW may also differ from estimates produced from other ABS collections, for several reasons. The SEW is a sample survey and its results are subject to sampling error, as are the results of other sample surveys. Users should take account of the measures of error on all published statistics where comparisons are made. Refer to the Technical Note for more information about how error is measured for the SEW.

Differences may also exist in the scope and/or coverage of the SEW compared to other surveys. Differences in estimates, when compared to the estimates of other surveys, may result from different reference periods reflecting seasonal variations, non-seasonal events that may have impacted on one period but not another, or because of underlying trends in the phenomena being measured.

Finally, differences can occur as a result of using different collection methodologies. This is often evident in comparisons of similar data items reported from different ABS collections where, after taking account of definition and scope differences and sampling error, residual differences remain. These differences are often the result of the mode of the collections, such as whether data are collected by an interviewer or self-enumerated by the respondent and whether the data are collected from the person themselves or from a proxy respondent. Differences may also result from the context in which questions are asked, i.e. where in the interview the questions are asked and the nature of preceding questions. The impacts on data of different collection methodologies are difficult to quantify. Every effort is made to minimise such differences.

How the data is released


A number of data cubes (spreadsheets) containing all tables produced for this publication are available from the Data downloads section of the Education and work topic page. The data cubes present tables of estimates and proportions, and their associated measures of error.


For users who wish to undertake more detailed analysis of the data, the survey microdata will be released through the TableBuilder product (see Microdata: Education and Work, Australia (cat. no. 6227.0.30.001) for more detail). Microdata can be used by approved users to produce customised tables and analysis from the survey data. Microdata products are designed to ensure the integrity of the data whilst maintaining the confidentiality of the respondents to the survey. More information can be found at How to Apply for Microdata.


Detailed microdata may also be available on DataLab for users who want to undertake interactive (real time) complex analysis of microdata in the secure ABS environment. For more details, refer to About the DataLab.

Custom tables

Customised statistical tables to meet individual requirements can be produced on request. These are subject to confidentiality and sampling variability constraints which may limit what can be provided. Enquiries on the information available and the cost of these services should be made to the National Information and Referral Service on 1300 135 070.

History of changes

Results of similar surveys have been published in previous issues. These surveys were conducted annually from February 1964 to February 1974, in May 1975 and 1976, in August 1977 and 1978, and annually in May since 1979. Results of previous surveys were published in Transition from Education to Work, Australia (cat. no. 6227.0) from 1964 to 2000. Since May 2001, the results of the survey have been published in Education and Work, Australia (cat. no. 6227.0).

The ABS intends to conduct this survey again in May 2020.

Technical note

Reliability of estimates

Two types of error are possible in estimates based on a sample survey:

  • non-sampling error
  • sampling error

Non-sampling error

Non-sampling error is caused by factors other than those related to sample selection. It is any factor that results in the data values not accurately reflecting the true value of the population.

It can occur at any stage throughout the survey process. Examples include:

  • selected people that do not respond (e.g. refusals, non-contact)
  • questions being misunderstood
  • responses being incorrectly recorded
  • errors in coding or processing the survey data

Sampling error

Sampling error is the expected difference that can occur between the published estimates and the value that would have been produced if the whole population had been surveyed. Sampling error is the result of random variation and can be estimated using measures of variance in the data.

Standard error

One measure of sampling error is the standard error (SE). There are about two chances in three that an estimate will differ by less than one SE from the figure that would have been obtained if the whole population had been included. There are about 19 chances in 20 that an estimate will differ by less than two SEs.

Relative standard error

The relative standard error (RSE) is a useful measure of sampling error. It is the SE expressed as a percentage of the estimate:

\(R S E \%=\Large{\left(\frac{S E}{\text {estimate}}\right) }\normalsize{\times 100}\)

The smaller the estimate, the higher the RSE. Very small estimates are subject to high SEs (relative to the size of the estimate) which reduces their value for most uses. Only estimates with RSEs less than 25% are considered reliable for most purposes.

Estimates with larger RSEs, between 25% and less than 50% have been included in the publication, but are flagged to indicate they are subject to high SEs. These should be used with caution. Estimates with RSEs of 50% or more have also been flagged and are considered unreliable for most purposes. RSEs for these estimates are not published.
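The RSE formula and the reliability thresholds above (25% and 50%) can be sketched as follows. This is an illustrative helper with made-up inputs; the published datacubes already apply these flags.

```python
# RSE and the reliability thresholds described above. Illustrative only.

def rse(se: float, estimate: float) -> float:
    return se / estimate * 100

def reliability_flag(rse_pct: float) -> str:
    if rse_pct < 25:
        return "reliable for most purposes"
    if rse_pct < 50:
        return "use with caution (subject to high SE)"
    return "considered unreliable for most purposes"

print(rse(4.0, 100.0))         # 4.0
print(reliability_flag(30.0))  # use with caution (subject to high SE)
```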

Margin of error for proportions

Another useful measure is the margin of error (MOE), which shows the largest possible difference (due to sampling error) that could exist between the estimate and what would have been produced had all people been included in the survey, at a given level of confidence. It is useful for understanding and comparing the accuracy of proportion estimates. Confidence levels can vary (e.g. typically 90%, 95% or 99%), but in this publication, all MOEs are provided for estimates at the 95% confidence level. At this level, there are 19 chances in 20 that the estimate will differ from the population value by less than the provided MOE. The 95% MOE is obtained by multiplying the SE by 1.96.

\(M O E=S E \times 1.96\)

Depending on how the estimate is to be used, an MOE of greater than 10% may be considered too large to inform decisions. For example, a proportion of 15% with an MOE of plus or minus 11% would mean the estimate could be anything from 4% to 26%.

Confidence intervals

The estimate combined with the MOE defines a range, known as a confidence interval. This range is likely to include the true population value with a given level of confidence. A confidence interval is calculated by taking the estimate plus or minus the MOE of that estimate. It is important to consider this range when using the estimates to make assertions about the population or to inform decisions. Because MOEs in this publication are provided at the 95% confidence level, a 95% confidence interval can be calculated around the estimate, as follows:

\(95 \% \text { Confidence Interval }=(\text {estimate}-M O E, \text { estimate }+M O E)\)
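A worked sketch of the MOE and confidence interval calculations above, with an SE value made up to roughly reproduce the 15% plus or minus 11% example:

```python
# MOE at the 95% confidence level and the resulting confidence interval.
# The SE of 5.6 percentage points is made up for illustration.

def moe_95(se: float) -> float:
    return se * 1.96

def confidence_interval(estimate: float, moe: float):
    return (estimate - moe, estimate + moe)

moe = moe_95(5.6)                        # ~10.98 percentage points
lower, upper = confidence_interval(15.0, moe)
print(round(lower, 1), round(upper, 1))  # 4.0 26.0
```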

Measures of error in this publication

This publication reports the relative standard error (RSE) for estimates of counts ('000) and the margin of error (MOE) for estimates of proportions (%). These measures are included in the datacubes available under the Data downloads section on the Education and work topic page. In the first datacube (Tables 1-20: Education and Work), time series tables include both RSE of proportion and MOE of proportion, as do Tables 21-34. For years prior to 2018, the MOE of proportion was calculated using rounded figures and may have slightly less precision than the MOE of proportion calculated for 2018 onwards.

In the first datacube (Tables 1-20: Education and Work), estimates of proportions with a MOE greater than 10% are annotated to indicate they are subject to high sample variability and particular consideration should be given to the MOE when using these estimates. In addition, estimates with a corresponding standard 95% confidence interval that includes 0% or 100% are annotated to indicate they are usually considered unreliable for most purposes.

Calculating measures of error

Proportions or percentages formed from the ratio of two count estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. This formula is only valid when the numerator (x) is a subset of the denominator (y):

\({RSE}\left(\Large\frac{x}{y}\right) \approx\sqrt{[R S E(x)]^{2}-[R S E(y)]^{2}}\)
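For example, with made-up RSE percentages (noting the approximation is only sensible when RSE(x) is at least as large as RSE(y)):

```python
# Approximate RSE of a proportion x/y, valid when x is a subset of y and
# RSE(x) >= RSE(y). Input RSEs are illustrative percentages.

import math

def rse_of_proportion(rse_x: float, rse_y: float) -> float:
    return math.sqrt(rse_x ** 2 - rse_y ** 2)

print(rse_of_proportion(5.0, 3.0))  # 4.0
```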

When calculating measures of error, it may be useful to convert RSE or MOE to SE. This allows the use of standard formulas involving the SE.

The SE can be obtained from RSE or MOE using the following formulas:

\(S E=\Large\frac{R S E \% \times \text { estimate }}{100}\)

\(S E=\Large\frac{M O E}{1.96}\)

The RSE can also be used to directly calculate a MOE with a 95% confidence level:

\(M O E=\Large\frac{R S E \% \times e s t i m a t e \times 1.96}{100}\)
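These conversions can be sketched as follows, using made-up values:

```python
# Recovering an SE from a published RSE or MOE, per the formulas above.

def se_from_rse(rse_pct: float, estimate: float) -> float:
    return rse_pct * estimate / 100

def se_from_moe(moe: float) -> float:
    return moe / 1.96

print(se_from_rse(4.0, 500.0))  # 20.0
print(se_from_moe(9.8))         # 5.0
```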

Calculating differences

The difference between two survey estimates (counts or percentages) can also be calculated from published estimates. Such an estimate is also subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x - y) may be calculated by the following formula:

\(S E(x-y) \approx \sqrt{[S E(x)]^{2}+[S E(y)]^{2}}\)

While this formula will only be exact for differences between separate and uncorrelated characteristics or sub populations, it provides a good approximation for the differences likely to be of interest in this publication.

Significance testing

When comparing estimates between surveys or between populations within a survey, it is useful to determine whether apparent differences are 'real' differences or simply the product of differences between the survey samples.

One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the standard error of the difference between the two estimates (x and y) and using it to calculate the test statistic below:

\(\left|\Large\frac{x-y}{S E(x-y)}\right|\)

where the SE of an estimate can be derived from its published RSE:

\(S E(y) \approx \Large\frac {R S E(y) \times y}{100}\)

If the value of the statistic is greater than 1.96, we can say there is good evidence of a statistically significant difference at 95% confidence levels between the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations.
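Putting the formulas above together, a sketch of the significance test with made-up estimates and SEs:

```python
# SE of a difference and the significance test statistic, per the formulas
# above. Estimates and SEs are made up for illustration.

import math

def se_difference(se_x: float, se_y: float) -> float:
    return math.sqrt(se_x ** 2 + se_y ** 2)

def is_significant(x: float, y: float, se_x: float, se_y: float) -> bool:
    """True when |x - y| / SE(x - y) exceeds 1.96 (95% confidence level)."""
    return abs(x - y) / se_difference(se_x, se_y) > 1.96

print(is_significant(520.0, 480.0, 9.0, 12.0))  # True  (|40| / 15 ~ 2.67)
print(is_significant(520.0, 505.0, 9.0, 12.0))  # False (|15| / 15 = 1.0)
```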



