6227.0.55.002 - Experimental estimates of education and training performance measures based on data pooling, Survey of Education and Work, 2007 to 2010, Sep 2011  
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 09/09/2011  First Issue

METHODOLOGY

THE POTENTIAL BENEFITS OF DATA POOLING

The ABS Survey of Education and Work (SEW) is conducted in May of each year as a supplementary survey of the Monthly Population Survey (MPS) of which the main component is the Labour Force Survey (LFS). SEW provides a range of key educational participation and attainment measures of people aged 15-74 years, along with information on the transition between education and work. Education and work data have been available annually since 1979.

SEW data have been used for many years by the Ministerial Council for Education, Early Childhood Development and Youth Affairs (MCEECDYA) and more recently by the Council of Australian Governments (COAG) as the data source for key performance measures of youth participation and attainment in education and training. SEW provides accurate point-in-time estimates at the national level, and reasonably accurate estimates for states and territories. People in Aboriginal and Torres Strait Islander communities in very remote areas, however, are not included in the survey. For most states and territories this has only a minimal impact on estimates; the exception is the Northern Territory, where people in these communities comprise about 15% of the territory's population.

Estimates at finer levels of disaggregation are subject to a higher degree of sampling error. Furthermore, beyond point-in-time or level estimates, clients are increasingly interested in being able to detect small movements in key performance measures from year to year, at the national and jurisdictional level and for sub-populations. This is difficult given the relatively slow rate of change in some indicators, especially those associated with attainment, the narrow age bands of interest and the size of the current sample. Increasing the sample size would be extremely expensive and arguably impractical, as the SEW is collected as a supplement of the MPS, which has an established sampling methodology. A potential solution is to pool or combine SEW data over time, and/or to pool it with other existing surveys that collect education data, to increase the effective sample size and thereby reduce the sampling error of the educational performance measures.

SEW provides data for four key national reporting measures of participation and attainment. These are:

  • NEA (National Education Agreement) Indicator 7: the proportion of the 20-24 year old population having attained at least Year 12 or equivalent or AQF Certificate Level II or above
  • NEA Indicator 9: the proportion of young people (15-19 years) participating in post-school employment, education or training
  • NEA Indicator 10: the proportion of 18-24 year olds engaged in full-time employment, education or training at or above Certificate III
  • NASWD (National Agreement on Skills and Workforce Development) Indicator 2: the proportion of 20-64 year olds who do not have qualifications at or above Certificate III.

DIFFERENT OPTIONS FOR DATA POOLING

The potential advantage of pooling is to increase the sample size and reduce the sampling error of the measures or variables of interest without having to undertake additional and costly sampling. Pooling generally assumes that each survey is an independent, non-overlapping sample of the same population and that the characteristics of the population and the variables of interest do not change substantially from one survey to another. The ABS Analytical Services Section conducted an initial assessment of five different options for survey data to be used in data pooling:
  • Option A: Combining current SEW with one or more previous SEWs
  • Option B: Combining SEW with another MPS Supplementary Survey
  • Option C: Combining SEW with a Special Social Survey
  • Option D: Combining SEW with the Multi-Purpose Household Survey (MPHS)
  • Option E: Combining SEW with another SEW, MPS Supplementary Survey and MPHS.

Option A - Combining current SEW with one or more previous SEWs

Advantages

The advantage of Option A is its simplicity. It combines successive surveys that have a similar survey design, scope, coverage, sampling unit, reporting method, mode of survey, reference period and weighting method. All the performance measures of education are included in these surveys, and since 2001, similar wording has been used for the education questions. Given that the education variables are conceptually the same and similarly measured there is little need to harmonise or adjust the measures before pooling. Deriving combined or pooled estimates is simpler as composite or adjusted survey weights for pooled data can more easily be computed.

Disadvantages

By pooling successive cycles of the same survey, we lose the independence of the annual time series, unlike the situation where several concurrent surveys are pooled together. It also becomes more difficult to interpret an estimate obtained by combining several consecutive SEWs: such an estimate is no longer a point estimate but a smoothed measure over the combined period.

Option B - Combining SEW with another MPS Supplementary Survey

Advantages

Option B allows independent pooled estimates to be provided over a shorter period, for example, one year. Given that the surveys involved are supplements to the MPS they are all based on similar methodology and survey design.

Disadvantages

The increase in sample size is modest because, given the nature of the MPS, there is considerable overlap between the samples. The MPS sample comprises eight staggered rotation groups, each of which spends eight successive months in the survey. Each month one group rotates out of the sample and a new group rotates in, so each monthly sample retains seven-eighths of the respondents interviewed in the previous month. Furthermore, under Option B there are differences in opportunities for engagement during the reference period that must be taken into account when surveys are conducted in different months of the year. SEW, for example, is conducted in May, when most students will be enrolled or in work, while a survey conducted early in the year may find students between study and work. Moreover, the gains in sample size may differ by age group owing to differences in the scope of the surveys, with some surveys having different sampling distributions. For example, the Job Search Experience Survey (JSE) interviews only persons who are unemployed, or who are employed and started their current job in the previous 12 months. All these issues mean that this approach requires complex re-weighting.
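
To make the rotation overlap concrete, the following sketch (an illustration only, assuming the simplified eight-group rotation described above and ignoring scope differences and sample loss) computes the fraction of rotation groups two MPS supplements share when conducted a given number of months apart.

    # Simplified model of the MPS rotation scheme: eight rotation groups,
    # each in the sample for eight consecutive months, one replaced per month.
    def shared_rotation_fraction(months_apart: int, groups: int = 8) -> float:
        """Fraction of rotation groups common to two supplements
        conducted months_apart months apart."""
        return max(groups - months_apart, 0) / groups

    # A supplement run three months after the May SEW would share
    # five of the eight rotation groups with it under this model:
    print(shared_rotation_fraction(3))  # 0.625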

Option C - Combining SEW with a Special Social Survey (SSS)

Advantages

Option C, combining SEW with an SSS, such as the National Health Survey or the Survey of Disability, Ageing and Carers, would increase the effective sample and reduce RSEs.

Disadvantages

The reductions in RSEs are small, while variation in the scope, reporting method, response rates, frequency and timing of these surveys requires substantial adjustment or harmonisation of data prior to pooling. There is also a substantial risk of introducing additional non-sampling error into the combined estimates under this option, for example through differences in scope and coverage between surveys and in the way response variables are defined and measured. Furthermore, with these surveys conducted only every three to six years, pooled estimates could be produced only several years apart.

Option D - Combining SEW with the Multi-Purpose Household Survey (MPHS)

Advantages

Option D involves combining SEW with the MPHS, itself a supplement of the monthly MPS. As the MPHS and SEW are both conducted as supplements of the MPS, they have the same survey design, sampling unit, coverage, mode and weighting method. This option allows the annual time series to be maintained for pooled data.

Disadvantages

The additional sample made possible by the inclusion of the MPHS is small, as the MPHS is conducted on only one-third of the outgoing MPS rotation group each month, and there is overlap between the MPHS sample enumerated from June to December and the May SEW.

Option E - Combining SEW with another SEW, MPS Supplementary Survey and MPHS

Advantages

Under Option E, estimates from the May SEW would be pooled with an additional SEW conducted in October, and with all MPS Supplementary Surveys and the MPHS conducted in the current year. All these surveys are supplements of the monthly MPS and hence are consistent in terms of survey design, sampling unit, reporting method, mode of survey, questionnaire wording and weighting method. Combining the surveys doubles the sample size and reduces RSEs by a similar amount to Option A. Option E's advantage over Option A is that it maintains the annual time series in the education data.

Disadvantages

In practice, Option E would less than double the effective sample, because the October SEW would be carried out only five months after the May SEW, and the consequent sample overlap reduces the effective gain in sample size. Option E's main disadvantage is the increased complexity of combining data from different sources with varying scope and enumeration periods. An additional SEW would also involve substantial costs, which would have to be user-funded.


ABS RECOMMENDED APPROACH FOR SEW DATA POOLING

After assessing these options, the ABS concluded that only Options A and E provided a sufficient increase in the effective sample, and consequent reduction in RSEs, to be worth pursuing for data pooling. Option A, combining the most recent SEW with one or more previous SEWs, was preferred for its overall advantages in increasing the effective sample size and its relative simplicity. These were seen as outweighing the disadvantage that annual estimates would not be produced. Moreover, there was only limited support from stakeholders for a user-funded SEW.


ALTERNATIVE METHODS FOR COMBINING/POOLING CONSECUTIVE SURVEYS

Estimates for a variable of interest can be prepared either by using an aggregate ‘composite’ or a unit level ‘pooled’ approach. Composite estimates are produced as a weighted average of the individual year aggregate estimates of interest, while pooled estimates are derived by merging unit level data from the relevant surveys into a single dataset, adjusting the individual unit weights in the merged dataset and producing estimates using these new weights. The composite approach is used where there are significant differences in population characteristics or the variable of interest. The pooled approach is generally used where the population does not change significantly from one year to the next, where the survey samples are independent of each other and where they measure more or less the same variables.

Composite methods are simpler, requiring in theory less computation and time to produce estimates, while methods based on the pooled approach are slightly more computationally intensive, requiring larger amounts of information and time to produce the estimates. The methods were compared in terms of the information and time required and, most importantly, in terms of accuracy. For reasons of efficiency and theory, the primary methods considered were:
  • The aggregate composite minimum variance estimator takes a weighted average of the estimators from the surveys to be combined, with a weighting factor chosen to minimise the combined variance. This ensures that the survey with the lower variance is given a higher weight in the combined estimate relative to the other surveys being combined (see the formula sketch following this list).
  • Pooled weights calibrated to end-period population/labour benchmarks using the Generalised Regression Weighting (GREG) method. The GREG method is the standard survey weighting procedure used by the ABS for population surveys. Applied to SEW, this method involves calibrating the initial weights in the pooled dataset to post-stratified external population and labour force benchmark totals of the end survey period, using a GREG SAS macro to derive the final weights (and associated replicate weights). These weights are then used to produce the estimates of interest and their associated RSEs. Calibrating the weights of each survey in the combined dataset to the demographic/labour force benchmark totals of the most recent survey makes that survey the reference period. The pooled (and composite) estimates produced with respect to the time point of the most recent survey can be described as lagging estimates, since the total sample combines sample from earlier surveys with sample from the most recent time point. The degree of lag in the estimates depends on the long-term trend behaviour of the population characteristic being estimated. This is the most comprehensive method of weighting among those considered, but it is computationally more complex and, in theory though not necessarily in practice, requires more time and resources to implement. (See also the section on the GREG method below.)
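
As an illustrative sketch (the notation is ours, not the source's), for two independent survey estimates of the same quantity with variances v1 and v2, the minimum variance composite is

    \hat{\theta}_C = w\,\hat{\theta}_1 + (1 - w)\,\hat{\theta}_2,
    \qquad
    w = \frac{v_2}{v_1 + v_2}

so the lower-variance survey receives the higher weight, and the combined variance v1 v2 / (v1 + v2) is never larger than the smaller of v1 and v2.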

Testing the alternative methods

A key assumption in data pooling is that the surveys to be pooled are drawn from the same population and that the characteristics of the population and the variables of interest have not changed substantially from one survey to another. This assumption was tested by examining distributions across the surveys for selected variables, testing the significance of differences in key education measures across the surveys, and testing for survey effects in the variables being measured. Over the period 2006 to 2009, education-related outcomes for the major socioeconomic and demographic groups were broadly stable. The major concern was that an MPS sample size reduction in 2009 led to a slightly larger error for pooled estimates than would have occurred had no reduction taken place. This should be a passing problem, as the full MPS sample was reinstated for the 2010 SEW.

A test for survey effects was also conducted to see whether the estimates in the pooled dataset were affected by the period or the survey in which they were collected. The model results showed no consistent pattern, and the survey effects that were found, although significant, were small. Overall, the surveys were found to be comparable and pooling feasible.
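
The source does not specify the model used, but a survey effect test of this kind can be framed as a regression of an education outcome on respondent characteristics plus a survey indicator, with a small or inconsistent indicator effect supporting poolability. The sketch below uses synthetic data and hypothetical variable names, and for simplicity ignores the survey weights and replicate weights that a full ABS analysis would account for.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in for a pooled unit-record file; in practice this
    # would be the merged SEW datasets. All names here are hypothetical.
    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "attained": rng.binomial(1, 0.7, n),         # e.g. Year 12/Cert II or above
        "age_group": rng.choice(["20-21", "22-24"], n),
        "sex": rng.choice(["M", "F"], n),
        "survey_year": rng.choice([2009, 2010], n),  # the survey indicator
    })

    # Logistic regression: a large, significant coefficient on
    # C(survey_year) would signal a survey effect working against pooling.
    model = smf.logit("attained ~ C(age_group) + C(sex) + C(survey_year)",
                      data=df).fit(disp=0)
    print(model.summary())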

In preliminary testing with three trial measures, both methods produced moderately better estimates (in terms of reduced RSEs) than single-year estimates. The simpler composite method generally produced lower RSEs than the pooled method for two measures, but higher RSEs for the third. Nevertheless, the RSEs for the two methods were strongly correlated and the differences between them were on average small.

In choosing the optimal method it is important to choose one based not just on the most efficient results but on wider theoretical and practical considerations. Ideally the method chosen should produce reliable and improved estimates without requiring too much information, computation, time and effort. Each method has its own advantages and disadvantages.

The composite method appears simple to use and, in testing on the three trial measures, produced on average marginally lower RSEs than the unit-level GREG estimator method. The composite method requires only the estimates and their associated RSEs. A problem with this method is that the weighting factor has to be calculated separately for each variable included in the analysis, and it can be time-consuming to implement if many time periods are involved or if more than two surveys are being combined.

Furthermore, the estimates of the weighting factor under this method vary over time, between measures and between the levels at which the measures are produced. This means that estimates are not produced on a consistent basis, and as such comparisons of values across the national, jurisdictional and Socio-Economic Indexes for Areas (SEIFA) quintile levels may not be valid or appropriate. This is of particular concern given this study's objective of providing more accurate measures of key indicators of participation and attainment, especially of trends over time.

The GREG method is a more efficient method of combining population samples across surveys. It automatically handles the multiple benchmark classes used by the ABS, and pooled data can be weighted consistently with the post-stratified survey weighting procedure the ABS uses for all its independent survey samples. Unlike the aggregate composite estimator method, the GREG estimator method requires only one set of weights for the different pooling periods, so comparisons across measures are coherent. Since the weights under this method are calibrated to the benchmark totals of the end-period survey, the reference point for the estimates is the most recent survey. This is in contrast to the composite method, where the reference period is difficult to determine or interpret. The way the sample weights are benchmarked under the GREG method also helps to capture the variation in the data better, particularly at the jurisdictional level and below, since the weights are calibrated at a fine level through the use of population benchmarks at the jurisdictional and sub-jurisdictional level.

The main drawback of the GREG method is that more information, data and computing are required to derive the weights for the pooled dataset, with implications for time, effort and resources. At present, initial weights have to be derived outside the final SEW data file, and two sets of benchmark data (demographic and labour force) are required before this information can be fed through the GREG SAS macro to derive the pooled weights.

However, in practice the task of obtaining the benchmark data is relatively simple. Since the final survey weights in each SEW file are already calibrated to the relevant demographic/labour force data for that year, the required benchmark data for the pooled dataset can simply be obtained from the relevant end-period SEW itself. If the initial weights and their corresponding replicate weights are included alongside the final weights and their corresponding replicate weights, which are already in the final SEW data files, no outside data will be needed for analysis.

While the GREG method may appear more computationally intensive, requiring more data and information, the data required are already included in the SEW datasets, so the procedure for weighting and producing the estimates becomes routine. Given its overall analytical and practical advantages, the GREG estimator method has been adopted as the preferred approach to data pooling.

The lagged GREG estimator method

The lagged GREG estimator method obtains the final survey weights by calibrating the initial survey weights (which are based on the initial probability of selection) to a set of external population/labour force benchmarks using the ABS GREG SAS macro. With SEW data this involves merging the surveys to be pooled into one dataset and then calibrating the initial weights in the combined dataset to the external population/labour force benchmarks for the end reference date. For example, if SEW 2009 and SEW 2010 are to be pooled, the two datasets are merged into one and their initial weights are then calibrated to the population/labour force benchmark totals of SEW 2010.
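
The GREG SAS macro itself is internal to the ABS, but the core calibration step that GREG weighting performs can be sketched as follows (an illustration only, using a linear calibration and hypothetical names): X is a matrix of benchmark indicator variables, d the initial weights, and T the vector of external benchmark totals.

    import numpy as np

    def greg_calibrate(X: np.ndarray, d: np.ndarray, T: np.ndarray) -> np.ndarray:
        """Linearly calibrate initial weights d so that the weighted
        totals of the benchmark variables X hit the targets T."""
        residual = T - X.T @ d                        # benchmark shortfall
        lam = np.linalg.solve(X.T @ (d[:, None] * X), residual)
        return d * (1.0 + X @ lam)                    # adjusted final weights

    # Toy example: two benchmark cells (say, male and female population totals)
    X = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
    d = np.array([100.0, 120.0, 90.0, 110.0])   # initial weights
    T = np.array([250.0, 180.0])                # external benchmark totals
    w = greg_calibrate(X, d, T)
    print(X.T @ w)                              # [250. 180.] - benchmarks met
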
For the SEW, the following two benchmarks are used to produce the weights using the GREG SAS macro:
  • Demographic benchmark (State by Part of state by Sex by Age group). The categories for this benchmark are: 8 (State) x 2 (Capital city/Rest of state) x 2 (Sex) x 10 (Age group: 15-19, 20-24, 25-29, 30-34, 35-39, 40-44, 45-49, 50-54, 55-64, 65+).
  • Labour force benchmark (Labour force status by Age group). The categories for this benchmark are: 3 (Labour force status: employed, unemployed, not in the labour force) x 13 (Age group: 15, 16, 17, 18, 19, 20-24, 25-29, 30-34, 35-39, 40-44, 45-54, 55-64, 65+).
Note that the age groups differ between the two benchmarks: the labour force benchmark uses finer groups at younger ages and wider groups at older ages than the demographic benchmark.
In brief, calibrating the weights in the pooled SEW dataset to the demographic and labour force benchmarks involves the following steps:
  1. Obtain initial weights, corresponding replicate weights and dwelling cluster identifier from the May Labour Force Survey (of which the SEW forms a module) for the relevant SEW surveys
  2. Merge the records obtained from the May LFS with the records in the SEW merged file
  3. Obtain Demographic benchmark data (Benchmark 1) for the end of the required pooled periods (i.e. 2010 for 2009-10)
  4. Obtain Labour force benchmark data (Benchmark 2) for the end of the required pooled periods (i.e. 2010 for 2009-10)
  5. Adjust the initial weights and the associated replicate weights in the merged SEW data file. This requires multiplying the initial weights by 8/7 (to rate the population total up, since SEW is conducted on only 7/8ths of the LFS sample) and then dividing by the number of surveys being pooled (see the sketch following these steps)
  6. Use the GREG SAS macro to calibrate the adjusted initial weights (and the associated replicate weights) to the required benchmark totals, deriving the final pooled weights for each of the required periods
  7. Use the ABS %Table SAS macro to derive the variances, SEs and RSEs for the measures from the pooled dataset for each of the relevant periods.
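
As a minimal sketch of the weight adjustment in step 5 (column names hypothetical), each initial weight and its replicate weights are scaled by the same factor:

    import pandas as pd

    def adjust_initial_weights(pooled: pd.DataFrame, n_surveys: int) -> pd.DataFrame:
        """Step 5: rate initial weights up by 8/7 (SEW covers only 7/8ths of
        the LFS sample) and share the population total across the pooled
        surveys by dividing by the number of surveys."""
        factor = (8.0 / 7.0) / n_surveys
        out = pooled.copy()
        # Apply to the initial weight and all its replicate weight columns
        weight_cols = [c for c in out.columns if c.startswith("initial_wt")]
        out[weight_cols] = out[weight_cols] * factor
        return out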

WEIGHTING

The weights in the pooled data are benchmarked to produce estimates of selected population characteristics of the last surveyed year included in each dataset, because this is the year for which estimates are being made. The method is equivalent to the current SEW weighting and estimation method for a single year. However, since the pooled data combine two years of SEW sample, the estimates produced are 'lagging' with respect to estimates that could in principle have been produced by a comparable independent survey, of the same sample size as the pooled dataset, conducted in the reference year of the benchmarks. An advantage of an individual-year estimate is that it applies to the particular year concerned, whereas a pooled sample reflects the combined behaviour of the population (subject to socioeconomic factors, migration, etc.) over the years included. A pooled estimate is therefore best understood as a lagging estimator for the reference year, representing the situation at some point between the two surveys, depending on the weight given to each.

The denominator of the key measures of participation and attainment, which are estimates of proportions, is the total number of people in the age group of interest for the year to which the survey is benchmarked, that is, the last year included in the estimates. Given that one of the benchmarks uses five-year age groups, the total population of each five-year age group in the 2009-2010 pooled data is identical to that in the 2010 single-year estimates (see Table 10). However, the numerator of an estimated proportion reflects all the years included in the combined datasets. This means the estimate of the population with the characteristic of interest reflects the data of both years and the adjusted weights required to minimise RSEs, but only the denominator population of 2010.
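
Illustratively (the notation is ours, not the source's), for the 2009-2010 pooled data an estimated proportion for age group a takes the form

    \hat{p}_a = \frac{\sum_{i \in s_{2009} \cup s_{2010},\; i \in a} w_i \, y_i}
                     {\sum_{i \in s_{2009} \cup s_{2010},\; i \in a} w_i}

where y_i indicates the characteristic of interest and the pooled weights w_i are calibrated so that the denominator equals the 2010 benchmark population of the age group, while the numerator draws on sample from both years.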

The ABS benchmarking method for population surveys, as used here, is a generalised regression technique, a model-assisted method of estimation. The choice of benchmark variables for weighting, and the model assumption that educational attainment variables are independent of labour force status, age and sex, have a small impact on the estimates produced. The SEW weights for an individual year are calibrated to align with independent estimates of the population by sex, five-year age group, state or territory of usual residence, section of state or territory, and labour force status. For data pooled across several years, the extra model assumption that the characteristics of interest and the benchmark variables (e.g. labour force proportions) do not depend on the year of collection means that the pooled estimate is not a simple average of the single-year estimates. Occasionally this can lead to the final pooled result lying outside the range of the proportion estimates derived from the single years that make up the pooled dataset. For example, a two-year pooled estimate including 2009 data could be fractionally above or below both single-year estimates, because of the different overall SEW sample size in 2009 compared with adjacent years.

The data pooling method could also be varied by using a different set of initial weights. For example, using the final weights of the individual SEW years as the initial weights for the pooled data may reduce non-response differences in sample take between the two years. However, it would reduce the benefit of lower RSEs for fine output categories under the model assumption that educational attainment changes slowly over two years. It is also possible that education questions will in future be added to other LFS supplements, for which the initial LFS weights are readily available; using such initial weights in this report may therefore give a better indication of the performance that could be expected of future pooled estimates.

The extra implicit assumptions involved in pooling data mean that more care should be taken with computed levels of significance. The estimate of significance used here is based on the generally accepted p-value of 0.05, which relates solely to sampling error. There are many other factors in a pooling project, particularly in the weighting, that could cause a marginal increase or decrease in the value of an estimator. Most of these factors are model assumptions related to non-sampling error, e.g. assumptions about the change of characteristics and non-response across the two years, and hence will not directly affect the variance estimate. Care should therefore be taken when the calculated p-value is near 0.05.
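
As a hedged sketch of the kind of significance calculation involved (which, as noted, reflects sampling error only), a test of the change between two estimated proportions might look like the following; the covariance term is zero only when the two pools share no sample.

    import math

    def change_p_value(p1: float, se1: float, p2: float, se2: float,
                       cov: float = 0.0) -> float:
        """Two-sided p-value for the change p2 - p1 between two survey
        estimates; cov is non-zero when the samples overlap (e.g. for
        rolling pooled estimates). Non-sampling factors and model
        assumptions are not reflected here."""
        se_diff = math.sqrt(se1**2 + se2**2 - 2 * cov)
        z = (p2 - p1) / se_diff
        # Normal approximation to the two-sided p-value
        return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    # e.g. attainment rising from 85.1% (SE 0.6 pp) to 86.4% (SE 0.6 pp)
    print(round(change_p_value(0.851, 0.006, 0.864, 0.006), 3))  # ~0.125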

In summary, the pooling results presented arise from one choice of initial weights and population benchmarks. That choice, and the validity of the model assumptions about population behaviour and survey response across multiple years, will have some effect on the estimates produced. The method, however, is identical to the method used for SEW weighting of individual years. In analysing the results for 2007-2010, some very small impacts on the comparability of the pooled data and the individual-year time series were observed. The general behaviour of the two series, however, was consistent, with the pooled data exhibiting lower RSEs for the point-in-time estimates.


APPROACH TO PRODUCING POOLED SERIES

In pooling data, two alternative procedures need to be considered for producing a pooled series: non-overlapping pooled estimates and rolling pooled estimates. Non-overlapping estimates compare pooled data from an initial set of surveys with pooled data from a subsequent, disjoint set of surveys. Rolling estimates combine adjacent data that overlap in the middle. Non-overlapping estimates are computationally simpler and the trends produced are easier to interpret. However, regular estimates cannot be produced as frequently: for example, a trend based on two-year pooled data would require four years of data for two data points, and four-year pooled data would require eight years of data.

The advantage of rolling estimates is that estimates can be produced each year, maintaining the time series in the data. The disadvantage is that it is difficult to interpret what changes in the data are measuring. An estimate of the change between two sets of rolling pooled data will include some cases that are in both sets. In effect this will dampen the measured difference; more importantly, the standard error of the change between two consecutive rolling estimates requires evaluation of the covariance between the two estimates (see the expression below). Consequently, while rolling estimates may be appropriate for producing level estimates on an annual basis, they are not appropriate for measuring change. For these reasons, non-overlapping data have been preferred.
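
Illustratively (the notation is ours), the variance of the change between two rolling pooled estimates is

    \operatorname{Var}(\hat{\theta}_2 - \hat{\theta}_1)
      = \operatorname{Var}(\hat{\theta}_1) + \operatorname{Var}(\hat{\theta}_2)
      - 2\,\operatorname{Cov}(\hat{\theta}_1, \hat{\theta}_2)

where the covariance is positive because of the sample shared in the overlapping year and must itself be estimated (for example, from the replicate weights) before a change can be tested for significance.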

For further information, see Research Paper: Weighting and Standard Error Estimation for ABS Household Surveys (Methodology Advisory Committee), Jul 1999 (cat. no. 1352.0.55.029)