1504.0 - Methodological News, Mar 2018  
ARCHIVED ISSUE: Released at 11:30 AM (Canberra time) 22/03/2018

COMPLEMENTARY MEASURES FOR NON-RESPONSE FOLLOW UP

The response rate alone is not an ideal measure for understanding the quality of a data collection process. A high response rate gives confidence in the quality of outputs, but achieving one can be expensive and operationally inefficient. When the response rate is lower, the implication for output quality is not obvious, as lower levels of response do not always correspond to lower quality outputs. Blindly chasing higher response rates through extra acquisition effort can also place strain on the data acquisition workforce and on other surveys in the field. A number of alternative quality measures have emerged from the research field of Adaptive and Responsive Design. The ABS is currently investigating whether such measures can complement the response rate, so that non-response follow up procedures are organised around a suite of measures. A high response rate would still justify ending the follow up process, but a lower response rate could also be accepted provided the other measures hit target values that give us confidence in the quality of outputs.

One measure of the extent of bias that non-response can introduce is the R-indicator (Schouten et al. 2009). It measures the variation in response propensities, where the propensities can be estimated from correlates of both the response indicator and the key output variables. Partial R-indicators identify where the sample is most unbalanced and hence inform decisions on where to prioritise follow up effort (Schouten and Shlomo 2017). Unfortunately, the unit-level response propensities that the R-indicator relies on have proved difficult to estimate accurately. To avoid estimating them, we have been developing a Balance Indicator that uses past data to partition the sample into homogeneous groups. The indicator requires only the average propensity for each group, and the variation in these averages tells us about the potential for non-response bias.
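To make this concrete, here is a minimal sketch in Python (assuming NumPy) of the standard R-indicator, R = 1 - 2*S(rho), computed from estimated unit propensities, together with a group-based balance measure that needs only the average propensity per group. The function names and the use of the same 1 - 2*S(.) scaling for the group version are illustrative assumptions; the exact form of the ABS Balance Indicator is not specified here.

    import numpy as np

    def r_indicator(propensities):
        # R-indicator of Schouten et al. (2009): R = 1 - 2*S(rho), where
        # S(rho) is the standard deviation of the response propensities.
        # R = 1 means a perfectly balanced response; lower values indicate
        # greater potential for non-response bias.
        rho = np.asarray(propensities, dtype=float)
        return 1.0 - 2.0 * np.std(rho, ddof=1)

    def balance_indicator(groups, responded):
        # Illustrative group-based version: replace estimated unit-level
        # propensities with observed group response rates, so only the
        # average propensity per group is needed. Groups are assumed to
        # come from past data so that units within a group respond at
        # similar rates; the 1 - 2*S(.) scaling mirrors the R-indicator.
        groups = np.asarray(groups)
        responded = np.asarray(responded, dtype=float)
        rates = {g: responded[groups == g].mean() for g in np.unique(groups)}
        unit_rates = np.array([rates[g] for g in groups])
        return 1.0 - 2.0 * np.std(unit_rates, ddof=1)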

We found that the indicator works well for household surveys, where each responding unit is of roughly equal importance to estimates. This is not the case for business surveys, where skewed populations mean groups cannot be of both equal size and equal importance to estimates. Weighting each group by its expected share of the population estimate can address the skewness, but makes the indicator specific to the variable used in the weighting. Explicit imputation is another challenge for business surveys: it is used extensively to deal with non-response and cannot be ignored when assessing non-response bias. Multiple imputation has been proposed as a way of understanding the extra variation that non-response causes (Wagner 2010). If all the non-response falls in parts of the sample where we can impute well, we are less concerned than if it falls in parts where we cannot. Multiple imputation captures the impact on sampling error, but we will still need to understand how imputation influences our measure of non-response bias.
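As a rough illustration of the multiple imputation idea, the sketch below computes the fraction of missing information for a single statistic from m imputed estimates, using Rubin's combining rules that underpin the measure Wagner (2010) discusses. The function name and the large-m approximation are assumptions for illustration.

    import numpy as np

    def fraction_missing_information(estimates, variances):
        # estimates: m point estimates of the same statistic, one per
        # completed (imputed) data set; variances: their m within-imputation
        # variance estimates. Uses the common large-m approximation
        # FMI ~ (1 + 1/m)*B / T, with B the between-imputation variance
        # and T = W + (1 + 1/m)*B the total variance.
        q = np.asarray(estimates, dtype=float)
        u = np.asarray(variances, dtype=float)
        m = len(q)
        w = u.mean()                  # average within-imputation variance
        b = q.var(ddof=1)             # between-imputation variance
        t = w + (1.0 + 1.0 / m) * b   # Rubin's total variance
        return (1.0 + 1.0 / m) * b / t

A high fraction of missing information flags estimates where non-response is concentrated in parts of the sample that we cannot impute well, which is where extra follow up effort matters most.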

The non-response follow up strategy will need to work together with the weighting and estimation procedures and with data acquisition operational requirements. If we can accurately estimate the extent of non-response bias, it would be sensible to try to remove it via a weighting adjustment. The balance between removing bias through data collection and removing it through weight adjustment will depend on the size of the data collection budget and on a careful assessment of data quality to ensure the output is fit for purpose. For a given budget, the goal in data collection is to allocate follow up effort so as to reduce reliance on the subsequent weight adjustment. The strategy aims to use a suite of measures to define an acceptable region for stopping follow up, and then to prioritise units for follow up in a way that gives the best chance of moving into that region efficiently. This poses a challenge for business processes, because data collection procedures will need to be flexible enough to absorb dynamic adjustments to collection requirements in a timely manner.
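A minimal sketch of how a suite of measures might define the acceptable stopping region described above; the thresholds and the function itself are hypothetical placeholders rather than ABS operational targets.

    def stop_follow_up(response_rate, balance, rr_target=0.85,
                       rr_floor=0.70, balance_target=0.80):
        # Follow up ends when the response rate alone is high enough, or
        # when a lower but still acceptable response rate is paired with
        # a balance indicator high enough to give confidence in output
        # quality. All threshold values are illustrative only.
        if response_rate >= rr_target:
            return True
        return response_rate >= rr_floor and balance >= balance_target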

References
Schouten, B., Cobben, F. and Bethlehem, J. (2009) "Indicators for the Representativeness of Survey Response", Survey Methodology, 35: 101-114.

Schouten, B. and Shlomo, N. (2017) "Selecting Adaptive Survey Design Strata with Partial R-indicators", International Statistical Review, 85(1): 143-163.

Wagner, J. (2010) "The Fraction of Missing Information as a Tool for Monitoring the Quality of Survey Data", Public Opinion Quarterly, 74: 223-243.

Further information
For more information, please contact Carl Mackin (Carl.Mackin@abs.gov.au) or Daniel Fearnley (Daniel.Fearnley@abs.gov.au).
