Intensive Follow Up Prioritisation Methods for Business Surveys
High-quality, survey-based official statistics depend on responses from high and representative proportions of the units sampled in those surveys. An important step in the data collection process is to follow up survey non-respondents with the aim of increasing the survey response rate. The cost of Intensive Follow Up (IFU) of survey non-respondents is significant for many business and household surveys, and IFU resources are often only sufficient to obtain a response from some of the non-respondents. Given that survey non-response can increase the Relative Standard Error (RSE) of survey estimates and create the potential for bias, the way non-respondents are prioritised for IFU can affect the quality of those estimates. Optimising this prioritisation also provides an opportunity to reduce IFU costs by following up fewer units without significantly affecting the RSE and bias.
The Methodology and Industry Statistics Divisions of the ABS partnered to complete a simulation study to gain insight into the effectiveness of a range of IFU prioritisation methods. The simulation involved repeatedly drawing business survey samples from a population and iteratively simulating (non-)response and IFU within each sample. This provided a framework for simulating a range of IFU prioritisation options and comparing their performance. The simulation was based on the population and survey design parameters of ABS agricultural collections.
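As a rough illustration of the structure of such a framework, the sketch below repeatedly draws a sample from a synthetic, skewed population, simulates initial (non-)response, and then spends an IFU budget one contact at a time on the top-priority non-respondent. The population, the initial response propensity, the per-contact IFU success rate and the simple expansion estimator are all illustrative assumptions, not the parameters of the ABS study.

```python
# Minimal sketch of the simulation framework: repeatedly draw samples,
# simulate initial (non-)response, then iteratively simulate IFU contacts.
# The population, propensities and estimator are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, skewed "business" population with a single variable of interest.
population = rng.lognormal(mean=5.0, sigma=1.5, size=10_000)

def simulate_once(sample_size=500, ifu_budget=100, prioritise=None):
    """Draw one sample, simulate initial (non-)response, then spend the IFU
    budget one contact at a time on the highest-priority non-respondent."""
    sample_idx = rng.choice(population.size, size=sample_size, replace=False)
    values = population[sample_idx]
    responded = rng.random(sample_size) < 0.6       # assumed initial response propensity
    for _ in range(ifu_budget):
        nonresp = np.flatnonzero(~responded)
        if nonresp.size == 0:
            break
        order = prioritise(nonresp) if prioritise else rng.permutation(nonresp)
        if rng.random() < 0.5:                      # assumed per-contact IFU success rate
            responded[order[0]] = True
    if responded.sum() == 0:
        return np.nan
    # Simple respondent-mean expansion estimate of the population total.
    return population.size * values[responded].mean()

# Distribution of the estimator over repeated samples (random prioritisation here).
estimates = np.array([simulate_once() for _ in range(200)])
print("relative bias:", estimates.mean() / population.sum() - 1)
print("empirical RSE:", estimates.std() / estimates.mean())
```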
Two main options were considered for prioritising the follow-up of a given set of non-respondents:
- Random Prioritisation. Non-respondents were prioritised according to a random number.
- Dynamic Prioritisation. Non-respondents in strata with higher per-unit contributions to variance and higher imputation rates were given greater priority.
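As an illustration of these two options, the sketch below orders a set of non-respondents under each rule. The dynamic score (per-unit variance contribution multiplied by the stratum imputation rate) is an assumed form of the scoring, not the ABS's actual prioritisation formula, and the stratum-level inputs are made up for the example.

```python
# Illustrative scoring of the two prioritisation options.
import numpy as np

rng = np.random.default_rng(1)

def random_priority(nonresp_ids):
    """Random prioritisation: order non-respondents by a random number."""
    return nonresp_ids[np.argsort(rng.random(len(nonresp_ids)))]

def dynamic_priority(nonresp_ids, stratum, var_contrib, imput_rate):
    """Dynamic prioritisation: non-respondents in strata with a higher per-unit
    contribution to variance and a higher imputation rate are ranked first.
    The product below is an assumed scoring form, not the ABS formula."""
    score = var_contrib[stratum[nonresp_ids]] * imput_rate[stratum[nonresp_ids]]
    return nonresp_ids[np.argsort(-score)]          # descending priority score

# Tiny worked example: six non-respondents spread across two strata.
nonresp_ids = np.array([0, 1, 2, 3, 4, 5])
stratum     = np.array([0, 0, 1, 1, 0, 1])   # stratum of each unit
var_contrib = np.array([2.0, 8.0])           # per-unit variance contribution, by stratum
imput_rate  = np.array([0.1, 0.4])           # current imputation rate, by stratum
print(dynamic_priority(nonresp_ids, stratum, var_contrib, imput_rate))
# Units in stratum 1 are followed up before those in stratum 0.
```

In a simulation loop like the one sketched earlier, either ordering function can be passed in as the prioritise argument, so the two options are compared on otherwise identical samples.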
Variants of the above IFU prioritisation options were explored by:
- Varying the intensity of the IFU effort at the top of the non-respondent priority list. One extreme was to spread the effort evenly across the entire list (where possible), while the opposite extreme was to allocate all of the effort to the top portion of the list.
- Subsampling. Randomly excluding 50% of the non-respondents from IFU and focussing the IFU effort on the remaining 50%. This also involved drawing a larger initial sample to ensure that the final number of respondents was sufficient to meet RSE targets.
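The two variants could be sketched as follows. The attempt-allocation rule, the 50% retention fraction, and the response-rate assumptions used to inflate the initial sample size are all illustrative, not the values used in the ABS study.

```python
# Illustrative sketches of the IFU-intensity and subsampling variants.
import numpy as np

rng = np.random.default_rng(2)

def allocate_effort(n_nonresp, budget, top_fraction=1.0):
    """Distribute `budget` contact attempts over a priority-ordered list of
    non-respondents: top_fraction=1.0 spreads attempts as evenly as possible
    across the whole list, smaller values pour the same budget into the top
    portion of the list only."""
    cutoff = max(1, int(np.ceil(n_nonresp * top_fraction)))
    attempts = np.zeros(n_nonresp, dtype=int)
    attempts[:cutoff] = budget // cutoff
    attempts[:budget % cutoff] += 1          # hand any remainder to the top units
    return attempts

def ifu_subsample(nonresp_ids, retain_fraction=0.5):
    """Randomly retain a fraction of non-respondents for IFU; the rest are
    excluded from follow-up altogether."""
    keep = rng.random(len(nonresp_ids)) < retain_fraction
    return nonresp_ids[keep]

def inflated_sample_size(target_respondents, initial_response_rate=0.6,
                         retain_fraction=0.5, ifu_success_rate=0.8):
    """Initial sample size needed so that the expected number of final
    respondents (initial respondents plus follow-up respondents from the
    retained non-respondents) still meets the target. All rates here are
    illustrative assumptions."""
    final_rate = (initial_response_rate
                  + (1 - initial_response_rate) * retain_fraction * ifu_success_rate)
    return int(np.ceil(target_respondents / final_rate))

priority_list = np.arange(40)                          # already ordered by priority
print("spread effort:", allocate_effort(40, budget=20, top_fraction=1.0))
print("concentrated effort:", allocate_effort(40, budget=20, top_fraction=0.25))
print("retained for IFU:", ifu_subsample(priority_list))
print("initial sample needed for ~400 respondents:", inflated_sample_size(400))
```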
The performance of these options was compared with respect to a number of cost and quality measures. The simulation results suggested that IFU based on dynamic prioritisation outperformed IFU based on random prioritisation in terms of bias and RSE for a given cost. Dynamic prioritisation combined with IFU subsampling provided further benefits, although further work is required to understand the impact of subsampling on response behaviours over time.
For more information, please contact Noel Hansen at methodology@abs.gov.au.