1014.0 - Trust in ABS and ABS Statistics - A survey of informed users and the general community, 2015  
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 20/10/2015  First Issue

METHODOLOGY

  • The 2015 CTASS was administered by the Social Research Centre (SRC) on behalf of the Australian Bureau of Statistics (ABS) and involved two surveys conducted via Computer Assisted Telephone Interviewing (CATI): one of the general community and one of informed users of ABS statistics.
  • The questionnaires were modified from the 2010 questionnaires by the ABS with consideration given to maintaining comparability with the 2010 surveys. SRC reviewed the questionnaires and provided recommendations based on recent best practice and available concordances and population benchmarks.
  • The research was undertaken in accordance with the Privacy Act (1988) and the Australian Privacy Principles contained therein, the Privacy (Market and Social Research) Code 2014, the Australian Market and Social Research Society’s Code of Professional Practice, and ISO 20252 standards.

GENERAL COMMUNITY SURVEY
  • The 2015 CTASS design entailed 2,200 interviews with Australians aged 15 years and over. A dual-frame sample design was used, drawing on both landline and mobile numbers, with mobile phone interviews making up 50% of the total.
  • The randomly generated sample lists were purchased from SamplePages, one of the two main vendors supplying samples to the market and social research industry in Australia.
  • For the landline sample, a ‘best estimate’ of postcode is assigned to each record at the number generation and testing stage, based on information available about the geographic area serviced by each individual telephone exchange. Therefore, to ensure a nationally representative sample, random sampling was conducted within 15 geographic strata (State / Territory, Capital City / Rest of State).
  • For the mobile phone sample, numbers were generated and tested based on the known mobile phone number prefixes. As no geographic information is currently available to researchers for mobile phone numbers generated in this way, the mobile sample was drawn as a single stratum.
  • A summary of the distribution of interviews is presented in Table 13.


TABLE 13: DISTRIBUTION OF INTERVIEWS FOR THE GENERAL COMMUNITY CTASS


Strata   Region              Quota (n)   Quota (%)

Landline
1        Sydney                    227        10.3
2        Rest of NSW               128         5.8
3        Melbourne                 209         9.5
4        Rest of VIC                69         3.1
5        Brisbane                  106         4.8
6        Rest of QLD               112         5.1
7        Adelaide                   64         2.9
8        Rest of SA                 20         0.9
9        Perth                      88         4.0
10       Rest of WA                 24         1.1
11       Hobart                     11         0.5
12       Rest of TAS                13         0.6
13       Darwin                      7         0.3
14       Rest of NT                  4         0.2
15       ACT                        18         0.8

Mobile
16       Mobile                  1 100        50.0

Total    All respondents         2 200       100.0

  • For within-household respondent selection in the landline sample, a next birthday method was used in 50% of cases and a modified Westat selection process in the remainder (a minimal sketch of this selection step follows this list).
  • As detailed in Table 14, a total of 69,108 calls were placed during June and July 2015 to a sample pool of 18,509 sample records to achieve 2,200 survey interviews. The co-operation rate for the survey (completed interviews as a proportion of all completed interviews and refusals) was 61%; a worked note on these rates follows Table 14. Non-response analysis was conducted and weighting applied to correct for variations between respondents and non-respondents.
  • The general community survey recorded an average interview length of 13.3 minutes.
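
The release names the two selection procedures but does not spell them out, so the following Python sketch is illustrative only: it implements the next-birthday rule directly and stands in a simplified Westat-style rule (random choice in small households, next-birthday fallback otherwise) for the unspecified "modified Westat" process. The household data structure and its 'days_to_birthday' field are invented for the example.

    import random

    def next_birthday_selection(members):
        # Choose the in-scope household member whose birthday comes next.
        # 'days_to_birthday' is a hypothetical field for this sketch.
        return min(members, key=lambda m: m["days_to_birthday"])

    def westat_style_selection(members):
        # Simplified stand-in for the modified Westat process: pick at
        # random in households of one or two in-scope members, otherwise
        # fall back to the next-birthday rule. An assumption, not the
        # ABS's documented procedure.
        if len(members) <= 2:
            return random.choice(members)
        return next_birthday_selection(members)

    def select_respondent(members):
        # Mirror the 50/50 split between the two methods described above.
        if random.random() < 0.5:
            return next_birthday_selection(members)
        return westat_style_selection(members)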

TABLE 14: KEY FIELD STATISTICS – GENERAL COMMUNITY

Field                                 Key statistics

Target interviews                     2 200
Interviews achieved                   2 200
Average interview duration            13.3 minutes
Co-operation rate (sample yield)      60.6%
Response rate (AAPOR RR3)*            18.5%
Total sample records used             18 509
Total calls placed                    69 108
Fieldwork start date                  2 June 2015
Fieldwork finish date                 5 July 2015

*American Association for Public Opinion Research (AAPOR) Response Rate 3.
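
As a worked note on the two rates above (a derivation from the published figures, not numbers taken from the release): the co-operation rate is completed interviews over completed interviews plus refusals, so

\[ \frac{2\,200}{2\,200 + R} = 0.606 \;\Rightarrow\; R \approx 1\,430 \text{ implied refusals.} \]

The AAPOR Response Rate 3 takes the standard form

\[ \mathrm{RR3} = \frac{I}{(I + P) + (R + NC + O) + e(UH + UO)}, \]

where I is completed interviews, P partials, R refusals, NC non-contacts, O other non-response, and e the estimated eligible share of the unknown-eligibility cases (UH, UO). The much larger denominator is why RR3 (18.5%) sits well below the co-operation rate (60.6%).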

  • To ensure estimates made from the general community survey were representative of Australians aged 15 years or older, weights were calculated for each respondent in the dataset. Initial weights were calculated as the inverse of the product of the probability of selection (accounting for the overlapping mobile and landline populations) and the probability of response (based on a regression model incorporating auxiliary data available for both respondents and non-respondents). The initial weights were then adjusted so that they satisfied population benchmarks for age, gender, state, education, country of birth, and telephone status (a schematic sketch of this two-step weighting follows this list).
  • For comparison with the 2010 CTASS results, it was necessary to obtain a dual-frame estimate for the 2010 survey, given that only a landline estimate was available in 2010 (the sample frame at that time was the Electronic White Pages, which does not include mobile numbers). The ABS therefore developed a weighting strategy to allow a dual-frame estimate to be calculated. The efficacy of the adjustments relies heavily on an assumption of “constant proportionality” between landline and mobile phone responses (an illustrative form of such an adjustment is sketched after this list). As this assumption cannot be tested, time series results presented in this report should be treated as indicative only.
  • Statistical significance testing was conducted using the well-known Kish approximations (IBM, 2011; Kish, 1965; Potthoff et al., 1992)1; the effective sample size formula is given after this list. Caution should be used when drawing conclusions about reported significant differences for sub-groups where the effective sample size is too small to support reliable inference.
  • Subgroup categories (for example, male and female for gender) were derived from sample details (such as state for the landline sample) or questions asked in the survey. The one exception was socio-economic status (SES), which was derived from the education and occupation questions asked in the survey, with concordance to data available from the Australian Council for Educational Research (ACER)2.
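
To make the two-step weighting above concrete, here is a schematic Python sketch: base weights as the inverse of selection probability times modelled response probability, followed by raking (iterative proportional fitting) to marginal benchmarks. The function names, toy benchmarks and tolerance are invented for illustration; the ABS's actual response model and benchmark sources are not reproduced here.

    import numpy as np

    def base_weights(p_selection, p_response):
        # Initial weight: inverse of (selection probability x modelled
        # response probability), per the methodology above.
        return 1.0 / (p_selection * p_response)

    def rake(weights, categories, targets, max_iter=100, tol=1e-6):
        # Iterative proportional fitting: scale weights so that weighted
        # totals match each benchmark dimension in turn, repeating until
        # the adjustment factors settle.
        w = weights.astype(float).copy()
        for _ in range(max_iter):
            max_change = 0.0
            for dim, codes in categories.items():
                for cat, target in targets[dim].items():
                    mask = codes == cat
                    current = w[mask].sum()
                    if current > 0:
                        factor = target / current
                        w[mask] *= factor
                        max_change = max(max_change, abs(factor - 1.0))
            if max_change < tol:
                break
        return w

    # Toy example with invented numbers (4 respondents, 2 dimensions);
    # benchmark totals in each dimension sum to the same population.
    w0 = base_weights(np.array([0.001, 0.002, 0.001, 0.002]),
                      np.array([0.5, 0.4, 0.6, 0.5]))
    cats = {"gender": np.array(["m", "f", "f", "m"]),
            "state":  np.array(["NSW", "NSW", "VIC", "VIC"])}
    tgts = {"gender": {"m": 9.3e6, "f": 9.7e6},
            "state":  {"NSW": 11.4e6, "VIC": 7.6e6}}
    final_w = rake(w0, cats, tgts)  # satisfies both sets of margins

Note that the benchmark totals for each dimension sum to the same population (19.0 million in this toy run), which is required for the raking loop to converge.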

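The release does not give the algebraic form of the dual-frame back-cast. One common form such a ratio adjustment can take, shown purely to illustrate the role of the constant-proportionality assumption, is

\[ \hat{\theta}^{\,\mathrm{dual}}_{2010} \approx \hat{\theta}^{\,\mathrm{LL}}_{2010} \times \frac{\hat{\theta}^{\,\mathrm{dual}}_{2015}}{\hat{\theta}^{\,\mathrm{LL}}_{2015}}, \]

which is valid only if the ratio of dual-frame to landline-only estimates is stable over time. That is exactly the untestable assumption flagged above, and why the time series is indicative only.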

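For reference, the Kish approximation cited above replaces the nominal sample size with an effective sample size that discounts for unequal weights (Kish, 1965):

\[ n_{\mathrm{eff}} = \frac{\left( \sum_i w_i \right)^2}{\sum_i w_i^2}. \]

This equals n when all weights are equal and shrinks as the weights become more variable; the sub-group caution above applies where n_eff becomes small.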
INFORMED USERS SURVEY
  • The informed users survey set out to complete around 140 interviews from a list of 191 users of ABS statistics (see Table 15). The list comprised three sample types: academics, economists and journalists. No quotas were set by sample type; however, additional attempts were made to maximise responses from the two smallest groups, economists and journalists.
  • The final numbers achieved within sample groups reflected the sampling approach applied to each: journalists and academics, who had consented to participate in the survey after receiving prior notification from the ABS, recorded higher participation rates than economists, who were approached cold.

TABLE 15: DISTRIBUTION OF INTERVIEWS FOR THE INFORMED USERS

Type          Sample records provided   Interviews achieved   Proportion of sample (%)

Academic                         163                   131                         80
Journalist                         2                     2                        100
Economist                         26                     9                         35
Total                            191                   142                         74
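
As a check on the "%" header (rendered as "No." in the original layout), the final column is interviews achieved as a share of sample records provided:

\[ \tfrac{131}{163} \approx 80\%, \qquad \tfrac{2}{2} = 100\%, \qquad \tfrac{9}{26} \approx 35\%, \qquad \tfrac{142}{191} \approx 74\%. \]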


  • As detailed in Table 16, a total of 553 calls were placed to achieve 142 survey interviews. The co-operation rate for the survey was 99%, and the average interview length was 13.1 minutes.

TABLE 16: KEY FIELD STATISTICS – INFORMED USERS

Field                                 Key statistics

Target interviews                     140
Interviews achieved                   142
Average interview duration            13.1 minutes
Co-operation rate (sample yield)      98.6%
Response rate (AAPOR RR3)             77.3%
Total sample records used             191
Total calls placed                    553
Fieldwork start date                  10 June 2015
Fieldwork finish date                 24 June 2015


  • For reporting purposes, the two smallest user groups (economists, n=9; and journalists, n=2) have been combined to protect the privacy of the smallest group and to ensure their individual responses are not identifiable. Weighting was not applied to the informed users data.


ENDNOTES
1. IBM Corporation (2011). IBM SPSS Data Collection Survey Reporter 6.0.1 User’s Guide.
   Kish, L. (1965). Survey Sampling. New York: Wiley. ISBN 978-0471109495.
   Potthoff, R. F., Woodbury, M. A. and Manton, K. G. (1992). “Equivalent sample size” and “equivalent degrees of freedom” refinements for inference using survey weights under superpopulation models. Journal of the American Statistical Association, 87, 383-396.

2. McMillan, J., Jones, F. L. and Beavis, A. (2009) A New Scale for Measuring Socioeconomic Status in Educational Research: Development and validation of the Australian Socioeconomic Index 2006 (AUSEI06). Paper presented at the 2009 AARE International Education Research Conference, Canberra: National Convention Centre