Attendance at Selected Cultural Venues and Events, Australia methodology

Reference period: 2017-18 financial year
Released: 26/03/2019

Explanatory notes

Introduction

1 This publication contains results from the Cultural Attendance Survey, a topic on the Multipurpose Household Survey (MPHS) conducted throughout Australia from July 2017 to June 2018. The MPHS, undertaken each financial year by the Australian Bureau of Statistics (ABS), is a supplement to the monthly Labour Force Survey (LFS) and is designed to collect statistics for a number of small, self-contained topics. In 2017-18, the topics were:

  • Patient Experience
  • Cultural Attendance
  • Cultural Participation
  • Crime Victimisation.

2 This publication covers the Cultural Attendance topic (also referred to as the Cultural Attendance Survey) and presents details about attendance at selected cultural venues and events, including libraries and archives, art galleries, museums, cinemas, live music concerts, theatre, dance and other performing arts. It also presents information about the characteristics of participants and the frequency of attendance in the 12 months prior to interview. Data for this topic were previously collected on the MPHS in 2013-14. Information on labour force characteristics, education, income and other demographics was also collected. For the first time, the Cultural Attendance Survey 2017-18 also collected attendance data for children aged 5-14 years.

Scope and coverage

3 For the first time, the scope of the 2017-18 Cultural Attendance Survey included children aged 5-14 years as well as people aged 15 years and over who were usual residents of private dwellings. The survey excluded:

  • members of the Australian permanent defence forces
  • certain diplomatic personnel of overseas governments customarily excluded from Census and estimated resident population counts
  • overseas residents in Australia
  • members of non-Australian defence forces (and their dependants) stationed in Australia
  • persons living in non-private dwellings such as hotels, university residences, boarding schools, hospitals, nursing homes, homes for people with disabilities, and prisons
  • persons resident in the Indigenous Community Strata (ICS)
  • children residing with parents or guardians who were all out of scope.

4 The scope for MPHS included households residing in urban, rural, remote and very remote parts of Australia, except the ICS.

5 In the LFS, rules are applied which aim to ensure that each person in coverage is associated with only one dwelling, and hence has only one chance of selection in the survey. See Labour Force, Australia (cat. no. 6202.0) for more detail.

Data collection

6 Each month, one eighth of the dwellings in the LFS sample were rotated out of the survey. These dwellings were selected for the MPHS. In these dwellings, after the LFS had been fully completed for each person in scope and coverage, a usual resident aged 15 years or over was selected at random (based on a computer algorithm) and asked the additional MPHS questions in a personal interview. The publication Labour Force, Australia (cat. no. 6202.0) contains information about survey and sample design, scope, coverage and population benchmarks relevant to the monthly LFS, and consequently the MPHS. This publication also contains definitions of demographic and labour force characteristics, and information about telephone interviewing.

7 In the MPHS, if the randomly selected person was aged 15 to 17 years, permission was sought from a parent or guardian before conducting the interview. If permission was not given, the parent or guardian was asked the questions on behalf of the 15 to 17 year old (proxy interview).

8 If the randomly selected person was aged 18 years or over, they were asked additional questions to determine whether they were a parent or guardian for any children aged 5-14 years who were usual residents of the household. If the respondent was a parent or guardian, they were asked questions about cultural attendance for up to two of their children aged 5-14 years. Children in scope were randomly selected based on a computer algorithm at the time of interview.

9 Data were collected using Computer Assisted Interviewing (CAI), whereby responses were recorded directly onto an electronic questionnaire on a notebook computer, with interviews conducted either face-to-face or over the telephone. The majority of interviews were conducted over the telephone.

Sample size

10 After taking into account sample loss, the response rate for the Cultural Attendance Survey was 71.1%. In total, information was collected from 28,243 fully responding persons. This includes 464 proxy interviews for people aged 15 to 17 years, where permission was not given by a parent or guardian for a personal interview, and 7,225 children aged 5-14 years whose parent or guardian was randomly selected to complete the MPHS.

Weighting, benchmarks and estimation

Weighting

11 Weighting is the process of adjusting results from a sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each enumerated person. The weight is a value which indicates the number of persons in the population represented by the sample person. 

12 The first step in calculating weights for each unit is to assign an initial weight, which is the inverse of the probability of being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 600, then the person would have an initial weight of 600 (that is, they represent 600 people).
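
As a simple illustration, the following Python sketch computes initial weights as the inverse of each person's probability of selection. The probabilities are invented for illustration and do not reflect the actual survey design:

```python
# Minimal sketch: initial (design) weights as the inverse of the
# selection probability. The probabilities are hypothetical.
selection_probabilities = [1 / 600, 1 / 450, 1 / 600]  # one per respondent

initial_weights = [1 / p for p in selection_probabilities]
print([round(w, 1) for w in initial_weights])  # [600.0, 450.0, 600.0]
```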

Benchmarks

13 The initial weights are calibrated to align with independent estimates of the population of interest, referred to as 'benchmarks'. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distribution of the population rather than the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over- or under-enumeration of particular categories of persons/households which may occur due to either the random nature of sampling or non-response.

14 The survey was benchmarked to the Estimated Resident Population (ERP) living in private dwellings in each state and territory at December 2017, excluding people living in Indigenous communities. These benchmarks are based on the 2016 Census.
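
A minimal Python sketch of the calibration idea appears below. It uses simple post-stratification, scaling weights within each state so they sum to an independent benchmark; the ABS's actual calibration method is more sophisticated, and all figures here are invented:

```python
# Illustrative post-stratification: scale initial weights within each
# benchmark cell (here, state) so they sum to the independent population
# benchmark for that cell. All figures are hypothetical.
respondents = [
    {"state": "NSW", "weight": 600.0},
    {"state": "NSW", "weight": 450.0},
    {"state": "VIC", "weight": 600.0},
]
benchmarks = {"NSW": 1_200_000, "VIC": 750_000}  # hypothetical ERP counts

# Sum the initial weights within each benchmark cell.
weight_sums = {}
for r in respondents:
    weight_sums[r["state"]] = weight_sums.get(r["state"], 0.0) + r["weight"]

# Scale each weight so cell totals match the benchmarks.
for r in respondents:
    r["weight"] *= benchmarks[r["state"]] / weight_sums[r["state"]]

# After calibration, weights in each state sum to that state's benchmark.
print({s: round(sum(r["weight"] for r in respondents if r["state"] == s))
       for s in benchmarks})  # {'NSW': 1200000, 'VIC': 750000}
```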

Estimation

15 Survey estimates of counts of persons are obtained by summing the weights of persons with the characteristic of interest.
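
For illustration, a short Python sketch of this estimation step, using hypothetical weights and responses:

```python
# Estimating a population count: sum the calibrated weights of the
# respondents who reported the characteristic of interest.
records = [
    {"weight": 685.7, "attended_gallery": True},
    {"weight": 514.3, "attended_gallery": False},
    {"weight": 750.0, "attended_gallery": True},
]

estimate = sum(r["weight"] for r in records if r["attended_gallery"])
print(round(estimate))  # 1436 -- estimated persons who attended
```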

Confidentiality

16 To minimise the risk of identifying individuals in aggregate statistics, a technique is used to randomly adjust cell values. This technique is called perturbation. Perturbation involves a small random adjustment of the statistics and is considered the most satisfactory technique for avoiding the release of identifiable statistics while maximising the range of information that can be released. These adjustments have a negligible impact on the underlying pattern of the statistics. After perturbation, a given published cell value will be consistent across all tables. However, adding up cell values to derive a total will not necessarily give the same result as published totals. The introduction of perturbation in publications ensures that these statistics are consistent with statistics released via services such as TableBuilder.
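
The ABS's perturbation algorithm is not detailed here. Purely as an illustration of the property described above (small, repeatable adjustments, so a given true value perturbs to the same published value wherever it appears), the following hypothetical Python sketch derives the adjustment from a keyed hash of the true cell value:

```python
import hashlib

# Illustrative only: the actual ABS perturbation method is not published
# here. This sketch shows the property described above -- the adjustment
# is small, pseudo-random, and deterministic, so the same true cell value
# always yields the same published value across all tables.
def perturb(true_count: int, key: str = "fixed-secret-key") -> int:
    digest = hashlib.sha256(f"{key}:{true_count}".encode()).digest()
    noise = digest[0] % 5 - 2          # deterministic noise in [-2, 2]
    return max(0, true_count + noise)  # never publish a negative count

print(perturb(1437))  # identical output for 1437 wherever it appears
```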

Reliability of estimates

17 All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error. For more information refer to the Technical Note.

18 Sampling error is the difference between the published estimate, derived from a sample of dwellings, and the value that would have been produced if all dwellings in scope of the survey had been included. 

19 Non-sampling error may occur in any collection, whether it is based on a sample or a full count of the population such as a census. Sources of non-sampling error include: non-response; errors in reporting by respondents or recording of answers by interviewers; and errors in coding and processing data. Every effort was made to reduce the non-sampling error by: careful design and testing of the questionnaire; training and supervision of interviewers; follow-up of respondents; and extensive editing and quality control procedures at all stages of data processing.

Data quality

20 Information recorded in this survey is 'as reported' by respondents, and may differ from that which might be obtained from other sources or via other methodologies. This factor should be considered when interpreting the estimates in this publication.

21 A small proportion of respondents were resident in areas with no Socio-economic Indexes for Areas (SEIFA) scores allocated. For the purposes of the Cultural Attendance Survey, these records have had a SEIFA decile imputed, based on the deciles of the surrounding areas. For information on SEIFA, see the Socio-economic Indexes for Areas (SEIFA) section below.

Data comparability

Comparability of time series

22 Cultural attendance data for persons aged 15 years and over have previously been collected by the ABS through: the Survey of Attendance at Selected Cultural/Leisure Venues, a supplementary survey to the Monthly Population Survey (MPS) in June 1991, March 1995 and April 1999; the General Social Survey (GSS) in 2002, 2006 and 2010; and the 2005-06, 2009-10 and 2013-14 MPHS. Caution should be exercised when comparing across ABS surveys, as estimates may differ due to differences in survey mode, methodology and questionnaire design.

23 The ABS seeks to maximise consistency and comparability over time by minimising changes to surveys. Sound survey practice, however, requires ongoing development to maintain and improve the integrity of the data. Key differences between 2017-18 and 2013-14 data are listed below.

24 The following content was collected in 2013-14 and not in 2017-18:

  • Attendance at zoological parks, wildlife parks and aquariums
  • Attendance at botanic gardens.

25 The following content was collected in 2017-18 and not in 2013-14:

  • Children's attendance at libraries or archives
  • Children's attendance at art galleries
  • Children's attendance at museums
  • Children's attendance at cinemas or drive-ins
  • Children's attendance at live music concerts or performances
  • Children's attendance at operas or musicals
  • Children's attendance at theatre performances
  • Children's attendance at dance performances
  • Children's attendance at other performing arts.

26 Questions about attendance at live music concerts or performances and attendance at other performing arts changed in 2017-18, and these changes prevent comparison with data on related topics from previous years. In 2017-18, respondents were asked whether they had been to any live music concerts or performances in the last 12 months. In 2013-14, respondents were asked separately about their attendance at classical music concerts; musicals; operas; popular music concerts; and popular music performances in a pub, club or cafe. The changes to the questions about attendance at live music concerts or performances between 2013-14 and 2017-18 also remove comparability for questions about attendance at other performing arts.

27 For 2017-18, the Cultural Attendance Survey included children aged 5-14 years for the first time, and thus no time series data are available for this content. Caution should be exercised when comparing across ABS surveys, as estimates may differ from those obtained from other surveys (such as the Children's Participation in Cultural and Leisure Activities Survey) due to differences in survey mode, methodology and questionnaire design.

Comparability to monthly LFS statistics

28 Since the Cultural Attendance Survey is conducted as a supplement to the LFS, data items collected in the LFS are also available in this publication. However, there are some important differences between the two surveys. The LFS had a response rate of over 90% compared to the MPHS response rate of 71.1%. The scope of the Cultural Attendance Survey and the LFS (refer to the Scope and Coverage section above) also differ. Due to the differences between the samples, data from the Cultural Attendance Survey and the LFS are weighted separately. Differences may therefore be found in the estimates for those data items collected in the LFS and published as part of the Cultural Attendance Survey.

Classifications

Geography

29 Australian geographic data are classified according to the Australian Statistical Geography Standard (ASGS): Volume 1 - Main Structure and Greater Capital City Statistical Areas, July 2011 (cat. no. 1270.0.55.001). Remoteness areas are classified according to the Australian Statistical Geography Standard (ASGS): Volume 5 - Remoteness Structure, July 2011 (cat. no. 1270.0.55.005).

Country of birth

30 Country of birth data are classified according to the Standard Australian Classification of Countries (SACC), Second Edition (cat. no. 1269.0).

Industry

31 Industry data are classified according to the Australian and New Zealand Standard Industrial Classification (ANZSIC), 2006 (Revision 2.0) (cat. no. 1292.0).

Occupation

32 Occupation data are classified according to the Australian and New Zealand Standard Classification of Occupations (ANZSCO), 2013, Version 1.2 (cat. no. 1220.0).

Education

33 Education data are classified according to the Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0). The ASCED is a national standard classification which can be applied to all sectors of the Australian education system including schools, vocational education and training and higher education. The ASCED comprises two classifications: Level of Education and Field of Education.

Socio-economic Indexes for Areas (SEIFA)

34 The 2017-18 survey uses the 2011 Socio-economic Indexes for Areas (SEIFA).

35 SEIFA is a suite of four summary measures that have been created from 2011 Census information. Each index summarises a different aspect of the socio-economic conditions of people living in an area. The indexes provide more general measures of socio-economic status than is given by measures such as income or unemployment alone. 

36 For each index, every geographic area in Australia is given a SEIFA number which shows how disadvantaged that area is compared with other areas in Australia.

37 The index used in the Cultural Attendance publication is the Index of Relative Socio-economic Disadvantage, derived from Census variables related to disadvantage such as low income, low educational attainment, unemployment, jobs in relatively unskilled occupations and dwellings without motor vehicles. 

38 SEIFA uses a broad definition of relative socio-economic disadvantage in terms of people's access to material and social resources, and their ability to participate in society. While SEIFA represents an average of all people living in an area, it does not represent the individual situation of each person. Larger areas are more likely to have greater diversity of people and households.

39 For more detail, see the Socio-economic Indexes for Areas (SEIFA) publications available on the ABS website.

Products and services

40 Data Cubes containing all tables for this publication in Excel spreadsheet format are available from the Data downloads section. The spreadsheets present tables of estimates and proportions, and their corresponding relative standard errors (RSEs). Survey microdata from the Cultural Attendance topic will be released through the TableBuilder product. For more details, please refer to the TableBuilder information, Cultural Activities, Australia (cat. no. 4921.0.55.001).

41 Special tabulations of the data are available on request. Subject to confidentiality and sampling variability constraints, tabulations can be produced from the survey incorporating data items, populations and geographic areas (including state and territory level data), tailored to individual requirements. These are provided in electronic form. A list of data items from the 2017-18 Cultural Attendance Survey is available from the Data downloads section. All enquiries should be made to the National Information and Referral Service on 1300 135 070, or email client.services@abs.gov.au.

42 For further information about these and related statistics, contact the National Information and Referral Service on 1300 135 070, or email client.services@abs.gov.au. The ABS Privacy Policy outlines how the ABS will handle any personal information that you provide to us.

Future surveys

43 The ABS is conducting the MPHS again during the 2018-19 financial year. The 2018-19 MPHS topics are:

  • Qualifications and Work
  • Barriers and Incentives to Labour Force Participation
  • Retirement and Retirement Intentions
  • Patient Experience
  • Crime Victimisation.

44 The next Cultural Attendance Survey is scheduled to occur in 2021-22.

Acknowledgements

45 ABS surveys draw extensively on information provided by individuals, businesses, governments and other organisations. Their continued cooperation is very much appreciated and without it, the wide range of statistics published by the ABS would not be available. Information received by the ABS is treated in strict confidence as required by the Census and Statistics Act 1905.

Related publications

46 Current publications and other products released by the ABS are available from the ABS website. The ABS also issues a daily upcoming release advice on the website that details products to be released in the week ahead.

Technical note

Reliability of the estimates

1 The estimates in this publication are based on information obtained from a sample survey. Any data collection may encounter factors, known as non-sampling error, which can impact on the reliability of the resulting statistics. In addition, estimates based on sample surveys are subject to sampling variability: that is, they may differ from the estimates that would have been produced had all persons in the population been included in the survey. This is known as sampling error.

Non-sampling error

2 Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording of answers by interviewers and errors in coding and processing data. Every effort is made to reduce non-sampling error by careful design and testing of questionnaires, training and supervision of interviewers, and extensive editing and quality control procedures at all stages of data processing. It is not possible to quantify the non-sampling error.

Sampling error

3 One measure of sampling error is given by the standard error (SE), which indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.

4 Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate. The RSE is a useful measure in that it provides an immediate indication of the percentage error likely to have occurred due to sampling and therefore avoids the need to also refer to the size of the estimate.

\(\Large{RSE\% = \left(\frac{SE}{\text{estimate}}\right) \times 100}\)
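
A short worked example in Python, with hypothetical figures, converting between an SE and an RSE and forming the approximate 95% interval described in paragraph 3:

```python
# Worked example of the formula above, with hypothetical figures.
estimate = 120_000
se = 6_000

rse_percent = se / estimate * 100            # 5.0
se_recovered = rse_percent * estimate / 100  # 6000.0, inverting the formula

# About 19 chances in 20 that the population value lies within two SEs:
lower, upper = estimate - 2 * se, estimate + 2 * se  # 108000, 132000
```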

5 Only estimates (numbers or percentages) with RSEs less than 25% are considered sufficiently reliable for most analytical purposes. However, estimates with larger RSEs have been included. Estimates with an RSE in the range 25% to 50% should be used with caution while estimates with RSEs greater than 50% are considered too unreliable for general use. All cells in the Excel spreadsheets with RSEs greater than 25% have been annotated and footnoted.

6 The Excel spreadsheets in the Data downloads section contain all the tables produced for this release and the calculated RSEs for each of the estimates.

Calculations of standard errors

7 Standard errors can be calculated using the estimates (counts or percentages) and the corresponding RSEs. See 'What is a Standard Error and Relative Standard Error, Reliability of estimates for Labour Force data' for more details.

Standard errors of proportions and percentages

8 Proportions and percentages formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. This formula is only valid when x is a subset of y:

\(\Large{RSE\left(\frac{x}{y}\right) \approx \sqrt{[RSE(x)]^{2} - [RSE(y)]^{2}}}\)
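
A minimal Python sketch of this approximation, using hypothetical RSEs:

```python
import math

# Approximate RSE of a proportion x/y, per the formula above. Valid only
# when x is a subset of y. The RSEs (in percent) are hypothetical.
def rse_of_proportion(rse_x: float, rse_y: float) -> float:
    return math.sqrt(rse_x ** 2 - rse_y ** 2)

print(round(rse_of_proportion(10.0, 4.0), 1))  # 9.2 (percent)
```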

Comparisons of estimates

9 The difference between two survey estimates (counts or percentages) can also be calculated from published estimates. Such an estimate is also subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

\(\Large{SE(x-y) \approx \sqrt{[SE(x)]^{2} + [SE(y)]^{2}}}\)

10 While this formula will only be exact for differences between separate and uncorrelated characteristics or sub populations, it provides a good approximation for the differences likely to be of interest in this publication.
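
A minimal Python sketch of this approximation, using hypothetical SEs:

```python
import math

# Approximate SE of the difference between two estimates, per the formula
# above (exact only for uncorrelated estimates). Values are hypothetical.
def se_of_difference(se_x: float, se_y: float) -> float:
    return math.sqrt(se_x ** 2 + se_y ** 2)

print(se_of_difference(3_000.0, 4_000.0))  # 5000.0
```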

Significance testing

11 A statistical significance test for a comparison between estimates can be performed to determine whether it is likely that there is a difference between the corresponding population characteristics. The standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula shown above in the Comparisons of estimates section. This standard error is then used to calculate the following test statistic:

\(\LARGE{\frac{|x-y|}{SE(x-y)}}\)

where

\(\Large{SE(y) = \frac{RSE(y) \times y}{100}}\)

and similarly for \(SE(x)\).

12 If the value of this test statistic is greater than 1.96 then there is evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations with respect to that characteristic.
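
Putting paragraphs 11 and 12 together, a short Python sketch with hypothetical estimates and RSEs:

```python
import math

# End-to-end significance test following paragraphs 11-12. The estimates
# and their RSEs (in percent) are hypothetical.
x, rse_x = 150_000, 4.0
y, rse_y = 130_000, 5.0

se_x = rse_x * x / 100                      # 6000.0
se_y = rse_y * y / 100                      # 6500.0
se_diff = math.sqrt(se_x ** 2 + se_y ** 2)  # ~8846

test_statistic = abs(x - y) / se_diff       # ~2.26
print(test_statistic > 1.96)                # True: significant at 95% level
```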
