4430.0.30.002 - Microdata: Disability, Ageing and Carers, Australia, 2003 (Reissue) Quality Declaration 
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 22/07/2005  Reissue

This document was added or updated on 02/10/2012.

SURVEY METHODOLOGY

INTRODUCTION
SCOPE AND COVERAGE
SAMPLE DESIGN AND SELECTION PROCEDURES
DATA COLLECTION FOR THE HOUSEHOLD COMPONENT
DATA COLLECTION FOR THE CARED-ACCOMMODATION COMPONENT
ESTIMATION PROCEDURES – PERSONS
ESTIMATION PROCEDURES – HOUSEHOLDS
DATA QUALITY
SAMPLING ERROR
NON-SAMPLING ERROR
RESPONSE RATES


INTRODUCTION

The 2003 Survey of Disability, Ageing and Carers (SDAC) had two components: the household component and the cared-accommodation (establishment) component. The household component covered people who lived in:

  • private dwellings such as houses, flats, home units, townhouses, tents, and other structures used as private places of residence at the time of the survey, including dwellings in retirement villages which had no nursing home or hospital care on site
  • non-private dwellings such as hotels, motels, boarding houses, educational and religious institutions, guest houses, construction camps, short-term caravan parks, youth camps and camping grounds, staff quarters, and self-care components of retirement villages which had a cared-accommodation component.
The cared-accommodation component covered residents of hospitals, nursing homes, aged care and disability hostels and other homes such as children's homes, who had been, or were expected to be, living there for at least three months.


SCOPE AND COVERAGE

Scope of the survey

The survey included people in both urban and rural areas in all States and Territories, except for those living in remote and sparsely settled parts of Australia. For most individual States and Territories the exclusion of these people has only a minor impact on any aggregate estimates that are produced because they only constitute a small proportion of the population. However, this is not the case for the Northern Territory where such persons account for over 20% of the population. The survey included people in both private and non-private dwellings, including people in cared-accommodation establishments but excluding those in gaols and correctional institutions.

The scope of the survey was all persons except:
  • visitors to private dwellings
  • certain diplomatic personnel of overseas governments, customarily excluded from the Census and estimated resident population figures
  • those whose usual residence was outside Australia
  • members of non-Australian defence forces (and their dependants) stationed in Australia.

Coverage

Coverage rules were applied to ensure that each person in scope was associated with only one dwelling and thus had only one chance of selection.

The household component and the cared-accommodation component of the survey each had their own coverage rules, as follows (see also the sketch after the list).
  • Usual residents of selected private dwellings were included in the survey unless they were away on the night of enumeration, and had been away or were likely to be away for four months or more. This was designed to avoid multiple selection of a person who might be spending time, for instance, in a nursing home, and be eligible for selection there.
  • Visitors to private dwellings were excluded as the expectation was that most would have their chance of selection at their usual residence.
  • Boarding school pupils were excluded from coverage but other persons in non-private dwellings in the scope of the survey were included if they had lived there, or were likely to live there, for four months or more.
  • Occupants of cared-accommodation establishments in the scope of the survey were included if they had been, or were expected to be, a usual resident of an establishment for three months or more. Persons who did not meet the three months residence criterion, such as patients in short-stay emergency care hospitals, were excluded from the cared-accommodation component.
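
The coverage rules above reduce to a few tests on dwelling type and actual or expected length of stay. The following sketch restates them for illustration only; the function and its inputs are hypothetical and assume that lengths of stay are known in months.

  # Illustrative restatement of the coverage rules above (hypothetical helper,
  # not part of any ABS system); stay lengths are assumed to be known in months.
  def in_coverage(dwelling_type, is_usual_resident=True, is_boarding_school_pupil=False,
                  months_away=0, months_resident=0):
      if dwelling_type == "private":
          # Usual residents only, unless away (or likely to be away) four months or more.
          return is_usual_resident and months_away < 4
      if dwelling_type == "non_private":
          # Boarding school pupils excluded; others need four or more months'
          # residence (actual or expected).
          return (not is_boarding_school_pupil) and months_resident >= 4
      if dwelling_type == "cared_accommodation":
          # Usual residents (actual or expected) of three months or more.
          return months_resident >= 3
      return False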

SAMPLE DESIGN AND SELECTION PROCEDURES

Multistage sampling techniques were used to select the sample for the survey. The actual sample included:
  • 14,019 private dwellings
  • 303 non-private dwelling units
  • 564 cared-accommodation establishments.
The final number of responding persons was 36,241 for the household component and 5,145 for the cared-accommodation component.

Private dwelling selection

The area-based selection of the private dwelling sample ensured that all sections of the population living within the geographic scope of the survey were represented. Each State and Territory was divided into geographically contiguous areas called strata. Strata were formed by first dividing Australia into regions, within State or Territory boundaries, broadly corresponding to the Statistical Division or Subdivision levels of the Australian Standard Geographical Classification. These regions were then divided into Local Government Areas (LGAs) in State Capital City Statistical Divisions (metropolitan regions), and into major urban centres as well as minor urban and rural parts in non-metropolitan regions. Each stratum contained a number of Population Census Collection Districts (CDs), each containing on average about 250 dwellings. The sample was selected to ensure that each dwelling within the same stratum had the same probability of selection.

In capital cities and other major urban or high population density areas the sample was selected in three stages:
  • a sample of CDs was selected from each stratum with probability proportional to the number of dwellings in each CD
  • each selected CD was divided into groups of dwellings or blocks of similar size, and one block was selected from each CD, with the probability proportional to the number of dwellings in the block
  • within each selected block a list of all private dwellings was prepared and a systematic random sample of dwellings was selected.
In strata with low population density each stratum was initially divided into units, usually corresponding to towns, LGAs or combinations of both. One or two units were then selected from each stratum with probability of selection proportional to the number of dwellings in each unit. Within selected units, the sample of dwellings was arrived at in the same manner as outlined for high population density areas. The effect of this approach is that the sample was not necessarily selected from every LGA; rather, the units selected represented neighbouring LGAs with similar geographical characteristics.
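
As an illustration of the selection mechanics described above, the sketch below draws a probability-proportional-to-size (PPS) sample of CDs and a systematic random sample of dwellings within a listed block. It is a simplified sketch with invented data structures, not the ABS selection system; the block stage can reuse the same PPS routine with a sample size of one.

  # Simplified sketch of PPS selection (e.g. of CDs or blocks) and systematic
  # selection of dwellings within a listed block. Not the ABS selection system.
  import random

  def pps_sample(units, sizes, n):
      """Systematic PPS selection of n units from a randomly ordered list,
      with probability proportional to the supplied size measure."""
      order = list(range(len(units)))
      random.shuffle(order)
      interval = sum(sizes) / n
      start = random.random() * interval
      targets = [start + k * interval for k in range(n)]
      selected, cumulative, t = [], 0.0, 0
      for i in order:
          cumulative += sizes[i]
          while t < n and targets[t] < cumulative:
              selected.append(units[i])
              t += 1
      return selected

  def systematic_sample(dwellings, n):
      """Systematic random sample of n dwellings from a listed block."""
      interval = len(dwellings) / n
      start = random.random() * interval
      return [dwellings[int(start + k * interval)] for k in range(n)]

  # e.g. one block per selected CD: block = pps_sample(blocks, block_sizes, 1)[0]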

Cared accommodation and other non-private dwelling selection

The sample of non-private dwellings was selected separately from the sample of private dwellings to ensure they were adequately represented. Non-private dwellings (including cared accommodation establishments) in each State and Territory were listed and sampled directly from these lists. Each non-private dwelling was given a chance of selection proportional to the average number of persons it accommodated. In order to identify the occupants to be included in the survey, all the occupants in each non-private dwelling were listed and then a random selection technique was applied.
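
A comparable sketch for the non-private dwelling stage is given below: establishments are drawn with probability related to average occupancy, and a simple random sample of listed occupants is then taken. The data, names and sample sizes are invented, and the weighted draw shown is a simplification of selection proportional to size.

  # Hypothetical sketch: selection of non-private dwellings weighted by average
  # occupancy, then simple random selection of listed occupants. Data invented.
  import random

  npds = [
      {"name": "Hostel A", "avg_occupancy": 40, "occupants": [f"A{i}" for i in range(38)]},
      {"name": "Boarding house B", "avg_occupancy": 15, "occupants": [f"B{i}" for i in range(16)]},
  ]

  weights = [d["avg_occupancy"] for d in npds]
  selected = random.choices(npds, weights=weights, k=1)   # weighted draw (with replacement)

  for npd in selected:
      occupants_in_sample = random.sample(npd["occupants"], k=min(3, len(npd["occupants"])))
      print(npd["name"], occupants_in_sample)
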
DATA COLLECTION FOR THE HOUSEHOLD COMPONENT

Data for the household component of the survey were collected by trained interviewers, mainly using personal computer-assisted interviewing (CAI). There were a number of stages. First, an interviewer conducted a computer-assisted interview with any responsible adult (ARA) in the household, to:
  • collect details of the composition of the household
  • collect demographic information (age, sex, birthplace, social marital status, relationship) about household members
  • identify people in the household who were of particular interest for this survey, so that they could be personally interviewed. These were people who:
    • had long-term health conditions
    • had a disability
    • were aged 60 years and over
    • regularly provided informal care in core activities to someone who was older or had a disability, and were considered to provide a greater level of care than others to that care recipient (possible primary carers).
For those people in the household who were not in these particular groups, a computer-assisted interview was then conducted using ARA methodology to collect information on education, labour force participation, income and housing.
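
The screening criteria listed above can be summarised as a simple flag on each household member; the sketch below is illustrative only, with invented field names.

  # Illustrative flag identifying household members to be personally interviewed,
  # based on the criteria listed above. Field names are invented.
  def needs_personal_interview(member):
      return (member.get("has_long_term_condition", False)
              or member.get("has_disability", False)
              or member.get("age", 0) >= 60
              or member.get("possible_primary_carer", False))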

Personal computer-assisted interviews were conducted with people aged 15 and over in the identified groups. Proxy interviews were conducted with parents on behalf of children with disabilities, and on behalf of people aged 15-17 where parental consent for a personal interview was not given. People who were prevented by their disability from responding personally were also interviewed by proxy (i.e. another person in the household answered for them).

Where there were language differences (including the need to use sign language), another member of the household was asked to interpret on behalf of, and with the permission of, the respondent. In some cases, arrangements were made to supply an interviewer conversant in the respondent's preferred language.

People who were confirmed as primary carers in their personal interview were also asked to complete a short self-enumerated paper questionnaire during the interview. This method allowed them to provide information on more sensitive issues, as the care recipient would often be present at the interview.

Interviewers for the household component of the survey were recruited from trained interviewers with previous experience in Australian Bureau of Statistics (ABS) household surveys. They were required to participate in CAI training, then in specific training for the survey, using laptop computers. Training emphasised understanding the survey concepts and definitions, and the necessary procedures to ensure a consistent approach to data collection.

Prior to enumeration, a letter and brochure were sent out to each household selected for the survey. These documents provided information about the purpose of the survey and how it would be conducted. Both contained the ABS guarantee of confidentiality, and the brochure also provided answers to some of the more commonly asked questions.
DATA COLLECTION FOR THE CARED-ACCOMMODATION COMPONENT

Overview

The cared-accommodation component completes the picture of the prevalence of health conditions, disability and levels of specific limitation or restriction in Australia. It also provides an indication of the balance between cared accommodation and community care for people with a disability, by age.

In the 1981 and 1988 surveys, interviews were held with residents of cared accommodation. Many of these residents were not able to respond for themselves, and it was necessary to try to arrange for family members, who may not have been living nearby, to come and provide proxy interviews. Often it was not possible to find anyone who knew enough to provide the required information.

For the 1993 survey the approach changed. A mail-back paper form was used, with a staff contact person as the respondent. The data collected were limited to the information a staff member could be expected to know from records. This method was also used for the 1998 and 2003 surveys.

Questionnaires

The administrators of selected cared-accommodation establishments were sent a letter informing them of the selection of their establishment in the survey. This letter also provided information on:
  • the purpose of the survey
  • how the data would be used
  • the ABS guarantee of confidentiality
  • the two-stage approach to data collection.
Three mail-back paper forms were developed for the cared-accommodation (establishment) component of the survey:
  • the Contact Information Form
  • the Selection Form
  • the Establishment Component Questionnaire, referred to below as the personal questionnaire.

Contact Information Form

The Contact Information Form (CIF) was sent, with the initial letter, to the administrators of selected cared-accommodation establishments. The purpose of the CIF was to confirm:
  • a suitable contact officer
  • the type of health establishment
  • the number of occupants in the establishment.

Selection Form

After receipt of the CIF, the ABS dispatched the Selection Form and personal questionnaires to nominated contact officers. The Selection Form provided instructions on how to list and select a random sample of residents from the establishment.

Personal Questionnaire

Personal questionnaires were completed by staff of the health establishments. Information provided was based on staff members' knowledge of the selected residents and on medical, nursing and administrative records.

Details of data collected and the relevant populations are in the Data item list in the Downloads tab. The personal questionnaire was field tested to ensure:
  • that there was minimum concern about the sensitivity or privacy aspects of the information sought
  • the questions could be answered by the contact officer named on the CIF on behalf of the residents in the sample
  • the information from the questionnaires could be processed into the data required.
The range of data collected in this component was smaller than in the household component. Topics such as income, or responses based on self-perception, were not suitable for collection. Others, such as home help, were not relevant to those living in cared accommodation.
ESTIMATION PROCEDURES – PERSONS

The estimation procedures developed for this survey ensure that survey estimates conform to independent benchmarks of the Australian population as at June 2003, at the State or Territory, by part of State, by age group, by sex level.

For the calculation of person estimates, a single set of benchmarks was used to weight both the household and cared-accommodation components of the survey. For questions common to both components, the two components were combined to represent the whole population, whereas for questions asked in only one component, that component represented only its own population.

Benchmarks

The benchmark used was all persons in Australia excluding persons living in sparsely settled areas of the Northern Territory. Conceptually, as persons in sparsely settled areas did not have a chance of selection in the SDAC, they should be removed from the population benchmarks. However, this is difficult to do accurately, so the benchmarks used include persons resident in sparsely settled areas except in the Northern Territory. The effect on survey estimates is considered to be negligible, as the proportion of each State's population resident in sparsely settled areas is very small.

Weighting methodology

Expansion factors or 'weights' were added to each respondent's record to enable the data provided by each person to be expanded to provide estimates relating to the whole population within the scope of the survey.

The first step of the weighting procedure was to assign an initial person weight to each fully responding person. The initial person weight was calculated as the inverse of the person's probability of selection in the sample, and took into account which component of the survey the respondent was selected in, i.e. the household component or the cared-accommodation component.

The next step in the weighting procedure was calibrating the initial person weights to a set of person level population benchmarks. The calibration to benchmarks ensures that the sample survey estimates agree with independent measures of the population at specific levels of disaggregation. In addition, the calibration reduces the impact of differential non-response bias at the specific levels of disaggregation, and also reduces sampling error.
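
The two weighting steps described above can be illustrated with a minimal post-stratification sketch: initial weights are the inverse of the selection probabilities, and the weights in each benchmark cell are then scaled so that they sum to the benchmark count. The data, selection probabilities and benchmark cells below are invented, and the ABS calibration method is more general than the simple scaling shown.

  # Minimal sketch of inverse-probability weighting followed by calibration
  # (shown here as simple post-stratification). All figures are invented.
  from collections import defaultdict

  respondents = [
      {"id": 1, "cell": ("NSW", "65-69", "F"), "p_selection": 1 / 500},
      {"id": 2, "cell": ("NSW", "65-69", "F"), "p_selection": 1 / 480},
      {"id": 3, "cell": ("NSW", "65-69", "M"), "p_selection": 1 / 510},
  ]
  benchmarks = {("NSW", "65-69", "F"): 150_000, ("NSW", "65-69", "M"): 140_000}

  # Step 1: initial weight = inverse of the probability of selection.
  for r in respondents:
      r["initial_weight"] = 1.0 / r["p_selection"]

  # Step 2: scale weights within each benchmark cell to agree with the benchmark.
  cell_totals = defaultdict(float)
  for r in respondents:
      cell_totals[r["cell"]] += r["initial_weight"]
  for r in respondents:
      r["weight"] = r["initial_weight"] * benchmarks[r["cell"]] / cell_totals[r["cell"]]

  print([round(r["weight"]) for r in respondents])
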
ESTIMATION PROCEDURES – HOUSEHOLDS

This survey was also designed to produce estimates of numbers of households. Only respondents living in private dwellings were given household weights. The estimation procedures developed for the household estimates ensure that survey estimates conform to independent benchmarks of Australian households as at June 2003, at the State or Territory, by part of State, by household composition level (where household composition is determined by the number of adults and children in a household).

Benchmarks

The benchmark used was all private dwelling households in Australia, excluding those households in sparsely settled areas of the Northern Territory.


DATA QUALITY

All reasonable attempts have been taken to ensure the accuracy of the results of the survey. Nevertheless, two potential sources of error – sampling and non-sampling error – should be kept in mind when interpreting results of the survey.
SAMPLING ERROR

Since the estimates are based on information obtained from a sample of the population, they are subject to sampling error (or sampling variability). Sampling error refers to the difference between the results obtained from the sample population and the results that would be obtained if the entire population were enumerated. Factors which affect the magnitude of sampling error include:
  • sample design: the design chosen attempted to make the survey results as accurate as possible while remaining within operational and cost constraints
  • sample size: the larger the sample on which the estimate is based, the smaller the sampling error will be
  • population variability: the extent to which people differ on the characteristics being measured. The smaller the population variability of a particular characteristic, the more likely it is that the population will be well represented by the sample, and therefore the smaller the sampling error.

Standard error

One measure of sampling variability is the standard error (SE). The SE is based on the 'normal' distribution and allows statements to be made about the likely accuracy of estimates. For example, there are about two chances in three that a sample estimate will differ by less than one SE from the figure that would have been obtained if the population were fully enumerated. The relative standard error (RSE) is the SE expressed as a percentage of the estimate to which it relates.
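
In symbols (a standard restatement in LaTeX notation, not notation used in this publication), with \hat{y} denoting the survey estimate and Y the figure a complete enumeration would have produced:

  % RSE as a percentage of the estimate, and the two-in-three statement
  % under approximate normality of the estimate.
  \mathrm{RSE}(\hat{y}) = \frac{\mathrm{SE}(\hat{y})}{\hat{y}} \times 100\%,
  \qquad
  \Pr\bigl(\,\lvert \hat{y} - Y \rvert < \mathrm{SE}(\hat{y})\,\bigr) \approx \tfrac{2}{3}.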

Very small estimates may be subject to such high RSEs as to detract seriously from their value for most reasonable purposes. Only estimates with RSEs less than 25% are considered sufficiently reliable for most purposes. Estimates with RSEs between 25% and 50% are included in Australian Bureau of Statistics (ABS) publications, but are preceded by the symbol * as a caution to indicate that they are subject to high RSEs. Estimates with RSEs greater than 50% are considered highly unreliable and are preceded by a ** symbol.
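
These publication rules can be restated as a simple annotation function; the sketch below is illustrative only and the function name is invented.

  # Illustrative annotation of estimates by RSE, following the rules above.
  def annotate_by_rse(estimate, rse_percent):
      if rse_percent > 50:
          return f"**{estimate}"   # highly unreliable
      if rse_percent >= 25:
          return f"*{estimate}"    # subject to high RSE; use with caution
      return str(estimate)         # sufficiently reliable for most purposes
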
NON-SAMPLING ERROR

Additional sources of error which are not related to sampling variability are referred to as non-sampling errors. This type of error is not specific to sample surveys and can occur in a census. The main sources of non-sampling error are:
  • errors related to scope and coverage
  • response errors such as incorrect interpretations or wording of questions, interviewer bias, etc.
  • processing errors such as mistakes in the recording or coding of the data obtained
  • non-response bias.
Each of these sources of error is discussed in the following paragraphs.

Errors related to scope and coverage

Some dwellings may have been incorrectly included in or excluded from this survey. An example of this form of error is an unclear distinction between the private and non-private status of a dwelling. All efforts were made to overcome such situations by constantly updating dwelling lists both before and during the survey.

There are also difficulties in applying the coverage or scope rules. Particular attention was paid to questionnaire design and interviewer training to ensure such cases were kept to a minimum.

Response errors

In this survey response errors may have arisen from three main sources: deficiencies in questionnaire design and methodology; deficiencies in interviewing technique; and inaccurate reporting by respondents.

For example, errors may be caused by misleading or ambiguous questions, inadequate or inconsistent definitions of terminology used, or by poor questionnaire sequence guides causing some questions to be missed. In order to overcome problems of this kind, individual questions and the overall questionnaire were thoroughly tested before being finalised for use in the survey.

Lack of uniformity in interviewing standards will also result in non-sampling errors. Thorough training programs, and regular supervision and checking of interviewers' work, were used to achieve and maintain uniform interviewing practices and a high level of accuracy in recording answers on the electronic survey collection instrument.

Processing errors

Processing errors may occur at any stage between initial collection of the data and final compilation of statistics. Specifically, in this survey, processing errors may have occurred at the following stages in the processing system:
  • clerical checking and coding – errors may have occurred during the checking of questionnaires and during coding of various items by office processors
  • data transfer – errors may have occurred during the transfer of data from the original questionnaire to the data file
  • editing – computer editing programs may have failed to detect errors which reasonably could have been corrected
  • manipulation of data – inappropriate edit checks, inaccurate weights in the estimation procedure and incorrect derivation of data items from raw survey data can also introduce errors into the results.
A number of steps were taken to minimise processing errors at various stages of the cycle. For example, detailed coding instructions were developed and staff engaged in coding were trained in the various classifications and procedures used.

Edits were devised to ensure that logical sequences were followed in the questionnaires, that necessary items were present and that specific values lay within certain ranges. In addition, at various stages during the processing cycle, tabulations were obtained from the data file showing the distribution of persons for different characteristics. These were used as checks on the contents of the data file, to identify unusual values which may have significantly affected estimates, and illogical relationships not previously picked up by edits.
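
The kinds of edits described above (presence, range and sequence checks) can be illustrated with a minimal sketch; the item names and permitted ranges below are invented and are not the actual SDAC edit specifications.

  # Minimal illustration of presence, range and sequence edits. Item names and
  # permitted ranges are invented, not the actual SDAC edit rules.
  def edit_record(record):
      errors = []
      if record.get("age") is None:                       # presence edit
          errors.append("age missing")
      elif not 0 <= record["age"] <= 120:                 # range edit
          errors.append("age out of range")
      if record.get("has_disability") == "no" and record.get("disability_type"):
          errors.append("disability_type reported without a disability")   # sequence edit
      return errors

  print(edit_record({"age": 130, "has_disability": "no", "disability_type": "sensory"}))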

Non-response bias

Non-response occurs when people cannot or will not provide information, or cannot be contacted. It can be total (none of the questions answered) or partial (some of the questions may be unanswered due to inability to answer or recall information etc.). This can introduce a bias to the results obtained in that non-respondents may have different characteristics from those persons who responded to the survey. The size of the bias depends upon these differences and the level of non-response.

It is not possible to accurately quantify the nature and extent of the differences between respondents and non-respondents in the survey; however every effort was made to reduce the level of non-response bias through careful survey design and estimation procedures.
RESPONSE RATES

Response rates for both the household and cared-accommodation components were high. Of the 16,039 private dwellings and special dwelling units in the effective sample, 89% were either fully responding or adequately complete. Of the 592 health establishments in the cared-accommodation component, 542 (92%) were fully responding.

TABLE 4.1 HOUSEHOLD COMPONENT, Response rates

                                        No.        %
Fully or adequately responding
  Fully responding                   12 071     75.3
  Adequately complete                 2 251     14.0
  Total                              14 322     89.3
Non-response
  Full refusal                          276      1.7
  Full non-contact                      756      4.7
  Other                                 685      4.3
  Total                               1 717     10.7
Total                                16 039    100.0


TABLE 4.2 CARED-ACCOMMODATION COMPONENT, Response rates

                                        No.        %
Fully-responding establishments         542     91.6
Partly-responding establishments         22      3.7
Non-responding establishments            28      4.7
Total                                   592    100.0