4430.0.30.002 - Microdata: Disability, Ageing and Carers, Australia, 2009 Quality Declaration 
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 09/03/2012  Reissue

The scope of SDAC was persons in both urban and rural areas in all states and territories, living in both private and non-private dwellings (including persons in cared-accommodation), but excluding:
    • diplomatic personnel of overseas governments
    • persons whose usual residence was outside Australia
    • members of non-Australian defence forces (and their dependants) stationed in Australia
    • persons living in very remote areas.
The coverage of SDAC was the same as the scope except that the following (small) populations were not enumerated for operational reasons:
    • persons living in Indigenous communities in non-very remote areas
    • persons living in boarding schools
    • persons living in gaols or correctional institutions.
Rules were applied to maximise the likelihood that each person in coverage was associated with only one dwelling and thus had one chance of selection.

Usual residents of selected private dwellings and non-private dwellings (excluding persons in cared-accommodation) were included in the survey unless they were away on the night of enumeration and had been away, or were likely to be away, for seven months or more. This rule was designed to avoid multiple chances of selection for a person who might, for instance, be spending time in a nursing home and be eligible for selection there.

Visitors to private dwellings were excluded from coverage as the expectation was that most would have their chance of selection at their usual residence.

Occupants of cared-accommodation establishments in the scope of the survey were enumerated if they had been, or were expected to be, a usual resident of an establishment for three months or more.


Multi-stage sampling techniques were used to select the sample for the survey. After sample loss, the household sample included approximately 27,600 private dwellings and 200 non-private dwellings, while the cared-accommodation sample included approximately 1,100 establishments.

After exclusions due to scope and coverage, the final sample comprised 64,213 persons for the household component and 9,470 persons for the cared-accommodation component.

DATA COLLECTION METHODOLOGY

Different data collection methods were used for the household component and the cared-accommodation component.

The household component covered persons in:
    • private dwellings such as houses, flats, home units and townhouses
    • non-private dwellings such as hotels, motels, boarding houses, short-term caravan parks, and self-care components of retirement villages.
Smaller disability homes (with fewer than six persons) were considered to be private dwellings.

The cared-accommodation component covered residents of hospitals, nursing homes, hostels and other homes such as children's homes, who had been, or were expected to be, living there or in another health establishment for three months or more.

Household component

Data for the household component of the survey were collected by trained interviewers, who conducted computer-assisted personal interviews.

A series of screening questions were asked of a responsible adult in a selected household to establish whether the household included:
    • people with a disability
    • people aged 60 years and over
    • people who were carers of persons with a core-activity limitation, living either in the same household or elsewhere, or who provided any care to persons living elsewhere.
Where possible, a personal interview was conducted with people identified in any of the above populations. Proxy interviews were conducted for:
    • children aged less than 15 years
    • those aged 15 to 17 years whose parents did not permit them to be personally interviewed
    • those with a disability that prevented them from having a personal interview.
People with a disability were asked questions relating to:
    • help and assistance needed and received for communication, mobility, self-care, cognition or emotion, health care, household chores, property maintenance, meal preparation, reading and writing tasks, and transport activities
    • their computer and Internet use
    • participation in community activities
    • schooling restrictions (persons aged 5 to 20 years (or their proxies)) and employment restrictions (persons aged 15 to 64 years).
People aged 60 years and over without a disability were asked questions about:
    • need for, and receipt of, help for household chores, property maintenance, meal preparation, reading and writing tasks, and transport activities
    • their computer and Internet use
    • participation in community activities.
Persons who confirmed they were the primary carer of a person with a disability or an older person were asked about:
    • the assistance they provided
    • the assistance they could call on
    • their employment experience
    • their attitudes to, and experience of, their caring role.
Basic demographic and socio-economic information was collected for all people in the household. Most of this information was provided by a responsible adult in the household.

Cared-accommodation component

The cared-accommodation component was enumerated in two stages using a mail-based methodology directed to administrators of selected establishments.

The first stage required completion of a Contact Information Form to establish the name of a contact officer, the current number of occupants within the establishment and the type of establishment.

The second stage required the nominated contact officer to select occupants in their establishment, following the instructions provided. A separate questionnaire was completed for each selected occupant.

The range of data collected in the cared-accommodation component was smaller than in the household component as some topics were not suitable for collection through a paper questionnaire or were irrelevant to those residing in cared-accommodation.

WEIGHTING, BENCHMARKING AND ESTIMATION


Weighting is the process of adjusting results from a sample survey to infer results for the total population. To do this, a 'weight' is allocated to each enumerated person. The weight is a value which indicates how many population units are represented by the sample unit.

The first step in calculating weights for each person is to assign an initial weight, which is equal to the inverse of the probability of being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 300, then the person would have an initial weight of 300 (that is, they represent 300 people).
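As a minimal sketch of this step (using a hypothetical selection probability, not an actual SDAC design figure), the initial weight is simply the inverse of the probability of selection:

```python
# A minimal sketch of the initial (design) weight. The probability used
# here is hypothetical, not an actual SDAC design figure.
def initial_weight(selection_probability):
    """Return the inverse of the probability of selection in the survey."""
    return 1.0 / selection_probability

# A person selected with probability 1 in 300 represents 300 people.
w = initial_weight(1 / 300)
```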

The responses from persons in the cared-accommodation component and persons in the private dwelling and non-cared accommodation components of the survey were weighted together in order to represent the entire in-scope population.


The weights were calibrated to align with independent estimates of the population, referred to as benchmarks, in designated categories of sex by age by area of usual residence. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distribution of the population rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over- or under-enumeration of particular categories of persons which may occur due to either the random nature of sampling or non-response.
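The calibration idea can be sketched as simple post-stratification: within each benchmark cell, weights are scaled so they sum to the independent population estimate for that cell. This is a simplified illustration (the ABS calibration method is more elaborate), and the cell labels and figures below are invented:

```python
from collections import defaultdict

def calibrate(records, benchmarks):
    """Scale weights so each cell's weights sum to its benchmark total.

    records: list of (cell, weight) pairs; benchmarks: {cell: population}.
    A simplified post-stratification sketch of weight calibration.
    """
    totals = defaultdict(float)
    for cell, weight in records:
        totals[cell] += weight
    return [(cell, weight * benchmarks[cell] / totals[cell])
            for cell, weight in records]

# Hypothetical sex-by-age-by-area cells and figures, not SDAC benchmarks.
sample = [("F/65+/NSW", 250.0), ("F/65+/NSW", 310.0), ("M/65+/NSW", 280.0)]
bench = {"F/65+/NSW": 700.0, "M/65+/NSW": 300.0}
adjusted = calibrate(sample, bench)
```

After calibration, the weights in each cell sum to that cell's benchmark, regardless of how the sample happened to fall.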

The survey was benchmarked to the estimated resident population (ERP) in each state and territory, excluding those living in very remote areas of Australia, at 30 June 2009. The SDAC estimates do not (and are not intended to) match estimates for the total Australian population obtained from other sources (which may include persons living in very remote parts of Australia).


Survey estimates of counts of persons are obtained by summing the weights of persons with the characteristic of interest. Estimates of non-person counts (e.g. days away from work) are obtained by multiplying the characteristic of interest by the weight of the reporting person and aggregating.

RELIABILITY OF ESTIMATES

All sample surveys are subject to error which can be broadly categorised as either sampling error or non-sampling error.

Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured as it is calculated based on the scientific methods used to design surveys. Non-sampling error can occur at any stage throughout the survey process. For example, persons selected for the survey may not respond (non-response); survey questions may not be clearly understood by the respondent; responses may be incorrectly recorded by interviewers; or there may be errors when coding or processing the survey data.

Sampling error

One measure of the likely difference between an estimate derived from a sample of persons and the value that would have been produced if all persons in scope of the survey had been included is given by the Standard Error (SE). The SE indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that the sample estimate will differ by less than one SE from the number that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.
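In concrete terms (with a hypothetical estimate and SE, not SDAC figures), the one-SE and two-SE ranges described above look like this:

```python
# Hypothetical estimate and standard error, illustrating the one-SE
# (about 2 chances in 3) and two-SE (about 19 chances in 20) ranges.
estimate = 100_000
se = 4_000

range_one_se = (estimate - se, estimate + se)          # ~67% range
range_two_se = (estimate - 2 * se, estimate + 2 * se)  # ~95% range
```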

Another measure of the likely difference is the Relative Standard Error (RSE), which is obtained by expressing the SE as a percentage of the estimate.

Generally, only estimates (numbers, percentages, means and medians) with RSEs less than 25% are considered sufficiently reliable for most purposes. In ABS publications, estimates with an RSE of 25% to 50% are preceded by an asterisk (e.g. *15.7) to indicate that the estimate should be used with caution. Estimates with RSEs over 50% are indicated by a double asterisk (e.g. **2.8) and should be considered unreliable for most purposes.
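The RSE definition and the flagging convention above can be sketched as follows; the thresholds come straight from the text, while the function names and figures are illustrative:

```python
def rse(estimate, standard_error):
    """Relative standard error: the SE expressed as a percentage of the estimate."""
    return 100.0 * standard_error / estimate

def flag(value, rse_pct):
    """Apply the ABS publication flags for high-RSE estimates."""
    if rse_pct > 50:
        return f"**{value}"  # considered unreliable for most purposes
    if rse_pct >= 25:
        return f"*{value}"   # should be used with caution
    return f"{value}"

# RSE of 4.7 on an estimate of 15.7 is about 30%, so a single asterisk.
flagged = flag(15.7, rse(15.7, 4.7))
```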

In addition to the main weight (as outlined earlier), each record on the CURF also contains 60 'replicate weights'. The purpose of these replicate weights is to enable the calculation of the standard error of each estimate produced. This method is known as the 60-group Jackknife variance estimator.

The basic concept behind this replication approach is to select different sub-samples repeatedly (60 times) from the whole sample. For each of these sub-samples the statistic of interest is calculated. The variance of the full-sample statistic is then estimated using the variability among the replicate statistics calculated from these sub-samples. As well as enabling variances of estimates to be calculated relatively simply, replicate weights also enable unit record analyses such as chi-square tests and logistic regression to be conducted in a way that takes the sample design into account.
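A minimal sketch of the group Jackknife described above, assuming the standard delete-a-group formula with a (G-1)/G scaling factor. The data and the number of groups are invented for brevity (the CURF supplies 60 replicate weights per record):

```python
import math

def jackknife_se(values, main_weights, replicate_weights):
    """Standard error of a weighted total using the group Jackknife.

    replicate_weights is a list of G weight lists (G = 60 on the CURF),
    each parallel to values. Assumes the standard (G-1)/G scaling.
    """
    g = len(replicate_weights)
    theta = sum(v * w for v, w in zip(values, main_weights))
    replicates = [sum(v * w for v, w in zip(values, rw))
                  for rw in replicate_weights]
    variance = (g - 1) / g * sum((t - theta) ** 2 for t in replicates)
    return math.sqrt(variance)

# Two-group toy example: replicate estimates of 32 and 28 around a
# full-sample estimate of 30 give a standard error of 2.
se = jackknife_se([1.0, 1.0], [10.0, 20.0], [[12.0, 20.0], [8.0, 20.0]])
```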

Further information about RSEs and how they are calculated can be referenced in the 'Technical Note' section in 2009 Disability, Ageing and Carers, Australia: Summary of Findings (cat. no. 4430.0).

Non-sampling error

Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. One of the main sources of non-sampling error is non-response by persons selected in the survey. Non-response occurs when persons cannot or will not cooperate, or cannot be contacted. Non-response can affect the reliability of results and can introduce a bias. The magnitude of any bias depends upon the rate of non-response and the extent of the difference between the characteristics of those persons who responded to the survey and those who did not.

Every effort was made to reduce non-response and other non-sampling errors by careful design and testing of the questionnaire, training and supervision of interviewers, and undertaking extensive editing and quality control procedures at all stages of data processing.