4442.0.55.001 - Microdata: Family Characteristics, Australia, 2009-10 Quality Declaration
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 27/04/2012   




The scope of the 2009-10 FCS included all usual residents in private dwellings, except:

  • diplomatic personnel of overseas governments, and their dependants, excluded from censuses and surveys of Australian residents
  • members of non-Australian defence forces stationed in Australia, and their dependants
  • persons living in private dwellings in very remote parts of Australia
  • persons living in non-private dwellings such as hotels and university residences, students at boarding schools, patients in hospitals, residents of homes (e.g.
    retirement homes, homes for persons with disabilities, women's shelters), and inmates of prisons.

The survey was conducted across urban and rural areas in all states and territories, but excluded people living in very remote parts of Australia who would otherwise have been within the scope of the survey. The exclusion of these people will only have a minor impact on any aggregate estimates that are produced for states and territories, with the exception of the Northern Territory where people living in very remote areas account for approximately 24% of the total number of people in the population aged 15 years and over.


ABS interviewers conducted personal interviews, either by telephone or in person at selected dwellings, from July 2009 to June 2010. Each month a sample of dwellings was selected for the Multipurpose Household Survey (MPHS) from the responding households in the last rotation group of the Monthly Population Survey (MPS). In these dwellings, after the MPS had been fully completed for each person, one usual resident aged 15 years and over was selected at random and asked the additional MPHS questions in a personal interview. Information was collected using Computer Assisted Interviewing (CAI), whereby responses are recorded directly onto an electronic questionnaire on a notebook computer.

The survey collected information from the randomly selected person about the household and about every person in the household, including all children in the household. There were 35,525 person records for the survey.

Where the randomly selected respondent was aged 15–17 years, and a parent/guardian or other responsible adult aged 18 years and over was resident in the household, permission was sought from that parent or other adult to interview the young person. Regardless of whether permission was granted, details of family characteristics and household income (excluding the income of the selected person) were collected from the parent or other adult.

The survey collected information about parent-child relationships beyond the usual residence of the child. The survey collected information about resident children aged 0–17 years in the household who had a natural parent living in another household. In addition, the survey identified whether respondents were parents who had natural children aged 0–17 years living elsewhere with the child's other natural parent.

WEIGHTING, BENCHMARKING AND ESTIMATION


Weighting is the process of adjusting results from the sample survey to infer results for the total in-scope population. To do this, a ‘weight’ is allocated to each sample unit (i.e. a person, a family or a household). The weight is a value which indicates how many population units are represented by the sample unit.
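As a toy illustration of this idea (all records and weights below are invented), a population count is estimated by summing the weights of the sample units that have the characteristic of interest:

```python
# Toy records: each sample household carries a weight indicating how many
# population households it represents. All figures are invented.
records = [
    {"weight": 650.0, "couple_family": True},
    {"weight": 580.0, "couple_family": False},
    {"weight": 720.0, "couple_family": True},
]

# Estimated number of couple families: sum the weights of matching records.
estimate = sum(r["weight"] for r in records if r["couple_family"])
print(estimate)  # 1370.0
```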

The first step in calculating weights for each sample unit is to assign an initial weight, which is equal to the inverse of the probability of being selected in the survey. For example, if the probability of a person being selected in the survey was 1 in 600, then the person would have an initial weight of 600 (that is, they represent 600 people).
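A minimal sketch of this step, using the illustrative probability from the paragraph above:

```python
def initial_weight(selection_probability: float) -> float:
    """The initial weight is the inverse of the probability of selection."""
    return 1.0 / selection_probability

# A person selected with probability 1 in 600 represents about 600 people.
print(round(initial_weight(1 / 600)))  # 600
```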


The initial weights were calibrated to align with independent estimates of the population of interest, referred to as ‘benchmarks’, in designated categories of age by sex by area of usual residence. Weights are calibrated against population benchmarks to ensure that the survey estimates conform to the independently estimated distribution of the population, rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over- or under-enumeration of particular groups of persons which may occur due to either the random nature of sampling or survey non-response.
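A simple post-stratification sketch of this idea, one basic form of calibration; the actual ABS procedure is more sophisticated, and the weights, cells and benchmark totals here are all invented:

```python
def calibrate(weights, cells, benchmarks):
    """Scale each unit's weight so that the weighted total within each
    benchmark cell (e.g. an age-by-sex-by-area category) matches the
    independent benchmark for that cell."""
    cell_totals = {}
    for w, c in zip(weights, cells):
        cell_totals[c] = cell_totals.get(c, 0.0) + w
    return [w * benchmarks[c] / cell_totals[c] for w, c in zip(weights, cells)]

weights = [600.0, 600.0, 500.0]            # invented initial weights
cells = ["F_15-24", "F_15-24", "M_15-24"]  # benchmark cell of each unit
benchmarks = {"F_15-24": 1500.0, "M_15-24": 450.0}

new_weights = calibrate(weights, cells, benchmarks)
print(new_weights)  # [750.0, 750.0, 450.0]
```

After calibration, the weighted cell totals (750 + 750 = 1500, and 450) equal the benchmarks, so estimates conform to the benchmark distribution rather than the sample's.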

For person estimates, the 2009-10 Family Characteristics Survey was benchmarked to the Estimated Resident Population (ERP) as at 31 March 2010 in each state and territory, excluding the ERP living in very remote areas of Australia. For household estimates, the Family Characteristics Survey was benchmarked to independently calculated estimates of the total number of households in Australia. The estimates from this survey do not (and are not intended to) match estimates for the total Australian person/household population obtained from other ABS sources (which may include persons in very remote parts of Australia).

The survey estimates conform to person benchmarks by State, part-of-State, age and sex, and to household benchmarks by State, part-of-State and household composition (number of adults and children usually resident in the household). These benchmark variables are the same as those used in the 2006-07, 2003, and 1997 Family Characteristics surveys. The only change has been in the age groups for which some collapsing was required for each collection. The impact of this change on estimates not involving age is minimal.


Survey estimates of counts of persons, families or households are obtained by summing the relevant weights for persons, families or households with the characteristic of interest. For more information, refer to the 'Weights and estimation' section in Using the CURF Data.

RELIABILITY OF ESTIMATES

All sample surveys are subject to error, which can be broadly categorised as either sampling or non-sampling error.

Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured because it is calculated using the scientific methods applied in designing surveys. Non-sampling error can occur at any stage of the survey process. For example, persons selected for the survey may not respond (non-response); survey questions may not be clearly understood by the respondent; responses may be incorrectly recorded by interviewers; or there may be errors when coding or processing the survey data.

Sampling error

One measure of the likely difference between an estimate derived from a sample of persons and the value that would have been produced if all in-scope persons had been included is given by the Standard Error (SE). The SE indicates the extent to which an estimate might have varied by chance because only a sample of persons was included. There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the value that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.
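For example, with an invented estimate of 100,000 and SE of 2,500, the one-SE and two-SE intervals described above are:

```python
estimate, se = 100_000, 2_500  # invented figures, for illustration only

one_se_interval = (estimate - se, estimate + se)          # ~67% chance the 'true' value lies here
two_se_interval = (estimate - 2 * se, estimate + 2 * se)  # ~95% chance
print(one_se_interval)  # (97500, 102500)
print(two_se_interval)  # (95000, 105000)
```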

Another measure of the sampling error is the Relative Standard Error (RSE) which is obtained by expressing the SE as a percentage of the estimate.

Generally, only estimates (numbers, percentages, means and medians) with RSEs less than 25% are considered sufficiently reliable for most purposes. In ABS publications, estimates with an RSE of 25% to 50% are preceded by an asterisk (e.g. *15.7) to indicate that the estimate should be used with caution. Estimates with RSEs over 50% are indicated by a double asterisk (e.g. **2.8) and should be considered unreliable for most purposes. Instructions on how to calculate SEs and RSEs can be found in the 'Standard Errors' section in Using the CURF Data.
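The RSE calculation and the asterisk flagging convention described above can be sketched as follows (the estimates and SEs are invented):

```python
def rse(estimate: float, se: float) -> float:
    """Relative Standard Error: the SE expressed as a percentage of the estimate."""
    return 100.0 * se / estimate

def flag(estimate: float, se: float) -> str:
    """Apply the asterisk convention: * for an RSE of 25% to 50%,
    ** for an RSE over 50%."""
    r = rse(estimate, se)
    if r > 50:
        return f"**{estimate}"
    if r >= 25:
        return f"*{estimate}"
    return str(estimate)

print(flag(15.7, 4.7))  # *15.7   (RSE about 30%: use with caution)
print(flag(2.8, 1.7))   # **2.8   (RSE about 61%: unreliable for most purposes)
```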

In addition to the main weight (as outlined earlier), each record on the CURF also contains 30 'replicate weights'. The purpose of these replicate weights is to enable the calculation of the sampling error on each estimate produced.

The basic concept behind the replication approach is to select different sub-samples repeatedly (30 times) from the whole sample. For each of these sub-samples the statistic of interest is calculated. The variance of the full sample statistics is then estimated using the variability among the replicate statistics calculated from these sub-samples. As well as enabling variances of estimates to be calculated relatively simply, replicate weights also enable unit record analyses such as chi-square and logistic regression to be conducted which take into account the sample design.
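A hedged sketch of the replication approach described above. The formula used here is the delete-a-group jackknife form commonly associated with ABS replicate weights; the exact method and the 30 replicate groups are documented in the CURF materials, and the figures below are invented (only two replicate estimates, for brevity):

```python
def replicate_se(full_estimate: float, replicate_estimates: list[float]) -> float:
    """Estimate the SE of a statistic from the variability of the replicate
    estimates around the full-sample estimate (delete-a-group jackknife form)."""
    g = len(replicate_estimates)  # 30 replicate groups in practice
    variance = (g - 1) / g * sum((r - full_estimate) ** 2 for r in replicate_estimates)
    return variance ** 0.5

# Two invented replicate estimates around a full-sample estimate of 1000.
print(replicate_se(1000.0, [990.0, 1010.0]))  # 10.0
```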

Further information about RSEs and how they are calculated can be found in the 'Technical Note' of the summary publication Family Characteristics, Australia, 2009-10 (cat. no. 4442.0). RSEs for the estimates in the tables presented in that publication are available in spreadsheet format on the ABS website <https://www.abs.gov.au>, as an attachment to that publication.

Non-sampling error

Non-sampling error may occur in any collection, whether it is based on a sample or a full count such as a census. One of the main sources of non-sampling error is non-response by persons selected in the survey. Non-response occurs when persons cannot or will not co-operate, or cannot be contacted. Non-response can affect the reliability of results and can introduce a bias. The magnitude of any bias depends upon the rate of non-response and the extent of the difference between the characteristics of those persons who responded to the survey and those that did not.

Every effort was made to minimise non-response and other non-sampling errors in the Family Characteristics Survey through careful design and testing of the questionnaire, training and supervision of interviewers, and extensive editing and quality control procedures at all stages of data processing.

One advantage of the CAI technology used to conduct interviews is that it potentially reduces non-sampling error by enabling edits to be applied as the data are being collected. The interviewer is alerted immediately if information entered into the computer is either outside the permitted range for a particular question, or contradictory to information previously recorded during the interview. These edits allow the interviewer to query respondents and resolve issues during the interview. CAI sequencing of questions is also automated so that respondents are only asked relevant questions, and in the appropriate order, thereby eliminating interviewer sequencing errors.