4720.0 - National Aboriginal and Torres Strait Islander Social Survey: Users' Guide, 2008  
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 26/02/2010   

INTERPRETATION OF RESULTS


OVERVIEW

Care has been taken to ensure that the results of this survey are as accurate as possible. All interviews were conducted by trained ABS officers. Extensive reference material was developed for use in the field enumeration and intensive training was provided to interviewers. There remain, however, other factors which may have affected the reliability of results, and for which no specific adjustments can be made. The following factors should be considered when interpreting the estimates for this survey:

  • information recorded in this survey is 'as reported' by respondents, and therefore may differ from information available from other sources or collected using different methodologies;
  • responses may be affected by imperfect recall or individual interpretation of survey questions; and
  • some respondents may have provided responses that they felt were expected, rather than those that accurately reflected their own situation.

Every effort has been made to minimise such issues through the development and use of culturally appropriate survey methodology.

For a number of survey data items, some respondents were unwilling or unable to provide the required information. Where responses for a particular data item were missing for a person or household they were recorded in a 'not known', 'not stated' or 'refusal' category for that data item. Where these categories apply, they are listed in the data item list.

This chapter provides information on:
  • reliability of survey estimates;
  • sampling error;
  • non-sampling error;
  • undercoverage;
  • seasonal effects;
  • age standardisation; and
  • comparison to the 2002 NATSISS.

RELIABILITY OF SURVEY ESTIMATES

All sample surveys are subject to error, which can be broadly categorised as either:
  • sampling error; or
  • non-sampling error.

Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured as it is calculated based on the scientific methods used to design surveys.

Non-sampling error may occur in any data collection, whether it is based on a sample or a full count (eg Census). Non-sampling error may occur at any stage throughout the survey process. Examples of non-sampling error include:
  • persons selected for the survey may not respond (non-response);
  • survey questions may not be clearly understood;
  • responses may be incorrectly recorded by interviewers; or
  • errors may be made in coding or processing survey data.

Sampling and non-sampling errors should be considered when interpreting results of the survey. Sampling errors are considered to occur randomly, whereas non-sampling errors may occur randomly and/or systematically.


Achieved sample

The table below provides the number of fully responding persons in each state and territory for both the community and non-community sample. More information on sample design is provided in the Survey design chapter.

SURVEY RESPONSE, By sample and state or territory of usual residence

Fully-responding persons      NSW      Vic      Qld      SA       WA       Tas      NT       ACT      Australia
                              no.      no.      no.      no.      no.      no.      no.      no.      no.
Community sample(a)           -        -        559      161      315      -        1 232    -        2 267
Non-community sample          1 969    2 252    1 471    1 130    1 666    1 082    1 035    435      11 040

(a) Discrete Indigenous communities in NSW, Vic, Tas and the ACT were treated in the same way as non-community areas throughout the survey, and therefore these jurisdictions do not have separate community samples. See the Survey design chapter for more information.


SAMPLING ERROR

Sampling error is the expected random difference that could occur between the published estimates, derived from using a sample of persons, and the value that would have been produced if all persons in scope of the survey had been enumerated. The size of the sampling error associated with an estimate depends on the following factors:
  • Sample design - there are many different methods that could have been used to obtain a sample, in order to collect data. The final design attempted to make survey results as accurate as possible within cost and operational constraints.
  • Sample size - the larger the sample on which the estimate is based, the smaller the associated sampling error.
  • Population variability - the extent to which people differ on the particular characteristic being measured. The smaller the population variability of a particular characteristic, the more likely it is that the population will be well represented by the sample, and therefore the smaller the sampling error. Conversely, the more variable the characteristic, the greater the sampling error.

For more information on sample size and design see the Survey design chapter.


Measures of sampling error

A measure of the sampling error for a given estimate is provided by the Standard Error (SE), which is the extent to which an estimate might have varied by chance because only a sample of persons was obtained. There are about two chances in three that a sample estimate will differ by less than one SE from the figure that would have been obtained if all persons had been included in the survey, and about 19 chances in 20 that the difference will be less than two SEs.

Another measure is Relative Standard Error (RSE), which is the SE expressed as a percentage of the estimate. The RSE is a useful measure as it provides an immediate indication of the percentage errors likely to have occurred due to sampling, and therefore avoids the need to refer also to the size of the estimate.

\[ \text{RSE}(x) = \frac{\text{SE}(x)}{x} \times 100 \]

The smaller the estimate, the higher the RSE. Very small estimates are subject to such high SEs (relative to the size of the estimate) as to detract seriously from their value for most reasonable uses. Only estimates with RSEs of less than 25% are considered sufficiently reliable for most purposes.
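As an illustration of how the RSE scales with the size of the estimate, consider two hypothetical figures (not survey results): an estimate of 1 000 with an SE of 150, and an estimate of 300 with an SE of 90.

\[ \text{RSE} = \frac{150}{1\,000} \times 100 = 15\% \qquad \text{RSE} = \frac{90}{300} \times 100 = 30\% \]

The first estimate falls below the 25% threshold and would be considered sufficiently reliable for most purposes; the second would not.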

RSEs for all estimates published in the National Aboriginal and Torres Strait Islander Social Survey, 2008 (cat. no. 4714.0) are available from the ABS website in spreadsheet format.

Imprecision due to sampling variability, which is measured by the SE, should not be confused with inaccuracies that may occur because of imperfections in reporting by respondents and interviewers, or random errors made in coding and processing of survey data. These types of inaccuracies contribute to the total non-sampling error and may occur in any enumeration. The potential for random non-sampling error adds to the uncertainty of the estimates caused by sampling variability. However, it is not usually possible to quantify either the random or systematic non-sampling errors.

Standard errors of proportions and percentages

Proportions and percentages formed from the ratio of two estimates are subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator.

The RSEs of proportions and percentages for the publication, National Aboriginal and Torres Strait Islander Social Survey, 2008 (cat. no. 4714.0) were calculated using the full delete-a-group jackknife technique, which is described in 'Replicate weights and directly calculated standard errors'. RSEs for all estimates in the summary publication are available in spreadsheet format from the ABS website.

For proportions where the denominator is an estimate of the number of persons in a group and the numerator is the number of persons in a sub-group of the denominator group, a formula to approximate the RSE of the proportion x/y is given by:

\[ \text{RSE}\!\left(\frac{x}{y}\right) \approx \sqrt{\left[\text{RSE}(x)\right]^2 - \left[\text{RSE}(y)\right]^2} \]

From the above formula, the estimated RSE of the proportion or percentage will be lower than the RSE of the estimate of the numerator.
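As a hypothetical illustration of this approximation (the percentages are not survey results): if the numerator estimate has an RSE of 20% and the denominator estimate has an RSE of 10%, then

\[ \text{RSE}\!\left(\frac{x}{y}\right) \approx \sqrt{20^2 - 10^2} = \sqrt{300} \approx 17.3\% \]

which, as noted above, is lower than the RSE of the numerator.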

Replicate weights and directly calculated standard errors

Standard errors (SEs) on estimates from this survey were obtained through the delete-a-group jackknife variance technique. In this technique, the full sample is repeatedly subsampled by successively dropping random groups of households and then reweighting the remaining records to the survey benchmark population. Through this technique, the effect of the complex survey design and estimation methodology on the accuracy of the survey estimates is stored in the replicate weights. For the 2008 NATSISS, this process was repeated 250 times to produce 250 replicate weights for each sample unit. The distribution of the 250 replicate estimates around the full sample estimate is then used to directly calculate the standard error for each full sample estimate.

The use of directly calculated SEs for each survey estimate, rather than SEs based on models, provides more information on the sampling variability inherent in a particular estimate. Therefore, directly calculated SEs for estimates of the same magnitude, but from different sample units, generally result in different SE estimates. For more information see Appendix 2: Replicate weights technique.
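The sketch below shows, in Python, how a standard error could be computed from replicate weights along the lines of the delete-a-group jackknife described above. The data, the array names and the scaling factor (G − 1)/G, with G equal to the number of replicate groups, are assumptions for illustration only; the exact factor and processing used for the 2008 NATSISS may differ (see Appendix 2: Replicate weights technique).

```python
import numpy as np

def jackknife_se(values, full_weights, replicate_weights):
    """Approximate delete-a-group jackknife SE for a weighted total.

    values            : (n,) variable being estimated (eg a 0/1 indicator)
    full_weights      : (n,) full-sample person weights
    replicate_weights : (n, G) matrix of G sets of replicate weights
    """
    # Full-sample estimate of the weighted total.
    full_estimate = np.sum(values * full_weights)

    # One estimate per replicate group, using that group's replicate weights.
    replicate_estimates = values @ replicate_weights          # shape (G,)

    # Delete-a-group jackknife variance: scaled sum of squared deviations of
    # the replicate estimates from the full-sample estimate.
    g = replicate_weights.shape[1]
    variance = (g - 1) / g * np.sum((replicate_estimates - full_estimate) ** 2)
    return np.sqrt(variance)

# Hypothetical data: 5 respondents, 3 replicate groups (the survey used 250).
vals = np.array([1, 0, 1, 1, 0])
w_full = np.array([120.0, 95.0, 110.0, 130.0, 90.0])
w_rep = np.array([[118.0, 125.0, 121.0],
                  [97.0, 90.0, 96.0],
                  [112.0, 108.0, 111.0],
                  [128.0, 133.0, 129.0],
                  [92.0, 88.0, 91.0]])

se = jackknife_se(vals, w_full, w_rep)
rse = se / np.sum(vals * w_full) * 100   # RSE as a percentage of the estimate
print(f"SE = {se:.1f}, RSE = {rse:.1f}%")
```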

Comparison of estimates

Published estimates may also be used to calculate the difference between two survey estimates. Such an estimate is subject to sampling error. The sampling error of the difference between two estimates depends on their SEs and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

\[ \text{SE}(x - y) \approx \sqrt{\left[\text{SE}(x)\right]^2 + \left[\text{SE}(y)\right]^2} \]

While the above formula will be exact only for differences between separate and uncorrelated (unrelated) characteristics of sub-populations, it is expected that it will provide a reasonable approximation for all differences likely to be of interest in this survey.
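For example, with hypothetical values (not survey results): if SE(x) = 300 and SE(y) = 400, then

\[ \text{SE}(x - y) \approx \sqrt{300^2 + 400^2} = \sqrt{250\,000} = 500 \]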

Significance testing

For comparing population characteristics between surveys or between populations within a survey it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the standard error of the difference between two estimates (x and y) and using that to calculate the test statistic using the formula below:

\[ \frac{|x - y|}{\text{SE}(x - y)} \]

The test statistic measures how likely it is that the observed difference has occurred due to sampling error alone. If the value of the test statistic is greater than 1.96 then there is a 95% certainty that there is a statistically significant difference between the two populations with respect to the particular characteristic. Significance testing is concerned solely with sampling error, that is, variations in estimates resulting from taking a sample of households rather than a Census. Significance testing does not take into account non-sampling errors, which are often encountered when interpreting statistical output.
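As a hypothetical illustration (the figures are not survey results): for estimates x = 5 000 and y = 4 000 with SE(x − y) = 400,

\[ \frac{|5\,000 - 4\,000|}{400} = 2.5 > 1.96 \]

so the difference would be regarded as statistically significant at the 95% level. Had SE(x − y) been 600, the statistic would be approximately 1.67 and the difference would not be regarded as significant.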


NON-SAMPLING ERROR

Every effort was made to minimise non-sampling error by careful design and testing of questionnaires, intensive training of interviewers, and extensive editing and quality control procedures at all stages of data processing. However, errors can be made in giving and recording information during an interview and these may occur regardless of whether the estimates are derived from a sample or from a full count (eg Census). Inaccuracies of this type are referred to as non-sampling errors. The major sources of non-sampling error are:
  • errors related to the survey scope;
  • response errors (eg incorrect interpretation or wording of questions, interviewer bias, etc);
  • errors in processing (eg mistakes in recording or coding the survey data); and
  • bias due to undercoverage (for more information see the Undercoverage section below).

These sources of random and/or systematic non-sampling error are discussed in more detail below.


Field coverage errors

Some dwellings may have been inadvertently included or excluded, for example, if it was unclear whether the dwelling was private or non-private. In order to prevent this type of error, dwelling listings are constantly updated. Additionally, some people may have been inadvertently included or excluded because of difficulties in applying the scope rules, for example, in identifying a household's usual residents or deciding the treatment of some visitors. For more information on scope and coverage see the Survey design chapter.


Response errors

In this survey response errors may have arisen from three main sources:
  • questionnaire design and methodology;
  • interviewing technique; and
  • inaccurate reporting by the respondent.

Errors may have been caused by misleading or ambiguous questions, inadequate or inconsistent definitions of terminology, or by poor overall survey design (eg context effects, where responses to a question are directly influenced by the preceding question/s). In order to overcome these types of issues, individual questions and the overall questionnaire were tested before the survey was enumerated. Testing included:
  • focus groups;
  • peer review; and
  • field testing (a pilot test and dress rehearsal).

More information on pre- and field testing is provided in the Survey design chapter.

As a result of testing, modifications were made to:
  • question design, wording and sequencing;
  • the respondent booklet (prompt cards); and
  • survey procedures.

In considering modifications it was sometimes necessary to balance better response to a particular item/topic against increased interview time, effects on other parts of the survey and the need to minimise changes to ensure international comparability. Therefore, in some instances it was necessary to adopt a workable/acceptable approach rather than an optimum approach. Although changes would have had the effect of minimising response errors due to questionnaire design and content issues, some will have inevitably occurred in the final survey enumeration.

Response errors may also have occurred due to the length of the survey interview because of interviewer and/or respondent fatigue (ie loss of concentration). While efforts were made to minimise errors arising from deliberate misreporting or non-reporting, some instances will have inevitably occurred.

Accuracy of recall may also have led to response error, particularly in relation to the lifetime questions. Information in this survey is essentially 'as reported', and therefore may differ from information available from other sources or collected using different methodologies. Responses may be affected by imperfect recall or individual interpretation of survey questions, particularly when a person was asked to reflect on experiences in the 12 months prior to interview. The questionnaire was designed to strike a balance between minimising recall errors and ensuring the data was meaningful, representative (from both respondent and data use perspectives) and would yield sufficient observations to support reliable estimates. It is possible that the reference periods did not suit every person for every topic, and that difficulty with recall may have led to inaccurate reporting in some instances.

A further source of response error is lack of uniformity in interviewing standards. To ensure uniform interviewing practices and a high level of response accuracy, extensive interviewer training was provided. An advantage of using Computer Assisted Interviewing (CAI) technology to conduct survey interviews is that it potentially reduces non-sampling error. More information on interviews, interviewer training, the survey questionnaire and CAI is provided in the Survey design chapter.

Response errors may have also occurred due to language or reading difficulties. In some instances, a proxy interview was conducted on behalf of another person who was unable to complete the questionnaire themselves due to language problems and where an interpreter was unable to be organised. A proxy interview was only conducted where another person in the household (aged 15 years or over) was considered suitable. The proxy may also have been a family member who did not live in the selected household, but lived nearby. A proxy arrangement was only undertaken with agreement from the selected person, who was first made aware of the topics covered in the questionnaire. Aside from difficulties in understanding spoken English, there may have been difficulties in understanding written English. The 2008 NATSISS incorporated the extensive use of prompt cards, as pre-testing indicated that these could aid interpretation by selected persons. People were asked if they would prefer to read the cards themselves or have them read out by the interviewer. It is possible that some of the terms or concepts used on the prompt cards were unfamiliar and may have been misinterpreted, or that a response was selected due to its position on the prompt card.

Some respondents may have provided responses that they felt were expected, rather than those that accurately reflected their own situation. Every effort has been made to minimise such issues through the development and use of culturally appropriate survey methodology. Non-uniformity of interviewers themselves is also a potential source of error, in that the impression made upon respondents by personal characteristics of individual interviewers such as age, sex, appearance and manner, may influence the answers obtained.


Errors in processing

Errors may occur during data processing, between the initial collection of the data and the final compilation of statistics. These may be due to a failure of computer editing programs to detect errors in the data, or may arise during the manipulation of raw data to produce the final survey data files, for example, when creating new data items from raw survey data (eg coding of occupation data items to the standard classification), during the estimation procedures or when weighting the data file.

To minimise the likelihood of these errors occurring a number of processes were used, including:
  • computer editing - edits were devised to ensure that logical sequences were followed in the questionnaires, that necessary items were present and that specific values lay within certain ranges. These edits were designed to detect reporting and recording errors, incorrect relationships between data items or missing data items.
  • data file checks - at various stages during processing (such as after computer editing or after derivation of new data items) frequency counts and/or tabulations were obtained from the data file showing the distribution of persons for different characteristics. These were used as checks on the content of the data file, to identify unusual values which may have significantly affected estimates and illogical relationships not previously identified. Further checks were conducted to ensure consistency between related data items and in the relevant populations.
  • where possible, the data was checked to ensure consistency of the survey outputs against results of other ABS surveys, such as the 2002 NATSISS and the 2004-05 National Aboriginal and Torres Strait Islander Health Survey.


UNDERCOVERAGE

Undercoverage is one potential source of non-sampling error and is the shortfall between the population represented by the achieved sample and the in-scope population. It can introduce bias into the survey estimates. However, the extent of any bias depends upon the magnitude of the undercoverage and the extent of the difference between the characteristics of those people in the coverage population and those of the in-scope population.

Briefly, the measures taken to address potential bias due to undercoverage were:
  • applying a geographical adjustment to the initial weights to account for the different undercoverage rates and the characteristics of Indigenous persons living in these areas;
  • calibrating the weights to population benchmarks to account for undercoverage at the various calibration levels; and
  • including an additional set of benchmarks (community/non-community) to account for the different characteristics of Indigenous persons living in these areas.

More detailed information on undercoverage is provided in the following segments.


Rates of undercoverage

Undercoverage rates can be estimated by calculating the difference between the sum of the initial weights of the sample and the population count. If a survey has no undercoverage, then the sum of the initial weights of the sample would equal the population count (ignoring small variations due to sampling error). For more information on weighting refer to the Survey design chapter.
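A minimal sketch of the undercoverage calculation described above is shown below in Python; the weights and population count are hypothetical values for illustration, not the survey's actual initial weights or benchmark counts.

```python
# Undercoverage rate: the shortfall of the weighted sample against the
# in-scope population count, expressed as a percentage.
initial_weights = [215.0, 198.5, 240.2, 187.3]   # hypothetical initial person weights
population_count = 1750                           # hypothetical in-scope population count

weighted_sample = sum(initial_weights)
undercoverage_rate = (1 - weighted_sample / population_count) * 100
print(f"Estimated undercoverage: {undercoverage_rate:.1f}%")   # ~51.9% with these inputs
```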

The 2008 NATSISS has a relatively large level of undercoverage when compared to other ABS surveys. There was also an increase in undercoverage compared to previous ABS Indigenous surveys. For example, the estimated undercoverage in the 2004-05 National Aboriginal and Torres Strait Islander Health Survey was 42%. The estimated undercoverage rate for the Monthly Population Survey for private dwellings is on average 12% and the non-response rate is 3.5%.

The overall undercoverage rate for the 2008 NATSISS is approximately 53% of the in-scope population at the national level. This rate varies across the states and territories, as shown in the table below.

UNDERCOVERAGE, by state or territory

                              NSW      Vic      Qld      SA       WA       Tas      NT       ACT      Australia
                              %        %        %        %        %        %        %        %        %
Rates of undercoverage        47.3     57.2     51.2     56.6     57.2     43.1     62.3     53.4     52.6



Of the national undercoverage rate, 6% is due to planned frame exclusions where analysis has shown that the impact of any bias is minimal. More information on these exclusions is provided below.


Potential sources of undercoverage

Undercoverage may occur due to a number of factors, including:
  • frame exclusions;
  • non-response;
  • non-identification as Indigenous; and
  • issues arising in the field.

Each of these factors is outlined in more detail in the following paragraphs. To assist interpretation, a diagrammatical representation of the potential sources of undercoverage is shown below.

Diagram: Potential sources of undercoverage

Frame exclusions

Frame exclusions were incorporated into the 2008 NATSISS to manage the cost of enumerating areas with a small number of Indigenous persons. There were also unplanned exclusions on the non-community frame, due to an error in identifying private dwellings during the creation of the frame. This error resulted in the undercoverage of some discrete Indigenous communities, which were supposed to be represented in the survey's non-community sample. An adjustment was applied to the weights to account for this error. More information on this adjustment is provided in the weighting segment of the Survey design chapter.

At the national level it is estimated that 8.5% of the in-scope population was excluded from the frame, that is, they did not have a chance of selection. Part of this exclusion represents an estimate of the people who had moved to areas out of coverage since the 2006 Census. The number of people who moved may be higher than estimated and could account for a portion of the higher than expected non-identification estimate, which is discussed below. Further information on scope and coverage is provided in the Survey design chapter.

Non-response

Non-response may occur when people cannot or will not cooperate, or cannot be contacted during the enumeration period. Unit and item non-response by persons/households selected in the survey will affect both sampling and non-sampling error. The loss of information on persons and/or households (unit non-response) and on particular questions (item non-response) reduces the effective sample and increases both the sampling error and the likelihood of incurring response bias.

The size of any non-response bias depends on the level of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not, as well as the extent to which non-response adjustments can be made during estimation through the use of benchmarks.

To maximise response rates and reduce the impact of non-response, the following methods were adopted in this survey:
  • face-to-face interviews with respondents;
  • local Indigenous facilitators were employed to assist with interviewing in communities;
  • follow-up of respondents if there was initially no response; and
  • ensuring the weighted file is representative of the population by aligning the estimates with population benchmarks.

In the 2008 NATSISS, non-response accounts for a portion of overall undercoverage. The two components of non-response were:
  • non-response to the screening question; and
  • non-response to the survey after identification of an Indigenous household.

In non-community areas, approximately 89% of Indigenous households responded to the screening question. This assumes that response to the screening question is not related to the Indigenous status of the household. Of the households that responded to the screening question, approximately 2.5% were identified as having an Indigenous usual resident. Of these identified households, 83% then responded to the survey. In discrete Indigenous communities, 78% of selected in-scope households were fully responding.

In developing the survey weights, information available for responding and non-responding households (who provided partial information) was used by the ABS to conduct quantitative investigations into non-response adjustments. No non-response adjustment, apart from benchmarking, was made to the weights as indications were that non-response had a negligible impact on the estimates.

Response rates

Response rates reflect the number of people who responded to the survey divided by the number of people in the sample, expressed here as a percentage. The response rate for the 2008 NATSISS was 82% nationally. Response rates are only one measure of the quality of this survey, therefore other components of undercoverage should be taken into account when analysing survey results. The tables below provide the achieved sample and response rates for each state and territory, as well as the response rates for the community and non-community samples.

SURVEY RESPONSE, By state or territory of usual residence

                              NSW      Vic      Qld      SA       WA       Tas      NT       ACT      Australia
Achieved sample (no.)         1 969    2 252    2 030    1 291    1 981    1 082    2 267    435      13 307
Response rate(a) (%)          90       81       89       78       77       94       74       87       82

(a) Fully-responding households

RESPONSE RATES, By sample and state or territory of usual residence

Fully-responding households   NSW      Vic      Qld      SA       WA       Tas      NT       ACT      Australia
                              %        %        %        %        %        %        %        %        %
Community sample(a)           -        -        92.3     70.1     74.4     -        74.3     -        77.9
Non-community sample          89.7     81.3     87.4     79.6     78.2     93.9     73.7     87.0     83.4

(a) Discrete Indigenous communities in NSW, Vic, Tas and the ACT were treated in the same way as non-community areas throughout the survey, and therefore these jurisdictions do not have separate community samples. See the Survey design chapter for more information.

Non-identification as Indigenous

Non-identification of Indigenous households during the screening process may have occurred due to:
  • Indigenous people not identifying themselves as Indigenous (passive refusals); or
  • the household spokesperson being unaware of (or unwilling to provide) the Indigenous status of other residents.

The under-identification of Indigenous persons in non-community areas is estimated to be up to 31% of those screened. This estimate is the remaining level of undercoverage when all other known sources of undercoverage have been removed. Part of this percentage is likely to be due to other factors which are unknown.

It is not possible to measure the potential bias induced by non-identification, as there is no information available for people who weren't identified as Indigenous. However, the adjustment applied in the weighting process and the calibration to the benchmarks should reduce potential bias.

Issues arising in the field

Known undercoverage, due to other issues arising in the field, included sample being excluded due to:
  • overlap with the Monthly Population Survey; and
  • occupational health and safety issues.

The estimated undercoverage due to these issues was 3.7% at the national level. The undercoverage induced by the Monthly Population Survey should have minimal impact on the estimates as the process of avoiding overlap is random.


Comparisons to other data sources

Given the high undercoverage rate, the analysis undertaken to ensure that results from the 2008 NATSISS were consistent with other data sources was more extensive than usual. The characteristics of the 2008 NATSISS respondents were compared with those from a number of other ABS collections, including the 2002 NATSISS and the 2004-05 National Aboriginal and Torres Strait Islander Health Survey. From this analysis, it was determined that some of the respondent characteristics from the initial weighted data did not align well with other ABS estimates in some states and territories. In particular, some of the social outcomes in the NT differed from those anticipated. The estimates were also higher than expected for:
  • employment;
  • the decrease in households with major structural damage; and
  • the decrease in people who spoke an Indigenous language.

Further analysis indicated that the community sample was having a greater influence on the estimates than would reasonably be expected. As a result, additional benchmarks (community/non-community) were incorporated into the weighting strategy to ensure that each part of the population was appropriately represented. This improved the consistency between NATSISS estimates and those from other ABS collections. Each step in the weighting process was then thoroughly assessed to ensure that it was not biasing the results. More information on data confrontation and on weighting, benchmarking and estimation is provided in the Survey design chapter.


SEASONAL EFFECTS

The estimates from the survey are based on information collected from August 2008 to April 2009, and due to seasonal effects they may not be fully representative of other time periods in the year. For example, the 2008 NATSISS asked people if they had participated in any physical, sporting, community or social activities in the three months prior to interview. Involvement in particular activities may be subject to seasonal variation through the year. Therefore, the results could have differed if the survey had been conducted over the whole year or in a different part of the year.


AGE STANDARDISATION

Age standardisation techniques were applied to some data in the summary publication, National Aboriginal and Torres Strait Islander Social Survey, 2008 (cat. no. 4714.0), to remove the effect of the differing age structures in comparisons between Indigenous and non-Indigenous populations. The age structure of the Indigenous population is considerably younger than that of the non-Indigenous population. As age is strongly related to many health measures, as well as labour force status, estimates of prevalence which do not take account of age may be misleading. The age standardised estimates of prevalence are the rates that 'would have occurred' had the Indigenous and non-Indigenous populations both had the standard age composition.

The summary publication used the direct age standardisation method. Estimates of age standardised rates were calculated using the following formula:

\[ C_{direct} = \sum_{a} C_a \, P_{sa} \]

where:
  • C_direct = the age standardised rate for the population of interest;
  • a = the age categories that have been used in the age standardisation;
  • C_a = the estimated rate for the population being standardised in age category a; and
  • P_sa = the proportion of the standard population in age category a.
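A minimal Python sketch of the direct age standardisation formula above is shown below. The age groups, age-specific rates and standard-population shares are hypothetical values for illustration, not figures from the survey or the standard population it used.

```python
# Direct age standardisation: C_direct = sum over age categories of C_a * P_sa.
# The age groups, rates and standard-population shares below are hypothetical.
age_specific_rates = {"0-14": 0.10, "15-34": 0.25, "35-54": 0.40, "55+": 0.55}
standard_population_share = {"0-14": 0.20, "15-34": 0.28, "35-54": 0.27, "55+": 0.25}

c_direct = sum(age_specific_rates[a] * standard_population_share[a]
               for a in age_specific_rates)
print(f"Age standardised rate: {c_direct:.4f}")   # 0.3355 with these illustrative inputs
```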

An alternative technique for analysing characteristics in populations that have different age structures is to compare the distribution of the variable of interest by age group. For this approach, unadjusted (ie not age standardised) data could be output in 10-year age ranges.

Age standardisation may not be appropriate for particular variables, even though the populations to compare have different age distributions and the variables in question are related to age. It is also necessary to check that the relationship between the variable of interest and age is broadly consistent across the populations. If the rates vary differently with age in the two populations then there is evidence of an interaction between age and population, and as a consequence age standardised comparisons are not valid.


COMPARISON TO THE 2002 NATSISS

Overview

The ABS previously conducted the National Aboriginal and Torres Strait Islander Social Survey (NATSISS) in 2002. A National Aboriginal and Torres Strait Islander Survey (NATSIS) was also conducted in 1994. Extensive information on the differences between the 2002 and 1994 surveys is provided in the National Aboriginal and Torres Strait Islander Social Survey: Expanded Confidentialised Unit Record File, Technical Paper, 2002 (cat. no. 4720.0).

Understanding the extent to which data from the 2008 and 2002 NATSISS can be compared is essential in interpreting apparent changes over time. While many key data items in the 2008 survey are the same or similar to those in the 2002 survey, there are differences in the sample design and coverage, survey methodology and content, definitions, and classifications, all of which may impact on comparability.


Survey methodology

Both surveys collected information from Indigenous people living in private dwellings throughout Australia. The 2008 NATSISS collected information from Indigenous people of all ages, while the 2002 survey collected information on Indigenous people aged 15 years and over. In 2008, visitors who had been staying at a selected household for six months or longer were considered in scope, whereas in 2002 visitors were excluded.

The scope of the NATSISS changed between 2002 and 2008, to enable the inclusion of Indigenous children aged 0-14 years. While this change does not specifically impact on the comparability of data for Indigenous adults aged 15 years and over, some survey modules and questions were redeveloped and/or expanded to include Indigenous children. For example, the 2008 survey includes information on the selected child's main carer, as well as on assumed parents or guardians of Indigenous children, which are not available in the 2002 survey. Refer to the data items list or to the topic based chapters for more information.

The sample sizes varied for both surveys. The 2008 survey had a sample of approximately 13,300 Indigenous people, compared to approximately 9,400 Indigenous people in 2002. However, when comparing by similar age groups, the 2008 survey had a sample of approximately 7,800 Indigenous people aged 15 years and over, with the remainder being children aged 0-14 years. The 2008 survey had a larger sample of Indigenous households, approximately 6,900 households compared to approximately 5,900 households in 2002. Broad differences in the design of the two surveys are outlined in the Survey design chapter.


Survey timing

Each of the surveys was conducted over a similar enumeration period. The 2008 NATSISS was undertaken from August 2008 to April 2009 and the 2002 NATSISS from August 2002 to April 2003.


Population characteristics

Classifications

The classifications of several demographic and socio-economic characteristics used in the 2008 NATSISS differ from those used in 2002, as outlined in the table below.

STANDARD CLASSIFICATIONS, 2008 to 2002 comparison

Language
    2008 NATSISS: Australian Standard Classification of Languages (ASCL), 2005-06 (cat. no. 1267.0)
    2002 NATSISS: Australian Standard Classification of Languages (ASCL), 1997 (cat. no. 1267.0)

Education
    2008 NATSISS: Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0)
    2002 NATSISS: Australian Standard Classification of Education (ASCED), 2001 (cat. no. 1272.0)

Occupation
    2008 NATSISS: ANZSCO - Australian and New Zealand Standard Classification of Occupations, First Edition, 2006 (cat. no. 1220.0)
    2002 NATSISS: ASCO - Australian Standard Classification of Occupations, Second Edition, 1997 (cat. no. 1220.0)

Industry (of employment)
    2008 NATSISS: Australian and New Zealand Standard Industrial Classification (ANZSIC), 2006 (Revision 1.0) (cat. no. 1292.0)
    2002 NATSISS: Australian and New Zealand Standard Industrial Classification (ANZSIC), 1993 (cat. no. 1292.0)

Geography
    2008 NATSISS: Australian Standard Geographical Classification (ASGC), Jul 2008 (cat. no. 1216.0)
    2002 NATSISS: Australian Standard Geographical Classification (ASGC), 2001 (cat. no. 1216.0)



Geographic characteristics

The standard geographical classification for the two surveys differs, with the 2008 survey based on data from the 2006 Census of Population and Housing and the 2002 survey based on the 2001 Census. Mesh Blocks, a small area unit of information, were defined for the first time in the 2006 Census and were used to assist in targeting Indigenous people for the 2008 NATSISS. More information on this process is provided in the Survey design chapter.

Only one of the Socio-Economic Indexes for Areas (SEIFA) items, the Index of Relative Disadvantage, is available from the 2002 NATSISS. Section of state is not available for 2002 and therefore there are no data for comparison.


Survey topics

Within each of the following topic based chapters, information on the 2008 data items includes comparisons to items collected in 2002.

Other considerations
  • Reported information on physical or mental health conditions was not medically verified, and was not necessarily based on diagnoses by a medical practitioner;
  • the results of previous surveys on alcohol and illicit drug consumption suggest a tendency for people to under-report actual consumption levels; and
  • the employment component of the NATSISS is based on a reduced set of questions from the ABS monthly Labour Force Survey.