4363.0.55.001 - National Health Survey: Users' Guide, 2001  
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 27/05/2003   
Chapter 7 - Data Quality and Interpretation of Results


Data quality

Sampling variability

Measure of sampling variability

Significance testing for Indigenous results

Non-sampling errors

Other factors affecting estimates

Specific data quality issues for the sparse NHS(I)

Interpretation of results

Age standardisation

Comparability between 1995 National Health Survey and 2001 National Health Survey

Additional comparability issues between 1995 National Health Survey and 2001 National Health Survey (I)


Note: The following information relates to 2001 NHS(G) and, unless otherwise specified, also relates to 2001 NHS(I). Results presented in this chapter are based on data from the 2001 NHS(G) sample and do not include data from the 2001 NHS(I) sample.

Although care was taken to ensure that the results of the 2001 NHS are as accurate as possible, there are certain factors which affect the reliability of the results to some extent and for which no adequate adjustments can be made. One such factor is known as sampling variability. Other factors are collectively referred to as non-sampling errors. These factors, which are discussed below, should be kept in mind in interpreting results of the survey.

Sampling variability

Since the estimates are based on information obtained from a sample of the population, they are subject to sampling variability (or sampling error), i.e. they may differ from the figures that would have been obtained from an enumeration of the entire population, using the same questionnaires and procedures. The magnitude of the sampling error associated with a sample estimate depends on the following factors:
  • sample design - there are many different methods which could have been used to obtain a sample from which to collect data on health status, health-related actions and health risk factors. The final design attempted to make survey results as accurate as possible within cost and operational constraints. (Details of sample design are contained in Chapter 2, under Sample Design and Selection)
  • sample size - the larger the sample on which the estimate is based, the smaller the associated sampling error
  • population variability - the third factor which influences sampling error is the extent to which people differ on the particular characteristic being measured. This is referred to as the population variability for that characteristic. The smaller the population variability of a particular characteristic, the more likely it is that the population will be well represented by the sample, and therefore the smaller the sampling error. Conversely, the more variable the characteristic, the greater the sampling error.

Measure of sampling variability

One measure of sampling variability is the standard error. There are about two chances in three that a sample estimate will differ by less than one standard error from the figure that would have been obtained if all dwellings had been included in the survey, and about nineteen chances in twenty that the difference will be less than two standard errors. The relative standard error (RSE) is the standard error expressed as a percentage of the estimate to which it relates.

Very small estimates may be subject to such high relative standard errors as to detract seriously from their value for most reasonable purposes. Only estimates with relative standard errors less than 25% are considered sufficiently reliable for most purposes. However, estimates with relative standard errors of 25% or more are included in ABS publications of results from this survey: estimates with an RSE of 25% to 50% are preceded by the symbol * as a caution to indicate that they are subject to high relative standard errors, while estimates with an RSE greater than 50% are preceded by the symbol ** to indicate the estimate is too unreliable for general use.
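The RSE definition and flagging rules above can be illustrated with a short sketch (the figures are hypothetical; this is not ABS code):

```python
# Relative standard error (RSE) and the publication flagging rules
# described above: * for an RSE of 25% to 50%, ** for an RSE above 50%.
# The estimate and standard error below are invented for illustration.

def rse(estimate, standard_error):
    """Standard error expressed as a percentage of the estimate."""
    return 100.0 * standard_error / estimate

def flag(estimate, standard_error):
    """Prefix an estimate with the caution symbols used in ABS output."""
    r = rse(estimate, standard_error)
    if r > 50:
        return f"**{estimate}"  # too unreliable for general use
    if r >= 25:
        return f"*{estimate}"   # subject to high relative standard error
    return str(estimate)

# A hypothetical estimate of 10,000 persons with a standard error of
# 3,500 has an RSE of 35%, so it would be published as *10,000.
print(flag(10000, 3500))  # prints *10000
```

Following the nineteen-chances-in-twenty statement above, the same standard error also gives an approximate 95% confidence interval of the estimate plus or minus two standard errors.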

Standard errors of estimates from this survey have been compiled using two different methodologies:
  • the split-halves method, from which a table of standard errors is compiled. This table is a useful quick guide to the approximate level of sampling error on estimates. Details of the methodology used and the table of standard errors and relative standard errors for estimates of numbers of persons are provided in Appendix 12.
  • the replicate weight method, which involves taking a number of different sub-samples of survey records and reweighting them to totals. Standard errors are compiled from the differences between the reweighted estimates and the original estimate. Details of the methodology are provided in Appendix 12.
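The replicate weight idea can be sketched as follows. The number of replicate groups and the jackknife scaling factor used here are illustrative assumptions, not taken from the survey documentation:

```python
import math

def replicate_se(full_estimate, replicate_estimates):
    """Standard error from the spread of reweighted sub-sample estimates
    around the full-sample estimate (delete-a-group jackknife form;
    the (g - 1) / g scaling is an assumption for this sketch)."""
    g = len(replicate_estimates)
    variance = (g - 1) / g * sum(
        (rep - full_estimate) ** 2 for rep in replicate_estimates
    )
    return math.sqrt(variance)

# Five hypothetical replicate estimates around a full-sample estimate
# of 1,000: the further they spread, the larger the standard error.
print(round(replicate_se(1000.0, [980.0, 1030.0, 990.0, 1010.0, 995.0]), 1))
```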

Standard errors based on the split-halves methodology are shown in the publication National Health Survey: Summary of Results, Australia 2001 (cat. no. 4364.0) and also in National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0). Standard errors compiled using the replicate weight methodology, at the level of individual cells in a table, are available for National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) via the web site, and also on request with special data services.

Significance testing for Indigenous results

The relatively small number of Indigenous persons sampled means that results for health characteristics with low population prevalence are subject to relatively large sampling error. Comparisons between sub-populations need to take account of the confidence that can be placed in the sample results. While significance tests are always encouraged, it is particularly important that any comparisons involving Indigenous data are tested before inferring that a real difference exists. Some differences may appear quite marked, but because of the relatively large sampling error associated with the Indigenous sample, the difference may not be statistically significant.
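A common form of the significance test encouraged above compares the difference between two sub-population estimates with their combined standard errors (an illustrative sketch; the figures are invented):

```python
import math

def significantly_different(est1, se1, est2, se2, z=1.96):
    """True if two independent estimates differ with 95% confidence."""
    se_of_difference = math.sqrt(se1 ** 2 + se2 ** 2)
    return abs(est1 - est2) > z * se_of_difference

# An apparent gap of 15 percentage points is NOT significant here,
# because both estimates carry large standard errors.
print(significantly_different(40.0, 6.0, 25.0, 5.0))  # prints False

# The same gap IS significant when the standard errors are small.
print(significantly_different(40.0, 2.0, 25.0, 2.0))  # prints True
```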

Table 1 in the publication National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) presented 20 age standardised summary results for both the Indigenous and non-Indigenous sub-populations. Significance tests were performed on key comparisons within this table; for 9 of these 20 key data items, sampling error prevents a conclusion (with 95% confidence) that the sample results for the two sub-populations are statistically different. Table 7.1 below presents the age standardised results for the 11 summary characteristics for which the differences were significant (at the 95% level). When Indigenous results are compared between remote and non-remote areas, the Indigenous sample is further divided, and significant differences (with 95% confidence) were limited to only 4 of the key summary health measures, as shown in Table 7.2 below.

TABLE 7.1 Significant comparisons between Indigenous and Non-Indigenous persons - age standardised


    Health status (15+ years)
    Excellent/very good/good
    Long-term conditions
    Eye/sight problems
    Ear/hearing problems
    Health related actions
    Hospital admission
    Other health professional
    Risk behaviours/characteristics
    Daily smoker

TABLE 7.2 Significant comparisons between Indigenous persons in remote and non-remote areas - age standardised


    Health status (15+ years)
    Long-term conditions
    Eye/sight problems
    Health related actions
    Other health professional

As 1995 NHS data are not available for remote areas, comparisons between Indigenous estimates for 1995 and 2001 are restricted to non-remote areas. Sampling error also affects the extent to which meaningful time series comparisons can be made. The small size of the 2001 Indigenous sample from non-remote areas, and the even smaller sample for the 1995 survey, mean that differences can be identified (with 95% confidence) for only 3 of the summary data items shown in Table 7.3 below.

TABLE 7.3 Significant comparisons between 1995 and 2001 NHS(I) Indigenous results, non-remote areas - age standardised


    Long-term conditions
    Ear/hearing problems
    Circulatory problems/diseases
    Health related actions
    Other health professional

The very large number of comparisons possible in each table (between areas, sexes, ages, Indigenous status, condition prevalences etc., and combinations of these) meant that not all potential differences could be tested for statistical significance. To highlight the issue of sampling error, and to assist users in interpreting results, estimates in tables 1 and 2 of the publication National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) were footnoted to indicate where apparent differences were not statistically significant.

Non-sampling errors

The imprecision due to sampling variability should not be confused with inaccuracies that may occur for other reasons, such as errors in response and reporting. Inaccuracies of this kind are referred to as non-sampling errors, and may occur in any enumeration whether it be a full count or a sample. The major sources of non-sampling error are:
    • errors related to the survey scope
    • response errors such as incorrect interpretation or wording of questions, interviewer bias, etc
    • bias due to non-response, because health status, health-related behaviour and other characteristics of non-responding persons may differ from responding persons
    • errors in processing such as mistakes in the recording or coding of the data obtained.

    Each of these sources of error is discussed in the following paragraphs.

    Errors related to survey scope

    Some dwellings may have been inadvertently included or excluded because, for example, the distinction between private and non-private dwellings was unclear. Efforts were made to overcome such situations by constantly updating dwelling lists both before and during the survey. Some persons may also have been inadvertently included or excluded because of difficulties in applying the scope rules concerning who should be identified as a usual resident, and concerning the treatment of some overseas visitors. Other errors which can arise from the application of the scope and coverage rules are outlined in the section Scope and Coverage.

    Response errors

    In this survey response errors may have arisen from three main sources: deficiencies in questionnaire design and methodology; deficiencies in interviewing technique; and inaccurate reporting by the respondent.

    Errors may be caused by misleading or ambiguous questions, inadequate or inconsistent definitions of terminology used, or by poor overall layout of the questionnaire causing questions to be missed. In order to overcome problems of this kind, individual questions and each of the questionnaires overall were thoroughly tested before being finalised for use in the survey. Testing took two forms:
    • cognitive testing, involving questioning and re-questioning of focus groups. Four tests of this type were undertaken, covering various aspects of long-term conditions, medications, injuries, mental well-being, the folate component of diet, and the Women's Health Supplementary Form
    • pilot testing; for the 2001 NHS(G), one pilot test and a dress rehearsal were conducted in Brisbane and Melbourne respectively, each covering about 250 households. In sparse NHS(I) the pilot test was conducted in 4 communities in WA, NT and Qld and consisted of 40 households. The dress rehearsal was conducted in 2 communities in WA and 26 households were interviewed. In non-sparse NHS(I) the pilot test was conducted in Queensland and covered 104 Indigenous households. A dress rehearsal was not conducted separately for non-sparse NHS(I) because the questionnaire was essentially the same as the 2001 NHS(G) questionnaire. In these tests respondents were put through the entire survey. Feedback was obtained from interviewers and the data collected were analysed.

    As a result of both forms of testing, modifications were made to question design, wording, ordering and associated prompt cards, and some changes were made to survey procedures. In considering modifications it was sometimes necessary to balance better response to a particular item/topic against increased interview time or other effects on other parts of the survey, with the result that acceptable though not necessarily optimum approaches were adopted in some instances; for example in the collection of data on the usual intake of fruit and vegetables. Although such changes would have had the effect of minimising response errors due to questionnaire design and content issues, some will inevitably have occurred in the final survey enumeration.

    Because the survey interview was quite long, reporting errors may also have resulted from interviewer and/or respondent fatigue (i.e. loss of concentration), particularly for respondents reporting for themselves and several children. While efforts were made to minimise errors arising from deliberate misreporting or non-reporting by respondents (e.g. through emphasising the importance of the data, and through checks on consistency throughout the questionnaires), some instances will inevitably have occurred.

    Reference periods used in relation to each topic were selected to suit the nature of the information being sought. However it is possible that the reference periods did not suit every person for every topic and that difficulty with recall may have led to inaccurate reporting in some instances.

    Lack of uniformity in interviewing standards can result in non-sampling errors. Thorough training and retraining programs, and regular supervision and checking of interviewers' work, were employed to achieve and maintain uniform interviewing practices and a high level of accuracy in recording answers on the survey questionnaire (see Data Collection: Interviews). Non-uniformity of the interviewers themselves is also a potential source of error, in that the impression made upon respondents by personal characteristics of individual interviewers, such as age, sex, appearance and manner, may influence the answers obtained.

    Non-response bias

    Non-response may occur when people cannot or will not cooperate in the survey, or cannot be contacted by interviewers. Non-response can introduce a bias to the results obtained in that non-respondents may have different characteristics and behaviour patterns in relation to their health than those persons who responded to the survey. The magnitude of the bias depends on the extent of the differences and the level of non-response.

    The 2001 NHS(G) achieved an overall response rate of 92% (after sample loss). The response rate for the 2001 NHS(I) was 91% in non-sparsely settled areas and 87% in sparsely settled areas. Data to accurately quantify the nature and extent of the differences in health characteristics between respondents and non-respondents are not available. Under- or over-representation of particular demographic groups in the sample is compensated for at the State, section of State, sex and age group levels in the weighting process. Other disparities are not adjusted for.

    Individuals for whom a partial response was obtained were treated as fully responding for estimation purposes if sufficient information was recorded; for example, if the only questions not answered related to income or age (provided the interviewer had recorded an estimate), the non-response items were coded to 'not stated'. With the exception of responses to the Supplementary Women's Health Questionnaire, if any other questions were not answered, respondents were treated as non-responding (i.e. as if no questionnaire had been obtained). Missing answers to questions in the Supplementary Women's Health Questionnaire were recorded as 'not stated'; information from these questionnaires was only completely omitted from the survey data file (i.e. treated as if the questionnaire had not been received at all) when the information provided was considered so scant as to be of no use.

    In 2001 NHS(I), Indigenous facilitators were used to assist with interviews in an attempt to further reduce the impact and level of non-response. The sample achieved was weighted to population benchmarks to reduce the effect of any non-response bias.

    Processing errors

    Processing errors may occur at any stage between initial collection of the data and final compilation of statistics. Specifically, in this survey, processing errors may have occurred at the following stages in the processing system:
    • clerical checking and coding - errors may have occurred during checking of questionnaires for completeness and during input or output coding
    • data transfer - errors may have occurred during the OMR transfer of data from the original questionnaires to computer files, or in transferring data between records
    • editing - computer editing programs may have failed to detect errors which could reasonably have been corrected
    • manipulation of data - errors may have occurred during various stages of computer processing involving the manipulation of raw data to produce the final survey data files (e.g. during the estimation procedure or weighting of the data file or in the course of deriving new data items from raw survey data).

    A number of steps were taken to minimise errors at various stages of processing:
    • coding - experienced personnel undertook the majority of input and output coding. Detailed instructions were provided, and coding was supervised and checked. An audit was conducted of the quality of country of birth, language, occupation and industry coding undertaken in the ABS State offices to ensure appropriate quality standards were maintained.
    • quality assurance - comprehensive quality assurance procedures were developed for the coding of conditions, medications and alcohol data. The centralisation of this coding meant such a strategy could be developed and implemented more easily than under a decentralised arrangement, as applied for input coding. These procedures aimed to ensure that the accuracy of coding was continuously monitored and that major coding errors were corrected where warranted. The procedures were aimed at a maximum error rate of 3% on condition, medication and alcohol coding; in the end an error rate of less than 1% was achieved.
    • computer editing - edits were devised to ensure that logical sequences were followed in the questionnaires, that necessary items were present and that specific values lay within certain ranges. These edits were designed to detect reporting errors, errors that may have occurred when the data were entered onto computer files, incorrect relationships between data items or missing data items.
    • data file checks - at various stages during processing (such as after computer editing and subsequent amendments, weighting of the file and after derivation of new data items) frequency counts and/or tabulations were obtained from the data file showing the distribution of persons for different characteristics. These were used as checks on the contents of the data file, to identify unusual values which may have significantly affected estimates and illogical relationships not previously identified by edits.
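The range and consistency edits described above might look like the following sketch (the field names and limits are invented for illustration and are not the survey's actual edit specifications):

```python
def edit_check(record):
    """Return a list of edit failures for one hypothetical survey record."""
    failures = []
    # Range edit: a value must lie within plausible bounds.
    if not 0 <= record.get("age", -1) <= 120:
        failures.append("age outside valid range")
    # Consistency edit: relationships between items must be logical.
    if record.get("smoker") == "daily" and record.get("age", 0) < 15:
        failures.append("daily smoker reported for child under 15")
    return failures

print(edit_check({"age": 200, "smoker": "never"}))  # prints ['age outside valid range']
```

Records failing such edits would be flagged for clerical review or amendment rather than automatically changed.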

    Other factors affecting estimates

    In addition to data quality issues, there are a number of other factors, both general and specific to individual topics, which should be considered in interpreting the results of this survey. The general factors affect all estimates obtained, but may affect topics to a greater or lesser degree depending on the nature of the topic and the uses to which the estimates are put. This section outlines these general factors. Additional issues relating to the interpretation of individual topics are discussed in the topic descriptions provided in other sections.
    • Sampling variability: It is important to bear in mind that survey estimates are derived from a sample of the population and are, therefore, subject to sampling variability. Consideration should be given to whether estimates are sufficiently reliable for the uses to which they are to be put. Sampling variability and its implications for data reliability are discussed in Data Quality: Sampling Variability.
    • Scope: The scope of the survey defines the boundaries of the population to which the estimates relate. The most important aspect of the survey scope affecting the interpretation of estimates from this survey is that institutionalised persons (including inpatients of hospitals, nursing homes and other health institutions) and other persons resident in non-private dwellings (e.g. hotels, motels, boarding houses) were excluded from the survey.
    • Personal interview and self-assessment nature of the survey: The survey was designed using personal interview and self-completion questionnaires (with proxy interviews for children aged under 18 years), to obtain data on respondents’ own perceptions of their state of health, their use of health services and aspects of their lifestyle. The information obtained is therefore not necessarily based on any professional opinion (e.g. a doctor, nurse, dentist, etc.) or on information available from records kept by respondents. For this reason data from this survey are not necessarily compatible with data from other sources or with data collected by other methods.
    • Concepts and definitions: The scope of each topic and the concepts and definitions associated with individual pieces of information (see Survey Content and Methods) should be considered when interpreting survey results.
    • Wording of questions: To enable accurate interpretation of survey results it is essential to bear in mind the precise wording of questions used to collect individual items of data, and particularly in those cases where the question involved a series of ‘running prompts’ or where a prompt card was used.
    • While no analysis had been conducted at the time of writing, it is believed that reporting of medical conditions is better when questions ask about a specific condition, or when that condition is otherwise specifically identified (e.g. through a prompt card), than when identification is left to the respondent in response to a general question. It is not practicable to mention all conditions in questions or prompts; the approach taken in the survey was to identify NHPA conditions and some other conditions of particular interest or known from previous surveys to require special attention. The fact that some conditions are specifically identified in the questionnaires and others are not will affect the relativity of response levels (and possibly accuracy) between conditions; where the level and nature of identification has changed between surveys, it will also affect comparability.
    • Reference periods: All results should be considered within the context of the time references that apply to the various topics. A variety of reference periods was used for specific topics (e.g. one week for alcohol consumption, two weeks for exercise and actions taken, four weeks for events resulting in injury, six months for long-term conditions, etc.). Caution should be exercised when attempting to extrapolate results of this survey to time periods other than those on which the estimates are based, or when attempting to interpret cross-classifications of items which used different reference periods.
      Although it can be expected that a larger section of the population would have reported taking a certain action if a longer reference period had been used, the increase is not proportionate to the increase in time. While it is possible to produce reasonable estimates of the number of actions taken in a year by multiplying the estimate for two weeks by 26, it is not possible to produce, by this method, estimates of the number of persons who took those actions.

      This should be taken into consideration when comparing results from this survey to data from other sources where the data relates to different reference periods.
    • Coding framework: The coding framework (i.e. the classifications and categories) used in the survey provides an indication of the level of detail available in survey output. However, the coding framework adopted had to take account of the ability of respondents to provide the data, and may limit the amount of detail that can be provided in statistical output. For example, the output classifications of medical conditions reported by respondents were developed in recognition of the type of information reported (e.g. non-medical terminology, symptoms rather than conditions, generic rather than specific terminology, etc.). One result of this is that some caution should be used in interpreting counts from this survey of the number of medical conditions experienced, since such counts would, in part, be a function of the categories contained in the classification. The major classifications used in this survey are briefly discussed under the relevant topic descriptions in Content and Methods. Copies of, or references to, the full classifications are provided in Appendixes.
    • Collection period: It is important to bear in mind the survey collection period (from February to November 2001 for the 2001 NHS(G) and from June to November 2001 for 2001 NHS(I), with a 6 week break from 28 July to 10 September 2001 for the 2001 Population Census and Post Enumeration Survey) when considering results in perspective, or when comparing them with data from another source.
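The reference-period arithmetic mentioned under 'Reference periods' above can be made concrete (a sketch with invented figures): a two-week estimate of the number of actions can be annualised by multiplying by 26, but a two-week estimate of the number of persons taking an action cannot, because the same person may act in many two-week periods of the year.

```python
def annualise_action_count(actions_in_two_weeks):
    """Approximate actions per year from a two-week estimate
    (26 two-week periods per year)."""
    return actions_in_two_weeks * 26

# A hypothetical 500,000 actions in two weeks scales to 13,000,000 per
# year; no such scaling is valid for counts of PERSONS taking an action.
print(annualise_action_count(500000))  # prints 13000000
```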

    Specific data quality issues for the sparse NHS(I)

    Based on experience with previous ABS surveys of Aboriginal and Torres Strait Islander peoples in sparsely settled areas, it was known that standard survey concepts and questions are not always appropriate. Specific testing was conducted in sparsely settled areas with the aim of testing as much as possible, while recognising that some items could not be fully tested. Based on its best judgement, the ABS proceeded to enumeration on the understanding that such data would be assessed for quality using interviewer feedback and post-field quality checks, and would not be released if data quality was considered unacceptable.

    Data quality investigations undertaken on sparse NHS(I) data before final enumeration

    A validation process was undertaken on the 2001 NHS(I) pilot test data collected in sparsely settled areas to assess its quality. The following methods were employed to validate each data item:
    • Validation against health clinic records

    Approximately 50% of completed questionnaires from the pilot test were randomly selected and validated against the respondents' health clinic records where possible, after first obtaining each respondent's permission.
    • Interviewer evaluation

    Each data item was evaluated based on interviewer feedback on the performance of survey questions in the field.

    The outcomes from field testing and the validation process indicated that approximately 40% of the 2001 NHS(G) content would be of acceptable data quality if collected for the 2001 NHS(I) sample in sparsely settled areas using the personal interview collection methodology. This assessment was based on the negligible level of mismatch encountered when validating topics against clinic records (e.g. visits to hospitals, hearing problems, sight problems and injuries), and on the favourable assessment given by interviewers to the performance of questions (e.g. general demographics, smoking, dental visits). Field testing also indicated that the number of data items collected in sparsely settled areas could be increased to approximately 60% of the 2001 NHS(G) content if selected health information were provided by health clinic staff (based on clinic records). The topics judged best suited to collection from clinics were the type and number of medications used, health service usage, immunisation status, type of diabetes, and some women's health items.

    Based on these findings, the ABS sought consent and support from State and Territory health departments and representative Aboriginal and Torres Strait Islander community controlled health organisations in October 2000 for health clinic staff to provide selected information on the respondent's behalf from clinic records, with the respondent's written permission. There was general support for ABS efforts to collect reliable health information on Aboriginal and Torres Strait Islander peoples. However, the ABS was advised that relevant ethics committees in each State or Territory would have to fully consider all the implications of using health clinic records, and that this process could be very lengthy. As there was limited time before enumeration was due to commence, the collection of selected health data from community health clinics was not adopted for the 2001 NHS(I). Instead, it was decided to collect items via personal interview, including items where some data quality concerns remained, with a view to examining the data collected. If no quality issues serious enough to undermine the usefulness of the data were encountered, the data would be released. The data items listed below were those where further investigation was considered necessary. Interviewers were instructed to record details about these items so that an assessment could be made prior to releasing the results.

    Items from the 2001 NHS(I) where further investigation was required to assess quality:
    • Influenza vaccination status
    • Pneumococcus vaccination status
    • Whether admitted to hospital
    • Number of times admitted to hospital
    • Number of nights in hospital in most recent stay
    • Whether discharged in last 2 weeks
    • Patient type at most recent admission
    • Whether visited casualty/ emergency ward or outpatients section (remote)
    • Number of times visited casualty/emergency or outpatients (remote)
    • Whether consulted general practitioner
    • Number of times consulted GP
    • Number of times consulted specialist
    • Time since last consulted GP
    • Whether consulted OHP
    • Type of OHP consulted
    • Whether ever had a mammogram
    • Reasons for last mammogram
    • Whether have regular mammograms (remote)
    • Whether ever had a Pap smear test
    • Whether have regular Pap smear tests (remote)

    Data quality investigations undertaken on sparse NHS(I) data after final enumeration

    Interviewer assessments and feedback

    As part of the sparse NHS(I) questionnaire, all interviewers completed an 'Interviewer's Assessment'. The aim of this assessment was to provide qualitative information that would assist in data validation. This assessment covered the items where further investigation was required and also other items where there can be some level of inaccuracy in reporting.

    Rather than having the interviewer subjectively assess the accuracy of the respondent's answers, interviewers were instructed to record 'observations' on particular topics and questions. These observations were then used to assess data quality.

    Interviewers were asked to assess responses given using the following 4-point scale:
    • adequate answer (the respondent gives a confident answer that meets the objectives of the question)
    • qualified answer (the respondent gives an answer that meets the objectives of the question but with some uncertainty, e.g. they are 'pretty sure' or 'think so')
    • inadequate answer (the respondent gives an answer that they are completely unsure about, e.g. an obvious guess)
    • no answer provided (the respondent is not able to answer or refuses to answer).

    The topics included for interviewer assessment were:

    Self-assessed health
    Adult immunisation
    Cardiovascular conditions
    Long-term health conditions
    Hospital visits
    Dentist visits
    Doctor visits
    Women's health
    Height and weight.

    Interviewer assessments and feedback indicated that height and weight were the least accurately reported items. However, sparse NHS(I) respondents who were unsure of their height or weight were asked whether they would agree to be measured, and interviewers reported that nearly all agreed. It should be noted that height and weight are not well self-reported in the general population, and some level of inaccuracy is always expected.

    Most questions that asked for a reference period (i.e. how long since respondent had seen optometrist/dentist/doctor) were considered to contain some inaccuracies. When required, the interviewer prompted the respondent with a specific reference event (e.g. Christmas) in order to gain a more accurate response. While some problems were expected with the "Income" section of the questionnaire, results from the Interviewer Assessments showed that, with the exception of Q.374 ('Before Income Tax and other expenses are taken out, how much does your spouse/partner usually receive?'), responses to these questions were considered by interviewers to be reasonably accurate.

    General interviewer feedback received from all states was relatively consistent, with the same types of questions reported as problematic in all reports. Contributing factors included language barriers, unfamiliar Western concepts, the varying skills of Indigenous facilitators, a tendency for respondents to say they are perfectly healthy, and the application of individual interviewing methods within a group-oriented culture.

    Validation against external data sources

    In validating the survey data, results were compared against external data sources where possible. As there is only limited health information available on the Indigenous population, and the data that are available are collected using different methodologies, direct comparisons between 2001 NHS(I) data and other external sources were not always possible. When direct comparisons were not possible, similar data were compared, where available, to check that general trends in the data were similar. For some items, because the data are not readily available from another source, no validation against an external data source was possible. The 2001 NHS(I) data were also compared with comparable data collected in the 2001 NHS(G) and the 1995 NHS to ensure the data followed expected patterns. Although it was not always possible to verify results against an external source, extensive internal validation was performed, as outlined previously, for all data items.

    Outcomes of data quality investigations on sparse NHS(I) data

    The data quality investigations undertaken did not provide strong evidence that any particular data items had quality concerns serious enough to undermine the usefulness of the data. However, this assessment was made on the basis of information available to the ABS at the time, and only limited external validation was possible for some items. The items listed above as those 'where further investigation was performed' should be used with caution; if they appear to contradict another reliable source of data, the ABS should be contacted. Some apparent discrepancies can be due to differences in the method or scope of the data collection, or various other factors, without either source of data necessarily being incorrect. The ABS will investigate any apparent discrepancies and make a decision regarding the reliability of the data item in question.


    As noted above, there are a variety of factors which have affected the quality of the data collected. Through various means in the development and conduct of this survey the ABS has sought to minimise the effects of these factors; however, only sampling error can be quantified, enabling users of the data to adjust for possible errors when using/interpreting the data. For the other issues affecting the data, information is not available from the survey to enable their effects to be quantified. The relative importance of these factors will differ between topics, between items within topics, and by characteristics of respondents.

    Comments have been included in individual topic descriptions in this publication to alert users of the data to the more significant issues likely to affect results for that topic or items within it. These notes reflect, in part, ABS experience with past health and other surveys, feedback from users of data from those surveys, ABS and other research on survey methods and response patterns, testing for this survey, comparisons between survey data and other data sources, and, in part, 'common sense'. However, these comments are indicative only, and are not necessarily comprehensive of all factors affecting results, nor of the relative importance of those factors.

    Against this background, the following general comments are provided about interpreting data from the survey:
    • the survey aims to provide statistics which represent the population or component groups of the population; the survey does not aim to provide data for analysis at the individual level. While errors of the types noted above may occur in individual respondent records, if these errors are not repeated commonly throughout the respondent population, they will have little impact on the estimates from the survey, and hence little impact on the story to be gleaned from those estimates.
    • the survey data are all self-reported. For some topics/items, their self-reported nature is the purpose/value of the item (self-assessed health, changes in health, self-assessed body mass, reasons for not insuring, etc.) while for some others self-reported data are the only source of the information, particularly information with a population group perspective (e.g. insurance status, diet, alcohol consumption). For other topics/items information is available from other sources (e.g. hospital episodes), and because of the different sources and methods, including the self-reported nature of the survey data, the information will likely differ between those sources. In the case of data from administrative sources it is likely (though not necessarily certain) that those data will be more accurate than the survey data. However, the survey data should not be discounted on that basis; survey data can often show other dimensions to the data (e.g. population group dimension, related and other health characteristics, information about uses of other health services) which are not available from administrative sources.
    • some survey topics, such as alcohol consumption, are known to be of relatively low data quality. While this means the data should be interpreted with care, the information is still considered valuable for certain uses. For example, while the overall levels of alcohol consumption described by the survey should be interpreted with caution, the data are still considered useful in describing consumption patterns across days of the week, types of drink consumed, relative levels of consumption across population groups, alcohol consumption in relation to other risk behaviours or characteristics, and for monitoring changes in the levels and patterns of consumption over time. Notes regarding any known data quality issues are contained in the individual topic descriptions in this publication.
    • although various reference periods are used throughout the survey for different topics (e.g. current, usual, last week, last 2 weeks, last 4 weeks) the survey essentially provides a 'point in time' picture of the health of the population and of population sub-groups. The survey then provides information about the prevalence of characteristics, not the incidence of those characteristics or of changes in characteristics (except in terms of differences between surveys). Because the survey was conducted over a 10 month period, the results essentially are an average over that period e.g. they represent a typical week, fortnight, etc in that period. In some cases these estimates can reasonably be expanded to represent a different reference period. For example, the number of doctor consultations in a two week period can be multiplied by 26 to provide an estimate of consultations over a year; however see point below regarding seasonal variations. In other cases, for example persons who consulted a doctor, the data cannot be expanded in this way.
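The expansion described in the last point can be illustrated with a small sketch. The figures below are hypothetical and are used for illustration only; they are not NHS estimates.

```python
# Sketch: expanding a two-week estimate to an annual figure
# (hypothetical numbers for illustration only).

FORTNIGHTS_PER_YEAR = 26

# An estimated NUMBER of doctor consultations in the two-week reference
# period can be multiplied out to a full year...
consultations_fortnight = 1_500_000
consultations_year = consultations_fortnight * FORTNIGHTS_PER_YEAR

print(consultations_year)  # 39000000

# ...but an estimated number of PERSONS who consulted a doctor cannot be
# expanded this way, because the same person may consult in many fortnights.
```

Note that this simple multiplication also ignores the seasonal variation discussed under 'Enumeration period' later in this chapter.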

    For the 2001 NHS(I), the content collected in sparsely settled areas was a subset of that collected in non-sparsely settled areas; therefore, not all data items are available for the total Indigenous population. Also, no 1995 NHS data are available for sparsely settled areas, restricting comparisons between Indigenous estimates for 1995 and 2001 to non-sparsely settled areas.

    In both 1995 and 2001, all children of Aboriginal and/or Torres Strait Islander origin living in households in non-sparsely settled areas had a random chance of selection in the 2001 NHS(G). Similarly, all such Indigenous children had a chance of selection in the Indigenous supplement to the 1995 NHS. Selected households in non-sparsely settled areas identified to have at least one usual resident of Aboriginal and/or Torres Strait Islander origin were enumerated. However, in the 2001 NHS(I), selected households were screened to identify only those households where at least one adult (18 years or over) of Aboriginal and/or Torres Strait Islander origin was a usual resident of the household. Therefore, Indigenous children living in non-sparsely settled areas where there was no Indigenous adult usually resident in the household (up to one quarter of all Indigenous children in non-sparsely settled areas reside in such households) did not have a chance of selection in the supplement.

    Indigenous respondents from the 2001 NHS(G) and 2001 NHS(I) samples were weighted and then benchmarked to Indigenous population estimates (for age, sex, and area of usual residence) so that final survey estimates will be representative of the age and sex characteristics of the Indigenous population in different areas. However, it is possible that the health characteristics of Indigenous children living in households where there are no Indigenous adults may be different to those of Indigenous children of the same age and sex living in the same non-sparsely settled areas, but in households where Indigenous adults are resident. If such differences exist, then survey results for Indigenous children may under-represent these differences. Although the methodology employed may under-represent these children in the final estimates which could affect interpretation of some results, the under-representation is generally not significant in the context of the sampling error associated with the survey. In the 2004-05 Indigenous Health Survey, field procedures will be changed to provide for adequate representation of Indigenous children in households with no resident Indigenous adult.
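The weighting and benchmarking step described above can be sketched as a simplified post-stratification: initial weights are scaled within each benchmark cell (e.g. age x sex x area of usual residence) so that the weighted totals match independent population estimates. The cells and counts below are hypothetical, and the actual NHS estimation procedure is more involved than this sketch.

```python
# Simplified post-stratification sketch (hypothetical figures).
# Each benchmark cell might represent an age x sex x area group.

initial_weights = {
    "cell_A": [120.0, 110.0, 130.0],  # weights of sampled persons in cell A
    "cell_B": [200.0, 180.0],         # weights of sampled persons in cell B
}

# Independent population estimates (benchmarks) for each cell.
benchmarks = {"cell_A": 400.0, "cell_B": 500.0}

final_weights = {}
for cell, weights in initial_weights.items():
    # One adjustment factor per cell: benchmark / current weighted total.
    factor = benchmarks[cell] / sum(weights)
    final_weights[cell] = [w * factor for w in weights]

# After adjustment, the weighted totals reproduce the benchmarks exactly.
for cell in benchmarks:
    assert abs(sum(final_weights[cell]) - benchmarks[cell]) < 1e-9
```

This is why final estimates reproduce the age, sex and area composition of the Indigenous population even though the sample composition differs from it.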


    Australia's Indigenous population is considerably younger (on average) than the non-Indigenous population, and there is a close relationship between age and health-related issues. It is often misleading to compare Indigenous and non-Indigenous health outcomes unless the data have been age standardised to take account of this difference (i.e. adjusting the results to reflect the age composition of the total Australian population at the 2001 Census). Therefore, in National Health Survey: Aboriginal and Torres Strait Islander Results Australia, 2001 (cat. no.4715.0) any results presenting total population prevalence rates by Indigenous status were age standardised (about half the tables in the publication). However, for more detailed presentations by age group, the data were not age standardised. Analysis showed that, within narrow age groups, age standardisation did not significantly affect the results. Therefore, presenting the data without being age standardised provided measures that allowed comparisons between sub-populations as well as providing measures of prevalence within the reported sub-populations.

    For National Health Survey: Aboriginal and Torres Strait Islander Results Australia, 2001, the direct age standardisation method was used. The formula for direct standardisation is as follows:

    Cdirect = Σa (Ca × Psa)

    where Cdirect = the age standardised estimate of prevalence for the population of interest,
    a = the age categories that have been used in the age standardisation,
    Ca = the estimate of prevalence for the population being standardised in age category a, and
    Psa = the proportion of the standard population in age category a.
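As an illustration of the formula, the following sketch computes Cdirect from hypothetical age-specific prevalence figures (Ca) and standard-population shares (Psa); these numbers are invented for illustration and are not NHS estimates.

```python
# Direct age standardisation: Cdirect = sum over a of (Ca * Psa).
# All figures below are hypothetical.

# Ca: estimate of prevalence in each age category a for the population of interest.
prevalence_by_age = {"0-14": 0.05, "15-44": 0.10, "45-64": 0.20, "65+": 0.35}

# Psa: proportion of the standard population in each age category a
# (shares sum to 1.0).
standard_population_share = {"0-14": 0.21, "15-44": 0.44, "45-64": 0.23, "65+": 0.12}

c_direct = sum(prevalence_by_age[a] * standard_population_share[a]
               for a in prevalence_by_age)

print(round(c_direct, 4))  # 0.1425
```

The age-standardised rate is simply a weighted average of the age-specific rates, with the standard population supplying the weights.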

    Data which have been tabulated according to broad age groupings have not been age-standardised and hence the rates apply to the Indigenous and non-Indigenous populations without adjustment for the differing age structures. These rates, together with the total estimates presented in Table 7.4 below, can be used to calculate the actual population estimate for an item of interest. The ABS considers that comparisons of unadjusted rates within the broad age groups presented in National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) would be little different if standardised within the age ranges.

    TABLE 7.4 Population estimates: Aboriginal and Torres Strait Islander Persons: Summary health characteristics, Australia, 1995 and 2001(a)


    Health characteristic
    1995: Non-Remote areas (a)
    2001: Remote areas (b)
    2001: Non-Remote areas (b)
    Self-assessed health status (c)
    Excellent/very good
    Selected long-term conditions (d)(e)
    Eye/sight problems
    Ear/hearing problems
    Circulatory problems/diseases
    Back problems
    No long-term condition
    Health-related actions (f)
    Admitted to hospital (g)
    Visited casualty/outpatients
    Doctor consultation (GP and/or specialist)
    Dental consultation
    Consultation with other health professional
    Day(s) away from work/study (h)
    Risk behaviour/characteristics
    Current daily smokers (i)
    Risky/high risk alcohol consumption (i)
    Sedentary/low level exercise (c)(j)
    Overweight/obese BMI (c)
    Low usual daily fruit intake (b)(j)(k)
    Low usual daily vegetable intake (b)(j)(k)

    a) Indigenous data for 1995 are only available for non-remote areas. As a result non-Indigenous and time series comparisons are made on this basis.
    b) See Appendix 1 'Glossary of terms'.
    c) Persons aged 15 years and over.
    d) For 1995 data, International Classification of Diseases: 9th revision (ICD-9) based output classification.
    e) For 2001 data, International Classification of Diseases: 10th revision (ICD-10) based output classification.
    f) Hospital admissions relate to 12 months prior to interview. All other health-related actions relate to the two weeks prior to the interview.
    g) Hospital admissions collected for different timeframes in 1995 (2 weeks) and 2001 (12 months). While 1995 data are available, they are not presented here as comparisons over time are not possible.
    h) Persons aged 5 years and over.
    i) Persons aged 18 years and over.
    j) Data collected for non-remote areas only.
    k) Persons aged 12 years and over.

      Understanding the comparability of data from the 2001 NHS with data from the previous NHS in 1995 (and with the 1989-90 NHS) is important to the use of those data and the interpretation of apparent changes in health characteristics over time. While the 2001 NHS is deliberately the same as or similar to the 1995 NHS in many ways (and in part to the 1989-90 NHS), there are important differences across most aspects of the surveys: sample design and coverage, survey methodology and content, definitions, classifications, etc. These differences will affect the degree to which data are directly comparable between the surveys, and hence the interpretation of apparent changes in health characteristics over the 1995 to 2001 period.

      Throughout the topic descriptions and in other parts of this publication, comments have been made about the changes between surveys and their expected impact on the comparability of data. These are general comments based on results of testing, ABS experience in survey development, and a preliminary examination of results from the 2001 survey. As a result they should not be regarded as definitive statements on comparability and may omit the types of findings which might result from a detailed analysis of the effects of all changes made.

      The following tables summarise the key differences between the 1995 and 2001 surveys, and hence the degree of comparability between them:
      • Table 7.5 shows general characteristics of the two surveys, where differences have the potential to affect comparability for several or all topics, and provides some general comments as to the probable effects of these changes.
      • Table 7.9 compares the survey content at the topic level. As further issues affecting comparability may be present at the data item and classification levels readers are encouraged to refer to the individual topic outlines in other parts of this publication for more detail.

      TABLE 7.5: General Survey Characteristics

      Collection method
      1995 NHS: Personal interview with adult respondents; proxy interview for children less than 18 years. Self-completion questionnaire for adult female respondents. Self-completion general health & well-being questionnaire for adult respondents.
      2001 NHS(G): Personal interview with adult respondents; proxy interview for children less than 18 years. Self-completion questionnaire for adult female respondents.

      Questionnaires
      1995 NHS: Household form; Main questionnaire; Women's supplementary questionnaire; General health and well-being questionnaire.
      2001 NHS(G): Household form; Main adult questionnaire; Main child questionnaire; Women's supplementary questionnaire.

      Sample coverage
      1995 NHS: Private and special dwellings. Urban, rural and sparsely settled areas. All States and Territories; additional sample in Victoria, South Australia, NT and ACT.
      2001 NHS(G): Private dwellings only. Urban and rural areas. All States and Territories; additional sample in ACT. The NT sample was reduced such that it contributes appropriately to national estimates, but is not large enough to support estimates for the NT.

      Sample design/size
      1995 NHS: All persons. Sub-sampling of some topics; see Table 2. Fully responding households = 21 787. Final sample = 53 828 persons.
      2001 NHS(G): All children aged 0-6 years, one child 7-17 years, one adult per dwelling. Fully responding households = 17 918. Final sample = 26 863 persons.

      Enumeration period
      1995 NHS: January 1995 to January 1996.
      2001 NHS(G): February 2001 to November 2001.

      Collection methodology
      1995 NHS: Pen and paper questionnaire. OMR and key data entry. Manual coding.
      2001 NHS(G): Pen and paper questionnaire. OMR and key data entry. Manual coding, supported by CAC systems.

      Main output units
      1995 NHS: Person; household, family, income unit.
      2001 NHS(G): Person.

      Sample design/size
      While the overall sample of households was about 18% lower in 2001 than 1995, the enumeration of selected persons only within households has meant the 2001 sample of persons is about half that of the 1995 survey. The 2001 approach has had the effect of spreading the sample more and reducing the effects on the final estimates of clustering of characteristics within households. However the smaller sample in 2001 has the effect of more than doubling the standard errors on estimates as shown below:

      Table 7.6 Relative standard errors (%)

      Size of estimate: Australia
      1995 NHS
      2001 NHS(G)


      The reduced reliability of estimates of similar size from the 2001 survey compared with the 1995 survey should be considered when interpreting apparent changes between the surveys. It is recommended that apparent changes be subjected to significance testing to ensure that they are not simply the product of differences in sample size and design.
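One way such a significance test can be carried out, given published estimates and their relative standard errors (RSEs), is sketched below. The figures are hypothetical, and the sketch assumes the two survey samples are independent (so the standard error of the difference is the root sum of squares of the two standard errors).

```python
import math

# Sketch: testing whether an apparent change between the 1995 and 2001
# surveys is statistically significant. All figures are hypothetical.

def standard_error(estimate, rse_percent):
    """Recover the standard error from a published RSE (%)."""
    return estimate * rse_percent / 100.0

est_1995, rse_1995 = 950_000, 4.0    # estimate and RSE (%), 1995 survey
est_2001, rse_2001 = 1_050_000, 7.0  # 2001 survey (smaller sample, larger RSE)

# SE of the difference, assuming independent samples.
se_diff = math.sqrt(standard_error(est_1995, rse_1995) ** 2 +
                    standard_error(est_2001, rse_2001) ** 2)
z = (est_2001 - est_1995) / se_diff

# |z| > 1.96 would indicate significance at the 5% level;
# here the apparent change is not statistically significant.
print(round(z, 2))
```

In this illustration the apparent increase of 100,000 is well within sampling variability, which is exactly the situation the paragraph above warns against over-interpreting.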

      Through the weighting process, survey estimates at the State x part of State x sex x broad age group level will be the same as, or very similar to, the benchmark populations. Because the characteristics of the sample are not identical to those of the benchmark population (see table below), some records will receive higher or lower weights than others. As a result the RSE on estimates for those particular groups may also be slightly higher or lower than the average RSE shown in the table above. As this will vary between surveys, it is a factor to consider in comparing 1995 with 2001 data, but its impact on comparability is expected to be small.

      TABLE 7.7. Survey weighting

      2001 NHS(G)
      1995 NHS

      Sex / age (yrs)
      % of adults in sample
      % of adults in pop'n(a)
      % of adults in sample
      % of adults in pop'n

      18 - 24
      25 - 34
      35 - 44
      45 - 54
      55 - 64
      65 - 74
      75 and over

      (a) Benchmark population as at 30 June 2001.

      Partial enumeration of households
      In the 1995 NHS all persons in sampled dwellings were included in the survey, and only records from fully responding households were retained on the data file. This meant that results could be compiled at household, family and income unit level in addition to person level. Because the 2001 survey sub-sampled persons in households (one adult, one child 7-17 years, all children 0-6 years) complete enumeration only occurred in a minority of households, and by definition, only in single adult households. The table below shows the degree of enumeration within households, by household composition.

      TABLE 7.8: Number of Households by Composition and Coverage: 2001 NHS(G)

      All household members enumerated
      All children enumerated (a) but one or more adults not enumerated
      All adults enumerated but one or more children not enumerated
      One or more adults and one or more children not enumerated

      Single person
      Couple only
      6 (b)
      Single adult with child(ren)
      Couple with child(ren)
      683 (b)
      Total - number
      Total - per cent

      (a) Includes households with no children.
      (b) Includes households where a spouse was less than 18 years old.

      Enumeration period
      The 2001 NHS(G) survey was effectively enumerated over about a ten month period, compared with a 12 month period for the 1995 survey; the 2001 survey was not enumerated in December or January, nor during a 6 week period mid-winter (coinciding with the conduct of the 2001 Census of Population and Housing and the Post Enumeration Survey). The effects of the shorter enumeration period have been assessed. For most variables collected in the 2001 NHS(G), the seasonal pattern is such that the shorter enumeration period should not produce any bias of a level that would be problematic to most users. Statistically significant differences were found for some items in the alcohol consumption, visits to other health professionals and exercise topics. Because the NHS(I) was conducted over only a six month period, any seasonal effects may be exaggerated for the NHS(I) sample; this should be taken into account in interpreting Indigenous results, particularly for these topics.

      Because some data in the 2001 survey were not collected in the 1995 survey, or were collected in a substantially different way, it has not been possible to examine the possible effects of the shorter enumeration on all estimates from the 2001 survey, and users are advised to consider this when interpreting the data.

      TABLE 7.9: Survey content
      Note: This table only refers to content for 2001 NHS(G) and non-sparse NHS(I). For details about whether particular items are available for sparse NHS(I), please refer to Indigenous Output Data Item List on the web site.

        Topics covered
      1995 NHS
      2001 NHS
      Main items available from 2001
      Comments on main differences between 1995 and 2001

      Health status indicators

      General health -
      Self-assessed health status
      Health transition
      Quality of life


      Self-assessed health status

      Health transition
      Quality of life
      Covered as part of SF36 in 1995
      Recent illness
      Asthma status; whether has asthma action plan; type of plan; types of medications used in last 2 weeks; reasons for medication use; type of other action for asthma in last 2 weeks.
      Covered in the context of general long-term conditions in 1995, and similar recent actions data obtained in 1995, but key conceptual and classification changes between surveys.
      Cancer status; age diagnosed; type of cancer; types of medications used in last 2 weeks.
      Covered in the context of general long-term conditions in 1995. Conceptual and classification changes between surveys.
      Circulatory conditions
      Circulatory condition status; types of condition; type of medications used in last 2 weeks.
      Covered in the context of general long-term conditions in 1995. Conceptual and classification changes between surveys.
      Diabetes/ high sugar levels
      Diabetes/High sugar level status; types of diabetes; types of medications used in last 2 weeks; type of other actions taken to manage condition; whether condition interferes with usual activity; whether diabetes- related sight problems; period since last visited optometrist/eye specialist.
      Similar methodology used. Data items differ between surveys but share a core of common items.
      Other long-term conditions
      Types of condition.
      Classification and methodological differences.
      Cause of reported long-term conditions
      Whether work related or result of injury; type of injury event.
      Different methodology means that data conceptually differ between surveys. Also, broader scope in 1995 than 2001: recent and long-term conditions in 1995 but long-term conditions only in 2001.
      Whether "defined" event in last 4 weeks resulting in action; whether resulted in injury; type of occurrence; type of injury; parts of body injured; activity at time of event; location of event; whether attended for treatment; whether time off work/school or other reduced activity as a result of injury.
      New methodology adopted in 2001, which resulted in conceptually different coverage of topic. Additional data items in 2001.
      Asthma symptoms
      Whether whistly/wheezy chest - frequency; Whether coughing at night or during exercise - frequency.
      Collected for restricted age group in 2001.
      General health & well-being (SF36)
      Mental well-being
      (not collected for 2001 NHS(I))
      Psychological distress (K10); types of medication used in last 2 weeks; frequency and duration of use.
      New topic. Includes two questions from SF36, but different methodology used to collect data.
      Health - related actions

      Stays in hospital
      Whether admitted in last 12 months; number of admissions; Most recent admission - number of nights, patient type, whether discharged in last 2 weeks.
      2 week reference period in 1995; 12 month and 2 week period in 2001. No link to reason for stay in 2001.
      Visits to casualty, outpatients, day clinics
      Whether visited in last 2 weeks; number of visits; whether outpatients visit related to admission.
      No link to reason for action in 2001.
      Doctor consultations
      Time since last visit; number of visits in last 2 weeks, separately for GP and specialist.
      GP and specialist split in 2001, but no link to reason for action.
      Dental consultations
      Time since last visit; number of visits in last 2 weeks.
      Type of treatment/service received (hence reason for action) not collected in 2001.
      Consultations with other health professionals
      Whether visited in last 2 weeks; number of visits by type of OHP.
      Expanded list of OHP types in 2001, but no link to reason for action.
      Other persons/organisations consulted
      Days away from work/school: own illness
      Whether had days away in last 2 weeks due to own illness; number of days.
      No link to reason for action in 2001.
      Days away from work/school as carer
      Whether had days away in last 2 weeks as carer; number of days.
      Other days of reduced activity
      Whether cut down on usual activities in last 2 weeks due to illness; number of days.
      No link to reason for action in 2001.
      Use of medications (incl vitamins, minerals, natural and herbal medicines).
      See asthma, cancer, heart and circulatory conditions, diabetes and mental well-being above.
      Restricted and conceptually different coverage in 2001. Whether prescribed/source of medication not available and frequency and duration of use restricted coverage in 2001.
      Health risk factors

      Adult immunisation
      Whether had influenza and pneumococcal vaccines; time since last vaccine; how obtained flu vaccine.
      Alcohol consumption
      Period since last drank; days consumed in last week; quantity of alcohol by type of drink consumed in last week (max 3 days); alcohol risk level.
      Collected for sub-sample of adult respondents in 1995. Minor change to drink categories and improved coding system in 2001, but essentially methodology and content the same for both surveys.
      Whether breastfed; duration of breastfeeding; age at introduction of infant formula, cows milk, milk substitute, solid food; reasons for ceasing breastfeeding.
      Methodology and content the same for both surveys.
      Body mass
      Self-reported height, weight and body mass; Body mass index.
      Methodology and content the same for both surveys.
      Children's immunisation
      See comment
      Type and number of vaccinations received; reported and derived immunisation status; reasons for vaccinating/not vaccinating.
      Not covered in 1995 NHS but a separate survey was conducted in April 1995. Similar content updated to reflect current immunisation schedule.
      Whether self/partner use contraception; type of contraception used; age first used cont. pill; reasons not using contraception.
      Expanded and conceptually different coverage in 2001.
      Type of milk usually consumed; usual daily intake of vegetables & fruit; frequency adding salt after cooking; Consumption of folate enriched products in last 2 weeks; Food security.
      All items except those relating to deliberate consumption of folate were obtained in the 1995 National Nutrition Survey (NNS). In the NNS usual intake of fruit and vegetables was obtained via a self-completion questionnaire - in the 2001 NHS they were obtained through interviewer administered questions supported by prompt cards.
      Type; frequency and duration of exercise in last 2 weeks; exercise level.
      Methodology and content the same for both surveys.
      Smoker status; number of smokers in household.
      Derivation of smokers in household conceptually different, but methodology and content the same for smoker status.
      Sun protection
      Whether regularly have skin checks; type of protective measures taken in last month.
      Methodology and content the same, but most items collected for children only in 2001.
      Women's supplementary health topics

      Screening for breast & cervical cancer
      Types and usual frequency of regular breast examinations; time since and reasons for last mammogram; Usual frequency of Pap tests and time since last.Similar methodology and items in both surveys; new concept of "regular" tests introduced in 2001.
      Use of Hormone Replacement Therapy
Whether currently use HRT; time used HRT. Similar questions, but scope narrowed to medically prescribed HRT in 2001.
Hysterectomy
Whether had hysterectomy; age at hysterectomy. Same items in both surveys.
      Breastfeeding history
Number of children ever had, whether breastfed, number breastfed, time breastfed each child. Number of children ever had was a new item in 2001; other items the same in both surveys.
      Population characteristics

      General demographics
Sex; age; marital status (registered and social); Indigenous status; country of birth; year of arrival in Australia; language spoken at home; proficiency in English; family type; household size, composition and type; income unit type; location. Social marital status was not available in 1995. Information about English language proficiency conceptually differs between surveys.
Education
Whether attending school; age left school; highest level of school completed; whether has post-school qualification; level of highest post-school qualification; whether currently studying full or part time. Information about post-school qualifications was recorded for a sub-sample of respondents only in 1995. Additional item on highest level of school completed available in 2001.
      Labour force
Labour force status; status in employment; number of jobs; occupation, industry and industry sector of main job; hours worked; duration of unemployment; shift work. Additional information available in 2001 on industry of employment, industry sector and shift work.
Income
Personal income - level; sources and main source; type of pension/benefit received.
Income unit income - level.
In 1995, unit income was compiled by aggregating the self-reported personal incomes of unit members; in 2001 it is the aggregation of personal income and the income of the partner/spouse as reported by the selected adult.
Housing
Dwelling type; number of bedrooms. Housing tenure and landlord details from 1995 were not collected in 2001.
      Private health insurance/health cards
Whether has PHI; contribution rate; type of cover; time covered by PHI; reasons for having/not having PHI; whether has DVA or other government concession card; type of card(s) held. Information about private health insurance was recorded for a sub-sample of respondents only in 1995. The data set in 2001 was expanded.

      Comparability of data about long-term conditions:

As noted previously, a new classification and coding system for medical conditions was introduced in the 2001 NHS. These changes would have had some effect on comparability between the 2001 and previous NHSs. In general, it is felt that the coding system introduced for the 2001 survey enabled more accurate and consistent coding of reported conditions than in previous surveys.

      Potentially greater effects on comparability may arise through the use of different methodologies in the surveys for collecting and recording the data. The table below presents a selection of results from the 1995 and 2001 surveys, with comments regarding methodological similarities or differences which may have contributed to movements between surveys. In addition to the points below, the adoption of the new approach to NHPA conditions (described in Chapter 3: Health Status Indicators) should also be borne in mind.

TABLE 7.10: Selected Long-Term Conditions: Comparison of Survey Methodology: 1995 NHS and 2001 NHS(G)

Type of condition (% difference in estimate ('000) between surveys) and comments on survey methodology:

- Neoplasm (- 3.4; + 2.0): In the 1995 NHS, the terms cancer and tumour/cyst/growth were used on prompt cards to trigger respondent reporting of current conditions. In 2001 respondents were asked directly whether they had ever been told they had cancer and whether it was still current.
- Diabetes mellitus (+ 37.2): In both surveys respondents were asked whether they had ever been told they have diabetes and whether it was still current.
- High cholesterol (+ 28.3): In both surveys high cholesterol was mentioned on prompt cards - in 1995 on a general prompt covering all long-term conditions, in 2001 on a prompt card specifically covering heart and circulatory conditions. While high cholesterol is not a heart or circulatory condition, it was felt that for many respondents this was the appropriate context in which to recall/report the condition.
- Mental and behavioural problems (+ 137.7): Similar entries on general prompt cards in both surveys, but the list was expanded in 2001. Behavioural and emotional problems, dependence on alcohol and drugs, and difficulty learning were used in both surveys. The 2001 survey also included feeling nervous/anxious and feeling depressed.
- [condition name missing] (+ 416.4): Mentioned on general illness prompt card in 2001 but not mentioned at all in 1995 survey.
- Deafness - complete/partial (+ 17.1): From specific questions in both 1995 and 2001 surveys.
- [condition name missing] (+ 3.8): Mentioned on general illness prompt card in 1995; mentioned on circulatory condition prompt card in 2001, which was used in conjunction with specific questions about heart and circulatory conditions.
- Ischaemic & other heart disease (- 18.7): Mention of heart and coronary disease on general illness prompt card in 1995; mention of a range of different conditions (incl. heart attack and angina) on circulatory condition prompt card in 2001, which was used in conjunction with specific questions about heart and circulatory conditions.
- Circulatory signs and symptoms (+ 91.4): No mention in 1995; mention of a range of different conditions (incl. tachycardia, murmur) on circulatory condition prompt card in 2001, which was used in conjunction with specific questions about heart and circulatory conditions.
- [condition name missing] (+ 9.7): Included on general prompt card for long-term conditions in 1995. Specific questions in 2001.
- [condition name missing] (+ 18.8): Included on general prompt cards in 1995 and 2001.
- [condition name missing] (+ 12.3): Included on general prompt cards in 1995 and 2001.
- Eczema / dermatitis (- 46.1): Not specifically mentioned in questions or in prompts in either survey. See, however, the general comments below.
- Disc disorders (+ 85.5): Not mentioned in questions or in prompt cards in 1995. Both contained on general long-term condition prompt card in 2001.
- Back problems unspecified (+ 580.1; - 2.6): Both conditions covered by specific questions in 1995 and in 2001.
- [condition name missing] (+ 21.6; + 21.0): Mentioned on general long-term condition prompt cards in both 1995 and 2001.

As well as differences which may arise through the use or non-use of direct questions or prompt cards, differences may arise from the context in which the questions were asked, i.e. the effects of accompanying or associated questions. For example, in the 1995 survey, after the questions about long-term conditions, respondents were asked about recent actions they had taken for their health (e.g. use of medication, consulted a doctor, had days away from work) and the medical conditions involved. This provided the opportunity for respondents to be reminded about a condition which they had but had forgotten to mention previously (e.g. because it was controlled through use of medication) and to identify it as a long-term condition (in which case earlier responses would have been amended accordingly). In contrast, in 2001 respondents were asked about recent actions but, except for the NHPA conditions, were not asked to associate those actions with a particular condition. Under the 1995 approach, for example, a respondent might be reminded about their dermatitis by questions about their use of skin ointments or creams, while in 2001 this trigger was not available. While the overall result of this context effect may have been to boost 1995 levels relative to 2001, other changes introduced in 2001 (e.g. direct questions or mention on prompt cards) may have more than compensated for this effect for some conditions.

A further factor which may affect comparability is that the reported prevalence of illness is complex and dynamic, and is directly a function of respondent knowledge and attitudes, which in turn may be affected by the availability of health services and health information, public education and awareness, accessibility of self-help, etc. For example, a public education program aimed at raising public awareness and acceptance of mental health disorders has been running in Australia for a number of years. One consequence may be that respondents are more willing to talk about, and to report, feelings of anxiousness or depression now than they were previously.

While the nature and general direction of the various influences on survey results can be gauged with reasonable surety, the size of their effects is much more difficult to determine: i.e. how much of the observed change between estimates from the 2001 NHS and those from the 1995 or 1989-90 NHSs is attributable to real changes in health characteristics, or in the relationships between characteristics, and how much to methodological or other differences between surveys, or to changes in respondent awareness of and attitudes to those characteristics. Unfortunately, data to support this type of quantitative analysis are not available.

      While the points noted above, and within individual topic sections of this publication about comparability between NHSs, are useful guides to interpreting apparent changes between surveys, data users should also consider other information external to the NHS to assist them in interpreting the data. For many topics covered in the NHS, some data are available from other sources; although these other sources will seldom be directly comparable with the NHS they can provide a basis for data comparison and assessment.

      During validation of the 2001 NHS, selected results from the survey were compared both with results from previous NHSs and with data from other sources; differences were reconciled and notes relating to differences or changes have been included where appropriate within individual topic descriptions in this publication. However, as only selected data sources were examined, other differences may exist, and users of the NHS data should contact the ABS if they have any queries regarding comparability issues.


      National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) contains selected results from the Indigenous component of the 1995 NHS. These results are limited to topics where a reasonable level of comparability between the 1995 and 2001 data is expected. While the 2001 NHS(I) is similar to the 1995 survey in many ways, there are important differences in sample design and coverage, survey methodology and content, definitions, classifications, etc. which affect the degree to which data are directly comparable between the surveys.

Due to the small size of the supplementary Indigenous samples in the 1995 and 2001 NHS(I), the Indigenous results from these surveys are not available at state level and have a larger associated sampling error than results from many other ABS surveys. For this reason, differences between reported rates for 1995 and 2001 may or may not be statistically significant. Significance testing has been undertaken on selected Indigenous and non-Indigenous comparisons (table 1) and time series data (table 2) presented in National Health Survey: Aboriginal and Torres Strait Islander Results, Australia 2001 (cat. no. 4715.0) to assist readers in judging the significance that should be attributed to apparent differences in rates. Significance testing is discussed in more detail earlier in this chapter.
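A significance test of this kind typically compares the difference between two independent survey rates against the pooled standard error of that difference. The following is a minimal sketch of that standard approach, not the ABS's exact procedure; the function name and the example figures are illustrative only, and in practice the standard errors would be taken from the survey's published standard error tables.

```python
import math

def difference_is_significant(rate1, se1, rate2, se2, z=1.96):
    """Test whether two independent survey rates differ significantly.

    Assumes the two estimates are independent, so the standard error of
    the difference is approximated by sqrt(se1**2 + se2**2).  The
    difference is treated as statistically significant at the 95% level
    when it exceeds z (1.96) times that pooled standard error.
    """
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    return abs(rate1 - rate2) > z * se_diff

# Hypothetical figures: a 1995 rate of 12.0% (SE 1.5 percentage points)
# versus a 2001 rate of 15.0% (SE 1.8 percentage points).
print(difference_is_significant(12.0, 1.5, 15.0, 1.8))
# -> False: the 3 point movement is within sampling variability here.
```

Note that with the wider standard errors typical of a small supplementary sample, even apparently large movements between surveys can fail this test, which is why the publication flags which comparisons are significant.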

      Time series information for 1995 and 2001 is based on data collected for non-sparsely settled areas only, due to concerns about the quality of data collected from sparsely settled areas in the 1995 survey. After an extensive investigation into Indigenous results from the 1995 collection, responses from people living in sparsely settled areas were excluded. Table 7.9 above in this chapter compares the survey content between 1995 and 2001 at the topic level.

