4363.0.55.001 - National Health Survey: Users' Guide - Electronic Publication, 2007-08  
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 14/09/2009   

This document was added or updated on 17/09/2009.


CONTENTS

Data quality

      Sampling variability
      Measure of sampling variability
      Standard errors of proportions, differences and sums
      Testing for statistically significant differences
      Non-sampling error
      Errors related to scope and coverage
      Response errors
      Non-response bias
      Processing errors
      Other factors affecting estimates
Interpretation of results
      Comparability with 2004-05
      General survey characteristics
      Sample design/size
      Partial enumeration of households
      Enumeration period
      Comparability of long term conditions data with 2004-05
      Survey content


DATA QUALITY

Although care was taken to ensure that the results of the 2007-08 NHS are as accurate as possible, there are certain factors which may affect the reliability of the results and for which no adequate adjustments can be made. One such factor is known as sampling variability. Other factors are collectively referred to as non-sampling error. These factors, which are discussed below, should be kept in mind in interpreting results of the survey.


Sampling variability

Since the estimates are based on information obtained from a sample of the population, they are subject to sampling variability (or sampling error), that is, they may differ from the figures that would have been obtained from an enumeration of the entire population, using the same questionnaires and procedures. The magnitude of the sampling error associated with a sample estimate depends on the following factors:
  • Sample design - there are many different methods which could have been used to obtain a sample from which to collect data on health status, health-related actions and health risk factors. The final design attempted to make survey results as accurate as possible within cost and operational constraints. (Details of sample design are contained in Chapter 2: Survey Design and Operation, under Sample Design and Selection);
  • Sample size - the larger the sample on which the estimate is based, the smaller the associated sampling error; and
  • Population variability - the extent to which people differ on the particular characteristic being measured. The smaller the population variability of a particular characteristic, the more likely it is that the population will be well represented by the sample, and therefore the smaller the sampling error. Conversely, the more variable the characteristic, the greater the sampling error.


Measure of sampling variability

One measure of the likely difference is given by the standard error (SE), which indicates the extent to which an estimate might have varied because only a sample of dwellings was included. There are about two chances in three that the sample estimate will differ by less than one SE from the figure that would have been obtained if all dwellings had been included, and about 19 chances in 20 that the difference will be less than two SEs.
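
As a worked illustration with hypothetical figures (not results from this survey), an estimate of 500,000 persons with an SE of 20,000 would be interpreted as follows:

\[ 500\,000 \pm 20\,000 = (480\,000,\ 520\,000) \quad \text{(about two chances in three of containing the true figure)} \]
\[ 500\,000 \pm 2 \times 20\,000 = (460\,000,\ 540\,000) \quad \text{(about 19 chances in 20)} \]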

Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate to which it relates. The RSE is a useful measure in that it provides an immediate indication of the percentage errors likely to have occurred due to sampling, and thus avoids the need to refer also to the size of the estimate. More detail on the calculation of SEs and RSEs can be found in the Technical Note.
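
In symbols, and continuing the hypothetical example above:

\[ \mathrm{RSE}(x) = \frac{\mathrm{SE}(x)}{x} \times 100\% , \qquad \mathrm{RSE} = \frac{20\,000}{500\,000} \times 100\% = 4\% \]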

For estimates of population sizes, the size of the SE generally increases with the level of the estimate, so that the larger the estimate, the larger the SE. However, the larger the sample estimate, the smaller the SE in percentage terms (RSE). Thus, larger sample estimates will be relatively more reliable than smaller estimates. Very small estimates may be subject to such high relative standard errors as to detract seriously from their value for most reasonable purposes.

Only estimates with relative standard errors less than 25% are considered sufficiently reliable for most purposes. However, estimates with relative standard errors of 25% or more are included in ABS publications of results from this survey. Estimates with RSEs greater than 25% but less than or equal to 50% are annotated by an asterisk to indicate they are subject to high SEs and should be used with caution. Estimates with RSEs of greater than 50%, annotated by a double asterisk, are considered too unreliable for general use and should only be used in aggregation with other estimates to provide derived estimates with RSEs of 25% or less.
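
The annotation rules above can be summarised as a small decision function. The sketch below is illustrative only (the function and its output format are not part of any ABS system); it simply restates the published cut-offs of 25% and 50%.

```python
def annotate_estimate(estimate: float, rse: float) -> str:
    """Flag an estimate according to the RSE cut-offs used in ABS output.

    rse is the relative standard error expressed as a percentage.
    Illustrative only; not an ABS routine.
    """
    if rse < 25:
        return f"{estimate:,.0f}"        # considered sufficiently reliable for most purposes
    elif rse <= 50:
        return f"*{estimate:,.0f}"       # subject to high SEs; use with caution
    else:
        return f"**{estimate:,.0f}"      # too unreliable for general use

# Example with hypothetical figures
print(annotate_estimate(12_000, 38.2))   # -> *12,000
```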

Relative standard errors for estimates from NHS 2007-08 are published in 'direct' form. In previous NHSs, a statistical model was produced that related the size of estimates to their corresponding RSEs, and this information was displayed via a standard error table. For NHS 2007-08, RSEs for estimates were calculated for each separate estimate and published individually.

The Jackknife method of variance estimation was used, via a technique known as replicate weighting, in which a small group of households in the sample is assigned a zero weight and the remaining records are reweighted to the survey benchmark population. For the 2007-08 NHS, this process was repeated 60 times to produce 60 replicate weights. The variance of an estimate is then calculated by taking the difference between the estimate for each replicate group and the original estimate, squaring these differences, and summing them over all 60 replicate groups; the result is used to derive the standard error of the estimate. Unlike the previous modelled approach, direct calculation of RSEs can result in larger estimates having larger RSEs than smaller ones, since larger estimates may have more inherent variability.
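
The sketch below illustrates the replicate-weight variance calculation described above, using the sum of squared differences between each replicate estimate and the original estimate. It is a simplified illustration with hypothetical data; the (G - 1)/G scaling shown is the factor commonly used for the delete-a-group jackknife, and the exact formula applied to NHS estimates is given in the Technical Note.

```python
import numpy as np

def jackknife_se(full_estimate: float, replicate_estimates: np.ndarray) -> float:
    """Approximate SE from delete-a-group jackknife replicate estimates.

    Squares the difference between each replicate estimate and the original
    estimate and sums over all replicate groups, as described in the text.
    The (G - 1) / G scaling is the factor commonly used for this method;
    see the Technical Note for the formula actually applied to NHS estimates.
    """
    g = len(replicate_estimates)                      # 60 replicate groups for the 2007-08 NHS
    squared_diffs = (replicate_estimates - full_estimate) ** 2
    variance = (g - 1) / g * squared_diffs.sum()
    return float(np.sqrt(variance))

# Hypothetical example: an original estimate and 60 replicate estimates
rng = np.random.default_rng(0)
original = 100_000.0
replicates = original + rng.normal(0, 800, size=60)
se = jackknife_se(original, replicates)
rse = se / original * 100
print(f"SE ~ {se:,.0f}, RSE ~ {rse:.1f}%")
```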

More information about the replicate weights technique can be found in the Technical Note.


Standard errors of proportions, differences and sums

Proportions formed from the ratio of two estimates are subject to sampling error. The size of the error depends on the accuracy of both the estimates.

The difference between, or sum of, two survey estimates (of numbers or percentages) is itself an estimate and is therefore also subject to sampling error. The SE of the difference between, or sum of, two survey estimates depends on their SEs and the relationship between them.

The formulas to approximate the RSE for proportions and the SE of the difference between, or sum of, two estimates can be found in the Technical Note.
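
For reference, approximations of the kind typically quoted in ABS technical notes take the following form, where x is a subset of y for the proportion formula and the difference formula assumes the two estimates are uncorrelated (it is otherwise a conservative approximation). The Technical Note remains the authoritative source for the formulas used for this survey.

\[ \mathrm{RSE}\!\left(\frac{x}{y}\right) \approx \sqrt{\,[\mathrm{RSE}(x)]^2 - [\mathrm{RSE}(y)]^2\,} \]
\[ \mathrm{SE}(x - y) \approx \sqrt{\,[\mathrm{SE}(x)]^2 + [\mathrm{SE}(y)]^2\,} \]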


Testing for statistically significant differences

For comparing estimates between surveys or between populations within a survey it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics, or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the standard error of the difference between two estimates (x and y) and using that to calculate the test statistic using the formula below:

\[ \text{test statistic} = \frac{|x - y|}{\mathrm{SE}(x - y)} \]

If the value of the test statistic is greater than 1.96, then we may say that we are 95% certain that there is a statistically significant difference between the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence that there is a real difference between the populations.
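
As an illustration of the test described above (with hypothetical figures, not survey results), the following sketch computes the test statistic from two estimates and their SEs, using the approximation for the SE of a difference noted earlier.

```python
import math

def significant_difference(x: float, se_x: float, y: float, se_y: float,
                           critical_value: float = 1.96) -> bool:
    """Return True if the difference between estimates x and y is
    statistically significant at (approximately) the 95% level.

    Uses SE(x - y) ~ sqrt(SE(x)^2 + SE(y)^2), the approximation referred
    to above. Illustrative only.
    """
    se_diff = math.sqrt(se_x ** 2 + se_y ** 2)
    test_statistic = abs(x - y) / se_diff
    return test_statistic > critical_value

# Hypothetical example: comparing a 2004-05 estimate with a 2007-08 estimate
print(significant_difference(x=520_000, se_x=12_000, y=480_000, se_y=11_000))  # True
```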


Non-sampling error

Lack of precision due to sampling variability should not be confused with inaccuracies that may occur for other reasons, such as errors in response and recording. Inaccuracies of this type are referred to as non-sampling error. This type of error is not specific to sample surveys and can occur in a census enumeration. The major sources of non-sampling error are:
  • errors related to scope and coverage;
  • response errors due to incorrect interpretation or wording of questions;
  • interviewer bias;
  • bias due to non-response, because health status, health-related behaviour and other characteristics of non-responding persons may differ from responding persons; and
  • errors in processing such as mistakes in the recording or coding of the data obtained.

These sources of error are discussed below.


Errors related to scope and coverage

Some dwellings may have been inadvertently included or excluded because, for example, the distinctions between whether they were private or non-private dwellings may have been unclear. All efforts were made to overcome such situations by constant updating of lists both before and during the survey. Also, some persons may have been inadvertently included or excluded because of difficulties in applying the scope rules concerning the identification of usual residents, and the treatment of some overseas visitors.


Response errors

In this survey response errors may have arisen from three main sources:
  • deficiencies in questionnaire design and methodology;
  • deficiencies in interviewing technique; and
  • inaccurate reporting by the respondent.

Errors may be caused by misleading or ambiguous questions, inadequate or inconsistent definitions of terminology used, or poor overall survey design (for example, context effects where responses to a question are directly influenced by the preceding questions). In order to overcome problems of this kind, individual questions and the questionnaire overall were thoroughly tested before being finalised for use in the survey. Testing took two forms:
  • cognitive interviewing and focus group testing of concepts, terminology, questions and measurement/reporting issues relating to the following topics: alcohol consumption, smoking, arthritis, osteoporosis, exercise, injuries, and dietary behaviours; and
  • field testing, which involved a pilot test and dress rehearsal conducted in Victoria, South Australia and Western Australia respectively, each covering 250-300 households.

As a result of both forms of testing, modifications were made to question design, wording, ordering and associated prompt cards, and some changes were made to survey procedures. In considering modifications, it was sometimes necessary to balance better response to a particular item/topic against increased interview time or effects on other parts of the survey; for example, questions for collecting data on usual intake of fruit and vegetables referred to consumption in the form of 'serves', which required the use of a prompt card to define a serve, and a fair amount of recall and calculation on the part of the respondent.

Although every effort was made to minimise response errors due to questionnaire design and content issues, some will inevitably have occurred in the final survey enumeration.

As the survey is quite lengthy, reporting errors may also have resulted from interviewer and/or respondent fatigue (i.e. loss of concentration), particularly for those respondents reporting for both themselves and a child. Inaccurate reporting may also occur if respondents provide deliberately incorrect responses. While efforts were made to minimise errors arising from fatigue, or from deliberate misreporting or non-reporting by respondents, through emphasising the importance of the data and checks on consistency within the survey instrument, some instances will have inevitably occurred.

Reference periods used in relation to each topic were selected to suit the nature of the information being sought; in particular to strike the right balance between minimising recall errors and ensuring the period was meaningful and representative (from both respondent and data use perspectives), and would yield sufficient observations in the survey to support reliable estimates. It is possible that the reference periods did not suit every person for every topic and that difficulty with recall may have led to inaccurate reporting in some instances.

Lack of uniformity in interviewing standards may also result in non-sampling errors. Training and retraining programs, regular supervision and checking of interviewers’ work were methods employed to achieve and maintain uniform interviewing practices and a high level of accuracy in recording answers on the survey questionnaire (see Data Collection: Interviews, in Chapter 2: Survey Design and Operation). The operation of the Computer Assisted Instrument (CAI) itself, and the built-in checks within it, ensure that data recording standards are maintained. Respondent perception of the personal characteristics of the interviewer can also be a source of error, as the age, sex, appearance or manner of the interviewer may influence the answers obtained.


Non-response bias

Non-response may occur when people cannot or will not cooperate in the survey, or cannot be contacted by interviewers. Non-response can introduce a bias to the results obtained insofar as non-respondents may have different characteristics and behaviour patterns in relation to their health to those persons who did respond. The magnitude of the bias depends on the extent of the differences and the level of non-response.

The 2007-08 NHS achieved an overall response rate of 91% (fully responding households, after sample loss). Data to accurately quantify the nature and extent of the differences in health characteristics between respondents in the survey and non-respondents are not available. Under- or over-representation of particular demographic groups in the sample is compensated for at the State, section of State, sex and age group levels in the weighting process. Other disparities are not adjusted for.

Households with incomplete interviews were treated as fully responding for estimation purposes where the only questions not answered were those with legitimate 'don't know' or refusal options, any or all questions on income, or where weight and height (measured and self-reported) were not obtained. These non-response items were coded to 'not stated'. If any other questions were not answered in any interview, the household was treated as non-responding; i.e. as if no responses to the questionnaire(s) had been obtained.
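
The rule described above can be summarised as follows. The sketch is a plain restatement of that rule using hypothetical field names; it is not ABS processing code.

```python
# Hypothetical record structure: for each unanswered question we record why it
# was not answered. Category names are illustrative only.
ITEM_NONRESPONSE_OK = {
    "legitimate_dont_know",   # 'don't know' offered as a valid response option
    "legitimate_refusal",     # refusal offered as a valid response option
    "income_item",            # any or all income questions
    "height_weight_item",     # measured or self-reported height/weight not obtained
}

def household_response_status(unanswered_reasons: list) -> str:
    """Classify a household as 'fully responding' or 'non-responding'.

    A household with unanswered questions is still treated as fully
    responding (with those items coded to 'not stated') only if every
    unanswered question falls into one of the allowed categories above.
    """
    if all(reason in ITEM_NONRESPONSE_OK for reason in unanswered_reasons):
        return "fully responding"     # unanswered items coded to 'not stated'
    return "non-responding"           # treated as if no responses were obtained

print(household_response_status(["income_item", "legitimate_dont_know"]))  # fully responding
print(household_response_status(["other_unanswered_question"]))            # non-responding
```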


Processing errors

Processing errors may occur at any stage between the initial collection of the data and the final compilation of statistics. These may be due to a failure of computer editing programs to detect errors in the data, or may occur during the manipulation of raw data to produce the final survey data files; for example, in the course of deriving new data items from raw survey data, or during the estimation procedures or weighting of the data file.

To minimise the likelihood of these errors occurring, a number of quality assurance processes were employed, including:
  • comprehensive quality assurance procedures, applied to the coding of conditions, medications and alcohol data. An automated coding system was used, complemented by manual coding of medications unable to be recognised by the automated coding system;
  • computer editing. Edits were devised to ensure that logical sequences were followed in the questionnaires, that necessary items were present, and that specific values lay within certain ranges. These edits were designed to detect reporting and recording errors, incorrect relationships between data items, and missing data items;
  • data file checks. At various stages during processing (such as after computer editing and subsequent amendments, weighting of the file, and derivation of new data items), frequency counts and/or tabulations were obtained from the data file showing the distribution of persons for different characteristics. These were used as checks on the contents of the data file, to identify unusual values which might have significantly affected estimates, and illogical relationships not previously identified by edits. Further checks were conducted to ensure consistency between related data items, and between relevant populations; and
  • comparison of data. Where possible, checks of the data were undertaken to ensure consistency of the survey outputs against results of previous NHSs and data available from other sources.


Other factors affecting estimates

In addition to data quality issues, there are a number of both general and topic-specific factors which should be considered in interpreting the results of this survey. The general factors affect all estimates obtained, but may affect topics to a greater or lesser degree depending on the nature of the topic and the uses to which the estimates are put. This section outlines these general factors. Additional issues relating to the interpretation of individual topics are discussed in the topic descriptions provided in other sections of this User's Guide.

Scope

The scope of the survey defines the boundaries of the population to which the estimates relate. The most important aspect of the survey scope affecting the interpretation of estimates from this survey is that institutionalised persons (including inpatients of hospitals, nursing homes and other health institutions) and other persons resident in non-private dwellings (e.g. hotels, motels, boarding houses) were excluded from the survey.

Personal interview and self-assessment nature of the survey

The survey was designed using personal or proxy interviews to obtain data on respondents’ own perceptions of their state of health, their use of health services and aspects of their lifestyle. The information obtained is therefore not necessarily based on any professional opinion (e.g. from a doctor, nurse, dentist, etc.) or on information available from records kept by respondents. For this reason, data from this survey are not necessarily compatible with data from other sources or collected by other methods.

Concepts and definitions

The scope of each topic and the concepts and definitions associated with individual pieces of information should be considered when interpreting survey results. This information is available for individual topics in Chapters 3, 4 and 5 of this User's Guide.

Wording of questions

To enable accurate interpretation of survey results it is essential to bear in mind the precise wording of questions used to collect individual items of data, particularly in those cases where the question involved ‘running prompts’ (where the interviewer reads from a list until the respondent makes a choice), or where a prompt card was used.

Testing has shown that reporting of medical conditions is improved where direct questions are asked about a specific condition, or where conditions are specifically identified in a prompt card; and that data is less robust where it is up to the respondent to identify conditions in response to a general question. It is not possible or practical to mention all conditions in questions or prompts, therefore the approach taken in the survey was to identify National Health Priority Area (NHPA) conditions, and some other conditions of particular interest or known from previous surveys to require special attention. As some conditions are specifically identified in the questionnaire and others are not, response levels and accuracy of condition reporting may be affected. Where the level and nature of condition identification has changed between surveys, comparability over time may be affected.

Reference periods

All results should be considered within the context of the time references that apply to the various topics. Different reference periods were used for specific topics (e.g. 'in the last week' for alcohol consumption, 'in the last week and last two weeks' for exercise, 'ever' and 'in the last 12 months' for actions taken).

Although it can be expected that a larger section of the population would have reported taking a certain action if a longer reference period had been used, the increase is not proportionate to the increase in time. This should be taken into consideration when comparing results from this survey to data from other sources where the data relates to different reference periods.

Classifications and categories

The classifications and categories used in the survey provide an indication of the level of detail available in survey output. However, the ability of respondents to provide the data may limit the amount of detail that can be output. Where respondents may have used non-medical terminology, symptoms rather than conditions, or generic rather than specific terminology, conditions may only be able to be output in general terms (e.g. 'heart condition nfd' rather than 'Angina' or 'Atrial fibrillation'). Categories used in the survey are available in the data item list which can be downloaded from the National Health Survey: Users' Guide, 2007-08 (cat. no. 4363.0.55.001) and the National Health Survey: Data Reference Package (cat. no. 4363.0.55.002). Classifications used in the survey can be found in Appendix 4: ABS Standard Classifications.

Collection period

The NHS 2007-08 was enumerated from August 2007 to June 2008. When considering survey results over time or comparing them with data from another source, care must be taken to ensure that any differences between the collection periods take into consideration the possible effect of those differences on the data, for example, seasonal differences and effects of holidays.


INTERPRETATION OF RESULTS

As noted above, a range of factors have impacted on the quality of the data collected. The ABS has sought to minimise the effects of these factors through various means in the development and conduct of this survey, however, only sampling error can be quantified to allow users of the data to adjust for possible errors when using/interpreting the data. Information is not available from the survey to enable the effects of other issues affecting the data to be quantified. The relative importance of these factors will differ between topics, between items within topics, and by characteristics of respondents.

Comments have been included in individual topic descriptions in this publication to alert users of the data to the more significant issues likely to affect results for that topic, or items within it. These notes reflect ABS experience of past health and other surveys, feedback from users of data from those surveys, and ABS and other research on survey methods and response patterns, as well as information from survey testing and validation. However, they are indicative only, and do not necessarily reflect all factors impacting results, nor the relative importance of those factors.

Against this background, the following general comments are provided about interpreting data from the survey:
  • The survey aims to provide statistics which represent the population or component groups of the population. It does not aim to provide data for analysis at the individual level. While errors of the types noted above may occur in individual respondent records, they will have little impact on survey estimates unless they are repeated commonly throughout the respondent population.
  • The survey data are mainly self-reported and may differ from data sources that have different collection methodologies (e.g. administrative data), however, the NHS is able to provide dimensions of the data (e.g. population group, related health characteristics, uses of other health services) and cross-classifications (e.g. self-assessed health by alcohol risk level) which are not available from administrative sources.
  • Some survey topics, such as alcohol consumption, have some known data quality issues. While this means the data should be interpreted with care, the information is still considered valuable for certain uses. For example, while the overall levels of alcohol consumption described by the survey should be interpreted with caution, the data is still considered useful in describing consumption patterns across days of the week, types of drink consumed, relative levels of consumption across population groups, and alcohol consumption in relation to other risk behaviours or characteristics. It is also useful for monitoring changes in the levels and patterns of consumption over time. Notes regarding any known data quality issues are contained in the individual topic descriptions in this User's Guide.
  • Although various reference periods are used throughout the survey for different topics (e.g. current, usual, last week, last 2 weeks, last 4 weeks), the survey essentially provides a 'point in time' picture of the health of the population and of population sub-groups. That is, the survey provides information about the prevalence of characteristics, not the incidence of those characteristics or of changes in characteristics (except in terms of differences between surveys). As the survey was conducted over a 10-month period, the results are essentially an average over that period, that is, representative of a typical week, fortnight, etc. in that period.


Comparability with 2004-05

Understanding the comparability of data from the 2007-08 NHS with data from previous NHSs is important to the use of those data and interpretation of apparent changes in health characteristics over time. While the 2007-08 NHS is deliberately the same or similar in many ways to the 2004-05 and 2001 NHS (and in part to the 1995 NHS), there are important differences in sample design and coverage, survey methodology and content, definitions and classifications between the surveys. These differences will affect the degree to which data are directly comparable between the surveys, and hence the interpretation of apparent changes in health characteristics over the 2004-05 to 2007-08 period.

Throughout the topic descriptions and in other parts of this publication, comments have been made about the changes between surveys and their expected impact on the comparability of data. These are general comments based on results of testing, ABS experience in survey development, and preliminary examination of data from the 2007-08 survey. They should not, therefore, be regarded as definitive statements on comparability, and they may omit the types of findings which might result from a detailed analysis of the effects of all changes made.

The following table summarises key differences in the general survey characteristics of the 2004-05 and 2007-08 surveys:

Survey characteristics

Collection method
      2007-08 NHS: Personal interview with adult respondents; proxy interview for children aged less than 15 years; personal interview with children aged 15-17 years unless parents ask to act as proxy.
      2004-05 NHS: Personal interview with adult respondents; proxy interview for children aged less than 15 years; personal interview with children aged 15-17 years with parental consent, otherwise interview by proxy.
Questionnaires
      2007-08 NHS: Single CAI instrument, incorporating household, adult and child components.
      2004-05 NHS: Single CAI instrument, incorporating household, adult and child components.
Sample coverage
      2007-08 NHS: Private dwellings only; urban and rural areas; all States and Territories(a).
      2004-05 NHS: Private dwellings only; urban and rural areas; all States and Territories, with additional sample in SA, Tas and ACT(a).
Sample design/size
      2007-08 NHS: One child aged 0-17 years and one adult per dwelling; fully responding households = 15792; final sample = 5009 children, 15779 adults, 20788 persons.
      2004-05 NHS: One child aged 0-17 years and one adult per dwelling; fully responding households = 19501; final sample = 6405 children, 19501 adults, 25906 persons.
Enumeration period
      2007-08 NHS: August 2007 to June 2008.
      2004-05 NHS: August 2004 to June 2005.
Collection methodology
      2007-08 NHS: CAI questionnaire; automated coding, supported by manual and CAC systems.
      2004-05 NHS: CAI questionnaire; automated coding, supported by manual and CAC systems.
Main output units
      2007-08 NHS: Person.
      2004-05 NHS: Person.

(a) The NT sample was reduced such that it contributes appropriately to national estimates, but is not large enough to support most estimates for the NT



Sample design/size

The overall sample of households was about 19% lower in 2007-08 than in 2004-05, with a similar proportion of people in each household enumerated (1.3 persons per household). As a result, the total sample of persons in the 2007-08 survey was below that of 2004-05. The impact of the lower sample size on the RSEs of the NHS 2007-08 estimates, however, was significantly offset by design changes in 2007-08. Taking advantage of synergies in the reference period and sample sizes of the 2007-08 NHS and the Survey of Income and Housing 2007-08, the field operations of these two surveys were combined. This allowed the NHS 2007-08 sample to be collected from approximately twice as many geographic locations as in previous cycles, which significantly lowered the effect of cluster sampling on the RSEs of the estimates. The RSEs for the 2004-05 and 2007-08 NHS estimates are therefore reasonably comparable, despite the difference in sample sizes.

Differences in the reliability of estimates between surveys should be considered in interpreting apparent changes between the surveys. It is recommended that apparent changes are significance tested to ensure that changes are not simply the product of different sample size and design (see 'Testing for statistically significant differences', in the Data Quality section of this chapter).

Through the weighting process, survey estimates at the state by part of state by sex by broad age group level will be the same as, or very similar to, the benchmark populations. However, because the characteristics of the sample are not identical to those of the benchmark population, some records will receive higher or lower weights than others. As this will vary between surveys, it is a factor to consider in comparing 2004-05 with 2007-08 data, but the impact on comparability is expected to be small. Sample and population figures for the two surveys appear in the following table (a simplified illustration of the benchmarking calculation is sketched after the table):

                              2007-08 NHS(a)                 2004-05 NHS(b)
Sex/age (years)         % of adults    % of adults     % of adults    % of adults
                         in sample      in pop'n        in sample      in pop'n

Males                       47.1           49.3            45.6           49.2
Females                     52.9           50.7            54.4           50.8
18-24                        8.6           12.7             9.3           12.7
25-34                       16.9           18.3            17.3           18.8
35-44                       20.4           19.3            20.6           19.8
45-54                       18.0           18.3            18.0           18.3
55-64                       15.9           14.9            15.1           14.2
65-74                       10.8            9.2            10.6            9.0
75 and over                  9.3            7.3             9.2            7.3
Persons                    100.0          100.0           100.0          100.0

(a) Benchmark population as at 31 December 2007
(b) Benchmark population as at 31 December 2004
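
The following sketch illustrates, in simplified form, how weighting to benchmark populations makes weighted sample distributions match the population at the benchmark level while giving different records different weights. It shows simple post-stratification only, with hypothetical cells and figures; the actual NHS weighting uses calibration to several benchmark categories simultaneously, as described in Chapter 2: Survey Design and Operation.

```python
from collections import defaultdict

# Hypothetical respondents: (benchmark cell, initial selection weight).
# In practice the cells are state by part of state by sex by broad age group.
sample = [
    ("NSW/male/18-24", 120.0), ("NSW/male/18-24", 95.0),
    ("NSW/female/18-24", 110.0),
]

# Hypothetical benchmark populations for the same cells
benchmarks = {"NSW/male/18-24": 30_000, "NSW/female/18-24": 29_000}

# Sum of initial weights in each cell
weight_sums = defaultdict(float)
for cell, weight in sample:
    weight_sums[cell] += weight

# Post-stratified weight: scale each record so that weighted cell totals
# equal the benchmark population for that cell
final_weights = [
    (cell, weight * benchmarks[cell] / weight_sums[cell]) for cell, weight in sample
]
for cell, w in final_weights:
    print(cell, round(w, 1))
```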



Partial enumeration of households

Prior to 2001, all persons in sampled dwellings were included in NHSs, and only records from fully responding households were retained on the data file. This meant that results could be compiled at the household, family and income unit level, in addition to the person level. The 2001, 2004-05 and 2007-08 surveys, however, sub-sampled persons within households (one adult and one child aged 0-17 years); therefore complete enumeration occurs only in a minority of households and, by definition, only in single-adult households.

As basic demographic characteristics, and some other items (e.g. number of daily smokers in the household), were collected from the selected adult about all household members, some household, family and income unit characteristics of the respondent are available as person level items. This information enables a greater degree of analysis in relation to the person and the household; for example, the number of children aged 0-14 years in households containing smokers. This level of information is only available for persons enumerated in the survey, not all people in the household. No data at the household, family or income unit level is available.


Enumeration period

The 2007-08 and 2004-05 NHSs were enumerated over a ten and eleven month period respectively: 2007-08 from August to June, and 2004-05 from August to July; therefore some winter weeks were not enumerated. Research using data from previous NHSs shows minimal seasonal effects on the data for most survey topics; however, statistically significant seasonal differences were found for some items in the alcohol consumption, visits to other health professionals and exercise topics. The effect of the missed enumeration months, compared with a full 12 month period, has been included in the uncertainty of the estimates. As discussed in Chapter 2: Survey Design and Operation, the sample weights for each sub-period of a calendar quarter include an adjustment to ensure that each sub-period represents the population for the full quarter. With enumeration covering only two winter months, the final weights therefore reflect the higher uncertainty in estimates for this period compared with other quarters, and the RSEs of the yearly estimates, which are the sum of the quarterly estimates, also incorporate this higher uncertainty.


Comparability of long term conditions data with 2004-05

There are a number of issues affecting comparability of data for long-term medical conditions between the 2004-05 and 2007-08 surveys, including:
  • the methodology used in the questionnaire to elicit responses;
  • the ways in which responses were recorded; and
  • the ways in which they were turned into coded information for the survey data file.

These issues are discussed in general terms below. Further discussion of issues related to particular conditions is contained in the relevant condition sections of this Users' Guide, and a general discussion of the methodologies used for collecting data on medical conditions is included at the beginning of Chapter 3: Health status indicators.

Methodological issues

The question methodology for long-term conditions in 2007-08 was similar to that of 2004-05: a combination of direct questions, general questions supported by prompt cards (either showing examples of conditions, or a list of conditions from which respondents are asked to select), and open ended questions. Differences between the two surveys are outlined below:
  • In the 2007-08 survey, the derivation of current asthma changed so that the respondent must have had symptoms of asthma or taken treatment for asthma in the last 12 months, rather than as assessed by the respondent. This has decreased the prevalence of asthma over all age groups.
  • Although heart failure, heart attack, stroke, angina and rheumatic heart disease may have been reported as not current by the respondent, they are treated as being a current condition. This has increased the prevalence of these conditions.
  • Depression was added to the prompt card for other long term conditions. This is likely to have increased the prevalence of depression.
  • The population answering questions about osteoporosis/osteopenia is restricted to those aged 15 years and over, plus any children aged less than 15 years who currently had arthritis, gout or rheumatism. This has a negligible effect on the data.
  • Actions taken for arthritis and osteoporosis/osteopenia were asked as one question in 2007-08, whereas they were asked separately in 2004-05. Where respondents have both arthritis and osteoporosis, the condition for which a particular action was taken cannot be identified in 2007-08.

A further factor which may affect comparability is that the reported prevalence of illness is complex and dynamic, being a function of respondent knowledge and attitudes, which in turn may be affected by the availability of health services and health information, public education and awareness, accessibility of self-help, etc. For example, a public education program aimed at raising public awareness and acceptance of mental health disorders has been running in Australia for a number of years. Consequently, respondents may be more willing to talk about or report feelings of anxiety or depression than in previous years.

Recording of condition details

Provision made to record conditions information (a combination of selected categories and text responses) was similar in the 2004-05 and 2007-08 surveys. Interviewers were encouraged in both surveys to add supplementary information if they felt this would help clarify responses or assist with coding.

Coding conditions data

The classification of medical conditions and the supporting coding index introduced in the 2004-05 NHS was used largely unchanged in 2007-08. For both surveys, conditions (and medications and alcohol) data were coded progressively throughout the enumeration period by a group of coders employed and trained specifically to undertake this coding. This coding was subject to rigorous quality assurance procedures.

In 2004-05, survey coding was initially performed manually, supported by a computer assisted coding (CAC) system, and coded values were included on computer survey records via a separate data entry process. From February 2005 an automated coding system was introduced, which automatically assigned a code, and added it to the computer record from the survey. Cases which could not be coded by the auto-coding system were manually coded using the CAC system. The auto-coding system, which coded on the basis of an 'exact match' with the coding index, was successful in coding 29% of coding instances in the period of its operation.

In the 2007-08 survey the automated coding system was in place and used throughout the entire survey process. Auto-coding was effective in 20% of coding instances, and the remaining coding was done manually using the CAC system. Specific coders were employed for quality and consistency.
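
The division of work between the automated coder and the CAC system can be pictured as below. This is a schematic sketch only (the index entries, text normalisation and code values are hypothetical); it simply shows the 'exact match against a coding index, otherwise refer to manual/CAC coding' flow described above.

```python
from typing import Optional

# Hypothetical coding index mapping normalised response text to condition codes
CODING_INDEX = {
    "asthma": "J45",
    "high blood pressure": "I10",
    "hypertension": "I10",
}

def auto_code(response_text: str) -> Optional[str]:
    """Return a code on an exact match with the coding index, else None.

    Responses that cannot be auto-coded are referred to manual coding
    using the computer assisted coding (CAC) system.
    """
    key = response_text.strip().lower()
    return CODING_INDEX.get(key)

responses = ["Asthma", "a bit of a dicky heart"]
for text in responses:
    code = auto_code(text)
    if code is not None:
        print(f"{text!r} auto-coded to {code}")
    else:
        print(f"{text!r} referred to CAC/manual coding")
```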

The coding processes and systems were designed to ensure the codes assigned were as specific and accurate as possible. Thorough testing of the auto-coding system prior to its introduction ensured it met or surpassed manual coding quality levels. Although auto-coding could be expected to ensure greater consistency in the coding process, the nature of the coding processes used is considered to have minimal impact on the comparability of data between the surveys.


Survey content

The following tables summarise the main differences in content between the 2007-08 and 2004-05 surveys:

Survey content - Population characteristics

General demographics (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Sex; age; marital status (registered and social); Indigenous status; country of birth; year of arrival in Australia; main language spoken at home; proficiency in English; family type; household size, composition and type; income unit type; location.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05.
Education (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Whether attending school; highest year of school completed; whether has a non-school qualification; level of highest educational attainment; level of highest non-school qualification; whether currently studying full or part time; field of study of qualification obtained.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05.
Labour force (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Labour force status; status in employment; occupation, industry and industry sector of main job; hours worked; duration of unemployment; shift work.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05.
Income (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Personal income: level (including equivalised), sources and main source, type of pension/benefit received; household income: level (including equivalised).
      Differences from 2004-05: Similar content in 2007-08 to 2004-05, but income unit level items were not available in 2007-08. Household income was collected using a different methodology; personal income was collected for persons aged 18 years and over in 2007-08, compared with 15 years and over in 2004-05.
Housing (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Dwelling type; number of bedrooms; household tenure; landlord type.
      Differences from 2004-05: Household content in 2007-08 same as in 2004-05; personal tenure not collected in 2007-08.
Private health insurance/health cards (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Whether has private health insurance (PHI); type of cover; time covered by PHI; reasons for having/not having PHI; duration with PHI; whether has a DVA or other government concession card; type of card.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05.


Survey content - Health status indicators

Arthritis and Osteoporosis (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Type of arthritis; age first told had arthritis; types of specific vitamins/minerals or herbal remedies taken for arthritis and osteoporosis; types of other specific actions taken for arthritis and osteoporosis; types of medications used in last 2 weeks; visits to health professionals; whether bone density checked.
      Differences from 2004-05: Similar content in 2007-08 to 2004-05. Osteoporosis population changed to persons aged 15 years and over, plus children aged less than 15 years with current arthritis, gout or rheumatism. New items: whether bone density checked; visits to health professionals for all persons. Actions taken for arthritis and osteoporosis not separately available for 2007-08.
Asthma (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Whether asthma worse or out of control; whether attended emergency; whether has written asthma action plan; types of medication used in last 2 weeks; visits to health professionals; whether discussed self-management.
      Differences from 2004-05: Similar content in 2007-08 to 2004-05. New items: whether worse or out of control; whether attended emergency; visits to health professionals for all persons; whether discussed self-management. New derivation for current long-term asthma.
Cancer (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Cancer status; type of cancer; visits to health professionals.
      Differences from 2004-05: Similar content in 2007-08 to 2004-05. Possible to identify priority cancer types.
Cause of reported long-term conditions (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Whether condition was the result of an injury; where injury occurred; age injury occurred.
      Differences from 2004-05: Similar content in 2007-08 to 2004-05. Some questions not asked in 2007-08; 'where injury occurred' questions combined into one question for 2007-08.
Circulatory conditions (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Types of condition; blood pressure taken and by whom; cholesterol/blood pressure checks in last 12 months and 5 years; whether aspirin taken; whether advised by doctor to take aspirin; types of medication used in last 2 weeks; visits to GPs, specialists and other health professionals; whether discussed self-management.
      Differences from 2004-05: Similar content in 2007-08 to 2004-05. New items: blood pressure taken in last 12 months and by whom; cholesterol checks in last 12 months and 5 years; whether aspirin taken; whether advised by GP to take aspirin; visits to health professionals for all persons; whether discussed self-management. Derivations for heart attack, heart failure, stroke, rheumatic heart disease and angina changed so that these conditions are always treated as current and long-term in 2007-08.
Diabetes/high sugar levels (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Type of diabetes; types of medications used in last 2 weeks; types of other actions taken to manage condition; whether screened for diabetes in the last 3 years; age first told; whether condition interferes with usual activity; whether has diabetes-related sight problems; period since last visited optometrist/eye specialist; whether discussed self-management.
      Differences from 2004-05: Similar content in 2007-08 to 2004-05. New items: whether screened for diabetes in last 3 years; visits to health professionals for all persons; whether discussed self-management.
Disability (collected in 2007-08 only)
      Main 2007-08 items: Disability status; type of disability; main type of disability.
      Differences from 2004-05: New module in 2007-08.
Other long-term conditions (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Types of condition.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05.
Mental health condition (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Type of condition; health consultations, and frequency of consultations, with GP, psychiatrist and other health professionals (OHP); age first told; brand of medication taken in last 2 weeks; type of medication taken in last 2 weeks; how long medication taken; how often medication taken.
      Differences from 2004-05: Similar content in 2007-08 to 2004-05. New items: health consultations and frequency for GP, psychiatrist, OHP; age first told; brand and type of medications taken in last 2 weeks; how long and how often medication taken. Some new items included on the prompt card, which may change prevalence.
Mental wellbeing (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Psychological distress (K10); reported types of medication used in last 2 weeks; frequency and duration of use.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05.
Bodily pain (collected in 2007-08 only)
      Main 2007-08 items: Bodily pain in last 4 weeks; whether pain interfered with work.
      Differences from 2004-05: New module in 2007-08.
Self-assessed health (collected in 2004-05 and 2007-08)
      Main 2007-08 items: SF1 - rate health.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05.
Status of condition (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Status for each condition reported.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05.
Health transitions (collected in 2004-05 only)
      Not collected in 2007-08.
Recent injuries (collected in 2004-05 only)
      Not collected in 2007-08.
Quality of life (collected in 2004-05 only)
      Not collected in 2007-08.


Survey content - Health-related actions

Doctor consultations (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Collected within each NHPA module, regarding the specific NHPA condition: how often consulted GP/specialist/OHP. For all persons: frequency of check-ups; whether seen specialist in last 12 months; whether discussed lifestyle issues with GP in last 12 months.
      Differences from 2004-05: New collection method in 2007-08; GP/specialist visits collected within each NHPA module regarding the specific NHPA condition. New questions and timeframes for all other consultation questions. Not comparable to 2004-05.
Consultations with other health professionals (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Collected within each NHPA module, regarding the specific NHPA condition: whether discussed the specific condition with an OHP in last 12 months. For all persons: whether consulted OHP in last 12 months; whether discussed lifestyle issues with OHP in last 12 months; type of OHP.
      Differences from 2004-05: New collection method in 2007-08; OHP visits collected within each NHPA module regarding the specific NHPA condition. New questions and timeframes for all other consultation questions. Not comparable with 2004-05.
Days away from work/study/school: own illness (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Collected within each NHPA module, regarding the specific NHPA condition: whether had days away in last 12 months due to own specific NHPA condition; number of days.
      Differences from 2004-05: New collection method in 2007-08. Not comparable to 2004-05.
Healthy lifestyle (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Whether has check-ups with GP; frequency of check-ups; whether discussed healthy lifestyle issues with GP in the last 12 months; whether discussed healthy lifestyle issues with OHP (including specialist) in last 12 months; which OHP discussed healthy lifestyle issues with.
      Differences from 2004-05: New collection method in 2007-08; collected as a stand-alone module, mostly not comparable to 2004-05.
Use of medications (including vitamins, minerals, natural and herbal medicines) (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Collected within each NHPA module, regarding the specific NHPA condition.
      Differences from 2004-05: Collection method the same as in 2004-05. In addition to medication taken for mental wellbeing, medication for mental health was also collected in 2007-08. Whether aspirin was used daily for a heart or circulatory condition, and whether that was done on the advice of a doctor, was also collected in 2007-08.
Stays in hospital or emergency department (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Number of times asthma got worse and went to hospital in last 12 months.
      Differences from 2004-05: Only item collected in 2007-08. Not comparable to 2004-05.
Days away from work/school as carer (collected in 2004-05 only)
      Not collected in 2007-08.
Dental consultations (collected in 2004-05 only)
      Not collected in 2007-08.
Other days of reduced activity (collected in 2004-05 only)
      Not collected in 2007-08.
Visits to casualty, outpatients, day clinics (collected in 2004-05 only)
      Not collected in 2007-08.


Survey content - Health risk behaviours

Alcohol consumption (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Period since last drank; days consumed alcohol in last week; quantity of alcohol, by type of drink, consumed in last week (maximum 3 days); alcohol risk level; graduated frequency; how consumption has changed since this time last year.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05, with the additional population of persons aged 15-17 years. New item: how consumption has changed since this time last year.
Body mass (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Self-reported height, weight and body mass; body mass index; measured height, weight, waist and hip measurements; waist/hip ratio.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05, plus measured height, weight and hip and waist measurements for persons aged 5 years and over.
Dietary habits (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Type of milk usually consumed; usual daily intake of vegetables and fruit; fat content of milk.
      Differences from 2004-05: Similar content in 2007-08 to 2004-05, with a new item on the fat content of milk, but food security items not collected. Additional population of persons aged 5 years and over.
Exercise (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Type, frequency and duration of exercise in last 2 weeks and in the last week; exercise level; whether walked for transport; number of times walked and total duration; time spent sitting at work and at home.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05, with new items on exercise in the past week and time spent sitting at work and at home.
Smoking (collected in 2004-05 and 2007-08)
      Main 2007-08 items: Smoker status; number of smokers in household; age started/stopped smoking regularly.
      Differences from 2004-05: Same content in 2007-08 as in 2004-05, with the additional population of persons aged 15-17 years.
Sun protection (collected in 2004-05 only)
      Not collected in 2007-08.
Adult immunisation (collected in 2004-05 only)
      Not collected in 2007-08.
Breastfeeding (collected in 2004-05 only)
      Not collected in 2007-08.
Children's immunisation (collected in 2004-05 only)
      Not collected in 2007-08.
Contraception/protection (collected in 2004-05 only)
      Not collected in 2007-08.