4327.0 - National Survey of Mental Health and Wellbeing: Users' Guide, 2007  
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 11/02/2009   

10. INTERPRETATION OF RESULTS


OVERVIEW

Care has been taken to ensure that the results of this survey are as accurate as possible. All interviews were conducted by trained ABS officers. Extensive reference material was developed for use in the field enumeration and intensive training was provided to interviewers. However, there remain other factors which may have affected the reliability of results, and for which no specific adjustments can be made. The following factors should be considered when interpreting the estimates for the 2007 National Survey of Mental Health and Wellbeing (SMHWB):

  • information recorded in this survey is 'as reported' by respondents, and therefore may differ from information available from other sources or collected using different methodologies. Responses may be affected by imperfect recall or individual interpretation of survey questions.
  • some respondents may have provided responses that they felt were expected, rather than those that accurately reflected their own situation. Every effort has been made to minimise such bias through the development and use of culturally appropriate survey methodology.

For a number of survey data items, some respondents were unwilling or unable to provide the required information. Where responses for a particular data item were missing for a person or household, they were recorded in a 'not known' or 'not stated' category for that data item. These 'not known' and 'not stated' categories are not shown in the publication tables, but have been included in the totals.

This chapter explores the reliability of the survey estimates; sampling and non-sampling error; and comparisons of this survey to other data sources.


RELIABILITY OF ESTIMATES

Response rate

The response rate for the 2007 National Survey of Mental Health and Wellbeing (SMHWB) was 60% nationally. However, the survey sample was designed based on an assumed response rate of 75%. As this rate was not achieved, a smaller sample was available for estimation. More information on 'Sample design' is provided in Chapter 2. The following table shows the response rates for each state and territory based on the achieved sample. It should be noted that 117 records were deleted during processing due to serious inconsistencies in the data which could not be corrected; for example, where a person other than the selected person undertook the survey interview and there was an interviewer note to this effect. The exclusion of these records gives an achieved sample of 8,841 fully/adequately responding households.

15. SURVEY RESPONSE, by state or territory

                          NSW    Vic    Qld    SA     WA   Tas   NT   ACT  Australia
Expected sample (no.)   3 679  2 785  2 102   845  1 072   265   76   176     11 000
Achieved sample (no.)   2 936  2 003  1 800   683    927   260   62   170      8 841
Response rate (%)          59     55     64    60     63    77   55    71         60



Sample survey errors

Two types of error are possible in estimates based on a sample survey: sampling error and non-sampling error.

Sampling error occurs because only a small proportion of the total population is used to produce estimates that represent the whole population. Sampling error can be reliably measured, as it is calculated based on the scientific methods used to design surveys.

Non-sampling error may occur in any data collection, whether it is based on a sample or a full count (eg Census), and may occur at any stage throughout the survey process. Examples of non-sampling error include:
  • persons selected for the survey may not respond (non-response);
  • survey questions may not be clearly understood;
  • responses may be incorrectly recorded by interviewers; or
  • errors in coding or processing survey data.

Sampling and non-sampling errors should be considered when interpreting results of the survey. Sampling errors are considered to occur randomly, whereas non-sampling errors may occur randomly and/or systematically.


SAMPLING ERROR

Sampling error is the expected difference that could occur between the published estimates, derived from repeated random samples of persons, and the value that would have been produced if all persons in scope of the survey had been included.

The magnitude of the sampling error associated with an estimate depends on the following factors:
  • Sample design - there are many different methods that could have been used to obtain a sample, in order to collect data. The final design attempted to make survey results as accurate as possible within cost and operational constraints.
  • Sample size - the larger the sample on which the estimate is based, the smaller the associated sampling error.
  • Population variability - the extent to which people differ on the particular characteristic being measured. The smaller the population variability of a particular characteristic, the more likely it is that the population will be well represented by the sample, and therefore the smaller the sampling error. Conversely, the more variable the characteristic, the greater the sampling error.

For more information on 'Sample design' see Chapter 2.


Measures of sampling error

A measure of the sampling error for a given estimate is provided by the Standard Error (SE), which is the extent to which an estimate might have varied by chance because only a sample of persons was obtained. There are about two chances in three that a sample estimate will differ by less than one SE from the figure that would have been obtained if all persons had been included in the survey, and about 19 chances in 20 that the difference will be less than two SEs.

Another measure is Relative Standard Error (RSE), which is the SE expressed as a percentage of the estimate. The RSE is a useful measure as it provides an immediate indication of the percentage errors likely to have occurred due to sampling, and therefore avoids the need to refer also to the size of the estimate.

$\text{RSE}(x) = \dfrac{SE(x)}{x} \times 100\%$

The smaller the estimate, the higher the RSE. Very small estimates are subject to such high SEs (relative to the size of the estimate) as to detract seriously from their value for most reasonable uses. Only estimates with RSEs of less than 25% are considered sufficiently reliable for most purposes.
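As an illustration, the following short Python sketch (using purely hypothetical figures, not survey results) computes an RSE from an estimate and its SE, applies the 25% reliability cut-off, and forms the approximate 95% confidence interval described above:

    def rse(estimate, se):
        """Relative Standard Error: the SE expressed as a percentage of the estimate."""
        return se / estimate * 100

    # Hypothetical example: an estimate of 150,000 persons with an SE of 30,000.
    estimate, se = 150000, 30000
    print(rse(estimate, se))                     # 20.0 -> below the 25% cut-off, so
                                                 # sufficiently reliable for most purposes
    # About 19 chances in 20 that the true figure lies within two SEs:
    print(estimate - 2 * se, estimate + 2 * se)  # 90000 210000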

RSEs for all estimates published in the National Survey of Mental Health and Wellbeing: Summary of Results, 2007 (cat. no. 4326.0) are available from the ABS website <www.abs.gov.au> in spreadsheet format.

Imprecision due to sampling variability, which is measured by the SE, should not be confused with inaccuracies that may occur because of imperfections in reporting by respondents and interviewers, or random errors made in coding and processing of survey data. These types of inaccuracies contribute to the total non-sampling error and may occur in any enumeration. The potential for random non-sampling error adds to the uncertainty of the estimates caused by sampling variability. However, it is not usually possible to quantify either the random or systematic non-sampling errors.

Standard errors of proportions and percentages

Proportions and percentages formed from the ratio of two estimates are subject to sampling errors. The size of the error depends on the accuracy of both the numerator and the denominator.

The RSEs of proportions and percentages for the National Survey of Mental Health and Wellbeing: Summary of Results, 2007 (cat. no. 4326.0) are calculated using the full delete-a-group jackknife technique, which is described in the following segment (see 'Replicate weights and directly calculated standard errors'). RSEs for all estimates in the summary publication are available in spreadsheet format from the ABS website <www.abs.gov.au>.

For proportions where the denominator is an estimate of the number of persons in a group and the numerator is the number of persons in a sub-group of the denominator group, a formula to approximate the RSE of the proportion x/y is given by:

$RSE\left(\dfrac{x}{y}\right) \approx \sqrt{\left[RSE(x)\right]^{2} - \left[RSE(y)\right]^{2}}$

From the above formula, the estimated RSE of the proportion or percentage will be lower than the RSE of the estimate of the numerator.
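A minimal sketch of this approximation in Python, using hypothetical RSEs rather than published values:

    import math

    def rse_proportion(rse_x, rse_y):
        """Approximate RSE of the proportion x/y, where x is a sub-group of y.
        Inputs and result are percentages."""
        return math.sqrt(rse_x ** 2 - rse_y ** 2)

    # Hypothetical example: numerator RSE of 12%, denominator RSE of 5%.
    print(rse_proportion(12, 5))   # about 10.9 - lower than the numerator's
                                   # RSE, as noted above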

Replicate weights and directly calculated standard errors

Standard Errors (SEs) on estimates from this survey were obtained through the delete-a-group jackknife variance technique. In this technique, the full sample is repeatedly subsampled by successively dropping different groups of clusters of households, and the remaining records are then re-weighted to the survey benchmark population. Through this technique, the effect of the complex survey design and estimation methodology on the accuracy of the survey estimates is captured in the replicate weights. For the 2007 SMHWB, this process was repeated 60 times to produce 60 replicate weights for each sample unit. The distribution of the 60 replicate estimates around the full-sample estimate is then used to directly calculate the SE for each full-sample estimate.

The use of directly calculated SEs for each survey estimate, rather than SEs based on models, provides more information on the sampling variability inherent in a particular estimate. Consequently, estimates of the same magnitude, but based on different sample units, will generally have different directly calculated SEs.

For more information on the replicate weights technique see Appendix 2.
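The sketch below illustrates how a directly calculated SE can be obtained from the 60 replicate weights. It assumes the standard delete-a-group jackknife variance formula, with scaling factor (G-1)/G for G = 60 replicate groups; see Appendix 2 for the formula actually used in this survey.

    import numpy as np

    def jackknife_se(values, full_weights, replicate_weights):
        """Directly calculated SE of a weighted total via delete-a-group jackknife.

        values            : variable of interest, one entry per sample unit
        full_weights      : full-sample weights, one entry per sample unit
        replicate_weights : array of shape (units, 60), one column per replicate
        """
        g = replicate_weights.shape[1]                    # number of replicate groups (60)
        full_estimate = values @ full_weights             # full-sample weighted estimate
        replicate_estimates = values @ replicate_weights  # one estimate per replicate
        # Variance is (G-1)/G times the sum of squared deviations of the
        # replicate estimates from the full-sample estimate.
        variance = (g - 1) / g * np.sum((replicate_estimates - full_estimate) ** 2)
        return np.sqrt(variance)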

Comparison of estimates

Published estimates may also be used to calculate the difference between two survey estimates. Such an estimate is subject to sampling error. The sampling error of the difference between two estimates depends on their Standard Errors (SEs) and the relationship (correlation) between them. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

$SE(x - y) \approx \sqrt{\left[SE(x)\right]^{2} + \left[SE(y)\right]^{2}}$

While the above formula will be exact only for differences between separate and uncorrelated (unrelated) characteristics of sub-populations, it is expected that it will provide a reasonable approximation for all differences likely to be of interest in this survey.
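A minimal sketch of this approximation (hypothetical SEs, not published values):

    import math

    def se_difference(se_x, se_y):
        """Approximate SE of (x - y), treating x and y as uncorrelated."""
        return math.sqrt(se_x ** 2 + se_y ** 2)

    # Hypothetical example: SEs of 4,000 and 3,000 give an SE of 5,000
    # for the difference between the two estimates.
    print(se_difference(4000, 3000))   # 5000.0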

Significance testing

For comparing estimates between surveys or between populations within a survey it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant. This is done by calculating the SE of the difference between two estimates (x and y) and using that to calculate the test statistic using the formula below:

$\text{test statistic} = \dfrac{|x - y|}{SE(x - y)}$

If the value of the test statistic is greater than 1.96, the difference between the two estimates can be regarded as statistically significant at the 95% confidence level; that is, it is unlikely that a difference of this size would arise from sampling variability alone if the two populations did not differ on the particular characteristic.
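Putting the two formulas together, a sketch of the test using hypothetical figures:

    import math

    def significant_difference(x, y, se_x, se_y):
        """Two-sided test of the difference (x - y) at the 95% confidence level."""
        test_statistic = abs(x - y) / math.sqrt(se_x ** 2 + se_y ** 2)
        return test_statistic > 1.96

    # Hypothetical example: estimates of 120,000 and 100,000 with SEs of
    # 4,000 and 3,000. The test statistic is 20,000 / 5,000 = 4.0 > 1.96,
    # so the difference is statistically significant.
    print(significant_difference(120000, 100000, 4000, 3000))   # True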

Seasonal effects

The estimates from the survey are based on information collected from August to December 2007, with enumeration being completed prior to Christmas. Due to seasonal effects the data may not be fully representative of other time periods in the year. For example, the survey included questions on labour force status to determine whether a person was employed. Employment is subject to seasonal variation throughout the year. Therefore, the survey results for employment could have differed if the survey had been conducted over the whole year or in a different part of the year.


NON-SAMPLING ERROR

Every effort was made to minimise non-sampling error by careful design and testing of questionnaires, intensive training of interviewers, and extensive editing and quality control procedures at all stages of data processing. However, errors can be made in giving and recording information during an interview, and these may occur regardless of whether the estimates are derived from a sample or from a full count (eg Census). Inaccuracies of this type are referred to as non-sampling errors. The major sources of non-sampling error are:
  • errors related to the survey scope;
  • response errors;
  • errors in processing; and
  • non-response.

These sources of random and/or systematic non-sampling error are discussed in more detail in the following segments.


Errors related to the survey scope

Some dwellings may have been inadvertently included or excluded, for example, where it was unclear whether a dwelling was private or non-private. To prevent this type of error, dwelling listings are constantly updated. Additionally, some people may have been inadvertently included or excluded because of difficulties in applying the scope rules, for example, in identifying a household's usual residents or in the treatment of some overseas visitors. For more information on 'Scope and coverage' see Chapter 2.


Response errors

In this survey response errors may have arisen from three main sources:
  • questionnaire design and methodology;
  • interviewing technique; and
  • inaccurate reporting by the respondent.

Errors may have been caused by misleading or ambiguous questions, inadequate or inconsistent definitions of terminology, or by poor overall survey design (eg context effects, where responses to a question are directly influenced by the preceding question/s). In order to overcome these types of issues, individual questions and the overall questionnaire were tested before the survey was enumerated. Testing included:
  • cognitive laboratory interviews;
  • peer review; and
  • field testing (a pilot test and dress rehearsal).

More information on pre- and field testing is provided in Chapter 2.

As a result of testing, modifications were made to:
  • question design, wording and sequencing;
  • the respondent booklet (prompt cards); and
  • survey procedures.

In considering modifications it was sometimes necessary to balance better response to a particular item/topic against increased interview time, effects on other parts of the survey and the need to minimise changes to ensure international comparability. Therefore, in some instances it was necessary to adopt a workable/acceptable approach rather than an optimum approach. Although changes would have had the effect of minimising response errors due to questionnaire design and content issues, some will have inevitably occurred in the final survey enumeration.

Response errors may also have occurred due to the length of the survey interview (on average 90 minutes) because of interviewer and/or respondent fatigue (ie loss of concentration). While efforts were made to minimise errors arising from deliberate misreporting or non-reporting, some instances will have inevitably occurred.

Accuracy of recall may also have led to response error, particularly in relation to the lifetime questions. Information in this survey is essentially 'as reported', and therefore may differ from information available from other sources or collected using different methodologies. Responses may be affected by imperfect recall or individual interpretation of survey questions. The focus of this survey is on lifetime mental disorders and people who experienced symptoms in the 12 months prior to interview. The reference periods of the questions reflect this emphasis. The questionnaire was designed to strike a balance between minimising recall errors and ensuring the data was meaningful, representative (from both respondent and data use perspectives) and would yield sufficient observations to support reliable estimates. It is possible that the reference periods did not suit every person for every topic, and that difficulty with recall may have led to inaccurate reporting in some instances.

A further source of response error is lack of uniformity in interviewing standards. To ensure uniform interviewing practices and a high level of response accuracy, extensive interviewer training was provided. An advantage of using Computer Assisted Interviewing (CAI) technology to conduct survey interviews is that it potentially reduces non-sampling error. More information on interviews, interviewer training, the survey instrument and CAI is provided in Chapter 2.

Some respondents may have provided responses that they felt were expected, rather than those that accurately reflected their own situation. Every effort has been made to minimise such bias through the development and use of culturally appropriate survey methodology. Non-uniformity of interviewers themselves is also a potential source of error, in that the impression made upon respondents by personal characteristics of individual interviewers such as age, sex, appearance and manner, may influence the answers obtained.


Errors in processing

Errors may occur during data processing, between the initial collection of the data and the final compilation of statistics. These may be due to a failure of computer editing programs to detect errors in the data, or may arise during the manipulation of raw data to produce the final survey data files, for example, in the course of deriving new data items from raw survey data (eg coding), during the estimation procedures, or when weighting the data file.

To minimise the likelihood of these errors occurring a number of processes were used, including:
  • computer editing - edits were devised to ensure that logical sequences were followed in the questionnaires, that necessary items were present and that specific values lay within certain ranges. These edits were designed to detect reporting and recording errors, incorrect relationships between data items or missing data items.
  • data file checks - at various stages during processing (such as after computer editing or after derivation of new data items) frequency counts and/or tabulations were obtained from the data file showing the distribution of persons for different characteristics. These were used as checks on the content of the data file, to identify unusual values which may have significantly affected estimates and illogical relationships not previously identified. Further checks were conducted to ensure consistency between related data items and in the relevant populations.
  • output checks - where possible, the data was checked to ensure consistency of the survey outputs against results of other ABS surveys, such as the 1997 SMHWB, and other data sources, such as the 2006 New Zealand Mental Health Survey.


Non-response

Non-response may occur when people cannot or will not participate in the survey, or cannot be contacted during the period of enumeration. Unit and item non-response by persons/households selected in the survey can affect both sampling and non-sampling error. The loss of information on persons and/or households (unit non-response) and on particular questions (item non-response) reduces the effective sample and increases sampling error.

Non-response can also introduce systematic non-sampling error by creating a biased sample. The magnitude of any non-response bias depends on the level of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not, as well as the extent to which non-response adjustments can be made during estimation through the use of benchmarks.

To reduce the level and impact of non-response, the following methods were adopted in this survey:
  • face-to-face interviews with respondents;
  • follow-up of respondents if there was initially no response;
  • ensuring the weighted file is representative of the population by aligning the estimates with population benchmarks;
  • use of pseudo-benchmarks for educational attainment, labour force status and household composition; and
  • conducting a Non-Response Follow-Up Study (NRFUS) to gain a qualitative assessment of possible bias.

Of the dwellings selected for the 2007 SMHWB, 5,851 (40%) did not respond fully or adequately. Reflecting the sensitive nature of the survey topic, the average expected interview length (approximately 90 minutes) and the voluntary nature of the survey, around three-fifths (61%) of these dwellings were full refusals. More than a quarter (27%) provided household details, but the selected person did not complete the main questionnaire. The remainder (12%) provided partial or incomplete information.

Reasons for non-response

Categorisation of interviewer remarks from the 2007 SMHWB indicated that the majority of persons who refused to participate stated that they were 'too busy' or 'not interested' in the survey. People also refused to participate as the survey was 'not compulsory', the content was 'too personal', or they expressed an anti-government or anti-survey sentiment (eg invasion of privacy).


NON-RESPONSE ANALYSES

As the level of non-response for this survey was higher than expected, extensive non-response analyses to assess the reliability of the survey estimates were undertaken. The non-response analyses included data comparisons with other ABS sources, external data sources and a Non-Response Follow-Up Study (NRFUS).


Non-response Follow-Up Study (NRFUS)

A purposive small-sample/short-form intensive Non-Response Follow-Up Study (NRFUS) was developed for use with non-respondents in Sydney and Perth. The aim of the NRFUS was to provide a qualitative assessment of the likelihood of non-response bias. The NRFUS was conducted from January to February 2008 and yielded 151 respondents, a response rate of 40%. NRFUS respondents were from households that were classified as full non-contacts or full refusals during the enumeration of the 2007 SMHWB.

Methodology

As the intent of the NRFUS was to provide qualitative information on non-response bias, the sample selection was done in such a way as to minimise costs (eg interviewer travel). There were ten interviewers available in Sydney and five in Perth. The resulting sample was therefore a convenience sample rather than a random sample, although the interviewers were reasonably spread across different areas of the two cities.

In addition, interviewer remarks from the 2007 SMHWB were used to screen for households where there may have been a high risk to the safety of interviewers (eg households with aggressive dogs). This resulted in a small number of households being excluded from the NRFUS. The interviewers were also given the opportunity to exclude potentially dangerous households from the approached sample; however, such households were included as 'refusals'.

The NRFUS aimed to achieve 100 fully responding households (persons) within a capped budget during a four-week enumeration period. Interviewers were assigned 401 households (229 in Sydney and 172 in Perth) and achieved 151 fully responding households (77 in Sydney and 74 in Perth). As the NRFUS was not based on a random sample, the results should be interpreted with caution. The following table gives the distribution of responses to the NRFUS.

16. NON-RESPONSE FOLLOW-UP STUDY, by response rates

                        Target       Fully   Partial      Non-            Sample  Total less  Response
City of enumeration   response  responding  response  response  Refusals    loss  sample loss      rate
                           no.         no.       no.       no.       no.     no.          no.         %
Sydney                      50          77         1        44        95      12          217        34
Perth                       50          74         1        40        50       7          165        43
Sample                     100         151         2        84       145      19          382        40


Analysis

The NRFUS used a short-form questionnaire containing demographic questions and the Kessler Psychological Distress Scale (K10). The short-form approach used for the NRFUS precluded the use of the full diagnostic assessment modules. However, the K10 was included as a minor proxy to the mental health questions. Respondents to the NRFUS were compared to people who responded fully to the 2007 SMHWB by a number of demographic variables, including age, sex and marital status.

Given the small size and purposive nature of the NRFUS sample, the results of the study were not incorporated into the 2007 SMHWB estimation strategy, but were used for qualitative comparison.

Demographic characteristics

The age and sex distribution of respondents to the 2007 SMHWB was compared with the distribution of the NRFUS respondents. The 2007 SMHWB had a higher proportion of older persons (aged 65-85 years) than the NRFUS. This is expected, as older people were more likely to have responded to the main survey if selected and there was also a higher probability of selection for this age group. The proportion of younger people (aged 16-24 years) in both the 2007 SMHWB and the NRFUS samples (17.8% and 17.9% respectively) was higher than the proportion of younger people in the population (16.5%). However, this could also be explained by the higher probability of selection used for this age group.

The 2007 SMHWB had higher coverage of female respondents than the NRFUS (55% compared to 47%). The NRFUS achieved higher coverage of male respondents (53%), particularly younger males (56%), resulting in additional information being available on the characteristics of these people. The NRFUS also achieved higher relative numbers of responding 'never married' people compared to the 2007 SMHWB (46% compared to 35%).

The analysis undertaken suggests that there may be differences in the direction and magnitude of potential non-response bias between various geographical, age and sex variables that the weighting strategy does not correct for. The magnitude of potential non-response bias appears to be small at the aggregate level, but there is possible underestimation in the prevalence of mental health conditions in Perth, for men, and for young persons. It should be noted that the NRFUS was only conducted with non-respondents to the 2007 SMHWB from Sydney and Perth.

K10 scores

Research has found a strong association between high scores on the K10 and the diagnosis of Anxiety and Affective disorders through the current WMH-CIDI (version 3.0). There is also a lesser, but still significant association between the K10 and other mental disorder categories, or the presence of any current mental disorder (Andrews & Slade, 2001). More information about the K10 is provided in Chapter 6.

Differences in K10 scores may be associated with differences in the prevalence rates of certain mental disorders; therefore, if the K10 score is underestimated, it is likely that prevalence rates are also underestimated. Comparing the K10 scores from the NRFUS with those from the 2007 SMHWB allows the likely differences to be assessed.

Based on the combined sample from Sydney and Perth, the unweighted mean K10 score for the NRFUS was higher than for the 2007 SMHWB (15.6 compared to 14.4). This indicates that NRFUS respondents had higher levels of distress, and therefore may have had marginally higher prevalence of mental disorders. However, the difference between the two samples is small, and the separate results for Sydney and Perth indicate lower and higher K10 scores respectively than the 2007 SMHWB. Given that younger males and people who are 'divorced' or 'never married' characteristically exhibit higher K10 scores, higher scores would be expected in the NRFUS sample, which had relatively more respondents from these groups.

In order to analyse the sensitivity of changes to the K10 estimates, speculative revised values were obtained by applying the NRFUS K10 scores to the non-respondents of the 2007 SMHWB. This resulted in a revised mean K10 score of 14.8 for the 2007 SMHWB, which is not significantly higher than the current estimate (14.4).
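As a back-of-envelope check on this sensitivity analysis, applying the NRFUS mean K10 score (15.6) to the roughly 40% of the sample that did not respond, and the observed mean (14.4) to the 60% that did, reproduces a revised mean close to the published figure. The shares used here are indicative only; the published value (14.8) reflects the survey's actual weighting.

    # Indicative shares only: 60% respondents, 40% non-respondents,
    # mirroring the survey's 60% response rate.
    response_rate = 0.60
    revised_mean = response_rate * 14.4 + (1 - response_rate) * 15.6
    print(round(revised_mean, 1))   # 14.9 - broadly in line with the published 14.8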


COMPARISONS TO OTHER DATA SOURCES

To ascertain data consistency, the characteristics of the 2007 SMHWB respondents were compared to a number of ABS collections, including:
  • 2006 Census of Population and Housing;
  • 2004-05 National Health Survey;
  • 2007 Survey of Education and Work; and
  • the monthly Labour Force Survey.

From this analysis, it was determined that some of the demographic and socio-economic characteristics from the initial weighted data did not align with other ABS estimates. Additional (or 'pseudo') benchmarks were used to adjust for differential undercoverage of educational attainment, labour force status and household composition. For more information on 'Weighting, benchmarking and estimation' see Chapter 2.

Comparisons were also made between the 2007 SMHWB and a number of mental health data sources, including:
  • 1997 ABS Mental Health and Wellbeing: Profile of Adults, Australia;
  • 2003 ABS Survey of Disability, Ageing and Carers, Australia;
  • 2006 Te Rau Hinengaro: The New Zealand Mental Health Survey (NZMHS);
  • 2006-07 Bettering the Evaluation and Care of Health (BEACH) survey data program, Australia;
  • 2002 Canadian Community Health Survey: Mental Health and Wellbeing;
  • 2002-03 National Comorbidity Replication Survey, United States of America; and
  • European studies from the World Mental Health Survey Initiative.

Analysis undertaken to compare the results of the 2007 and 1997 SMHWB indicated:
  • little change in the prevalence of mental disorders, apart from changes consistent with an ageing population or attributable to changes in methodology between the surveys (eg differences in the prevalence of depression are suspected to be at least partly due to methodological changes);
  • little change in the level and patterns of actions taken; and
  • a continued decline in the proportion of adults smoking, but an increase in risky/high risk drinking, overweight and obesity, and a fall in the proportion of adults exercising.

Changes in prevalence between the 1997 and 2007 SMHWB were not included in the National Survey of Mental Health and Wellbeing: Summary of Results, 2007 (cat. no. 4326.0), due to concerns of data comparability. For more information on comparability between these two surveys see 'Comparison with the 1997 survey' in this chapter.


COMPARISON WITH THE 1997 SURVEY

Overview

In 1997 the ABS conducted the National Survey of Mental Health and Wellbeing of Adults. The survey provided information on Australians aged 18 years and over on:
  • the prevalence of selected 12-month mental disorders;
  • the level of disability associated with those disorders;
  • health services used; and
  • perceived need for help with a mental health problem.

The 1997 survey was an initiative of, and was funded by, the then Commonwealth Department of Health and Family Services, as part of the National Mental Health Strategy. A key aim of the survey was to provide prevalence estimates for selected mental disorders in a 12-month time frame. Therefore, diagnostic criteria were assessed solely on a person's experiences in the 12 months prior to the survey interview.

The 2007 survey was designed to provide lifetime prevalence estimates for selected mental disorders. People were asked about experiences throughout their lifetime, with an emphasis on the time when they had the most symptoms or the worst period of this type. Where a number of symptoms were endorsed across a lifetime, the person was asked about the presence of symptoms in the 12 months prior to the survey interview. To be included in the 12-month prevalence rates, a person must have met the criteria for lifetime diagnoses and had symptoms in the 12 months prior to the survey interview. The full diagnostic criteria were not assessed within the 12-month time frame.

Due to the differences described in this chapter and throughout this publication, users should exercise caution when comparing data from the two surveys.


Comparison of diagnoses

The diagnoses of mental disorders for the 2007 survey are based on the WMH-CIDI 3.0, while the 1997 survey diagnoses were based on an earlier version of the CIDI (version 2.1). Apart from the differences in time frames outlined above, the WMH-CIDI 3.0 differs from earlier versions as it:
  • has a number of expanded modules;
  • incorporates changes to diagnostic algorithms and sequencing; and
  • utilises a diagnostic 'screener'.

For example, the number of questions asked about scenarios which may have triggered a Post-Traumatic Stress Disorder (PTSD) has increased substantially, from 10 questions in 1997 to 28 questions in 2007. Additionally, the 1997 survey excluded people who said their extremely stressful or upsetting event was only related to:
  • bereavement;
  • chronic illness;
  • business loss;
  • marital or family conflict; or
  • a book, movie or television show.

The WMH-CIDI 3.0 diagnostic assessment criteria according to the ICD-10 and DSM-IV for the 2007 survey are provided in Chapter 3. A summary of the broad differences in the diagnostic assessment criteria between the two surveys is provided in Chapter 4.


Survey methodology

Both surveys collected information from persons in private dwellings throughout Australia. The 2007 survey collected information from people aged 16-85 years, while the 1997 survey collected information on people aged 18 years and over. In 2007, overseas visitors who had been working or studying in Australia for the 12 months prior to the survey or were intending to do so were included in the scope. In 1997, overseas visitors were excluded.

The sample sizes and response rates varied for both surveys. The 2007 survey had a sample of approximately 8,800 people, compared to approximately 10,600 people in 1997. The 2007 survey had a response rate of 60%, compared to 78% in 1997. Additionally, in 1997 extra survey sample was collected for Victoria, Western Australia (WA) and the Australian Capital Territory (ACT). The additional sample for the ACT was included in published results for the 1997 survey, while the additional sample for Victoria and WA was not. For more information on survey methodology see Chapter 2.


Survey timing

The enumeration period of each survey differs, which may impact on data comparisons. Seasonal effects refer to the influence that timing may have on a survey, ie the period when the survey was enumerated may not be fully representative of other time periods in the year. For example, fluctuations in employment prior to Christmas. The 2007 survey was undertaken from August to December, while the 1997 survey was undertaken from May to August.


Population characteristics

The classifications of several demographic and socio-economic characteristics used in the 2007 SMHWB differ from those used in 1997, including:
  • education;
  • occupation;
  • language spoken; and
  • geography.

Industry of employment was collected for the first time in 2007. For information on population characteristics see Chapter 9 and for information on classifications see Chapter 2.


Physical health

Several of the scales and measures used to estimate disability and functioning in the 2007 survey differ from those used in 1997. The 2007 survey includes:
  • a standard set of ABS questions on the level and severity of role impairment (ABS Short Disability Module);
  • the World Health Organization Disability Assessment Schedule (WHODAS); and
  • the Assessment of Quality of Life (AQoL) instrument.

In comparison, the 1997 survey collected information on disability and functioning using:
  • the Brief Disability Questionnaire;
  • the Short-Form 12; and
  • the General Health Questionnaire (GHQ-12).

Both surveys contained questions on:
  • physical conditions (eg asthma, cancer, etc);
  • health risk factors (eg smoking); and
  • the number of 'days out of role'.

However, the positioning of questions within each survey and the wording of questions varies. Information on physical activity and body mass was collected for the first time in 2007; therefore, there are no data from the 1997 survey for comparison.

The 2007 survey included a small number of questions on hypochondriasis and somatisation, whereas the 1997 survey assessed somatic disorder, neurasthenia, and the personality characteristic neuroticism (Eysenck Personality Questionnaire). For more information on physical health see Chapter 5.


Other scales and measures

A psychosis screener was included in both surveys. In 1997, there were seven questions about psychotic experiences in the 12 months prior to interview. Reflecting the differing emphasis in time frames, the 2007 survey asked questions about both lifetime and 12-month psychotic experiences.

The 2007 survey contains more detailed information on suicidal behaviour than the 1997 survey. Apart from a small number of questions in the Depression module, the 1997 survey included only three questions about suicidal behaviour. The 2007 survey collected information on lifetime and 12-month suicidal behaviour, including the persistence of the behaviour/s. Detailed information, such as the consequences of attempted suicide (eg medical attention required) was also collected for suicidal behaviour in the 12 months prior to interview.

Both surveys included the Kessler Psychological Distress Scale (K10). However, there were minor differences in the question wording. The Mini-Mental State Examination (MMSE) was included in both surveys. There are some differences in the period referred to for memory problems, as well as differences in the order of tasks performed. For more information on 'Other scales and measures' see Chapter 6.


New data

There are no data from the 1997 survey for comparison with the following items, as these were collected for the first time in 2007:
  • medications;
  • social networks;
  • caregiving;
  • sexual orientation;
  • homelessness;
  • incarceration; and
  • service in the Australian Defence Forces.

More information on the data collected in the 1997 survey is provided in the National Survey of Mental Health and Wellbeing of Adults: Users' Guide, 1997 (cat. no. 4327.0).


Other considerations
  • The employment component of this survey was based on a reduced set of questions from the ABS monthly Labour Force Survey.
  • In terms of physical conditions, reported information was not medically verified, and was not necessarily based on diagnoses by a medical practitioner.
  • In terms of mental disorders, the WMH-CIDI 3.0 makes diagnoses against specific criteria. It has no facility for subjective interpretation. Therefore, it cannot always replicate diagnoses made by a health professional. Symptoms which have a considerable effect on people are likely to be better reported than those which have little effect.
  • The results of previous surveys on alcohol and illicit drug consumption suggest a tendency for people to under-report actual consumption levels.
  • The inclusion of lifetime diagnosis in the 2007 SMHWB may have led to higher prevalence of 12-month mental disorders compared to the 1997 survey. In the 2007 survey, people may have met the criteria for lifetime diagnosis and had symptoms in the 12 months prior to interview. However, they may not have met full diagnostic criteria within the 12-month time frame, as was required in the 1997 survey. Other differences, outlined in this publication, also need to be considered when comparing prevalence rates between the two surveys.
  • The exclusion of residents in special dwellings (eg hotels, boarding houses and institutions) and homeless people will have affected the results. It is therefore likely that the survey underestimates the prevalence of mental disorder in the Australian population.