4327.0 - National Survey of Mental Health and Wellbeing: Users' Guide, 2007  
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 11/02/2009   

2. SURVEY DESIGN


OVERVIEW

This is the second mental health and wellbeing survey conducted by the ABS; the first was conducted in 1997. Funding for this survey was provided by the Australian Government Department of Health and Ageing (DoHA).

The 2007 National Survey of Mental Health and Wellbeing (SMHWB) is based on a widely-used international survey instrument and is therefore designed to provide data for international comparisons, rather than comparisons with the previous survey. Broadly, the survey collected information on:

  • selected lifetime and 12-month mental disorders;
  • use of health services and medication for mental health problems;
  • physical conditions and disability;
  • social networks and caregiving; and
  • demographic and socio-economic characteristics.

This chapter contains the following segments:
  • Survey content;
  • Composite International Diagnostic Interview (CIDI);
  • Survey development;
  • Data collection;
  • Data processing;
  • Response rates and sample achieved;
  • Sample design; and
  • Weighting, benchmarking and estimation.

SURVEY CONTENT

The survey was designed to collect information on selected lifetime and 12-month mental disorders. Basic demographic information was collected from one usually resident household member aged 18 years or over, covering each person in the selected household. The information collected through the survey's Household Form included:
  • relationship in household;
  • marital status (registered and social);
  • sex;
  • age;
  • whether studying full-time;
  • country of birth; and
  • year of arrival.

Based on the demographic information, one person in the household aged 16-85 years was then randomly selected to complete a personal interview. People aged 16-24 years and 65-85 years had a higher chance of selection (a simple illustration of this selection follows the list below). For more information see 'Sample design' in this chapter. The selected person, or an elected household spokesperson, also answered some financial and housing items on behalf of other household members, including:
  • income (for persons aged 15 years and over);
  • dwelling tenure type; and
  • household financial stress in the 12 months prior to interview.
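To illustrate the within-household selection described above, here is a minimal sketch in Python. The oversampling factor of 2.0 for the 16-24 and 65-85 year age groups is an assumption for illustration only; the actual selection probabilities used by the ABS are not stated here.

    import random

    def selection_weight(age):
        """Assumed relative chance of selection; the real factors are not stated here."""
        if 16 <= age <= 24 or 65 <= age <= 85:
            return 2.0   # assumed higher chance of selection for these age groups
        if 25 <= age <= 64:
            return 1.0
        return 0.0       # out of scope for the personal interview

    def select_respondent(household_ages, rng=random.Random(2007)):
        """Randomly select one in-scope person aged 16-85 from a household."""
        weights = [selection_weight(age) for age in household_ages]
        if not any(weights):
            return None  # no household member in scope
        return rng.choices(range(len(household_ages)), weights=weights, k=1)[0]

    # Example: a household with members aged 45, 19 and 70; under these assumed
    # weights the 19 and 70 year olds are twice as likely to be selected.
    print(select_respondent([45, 19, 70]))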

The survey included the following topics:

MENTAL DISORDERS AND CONDITIONS (CHAPTER 3)

Screener
  • Diagnostic screening questions about feelings or experiences, which were linked to later modules in the questionnaire.

Mental disorders (diagnostic component)

The survey collected information on selected mental disorders: those considered to have the highest prevalence in the population and that can be identified in an interviewer-based survey. The mental disorders include:
  • Anxiety disorders
      Panic Disorder
      Agoraphobia
      Social Phobia
      Generalised Anxiety Disorder
      Obsessive-Compulsive Disorder
      Post-Traumatic Stress Disorder
  • Affective (mood) disorders
      Depressive Episode
      Dysthymia
      Bipolar Affective Disorder
  • Substance Use disorders
      Alcohol Harmful Use/Abuse
      Alcohol Dependence
      Drug Use disorders (includes Harmful Use/Abuse and Dependence)

A brief overview of the differences between the 1997 and 2007 diagnostic assessment criteria is also provided in Chapter 4.


PHYSICAL HEALTH (CHAPTER 5)

Chronic conditions
  • Physical conditions or disorders that have lasted, or are expected to last for six months or longer;
  • Focuses on Australia's National Health Priority Areas (NHPAs); and
  • Includes a number of other physical conditions, such as hayfever, bronchitis, epilepsy and migraine.

Disability
  • Based on the ABS Short Disability Module.
  • Includes information on the level of severity.

Functioning
  • WHO Disability Assessment Schedule (WHODAS);
  • Assessment of Quality of Life (AQoL) instrument; and
  • Days out of role.

Hypochondriasis and somatisation

Brief questions to determine possible hypochondriasis and/or somatisation.

Health risk factors
  • Smoker status;
  • Level of exercise; and
  • Body Mass Index (BMI).


OTHER SCALES AND MEASURES (CHAPTER 6)

The Kessler Psychological Distress Scale (K10)
  • Set of 10 questions.
  • Four additional questions were asked, exploring anger in the 30 days prior to interview.

Severity measure
  • The severity of 12-month symptoms for people who met criteria for lifetime diagnosis.
  • Measured using information collected in the mental disorder modules.

Delighted-Terrible Scale
  • Measured the respondent's satisfaction with their life.

Self-assessed health rating
  • Measured the respondent's perceptions of their overall physical and mental health.

Main problem
  • Self-reported in cases where a person had more than one physical condition or mental health problem.

Psychosis screener
  • Four questions designed to screen for the possible presence of psychosis. The questions were not designed to provide a diagnosis.

Suicidal behaviour
  • Thoughts and behaviours (lifetime and 12 months prior to interview).

Mini Mental State Examination
  • A brief assessment used to screen for the presence of cognitive impairment. It does not identify any particular organic mental disorders.
  • Only asked of people aged 65-85 years.


SOCIAL NETWORKS AND CAREGIVING (CHAPTER 7)

Social networks
  • Level and frequency of contact with family members and/or friends.
  • Ability to rely on or confide in family members and/or friends.

Caregiving
  • Whether an immediate family member has health problems and the impact this may have on the respondent's life.


HEALTH SERVICE UTILISATION (CHAPTER 8)

Consultations for specific mental disorders
  • Collected within each mental disorder module.

Services used for mental health problems
  • Consultations with health professionals (lifetime and 12 months prior to interview);
  • Hospital admissions; and
  • Self-management strategies.

Medications
  • Types of medications used for mental health in the two weeks prior to interview.

Perceived need for help
  • Whether particular types of assistance were received (eg medication, counselling);
  • Whether the need for assistance was met; and
  • where the need for assistance was not met, the reasons why.


POPULATION CHARACTERISTICS (CHAPTER 9)

Household characteristics
  • Family composition of household;
  • Tenure type;
  • Household income;
  • Household financial stress;
  • Geographic characteristics (eg section of state); and
  • Socio-Economic Indexes for Areas (SEIFA).

Selected person characteristics
  • Education;
  • Labour force status;
  • Personal income;
  • Parental country of birth;
  • Language mainly spoken at home;
  • Sexual orientation;
  • Homelessness;
  • Incarceration; and
  • Service in the Australian Defence Forces.

More information about the survey contents is available in the chapters indicated above and in the Glossary. A detailed list of data items from the survey (in spreadsheet format) has been released with the Technical Manual on the ABS website <www.abs.gov.au>. Appendix 1 also contains a diagram of the structure and flow of the survey.


COMPOSITE INTERNATIONAL DIAGNOSTIC INTERVIEW (CIDI)

Measuring mental health in the community through household surveys is a complex task, as mental disorders are usually determined through detailed clinical assessment. The 2007 National Survey of Mental Health and Wellbeing was based on a widely-used international survey instrument, developed by the World Health Organization (WHO) for use by participants in the World Mental Health Survey Initiative.

The survey used the World Mental Health Survey Initiative version of the World Health Organization's Composite International Diagnostic Interview, version 3.0 (WMH-CIDI 3.0). The WMH-CIDI 3.0 is a comprehensive interview for adults which can be used to assess the lifetime, 12-month and 30-day prevalence of selected mental disorders through the measurement of symptoms and their impact on day-to-day activities. The WMH-CIDI 3.0 was chosen because it:
  • provides a fully structured diagnostic interview;
  • can be administered by lay interviewers;
  • is widely used in epidemiological surveys;
  • is supported by the World Health Organization (WHO); and
  • provides comparability with similar surveys conducted worldwide.

The WMH-CIDI 3.0 provides an assessment of mental disorders based on the definitions and criteria of two classification systems:
  • the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV); and
  • the WHO International Classification of Diseases, Tenth Revision (ICD-10).

Each classification system lists sets of criteria that are necessary for diagnosis. The criteria specify the nature and number of symptoms required; the level of distress or impairment required; and the exclusion of cases where symptoms can be directly attributed to general medical conditions, such as a physical injury, or to substances, such as alcohol.
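As a rough illustration of this structure only (the symptom names and thresholds below are invented, and the actual DSM-IV and ICD-10 criteria are far more detailed), a diagnostic check combines a symptom count, an impairment requirement and the organic/substance exclusions along these lines:

    from dataclasses import dataclass, field

    @dataclass
    class Assessment:
        symptoms: set = field(default_factory=set)
        distress_or_impairment: bool = False
        organic_cause: bool = False      # symptoms attributable to a medical condition
        substance_induced: bool = False  # symptoms attributable to alcohol or drugs

    # Invented symptom list and threshold, for illustration only.
    CRITERION_SYMPTOMS = {"low_mood", "loss_of_interest", "fatigue", "poor_concentration"}
    MIN_SYMPTOM_COUNT = 2

    def meets_criteria(a: Assessment) -> bool:
        """Generic shape of a diagnostic check: symptoms, impairment, exclusions."""
        enough_symptoms = len(a.symptoms & CRITERION_SYMPTOMS) >= MIN_SYMPTOM_COUNT
        excluded = a.organic_cause or a.substance_induced
        return enough_symptoms and a.distress_or_impairment and not excluded

    print(meets_criteria(Assessment(symptoms={"low_mood", "fatigue"},
                                    distress_or_impairment=True)))  # True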

Diagnostic algorithms are specified in accordance with the DSM-IV and ICD-10 classification systems. As not all modules contained in the WMH-CIDI 3.0 were operationalised for the 2007 SMHWB, it was necessary to tailor the diagnostic algorithms to fit the Australian context. More information on the WMH-CIDI 3.0 diagnostic assessment criteria according to the ICD-10 and DSM-IV is provided in Chapter 3.


Screener

A screener was introduced to the WMH-CIDI 3.0 to try to alleviate the effects of learned responses. The module included a series of introductory questions about the respondent's general health, followed by diagnostic screening questions for the primary disorders assessed in the survey, eg depressive episode. This screening method has been shown to increase the accuracy of diagnostic assessments, by reducing the effects of learned responses due to respondent fatigue. Other disorders, such as Obsessive-Compulsive Disorder (OCD), were screened at the beginning of the individual module.
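The gating effect of the screener can be pictured with the following sketch; the question and module names are hypothetical, and the real instrument's sequencing logic was considerably more elaborate.

    # Hypothetical mapping from screener questions to diagnostic modules: a
    # positive screener response 'opens' the corresponding module later on.
    SCREENER_TO_MODULE = {
        "ever_sad_two_weeks": "depressive_episode",
        "ever_panic_attack": "panic_disorder",
        "ever_excessive_worry": "generalised_anxiety_disorder",
    }

    def modules_to_administer(screener_answers):
        """Return the diagnostic modules enabled by positive screener responses."""
        return [module for question, module in SCREENER_TO_MODULE.items()
                if screener_answers.get(question)]

    print(modules_to_administer({"ever_sad_two_weeks": True,
                                 "ever_panic_attack": False}))
    # ['depressive_episode']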


Other information

The WMH-CIDI 3.0 was also used to collect information on:
  • the onset of symptoms and mental disorders;
  • the courses of mental disorders, that is, the varying degrees to which the symptoms of mental disorders present themselves, including: episodic (eg depression), clusters of attacks (eg panic disorder), and fairly persistent dispositions (eg phobias);
  • the impact of mental disorders on home management, work life, relationships and social life; and
  • treatment seeking and access to helpful treatment.

More information on the WMH-CIDI 3.0 is available from the World Mental Health website <www.hcp.med.harvard.edu/wmh/>.


SURVEY DEVELOPMENT

Adapting content for the Australian context

Most of the survey was based on the international survey modules; however, some modules were tailored to fit the Australian context. The adapted modules were designed in consultation with subject matter experts.

A Survey Reference Group, comprising experts and key stakeholders in the field of mental health, provided the ABS with advice on survey content, including the most appropriate topics for collection, and associated concepts and definitions. They also provided advice on issues that arose during field tests and the most suitable survey outputs. Group members included representatives from government departments, academic institutions, health research organisations, carers organisations and consumer groups.

Staff from the University of New South Wales (UNSW) were also contracted by the Australian Government Department of Health and Ageing (DoHA) to provide technical support for the survey. Changes to the survey were intentionally restricted so as not to alter the underlying definitions or concepts being assessed, and to maintain comparability of the survey data between countries.

Other considerations in the questionnaire design were:
  • the length of individual questions;
  • the use of easily understood words and concepts;
  • the overall length of the questionnaire; and
  • the sensitivity of topics.

Where possible, adapted modules used existing ABS questions. For example, demographic items, labour force status, education, disability status, and days out of role are all based on standard ABS questions. Some parts of the survey were changed more extensively than others: minor changes may have been made to question wording to reflect commonly understood Australian terms, or whole modules may have been changed. For example, when discussing payment for health services, 'paid in full by state-funding' was amended to provide examples of state-funded health services, such as 'public hospital outpatient, public community health service or public community mental health service'. In contrast, the Chronic Conditions module used the format of the CIDI questions, but the content was changed to collect information on physical conditions that are National Health Priority Areas for Australia (eg asthma, cancer, diabetes). The set of physical conditions chosen for this survey may therefore differ from the physical conditions collected in other countries' surveys. Other modules that were either tailored to fit the Australian context or contain substantial changes include:
  • Demographic and socio-economic characteristics;
  • Chronic conditions;
  • Disability status;
  • Functioning;
  • Health service utilisation; and
  • Medications.

As with all ABS surveys, extensive testing was conducted to ensure that the survey would collect objective and high quality data.


Questionnaire testing

Pre-testing

Pre-testing covers a range of testing techniques, the common feature being that the testing is conducted prior to taking a survey into the field (ie 'field test'). This phase of testing is critical for identifying problems for both respondents and interviewers, particularly regarding question content. The techniques used are designed to identify problems with the part of the instrument being tested and give some indication of the source of the error.

The following techniques were available during the pre-testing phase of questionnaire development:
  • Cognitive laboratory interviews - semi-structured interviews which provide insight into the thought processes used to answer survey questions. They can be used to find better ways of constructing, formulating and asking survey questions which are then iteratively redesigned to arrive at draft questions suitable for a field test.
  • Expert evaluation - a peer review process used to identify respondent semantic and task problems, assess content validity and translate concepts.

A major advantage of pre-testing is that small samples are used and limited survey-specific documentation and training is required, as the testing is performed by people working on the survey. Consequently, the process allows several iterative tests of a set of questions in the time it would take to conduct a single pilot test.

The broad objectives of the 2007 SMHWB pre-testing were to assess:
  • respondent understanding of proposed concepts and questions;
  • respondent reaction to potentially sensitive data items, and whether this affects reporting;
  • the construct validity of proposed questions, with a view to improving them; and
  • the availability of data.

Cognitive interviews

Cognitive interviews are semi-structured interviews in which the interviewer asks a person about their interpretation of questions and formulation of answers. Two rounds of cognitive interviewing were conducted from June to July 2006. One round was conducted in Canberra and one in Sydney, with 15 and 12 participants respectively. There were some changes to questions between tests in Canberra and Sydney, to assess the flow and understanding of some concepts.

Cognitive testing assists with the assessment of the effectiveness of a survey instrument. The process yields a large amount of information, often identifying unanticipated issues in the instrument. Cognitive testing generally focuses on respondent error, which is only one aspect of non-sampling error. To understand the potential for non-sampling error, cognitive testing is usually done in conjunction with peer review and field testing.

To ensure that concepts were understood by potential respondents, cognitive interviews were conducted for several topics, including:
  • chronic conditions;
  • health service utilisation;
  • substance use disorders;
  • prosperity; and
  • sexual orientation.

The first three topics in the above list relate to modules in the survey instrument. Not all questions in each of these modules were tested: where questions were used in the same context, or with the same wording (for example, the same question asked about different health professionals in the Health Service Utilisation module), testing was not repeated. There were also a number of questions that could not be modified, as they were part of the CIDI.

A number of recommendations were made as a result of the cognitive testing, including:
  • the removal of some questions;
  • changes to the sequencing of questions;
  • revision of interviewer instructions (eg additional clarification or exclusions); and
  • amendments to question wording (eg substitutions of words, shortening the length).

There were also recommendations to monitor in the Pilot Test particular questions that had posed some difficulty in the cognitive testing.

Field testing

The next phase of survey development involved field testing the survey questionnaire and procedures.

Pilot Test

The Pilot Test is the first field test of the entire question set. Testing is designed to:
  • check the positioning and interlinking of various questions or modules;
  • test the mechanical accuracy of the instrument, including sequencing, populations and general layout;
  • undertake a full timing analysis;
  • assess the effectiveness of interviewer training and documentation; and
  • assess field, office management and procedural issues.

A Pilot Test was conducted in Brisbane in November 2006 and consisted of approximately 250 households. It was conducted in both urban and rural areas. The interviews were conducted by 10 ABS trained interviewers.

Dress Rehearsal

The Dress Rehearsal is the final test in the development cycle and mainly focuses on the procedural and timing aspects of the survey. Primarily, it is an operational test. Questionnaire design errors (eg sequencing errors) can be identified, investigated and corrected. Objectives of the Dress Rehearsal are to:
  • confirm the average interview time of the survey (for modules and the whole survey);
  • identify and rectify any issues that interviewers have with procedures, survey documentation, the survey instrument, or the Computer Assisted Interviewer Workload Management System;
  • identify and rectify any issues that respondents have with survey content and structure;
  • refine and add any necessary edits, edit notations or edit resolution instructions;
  • refine and improve the survey's documentation (Interviewers' Instructions; Office Instructions, flowcharts etc.); and
  • refine and improve interviewer training.

The Dress Rehearsal was conducted in Perth and Sydney from April to May 2007 and consisted of approximately 250 households.

The final enumeration of the survey was conducted from August to December 2007.


Other considerations

The survey was designed to provide national estimates that can be compared internationally, rather than to provide comparisons with the 1997 survey. More information on the differences between the 1997 and 2007 surveys is provided in Chapter 10 and throughout each of the chapters describing mental disorders and conditions, physical health, and other scales and measures.


DATA COLLECTION

The survey was conducted under the authority of the Census and Statistics Act 1905. The ABS sought the willing cooperation of households and due to the sensitive nature of the survey contents, participation was voluntary. The confidentiality of all information provided by respondents is guaranteed. Under this legislation, the ABS cannot release identifiable information about households or individuals. All aspects of the survey's implementation were designed to conform to the Information Privacy Principles set out in the Privacy Act 1988, and the Privacy Commissioner was informed of the details of the proposed survey.

Trained ABS interviewers conducted personal interviews at selected private dwellings from August to December 2007. Interviews were conducted using a Computer-Assisted Interview (CAI) questionnaire. CAI involves the use of a notebook computer to record, store, manipulate and transmit the data collected during interviews.


Interviews

Selected households were initially sent a Primary Approach Letter (PAL) by mail to inform the household of their selection in the survey and to advise that an interviewer would call to arrange a suitable time to conduct an interview. A brochure, providing some background to the survey, information concerning the interview process, and a guarantee of confidentiality was included with the letter. For a small number of households where the ABS did not have an adequate postal address, this was not possible.

On the first face-to-face contact with the household by an interviewer, general characteristics of the household were obtained from one person in the household aged 18 years or over. This information included basic demographic characteristics of all usual residents of the dwelling (eg age and sex) and the relationships between household members (eg spouse, son, daughter, not related). This person, or an elected household spokesperson also answered some financial and housing items, such as income and tenure, on behalf of other household members.

From the information provided by the household spokesperson, the survey instrument identified those persons in scope of the survey and randomly selected one person aged 16-85 years to be included in the survey. A personal interview was conducted with this randomly selected person.

In order to conduct a personal interview with the selected person (ie the respondent), interviewers made appointments to call back to the household as necessary. In some cases appointments for call-backs were made by telephone; however, all interviews were conducted face-to-face. Due to the sensitive nature of the survey questions, it was suggested that interviews be conducted in private; in practice, interviews were conducted either in private or in the presence of other household members, according to the wishes of the respondent. Interviews, including the household assessment, took on average 90 minutes per fully-responding household.

Proxy, interpreted or foreign language interviews were not conducted. This decision ensured that the survey could be conducted by lay interviewers using a standard format. It also ensured that survey questions were asked exactly as described and the onus of interpretation was on the respondent, rather than being influenced by a third party. Additionally, the sensitive nature of the survey questions made them unsuitable for use with proxies or interpreters. Given the complexity of the survey, the concepts involved and the additional costs, translation of the survey into foreign languages was not considered viable.

Respondent reactions

ABS interviewers received training which assisted them to monitor respondent reactions to the survey interview, for example, changes in the respondent's voice (eg becoming quieter), facial complexion (eg draining of colour), body tension (eg shaking hands), or extended pauses. If these signs were observed, the interviewer was instructed to stop the interview and check that the respondent was okay.

If the respondent had a very sudden intense emotional reaction to the questions (eg bursting into tears), the interviewer was instructed to provide supportive comments, such as 'take your time' or some other similar gesture, such as getting a glass of water or some tissues. It is possible that some interviews were concluded after such a reaction, however, other interviews may have continued.

During interviewer training, information was also provided on the types of assistance that could be offered to a respondent who was distressed or needed support. This included encouraging respondents to:
  • contact their General Practitioner (GP);
  • contact specialised mental health services;
  • contact Lifeline; or
  • contact the pre-arranged crisis counselling service (OSA Group).

Where any respondent showed signs of stress or requested mental health support during the interview, they were offered assistance. A brochure containing telephone numbers for state/territory counselling providers, such as Lifeline and Beyond Blue, was offered to all respondents. Additionally, respondents were able to access crisis counselling through the OSA Group, which operates a 24-hour, seven-day-a-week service. The OSA Group was contracted by the ABS specifically for this survey, to provide up to two sessions of private counselling for respondents in need.

In the event of a crisis situation, procedures were in place for interviewers to be able to:
  • seek assistance for themselves; and/or
  • offer assistance to respondents.

Interviewers were instructed to remove themselves from a situation where they felt in danger, or to contact emergency services (000) if they felt anyone was in crisis or danger. Interviewers were also given two emergency telephone numbers, which could be used to access support for the interviewer and the respondent in a crisis situation (eg the respondent was threatening to harm themselves or was crying uncontrollably). The two emergency contacts were trained psychiatrists, who were able to provide advice on how to proceed in a crisis situation and where necessary, facilitate access to services. In the event that the emergency contacts were unavailable, interviewers were also provided with a list of the nearest mental health services and procedures to follow if they felt they needed an urgent assessment.

Refusals and exclusions

In cases where the household spokesperson provided initial demographic and household information, but the selected person then refused to participate in the survey, a follow-up letter was sent. The letter explained the aims and importance of the survey and encouraged participation. Approximately 2,500 such letters were dispatched, but only a handful of the recipients subsequently completed the survey questionnaire.

Persons excluded from the survey through non-contact or refusal were not replaced in the sample. People who were identified as having possible cognitive impairment, through the Mini Mental State Examination (MMSE), were also excluded.


Interviewers

A group of ABS officers were trained in the use of the Composite International Diagnostic Interview (CIDI) by staff from the CIDI Training and Reference Center, University of Michigan. These officers then provided training to ABS interviewers, who were recruited from a pool of trained interviewers with previous experience on ABS household surveys. A comprehensive four-day training program run by the ABS included:
  • field procedures, including occupational health and safety;
  • use of the survey instrument, including question-by-question instructions;
  • practical exercises; and
  • sensitivity awareness training.

All phases of the training emphasised understanding of the survey concepts and definitions, survey questions, interpretation of possible responses and adherence to interview procedures to ensure that a standard approach was used by all interviewers.

Sensitivity awareness training was provided by the OSA Group. The interactive session included:
  • understanding the impacts of mental illness and how people may deal with this;
  • difficult and/or sensitive questions;
  • anticipating responses and/or reactions to the survey questions, both from the respondent and the interviewer;
  • self-management and self care; and
  • available support options, including trained counsellors.

Interviewer workloads

Interviewers were allocated a number of dwellings (a workload) at which to conduct interviews. The size of the workload depended on the geographical area; interviewers living close to their workload area usually had larger workloads. Overall, each workload averaged approximately 25 dwellings per four-week period. This meant that throughout the enumeration period (August to December 2007), interviewers may have completed multiple workloads.

Regular communication between field and office staff was maintained throughout the survey via database systems set up for the survey.


Questionnaire

The questionnaire was administered by experienced ABS interviewers, who had received specific training for the survey. The questionnaire was further supported by detailed interviewer instructions, covering general procedural issues as well as specific instructions relating to individual questions.

The questionnaire is not fully indicative of the range of information available from the survey, as additional items were created in processing the data. For example, ABS classifications were applied to raw data inputs to create labour force status. Additionally, some questions were asked solely for the purpose of enabling or clarifying other questions, and are not available in survey results. For example, height and weight measurements were only collected to enable the output of Body Mass Index (BMI) classifications.

Initial household information was collected from one usually resident household member aged 18 years and over using a Household Form. This was similar in design to the household forms used by other ABS household surveys. From this information, one person in the household aged 16-85 years was randomly selected to complete a personal interview. The personal interview consisted of a number of separate modules, collecting information on demographics, physical conditions, mental disorders, use of health services for mental health problems, medications, social networks and caregiving. A diagram depicting the structure and flow of the survey modules is provided in Appendix 1. For a more detailed list of the questionnaire contents see 'Survey content' in this chapter.


Computer Assisted Interviewing (CAI)

Interviews were conducted using a Computer Assisted Interviewing (CAI) questionnaire. CAI involves the use of a notebook computer to record, store, manipulate and transmit the data collected during interviews. This type of instrument offers important advantages over paper questionnaires, including the following (a brief sketch of two of these capabilities follows the list):
  • the ability to check the responses entered against previous responses, to reduce data entry errors by interviewers. The audit trail recorded in the instrument also provides valuable information about the operation of particular questions, and associated data quality issues.
  • the ability to use complex sequencing to define specific populations for questions, and ensure word substitutes used in the questions are appropriate to each respondent's characteristics and prior responses.
  • the ability to capture data electronically at the point of interview, removing the added cost, logistical, timing and quality issues around the transport, storage and security of paper forms, and the capture of information from paper forms into a computerised format.
  • the ability to deliver data in an electronic semi-processed form compatible with ABS data processing facilities (semi-processed in terms of data validation and some derivations which occur within the instrument itself). While both the input and output data still need to be separately specified to the processing system, input of the data in this form assists in that specification task and reduces the amount and complexity of some later processing tasks.
  • the provision for interviewers to record comments to help explain or clarify certain responses, or provide supplementary information to assist in office coding.
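As a rough sketch of two of the capabilities listed above, word substitution and a point-of-interview consistency edit, the following uses invented placeholder and field names and is not the ABS instrument's actual code:

    def substitute_wording(template, respondent):
        """Word substitution: tailor question wording to earlier responses."""
        # Hypothetical placeholder and field names.
        spouse = "wife" if respondent.get("spouse_sex") == "female" else "husband"
        return template.replace("[SPOUSE]", spouse)

    def consistency_edit(record):
        """Flag a response that is inconsistent with an earlier answer."""
        issues = []
        first = record.get("age_first_symptoms")
        if first is not None and first > record["age"]:
            issues.append("Age when symptoms first occurred exceeds current age - query respondent.")
        return issues

    print(substitute_wording("Does [SPOUSE] currently smoke?", {"spouse_sex": "female"}))
    print(consistency_edit({"age": 40, "age_first_symptoms": 45}))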

The questionnaire employed a number of different approaches to recording information at the interview:
  • questions where responses were classified by interviewers to one or more predetermined response categories. This approach was used for recording answers to more straightforward questions, where logically a limited range of responses was expected, or where the focus of interest was on a particular type or group of response (which were listed in the questionnaire, with the remainder being grouped together under ‘other’);
  • questions asked in the form of a running prompt, ie predetermined response categories read out to the respondent one at a time until the respondent indicated agreement to one or more of the categories (as appropriate to the topic) or until all the predetermined categories were exhausted; and
  • questions asked in association with prompt cards, ie where printed lists of possible answers were handed to the respondent, who was asked to select the most relevant response(s). By listing a set of possible responses (either in the form of a prompt card or a running prompt question), the prompt served to clarify the question or to present various alternatives, to refresh the respondent’s memory and at the same time assist the respondent to select an appropriate response.

To ensure consistency of approach, interviewers were instructed to ask the interview questions as shown in the questionnaire. In certain areas of the questionnaire, interviewers were asked to use indirect and neutral prompts, at their discretion, where the response given was, for example, inappropriate to the question asked or lacked sufficient detail necessary for classification and coding.


Copies of the survey instrument

With the release of this publication, a paper copy of the 2007 National Survey of Mental Health and Wellbeing (SMHWB) instrument has also been made available on the ABS website <www.abs.gov.au>.

The survey is based on the World Mental Health Survey Initiative version of the World Health Organization's (WHO) Composite International Diagnostic Interview (CIDI), version 3.0 (WMH-CIDI 3.0). The paper copy of the survey instrument represents the Computer Assisted Personal Interview (CAPI) version of the CIDI used by the ABS to collect the 2007 SMHWB. For more information on the WMH-CIDI 3.0 please refer to the World Mental Health website <www.hcp.med.harvard.edu/wmhcidi/>.

CIDI training

In order to use any version of the WHO's Composite International Diagnostic Interview (CIDI), training must be obtained through an authorised CIDI Training and Reference Centre. These Centres offer training and support for the use of the CIDI.

Australian CIDI Training and Reference Centre (TRC)

For more information on CIDI training please contact the Department of Rural and Indigenous Health at Monash University.
Paper copies

A paper copy of the survey instrument has been released with this publication and is available from the ABS website <www.abs.gov.au>. The survey instrument is provided as a reference to the 2007 SMHWB and should not be used for administering survey interviews. Please note the above information regarding CIDI training.

Electronic copies

To obtain an electronic copy of the CIDI instrument, please contact the CIDI Training and Reference Centre (details above). A Blaise licence is required to use the electronic version of the survey instrument. Blaise is a software package designed primarily for survey data collection and processing.


DATA PROCESSING

Data capture

Computer-based systems were used to collect and process data from the survey. The survey used computer-assisted interviewing (CAI) for data collection and the ABS Household Survey Facilities (HSF) system to process the survey. The code for the survey instrument was provided by the World Mental Health Survey Initiative and then modified by the ABS in line with requirements for the Australian version of the survey. For more information see 'Survey development' in this chapter.

The use of CAI ensured that respondents were correctly sequenced throughout the questionnaire. Inbuilt edits meant that some issues could be clarified with the respondent at the time of interview. However, there were only a small number of automatic edits to check the consistency of ages provided by the respondent across the various modules of the survey, resulting in some adjustments being applied during output processing. For more information see 'Output processing' in this chapter.

Interviewer workloads were electronically loaded on receipt in the ABS office in each state or territory. Checks were made to ensure the workloads were fully accounted for and that questionnaires for each household and respondent were completed. Problems with the questionnaire identified by interviewers were resolved, where possible by using other information contained in the questionnaire, or by referring to the comments provided by interviewers.

During the processing and validation of the main data file, a small number of errors were also picked up in the diagnostic modules of the instrument. Minor changes were made to question sequencing to amend errors identified in the initial code provided and to correct for the differences between the ICD-10 and DSM-IV classifications. For example, the sequencing of questions in one segment of the PTSD module did not account for the differences in the ICD-10 and DSM-IV diagnostic criteria. This meant that a small number of respondents were excluded from questions on 12-month symptoms (and therefore not diagnosed with a 12-month disorder).


Coding

Computer-assisted coding was performed on responses to questions on family relationships, geography, country of birth, language, education, occupation, industry and pharmaceutical medications.
Classifications

Family relationships
  • Coding is based on household information collected for all persons in each dwelling. All usual residents were grouped into family units and classified according to their relationship within the family.

Geography

Country of birth
  • The survey questionnaire listed the 10 most frequently reported countries. Interviewers were instructed to mark the appropriate box, or if the reported country was not among those listed, to record the name of the country for subsequent coding.
  • All responses for country of birth were classified according to the Standard Australian Classification of Countries (SACC), 1998 (cat. no. 1269.0).

Language
  • The survey questionnaire listed the 10 most frequently reported languages first spoken at home. Interviewers were instructed to mark the appropriate box, or if the reported language was not among those listed, to record the name of the language for subsequent coding.
  • All responses for language spoken were classified according to the Australian Standard Classification of Languages (ASCL), 2005-06 (cat. no. 1267.0).

Education
  • Data were classified according to the relevant ABS standard classification.

Occupation
  • Data were classified according to the relevant ABS standard classification.

Industry
  • Data were classified according to the relevant ABS standard classification.
Pharmaceutical medications

Coding of open-ended questions

The survey contained a number of open-ended questions, for which there were no predetermined responses. These responses were coded manually, either by the ABS or by external experts. For example, the Depression module asked people whether their episodes of symptoms were the result of physical causes, such as illness or injury, or the use of medications, drugs or alcohol. If a person thought their episodes always occurred as the result of physical causes, they were asked to briefly describe what the physical cause was. This information was recorded as stated, using a free-form text field.

Some of the open-ended questions formed part of the assessment to determine whether a respondent met the criteria for diagnosis of a mental disorder. These open-ended questions were designed to probe causes of a particular episode or symptom. Responses were then used to eliminate cases where there was a clear physical cause (ie organic exclusion criteria). As part of the processing procedures set out for the WMH-CIDI 3.0, responses provided to the open-ended questions are required to be interpreted by a suitably qualified person, ie a psychiatrist or a clinical psychologist.

External technical assistance

For this survey, technical assistance for coding the open-ended diagnostic-related questions was provided by the School of Psychiatry at the University of New South Wales (UNSW). In order for the data files to be released to UNSW, a set of procedures ensuring respondent confidentiality were required to be met. These procedures are outlined in the ABS Policy and Legislation (Confidentiality and Disclosure) and include the removal of any information that may aid identification or enable spontaneous recognition. Staff from UNSW were also required to sign an undertaking of confidentiality, which included the appropriate handling of the data and non-disclosure to any third party.

There were 20 open-ended questions that formed part of the diagnosis of particular mental disorders, resulting in approximately 1,900 individual responses to be coded. The text responses to 18 of these questions were designed to probe the causes of a particular episode or symptom. These were then used to eliminate cases where there was a clear physical cause for the episode or symptom. The responses to the other two questions were used to classify the type of fear associated with Social Phobia. Text responses that required office coding related to:
  • an 'other stressful experience' that caused the worst episode of sadness/discouragement/lack of interest;
  • an 'other physical illness or injury' that caused the worst episode of sadness/discouragement/lack of interest;
  • an 'other' reason that caused the worst episode of sadness/discouragement/lack of interest;
  • physical cause/s for an episode of sadness/discouragement/lack of interest lasting nearly every day for two weeks or longer;
  • physical cause/s for episodes when the person was excited and full of energy/irritable or grouchy;
  • physical cause/s for unexpected 'out of the blue' attack/s;
  • physical cause/s for fearful reactions (or avoidance of fearful situations);
  • physical cause/s for worry/anxiety/nervousness;
  • 'other personal problems' related to being worried/anxious/nervous;
  • 'other (social) network problems' related to being worried/anxious/nervous;
  • 'other societal problems' related to being worried/anxious/nervous;
  • (up to three) 'other problems' related to being worried/anxious/nervous;
  • an 'other physical illness or injury' that caused the unpleasant thoughts and/or repeated behaviours;
  • medication/s that caused the unpleasant thoughts and/or repeated behaviours;
  • drug/s that caused the unpleasant thoughts and/or repeated behaviours;
  • an 'other' reason that caused the unpleasant thoughts and/or repeated behaviours;
  • a 'real danger' reason that the person feared most in relation to particular social situations; or
  • an 'other' reason that the person feared most in relation to particular social situations.

Apart from the text responses outlined above, no other information collected during the interview was released. Data to be released were first examined and assessed by the ABS Micro Data Review Panel, to ensure respondent confidentiality. Where responses were considered to be 'at risk' (for example, where the text field included any part of the respondent's name, a reference to a country of birth, or the description of any unusual or one-off events), they were revised.

ABS office coding

Two additional groups of open-ended questions, representing approximately 2,000 individual responses, were also office coded. These questions did not form part of the diagnostic assessment criteria and therefore did not require interpretation by a qualified psychiatrist/clinical psychologist.

The first group of open-ended questions related to health service utilisation. The responses formed part of understanding the reasons why health services were used. The responses for these questions were office coded by staff at the ABS.

An example of office coding related to health service utilisation occurred when people were asked about their most recent admission to hospital for physical health problems. People were asked to nominate a response category from a provided list to best describe the main reason for their admission. People could nominate one or more reasons from the list, or could provide an 'other' reason, which was then specified in a text field. These 'other' reasons were office coded. Text responses may have included:
  • broken limbs;
  • cuts;
  • appendicitis;
  • accidents; or
  • an operation.

The text responses were then allocated a code, which related to a list of categories, including:
  • injury or results of injury;
  • stroke;
  • ears, nose and throat problems;
  • digestive system;
  • infection, poisoning and parasites; or
  • allergy.

Other health service utilisation text responses that required office coding related to:
  • an 'other' reason for the most recent hospital admission for mental health problems;
  • an 'other' source of information about mental illness, its treatment and available services;
  • an 'other' form of help received from a hospitalisation/consultation for mental health care;
  • an 'other' reason for not seeking more help from health professionals;
  • an 'other' reason for not seeking help; and
  • an 'other' reason for not seeking (more) information/medication/counselling/social intervention/skills training.

The second group of open-ended questions related to other non-diagnostic topics, apart from health service utilisation (see above). The responses for these questions did not require specialist coding and were therefore office coded by ABS staff. An example of office coding for these types of questions occurred when people were asked about the method they used to attempt suicide. People were asked to nominate a response category from a provided list, or they could provide an 'other' type of method, which was then specified in a text field. The text responses were allocated a code, which was based on the original list of categories. A simple keyword-based illustration of this kind of office coding follows.
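In this sketch, the keyword lists and category names are simplified from the hospital-admission example above; in practice, coding was performed by trained staff against the full category lists, not by automated keyword matching.

    # Simplified keyword lists based on the hospital-admission example above.
    CODE_KEYWORDS = {
        "injury or results of injury": ["broken", "fracture", "cut", "accident"],
        "digestive system": ["appendicitis", "ulcer", "bowel"],
        "ears, nose and throat problems": ["tonsil", "sinus", "ear"],
    }

    def suggest_code(text):
        """Suggest an output category for an 'other' text response."""
        text = text.lower()
        for code, keywords in CODE_KEYWORDS.items():
            if any(keyword in text for keyword in keywords):
                return code
        return "refer for manual review"

    print(suggest_code("Broken arm from a fall"))  # injury or results of injury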


Diagnostic scoring

The WMH-CIDI 3.0 is a fully structured interview that maps the extent to which a person's symptoms meet the ICD-10 and DSM-IV diagnostic criteria. The CIDI comprises standardised interview questions with probing, coding, training and data analysis procedures. The probe questions are used to determine whether the symptom was ever enough to cause either impairment or professional help seeking, as well as to what degree symptoms may be attributed to either physical conditions or the effect of substances.

The information collected through the CIDI is entered into standard data entry and scoring programmes which report whether the diagnostic criteria are satisfied. Assessment against the diagnostic criteria is performed through computerised algorithms, ensuring a high degree of consistency.

More information on the diagnostic assessment criteria is provided in Chapter 3.

Diagnostic algorithms

A set of algorithms (in SAS code) was provided to the ABS to be used with the WMH-CIDI 3.0. These algorithms were used to determine diagnoses of mental disorders. Extensive validation of the algorithms was undertaken to ensure that any logic and/or coding errors were rectified prior to enumeration, and again prior to output processing. Where an error was identified, it was resolved through consultation with technical experts from the UNSW and the University of Michigan (where the CIDI team is based).

As not all modules contained in the WMH-CIDI 3.0 were operationalised for the 2007 SMHWB, it was also necessary to tailor the diagnostic algorithms to fit the Australian context. For example, a person is excluded from a DSM-IV diagnosis of Agoraphobia where there is a co-occurring Separation Anxiety Disorder. However, Separation Anxiety Disorders were not collected in the 2007 SMHWB which meant that the diagnostic algorithm for Agoraphobia had to be edited accordingly. The SAS code for the different diagnoses also had to be adjusted to meet the requirements of ABS processing systems.
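The Agoraphobia example can be sketched as follows. The field names are illustrative, and the sketch is written in Python for readability although the production algorithms were in SAS:

    def dsm_iv_agoraphobia(record, separation_anxiety_collected=False):
        """Sketch of tailoring a diagnostic algorithm when a module is not collected."""
        core = record["agoraphobic_fear"] and record["avoidance_or_distress"]
        if separation_anxiety_collected:
            # Full algorithm: exclude cases with a co-occurring Separation
            # Anxiety Disorder, as DSM-IV requires.
            return core and not record.get("separation_anxiety", False)
        # 2007 SMHWB tailoring: Separation Anxiety Disorder was not collected,
        # so the exclusion criterion is dropped from the algorithm.
        return core

    print(dsm_iv_agoraphobia({"agoraphobic_fear": True,
                              "avoidance_or_distress": True}))  # True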

For more information on the SAS codes applied to the diagnostic algorithms please contact Dr Tim Slade at the National Drug and Alcohol Research Centre:
      Phone: +61 (0)2 9385 0267
      Fax: +61 (0)2 9385 0222
      Mail: University of New South Wales, Sydney NSW 2052


Output processing

Information from the questionnaire, other than names and addresses, was stored on a computer output file in the form of data items. In some cases, items were formed from answers to individual questions, while in other cases data items were derived from answers to several questions. For example, self-reported height and weight measurements were used to calculate Body Mass Index (BMI), which is the output item.
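For example, the BMI derivation combines two input items into a single output item; the classification cut-off mentioned in the comment is the standard WHO range, not taken from this document:

    def bmi(weight_kg, height_cm):
        """Derive Body Mass Index from self-reported weight and height."""
        height_m = height_cm / 100.0
        return weight_kg / (height_m ** 2)

    # Example: 70 kg at 175 cm gives a BMI of about 22.9, which would then be
    # classified into an output category (eg 18.5-24.9 = 'normal range').
    print(round(bmi(70, 175), 1))  # 22.9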

During processing of the data, checks were performed on records to ensure that specific values lay within valid ranges and that relationships between items were within limits deemed acceptable for the purposes of this survey. These checks were also designed to detect errors which may have occurred during processing and to identify instances which, although not necessarily an error, were sufficiently unusual or close to agreed limits to warrant further examination.

Throughout processing, frequency counts and tables containing cross-classifications of selected data items were produced for checking purposes. The purpose of this analysis was to identify any problems in the input data which had not previously been identified, as well as errors in derivations or other inconsistencies between related items.

Adjustments during processing

With the emphasis of the survey being on 'lifetime' mental disorders, it was expected that there would be some inconsistencies in the information provided, due to the accuracy of respondent recall. As only a limited number of automatic edits were contained in the mental disorder modules, a number of inconsistencies, specifically related to age, were observed in the data. Some of these inconsistencies had flow-on implications for output items which were used to provide estimates of:
  • the age at which a mental disorder first began; and
  • the age at which a person last had symptoms of that disorder.

Resolving these inconsistencies required extensive consultation with technical experts from the University of New South Wales (UNSW).

Adjustments made during output processing were minimised as much as possible, to ensure that the data remained 'as reported'. Some adjustments were made to account for age inconsistencies, but only where the inconsistency would impact on diagnosis or estimates of onset, recency or persistence. The adjustments were generally based on the use of the 'minimum' age reported. For example, if a person reported that they were 21 years old when they first tried a drink, but then reported that their drinking problem started at age 16, the lower of these two ages would be used as the response for both questions (a short sketch of this rule follows the list below). The adjustments are briefly described in the following list:
  • Panic Disorder - a number of age inconsistencies were identified for the age 'first time' and age 'last time' symptoms occurred, which impacted on estimates of onset and recency.
  • Agoraphobia - age inconsistencies were identified in the age 'first time' and age 'last time' symptoms occurred, impacting on estimates of onset.
  • Obsessive-Compulsive Disorder - age inconsistencies were identified in the age 'first' had symptoms and 'first' experienced symptoms for two weeks or longer. Inconsistencies also occurred in the age 'first' had symptoms and the age 'last' had symptoms for two weeks or longer.
  • Post-Traumatic Stress Disorder - age inconsistencies were identified in the age 'first' experienced an event and age when experienced the worst event.
  • Substance Use Disorders - age inconsistencies were identified in the age 'first time' used and age 'first time' had dependence, impacting on estimates of onset.

Adjustments for each of these identified inconsistencies were made on a case-by-case basis to ensure the minimum age was logical and that there would be no flow-on impacts to other variables.
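A short sketch of the 'minimum age' rule, using the first-drink example from the text; as noted above, the actual adjustments were made case by case rather than by a blanket rule:

    def adjust_onset_ages(age_first_use, age_problem_onset):
        """Apply the 'minimum age' rule to an inconsistent pair of age items."""
        if age_problem_onset < age_first_use:
            # The reported onset precedes the reported first use, so both
            # items are set to the lower of the two ages.
            minimum = min(age_first_use, age_problem_onset)
            return minimum, minimum
        return age_first_use, age_problem_onset

    # First drink reported at 21, drinking problem reported from age 16:
    print(adjust_onset_ages(21, 16))  # (16, 16)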

Output file

A multi-level hierarchical data file was produced. The contents of the person and household levels are briefly described below:
  • Person level - contains the majority of information on the respondent, including demographic and socio-economic characteristics (eg age, sex, marital status, education, labour force status, personal income), chronic conditions, functioning, mental disorder, health service use, medications, social networks and caregiving, etc; and
  • Household level - contains information on the household and its members, collected in the household form, including family composition of household, household structure, number of persons/children in household, household income, dwelling tenure type, etc.

Validation checks

The output data file was extensively validated through item-by-item examination of input and output frequencies, checking populations through derivations, internal consistency of items and across levels of the file, data confrontation, etc. Despite these checks, it is possible that some small errors remain undetected on the file.

As a result of the validation processes, some adjustments have been made to data on a record-by-record basis. Changes were made with reference to other information provided by respondents, and only to correct clear errors that were not identified during the survey interview, for example, where age data were inconsistent within a module or across numerous modules of the survey instrument. Adjustments may also have occurred as the result of an edit not being applied or being by-passed, for example, where the response to a question was recorded as 'don't know' and was subsequently answered. In cases where the interviewer did not, or was unable to, return to the original question, the details may have been recorded in a text file.

In general, unless data were 'impossible' they have not been corrected, and results are essentially 'as reported'. To mask 'improbable' responses (eg extremely high alcohol consumption, excessive amounts of time exercised, or extreme income values), some outliers have been reclassified, primarily to 'not stated' values. Some of these adjustments were made record-by-record; for others, a global change was used for all records where reported values lay outside acceptable limits. A minimal sketch of the global treatment follows.
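This sketch assumes a hypothetical special code for 'not stated' and an invented plausibility limit; the actual limits and special codes used in processing are not stated here.

    NOT_STATED = -1  # hypothetical special code for 'not stated'

    def reclassify_outliers(values, lower, upper):
        """Global treatment: reported values outside agreed limits become 'not stated'."""
        return [v if lower <= v <= upper else NOT_STATED for v in values]

    # Example: weekly alcohol consumption in standard drinks, with an invented
    # upper plausibility limit of 140.
    print(reclassify_outliers([4, 20, 500], 0, 140))  # [4, 20, -1]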

Decisions to apply treatments or adjustments to the data were made, as appropriate, by the ABS. Adjustments to data associated with the mental disorder diagnoses were done in consultation with technical experts from the UNSW.

Data confrontation

In the final stages of processing, extensive analyses, including data confrontation, were undertaken to ensure the survey estimates conformed to known or expected patterns, and were broadly consistent with data from the 1997 SMHWB, other ABS data sources and international sources, allowing for methodological and other factors which might impact comparability. A list of the data sources used for comparison is provided in Chapter 10.

Due to the lower than expected survey response rate, the ABS undertook extensive data comparisons and also conducted a purposive small sample Non-Response Follow-Up Study (NRFUS). The NRFUS provided some qualitative analysis on the possible differing characteristics of fully-responding and non-responding persons. More information on non-response is provided in Chapter 10. Comparisons of numerous demographic and socio-economic characteristics indicated that some of the 2007 SMHWB estimates did not align with other ABS estimates. As a result, additional benchmarks were incorporated into the survey's weighting strategy. For more information see 'Weighting, benchmarking and estimation' in this chapter.

Data available from the survey are essentially 'as reported' by respondents. The procedures and checks outlined above were designed primarily to minimise errors occurring during processing. In some cases it was possible to correct errors or inconsistencies in the data which was originally recorded in the interview, through reference to other data in the record; in other cases this was not possible and some errors and inconsistencies may remain on the data file.


RESPONSE RATES AND SAMPLE ACHIEVED

Overview

Ideally, interviews would be conducted with all people selected in the sample. In practice, some level of non-response is likely. Non-response occurs when the person selected for interview is unable or unwilling to participate or where they cannot be contacted during the enumeration of the survey. Unit and item non-response by persons/households selected in the survey can affect both sampling and non-sampling error. The loss of information on persons and/or households (unit non-response) and on particular questions (item non-response) reduces the effective sample and increases sampling error.

Non-response can also introduce non-sampling error by creating a biased sample. The magnitude of any non-response bias depends upon the level of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not within population subgroups as determined by the weighting strategy.
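The dependence of the bias on both factors can be seen in the standard approximation sketched below. The prevalence figures are invented for illustration and are not survey results.

# Approximate bias of a respondent-based estimate: the non-response rate
# multiplied by the respondent/non-respondent difference. Figures invented.
def nonresponse_bias(nonresponse_rate, mean_respondents, mean_nonrespondents):
    return nonresponse_rate * (mean_respondents - mean_nonrespondents)

# Hypothetical prevalence of 20% among respondents and 26% among
# non-respondents, with 40% non-response:
print(nonresponse_bias(0.40, 0.20, 0.26))  # -0.024: ~2.4 percentage points low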


Survey participation

The ABS sought the willing participation of selected households. However, due to the sensitive nature of the survey content, participation was voluntary. Measures taken to encourage participation included:
  • providing information to selected households, initially by letter and a brochure, explaining that their dwelling had been selected for the survey, the purposes of the survey, and the confidentiality of the information collected. The letters gave advance notice that an ABS interviewer would call, and provided an ABS contact number for more information.
  • stressing the importance of participation by selected households, by explaining that each household selected represented a number of others similar in size, composition, location, occupation, lifestyle and health, and that the cooperation of those selected was therefore important to ensure all households/persons were properly represented in the survey and properly reflected in survey results.
  • stressing the importance of the survey itself, which measures the mental health and wellbeing of Australians and therefore helps in planning and providing support for those groups at risk.
  • stressing the confidentiality of all information collected. The confidentiality of data is guaranteed by the Census and Statistics Act 1905. Under the provisions of this Act the ABS is prevented from releasing any identifiable information about individuals or households to any person, organisation or government authority.

Through call-backs and follow-up at selected households, every effort was made to contact the occupants in order to conduct the survey. Interviewers made several call-backs, at different times of the day to increase the chance of contact, before a household was classified as a 'non-contact'. If the person selected for the survey was absent when the interviewer called, arrangements were made to return at a later time, and interviewers made return visits as necessary to complete the survey with the selected person. In some cases, the selected person could not be contacted or interviewed, and these cases were also classified as 'non-contacts'.

People who refused to participate were sent a follow-up letter emphasising the importance of their participation in the survey. This measure resulted in a small number of refusals being converted to interviews. There were also instances where a person was willing to answer some, but not all, of the questions asked, or did not know the answer to a particular question. The survey instrument was programmed to accept 'don't know' responses, as well as refusals on sensitive topics, such as income.


Response rates

There were 17,352 private dwellings initially selected for the survey. This sample was expected to deliver the desired fully-responding sample, based on an expected response rate of 75% and an allowance for sample loss. The sample was reduced to 14,805 dwellings due to the loss of households with no residents in scope for the survey and of dwellings that proved to be vacant, under construction or derelict. Of the eligible dwellings selected, there were 8,841 fully-responding households, representing a 60% response rate at the national level. The table below provides the proportion of fully-responding households for each state/territory:

1. RESPONSE RATES, by state or territory

                                              NSW   Vic   Qld   SA    WA    Tas   NT    ACT   Aust.
                                               %     %     %     %     %     %     %     %     %

Proportion of fully-responding households     59    55    64    60    63    77    55    71    60
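As a simple cross-check of the national figure in the table above, the response rate is the number of fully-responding households divided by the number of eligible dwellings:

# National response rate: fully-responding households / eligible dwellings.
fully_responding = 8841
eligible_dwellings = 14805
print(round(100 * fully_responding / eligible_dwellings))  # 60 (59.7%)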



Non-response

Of the dwellings selected with usual residents in scope of the survey, 40% did not respond fully. Reflecting the sensitive nature of the survey content, the average expected interview length (around 90 minutes) and the voluntary nature of the survey, 61% of these dwellings were full refusals. A further 27% provided household details, but the selected person did not then complete the main questionnaire. The remainder (12%) provided partial or incomplete information or were classified as full non-contacts. As the level of non-response for this survey was significant, extensive non-response analyses were undertaken to assess the reliability of the survey estimates.
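Applying the published proportions to the eligible sample gives the approximate composition of non-response. The counts below are derived for illustration; they are not published figures.

# Approximate composition of the non-responding dwellings, derived from
# the published proportions. Counts are illustrative only.
eligible = 14805
non_responding = eligible - 8841                        # 5,964 dwellings (~40%)
full_refusals = round(0.61 * non_responding)            # ~3,638
household_form_only = round(0.27 * non_responding)      # ~1,610
partial_or_non_contact = round(0.12 * non_responding)   # ~716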

The non-response analyses included extensive comparisons of the 2007 SMHWB to other data sources, as well as a purposive, small-sample, short-form Non-Response Follow-Up Study (NRFUS). The NRFUS was developed for use with non-respondents in Sydney and Perth and was conducted from January to February 2008. It achieved a response rate of 40%, yielding information on 151 non-respondents. It used a short-form questionnaire containing demographic questions and the Kessler Psychological Distress Scale (K10); the short-form approach precluded the use of the full diagnostic assessment modules, so the K10 was included as a brief proxy for the mental health questions. The aim of the NRFUS was to provide a qualitative assessment of the likelihood of non-response bias. Given the small size and purposive nature of the NRFUS, the results were not explicitly incorporated into the survey's weighting strategy.

Categorisation of interviewer remarks from the survey and the NRFUS indicated that the majority of persons who refused to participate stated that they were 'too busy' or 'not interested' in the survey.

For more information on non-response and the analyses undertaken see Chapter 10.


COMPARISON WITH THE 1997 SURVEY

The sample sizes differed between the two surveys. In 2007, there were 8,841 fully-responding households, compared with approximately 10,600 in 1997, even though the 2007 survey approached a larger initial sample (17,352 dwellings, compared with 15,500 in 1997). The response rate also fell, from 78% in 1997 to 60% in 2007. These differences, as well as those outlined throughout this publication, should be considered when comparing results. For the 1997 survey results see Mental Health and Wellbeing: Profile of Adults, Australia, 1997 (cat. no. 4326.0) available from the ABS website <www.abs.gov.au>.


SURVEY METHODOLOGY

Scope and coverage

The scope of the survey is people aged 16-85 years, who were usual residents of private dwellings in Australia, excluding very remote areas. Private dwellings are houses, flats, home units and any other structures used as private places of residence at the time of the survey. People usually resident in non-private dwellings, such as hotels, motels, hostels, hospitals, nursing homes, and short-stay caravan parks were not in scope. Usual residents are those who usually live in a particular dwelling and regard it as their own or main home.

Scope inclusions:
  • Members of Australian permanent defence forces; and
  • Overseas visitors who have been working or studying in Australia for 12 months or more prior to the survey interview, or intend to do so.

Scope exclusions:
  • Non-Australian diplomats, non-Australian diplomatic staff and non-Australian members of their household;
  • Members of non-Australian defence forces stationed in Australia and their dependents;
  • Overseas visitors (except for those mentioned in 'scope inclusions'); and
  • Households where all residents were aged less than 16 years or more than 85 years.

Proxy and foreign language interviews were not conducted. Therefore, people who were unable to answer for themselves were not included in the survey coverage, but are represented in statistical outputs through inclusion in population benchmarks used for weighting.

The projected Australian adult resident population aged 16 years and over, as at 31 October 2007 (excluding people living in non-private dwellings and very remote areas of Australia) was 16,213,900, of which 16,015,300 were aged 16-85 years.

For this survey, the population benchmarks were projections of the quarterly Estimated Resident Population (ERP) data, released 30 June 2007. For information on the methodology used to produce the ERP see Australian Demographic Statistics (cat. no. 3101.0). To create the population benchmarks for the 2007 SMHWB reference period, the most recently released quarterly ERP estimates were projected forward two quarters past the period for which they were required. The projection was based on the historical pattern of each population component - births, deaths, interstate migration and overseas migration. By projecting two quarters past that needed for the current population benchmarks, demographic changes are smoothed in, thereby making them less noticeable in the population benchmarks.
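A highly simplified sketch of this projection step is given below, assuming a constant average historical change for each population component. All figures are invented, and the actual ABS projection method is more detailed.

# Simplified ERP projection: carry the base population forward by the
# average historical quarterly change of each component. Figures invented.
historical_quarterly_change = {
    "births": 70000,
    "deaths": -35000,
    "net_overseas_migration": 55000,
    "net_interstate_migration": 0,  # nets to zero at the national level
}

def project_erp(base_erp, quarters):
    net_change = sum(historical_quarterly_change.values())
    return base_erp + quarters * net_change

# Project a hypothetical 30 June 2007 base forward two quarters.
print(project_erp(16000000, quarters=2))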


Sample design

The survey was designed to provide reliable estimates at the national level. The survey was not designed to provide state/territory level data; however, some data may be available (on request) for states with larger populations, eg New South Wales. Users should exercise caution when using estimates at this level due to high sampling errors. Relative Standard Errors (RSEs) for all estimates provided in the National Survey of Mental Health and Wellbeing: Summary of Results, 2007 (cat. no. 4326.0) were released in spreadsheet format as an attachment to that publication. The summary publication and spreadsheets are available from the ABS website <www.abs.gov.au>.

Dwellings included in the survey in each state and territory were selected at random using a stratified, multistage area sample. This sample included only private dwellings from the geographic areas covered by the survey. The sample was allocated to states and territories roughly in proportion to their respective population sizes. The expected number of fully-responding households was 11,000. The sample allocations for each state/territory appear in the following table:

2. SAMPLE ALLOCATIONS, by state or territory

                                              NSW    Vic    Qld    SA     WA     Tas    NT     ACT    Aust.
                                              no.    no.    no.    no.    no.    no.    no.    no.    no.

Expected number of fully-responding
households                                    3 679  2 785  2 102  845    1 072  265    76     176    11 000


The multistage sample of dwellings was based on the framework of the Population Survey Master Sample. The appropriate fraction of the Monthly Population Survey (MPS) parallel blocks was selected in each state/territory in order to achieve the above sample allocations (based on an assumed response rate of 75% and a sample loss rate of approximately 14%).
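Working backwards from the expected fully-responding sample illustrates the selection arithmetic, using the rates quoted above:

# Initial dwellings needed to yield about 11,000 fully-responding
# households, given a 75% response rate and ~14% sample loss.
target_households = 11000
response_rate = 0.75
sample_loss = 0.14
initial_dwellings = target_households / (response_rate * (1 - sample_loss))
print(round(initial_dwellings))  # ~17,054; broadly consistent with the
                                 # 17,352 dwellings actually selected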

Overlap with the MPS is avoided because the Population Survey Master Sample assigns a block for the MPS and a separate block for Special Social Surveys within each selected Collection District (CD). Overlap with other Special Social Surveys is avoided by rotating to the next cluster of dwellings in the 2007 SMHWB selected parallel blocks.

To improve the reliability of estimates for younger (16-24 years) and older (65-85 years) persons, these age groups were given a higher chance of selection in the household person selection process. That is, household members in the younger or older age groups were more likely to be selected for interview than other household members.

As noted under 'Response rates and sample achieved' above, 17,352 private dwellings were initially selected for the survey, a sample expected to deliver the desired fully-responding sample based on an expected response rate of 75% and an allowance for sample loss. After the loss of households with no residents in scope and of dwellings that proved to be vacant, under construction or derelict, 14,805 eligible dwellings remained. Of these, 8,841 households fully responded, representing a 60% response rate at the national level. Interviews took, on average, around 90 minutes to complete.

Some survey respondents provided most of the required information, but were unable or unwilling to provide a response to certain data items. The records for these persons were retained in the sample and the missing values were recorded as 'don't know' or 'not stated'. No attempt was made to deduce or impute these missing values.

Due to the lower than expected response rate, the ABS undertook extensive non-response analyses as part of the validation and estimation process, including a Non-Response Follow-Up Study (NRFUS). Further information is provided in Chapter 10.


Weighting, benchmarking and estimation

Weighting

Weighting is the process of adjusting results from a sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each sample unit corresponding to the level at which population statistics are produced, eg household or person level. The weight can be considered an indication of how many population units are represented by the sample unit. For the 2007 SMHWB, separate person and household weights were developed.

Selection weights

The first step in calculating weights for each person or household is to assign an initial weight, which is equal to the inverse of the probability of being selected in the survey. In the sample design, the sample was allocated in proportion to the distribution of each state's population across:
  • capital city (all state and territory capital city statistical divisions);
  • other urban (all centres with a total population of 1,000 or more people, except capital cities); and
  • remainder of state (rural areas and towns with a total population of less than 1,000 people).

For this survey, due to the length of the interview, only one in-scope person was selected per household. Therefore, the initial person weight was derived from the initial household weight according to the total number of in-scope persons in the household and the differential probability of selection by age used to obtain more younger (16-24 years) and older (65-85 years) people in the sample.
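A minimal sketch of this selection-weight logic appears below. The dwelling selection probability and the age 'boost' factor are hypothetical; the actual probabilities depended on the sample design.

# Initial household weight is the inverse of the dwelling's selection
# probability; the initial person weight scales it by the inverse of the
# person's within-household selection probability. Values are hypothetical.
def initial_household_weight(p_dwelling):
    return 1.0 / p_dwelling

def person_selection_prob(in_scope_ages, selected_age, boost=2.0):
    # Hypothetical scheme: persons aged 16-24 or 65-85 have `boost` times
    # the selection chance of other in-scope household members.
    def factor(age):
        return boost if 16 <= age <= 24 or 65 <= age <= 85 else 1.0
    return factor(selected_age) / sum(factor(a) for a in in_scope_ages)

hh_weight = initial_household_weight(p_dwelling=1 / 900)
person_weight = hh_weight / person_selection_prob([20, 45, 50], selected_age=20)
print(hh_weight, person_weight)  # 900.0 1800.0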

In addition to the 8,841 fully-responding households, basic information was obtained from the survey's household form for a further 1,476 households and their occupants. This information was provided by a household member aged 18 years or over; in these households, the selected person did not complete the main questionnaire (eg they were unavailable or refused to participate). The information provided by these 1,476 households was analysed to determine whether an adjustment to initial selection weights could be made as a means of correcting for non-response. However, no explicit adjustment was made, due to the negligible impact on survey estimates.

Benchmarking

The person and household weights were separately calibrated to independent estimates of the population of interest, referred to as 'benchmarks'. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distributions of the population rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over- or under-enumeration of particular categories which may occur due to either the random nature of sampling or non-response. This process can reduce the sampling error of estimates and may reduce the level of non-response bias.

A standard approach in ABS household surveys is to calibrate to population benchmarks by state, part of state, age and sex.

'State' consists of the six states and two territories:
  • New South Wales (NSW);
  • Victoria (Vic);
  • Queensland (Qld);
  • South Australia (SA);
  • Western Australia (WA);
  • Tasmania (Tas);
  • Northern Territory (NT); and
  • Australian Capital Territory (ACT).

'Part of state' is divided into two classifications:
  • Capital city - if the respondent lived in a capital city; and
  • Balance of state - if the respondent lived elsewhere in the state.

Due to the small populations of the NT and ACT, these were not divided into parts. The NT was classified as 'Balance of state' and the ACT as 'Capital city'.

The 'age' of respondents was categorised into seven age groups, defined as:
  • 16-24 years;
  • 25-34 years;
  • 35-44 years;
  • 45-54 years;
  • 55-64 years;
  • 65-74 years; and
  • 75-85 years.

'Sex' consists of two categories:
  • male; and
  • female.

The effectiveness of benchmarking in 'correcting' for potential non-response bias rests on the assumption that, within the weighting classes determined by the benchmarking strategy, the characteristics being measured by the survey are similar for the responding and non-responding populations. Where this assumption does not hold, biased estimates may result.
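The simplest form of such an adjustment is post-stratification, sketched below: within each weighting class, initial weights are scaled so that they sum to the independent population benchmark. The class label and figures are invented, and the actual weighting calibrated to several benchmarks simultaneously.

# Post-stratification sketch: scale the initial weights in each weighting
# class so they sum to that class's population benchmark. Figures invented.
def poststratify(weights_by_class, benchmarks):
    adjusted = {}
    for cls, weights in weights_by_class.items():
        factor = benchmarks[cls] / sum(weights)
        adjusted[cls] = [w * factor for w in weights]
    return adjusted

weights = {"NSW/capital city/16-24/male": [900.0, 950.0]}
benchmarks = {"NSW/capital city/16-24/male": 3700.0}
print(poststratify(weights, benchmarks))  # each weight scaled by 2.0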

Given the relatively low response rate for the 2007 SMHWB, extensive analysis was done to ascertain whether further benchmark variables, in addition to geography, age, and sex, should be incorporated into the weighting strategy. Analysis showed that the standard weighting approach did not adequately compensate for differential undercoverage in the survey sample for variables such as educational attainment, household composition, and labour force status, when compared to other ABS surveys and the 2006 Census of Population and Housing. As these variables were considered to have possible association with mental health characteristics, additional benchmarks were incorporated into the weighting strategy.

Initial person weights were simultaneously calibrated to the following population benchmarks:
  • state by part of state by age by sex;
  • state by household composition;
  • state by educational attainment; and
  • state by labour force status.

The state by part of state by age by sex benchmarks were obtained from demographic projections of the resident population aged 16-85 years who were living in private dwellings, excluding very remote areas of Australia, at 31 October 2007. The projection was based on the Census of Population and Housing, using the 30 June 2007 Estimated Resident Population as the latest available base. Therefore, the 2007 SMHWB estimates do not (and are not intended to) match estimates for the total Australian resident population (which include persons and households living in non-private dwellings, such as hotels and boarding houses, and in very remote parts of Australia) obtained from other sources.

The remaining benchmarks were obtained from other ABS survey data. These benchmarks are considered 'pseudo-benchmarks', as they are not demographic counts and have a non-negligible level of sampling error associated with them. The Survey of Education and Work, Australia, 2007 (persons aged 16-64 years) provided the pseudo-benchmark for educational attainment. The monthly Labour Force Survey (September to December 2007) provided the pseudo-benchmark for labour force status, as well as the resident population living in households by household composition. The pseudo-benchmarks were aligned to the projected resident population aged 16-64 years or 16-85 years, depending on the characteristic (eg education, household composition), who were living in private dwellings in each state and territory, excluding very remote areas of Australia, at 31 October 2007. The household composition pseudo-benchmark was also aligned to the projected counts of households. The sampling error associated with these pseudo-benchmarks was incorporated into the standard error estimation.

Household weights were derived by separately calibrating initial household selection weights to the projected household composition population counts of households containing persons aged 16-85 years, who were living in private dwellings in each state and territory, excluding very remote areas of Australia, at 31 October 2007.

Estimation

Estimation is a mathematical technique for producing information about a population based on a sample of units from that population. Estimates from the survey were derived using a combination of:
  • information collected in the survey (ie responses); and
  • the response propensity of selected sample units, using independently available information about the underlying populations.
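Once final weights are assigned, an estimated population total for a characteristic is the sum of the final weights of respondents reporting it, as in the minimal sketch below (weights and values invented):

# Estimation sketch: a population total is the weighted sum over
# respondents with the characteristic of interest. Figures invented.
def estimate_total(records, has_characteristic):
    return sum(r["weight"] for r in records if has_characteristic(r))

records = [
    {"weight": 1800.0, "disorder_12m": True},
    {"weight": 2100.0, "disorder_12m": False},
    {"weight": 1500.0, "disorder_12m": True},
]
print(estimate_total(records, lambda r: r["disorder_12m"]))  # 3300.0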

The survey's population estimates broadly reflect the population counts for:
  • age;
  • sex;
  • state/territory;
  • part of state (Capital city/balance of state);
  • educational attainment;
  • labour force status; and
  • household composition.

The majority of estimates contained in the National Survey of Mental Health and Wellbeing: Summary of Results, 2007 (cat. no. 4326.0) are based on benchmarked person weights. The survey also contains some household estimates based on benchmarked household level weights.


COMPARISON WITH THE 1997 SURVEY

The following table highlights the main differences between the 2007 and 1997 surveys. More detailed information on the 1997 survey design is provided in the National Survey of Mental Health and Wellbeing of Adults: Users' Guide, 1997 (cat. no. 4327.0).

3. COMPARISON WITH THE 1997 SURVEY, by survey design

                     1997 survey                                  2007 survey

Scope
                     Persons aged 18 years and over               Persons aged 16-85 years
                     Usual residents of private dwellings in      Usual residents of private dwellings in
                     urban and rural areas of Australia           urban and rural areas of Australia

Sample design/size
                     One randomly selected person per household   One randomly selected person per household
                     -                                            Higher chance of selection for people aged
                                                                  16-24 years and 65-85 years
                     Final sample = 10,641                        Final sample = 8,841
                     Response rate = 78%                          Response rate = 60%

Enumeration period
                     May - August 1997                            August - December 2007

Main output units
                     Persons                                      Persons
                     Households                                   Households