2. SURVEY DESIGN
The survey was designed to collect information on selected lifetime and 12-month mental disorders. Basic demographic information about each person in the selected household was collected from one usually resident household member aged 18 years or over. The information collected through the survey's Household Form included:
Based on the demographic information, one person in the household aged 16-85 years was then randomly selected to complete a personal interview. People aged 16-24 years and 65-85 years had a higher chance of selection. For more information see 'Sample design' in this chapter. The selected person, or an elected household spokesperson, also answered some financial and housing items on behalf of other household members, including:
The survey included the following topics:
MENTAL DISORDERS AND CONDITIONS (CHAPTER 3)
Mental disorders (diagnostic component)
The survey collected information on selected mental disorders, which are considered to have the highest rates of prevalence in the population and which are able to be identified in an interviewer-based survey. The mental disorders include:
A brief overview of the differences between the 1997 and 2007 diagnostic assessment criteria is also provided in Chapter 4.
PHYSICAL HEALTH (CHAPTER 5)
Hypochondriasis and somatisation
Brief questions to determine possible hypochondriasis and/or somatisation.
Health risk factors
OTHER SCALES AND MEASURES (CHAPTER 6)
The Kessler Psychological Distress Scale (K10)
Self-assessed health rating
Mini Mental State Examination
SOCIAL NETWORKS AND CAREGIVING (CHAPTER 7)
HEALTH SERVICE UTILISATION (CHAPTER 8)
Consultations for specific mental disorders
Services used for mental health problems
Perceived need for help
POPULATION CHARACTERISTICS (CHAPTER 9)
Selected person characteristics
More information about the survey contents is available in the chapters indicated above and in the Glossary. A detailed list of data items from the survey (in spreadsheet format) has been released with the Technical Manual on the ABS website <www.abs.gov.au>. Appendix 1 also contains a diagram of the structure and flow of the survey.
COMPOSITE INTERNATIONAL DIAGNOSTIC INTERVIEW (CIDI)
Measuring mental health in the community through household surveys is a complex task, as mental disorders are usually determined through detailed clinical assessment. The 2007 National Survey of Mental Health and Wellbeing was based on a widely-used international survey instrument, developed by the World Health Organization (WHO) for use by participants in the World Mental Health Survey Initiative.
The survey used the World Mental Health Survey Initiative version of the World Health Organization's Composite International Diagnostic Interview, version 3.0 (WMH-CIDI 3.0). The WMH-CIDI 3.0 is a comprehensive interview for adults which can be used to assess the lifetime, 12-month and 30-day prevalence of selected mental disorders through the measurement of symptoms and their impact on day-to-day activities. The WMH-CIDI 3.0 was chosen because it:
The WMH-CIDI 3.0 provides an assessment of mental disorders based on the definitions and criteria of two classification systems:
Each classification system lists sets of criteria that are necessary for diagnosis. The criteria specify the nature and number of symptoms required; the level of distress or impairment required; and the exclusion of cases where symptoms can be directly attributed to general medical conditions, such as a physical injury, or to substances, such as alcohol.
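The shape of such a criterion set can be sketched as follows. This is a minimal illustration only, not the actual DSM-IV or ICD-10 algorithm; the thresholds and flags are hypothetical:

```python
def meets_criteria(n_symptoms, distress_level, organic_cause, substance_cause,
                   min_symptoms=4, min_distress=2):
    """Illustrative structure of a diagnostic criterion check: a minimum
    symptom count, a minimum distress/impairment level, and exclusion of
    cases attributable to medical conditions or substances. All thresholds
    here are hypothetical, not the published DSM-IV/ICD-10 values."""
    # Organic exclusion criteria: symptoms directly attributable to a
    # general medical condition or to substances do not count towards a diagnosis
    if organic_cause or substance_cause:
        return False
    # Nature/number of symptoms, plus required level of distress or impairment
    return n_symptoms >= min_symptoms and distress_level >= min_distress
```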
Diagnostic algorithms are specified in accordance with the DSM-IV and ICD-10 classification systems. As not all modules contained in the WMH-CIDI 3.0 were operationalised for the 2007 SMHWB, it was necessary to tailor the diagnostic algorithms to fit the Australian context. More information on the WMH-CIDI 3.0 diagnostic assessment criteria according to the ICD-10 and DSM-IV is provided in Chapter 3.
A screener was introduced to the WMH-CIDI 3.0 to try to alleviate the effects of learned responses. The module included a series of introductory questions about the respondent's general health, followed by diagnostic screening questions for the primary disorders assessed in the survey, eg depressive episode. This screening method has been shown to increase the accuracy of diagnostic assessments, by reducing the effects of learned responses due to respondent fatigue. Other disorders, such as Obsessive-Compulsive Disorder (OCD), were screened at the beginning of the individual module.
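The gating role of the screener can be sketched as follows. The module names are hypothetical labels, not the actual CIDI item identifiers:

```python
def modules_to_administer(screener, always_entered=("ocd",)):
    """Sketch of the screener's gating role: the screening questions at
    the start of the interview decide which of the primary diagnostic
    modules are administered in full, while some modules (eg OCD) carry
    their own screening questions and are always entered. Module names
    here are hypothetical."""
    gated = [module for module, endorsed in screener.items() if endorsed]
    return gated + list(always_entered)
```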
The WMH-CIDI 3.0 was also used to collect information on:
More information on the WMH-CIDI 3.0 is available from the World Mental Health website <www.hcp.med.harvard.edu/wmh/>.
Adapting content for the Australian context
Most of the survey was based on the international survey modules; however, some modules were tailored to fit the Australian context. The adapted modules were designed in consultation with subject matter experts.
A Survey Reference Group, comprising experts and key stakeholders in the field of mental health, provided the ABS with advice on survey content, including the most appropriate topics for collection, and associated concepts and definitions. They also provided advice on issues that arose during field tests and the most suitable survey outputs. Group members included representatives from government departments, academic institutions, health research organisations, carers organisations and consumer groups.
Staff from the University of New South Wales (UNSW) were also contracted by the Australian Government Department of Health and Ageing (DoHA) to provide technical support for the survey. Changes to the survey were intentionally restricted so as not to alter the underlying definitions or concepts being assessed, and to maintain comparability of the survey data between countries.
Other considerations in the questionnaire design were:
Where possible, adapted modules used existing ABS questions. For example, demographic items, labour force status, education, disability status and days out of role are all based on standard ABS questions. Some parts of the survey were changed more extensively than others: minor changes may have been made to question wording to reflect commonly understood Australian terms, or whole modules may have been changed. For example, when discussing payment for health services, 'paid in full by state-funding' was amended to provide examples of state-funded health services, such as 'public hospital outpatient, public community health service or public community mental health service'. In contrast, the Chronic Conditions module used the format of the CIDI questions, but the content was changed to collect information on physical conditions that are National Health Priority Areas for Australia (eg asthma, cancer, diabetes). The set of physical conditions chosen for this survey may therefore vary from the physical conditions collected in other countries' surveys. Other modules that were either tailored to fit the Australian context or contain substantial changes include:
As with all ABS surveys, extensive testing was conducted to ensure that the survey would collect objective and high quality data.
Pre-testing covers a range of testing techniques, the common feature being that the testing is conducted before a survey is taken into the field (ie before the 'field test'). This phase of testing is critical for identifying problems for both respondents and interviewers, particularly regarding question content. The techniques used are designed to identify problems with the part of the instrument being tested and to give some indication of the source of error.
The following techniques were available during the pre-testing phase of questionnaire development:
A major advantage of pre-testing is that small samples are used, and limited survey-specific documentation and training are required, as the testing is performed by people working on the survey. Consequently, the process can allow several iterative tests of a set of questions in the time it would take to conduct a single pilot test.
The broad objectives of the 2007 SMHWB pre-testing were to assess:
Cognitive interviews are semi-structured interviews in which the interviewer asks a person about their interpretation of questions and formulation of answers. Two rounds of cognitive interviewing were conducted from June to July 2006. One round was conducted in Canberra and one in Sydney, with 15 and 12 participants respectively. There were some changes to questions between tests in Canberra and Sydney, to assess the flow and understanding of some concepts.
Cognitive testing assists with the assessment of the effectiveness of a survey instrument. The process yields a large amount of information, often identifying unanticipated issues in the instrument. Cognitive testing generally focuses on respondent error, which is only one aspect of non-sampling error. To understand the potential for non-sampling error, cognitive testing is usually done in conjunction with peer review and field testing.
To ensure that concepts were understood by potential respondents, cognitive interviews were conducted for several topics, including:
The first three topics in the above list relate to modules in the survey instrument. Not all questions in each of these modules were tested: where questions were used in the same context, or with the same wording, testing was not repeated (for example, when the same question was asked about different health professionals in the Health Service Utilisation module). There were also a number of questions that could not be modified, as they were part of the CIDI.
A number of recommendations were made as a result of the cognitive testing, including:
There were also recommendations that particular questions, which had posed some difficulty in the cognitive testing, be monitored in the Pilot Test.
The next phase of survey development involved field testing the survey questionnaire and procedures.
The Pilot Test is the first field test of the entire question set. Testing is designed to:
A Pilot Test was conducted in Brisbane in November 2006 and consisted of approximately 250 households in both urban and rural areas. The interviews were conducted by 10 trained ABS interviewers.
The Dress Rehearsal is the final test in the development cycle and mainly focuses on the procedural and timing aspects of the survey. Primarily, it is an operational test. Questionnaire design errors (eg sequencing errors) can be identified, investigated and corrected. Objectives of the Dress Rehearsal are to:
The Dress Rehearsal was conducted in Perth and Sydney from April to May 2007 and consisted of approximately 250 households.
The final enumeration of the survey was conducted from August to December 2007.
The survey was designed to provide national estimates that can be compared internationally, rather than to provide comparisons with the 1997 survey. More information on the differences between the 1997 and 2007 surveys is provided in Chapter 10 and throughout each of the chapters describing mental disorders and conditions, physical health, and other scales and measures.
The survey was conducted under the authority of the Census and Statistics Act 1905. The ABS sought the willing cooperation of households and due to the sensitive nature of the survey contents, participation was voluntary. The confidentiality of all information provided by respondents is guaranteed. Under this legislation, the ABS cannot release identifiable information about households or individuals. All aspects of the survey's implementation were designed to conform to the Information Privacy Principles set out in the Privacy Act 1988, and the Privacy Commissioner was informed of the details of the proposed survey.
Trained ABS interviewers conducted personal interviews at selected private dwellings from August to December 2007. Interviews were conducted using a Computer-Assisted Interview (CAI) questionnaire. CAI involves the use of a notebook computer to record, store, manipulate and transmit the data collected during interviews.
Selected households were initially sent a Primary Approach Letter (PAL) by mail to inform the household of their selection in the survey and to advise that an interviewer would call to arrange a suitable time to conduct an interview. A brochure, providing some background to the survey, information concerning the interview process, and a guarantee of confidentiality was included with the letter. For a small number of households where the ABS did not have an adequate postal address, this was not possible.
On the first face-to-face contact with the household by an interviewer, general characteristics of the household were obtained from one person in the household aged 18 years or over. This information included basic demographic characteristics of all usual residents of the dwelling (eg age and sex) and the relationships between household members (eg spouse, son, daughter, not related). This person, or an elected household spokesperson, also answered some financial and housing items, such as income and tenure, on behalf of other household members.
From the information provided by the household spokesperson, the survey instrument identified those persons in scope of the survey and randomly selected one person aged 16-85 years to be included in the survey. A personal interview was conducted with this randomly selected person.
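The random selection step can be sketched as follows. This is a minimal sketch only: the actual selection weights used in the 2007 SMHWB are not reproduced here, and the two-fold boost for the over-sampled age groups is a hypothetical value for illustration:

```python
import random

def select_respondent(ages, rng=None, boost=2.0):
    """Randomly select one in-scope household member aged 16-85.

    Members aged 16-24 or 65-85 are given a higher selection weight,
    mirroring the survey's higher chance of selection for younger and
    older adults. The `boost` factor is hypothetical, not the actual
    survey weight."""
    rng = rng or random.Random()
    in_scope = [a for a in ages if 16 <= a <= 85]
    if not in_scope:
        return None  # no household member in scope
    weights = [boost if (16 <= a <= 24 or 65 <= a <= 85) else 1.0
               for a in in_scope]
    return rng.choices(in_scope, weights=weights, k=1)[0]
```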
In order to conduct a personal interview with the selected person (ie the respondent), interviewers made appointments to call-back to the household, as necessary. In some cases appointments for call-backs were made by telephone, however, all interviews were conducted face-to-face. Due to the sensitive nature of the survey questions, it was suggested that interviews be conducted in private. However, interviews may have been conducted in private or in the presence of other household members, according to the wishes of the respondent. Interviews, including the household assessment, took on average 90 minutes per fully-responding household.
Proxy, interpreted or foreign language interviews were not conducted. This decision ensured that the survey could be conducted by lay interviewers using a standard format. It also ensured that survey questions were asked exactly as described and the onus of interpretation was on the respondent, rather than being influenced by a third party. Additionally, the sensitive nature of the survey questions made them unsuitable for use with proxies or interpreters. Given the complexity of the survey, the concepts involved and the additional costs, translation of the survey into foreign languages was not considered viable.
ABS interviewers received training which assisted them to monitor respondent reactions to the survey interview, for example, changes in voice (eg becoming quieter), facial complexion (eg draining of colour), body tension (eg shaking hands) or extended pauses. If these signs were observed, the interviewer was instructed to stop the interview and check that the respondent was okay.
If the respondent had a very sudden intense emotional reaction to the questions (eg bursting into tears), the interviewer was instructed to provide supportive comments, such as 'take your time' or some other similar gesture, such as getting a glass of water or some tissues. It is possible that some interviews were concluded after such a reaction, however, other interviews may have continued.
During interviewer training, information was also provided on the types of assistance that could be offered to a respondent who was distressed or needed support. This included encouraging respondents to:
Where any respondent showed signs of stress or requested mental health support during the interview, they were offered assistance. A brochure containing telephone numbers for state/territory counselling providers, such as Lifeline and Beyond Blue, was offered to all respondents. Additionally, respondents were able to access crisis counselling through the OSA Group, which operates a 24-hour, seven day a week service. The OSA Group was contracted by the ABS specifically for this survey, to provide up to two sessions of private counselling for respondents in need.
In the event of a crisis situation, procedures were in place for interviewers to be able to:
Interviewers were instructed to remove themselves from a situation where they felt in danger, or to contact emergency services (000) if they felt anyone was in crisis or danger. Interviewers were also given two emergency telephone numbers, which could be used to access support for the interviewer and the respondent in a crisis situation (eg the respondent was threatening to harm themselves or was crying uncontrollably). The two emergency contacts were trained psychiatrists, who were able to provide advice on how to proceed in a crisis situation and where necessary, facilitate access to services. In the event that the emergency contacts were unavailable, interviewers were also provided with a list of the nearest mental health services and procedures to follow if they felt they needed an urgent assessment.
Refusals and exclusions
In cases where the household spokesperson provided initial demographic and household information, but the selected person then refused to participate in the survey, a follow-up letter was sent. The letter explained the aims and importance of the survey and encouraged participation. Approximately 2,500 letters were dispatched, but only a handful of recipients subsequently completed the survey questionnaire.
Persons excluded from the survey through non-contact or refusal were not replaced in the sample. People who were identified as having possible cognitive impairment, through the Mini Mental State Examination (MMSE), were also excluded.
A group of ABS officers were trained in the use of the Composite International Diagnostic Interview (CIDI) by staff from the CIDI Training and Reference Center, University of Michigan. These officers then provided training to ABS interviewers, who were recruited from a pool of trained interviewers with previous experience on ABS household surveys. A comprehensive four-day training program run by the ABS included:
All phases of the training emphasised understanding of the survey concepts and definitions, survey questions, interpretation of possible responses and adherence to interview procedures to ensure that a standard approach was used by all interviewers.
Sensitivity awareness training was provided by the OSA Group. The interactive session included:
Interviewers were allocated a number of dwellings (a workload) at which to conduct interviews. The size of the workload was dependent upon the geographical area. Interviewers living close to their workload area usually had larger workloads. Overall, each workload averaged approximately 25 dwellings per four week period. This meant that throughout the enumeration period (August to December 2007), interviewers may have completed multiple workloads.
Regular communication between field and office staff was maintained throughout the survey via database systems set up for the survey.
The questionnaire was administered by experienced ABS interviewers, who had received specific training for the survey. The questionnaire was further supported by detailed interviewer instructions, covering general procedural issues as well as specific instructions relating to individual questions.
The questionnaire is not fully indicative of the range of information available from the survey, as additional items were created in processing the data. For example, ABS classifications were applied to raw data inputs to create labour force status. Additionally, some questions were asked solely for the purpose of enabling or clarifying other questions, and are not available in survey results. For example, height and weight measurements were only collected to enable the output of Body Mass Index (BMI) classifications.
Initial household information was collected from one usually resident household member aged 18 years and over using a Household Form. This was similar in design to the household forms used by other ABS household surveys. From this information, one person in the household aged 16-85 years was randomly selected to complete a personal interview. The personal interview consisted of a number of separate modules, collecting information on demographics, physical conditions, mental disorders, use of health services for mental health problems, medications, social networks and caregiving. A diagram depicting the structure and flow of the survey modules is provided in Appendix 1. For a more detailed list of the questionnaire contents see 'Survey content' in this chapter.
Computer Assisted Interviewing (CAI)
Interviews were conducted using a Computer Assisted Interviewing (CAI) questionnaire. CAI involves the use of a notebook computer to record, store, manipulate and transmit the data collected during interviews. This type of instrument offers important advantages over paper questionnaires, including:
The questionnaire employed a number of different approaches to recording information at the interview:
To ensure consistency of approach, interviewers were instructed to ask the interview questions as shown in the questionnaire. In certain areas of the questionnaire, interviewers were asked to use indirect and neutral prompts, at their discretion, where the response given was, for example, inappropriate to the question asked or lacked sufficient detail necessary for classification and coding.
Copies of the survey instrument
With the release of this publication, a paper copy of the 2007 National Survey of Mental Health and Wellbeing (SMHWB) instrument has also been made available on the ABS website <www.abs.gov.au>.
The survey is based on the World Mental Health Survey Initiative version of the World Health Organization's (WHO) Composite International Diagnostic Interview (CIDI), version 3.0 (WMH-CIDI 3.0). The paper copy of the survey instrument represents the Computer Assisted Personal Interview (CAPI) version of the CIDI used by the ABS to collect the 2007 SMHWB. For more information on the WMH-CIDI 3.0 please refer to the World Mental Health website <www.hcp.med.harvard.edu/wmhcidi/>.
In order to use any version of the WHO's Composite International Diagnostic Interview (CIDI), training must be obtained through an authorised CIDI Training and Reference Centre. These centres offer training and support for the use of the CIDI.
Australian CIDI Training and Reference Centre (TRC)
For more information on CIDI training please contact the Department of Rural and Indigenous Health at Monash University:
A paper copy of the survey instrument has been released with this publication and is available from the ABS website <www.abs.gov.au>. The survey instrument is provided as a reference to the 2007 SMHWB and should not be used for administering survey interviews. Please note the above information regarding CIDI training.
To obtain an electronic copy of the CIDI instrument, please contact the CIDI Training and Reference Centre (details above). A Blaise licence is required to use the electronic version of the survey instrument. Blaise is a software package designed primarily for survey data collection and processing.
Computer-based systems were used to collect and process data from the survey. The survey used computer-assisted interviewing (CAI) for data collection and the ABS Household Survey Facilities (HSF) system to process the survey. The code for the survey instrument was provided by the World Mental Health Survey Initiative and then modified by the ABS in line with requirements for the Australian version of the survey. For more information see 'Survey development' in this chapter.
The use of CAI ensured that respondents were correctly sequenced through the questionnaire. Inbuilt edits meant that some issues could be clarified with the respondent at the time of interview. However, only a small number of automatic edits checked the consistency of ages provided by the respondent across the various modules of the survey, and as a result some adjustments had to be applied during output processing. For more information see 'Output processing' in this chapter.
Interviewer workloads were electronically loaded on receipt in the ABS office in each state or territory. Checks were made to ensure the workloads were fully accounted for and that questionnaires for each household and respondent were completed. Problems with the questionnaire identified by interviewers were resolved, where possible by using other information contained in the questionnaire, or by referring to the comments provided by interviewers.
During the processing and validation of the main data file, a small number of errors were also picked up in the diagnostic modules of the instrument. Minor changes were made to question sequencing to amend errors identified in the initial code provided and to correct for the differences between the ICD-10 and DSM-IV classifications. For example, the sequencing of questions in one segment of the PTSD module did not account for the differences in the ICD-10 and DSM-IV diagnostic criteria. This meant that a small number of respondents were excluded from questions on 12-month symptoms (and therefore not diagnosed with a 12-month disorder).
Computer-assisted coding was performed on responses to questions on:
Country of birth
Data were classified according to the:
Coding of open-ended questions
The survey contained a number of open-ended questions, for which there were no predetermined responses. These responses were coded manually, either by the ABS or externally. For example, the Depression module asked people whether their episodes of symptoms were the result of physical causes, such as illness or injury, or of the use of medications, drugs or alcohol. If a person thought their episodes always occurred as the result of physical causes, they were asked to briefly describe what the physical cause was. This information was recorded as stated, using a free-form text field.
Some of the open-ended questions formed part of the assessment to determine whether a respondent met the criteria for diagnosis of a mental disorder. These open-ended questions were designed to probe causes of a particular episode or symptom. Responses were then used to eliminate cases where there was a clear physical cause (ie organic exclusion criteria). As part of the processing procedures set out for the WMH-CIDI 3.0, responses provided to the open-ended questions are required to be interpreted by a suitably qualified person, ie a psychiatrist or a clinical psychologist.
External technical assistance
For this survey, technical assistance for coding the open-ended diagnostic-related questions was provided by the School of Psychiatry at the University of New South Wales (UNSW). In order for the data files to be released to UNSW, a set of procedures ensuring respondent confidentiality had to be met. These procedures are outlined in the ABS Policy and Legislation (Confidentiality and Disclosure) and include the removal of any information that may aid identification or enable spontaneous recognition. Staff from UNSW were also required to sign an undertaking of confidentiality, which included the appropriate handling of the data and non-disclosure to any third party.
There were 20 open-ended questions that formed part of the diagnosis of particular mental disorders, resulting in approximately 1,900 individual responses to be coded. The text responses to 18 of these questions were designed to probe the causes of a particular episode or symptom. These were then used to eliminate cases where there was a clear physical cause for the episode or symptom. The responses to the other two questions were used to classify the type of fear associated with Social Phobia. Text responses that required office coding related to:
Apart from the text responses outlined above, no other information collected during the interview was released. Data to be released were first examined and assessed by the ABS Micro Data Review Panel, to ensure respondent confidentiality. Where responses were considered to be 'at risk' (for example, where the text field included any part of the respondent's name, a reference to a country of birth, or the description of any unusual or one-off events), they were revised.
ABS office coding
Two additional groups of open-ended questions, representing approximately 2,000 individual responses, were also office coded. These questions did not form part of the diagnostic assessment criteria and therefore did not require interpretation by a qualified psychiatrist/clinical psychologist.
The first group of open-ended questions related to health service utilisation. The responses formed part of understanding the reasons why health services were used. The responses for these questions were office coded by staff at the ABS.
An example of office coding related to health service utilisation occurred when people were asked about their most recent admission to hospital for physical health problems. People were asked to nominate a response category from a provided list to best describe the main reason for their admission. People could nominate a reason (or reasons) from the list, or could provide an 'other' reason, which was then specified in a text field. These 'other' reasons were office coded. Text responses may have included:
The text responses were then allocated a code, which related to a list of categories, including:
Other health service utilisation text responses that required office coding related to:
The second group of open-ended questions related to other non-diagnostic questions, apart from health service utilisation (see above). The responses for these questions did not require specialist coding and were therefore office coded by ABS staff. An example of office coding related to these types of questions occurred when people were asked about the method they used to attempt suicide. People were asked to nominate a response category from a provided list, or they could provide an 'other' type of method, which was then specified in a text field. The text responses were allocated a code, which was based on the original list of categories.
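The mechanics of this kind of office coding can be sketched as mapping a free-text 'other' response back onto the original category list. The keywords and codes below are hypothetical; ABS coders worked from the survey's actual category lists rather than an automated rule:

```python
def code_other_response(text, keyword_map, other_code=99):
    """Minimal sketch of office coding an 'other - please specify'
    response: the free-text answer is matched against keywords drawn
    from the original category list and allocated the corresponding
    code. Keywords, codes and the residual `other_code` are hypothetical."""
    lowered = text.lower()
    for keyword, code in keyword_map.items():
        if keyword in lowered:
            return code
    # No match: response remains in the residual 'other' category
    return other_code
```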
The WMH-CIDI 3.0 is a fully structured interview that maps the extent to which a person's symptoms meet the ICD-10 and DSM-IV diagnostic criteria. The CIDI comprises standardised interview questions with probing, coding, training and data analysis procedures. The probe questions are used to determine whether the symptom was ever enough to cause either impairment or professional help seeking, as well as to what degree symptoms may be attributed to either physical conditions or the effect of substances.
The information collected through the CIDI is entered into standard data entry and scoring programmes which report whether the diagnostic criteria are satisfied. Assessment against the diagnostic criteria is performed through computerised algorithms, ensuring a high degree of consistency.
More information on the diagnostic assessment criteria is provided in Chapter 3.
A set of algorithms (in SAS code) was provided to the ABS to be used with the WMH-CIDI 3.0. These algorithms were used to determine diagnoses of mental disorders. They were extensively validated to ensure that any logic and/or coding errors were rectified prior to enumeration, and again prior to output processing. Where an error was identified, it was resolved through consultation with technical experts from the UNSW and the University of Michigan (where the CIDI team is based).
As not all modules contained in the WMH-CIDI 3.0 were operationalised for the 2007 SMHWB, it was also necessary to tailor the diagnostic algorithms to fit the Australian context. For example, a person is excluded from a DSM-IV diagnosis of Agoraphobia where there is a co-occurring Separation Anxiety Disorder. However, Separation Anxiety Disorders were not collected in the 2007 SMHWB which meant that the diagnostic algorithm for Agoraphobia had to be edited accordingly. The SAS code for the different diagnoses also had to be adjusted to meet the requirements of ABS processing systems.
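The kind of tailoring described above can be sketched as follows (in Python rather than the original SAS, and with the criteria reduced to hypothetical flags):

```python
def dsm_agoraphobia(criteria_met, separation_anxiety=None):
    """Sketch of the Agoraphobia tailoring described above. The full
    WMH-CIDI 3.0 algorithm excludes a DSM-IV Agoraphobia diagnosis where
    Separation Anxiety Disorder co-occurs; because that disorder was not
    collected in the 2007 SMHWB, the exclusion clause had to be removed.
    The boolean flags are hypothetical simplifications of the real criteria."""
    if separation_anxiety is None:
        # 2007 SMHWB: Separation Anxiety not collected, so the
        # exclusion clause is dropped from the algorithm
        return criteria_met
    # Original algorithm: exclude where Separation Anxiety co-occurs
    return criteria_met and not separation_anxiety
```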
For more information on the SAS codes applied to the diagnostic algorithms please contact Dr Tim Slade at the National Drug and Alcohol Research Centre:
Fax: +61 (0)2 9385 0222
Mail: University of New South Wales, Sydney NSW 2052
Information from the questionnaire, other than names and addresses, was stored on a computer output file in the form of data items. In some cases, items were formed from answers to individual questions, while in other cases data items were derived from answers to several questions. For example, self-reported height and weight measurements were used to calculate Body Mass Index (BMI), which is the output item.
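The BMI example can be sketched as follows. This is a minimal illustration of deriving one output item from several input questions, using the standard BMI formula (weight in kilograms divided by the square of height in metres); it is not the ABS derivation code, and the rounding shown is an assumption.

```python
# Sketch of a derived output item: Body Mass Index from self-reported
# height and weight. BMI = weight (kg) / height (m) squared.

def bmi(weight_kg: float, height_cm: float) -> float:
    height_m = height_cm / 100
    return round(weight_kg / height_m ** 2, 1)

print(bmi(70, 175))  # 22.9
```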
During processing of the data, checks were performed on records to ensure that specific values lay within valid ranges and that relationships between items were within limits deemed acceptable for the purposes of this survey. These checks were also designed to detect errors which may have occurred during processing and to identify instances which, although not necessarily an error, were sufficiently unusual or close to agreed limits to warrant further examination.
Throughout processing, frequency counts and tables containing cross-classifications of selected data items were produced for checking purposes. The purpose of this analysis was to identify any problems in the input data which had not previously been identified, as well as errors in derivations or other inconsistencies between related items.
Adjustments during processing
Because the emphasis of the survey was on 'lifetime' mental disorders, some inconsistencies in the information provided were expected, given the limits of respondent recall. As only a limited number of automatic edits were contained in the mental disorder modules, a number of inconsistencies within the data, specifically related to age, were observed. Some of these inconsistencies had flow-on implications for output items used to provide estimates of:
Resolving these inconsistencies required extensive consultation with technical experts from the University of New South Wales (UNSW).
Adjustments made during output processing were minimised as much as possible, to ensure that the data remained 'as reported'. Some adjustments were made to account for age inconsistencies, but only where the inconsistency would impact on diagnosis or estimates of onset, recency or persistence. The adjustments were generally based on the use of the 'minimum' age reported. For example, if a person reported they were 21 (years) when they first tried a drink, but then said that their drinking problem started at age 16, the lower of these two ages would be used as the response for both questions. The adjustments are briefly described in the following list:
Adjustments for each of these identified inconsistencies were made on a case-by-case basis to ensure the minimum age was logical and that there would be no flow-on impacts to other variables.
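The 'minimum age' rule described above can be sketched as follows. This is an illustrative simplification, not the ABS processing code; in practice each case was reviewed individually, as noted above.

```python
# Hedged sketch of the 'minimum age' adjustment: where two reported ages
# are logically inconsistent (eg a drinking problem reported as starting
# before the first drink), both items are set to the lower reported age.

def reconcile_ages(age_first_drink: int, age_problem_onset: int):
    """Return a consistent (first_drink, problem_onset) pair using the
    minimum reported age where the ordering is impossible."""
    if age_problem_onset < age_first_drink:
        minimum = min(age_first_drink, age_problem_onset)
        return minimum, minimum
    return age_first_drink, age_problem_onset

print(reconcile_ages(21, 16))  # (16, 16)
```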
A multi-level hierarchical data file was produced. The contents of the person and household levels are briefly described below:
The output data file was extensively validated through item-by-item examination of input and output frequencies, checking populations through derivations, internal consistency of items and across levels of the file, data confrontation, etc. Despite these checks, it is possible that some small errors remain undetected on the file.
As a result of the validation processes, some adjustments were made to data on a record-by-record basis. Changes were made with reference to other information provided by respondents, and only to correct clear errors that were not identified during the survey interview; for example, where age data were inconsistent within a module or across several modules of the survey instrument. Adjustments may also have occurred as the result of an edit not being applied or being by-passed, for example, where the response to a question was recorded as 'don't know' and was subsequently answered. In cases where the interviewer did not or was unable to return to the original question, the details may have been recorded in a text field.
In general, unless data were 'impossible' they have not been corrected, and results are essentially 'as reported'. To treat 'improbable' responses (eg extremely high alcohol consumption, excessive time spent exercising, income), some outliers have been reclassified, primarily to 'not stated' values. Some of these adjustments were made record-by-record; for others, a global change was applied to all records where reported values lay outside acceptable limits.
Decisions to apply treatments or adjustments to the data were made, as appropriate, by the ABS. Adjustments to data associated with the mental disorder diagnoses were done in consultation with technical experts from the UNSW.
In the final stages of processing, extensive analyses, including data confrontation, were undertaken to ensure the survey estimates conformed to known or expected patterns, and were broadly consistent with data from the 1997 SMHWB, other ABS data sources and international sources, allowing for methodological and other factors which might affect comparability. A list of the data sources used for comparison is provided in Chapter 10.
Due to the lower than expected survey response rate, the ABS undertook extensive data comparisons and also conducted a purposive small sample Non-Response Follow-Up Study (NRFUS). The NRFUS provided some qualitative analysis on the possible differing characteristics of fully-responding and non-responding persons. More information on non-response is provided in Chapter 10. Comparisons of numerous demographic and socio-economic characteristics indicated that some of the 2007 SMHWB estimates did not align with other ABS estimates. As a result, additional benchmarks were incorporated into the survey's weighting strategy. For more information see 'Weighting, benchmarking and estimation' in this chapter.
Data available from the survey are essentially 'as reported' by respondents. The procedures and checks outlined above were designed primarily to minimise errors occurring during processing. In some cases it was possible to correct errors or inconsistencies in the data which was originally recorded in the interview, through reference to other data in the record; in other cases this was not possible and some errors and inconsistencies may remain on the data file.
RESPONSE RATES AND SAMPLE ACHIEVED
Ideally, interviews would be conducted with all people selected in the sample. In practice, some level of non-response is likely. Non-response occurs when the person selected for interview is unable or unwilling to participate or where they cannot be contacted during the enumeration of the survey. Unit and item non-response by persons/households selected in the survey can affect both sampling and non-sampling error. The loss of information on persons and/or households (unit non-response) and on particular questions (item non-response) reduces the effective sample and increases sampling error.
Non-response can also introduce non-sampling error by creating a biased sample. The magnitude of any non-response bias depends upon the level of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not within population subgroups as determined by the weighting strategy.
The ABS sought the willing participation of selected households. However, due to the sensitive nature of the survey content, participation was voluntary. Measures taken to encourage participation included:
Through call-backs and follow-up at selected households, every effort was made to contact the occupants in order to conduct the survey. Interviewers made several call-backs, at different times of day to increase the chance of contact, before a household was classified as a 'non-contact'. If the person selected for the survey was absent when the interviewer called, arrangements were made to return at a later time, and interviewers made return visits as necessary to complete the survey with the selected person. Where the selected person could not be contacted or interviewed, the case was classified as a 'non-contact'.
People who refused to participate were sent a follow-up letter, emphasising the importance of their participation in the survey. This measure resulted in a small number of refusals being converted. Instances occurred where a person was willing to answer some, but not all, of the questions asked, or did not know an answer to a particular question. The survey instrument was programmed to accept 'don't know' responses, as well as refusals on sensitive topics, such as income.
There were 17,352 private dwellings initially selected for the survey. This sample was expected to deliver the desired fully-responding sample, based on an expected response rate of 75% and sample loss. The sample was reduced to 14,805 dwellings due to the loss of households with no residents in scope for the survey and where dwellings proved to be vacant, under construction or derelict. Of the eligible dwellings selected, there were 8,841 fully-responding households, representing a 60% response rate at the national level. The table below provides the proportion of fully-responding households for each state/territory:
Of the dwellings selected with usual residents in scope of the survey, 40% did not respond fully. Reflecting the sensitivity of the survey content, the expected interview length (around 90 minutes) and the voluntary nature of the survey, the majority (61%) of these dwellings were full refusals. More than a quarter (27%) provided household details, but the selected person did not complete the main questionnaire. The remainder (12%) provided partial or incomplete information or were classified as full non-contacts. As the level of non-response for this survey was significant, extensive non-response analyses were undertaken to assess the reliability of the survey estimates.
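The sample figures quoted above fit together as follows; this short calculation simply reproduces the arithmetic behind the published rates.

```python
# Worked arithmetic for the sample figures quoted in the text.
initial_sample = 17_352      # private dwellings initially selected
eligible_dwellings = 14_805  # after removing out-of-scope/vacant dwellings
fully_responding = 8_841     # fully-responding households

response_rate = fully_responding / eligible_dwellings
non_responding = eligible_dwellings - fully_responding

print(f"{response_rate:.0%}")  # 60%
print(non_responding)          # 5964 eligible dwellings not fully responding
print(f"{non_responding / eligible_dwellings:.0%}")  # 40%
```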
The non-response analyses included extensive comparisons of the 2007 SMHWB to other data sources, as well as a purposive small sample/short-form Non-Response Follow-Up Study (NRFUS). The NRFUS was developed for use with non-respondents in Sydney and Perth and was conducted from January to February 2008. It achieved a response rate of 40%, yielding information on 151 non-respondents. It used a short-form questionnaire containing demographic questions and the Kessler Psychological Distress Scale (K10); the short-form approach precluded use of the full diagnostic assessment modules, so the K10 was included as a brief proxy for the mental health questions. The aim of the NRFUS was to provide a qualitative assessment of the likelihood of non-response bias. Given the small size and purposive nature of the NRFUS, the results were not explicitly incorporated into the survey's weighting strategy.
Categorisation of interviewer remarks from the survey and the NRFUS indicated that the majority of persons who refused to participate stated that they were 'too busy' or 'not interested' in the survey.
For more information on non-response and the analyses undertaken see Chapter 10.
COMPARISON WITH THE 1997 SURVEY
The sample sizes differed between the two surveys. In 2007, there were 8,841 fully-responding households, compared with approximately 10,600 in 1997, even though the 2007 survey approached a larger initial sample (17,352 dwellings, compared with 15,500 in 1997). The response rate was also lower: 60% in 2007 compared with 78% in 1997. These differences, as well as those outlined throughout this publication, should be considered when comparing results. For the 1997 survey results see Mental Health and Wellbeing: Profile of Adults, Australia, 1997 (cat. no. 4326.0) available from the ABS website <www.abs.gov.au>.
Scope and coverage
The scope of the survey is people aged 16-85 years, who were usual residents of private dwellings in Australia, excluding very remote areas. Private dwellings are houses, flats, home units and any other structures used as private places of residence at the time of the survey. People usually resident in non-private dwellings, such as hotels, motels, hostels, hospitals, nursing homes, and short-stay caravan parks were not in scope. Usual residents are those who usually live in a particular dwelling and regard it as their own or main home.
Proxy and foreign language interviews were not conducted. Therefore, people who were unable to answer for themselves were not included in the survey coverage, but are represented in statistical outputs through inclusion in population benchmarks used for weighting.
The projected Australian adult resident population aged 16 years and over, as at 31 October 2007 (excluding people living in non-private dwellings and very remote areas of Australia) was 16,213,900, of which, 16,015,300 were aged 16-85 years.
For this survey, the population benchmarks were projections of the quarterly Estimated Resident Population (ERP) data, released 30 June 2007. For information on the methodology used to produce the ERP see Australian Demographic Statistics (cat. no. 3101.0). To create the population benchmarks for the 2007 SMHWB reference period, the most recently released quarterly ERP estimates were projected forward two quarters past the period for which they were required. The projection was based on the historical pattern of each population component - births, deaths, interstate migration and overseas migration. By projecting two quarters past that needed for the current population benchmarks, demographic changes are smoothed in, thereby making them less noticeable in the population benchmarks.
The survey was designed to provide reliable estimates at the national level. The survey was not designed to provide state/territory level data, however, some data may be available (on request) for states with larger populations, eg New South Wales. Users should exercise caution when using estimates at this level due to high sampling errors. Relative Standard Errors (RSEs) for all estimates provided in the National Survey of Mental Health and Wellbeing: Summary of Results, 2007 (cat. no. 4326.0) were released in spreadsheet format as an attachment to that publication. The summary publication and spreadsheets are available from the ABS website <www.abs.gov.au>.
Dwellings included in the survey in each state and territory were selected at random using a stratified, multistage area sample. This sample included only private dwellings from the geographic areas covered by the survey. The sample was allocated to states and territories roughly in proportion to their respective population sizes. The expected number of fully-responding households was 11,000. The sample allocations for each state/territory appear in the following table:
The multistage sample of dwellings was based on the framework of the Population Survey Master Sample. The appropriate fraction of the Monthly Population Survey (MPS) parallel blocks was selected in each state/territory in order to achieve the above sample allocations (based on an assumed response rate of 75% and a sample loss rate of approximately 14%).
Overlap with the MPS is avoided because the Population Survey Master Sample assigns a block for the MPS and a separate block for Special Social Surveys within each selected Collection District (CD). Overlap with other Special Social Surveys is avoided by rotating to the next cluster of dwellings in the 2007 SMHWB selected parallel blocks.
To improve the reliability of estimates for younger (16-24 years) and older (65-85 years) persons, these age groups were given a higher chance of selection in the household person selection process. That is, household members in the younger or older age groups were more likely to be selected for interview than other household members.
As described in 'Response rates and sample achieved' above, of the 17,352 private dwellings initially selected, 14,805 remained after sample loss, and 8,841 households fully responded, representing a 60% response rate at the national level. Interviews took, on average, around 90 minutes to complete.
Some survey respondents provided most of the required information, but were unable or unwilling to provide a response to certain data items. The records for these persons were retained in the sample and the missing values were recorded as 'don't know' or 'not stated'. No attempt was made to deduce or impute for these missing values.
Due to the lower than expected response rate, the ABS undertook extensive non-response analyses as part of the validation and estimation process, including a Non-Response Follow-Up Study (NRFUS). Further information is provided in Chapter 10.
Weighting, benchmarking and estimation
Weighting is the process of adjusting results from a sample survey to infer results for the total in-scope population. To do this, a 'weight' is allocated to each sample unit corresponding to the level at which population statistics are produced, eg household or person level. The weight can be considered an indication of how many population units are represented by the sample unit. For the 2007 SMHWB, separate person and household weights were developed.
The first step in calculating weights for each person or household is to assign an initial weight, which is equal to the inverse of the probability of being selected in the survey. In the sample design, the sample size was allocated in proportion to the population size of each state by:
For this survey, due to the length of the interview, only one in-scope person was selected per household. Therefore, the initial person weight was derived from the initial household weight according to the total number of in-scope persons in the household and the differential probability of selection by age used to obtain more younger (16-24 years) and older (65-85 years) people in the sample.
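The relationship between the household and person weights can be sketched as follows. This is an illustrative simplification, not the ABS production weighting system, and the age-group selection factors are invented stand-ins for the differential probabilities of selection by age described above.

```python
# Illustrative sketch: the initial person weight is the household weight
# divided by the selected person's within-household selection probability.
# The age-group factors below are hypothetical, standing in for the higher
# chance of selection given to 16-24 and 65-85 year olds.

AGE_FACTOR = {"16-24": 2.0, "25-64": 1.0, "65-85": 2.0}  # assumed values

def initial_person_weight(household_weight: float, members: list):
    """members: list of age-group labels for in-scope household members.
    Returns {member index: initial person weight}."""
    total = sum(AGE_FACTOR[a] for a in members)
    # Selection probability for member i is AGE_FACTOR[a_i] / total,
    # so the person weight is household_weight / that probability.
    return {
        i: household_weight * total / AGE_FACTOR[a]
        for i, a in enumerate(members)
    }

# A household with three in-scope persons and an initial weight of 500:
weights = initial_person_weight(500, ["16-24", "25-64", "25-64"])
print(weights)
```

Note that the more selectable 16-24 year old receives a smaller weight (1000) than the 25-64 year olds (2000 each), reflecting the inverse-probability principle.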
In addition to the 8,841 fully-responding households, basic information was obtained from the survey's household form for a further 1,476 households and their occupants, provided by a household member aged 18 years or over. In these households, the selected person did not complete the main questionnaire (eg they were unavailable or refused to participate). This information was analysed to determine whether an adjustment to initial selection weights could be made to correct for non-response; however, no explicit adjustment was made, due to the negligible impact on survey estimates.
The person and household weights were separately calibrated to independent estimates of the population of interest, referred to as 'benchmarks'. Weights calibrated against population benchmarks ensure that the survey estimates conform to the independently estimated distributions of the population rather than to the distribution within the sample itself. Calibration to population benchmarks helps to compensate for over- or under-enumeration of particular categories which may occur due to either the random nature of sampling or non-response. This process can reduce the sampling error of estimates and may reduce the level of non-response bias.
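The effect of calibration can be shown with a minimal post-stratification sketch over a single benchmark variable. The real SMHWB calibration was simultaneous across several benchmarks (see below); the figures and categories here are invented for illustration.

```python
# Minimal post-stratification sketch: scale each unit's weight so that
# weighted totals within each benchmark category match independent
# population benchmarks. Figures are invented for illustration.

def calibrate(records, benchmarks):
    """records: list of (weight, category) pairs.
    benchmarks: {category: independent population estimate}.
    Returns records with weights scaled to hit the benchmarks."""
    totals = {}
    for w, cat in records:
        totals[cat] = totals.get(cat, 0.0) + w
    return [(w * benchmarks[cat] / totals[cat], cat) for w, cat in records]

sample = [(100.0, "male"), (100.0, "male"), (100.0, "female")]
calibrated = calibrate(sample, {"male": 300.0, "female": 200.0})
print(calibrated)  # male weights scaled to 150.0 each, female to 200.0
```

Over-represented categories (here, males) have their weights scaled down, and under-represented categories scaled up, so the weighted sample reproduces the benchmark distribution rather than the sample's own.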
A standard approach in ABS household surveys is to calibrate to population benchmarks by state, part of state, age and sex.
'State' consists of the six states and two territories:
'Part of state' is divided into two classifications:
Due to the small populations of the NT and ACT, these were not divided into parts. The NT was classified as 'Balance of state' and the ACT as 'Capital city'.
The 'age' of respondents was categorised into seven age groups, defined as:
'Sex' consists of two categories:
The effectiveness of 'correcting' for potential non-response bias rests on the assumption that, within the weighting classes determined by the benchmarking strategy, the characteristics measured by the survey are similar for the responding and non-responding populations. Where this assumption does not hold, biased estimates may result.
Given the relatively low response rate for the 2007 SMHWB, extensive analysis was done to ascertain whether further benchmark variables, in addition to geography, age, and sex, should be incorporated into the weighting strategy. Analysis showed that the standard weighting approach did not adequately compensate for differential undercoverage in the survey sample for variables such as educational attainment, household composition, and labour force status, when compared to other ABS surveys and the 2006 Census of Population and Housing. As these variables were considered to have possible association with mental health characteristics, additional benchmarks were incorporated into the weighting strategy.
Initial person weights were simultaneously calibrated to the following population benchmarks:
The state by part of state by age and sex benchmarks were obtained from demographic projections of the resident population aged 16-85 years who were living in private dwellings, excluding very remote areas of Australia, at 31 October 2007. The projections were based on the Census of Population and Housing, using the 30 June 2007 Estimated Resident Population as the latest available base. Therefore, the 2007 SMHWB estimates do not (and are not intended to) match estimates for the total Australian resident population (which include persons and households living in non-private dwellings, such as hotels and boarding houses, and in very remote parts of Australia) obtained from other sources.
The remaining benchmarks were obtained from other ABS survey data. These benchmarks are considered 'pseudo-benchmarks' as they are not demographic counts and they have a non-negligible level of sample error associated with them. The Survey of Education and Work, Australia, 2007 (persons aged 16-64 years) was used to provide a pseudo-benchmark for educational attainment. The monthly Labour Force Survey (September to December 2007) provided the pseudo-benchmark for labour force status, as well as the resident population living in households by household composition. The pseudo-benchmarks were aligned to the projected resident population aged 16-64 years or 16-85 years depending on the characteristic (eg education, household composition), who were living in private dwellings in each state and territory, excluding very remote areas of Australia, at 31 October 2007. The pseudo-benchmark of household composition was also aligned to the projected household composition population counts of households. The sample error associated with these pseudo-benchmarks was incorporated into the standard error estimation.
Household weights were derived by separately calibrating initial household selection weights to the projected household composition population counts of households containing persons aged 16-85 years, who were living in private dwellings in each state and territory, excluding very remote areas of Australia, at 31 October 2007.
Estimation is a mathematical technique for producing information about a population based on a sample of units from that population. Estimates from the survey were derived using a combination of:
The survey's population estimates broadly reflect the population counts for:
The majority of estimates contained in the National Survey of Mental Health and Wellbeing: Summary of Results, 2007 (cat. no. 4326.0) are based on benchmarked person weights. The survey also contains some household estimates based on benchmarked household level weights.
COMPARISON WITH THE 1997 SURVEY
The following table highlights the main differences between the 2007 and 1997 surveys. More detailed information on the 1997 survey design is provided in the National Survey of Mental Health and Wellbeing of Adults: Users' Guide, 1997 (cat. no. 4327.0).