1530.0 - ABS Forms Design Standards Manual, 2010  
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 25/01/2010  First Issue

Computer Assisted Interviewing (CAI) Interface Design Standards

While aspects of these standards will be of interest to those outside the ABS, they were developed for internal use. As such, some information contained in these standards will not be applicable to an external audience. ABS staff should refer to the Corporate Manuals database for the most recent version of these documents, as some details (names, phone numbers etc.) have been removed from the online version.



These Computer Assisted Interviewing (CAI) standards are intended to guide the development of templates and instruments for (a) data collection for business and household surveys, and (b) instruments for telephone intensive follow-up (IFU) for business surveys. For ABS business surveys, Computer Assisted Telephone Interviewing (CATI) is commonly used, while for household surveys, Computer Assisted Personal Interviewing (CAPI) is most prevalent. In this document, the term "CAI" covers both CATI and CAPI instruments.

The standards have been developed based on international research, ABS recommendations for CAI screen design for household interviewing, and results of usability testing for several ABS business survey CATIs. The current standards are applicable to a Blaise environment; however, the intention is to guide best practice independent of the technology used.

The graphics used in these standards are illustrative only and do not necessarily reflect the ABS standard question wording. The graphics are designed to be viewed on screen; the larger illustrations may not display properly when printed.

For consistency and convenience, these standards will refer to the user of the interface as "the interviewer" even though when using the CAI interface for IFU or editing purposes, an actual interview may not take place.
It is strongly recommended that usability testing be conducted to evaluate the following ABS CAI instruments:
  • all new business survey CAI instruments.
  • household survey CAI instruments that differ substantially from the standard household survey CAI instruments.

Typography and text formatting

ABS CAI instruments should follow the typography standards listed below.
All the text in the instrument should be left justified. The standard font is Arial. The standard text sizes and styles are as follows:
  • Main question text which is always read out - bold, 14 point black.
  • Other question text which may be read out - plain, 12 point black (for business survey CATIs); text in brackets (for household survey CAPIs).
  • Completion instructions to the interviewer - bold, 12 point blue and indented (about 10mm).
  • Question numbers, which are not read out unless directing respondents to a paper form - plain, 12 point black.
  • Question specific notes, on screen plain 12 point black, and in pop-up bold 12 point black.
  • Labels for multiple choice answer options in information pane:
    • Response lists where the interviewer is to read out the options to the respondent - bold, 12 point blue for "Don't know" and "None of the above" options; bold, 12 point black for other options. See "Running prompt questions" section for more information on this type of question.
    • Response lists where the interviewer is not to read out the options to the respondent - plain, 12 point black for response options such as "Yes" and "No"; plain, 12 point blue for "Don't know" and "None of the above".
  • Labels for response fields in form pane - plain, 11 point black when unselected, dark blue when selected.
  • Heading text for response fields in form pane - plain, 11 point dark magenta.
  • Entered responses show up as - plain, 11 point black.
  • Words that require emphasis are to be underlined (not italicised or bold). Italics should not be used at all, as italic text can be difficult to read on screen.

Other text formatting standards include:

Numerical fields should be formatted so that commas automatically separate the digits of numbers of 1,000 and over. This reduces the risk of order-of-magnitude errors. The commas should also be included whenever numerical answers are automatically filled into a later question, so that they are easier for the interviewer to read out.

Diagram 1. Automatic commas in numeric fields
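Blaise applies this separator behaviour through its own field formatting; as an illustrative sketch only (the function name is an assumption, not part of Blaise), the rule can be expressed in Python as:

```python
def format_numeric(value: int) -> str:
    """Insert comma separators for values of 1,000 or more, as the
    standard requires. Illustrative sketch only; in practice Blaise
    applies this via its field formatting."""
    return f"{value:,}"

# A value echoed into a later question keeps its separators,
# making it easier for the interviewer to read out.
print(format_numeric(1530000))  # -> 1,530,000
print(format_numeric(999))      # -> 999
```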

Date fields should be automatically separated by slashes or dots. Whichever format is chosen, it should be used consistently throughout the instrument. Date fields should accept commonly-used formats that may be used by interviewers (e.g. d/m/yyyy, dd.mm.yy). When "enter" is pressed after the date has been typed in, the field should display the date in the standard format that has been chosen for the instrument.

Diagram 2. Automatic slashes in date fields
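The behaviour described above, accepting several commonly typed formats and redisplaying the date in one standard format on "enter", can be sketched as follows. The helper name, the format list and the choice of "/" as the instrument-wide separator are illustrative assumptions:

```python
from datetime import datetime

# Hypothetical format list: the two-digit-year patterns are tried first
# so that e.g. "30.6.08" is not misread as the year 8.
ACCEPTED_FORMATS = ["%d/%m/%y", "%d/%m/%Y", "%d.%m.%y", "%d.%m.%Y"]
STANDARD_FORMAT = "%d/%m/%Y"  # assumed instrument-wide standard

def normalise_date(text: str) -> str:
    """Parse a date typed in any accepted format and redisplay it
    in the standard format chosen for the instrument."""
    for fmt in ACCEPTED_FORMATS:
        try:
            return datetime.strptime(text, fmt).strftime(STANDARD_FORMAT)
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date: {text!r}")

print(normalise_date("30.6.08"))    # -> 30/06/2008
print(normalise_date("30/6/2008"))  # -> 30/06/2008
```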

When a reference period date is included in the script, the appropriate date should be automatically generated for all relevant questions. The date should appear in full (e.g. 30 June 2008, not 30/6/08) (see Diagram 3). This is so that interviewers can read out the date without the extra burden of having to convert a numeric month to the actual month.

Diagram 3. Display of automatically generated date in script

The time field for the business survey "time taken" question should be in hours and minutes. The preferred format is to split the hours and minutes into two separate boxes, e.g. as shown in Diagram 4.

Diagram 4. Layout of Time Taken fields

In questions asking for the duration of an event, when the duration could be reported in different time periods (e.g. weeks or years), the standard approach is to channel respondents to a question which asks for the duration in the appropriate time period. For example:

Q1. How long have you worked for your current employer?

1. Less than 1 year ---> Q2
2. One year or more ---> Q3

Q2. Record full weeks

Q3. Record full years

Mixed case should be used where possible - words in full capitals should be avoided. An exception to this is information in full capitals that is extracted from PIMS and automatically inserted into the script (e.g. unit name).


Horizontal and vertical layouts

The current application being used for CAI instrument development (Blaise) allows for a variety of form layouts. Different layouts are suitable for different types of questions. Whichever layout is chosen, the same layout should be maintained throughout the form.

The most commonly used layout for ABS CATIs is currently the 'horizontal' layout. 'Horizontal' refers to the horizontal line dividing the 'information pane', which contains the question, response options and interviewer instructions at the top of the screen, from the 'form pane', which contains the responses and data entry cells at the bottom (see Diagram 5). The answer can be chosen by selecting the radio button next to a response option, or the respective number can be typed into the bottom space for the question (the former option automatically fills the respective number into the space). In the horizontal layout, one question appears on the screen at a time in the information pane.

For questions that have many yes/no answers or have multiple choice pick-lists, the horizontal layout is the most suitable. This layout is also generally most appropriate for forms that utilise automatic sequencing, as questions can be automatically skipped without affecting the layout of the questions.

Diagram 5. Horizontal screen layout

A 'vertical' layout places the question on the left side of the screen and the responses on the right. The information pane is hidden, and the question wording is extended within the field pane, with the notes added under the questions (see Diagram 6). This layout is similar to the layout of some ABS Blaise editing instruments.

The vertical layout can be suitable for forms that collect financial or other numeric data. This layout allows more than one question to appear on the screen at a time. It can therefore be utilised when it is desirable for interviewers to be able to see the questions and responses for more than one question on a single screen. The vertical layout is not suitable for forms that have a large amount of sequencing, because in Blaise, this means that skipped questions will appear as gaps in the list of questions.

Diagram 6. Vertical screen layout

An alternative to the layout described in paragraph 10 is for the form pane (rather than the information pane) to contain the answer choices (see Diagram 7). For horizontal CATIs that require historical data to be displayed, this layout is suitable, as it allows the data from previous reference periods to be displayed in columns in the form pane. Displaying columns of historical data in the 'traditional' horizontal layout is not possible in Blaise. See paragraphs 28-32 for more information on displaying historical data.

Diagram 7: Alternative horizontal layout for CATIs displaying historical data

In the vertical layout, the field labels (for open response fields) appear in the panel to the right of the question, with the corresponding response fields in the next column (see Diagram 8).

Diagram 8. Vertical layout of questions and responses.

The screen layout should be as simple and uncluttered as possible to help the interviewer find the elements they need easily. There should be top and left margins of about 5mm in the information pane and form pane. The only lines that should be used are those that separate the following: the information pane from the form pane, the field labels from the response fields, and columns of historical data. Task bar icons and other screen elements should also be minimised so that only those required for the instrument are visible, for example see Diagram 9.

Diagram 9. Minimal clutter task bar

Screen colours

Screen colours should be muted to aid readability and reduce distraction. For CATIs using the horizontal layout, the standard is pale yellow for the information pane background and light grey for the field pane. These are the Blaise default colours. For the vertical layout, the current standard is for all parts of the screen to be light grey (the default colour of the form pane).

Questions and responses

For the horizontal layout, the standard is for one question to appear on the screen at a time. In the vertical layout, several questions can appear on the screen at a time - these should be visible without vertical scrolling.

The question text that is read out should be at the top (horizontal) or left (vertical) of the screen, with any explanatory notes which may be required underneath the question. Explanatory notes that will not fit on the screen can be set up to appear in a pop-up box accessible by pressing F9 (see 'Help information and the F9 function').

If there is a self-administered form (paper or electronic) which the respondent is referring to, there should be question numbers in the CAI instrument which match the question numbers used in the paper form, to avoid any confusion. If sequencing is used in the CAI instrument, this may mean that some question numbers are missing from the instrument. In some cases the CAI instrument may have additional questions that are not on the paper form (for example, questions to establish whether the person is the right person to answer the survey and that they are able to complete the survey at that time). In these cases, the extra questions should be given an alphanumeric identifier. For example, two additional questions that need to appear in the instrument between questions 5 and 6 would be labelled 5a and 5b.

Below the instructions should be an indicator relating to any other notes or help which may be available. The lines of question text should be only about 60 characters wide for comfortable reading, leaving some blank space to the right. There may also be completion instructions for the interviewer - these should be placed at the point of the interview where they will be required. All of this information needs to fit on one screen; the interviewer should never have to scroll the information pane to see everything. When the whole question will not fit, it should be split across screens or moved into a pop-up.
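The 60-character guideline can be checked mechanically when preparing scripted text; a minimal sketch (the helper name is an assumption, and in practice the line wrapping would be done in the instrument authoring tool):

```python
import textwrap

def wrap_question(text: str, width: int = 60) -> list[str]:
    """Wrap scripted question text to roughly 60 characters per
    line, per the readability guideline."""
    return textwrap.wrap(text, width=width)

lines = wrap_question(
    "What was the total gross income of this business during "
    "the financial period, excluding extraordinary income items and GST?"
)
# Every wrapped line stays within the guideline width.
assert all(len(line) <= 60 for line in lines)
```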

Where the interviewer must choose between answers (e.g. yes or no) in an answer space in the information pane, as described above and illustrated in Diagram 5, the choices should be indented and should use common conventions to suggest the type of entry (e.g. radio buttons for mutually exclusive options, check boxes for 'all that apply').

For horizontal CATIs, the form pane at the bottom of the screen contains fields for the responses to all the related questions in a set, whatever type of question is used. Each field should have a short label which is meaningful enough to recognise the question it belongs to. This is to allow the interviewer to see the context of the current question and scan previous answers without having to go back through the form. This can be important to help the respondent make sense of what is being asked.
Diagram 10. Labels for response fields in form pane

For the horizontal layout, the response fields should be laid out in columns, with appropriate headings (referring to Part titles, main question numbers etc.) when necessary. Field lists which go to two columns need to be balanced so that there is roughly the same number of fields in each column - if one field sits by itself, the interviewer may not see it and may make errors. Response fields for the vertical layout appear as a single list in the column adjacent to the question. It is important that the field labels are kept brief, so that interviewers don't mistake the field text for the actual question. All response fields on a particular screen should be visible without requiring scrolling.

The field space should be to the right of the field label. Where text responses are entered, the field space should extend across the screen as far as generally required. Small scrolling fields should not be used and pop-up text boxes should only be used where a great deal of text is possible. This usually applies only for the comments question at the end of a business survey CATI, because long text responses should be avoided in interview-based surveys.


Screens containing no questions

Screens which do not include a question should be avoided as much as possible, as this slows the process down and can be confusing for the interviewer as well as the respondent. An example of this type of screen is one that contains instructions for the interviewer only, followed by "Select 3 to continue" (used commonly in electronic SFMP instruments when interviewers must make a choice between options based on previous answers or information filled from PIMS). Where possible, this type of screen should be removed or combined with another screen.

However, this type of screen may sometimes be unavoidable (for example, when the following question is too long to share a screen). In this case, the screen should be designed to be consistent with the other screens, so that the instructions appear in roughly the same location on each screen, and the space where the question would normally appear is left blank when there is no question. This will allow the interviewer to learn "where to look" and reduce confusion as to what they should do at these screens.

Historical data

For some surveys, it is desirable for interviewers to be able to view the data provided by respondents in previous reference periods (for example, to compare the current quarter's data with data from the previous quarter, or the same quarter in the previous year). The historical information may feed into edits that prompt the interviewer to check the answer the respondent has provided (see "Edits" for more information). For example, "The number of vacancies for this quarter is a noticeable decrease compared to last quarter. Please comment on any reasons for this change". A prompt such as this may lead to the respondent providing an explanation for the change (which the interviewer should record); or in some cases the respondent may want to revise the previously reported or current figure. Historical data generally cannot be amended by the interviewer; instead, the interviewer makes a note of the respondent's revised figure (in a comment field) for the reference of editing staff.

The actual figure provided in the last period should generally not be given to the respondent - rather, they should be told that it has significantly increased or decreased since last time. Providing the exact figure may lead to confidentiality problems, for example, if the business has changed ownership since the last reporting period.

For some household surveys, an option is to incorporate historical data into the question wording to remind respondents of what they reported in the previous period, and to check whether it has changed (e.g. "Last month, you reported you were working at X, is this still correct?"). This type of questioning is sometimes referred to as proactive dependent interviewing. This type of questioning can be useful because it helps the respondent remember the beginning of the time period they should be considering. However, this type of questioning is generally less appropriate for business surveys (where the data requested often requires record-checking) - respondents may be more likely to incorrectly say that their data hasn't changed since last time, because this is easier than figuring out the correct value. Furthermore, confidentiality issues arise when the person the interviewer is speaking to may not be the same person who reported in the previous period.

Historical data can be displayed using both the horizontal and vertical layouts, in column format in the form pane. Each column should have a descriptive heading so that it is clear which reference period it refers to (e.g. Diagram 11). Depending on how many columns are displayed, scrolling may be required to view them all. The relevant survey area should be involved in deciding how many previous reference periods should be shown in the CAI instrument.
Diagram 11: Columns displaying historical data

When only a single response from the previous reporting period is required to be displayed, an option is to have it appear as text in the information pane (e.g. Diagram 12).

Diagram 12: Alternative display of historical data

Parallel blocks

The CAI instrument interface is made up of "parallel blocks" represented by worksheet-like tabs, for example:

Diagram 13. Tabs of parallel blocks

The interviewer can move between the blocks whenever they want during the interview and automatic sequencing may also link between them. For consistency, the blocks should only be used for distinct but related pieces. For example, the intensive follow up (IFU) script module for a particular survey, the data collection module for the same survey, a shorter "key data items" version of the survey, and a summary sheet of information about the current provider would be a sensible group of parallel blocks. Parallel blocks may also be used for multiple "forms" for a particular respondent, e.g. for multiple building jobs, or for different respondents from the same household.

The order of the parallel blocks should be kept consistent across surveys, so that interviewers know where to find them. For example, the "Exit form" block used in business survey CATIs for recording the form status to be written back to PIMS, should appear last.

Menu lists and icons should be used for the remaining links to information and functions. These should also be used in a consistent manner.

Question Structure


The introduction of an interview-based survey must do all the work done by the front page of an equivalent paper form and the cover letter as well. Even if the provider has been mailed information in advance, the person interviewed may not be the same person as the one who received this material. It may have been lost or forgotten. Therefore the introduction needs to find the correct or best available contact person, explain important points like the purpose of the survey and the confidentiality of their responses, and convince the person to respond, immediately if possible.

The introduction should also make up for what the respondent misses through not being able to see the form. This includes explaining to the respondent how long the interview should take (and where relevant, explaining that this will differ depending on their answers if sequencing is involved). The interviewer should also urge the respondent to ask for clarification or additional information if required, to encourage them to access the notes not read out by the interviewer initially. For business survey CATIs that require respondents to provide financial information, the interviewer should let them know in the introduction that estimates are acceptable.

All this needs to be done simply and quickly, so it is essential that this part of the interview is carefully scripted or emphasised in interviewer training. It may be appropriate for some parts of the introduction to be in the form of instructions rather than scripting (e.g. "Establish that you are speaking to the right business contact"), as trying to script such processes may end up sounding robotic. Interviewer training should provide tips on and practice in introducing the survey, and the CAI interface should support doing this well.

Converting self-administered questions into interview questions

It is important to note that an interview, especially a telephone interview, is a very different experience for the respondent compared to a self-administered paper form, and keeping the question wording and structure very similar may not have the desired effect of producing similar responses. For most reworded questions, some testing will be necessary to make sure the mode change, as well as the wording changes, does not result in significantly poorer (or better) data. This is especially important if the collection will continue with both modes.

Many of the questions used in ABS self-administered business surveys are not actually phrased as questions. They are really just titles designed to match sections in our respondents' accounts e.g. "Interest income". When converting these questions into an interview, all these items will need to be reworded to be actual questions e.g. "What was this business's interest income for the period?". Another example is the standard question "Financial period" which in a paper form would normally look like this:
Diagram 14. Self-completed paper form standard

To use in a CATI interview, this question needs both the question and the note following to be reworded:

Diagram 15. CATI standard question version

The answer options also need to be more flexible to help the interviewer capture the respondent's answers, for example:

Diagram 16. Flexible answer options

Similarly, information not presented as questions on the paper forms such as contact details will need to be asked about explicitly. Matrices will need to be broken into their component questions, and decisions made as to whether to ask the questions based on rows or columns.

Questions which require the respondent to select one or more options from a list will need to be reworded so that each option is asked about separately, requiring a yes or no answer. This is because reading out even a short list in an interview for the respondent to choose from is prone to bias due to the cognitive difficulty of remembering all the items while making a decision.

It is generally not necessary for each item in a list to be read out as a complete question (e.g. "Did your business have a.... Did your business have a ..."), as this repetition is not needed for the respondent's understanding and can become very long and annoying. Beginning with, for example, "Which of the following ... did your business have" is generally enough. However, in these cases it is important to include the question stem on each screen in plain text. For long lists, the original question may be several screens back; if, for example, the respondent asks for clarification on one sub-question, the momentum is lost and more information may be necessary for the next sub-question to be meaningful. For example, this question would appear on one screen:
Diagram 17: Use of question stems and sub-questions

a) First question of a related set

and would be followed by this on the next screen:

b) Second question of a related set

When using a plain stem question and bold sub-questions, the bold text should start on a new line below the plain text, except where the sub-questions are very short (see Diagram 18). The bold text indicates that the text is always to be read out, while the plain text may be read out.
Diagram 18. Layout of lines of question text using stems:

a) long question

b) short question

Question wording

For a new survey using CATI, the questions can be worded appropriately from the start. If using a paper form as well, or any other mode, testing should be conducted concurrently so that any particular wording or sequencing required by one mode can be reflected in the others appropriately.

Care should be taken to ensure questions and other scripted components are phrased as clearly as possible, to avoid interviewers misreading the question and making errors. For example, "Which reference numbers are you not familiar with?" is more difficult for interviewers to read and comprehend compared with the more positively phrased "Which reference numbers are you familiar with?"

Avoid using abbreviations in the script - for instance, use "for example" instead of "e.g.", as some interviewers may read out the abbreviation exactly as it appears on the screen. Similarly, avoid using slashes to separate items (e.g. "retainer/wage/salary"), as this makes it harder for the interviewer to read it out - instead, the words should appear as they are to be read out (e.g. "retainer, wage or salary").

Running prompt questions

Running prompt questions require the interviewer to read out a list of response categories one at a time, pausing after each category for the respondent to indicate whether or not the category applies to them. In "single response" running prompt questions, the categories are only to be read out until a "yes" response is given. In "multiple response" questions, all categories are read out, with "yes" recorded against each applicable category.

It is important to provide consistent instructions to interviewers as to how they should read such lists. It is recommended that the following standard instructions appear in blue text below the question and before the response options:
  • Single response questions: "Read out each category until a 'yes' response is given".
  • Multiple response questions: "Read out categories, pause after each one for a 'yes' or 'no' response".

In these types of questions, "Don't know" and "None of the above" are often provided as categories at the end of the list for the interviewer to select. These options are not intended to be read out to respondents, and as such, generally appear in the Form Pane rather than the Information Pane. Sometimes the text to be read out is displayed in the Form Pane only (i.e. read-out text and response categories are combined into one). In these cases, to help interviewers remember not to read these options out, it is recommended that these categories appear in blue "interviewer instruction" text (see "Typography" section).
Response options

It is important to consider how many response options to include in a list, especially when it is an interviewer-coded checklist (i.e. the interviewer chooses the appropriate option/s based on the respondent's answer). When the list is very long, it can be difficult for the interviewer to quickly find the most appropriate options while still holding the respondent's answers in their working memory. Primacy effects (being biased towards the items at the top of the list) may therefore be a problem.

The way the respondent provides their answer (e.g. how quickly they provide it, how many different answers they give, and how closely their answers match the words used in the response categories) will impact on the ability of interviewers to quickly and accurately code their answers.

To assist interviewers in their coding, each response item should be as short as possible, and where the list is long, the items should be meaningfully grouped (e.g. alphabetical order for items with easily-recognisable labels). As a rough guide based on research on primacy effects in self-administered questionnaires, 20 items is too many to include.

For 'Yes/No' response options, 'Yes' should have a code of 1, and 'No' should have a code of 5. Coding 'No' as 5 rather than 2 makes it less likely that interviewers will accidentally select 'No' instead of 'Yes', or vice versa. 5 can be converted to 2 during processing if required.

If there are more than 9 response options in a multiple-choice field, the first one should start with code 10. The reason for this is that the key sequence 113, for example, may be interpreted as 1-13 or 11-3. This confusion can be avoided if all values have the same number of digits.
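The two coding rules above - 'Yes' coded 1 and 'No' coded 5, and uniform-width codes starting at 10 for long pick-lists - can be sketched as follows. The function names are illustrative assumptions, not part of Blaise:

```python
YES, NO = 1, 5  # 'No' is coded 5 rather than 2, per the standard

def assign_codes(options):
    """Assign codes to a multiple-choice pick-list so that every
    code has the same number of digits: start at 1 for lists of up
    to 9 options, and at 10 otherwise. This avoids ambiguous key
    sequences such as 113 (1-13 versus 11-3)."""
    start = 1 if len(options) <= 9 else 10
    return {code: label for code, label in enumerate(options, start)}

def to_processing_code(code):
    """Convert the keying code 5 back to 2 during processing, if required."""
    return 2 if code == NO else code

codes = assign_codes([f"Option {i}" for i in range(1, 13)])
print(min(codes), max(codes))  # -> 10 21
```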

Explanatory Notes

A survey with a large number of explanatory notes is unsuitable for an interview, especially telephone interviewing. The interviewer can't read out extensive notes because this breaks the conventions for telephone conversation and will strain the respondent's comprehension and memory. It is also difficult to encourage respondents to read and remember notes sent to them beforehand. Where practical, a CAI interview should incorporate short notes into the question itself. For example, this Standard Question Wording (SQW) for self-administered forms:
"What was the total gross income of this business during the financial period?"
with the separate explanation:
  • Extraordinary items
  • Goods and Services Tax (GST)
becomes the interview question
"What was the total gross income of this business during the financial period, excluding extraordinary income items and GST?"

For most notes this would make the question too long and awkward. In these cases the notes should be scripted appropriately but not read out unless the respondent asks for or appears to need clarification. This may lead to a reduction in the quality of the interview responses; however, this would also occur due to fatigue and frustration should the respondent have to listen to long explanations they don't believe they need. Selective explanation is the interview equivalent of many self-administered survey respondents only reading notes when they get stuck.

Scales and Measurement units

Where the measurement unit to be used for a particular question is not obvious, in a CAI interview this should be incorporated into the question wording (for example, "How long in years..." or "What was your business's income... in thousands of dollars"). Even when there is a reasonably small range of ways the respondent could answer, requiring the interviewer to convert the answer to the right measurement unit before entering it in this time-pressured situation may lead to error.

Prompt cards can be used in face-to-face interviews to visually present response scales to respondents, allowing them to read through the list and select the most appropriate option/s. This helps alleviate memory and other response problems that may arise from respondents having to remember a verbally-administered scale. See the Prompt Card Chapter in this manual for information on designing prompt cards.

Prompt cards are generally not used in telephone interviews, and even when the respondent has been sent a paper form (including telephone reporting cards) they often will not have it available during the interview. This means that if the respondent is required to report in a particular way, this must be made explicit in the question if prompt cards are not used. For example, if the survey requires the respondent to rate the importance of various items to their business, it is not enough to ask them "How important are the following factors...". The respondent can't see the scale in front of the interviewer, and their answers will generally not match it very well if they are allowed to answer freely. The question will need to be more like "On a 5 point scale of Very important, Important, Somewhat important, Not very important and Not at all important, please rate the following factors...". Note that this sort of question can get quite long and awkward, leading the respondent to forget the scale anyway, so care must be taken when choosing what to ask.

Open-ended questions

There are several types of open-ended questions in CAI instruments. For example, for the "occupation status" question in household survey interviews, interviewers record respondents' description of their occupation, for later coding.
For CATIs based on ABS business surveys, it is mandatory to ask the respondent if they have any comments at the end of the survey. The following wording is recommended:
"Do you have any comments on this survey or the interview?"

All the text an interviewer enters in a text field should be visible at once, without scrolling. The answer box for this comments question should be quite large, and in Blaise this means it has to be a pop-up. Interviewers need to be able to skip past this box easily if they have nothing to enter.

Another area where large text boxes may be required for ABS surveys is "Other" questions, where the respondent is asked to specify any activity which has not yet been covered. In a Blaise CATI this needs to be presented as "Do you have any other..." with a checklist or yes/no filter followed by a separate text option, because a single field cannot offer a choice between an open text answer and a "no" selection. Generally the CATI also needs to allow for, and record, respondents who say yes, they have an "other", but refuse to specify what it is, especially when there is also a paper form which allows this. For example, these answer options:
Diagram 19. Asking a question about "Other"

a) Answer choices when asking about "Other"

with these answer fields:
b) Response field for asking about "other"

would be followed by a screen with this answer field by itself:

c) Text field for recording "other"

If the respondent had answered that there were no others, the "record others" screen would be skipped. The text field can also be left blank.
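The flow above can be sketched in pseudocode form. The following Python sketch (not Blaise code) illustrates the three outcomes described: "no" others, an "other" that is specified, and a "yes" where the respondent refuses to specify. The field names are hypothetical, chosen for illustration only.

```python
# Illustrative sketch of the "Other" question flow: a yes/no filter,
# a separate text field, and an explicit "refused to specify" outcome.
# Field names are hypothetical, not drawn from any ABS instrument.

def record_other(has_other: str, specify_text: str = "") -> dict:
    """Return the stored response for a 'Do you have any other...' question."""
    if has_other == "no":
        # The "record others" screen is skipped entirely.
        return {"has_other": "no", "other_text": None}
    if has_other == "yes" and not specify_text:
        # Respondent says yes but refuses (or is unable) to specify.
        return {"has_other": "yes", "other_text": None, "refused_detail": True}
    return {"has_other": "yes", "other_text": specify_text}

print(record_other("no"))
print(record_other("yes", "consulting services"))
print(record_other("yes"))
```

Recording the refusal explicitly, rather than leaving the field blank, preserves the distinction between "no others" and "others the respondent would not describe".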

Time taken

As with all ABS business surveys, collecting "Time taken" is mandatory for CATI surveys. For a full picture of the provider load and better comparison with paper form equivalents, it is important to ask the respondent about any time they may have spent on the survey, apart from the interview. The interviewer can then add this reported time to the interview time, as a total measure of time taken. The recommended wording is as follows:

There should be two separate fields (Hours and Minutes) for the interviewer to enter Time taken (see Diagram 4). Time taken is one exception where it will generally not be necessary to explicitly mention the measurement unit to the respondent. However, if a particular survey obtains a significant number of answers in "days" for example, extra words explaining that hours and minutes are required may be added.
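The arithmetic behind the total measure is straightforward; this minimal Python sketch shows the reported Hours and Minutes fields being combined with the interview time. The function name and the assumption that interview time is tracked in minutes are illustrative only.

```python
# A minimal sketch of combining respondent-reported time (entered in
# separate Hours and Minutes fields) with the recorded interview time
# to give a total measure of time taken. Names are hypothetical.

def total_time_taken(reported_hours: int, reported_minutes: int,
                     interview_minutes: int) -> int:
    """Total provider load in minutes: reported time plus interview time."""
    return reported_hours * 60 + reported_minutes + interview_minutes

# e.g. 1 hour 20 minutes spent apart from the interview,
# plus a 25-minute interview:
print(total_time_taken(1, 20, 25))  # 105 minutes
```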

Help information and the F9 function

There are several different kinds of help which may be required in a CAI instrument, including:
  • technical help for how to use the instrument, generally accessed using the F1 command;
  • question specific notes, definitions, inclusions, etc.;
  • general FAQ-type information such as why the respondent was selected, whether the survey is compulsory; and
  • survey specific explanations of e.g. the purpose of the survey, who else it goes to.

It is important to provide interviewers with general FAQ-type information and survey specific information to have on hand, to supplement formal training provided on the survey and CAI instrument. This information may be included in the instrument by presenting it in a separate "parallel block", or from a drop-down menu. Whichever is chosen, the presentation should be consistent.

Question specific information should be presented under the question, as much as will fit sensibly. Any information that will not fit should be included in a pop-up, accessed using the F9 command. The F9 command displays additional information relevant to a particular question or module that will not fit on the main screen, and/or may not be required by all interviewers for every interview. It gives interviewers a quick and easy way to access key information not available on the main screen, to help them clarify a question when required.
Where possible, the most important information for interviewers (e.g. common inclusions and exclusions) should appear as an interviewer instruction on the main screen with the corresponding question. Where necessary, the F9 function can be used to display the following types of information:
  • additional definitions, inclusions, and exclusions for a particular question;
  • an explanation of why a question is being asked (this is useful for questions that respondents may be reluctant to answer, e.g. income);
  • prompt card categories, where these are not reflected in the question or response options (e.g. some questions that utilise prompt cards ask for a yes/no response only, and the response categories do not appear on the main screen);
  • conversion tables (e.g. for converting months to weeks);
  • other information that will assist the interviewer in selecting the correct response (e.g. for a question about the respondent's highest educational qualification, a hierarchical list of education levels in an F9 is very useful).

Information that already appears on the main screen should not be repeated in F9.

Interviewers are alerted to the presence of F9 information via one of the following instructions, which appears below the question:
  • "Press F9 to display further information"
  • "Press F9 to display prompt card details"
  • "Press F9 to display Months to Weeks conversion table, if necessary".

It is important that the F9 appears at the place where it is needed - e.g. if it relates to a particular question, it should appear with the question, not at the introduction to a topic/module.

When the F9 key is hit, the information is displayed in a "pop-up" box, using the "Help Language" feature of Blaise. The pop-up box should appear in the blank space to the right of the question, so that as much of the question text and original notes as possible are still visible. There should be a small top and left margin within the pop-up so the text is not cramped against the edge of the box.

There is also the option in Blaise for a "Windows HTML Help File" to be opened (instead of a pop-up box) when F9 is hit. Rather than help text being compiled within the instrument source code as is the case with the Help Language feature, help files are built using the "HTML Help" tool. Only one F9 function can be used within an instrument, i.e. it is not possible to switch between the Help Language and HTML Help function. This function is currently being trialled on a case by case basis.

The advantages of using HTML Help files rather than the Help language feature of Blaise are that:
  • content creation and formatting can be done by the HSC;
  • the content could feasibly be changed after the instrument is out in the field (as the instrument would not require recompilation, provided only the content referred to by an existing tag is changed);
  • formatting is more flexible (e.g. hotlinks can be used); and
  • there is reduced effort for TA to program and format the help text.

The way in which interviewers are informed that F9 information is available, and the formatting of the F9 information presented, should be consistent within an instrument.

Consistency in the use of F9 between instruments for different surveys is also important. In terms of the specific content of F9s, it is recommended that for standard modules such as education and income, standard F9s are developed and used in CAI instruments. F9 content is developed as part of the question specifications for a survey; advice on wording for questions or F9s is beyond the scope of these guidelines.

The definitions, explanations, inclusions, exclusions etc. contained in household survey Interviewer Instructions (IIs) can be appropriate to include in F9s. During an interview, it is often easier for interviewers to be able to access the relevant information by pressing the F9 key, rather than searching for the information in the printed IIs. It should be made clear to interviewers that the F9 information is there to assist them if required, i.e. they are not expected to view every F9 in an instrument.

For some surveys it may be useful for there to be some cross-referencing between a CAI question or module and the relevant part of the IIs. This should be done by adding references to the IIs (e.g. module codes in the table of contents), rather than by adding references (e.g. II page numbers) to the F9 text. IIs are often subject to a large amount of change in the lead-up to a survey, and adding page number references to F9 text (whether created in a pop-up or HTML Help file) creates too great a risk of errors in the cross-referencing.

Formatting the F9 information in a clear and intuitive way is important to ensure that interviewers can find the relevant information quickly. Ideally, F9 information should be concise enough to fit on a single pop-up screen without scrolling; where this is not possible, scrolling will be required to view all the information. As an approximate guide, a single screen fits around 20 lines of text before scrolling is required, and each line fits approximately 40 characters.

If the information runs for longer than one screen, a risk is that interviewers will not realise more information is available by scrolling down. To overcome this, the information should be positioned so that it is reasonably obvious that more text exists below. For example, when there are several blocks of text, it is best to ensure that the shorter section is first and the second section runs over onto the next page (rather than the longer section fitting perfectly on the screen, with the shorter section entirely out of view).

There should be sufficient text spacing to maintain the readability of the text. For example, in a list of prompt card categories, there should be a blank line between each category to make it clear when one category ends and another begins. A clearly laid-out F9 that requires scrolling is preferable to an F9 in which the text has been condensed to fit on one screen.

Presenting information in dot points, and using clear headings to organise information, is also recommended to enable interviewers to find relevant information more easily.

Ensuring that the presentation of the F9 information is clear is an important part of instrument validation.



Sequencing

Electronic forms such as CAI interfaces allow for much more complex sequencing than paper forms (whether self-administered or interviewer-administered) because the sequencing is automatic. There can be large numbers of branches off to separate modules and back again. Making sure the logic is correct can be tricky and requires thorough testing.

An important consideration in this context is accounting for differences when a paper form is used as well as a CAI instrument. While this does not happen very often for household surveys, for business surveys CATI is often used in conjunction with a paper form option, and in some cases the respondent may have the paper form in front of them while completing the telephone interview. It is tempting to include more sequencing in the electronic version than in the paper version. However, while the forms may be theoretically the same, a paper form allows incorrect routing, such as filling in questions out of order, which may lead to differences in the data collected due to the mode (i.e. a mode effect).

Editing rules (for processing the data later) need to be determined to address the mode effect that results. For instance, should all the answers a respondent puts in questions they should have skipped be removed? If the electronic version has more skips, is a skipped question equivalent to a "no" or a "non-response" in the data files from the paper version? Decisions should be made on the treatment of these issues at the development stage and implemented consistently.
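One such treatment, decided at the development stage, might be to blank out answers to questions the electronic sequencing would have skipped, so paper and CATI records line up. The following Python sketch illustrates this for a hypothetical filter question; the field names and the skip rule are invented for illustration and are not drawn from any ABS survey.

```python
# One possible editing rule, sketched for illustration only: remove
# answers a paper-form respondent gave to questions that the CATI
# sequencing would have skipped. Field names and the skip rule are
# hypothetical.

def apply_sequencing_edits(record: dict) -> dict:
    """Blank out answers to questions the respondent should have skipped."""
    edited = dict(record)
    # Hypothetical rule: detail questions apply only if the filter was "yes".
    if edited.get("has_employees") == "no":
        for detail in ("employee_count", "wages_paid"):
            edited[detail] = None  # treated as not applicable, not as "no"
    return edited

paper_record = {"has_employees": "no", "employee_count": 3, "wages_paid": 1000}
print(apply_sequencing_edits(paper_record))
```

Whether a skipped question is stored as "not applicable", "no" or "non-response" is exactly the kind of decision that should be documented and applied consistently across both modes.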

When testing the sequencing, it is important to ensure that if the interviewer is required to go backwards through the interview after having gone down the wrong sequencing path (e.g. during SFMP), any information already entered which is relevant to both paths will transfer to, or remain in, the new path. Having to ask for the same information more than once in the same interview will be frustrating and confusing for the interviewer and respondent.

Wherever possible for any electronic form, navigation should be possible using either or both of keyboard and mouse. The current standards are that:
  • The interviewer can't go forward through the form within a parallel block unless valid data is entered in every applicable field. While a menu option allowing the interviewer to skip a field is available, it is recommended that an appropriate answer field (e.g. "Don't know") is used instead wherever this kind of response is likely, so that a better picture of what has happened is available.
  • Pressing Enter after completing each field takes the interviewer to the next field. For some comments fields, such as the final question, where it is valid to have no response, Enter should still take the interviewer to the next field. Typing anything other than Enter opens the pop-up window; while in the window, Enter starts a new line of text, and the interviewer should click "save" to close it. Navigating back to a comments field allows the interviewer to type over the previous answer, while double-clicking or pressing "insert" allows them to append extra information.
  • Clicking with the mouse or arrow keys may also be used to go to the next field to complete, or go up and down between completed fields within one field pane set (which will change the question presented at the top of the screen).
  • The "Tab" key also takes the interviewer to the next field; however, this method includes the radio buttons and other answer choice fields, which can cause the interviewer to become lost. This method should therefore be discouraged.
  • Page up and down on the keyboard, and icons at the top of the screen for the mouse, may be used to take the interviewer backwards and forwards through whole completed screens.
  • A drop down menu option "Search tag" allows the option of going directly to a completed field rather than needing to skip through the intervening fields. The interviewer needs to type in whatever is programmed into the instrument as the tag for that particular question (e.g. EDUC_05).
  • The bar at the very bottom of the screen should tell the interviewer which question they are up to and also give an indication of how much remains - the current standard is a count of the current screen out of the total.
Diagram 20. Progress indicator


In addition to the Comments parallel block (which allows the interviewer to make comments about the interview or respondent as a whole), there should also be a function to make comments specific to a question, e.g. "This was a wild guess". Blaise allows this with the "Remark" function, which creates an attached comment using a paper clip icon. The icon is shown here:

Diagram 21. Interviewer Comments tab and Remark icon

Selecting it opens a pop-up, and when a remark is saved in the pop-up this is indicated next to the answer field as below; double-clicking brings the remark back up:

Diagram 22. Response field with attached Remark


Edits

Although automatic edits can be one of the main advantages of any electronic form, they should be used with caution. Constant error messages can be very frustrating for both the interviewer and the respondent. The current standard for newly developed CAI instruments is minimal edits. These can include simple range and data type restrictions, as well as internal consistency checks such as the end date for a given financial period being later than the start date, and numerical breakdowns each being smaller than the relevant total.

The emphasis of the edits for new instruments should be to prevent data entry mistakes without slowing the process down too much. They should not be used to question the accuracy of the respondent's answers. The edit message should be meaningful and usually allow the interviewer to continue without correcting their entry (i.e. a "soft" rather than "hard" edit). However, hard edits may be appropriate in some circumstances to ensure the data provided is in the correct format (e.g. ABNs must have eleven digits).
Diagram 23. Internal consistency edit message
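The distinction between hard and soft edits can be sketched as below. This Python illustration (not Blaise syntax) applies a hard format edit (eleven-digit ABN) and two soft internal consistency edits drawn from the examples above; the function and field names are hypothetical.

```python
# An illustrative sketch of the two kinds of edit described above:
# a hard edit that must be corrected (ABN format) and soft edits the
# interviewer may continue past (internal consistency). Names and
# messages are hypothetical, not Blaise code.

from datetime import date

def run_edits(abn: str, start: date, end: date,
              breakdowns: list, total: float) -> dict:
    hard, soft = [], []
    if not (abn.isdigit() and len(abn) == 11):
        hard.append("ABN must have eleven digits.")      # hard: must fix
    if end <= start:
        soft.append("End date should be later than start date.")
    if any(part > total for part in breakdowns):
        soft.append("Each breakdown should not exceed the total.")
    return {"hard": hard, "soft": soft}

result = run_edits("1234567890", date(2009, 7, 1), date(2010, 6, 30),
                   [40.0, 60.0], 100.0)
print(result)  # the ten-digit ABN triggers the hard edit
```

Keeping soft edits as warnings the interviewer can acknowledge and move past, rather than blocks, matches the emphasis on not slowing the interview down.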

Fill capability

It is important that when the CAI script refers back to previously learned information, such as the contact's name or the answer to a related question, this information is filled automatically into the current text. The filled information should be the same font, size and style as the text around it so that it appears to belong there.
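A fill amounts to simple text substitution, as this minimal Python sketch shows; in Blaise the fill is written into the question text itself, and the placeholder name here is hypothetical.

```python
# A minimal sketch of a text "fill": a previously collected answer is
# substituted into the scripted question text. The placeholder name
# is hypothetical; this is not Blaise fill syntax.

question_template = ("Earlier you said {contact_name} is the best contact. "
                     "Is {contact_name} available to answer some questions?")

answers = {"contact_name": "Ms Smith"}
print(question_template.format(**answers))
```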



References

These standards were developed based on the research of other statistical agencies, as well as ABS experience and Blaise standards. Some relevant references regarding the design and appropriateness of CATI and other telephone surveys include:

Aquilino, W.S. (1998) "Effects of interview mode on measuring depression in younger adults" Journal of Official Statistics, vol. 14.

Catlin, G., Ingram, S., & Hunter, L. (1988) "The effects of CATI on cost and data quality" Symposium 88: The impact of High technology on Survey Taking, Statistics Canada.

Donohue, Kathleen R., Clayton, R.L. & Werking, George S. (1992) "Integrating CATI centers in a decentralised data collection environment" Proceedings of the Section on Survey Research Methods, American Statistical Association.

Edwards, Teresa Parsley, Suresh, R., & Weeks, Michael F. (1998) "Automated call scheduling: current systems and practices" in Mick P. Couper, Reginald P. Baker, Jelke Bethlehem, Cynthia Z.F. Clark, Jean Martin, William L. Nicholls II & James M. O'Reilly (eds) Computer Assisted Survey Information Collection, John Wiley & Sons.

Groves, Robert M. & Nicholls, William L. II (1986) "The status of Computer-assisted Telephone Interviewing: Part II - Data quality issues" Journal of Official Statistics, Statistics Sweden.

Lester, A. & Wilson, Ian (1995) "Surveying businesses by telephone - a case study of methodology" Proceedings of the International Conference on Survey Measurement and Process Quality, American Statistical Association.

Rosenthal, Miriam D. & Hubble, David L. (1993) "Results from the National Crime Victimisation Survey (NCVS) CATI experiment" Proceedings of the Section on Survey Research Methods, American Statistical Association.

Sangster, R.L., Rockwood, T.H. & Dillman, D.A. (1994) "The influence of administration mode on responses to numeric rating scales" Proceedings of the Section on Survey Research Methods, American Statistical Association.

Saris, Willem E. (1991) Computer Assisted Interviewing, Sage Publications.

Tarnai, John, Kennedy, John, & Scudder, David (1998) "Organisational effects of CATI in small to medium survey centres" in Mick P. Couper, Reginald P. Baker, Jelke Bethlehem, Cynthia Z.F. Clark, Jean Martin, William L. Nicholls II & James M. O'Reilly (eds) Computer Assisted Survey Information Collection, John Wiley & Sons.

Werking, George S. & Clayton, Richard L. (1995) "Automated telephone methods for Business surveys" in Cox, Binder, Chinappa, Colledge, & Kott (eds) Business Survey Methods, John Wiley & Sons.

Werking, George S., Tupek, Alan, & Clayton, Richard (1988) "CATI and touchtone self-response applications for establishment surveys" Journal of Official Statistics, vol. 4, Statistics Sweden.

Wensing, Fred, Barresi, Jane & Finlay, David (2003) "Developing an optimal screen layout for CAI". ABS paper presented at IBUC 2003 - Copenhagen.
