4906.0.55.003 - Personal Safety Survey, Australia: User Guide, 2016  
ARCHIVED ISSUE Released at 11:30 AM (CANBERRA TIME) 08/11/2017   

DATA QUALITY AND TECHNICAL NOTES

Although care was taken to ensure that the results of the 2016 PSS are as accurate as possible, there are certain factors which may affect the reliability of the results to some extent and for which no adequate adjustments can be made. One such factor is known as sampling error. Other factors are collectively referred to as non-sampling errors. These factors, which are discussed below, should be kept in mind when interpreting results of the survey.

Data quality aspects covered in this page are:
  • Sampling error
  • Non-sampling error
  • Other factors affecting estimates
  • Confidentiality (including perturbation)


SAMPLING ERROR

Since the estimates are based on information obtained from a sample of the population, they are subject to sampling error (or sampling variability). That is, the estimates may differ from those that would have been produced had all persons been included in the survey.

The magnitude of the sampling error associated with a sample estimate depends on the following factors:
  • Sample design - there are many different methods which could have been used to obtain a sample from which to collect data on incidence of violence. The final design attempted to make key survey results as representative as possible within cost and operational constraints (for further details see Methodology page of this User Guide).
  • Sample size - the larger the sample on which the estimate is based, the smaller the associated sampling error.
  • Population variability - the extent to which people differ on the particular characteristic being measured. This is referred to as the population variability for that characteristic. The smaller the population variability of a particular characteristic, the more likely it is that the population will be well represented by the sample, and, therefore the smaller the sampling error. Conversely, the more variable the characteristic, the greater the sampling error.

Measures of sampling variability

Sampling error is a measure of the difference between published estimates, derived from a sample of persons, and the value that would have been produced if the total population (as defined for the scope of the survey) had been included in the survey.

One measure of the likely difference is given by the standard error estimate (SE), which indicates the extent to which an estimate might have varied because only a sample of dwellings was included. There are about two chances in three (67%) that the sample estimate will differ by less than one SE from the figure that would have been obtained if all dwellings had been included, and about 19 chances in 20 that the difference will be less than two SEs.

Diagram: visual representation of how confidence intervals are calculated as discussed in the above text

For estimates of population sizes, the size of the SE generally increases with the level of the estimate: the larger the estimate, the larger the SE. However, the larger the sample estimate, the smaller the SE becomes in percentage terms. Thus, larger sample estimates will be relatively more reliable than smaller estimates.

Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate to which it relates. The RSE is a useful measure in that it provides an immediate indication of the percentage error likely to have occurred due to sampling, and thus avoids the need to refer also to the size of the estimate.

RSE% = (SE/estimate) x 100

Relative standard errors for the 2012 and 2016 estimates are published in 'direct' form: the RSE for each estimate is calculated and published individually using a replicate weights technique (the Jackknife method). Direct calculation can result in some larger estimates having larger RSEs than smaller ones, since these larger estimates may have more inherent variability. More information about the replicate weights technique can be found below.

Estimates with relative standard errors less than 25% are considered sufficiently reliable for most purposes. However, estimates with relative standard errors of 25% or more are included in ABS publications of results from this survey. Estimates with RSEs greater than 25% but less than or equal to 50% are annotated with an asterisk (*) to indicate they are subject to high SEs relative to the size of the estimate and should be used with caution. Estimates with RSEs of greater than 50%, annotated by a double asterisk (**), are considered too unreliable for most purposes. These estimates can be aggregated with other estimates to reduce the overall sampling error.

Another measure is the Margin of Error (MoE), which describes the distance from the population value that the sample estimate is likely to be within, and is specified at a given level of confidence. Confidence levels typically used are 90%, 95% and 99%. For example, at the 95% confidence level, the MoE indicates that there are about 19 chances in 20 that the estimate will differ by less than the specified MoE from the population value (the figure obtained if all dwellings had been enumerated). The MoE at the 95% confidence level is expressed as 1.96 times the SE.

A confidence interval expresses the sampling error as a range in which the population value is expected to lie at a given level of confidence. The confidence interval can easily be constructed from the MoE at the same level of confidence, by taking the estimate plus or minus the MoE. In other terms, the 95% confidence interval is the range from the estimate minus 1.96 times the SE to the estimate plus 1.96 times the SE.

The 95% MoE can also be calculated from the RSE by the following, where y is the value of the estimate:

MoE(y) = (RSE(y) / 100) × y × 1.96


The imprecision due to sampling variability, which is measured by the SE, should not be confused with inaccuracies that may occur because of imperfections in reporting by interviewers and respondents and errors made in coding and processing of data. Inaccuracies of this kind are referred to as non-sampling error, and they may occur in any enumeration, whether it be a full count or only a sample. In practice, the potential for non-sampling error adds to the uncertainty of the estimates caused by sampling variability. However, it is not possible to quantify the non-sampling error. For more details on non-sampling error, see below.

Proportion estimates annotated with a hash (#) in published PSS data, such as in Personal Safety, Australia 2016 (cat. no. 4906.0), have a margin of error greater than 10 percentage points. Users should give the margin of error particular consideration when using these estimates.

Note that MoEs for 1996 proportion estimates in the tables for this publication were calculated using the RSEs presented in the RSE tables found in the Women’s Safety Survey, 1996 (cat. no. 4128.0).

Calculation of Standard Error

Standard errors can be calculated using the estimates (counts or percentages) and the corresponding RSEs. For example, in Personal Safety, Australia 2016 (cat. no. 4906.0), the estimated number of males aged 18 years and over who experienced physical assault in the last 12 months was 309,400. The RSE corresponding to this estimate is 8.7%. The SE is calculated by:

SE of estimate = (RSE / 100) × estimate

= (8.7 / 100) × 309,400

= 26,900 (rounded to the nearest 100)

Note that RSEs for percentage estimates are not presented in the Personal Safety, Australia 2016 publication, but can be produced from the TableBuilder or Detailed Microdata products or by request.

Standard errors can also be calculated using the MoE. For example, the MoE for the estimate of the proportion of females aged 18 years and over who experienced sexual harassment in the last 12 months (17.3%) is +/- 1.1 percentage points. The SE is calculated by:

SE of estimate = MoE / 1.96

= 1.1 / 1.96

= 0.6

Note that, due to rounding, the SE calculated from the RSE may differ slightly from the SE calculated from the MoE for the same estimate.

There are about 19 chances in 20 that the estimate of the proportion of females aged 18 years and over who experienced sexual harassment in the last 12 months is within +/- 1.1 percentage points of the population value.

Similarly, there are about 19 chances in 20 that the proportion of females aged 18 years and over who experienced sexual harassment in the last 12 months is within the confidence interval of 16.2% to 18.4%.
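
The two calculations above can be expressed directly in code. The following is a minimal Python sketch using the published figures from the worked examples; the function names are illustrative only, not an ABS API:

    # Sketch: recovering the SE of an estimate from its published RSE or MoE.

    def se_from_rse(estimate, rse_pct):
        # SE = (RSE / 100) x estimate
        return rse_pct / 100 * estimate

    def se_from_moe(moe, z=1.96):
        # SE = MoE / 1.96 at the 95% confidence level
        return moe / z

    # Males aged 18+ who experienced physical assault in the last 12 months.
    print(round(se_from_rse(309_400, 8.7), -2))   # 26900.0 (nearest 100)

    # Females aged 18+ who experienced sexual harassment in the last 12 months.
    print(round(se_from_moe(1.1), 1))             # 0.6 percentage points

    # 95% confidence interval: estimate +/- MoE.
    print(round(17.3 - 1.1, 1), round(17.3 + 1.1, 1))  # 16.2 18.4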

Standard errors of derived estimates of proportions

Proportions formed from the ratio of two estimates are also subject to sampling errors. The size of the error depends on the accuracy of both the numerator and denominator. For proportions where the denominator is an estimate of the number of persons in a group, and the numerator is the number of persons in a sub-group of the denominator population, a formula to approximate the RSE is:

RSE(x/y) ≈ √([RSE(x)]² − [RSE(y)]²)


For example, the proportion of those who experienced physical assault (denominator) who reported the most recent incident to the police (numerator).

Using this formula, the RSE of the estimated proportion will be lower than the RSE of the numerator estimate. Therefore, a simple and conservative approximation for the SE of a proportion may be derived by neglecting the RSE of the denominator; i.e. obtaining the RSE of the number of persons corresponding to the numerator of the proportion and then applying this figure to the estimated proportion.
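
As a rough illustration of this approximation, the following Python sketch uses invented RSE values (not published figures):

    import math

    def rse_proportion(rse_numerator_pct, rse_denominator_pct):
        # ABS approximation: RSE(x/y) ~ sqrt(RSE(x)^2 - RSE(y)^2), valid when
        # the numerator x is a sub-group of the denominator population y.
        return math.sqrt(rse_numerator_pct**2 - rse_denominator_pct**2)

    # Illustrative values only: numerator RSE 9.0%, denominator RSE 4.0%.
    print(round(rse_proportion(9.0, 4.0), 1))  # 8.1 -- below the numerator's 9.0%

The conservative shortcut described above amounts to dropping the second term, i.e. simply using the numerator's RSE for the proportion.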

Standard error of a difference

The difference between two survey estimates is itself an estimate, and is therefore subject to sampling variability. The sampling error of the difference between the two estimates depends on their individual SEs and the level of statistical association (correlation) between the estimates. An approximate SE of the difference between two estimates (x-y) may be calculated by the following formula:

SE(x − y) ≈ √([SE(x)]² + [SE(y)]²)


For example, the number of females who have been stalked minus the number of males who have been stalked.

While this formula will only be exact for differences between separate sub-populations or uncorrelated characteristics of sub-populations, it is expected to provide a reasonable approximation for most differences likely to be of interest in relation to this survey.

Standard error of a sum

The sum of two survey estimates is itself an estimate and is therefore subject to sampling variability. The sampling error of the sum of the two estimates depends on their individual SEs and the level of statistical association (correlation) between the estimates. An approximate SE of the sum of two estimates (x+y) may be calculated by the following formula:

SE(x + y) ≈ √([SE(x)]² + [SE(y)]²)


For example, the number of people who experienced sexual assault plus the number of people who experienced physical assault.

While this formula will only be exact for sums of separate sub-populations or uncorrelated characteristics of sub-populations, it is expected to provide a reasonable approximation for most estimates likely to be of interest in relation to this survey.
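
Because the difference and sum approximations share the same form, one Python sketch covers both (SEs are in the same units as the estimates; the values are invented for illustration):

    import math

    def se_diff_or_sum(se_x, se_y):
        # ABS approximation: SE(x - y) ~ SE(x + y) ~ sqrt(SE(x)^2 + SE(y)^2),
        # exact only for uncorrelated estimates.
        return math.sqrt(se_x**2 + se_y**2)

    # Illustrative values only: two estimates with SEs of 26,900 and 15,000.
    print(round(se_diff_or_sum(26_900, 15_000), -2))  # 30800.0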

Relative standard error and Margin of Error for derived proportions, differences and sums

The approximate RSE for differences and sums can be calculated from the SE by:

RSE(x − y) ≈ SE(x − y) / (x − y)

RSE(x + y) ≈ SE(x + y) / (x + y)

The 95% MoE for proportions, differences and sums can be calculated by:

95% MoE ≈ SE × 1.96, where SE is the standard error of the proportion, difference or sum.
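
Continuing the sketch above with the same invented figures, the RSE and 95% MoE of a difference follow directly from its SE:

    import math

    # Illustrative values only: estimates x and y with their SEs.
    x, y = 309_400, 200_000
    se_diff = math.sqrt(26_900**2 + 15_000**2)

    rse_diff = se_diff / (x - y) * 100  # RSE(x - y) ~ SE(x - y) / (x - y)
    moe_diff = se_diff * 1.96           # 95% MoE = SE x 1.96

    print(round(rse_diff, 1))   # 28.2 (per cent)
    print(round(moe_diff, -2))  # 60400.0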


Replicate Weights Technique

A class of techniques called 'replication methods' provides a general method of estimating variances for the types of complex sample designs and weighting procedures employed in ABS household surveys.

The basic idea behind the replication approach is to select sub-samples repeatedly from the whole sample, for each of which the statistic of interest is calculated. The variance of the full sample statistic is then estimated using the variability among the replicate statistics calculated from these sub-samples. The sub-samples are called 'replicate groups', and the statistics calculated from these replicates are called 'replicate estimates'.

There are various ways of creating replicate sub-samples from the full sample. The replicate weights produced for the 2016 PSS were created under the delete-a-group Jackknife method of replication (described below).

There are numerous advantages to using the replicate weighting approach, including the fact that:

  • The same procedure is applicable to most statistics such as means, percentages, ratios, correlations, derived statistics and regression coefficients
  • It is not necessary for the analyst to have available detailed survey design information if the replicate weights are included with the data file.

Derivation of replicate weights

Under the delete-a-group Jackknife method of replicate weighting, weights were derived as follows:
  • 60 replicate groups were formed, with each group formed to mirror the overall sample. Units from a cluster of dwellings all belong to the same replicate group, and a unit can belong to only one replicate group.
  • For each replicate weight, one replicate group was omitted from the weighting and the remaining records were weighted in the same manner as for the full sample.
  • The records in the group that was omitted received a weight of zero.
  • This process was repeated for each replicate group (i.e. a total of 60 times).
  • Ultimately each record had 60 replicate weights attached to it with one of these being the zero weight.
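
The following is a much-simplified Python sketch of this structure. Note that the ABS re-weights the retained records in the same manner as the full sample (including benchmarking); this sketch only rescales the remaining weights, so it illustrates the delete-a-group mechanics rather than the actual ABS weighting procedure:

    import numpy as np

    rng = np.random.default_rng(0)
    n_units, n_groups = 1_000, 60

    # Toy sample: full-sample person weights and a replicate-group assignment
    # (in the PSS, whole clusters of dwellings share a replicate group).
    weights = rng.uniform(50, 500, size=n_units)
    group = rng.integers(0, n_groups, size=n_units)

    # One set of replicate weights per omitted group: zero out that group's
    # records and rescale the rest (a crude stand-in for re-benchmarking).
    replicate_weights = np.zeros((n_groups, n_units))
    for g in range(n_groups):
        keep = group != g
        replicate_weights[g, keep] = weights[keep] * (weights.sum() / weights[keep].sum())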

Application of replicate weights

As noted above, replicate weights enable variances of estimates to be calculated relatively simply. They also enable unit record analyses such as chi-square and logistic regression to be conducted in a way that takes the sample design into account.

Estimates for any variable of interest can be calculated using each of the 60 replicate weights in turn, giving 60 replicate estimates. The distribution of this set of replicate estimates, in conjunction with the full sample estimate, is then used to approximate the variance of the full sample estimate.

The formulae for calculating the standard error (SE), relative standard error (RSE) and 95% Margin of Error (MoE) of an estimate using this method are shown below:

SE(y) = √( (59/60) × Σ from g = 1 to 60 of (y(g) − y)² )

where:
  • g = 1, ..., 60 (the replicate groups)
  • y(g) = the estimate calculated using the g-th replicate weight
  • y = the estimate calculated using the full person weight.

RSE(y) = SE(y) / y × 100.

The 95% MoE(y) = SE(y) × 1.96.
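
Expressed as a Python sketch, with the 60 replicate estimates assumed to have been computed already by applying each replicate weight in turn (the function names are ours):

    import numpy as np

    def jackknife_se(full_estimate, replicate_estimates):
        # Delete-a-group Jackknife: SE(y) = sqrt((59/60) * sum_g (y(g) - y)^2)
        g = len(replicate_estimates)  # 60 for the 2016 PSS
        dev = np.asarray(replicate_estimates) - full_estimate
        return np.sqrt((g - 1) / g * np.sum(dev**2))

    def jackknife_rse_pct(full_estimate, replicate_estimates):
        # RSE(y) = SE(y) / y x 100
        return jackknife_se(full_estimate, replicate_estimates) / full_estimate * 100

    def jackknife_moe_95(full_estimate, replicate_estimates):
        # 95% MoE(y) = SE(y) x 1.96
        return jackknife_se(full_estimate, replicate_estimates) * 1.96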

This method can also be used when modelling relationships from unit record data, regardless of the modelling technique used. In modelling, the full sample would be used to estimate the parameter being studied (such as a regression coefficient); i.e. the 60 replicate groups would be used to provide 60 replicate estimates of the survey parameter. The variance of the estimate of the parameter from the full sample is then approximated, as above, by the variability of the replicate estimates.

Availability of RSEs calculated using replicate weights

Actual RSEs for estimates (excluding proportion estimates) have been calculated for the publication Personal Safety, Australia 2016 (cat. no. 4906.0). The RSEs for estimates are available in spreadsheet format (datacubes), accessed via the Downloads tab of the 2016 PSS publication, and were calculated using the replicate weights methodology.

Availability of MoEs calculated using replicate weights

Actual MoEs for proportion estimates have been calculated at the 95% confidence level for the publication Personal Safety, Australia 2016 (cat. no. 4906.0) and are available in spreadsheet format (datacubes), accessed via the Downloads tab of the publication. The MoEs in the spreadsheets were calculated using the replicate weights methodology.

Significance Testing on Differences Between Survey Estimates

For comparing estimates between surveys or between populations within a survey it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant.

A statistical significance test for a comparison between estimates can be performed to determine whether it is likely that there is a difference between the corresponding population characteristics. The standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula shown above in the Differences section. This standard error is then used to calculate the test statistic:

Test statistic = (x − y) / SE(x − y)

If the value of this test statistic is greater than 1.96 then there is good evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence (at the 95% confidence level) that there is a real difference between the populations.

Example of estimates where there was a statistically significant difference

An estimated 5.4% of all men aged 18 years or over and 3.5% of all women aged 18 years or over had experienced physical violence during the 12 months prior to the survey.
  • The estimate of 5.4% of men who had experienced physical violence in the 12 months prior to the survey has an RSE of 7.0%. There are 19 chances out of 20 that an estimate of between 4.7% and 6.1% (an MoE of +/- 0.7 percentage points) of men would have been obtained if all dwellings had been included in the survey.
  • The estimate of 3.5% of women who had experienced physical violence in the 12 months prior to the survey has an RSE of 5.9%. There are 19 chances out of 20 that an estimate of between 3.1% and 3.9% (an MoE of +/- 0.4 percentage points) of women would have been obtained if all dwellings had been included in the survey.
Diagram: confidence intervals for the two estimates described above.

  • The value of the test statistic (using the formula shown in the significance testing section above) is 4.62

As the value of this test statistic, at 4.62, is greater than 1.96, there is evidence, with a 95% level of confidence, of a statistically significant difference between the two estimates. Calculating the confidence intervals for the proportions of men and women who experienced physical violence in the 12 months prior to the survey shows that the two intervals do not overlap (where confidence intervals do not overlap there is always a statistically significant difference). Therefore there is evidence to suggest that men were more likely than women to have experienced physical violence in the 12 months prior to the survey.
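
The published example can be reproduced from the rounded figures quoted above, recovering each SE from its MoE (a sketch; small rounding differences are possible):

    import math

    # Proportions (per cent) and their 95% MoEs from the example above.
    men, women = 5.4, 3.5
    se_men, se_women = 0.7 / 1.96, 0.4 / 1.96  # SE = MoE / 1.96

    se_diff = math.sqrt(se_men**2 + se_women**2)
    test_statistic = (men - women) / se_diff
    print(round(test_statistic, 2))  # 4.62 -> greater than 1.96, so significant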

NON-SAMPLING ERROR

Non-sampling error may occur in any data collection, whether it is based on a sample or a full count such as a census. Non-sampling errors occur when survey processes work less effectively than intended.

Through careful design and testing of the questionnaire, training of interviewers, and extensive editing and quality control procedures at all stages of data collection and processing, non-sampling error has been minimised. However, the information recorded in the survey is essentially 'as reported' by respondents, and hence may differ from information available from other sources, or collected using a different methodology.

The major sources of non-sampling error are:
  • Errors related to scope and coverage
  • Response errors due to incorrect interpretation or wording of questions
  • Errors related to recall
  • Bias due to non-response, because the experiences and other characteristics of non-responding persons may differ from those of responding persons
  • Errors in processing such as mistakes in the recording or coding of the data obtained.

These sources of error are discussed below.

Errors related to survey scope

Some dwellings may have been inadvertently included or excluded due to inaccuracies in the lists of dwellings in the selected areas. In addition, some people may have been inadvertently included or excluded, due to difficulties in applying the scope rules for household visitors or people over 18 years. However, since the ABS has gained considerable experience in and refined these procedures over many years, any resultant errors are considered to be minimal.

Response errors

Response errors may have arisen from three main sources:
  • Flaws in questionnaire design and methodology
  • Flaws in interviewing technique
  • Inaccurate reporting by the respondent.
Errors may be caused by misleading or ambiguous questions, inadequate or inconsistent definitions of terminology used, or poor overall survey design (for example, context effects where responses to a question are directly influenced by the preceding questions). In order to overcome problems of this kind, individual questions and the questionnaire overall were thoroughly tested before being finalised for use in the survey, and interviewers were appropriately trained (for more details on testing undertaken and interviewer training, see the Survey Development and Data Collection page of this User Guide).

During testing, particular attention was given to the wording of questions and respondent interpretation of them, as well as to the interviewer instructions, to ensure that information collected fitted within the relevant definitions.

While the questionnaire was improved and streamlined through testing, the type and amount of data required from the survey resulted in a complex questionnaire. In some cases, such as when a person had experienced incidents of violence by a number of different perpetrators, errors may have resulted from interviewer and/or respondent confusion.

In any survey, inaccurate reporting may occur due to respondents misunderstanding the questions or answering incorrectly to protect their personal integrity, their personal safety, or to protect somebody else. For example, some people may not have reported incidents they experienced, particularly if the perpetrator was somebody close to them, such as a partner or family member. However, conducting the interviews alone with respondents, and the introduction of the Computer Assisted Self Interview (CASI) for the voluntary component, were procedures used to minimise this effect.

In addition, extensive editing and quality control procedures were applied at all stages of data processing. In situations where known inconsistencies still remain in the data, or areas have been identified as potentially open to misinterpretation, these are identified in the interpretation section of the relevant content topic pages of this User Guide.

Errors related to recall

Recall errors may arise in a number of ways. People may forget to report incidents that occurred in the past, or they may report incidents as occurring in a more recent time period. Recall errors are likely to be greater for information collected about incidents that occurred a long time ago.

Due to the possibility of recall errors, and to reduce respondent burden, detailed information about the characteristics of a person's most recent incident of each of the 8 types of violence was not collected where the violence occurred 10 years ago or more.

Non-response bias

Non-response occurs when people cannot or will not cooperate, or cannot be contacted. Non-response can affect the reliability of results and can introduce a bias. The magnitude of any bias depends on the rate of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not.

The 2016 PSS achieved an overall response rate of 68.7% (fully responding households, after sample loss). Data to accurately quantify the nature and extent of the differences in experiences of violence between respondents and non-respondents are not available. Under- or over-representation of particular demographic groups in the sample is compensated for at the state, section of state (i.e. capital city and balance of state), sex, age group, employment status, country of birth and marital status levels in the weighting process. Other disparities are not adjusted for.

The following methods were adopted to reduce the level and impact of non-response:
  • Introduction of the Computer Assisted Self Interview (CASI) for the sensitive topics, or the alternative option of continuing with a face-to-face interview (CAPI) conducted with the respondent in a private setting
  • The use of interviewers, where available, who could speak languages other than English (where the language spoken was able to be established)
  • Follow-up of respondents if there was initially no response
  • Weighting to population benchmarks to reduce non-response bias.

Households with incomplete interviews were treated as fully responding for estimation purposes where the only questions that were not answered allowed 'don't know' or refusal options, such as income, current partner demographics, or abuse before the age of 15. These responses were coded to 'Not known' or 'Refusal' categories as applicable.

In addition, the characteristics of the 3,700 respondents who completed only the compulsory component of the survey were able to be analysed for non-response bias.

Processing Errors

Processing errors may occur at any stage between the initial collection of the data and the final compilation of statistics. These may be due to a failure of computer editing programs to detect errors in the data, or may occur during the manipulation of raw data to produce the final survey data files; for example, in the course of deriving new data items from raw survey data, or during the estimation procedures or weighting of the data file.

To minimise the likelihood of these errors occurring, a number of quality assurance processes were employed:
  • Trigram coders. Within the instruments, trigram coders were used to aid the interviewer with the collection of demographic data, such as education level, country of birth and language spoken. This was complemented by manual coding of text fields where interviewers could not find an appropriate response in the coder.
  • Computer editing. Edits were devised to ensure that logical sequences were followed in the questionnaires, that necessary items were present, and that specific values lay within certain ranges. These edits were designed to detect reporting and recording errors, incorrect relationships between data items, and missing data items. With the introduction of the CASI, a reduced number of edits were used in order not to confuse the respondent, with only key edits (such as those associated with perpetrator type and where sequencing would be impacted) applied to the instrument. As such, there are a number of areas where there are known inconsistencies in reporting, and these are raised in the relevant module chapters.
  • Data file checks. At various stages during processing (such as after computer editing and subsequent amendments, weighting of the file, and derivation of new data items), frequency counts and/or tabulations were obtained from the data file showing the distribution of persons for different characteristics. These were used as checks on the contents of the data file, to identify unusual values which might have significantly affected estimates, and illogical relationships not previously identified by edits. Further checks were conducted to ensure consistency between related data items, and between relevant populations.
  • Comparison of data. Where possible, checks of the data were undertaken to ensure consistency of the survey outputs against results of previous PSS cycles and data available from other sources.

Other Factors Affecting Estimates

In addition to data quality issues, there are a number of both general and topic-specific factors which should be considered in interpreting the results of this survey. The general factors affect all estimates obtained, but may affect topics to a greater or lesser degree depending on the nature of the topic and the uses to which the estimates are put. This section outlines these general factors. Additional issues relating to the interpretation of individual topics are discussed in the topic descriptions provided in other sections of this User Guide.

Collection mode

For the 2016 PSS, the use of a CASI to collect information was introduced for the voluntary modules. The CASI mode allowed respondents to report their information directly into the questionnaire on the interviewer laptop without the need to verbalise their experiences to an interviewer. As such, this mode may elicit additional experiences which may otherwise have gone unreported. Just over half of respondents opted to report via the CASI, with the remaining respondents continuing the survey with the interviewer.

Analysis of the data by mode has identified that, generally, mode has not had a significant impact on the data, with similar trends being replicated in both CAPI and CASI. There are some areas where there appear to be significantly more responses in one mode than the other; however, these may be more related to the characteristics of the people using that mode than to any real mode effect.

Concepts and Definitions

The scope of each topic and the concepts and definitions associated with individual pieces of information should be considered when interpreting survey results. This information is available for individual topics of this User Guide.

Reference Periods

All results should be considered within the context of the time references that apply to the various topics. Different reference periods were used for specific topics (e.g. ‘ever’ for sexual harassment, ‘since the age of 15’ for experiences of violence and stalking, ‘before the age of 15’ for experiences of abuse or witnessing violence) or questions (e.g. ‘in the last 12 months’ and ‘in the 12 months after the incident’ for experiences of anxiety or fear).

Although it can be expected that a larger section of the population would have reported taking a certain action if a longer reference period had been used, the increase is not proportionate to the increase in time. This should be taken into consideration when comparing results from this survey to data from other sources where the data relates to different reference periods.

Classifications and Categories

The classifications and categories used in the survey provide an indication of the level of detail available in survey output. However, the ability of respondents to provide the data may limit the amount of detail that can be output. Classifications used in the survey can be found in Appendix 2: ABS Standard Classifications.

Collection Period

The 2016 PSS was enumerated from 6th November 2016 to 3rd June 2017. When considering survey results over time or comparing them with data from another source, care must be taken to consider the possible effect of any differences between the collection periods on the data.

CONFIDENTIALITY (incl. Perturbation)

The Census and Statistics Act 1905 provides the authority for the ABS to collect statistical information, and requires that statistical output shall not be published or disseminated in a manner that is likely to enable the identification of a particular person or organisation. This requirement means that the ABS must take care to ensure that any statistical information about individual respondents cannot be derived from published data.

To minimise the risk of identifying individuals in aggregate statistics, a technique is used to randomly adjust cell values. This technique is called perturbation. Perturbation involves a small random adjustment of the statistics and is considered the most satisfactory technique for avoiding the release of identifiable statistics while maximising the range of information that can be released. These adjustments have a negligible impact on the underlying pattern of the statistics.

After perturbation, a given published cell value will be consistent across all tables. However, adding up cell values to derive a total will not necessarily give the same result as published totals. Where possible, a footnote has been applied to an estimated total where this is apparent in a diagram or graph (for example, if males who experienced violence and females who experienced violence do not add up to persons who have experienced violence).

The introduction of perturbation in publications ensures that these statistics are consistent with statistics released via services such as TableBuilder. Perturbation has been applied to 2016 PSS published data. Data from previous PSS or WSS cycles presented in Personal Safety, Australia 2016 (cat. no. 4906.0) have not been perturbed, but have been confidentialised where required using suppression of cells.