DATA QUALITY AND TECHNICAL NOTES
Since the estimates are based on information obtained from a sample of the population, they are subject to sampling error (or sampling variability). That is, the estimates may differ from those that would have been produced had all persons been included in the survey.
The magnitude of the sampling error associated with a sample estimate depends on factors such as the sample design, the size of the sample and the variability of the characteristic being measured within the population.
Measures of sampling variability
Sampling error is a measure of the difference between published estimates, derived from a sample of persons, and the value that would have been produced if the total population (as defined for the scope of the survey) had been included in the survey.
One measure of the likely difference is given by the standard error estimate (SE), which indicates the extent to which an estimate might have varied because only a sample of dwellings was included. There are about two chances in three (67%) that the sample estimate will differ by less than one SE from the figure that would have been obtained if all dwellings had been included, and about 19 chances in 20 that the difference will be less than two SEs.
For estimates of population sizes, the size of the SE generally increases with the level of the estimate: the larger the estimate, the larger the SE. However, the larger the estimate, the smaller the SE becomes in percentage terms. Larger sample estimates are therefore relatively more reliable than smaller estimates.
Another measure of the likely difference is the relative standard error (RSE), which is obtained by expressing the SE as a percentage of the estimate to which it relates. The RSE is a useful measure in that it provides an immediate indication of the percentage error likely to have occurred due to sampling, and thus avoids the need to refer also to the size of the estimate.
Relative standard errors for 2012 and 2016 estimates are published in 'direct' form. RSEs for estimates are calculated for each separate estimate and published individually using a replicate weights technique (Jackknife method). Direct calculation of RSEs can result in larger estimates having larger RSEs than smaller ones, since these larger estimates may have more inherent variability. More information about the replicate weights technique can be found below.
Estimates with relative standard errors less than 25% are considered sufficiently reliable for most purposes. However, estimates with relative standard errors of 25% or more are included in ABS publications of results from this survey. Estimates with RSEs greater than 25% but less than or equal to 50% are annotated with an asterisk (*) to indicate they are subject to high SEs relative to the size of the estimate and should be used with caution. Estimates with RSEs of greater than 50%, annotated by a double asterisk (**), are considered too unreliable for most purposes. These estimates can be aggregated with other estimates to reduce the overall sampling error.
Another measure is the Margin of Error (MoE), which describes the distance from the population value that the sample estimate is likely to be within, and is specified at a given level of confidence. Confidence levels typically used are 90%, 95% and 99%. For example, at the 95% confidence level, the MoE indicates that there are about 19 chances in 20 that the estimate will differ by less than the specified MoE from the population value (the figure obtained if all dwellings had been enumerated). The MoE at the 95% confidence level is expressed as 1.96 times the SE.
A confidence interval expresses the sampling error as a range in which the population value is expected to lie at a given level of confidence. The confidence interval can easily be constructed from the MoE at the same level of confidence, by taking the estimate plus or minus the MoE. In other terms, the 95% confidence interval is the estimate +/- MoE, i.e. the range from the estimate minus 1.96 times the SE to the estimate plus 1.96 times the SE.
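As a worked sketch of these relationships, the RSE, 95% MoE and 95% confidence interval can all be derived from an estimate and its SE. The figures below are hypothetical, not PSS estimates:

```python
# Illustrative sketch using hypothetical figures (not PSS estimates):
# derive the RSE, 95% MoE and 95% confidence interval from an estimate
# and its standard error.

def rse(estimate: float, se: float) -> float:
    """Relative standard error: the SE as a percentage of the estimate."""
    return se / estimate * 100

def moe_95(se: float) -> float:
    """95% margin of error: 1.96 standard errors."""
    return 1.96 * se

def ci_95(estimate: float, se: float) -> tuple[float, float]:
    """95% confidence interval: estimate plus or minus the 95% MoE."""
    margin = moe_95(se)
    return estimate - margin, estimate + margin

estimate, se = 100_000, 5_000      # hypothetical estimate and SE
print(rse(estimate, se))           # 5% RSE: reliable for most purposes
print(ci_95(estimate, se))         # roughly (90_200, 109_800)
```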
The 95% MoE can also be calculated from the RSE as follows, where y is the value of the estimate:

MoE(y) = RSE(y)/100 * y * 1.96
Standard error of a difference
The standard error of the difference between (or sum of) two estimates, x and y, can be approximated by:

SE(x-y) = sqrt([SE(x)]^2 + [SE(y)]^2)

The 95% MoE for proportions, differences and sums can then be calculated as 1.96 times the corresponding SE.
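Under the common assumption that the two estimates are approximately independent (the square-root approximation becomes conservative when they are correlated), the calculation can be sketched as:

```python
import math

def se_diff(se_x: float, se_y: float) -> float:
    # Approximate SE of a difference (or sum) of two estimates,
    # assuming the estimates are not highly correlated:
    # SE(x - y) = sqrt(SE(x)^2 + SE(y)^2)
    return math.sqrt(se_x ** 2 + se_y ** 2)

def moe_95_diff(se_x: float, se_y: float) -> float:
    # 95% MoE of the difference: 1.96 times its SE.
    return 1.96 * se_diff(se_x, se_y)

print(se_diff(3.0, 4.0))       # 5.0
print(moe_95_diff(3.0, 4.0))   # ~9.8
```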
Derivation of replicate weights
Under the delete-a-group Jackknife method of replicate weighting, the sample was divided into 60 replicate groups. For each replicate, one group was dropped from the sample and the weights of the remaining records were recalculated, producing 60 sets of replicate weights in addition to the full-sample weights.
Application of replicate weights
As noted above, replicate weights enable variances of estimates to be calculated relatively simply. They also enable unit record analyses such as chi-square and logistic regression to be conducted, which take into account the sample design.
Replicate weights for any variable of interest can be calculated from the 60 replicate groups, giving 60 replicate estimates. The distribution of this set of replicate estimates, in conjunction with the full sample estimate, is then used to approximate the variance of the full sample.
The formulae for calculating the standard error (SE), relative standard error (RSE) and 95% Margin of Error (MoE) of an estimate y using this method are shown below, where y(g) denotes the estimate from the g-th of the 60 replicate groups:

SE(y) = sqrt( (59/60) * sum over g of (y(g) - y)^2 )

RSE(y) = SE(y)/y * 100

95% MoE(y) = SE(y) * 1.96
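A minimal sketch of this variance calculation follows. The function is general in the number of replicate groups (the PSS uses 60), and the figures in the example are hypothetical:

```python
import math

def jackknife_se(full_estimate: float, replicate_estimates: list[float]) -> float:
    # Delete-a-group Jackknife SE: SE(y) = sqrt((G - 1)/G * sum((y(g) - y)^2)),
    # where y is the full-sample estimate and y(g) are the G replicate
    # estimates (G = 60 in the 2016 PSS).
    g = len(replicate_estimates)
    squares = sum((yg - full_estimate) ** 2 for yg in replicate_estimates)
    return math.sqrt((g - 1) / g * squares)

def rse(full_estimate: float, se: float) -> float:
    return se / full_estimate * 100          # RSE(y) = SE(y)/y * 100

def moe_95(se: float) -> float:
    return se * 1.96                         # 95% MoE(y) = SE(y) * 1.96

# Hypothetical full-sample estimate of 10.0 with three replicate estimates:
se = jackknife_se(10.0, [9.0, 10.0, 11.0])
print(round(se, 4))    # 1.1547
```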
This method can also be used when modelling relationships from unit record data, regardless of the modelling technique used. In modelling, the full sample would be used to estimate the parameter being studied (such as a regression coefficient); i.e. the 60 replicate groups would be used to provide 60 replicate estimates of the survey parameter. The variance of the estimate of the parameter from the full sample is then approximated, as above, by the variability of the replicate estimates.
Availability of RSEs calculated using replicate weights
Actual RSEs for estimates (excl. proportion estimates) have been calculated in the publication Personal Safety, Australia 2016 (cat. no. 4906.0). The RSEs for estimates are available in spreadsheet format (datacubes) accessed by clicking on the downloads tab of the 2016 PSS publication. The RSEs in the spreadsheets were calculated using the replicate weights methodology.
Availability of MoEs calculated using replicate weights
Actual MoEs for proportion estimates have been calculated at the 95% confidence level for the publication Personal Safety, Australia 2016 (cat.no. 4906.0) and are available in spreadsheet format (datacubes) accessed by clicking on the downloads tab of the publication. The MoEs in the spreadsheets were calculated using the replicate weights methodology.
Significance Testing on Differences Between Survey Estimates
For comparing estimates between surveys or between populations within a survey it is useful to determine whether apparent differences are 'real' differences between the corresponding population characteristics or simply the product of differences between the survey samples. One way to examine this is to determine whether the difference between the estimates is statistically significant.
A statistical significance test for a comparison between estimates can be performed to determine whether it is likely that there is a difference between the corresponding population characteristics. The standard error of the difference between two corresponding estimates (x and y) can be calculated using the formula shown above in the Standard error of a difference section. This standard error is then used to calculate the following test statistic:

(x - y) / SE(x - y)
If the value of this test statistic is greater than 1.96 then there is good evidence, with a 95% level of confidence, of a statistically significant difference in the two populations with respect to that characteristic. Otherwise, it cannot be stated with confidence (at the 95% confidence level) that there is a real difference between the populations.
Example of estimates where there was a statistically significant difference
An estimated 5.4% of all men aged 18 years or over and 3.5% of all women aged 18 years or over had experienced physical violence during the 12 months prior to the survey.
As the value of this test statistic, at 4.62, is greater than 1.96, there was evidence, with a 95% level of confidence, of a statistically significant difference between the two estimates. Calculating the confidence intervals for the proportions of men and women who experienced physical violence in the 12 months prior to the survey shows that the two confidence intervals do not overlap (where confidence intervals do not overlap, the difference is always statistically significant). There is therefore evidence to suggest that men were more likely than women to have experienced physical violence in the 12 months prior to the survey.
NON-SAMPLING ERROR
Non-sampling error may occur in any data collection, whether it is based on a sample or a full count such as a census. Non-sampling errors occur when survey processes work less effectively than intended.
Through careful design and testing of the questionnaire, training of interviewers, and extensive editing and quality control procedures at all stages of data collection and processing, other non-sampling error has been minimised. However, the information recorded in the survey is essentially 'as reported' by respondents, and hence may differ from information available from other sources, or collected using a different methodology.
The major sources of non-sampling error are errors related to survey scope, response errors, errors related to recall, non-response, and processing errors.
These sources of error are discussed below.
Errors related to survey scope
Some dwellings may have been inadvertently included or excluded due to inaccuracies in the lists of dwellings in the selected areas. In addition, some people may have been inadvertently included or excluded due to difficulties in applying the scope rules for household visitors or people over 18 years. However, since the ABS has considerable experience with these procedures and has refined them over many years, any resultant errors are considered to be minimal.
Response errors
Response errors may have arisen from three main sources: deficiencies in questionnaire design and wording, confusion on the part of interviewers or respondents arising from the complexity of the questionnaire, and inaccurate reporting by respondents.
During testing, particular attention was given to the wording of questions and respondent interpretation of them, as well as to the interviewer instructions, to ensure that information collected fitted within the relevant definitions.
While the questionnaire was improved and streamlined through testing, the type and amount of data required from the survey resulted in a complex questionnaire. In some cases, such as when a person had experienced incidents of violence by a number of different perpetrators, errors may have resulted from interviewer and/or respondent confusion.
In any survey, inaccurate reporting may occur because respondents misunderstand the questions, or answer incorrectly to protect their personal integrity, their personal safety, or somebody else. For example, some people may not have reported incidents they experienced, particularly if the perpetrator was somebody close to them, such as a partner or family member. Procedures used to minimise this effect included conducting interviews alone with respondents and introducing the Computer Assisted Self Interview (CASI) for the voluntary component.
In addition, extensive editing and quality control procedures were applied at all stages of data processing. Where known inconsistencies remain in the data, or areas have been identified as potentially open to misinterpretation, these are identified in the interpretation section of the relevant content topic pages of this User Guide.
Errors related to recall
Recall errors may arise in a number of ways. People may forget to report incidents that occurred in the past, or they may report incidents as occurring in a more recent time period. Recall errors are likely to be greater for information collected about incidents that occurred a long time ago.
Because of the possibility of recall errors (and to reduce respondent burden), detailed information about the characteristics of a person's most recent incident of each of the 8 types of violence was not collected when the violence occurred 10 years ago or more.
Non-response
Non-response occurs when people cannot or will not cooperate, or cannot be contacted. Non-response can affect the reliability of results and can introduce bias. The magnitude of any bias depends on the rate of non-response and the extent of the difference between the characteristics of those people who responded to the survey and those who did not.
The 2016 PSS achieved an overall response rate of 68.7% (fully responding households, after sample loss). Data to accurately quantify the nature and extent of the differences in experiences of violence between respondents in the survey and non-respondents are not available. Under- or over-representation of particular demographic groups in the sample is compensated for in the weighting process at the state, section of state (i.e. capital city and balance of state), sex, age group, employment status, country of birth and marital status levels. Other disparities are not adjusted for.
The following methods were adopted to reduce the level and impact of non-response:
Households with incomplete interviews were treated as fully responding for estimation purposes where the only questions that were not answered allowed 'don't know' or refusal options, such as income, current partner demographics, or abuse before the age of 15. These responses were coded to ‘Not known’ or ‘Refusal’ categories as applicable.
In addition, the characteristics of a further 3,700 respondents who completed only the compulsory component of the survey were able to be analysed for non-response bias.
Processing errors
Processing errors may occur at any stage between the initial collection of the data and the final compilation of statistics. These may be due to a failure of computer editing programs to detect errors in the data, or may occur during the manipulation of raw data to produce the final survey data files, for example in the course of deriving new data items from raw survey data, or during the estimation or weighting procedures.
To minimise the likelihood of these errors occurring, a number of quality assurance processes were employed.
Other Factors Affecting Estimates
In addition to data quality issues, there are a number of both general and topic specific factors which should be considered in interpreting the results of this survey. The general factors affect all estimates obtained, but may affect topics to a greater or lesser degree depending on the nature of the topic and the uses to which the estimates are put. This section outlines these general factors. Additional issues relating to the interpretation of individual topics are discussed in the topic descriptions provided in other sections of this User Guide.
For the 2016 PSS, the use of a CASI to collect information was introduced for the voluntary modules. The CASI mode allowed respondents to report their information directly into the questionnaire on the interviewer laptop without the need to verbalise their experiences to an interviewer. As such, this mode may elicit additional experiences which may otherwise have gone unreported. Just over half of respondents opted to report via the CASI, with the remaining respondents continuing the survey with the interviewer.
Analysis of the data by mode has identified that mode has generally not had a significant impact on the data, with similar trends replicated in both CAPI and CASI. There are some areas where there appear to be significantly more responses in one mode than the other; however, these differences may relate more to the characteristics of the people using that mode than to any real mode effect.
Concepts and Definitions
The scope of each topic and the concepts and definitions associated with individual pieces of information should be considered when interpreting survey results. This information is available for individual topics of this User Guide.
All results should be considered within the context of the time references that apply to the various topics. Different reference periods were used for specific topics (e.g. ‘ever’ for sexual harassment, ‘since the age of 15’ for experiences of violence and stalking, ‘before the age of 15’ for experiences of abuse or witness violence) or questions (e.g. ‘in the last 12 months’ and ‘in the 12 months after the incident’ for experiences of anxiety or fear).
Although it can be expected that a larger section of the population would have reported taking a certain action if a longer reference period had been used, the increase is not proportionate to the increase in time. This should be taken into consideration when comparing results from this survey to data from other sources where the data relates to different reference periods.
Classifications and Categories
The classifications and categories used in the survey provide an indication of the level of detail available in survey output. However, the ability of respondents to provide the data may limit the amount of detail that can be output. Classifications used in the survey can be found in Appendix 2: ABS Standard Classifications.
The 2016 PSS was enumerated from 6th November 2016 to 3rd June 2017. When considering survey results over time or comparing them with data from another source, care must be taken to ensure that any differences between the collection periods take into consideration the possible effect of those differences on the data.
CONFIDENTIALITY (incl. Perturbation)
The Census and Statistics Act 1905 provides the authority for the ABS to collect statistical information, and requires that statistical output shall not be published or disseminated in a manner that is likely to enable the identification of a particular person or organisation. This requirement means that the ABS must ensure that statistical information about individual respondents cannot be derived from published data.
To minimise the risk of identifying individuals in aggregate statistics, a technique is used to randomly adjust cell values. This technique is called perturbation. Perturbation involves a small random adjustment of the statistics and is considered the most satisfactory technique for avoiding the release of identifiable statistics while maximising the range of information that can be released. These adjustments have a negligible impact on the underlying pattern of the statistics.
After perturbation, a given published cell value will be consistent across all tables. However, adding up cell values to derive a total will not necessarily give the same result as the published total. Where this is apparent in a diagram or graph (for example, where males who experienced violence and females who experienced violence do not add to persons who experienced violence), a footnote has been applied to the estimated total where possible.
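The ABS's actual perturbation algorithm is not described here; the sketch below is purely illustrative (the function name and cell-key format are invented for the example). It shows how seeding a small random adjustment from the cell's identity makes a given cell consistent across every table it appears in, while perturbed components no longer necessarily add to the perturbed total:

```python
import hashlib
import random

def perturb_cell(cell_key: str, count: int, max_adjust: int = 2) -> int:
    # Illustrative only; NOT the ABS algorithm. The adjustment is seeded
    # from the cell's identity, so the same cell receives the same small
    # perturbation in every table in which it is published.
    seed = int.from_bytes(hashlib.sha256(cell_key.encode()).digest()[:8], "big")
    adjustment = random.Random(seed).randint(-max_adjust, max_adjust)
    return max(0, count + adjustment)       # never perturb below zero

# The same cell perturbs identically wherever it appears...
assert perturb_cell("males|violence", 1234) == perturb_cell("males|violence", 1234)
# ...but perturbed components need not sum to the perturbed total.
```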
The introduction of perturbation in publications ensures that these statistics are consistent with statistics released via services such as TableBuilder. Perturbation has been applied to 2016 PSS published data. Data from previous PSS or WSS presented in the Personal Safety, Australia 2016 (cat. no. 4906.0) have not been perturbed, but have been confidentialised if required using suppression of cells.