4530.0 - Crime Victimisation, Australia, 2018-19 Quality Declaration 
Latest issue released at 11:30 am (Canberra time) 18/02/2020

Data quality (Technical note)

Reliability of the estimates

The estimates in this publication are based on information obtained from a sample survey. Errors in data collection or processing, known as non-sampling error, can affect the reliability of the resulting statistics. In addition, the reliability of estimates based on sample surveys is subject to sampling variability; that is, the estimates may differ from those that would have been produced had all persons in the population been included in the survey. This is known as sampling error.

Non-sampling error

Non-sampling error may occur in any statistical collection, whether it is based on a sample or a full count such as a census. Sources of non-sampling error include non-response, errors in reporting by respondents or recording of answers by interviewers, and errors in coding and processing data.

It is not possible to quantify non-sampling error; however, every effort is made to reduce it through careful design and testing of questionnaires, training and supervision of interviewers, and extensive editing and rigorous quality control procedures at all stages of data processing.

Sampling error

Sampling error refers to the difference between an estimate obtained from surveying a sample of persons, and the result that would have been obtained if all persons had been surveyed.

One measure of sampling error is given by the standard error (SE), which indicates the extent to which an estimate might have varied by chance because only a sample of persons was surveyed. There are about two chances in three (67%) that a sample estimate will differ by less than one SE from the number that would have been obtained if all persons had been surveyed, and about 19 chances in 20 (95%) that the difference will be less than two SEs.

Relative standard error

In this publication, the standard error of the estimate is expressed as a percentage of the estimate, known as the relative standard error (RSE), which is a useful measure as it indicates the size of the error relative to the estimate:

    RSE(x) = \frac{SE(x)}{x} \times 100

Only estimates (counts or percentages) with an RSE of less than 25% are considered sufficiently reliable for most analytical purposes. However, estimates with an RSE over 25% are also published. Estimates with an RSE in the range 25% to 50% are less reliable and should be used with caution, while estimates with an RSE greater than 50% are considered too unreliable for general use.

The Excel files available from the Downloads tab contain all the data tables produced for this release, including all estimates and their corresponding RSEs. All cells in the Excel spreadsheets containing an estimate with an RSE of 25% or greater are annotated with asterisks, indicating whether the RSE of the estimate is in the range 25% to 50% (single asterisk *) or is greater than 50% (double asterisk **).
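
The same annotation rule can be expressed in a few lines of code. The Python sketch below is illustrative only; the function names are not part of the published output, and the figures used are the estimate of 462,200 and its SE of about 23,100 from the worked example in the following section.

    def rse(estimate, standard_error):
        """Relative standard error: the SE expressed as a percentage of the estimate."""
        return standard_error / estimate * 100

    def reliability_flag(rse_pct):
        """Annotate an estimate as in the published tables: no flag for an RSE
        below 25%, '*' for 25% to 50%, and '**' for greater than 50%."""
        if rse_pct < 25:
            return ""
        if rse_pct <= 50:
            return "*"
        return "**"

    print(round(rse(462_200, 23_100), 1))   # 5.0 -> RSE of about 5%
    print(reliability_flag(5.0))            # "" (reliable for most analytical purposes)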

For more details see What is a Standard Error and Relative Standard Error, Reliability of estimates for Labour Force data.

Calculation of standard error

Standard error (SE) can be calculated using the estimate (count or percentage) and the corresponding RSE. For example, if the estimated number of persons who experienced physical assault in the last 12 months was 462,200, with a corresponding RSE of 5.0%, the SE (rounded to the nearest 100) is calculated by:

    SE = \frac{RSE}{100} \times \text{estimate} = \frac{5.0}{100} \times 462,200 \approx 23,100

Therefore, there is about a two in three chance that the result that would have been obtained had all persons been included in the survey falls within the range of one standard error below to one standard error above the estimate (439,100 to 485,300), and about a 19 in 20 chance that the result would have fallen within the range of two standard errors below to two standard errors above the estimate (416,000 to 508,400).
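
As a cross-check, the short Python sketch below reproduces this arithmetic; the variable names are illustrative only.

    estimate = 462_200   # persons who experienced physical assault in the last 12 months
    rse_pct = 5.0        # relative standard error of the estimate (%)

    # SE = RSE/100 x estimate, rounded to the nearest 100 as in the text
    se = round(rse_pct / 100 * estimate, -2)
    print(se)                                    # 23100.0

    # About a two in three chance the result for all persons lies within one SE,
    # and about a 19 in 20 chance it lies within two SEs, of the estimate.
    print(estimate - se, estimate + se)          # 439100.0 485300.0
    print(estimate - 2 * se, estimate + 2 * se)  # 416000.0 508400.0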



Relative standard error of proportions

Proportions formed from the ratio of two estimates are also subject to sampling error. The size of the error depends on the accuracy of both the numerator and the denominator. A formula to approximate the RSE of a proportion is given below. This formula is only valid when x is a subset of y:

    RSE\left(\frac{x}{y}\right) \approx \sqrt{[RSE(x)]^2 - [RSE(y)]^2}

As an example, if 86,600 persons experienced physical assault by an intimate partner, representing 29.5% of all persons who experienced physical assault by a known person (293,600); and if the RSE for the number of persons experiencing physical assault by an intimate partner is 7.7% and the RSE for the number of persons experiencing physical assault by a known person is 6.7%; then, applying the above formula, the RSE of the proportion is:

    RSE\left(\frac{x}{y}\right) \approx \sqrt{(7.7)^2 - (6.7)^2} = \sqrt{59.29 - 44.89} = \sqrt{14.40} \approx 3.8\%

Using the formula given above, the standard error (SE) for the proportion of persons who experienced physical assault by an intimate partner (as a proportion of those who experienced physical assault by a known person) is 1.1% (0.038 × 29.5). There are about two chances in three that the true proportion of persons who experienced physical assault by an intimate partner (as a proportion of those who experienced physical assault by a known person) is between 28.4% and 30.6%, and 19 chances in 20 that the true proportion is between 27.3% and 31.7%.
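
The calculation can also be scripted. The Python sketch below reproduces the figures above; it assumes, as the formula requires, that the numerator population is a subset of the denominator population.

    from math import sqrt

    rse_x = 7.7        # RSE (%) of persons assaulted by an intimate partner (86,600)
    rse_y = 6.7        # RSE (%) of persons assaulted by a known person (293,600)
    proportion = 29.5  # 86,600 as a percentage of 293,600

    # RSE of the proportion: sqrt(RSE(x)^2 - RSE(y)^2), valid when x is a subset of y
    rse_prop = sqrt(rse_x**2 - rse_y**2)
    print(round(rse_prop, 1))      # 3.8

    # SE of the proportion, in percentage points
    se_prop = rse_prop / 100 * proportion
    print(round(se_prop, 1))       # 1.1

    # One- and two-SE ranges around the proportion
    print(round(proportion - se_prop, 1), round(proportion + se_prop, 1))          # 28.4 30.6
    print(round(proportion - 2 * se_prop, 1), round(proportion + 2 * se_prop, 1))  # 27.3 31.7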

Standard error of the difference between estimates

The difference between two survey estimates (counts or percentages) is also subject to sampling error, and can therefore be measured using standard error. The standard error of the difference between two estimates is determined by the individual standard errors of the two estimates and the relationship (correlation) between them. An approximate standard error of the difference between two estimates (x, y) can be calculated using the following formula:

    SE(x - y) \approx \sqrt{[SE(x)]^2 + [SE(y)]^2}

While this formula will only be exact for differences between separate and uncorrelated characteristics or subpopulations, it provides a good approximation for the differences likely to be of interest in this publication.
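
A minimal sketch of this approximation is shown below; the helper name is illustrative and the standard errors used are hypothetical rather than taken from the publication.

    from math import sqrt

    def se_difference(se_x, se_y):
        # Approximate SE of the difference between two uncorrelated estimates
        return sqrt(se_x**2 + se_y**2)

    # Hypothetical SEs of 10,000 and 12,000 for the two estimates being compared
    print(se_difference(10_000, 12_000))   # about 15,620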

Significance testing

The difference between two survey estimates can be tested for statistical significance, in order to determine the likelihood of there being a real difference between the populations with respect to the characteristic being measured. The standard error of the difference between two survey estimates (x and y) can be calculated using the formula in the preceding section. This standard error is then used in the following formula to calculate the test statistic:

    \frac{|x - y|}{SE(x - y)}

If the value of the test statistic is greater than 1.96, then this supports, with a 95% level of confidence, a real (i.e. statistically significant) difference between the two populations with respect to the characteristic being measured. If the test statistic is not greater than 1.96, it cannot be stated with a 95% level of confidence that there is a real difference between the populations with respect to that characteristic.
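
A minimal sketch of this test, using the approximate standard error of the difference from the preceding section, is given below. The estimates and standard errors are hypothetical and are not drawn from the tables.

    from math import sqrt

    def test_statistic(x, se_x, y, se_y):
        # |x - y| divided by the approximate SE of the difference
        return abs(x - y) / sqrt(se_x**2 + se_y**2)

    # Hypothetical estimates of 300,000 (SE 15,000) and 250,000 (SE 14,000)
    stat = test_statistic(300_000, 15_000, 250_000, 14_000)
    print(round(stat, 2))   # 2.44
    print(stat > 1.96)      # True -> a statistically significant difference at the
                            # 95% level of confidence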

The following survey estimates have been significance tested to determine whether any differences are statistically significant:

  • Annual changes between 2017–18 and 2018–19 in personal and household crime victimisation rates (Tables 4c and 6c);
  • Annual changes between 2017–18 and 2018–19 in personal and household crime reporting rates (Tables 5c and 7c);
  • Annual changes between 2017–18 and 2018–19 in the proportion of persons who believed alcohol or any other substance contributed to their most recent incident of physical assault and face-to-face threatened assault (Table 8c);
  • Differences between state and territory personal and household crime victimisation rates and equivalent national victimisation rates for 2018–19 (Tables 2, 3, 4c, and 6c); and
  • Differences between state and territory personal and household crime reporting rates and equivalent national reporting rates for 2018–19 (Tables 2, 3, 5c, and 7c).

Significant differences have been annotated with a footnote in the above tables. In all other tables which do not show the results of significance testing, users should take RSEs into account when comparing estimates for different populations, or undertake significance testing using the formula provided to determine whether there is a statistically significant difference between any two estimates.

Only data with a relative standard error (RSE) of less than 25% are included in the publication commentary, unless otherwise indicated, and any differences between populations and changes over time that are referred to are statistically significant. All data contained in the commentary are available for download as data cubes from the Downloads tab.