4500.0 - Crime and Justice News, July 2012 to June 2013
Latest ISSUE Released at 11:30 AM (CANBERRA TIME) 01/08/2013  Final

UNDERSTANDING NATIONAL CRIME AND JUSTICE STATISTICS: A STATISTICAL LITERACY FEATURE

How do I know what this statistic means?

Statistics are a useful research tool and a necessary part of evidence-based decision making. They help us to quantify and summarise aspects of the world around us, report on the state of our economy or society at a point in time and provide points of comparison to help us to understand changes over time. However, to be truly informative, it is important to understand the concepts that lie behind the numbers.

This is because statistics are constructs; representations of people, places or transactions that reflect the definitions and methods by which the information is collected, recorded, counted and presented. Different statistics from different collections represent measurements of different concepts. For example, the ABS publishes national statistics relating to:

• crime victimisation experiences of people and households;
• victimisation as recorded by police;
• offenders proceeded against by police;
• defendants in Australian Criminal Courts; and
• people detained in Australian prisons and serving community-based orders.

Each of these statistical collections tells a story about a different, but related, part of the criminal justice system.

How do I know if a reported statistic is reliable?

The criminal justice system is measured through both survey data and administrative by-product data. Administrative by-product data is collected from police, court and prison records and represents a count of the people who went through those systems in the reference period. Survey data is based on a sample of the population and therefore the measurements are estimates. As an indicator of the accuracy of the estimate, figures in the Crime Victimisation, Australia publication are presented along with relative standard error (RSE) values. From the 2012-13 publication onwards (released in 2014), margin of error (MOE) values will also be presented to provide more information to help users assess whether any published estimate is suitably reliable for their purposes. This section gives a brief introduction to these concepts and how they can be used to assess and interpret the quality of data from sample surveys.

RSE and MOE are both measures of sampling error. Sampling error occurs because a sample of observations from the population is selected for data collection rather than collecting observations from the entire population. Sampling error reflects the difference between an estimate derived from a sample survey and the true value that would be obtained if the whole population was surveyed. Sampling error can be estimated to give an indication of the accuracy of an estimate and thereby provide the basis for appropriate analysis and interpretation of the data.
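The effect of sampling error described above can be illustrated with a small simulation. The sketch below is purely illustrative (the population size, sample size and 5% "true" rate are invented for the example, not taken from any ABS collection): repeated samples drawn from the same population each yield a slightly different estimate of the true value.

```python
import random

# Illustrative sketch: estimate a population proportion from a sample and
# observe how sample estimates vary around the true value (sampling error).
# All figures here are hypothetical, not from any ABS collection.
random.seed(42)

TRUE_RATE = 0.05                          # assumed "true" rate in the population
POPULATION = [1] * 5000 + [0] * 95000     # 100,000 people, 5% with the attribute

estimates = []
for _ in range(5):
    sample = random.sample(POPULATION, 2000)   # survey a sample of 2,000 people
    estimates.append(sum(sample) / len(sample))

for est in estimates:
    # Each sample estimate deviates from the true 5% by a different amount.
    print(f"estimate: {est:.1%}  (true value: {TRUE_RATE:.0%})")
```

Because only a sample is observed, no single estimate exactly equals the true rate; the spread of the estimates is what the standard error quantifies.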

Relative Standard Error

The relative standard error (RSE) expresses the standard error as a percentage of the estimate. The RSE avoids the need to refer back to the size of the estimate and can be useful when comparing two different estimates. However, because it is calculated as the standard error divided by the estimate, a small proportion estimate may have a large RSE, giving the impression that the estimate is of poor quality even though it is of comparable accuracy to other, larger estimates. For example, an estimate of 1.0% with a standard error of 0.3 has an RSE of 30%, whereas an estimate of 10% with the same standard error has an RSE of only 3%. A more informative measure for proportion estimates is the margin of error (MOE).
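The worked example above can be reproduced directly from the definition of the RSE. The short sketch below (function name is ours, not an ABS one) shows how the same standard error produces very different RSEs for a small and a large proportion estimate:

```python
def relative_standard_error(estimate, standard_error):
    """RSE: the standard error expressed as a percentage of the estimate."""
    return 100 * standard_error / estimate

# The two proportion estimates from the example (in percentage points):
print(relative_standard_error(1.0, 0.3))   # ~30 -> looks "poor quality"
print(relative_standard_error(10.0, 0.3))  # ~3  -> looks precise
```

Both estimates have the same absolute accuracy (a standard error of 0.3 percentage points); only the relative measure differs.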

Standard Error and Margin of Error

Standard error is a quantitative measure of the accuracy of an estimate: the extent to which an estimate derived from the sample survey can be expected to deviate from the true population value. The standard error can be added to and subtracted from the estimate to give ranges for different levels of confidence. Plus and minus one standard error gives a range at the 67% confidence level, and plus and minus two standard errors gives a range at the 95% confidence level. That is, there is a 67% chance that the true value lies within a range of plus and minus one standard error of the estimate, and a 95% chance that it lies within plus and minus two standard errors. These ranges are referred to as the ‘margin of error’ of the estimate. The 95% MOE will be presented in the publications.
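The relationship between standard error and the 95% margin of error can be sketched as below. The estimate and standard error used are hypothetical figures for illustration only; the 1.96 multiplier is the conventional, slightly more precise form of the "two standard errors" rule described above.

```python
def margin_of_error_95(standard_error):
    # Approximately two standard errors either side of the estimate;
    # 1.96 is the conventional multiplier for a 95% confidence level.
    return 1.96 * standard_error

estimate = 12.4   # hypothetical proportion estimate (%)
se = 0.9          # hypothetical standard error (percentage points)
moe = margin_of_error_95(se)
print(f"{estimate}% plus or minus {moe:.1f} percentage points "
      f"(95% CI: {estimate - moe:.1f}% to {estimate + moe:.1f}%)")
```

Read as: there is a 95% chance that the true population value lies inside the printed interval.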

Which measure of reliability should I use?

RSE values will continue to be presented for each published estimate and where the sampling variability of the estimate is considered to be high, the estimate will be flagged as follows:

• a single asterisk (*) denotes an RSE of between 25% and 50%, indicating that the estimate should be used with caution;
• a double asterisk (**) denotes an RSE of greater than 50%, indicating that the estimate is considered too unreliable for general use.
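The flagging rules above map directly onto a small lookup. The sketch below is our own illustration of the published thresholds, not ABS code:

```python
def flag_estimate(rse):
    """Return the annotation for a given RSE (%), per the published thresholds."""
    if rse > 50:
        return "**"   # too unreliable for general use
    if rse >= 25:
        return "*"    # should be used with caution
    return ""         # sampling variability not considered high

for rse in (10, 30, 60):
    print(f"RSE {rse}% -> flag: '{flag_estimate(rse)}'")
```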
In general, users interested in count estimates (such as the number of victims of physical assault compared to threatened assault) can continue to use the RSE as an indicator of the reliability of the estimate, as this measure presents sampling variability relative to the size of the estimate. Users should be cautious when analysing or interpreting estimates with low reliability (that is, those flagged as having high RSEs), or avoid using them altogether. Alternatively, users can use the MOE to obtain a range that will contain the true value with 95% confidence.

However, where the estimates are being used to understand proportion (that is, how common the attribute is within a population), margin of error provides a clearer indication of reliability.

Examples where MOE should be used include:

• to test whether the proportion or count of people with a certain characteristic/experience differs between various groups (such as the proportion of male victims of assault compared to female victims);
• to test whether the proportion or count of the population with a characteristic/experience is changing over time (such as the number of victims of physical assault in 2010-11 compared to those in 2011-12).
Users should always consider the size of the sampling error for all estimates to determine whether the estimate is of sufficient quality to be useful for their purposes.
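A common way to apply the MOE to the comparisons listed above is to test whether the difference between two estimates exceeds the 95% margin of error of that difference. The sketch below illustrates this under standard assumptions (independent estimates, standard errors combined in quadrature); the victimisation rates shown are invented for illustration only.

```python
import math

def difference_is_significant(est_a, se_a, est_b, se_b):
    """True if two estimates differ at the 95% confidence level."""
    diff = est_a - est_b
    se_diff = math.sqrt(se_a**2 + se_b**2)   # SEs of independent estimates combine in quadrature
    moe_diff = 1.96 * se_diff                # 95% margin of error of the difference
    return abs(diff) > moe_diff

# e.g. comparing two groups' victimisation rates (illustrative numbers only):
print(difference_is_significant(3.1, 0.3, 2.2, 0.25))  # gap exceeds its MOE
print(difference_is_significant(3.0, 0.5, 2.8, 0.5))   # gap within its MOE
```

If zero lies inside the margin of error of the difference, the apparent gap between the two estimates may be due to sampling error alone.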

Other forms of error

Statistics collected by all methods are also affected by non-sampling error. Non-sampling error is caused by factors other than sampling methodology, such as non-response, errors in reporting by respondents or in the recording of answers by interviewers, and errors in coding and processing the data. Non-sampling error can be neither precisely measured nor completely eradicated; however, every effort is made to reduce it through careful design and testing of questionnaires, training and supervision of interviewers, and extensive editing and quality control procedures at all stages of data processing. Information about non-sampling error can be found in the Explanatory Notes section of ABS publications. Understanding likely sources of non-sampling error provides important context for understanding survey data and is particularly important when comparing data from different sources.

For more information about statistical error and related concepts, please refer to Understanding Statistics on the ABS website.
