Australian Bureau of Statistics

8166.0 - Summary of IT Use and Innovation in Australian Business, 2011-12
Previous Issue - Released at 11:30 AM (Canberra time) 20/06/2013

TECHNICAL NOTE DATA QUALITY


INTRODUCTION

1 When interpreting the results of a survey, it is important to take into account factors that may affect the reliability of the estimates. Estimates in this publication are subject to both non-sampling and sampling errors.


NON-SAMPLING ERRORS

2 Non-sampling errors may arise as a result of errors in the reporting, recording or processing of the data and can occur even if there is a complete enumeration of the population. These errors can be introduced through inadequacies in the questionnaire, treatment of non-response, inaccurate reporting by respondents, errors in the application of survey procedures, incorrect recording of answers and errors in data capture and processing.

3 The extent to which non-sampling error affects the results of the survey is difficult to measure. Every effort is made to reduce non-sampling error by careful design and testing of the questionnaire, efficient operating procedures and systems, and the use of appropriate methodology.

4 Some of the items collected in the BCS are dynamic in nature and the concepts measured are subject to evolution and refinement over time; it is not possible to measure the impact of these changes on data quality.

5 The approach to quality assurance for the BCS aims to make the best use of ABS resources to meet user-prioritised requirements, both in terms of data quality and timing of release. The approach specifies the level and degree to which each data item is quality assured, noting that only some of the total output from the BCS can be quality assured to the highest standards. Different priorities are assigned to groups of data items, with the highest priority assigned to key point-in-time data on business use of IT and innovation.

6 The 2011-12 BCS had a response rate of 95%.


SAMPLING ERROR

7 The difference between estimates obtained from a sample of businesses, and the estimates that would have been produced if the information had been obtained from all businesses, is called sampling error. The expected magnitude of the sampling error associated with any estimate can be estimated from the sample results. One measure of sampling error is given by the standard error (SE), which indicates the degree to which an estimate may vary from the value that would have been obtained from a full enumeration (the 'true' figure). There are about two chances in three that a sample estimate differs from the true value by less than one standard error, and about nineteen chances in twenty that the difference will be less than two standard errors.

8 The following is an example of the use of the standard error, applied to the total proportion of businesses placing orders via the internet. As presented in this release, the estimated proportion of businesses placing orders via the internet was 55.3%. The standard error of this estimate was 1.2%. There would be approximately two chances in three that a full enumeration would have given a figure in the range 54.1% to 56.5%, and nineteen chances in twenty that it would be in the range 52.9% to 57.7%.
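Under the normal approximation described above, the quoted ranges follow directly from the estimate and its standard error. A minimal sketch, using the figures from the worked example (variable names are illustrative, not ABS code):

```python
# Confidence ranges for a proportion estimate under the normal approximation.
# Figures taken from the worked example: estimate 55.3%, SE 1.2%.
estimate = 55.3  # estimated proportion of businesses placing orders via the internet (%)
se = 1.2         # standard error of the estimate (%)

# About two chances in three (~68%) that the full-enumeration value lies within 1 SE.
one_se_range = (estimate - se, estimate + se)

# About nineteen chances in twenty (~95%) that it lies within 2 SEs.
two_se_range = (estimate - 2 * se, estimate + 2 * se)

print(round(one_se_range[0], 1), round(one_se_range[1], 1))  # 54.1 56.5
print(round(two_se_range[0], 1), round(two_se_range[1], 1))  # 52.9 57.7
```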

9 In this publication, indications of sampling variability are measured by relative standard errors (RSEs). The relative standard error is a useful measure in that it provides an immediate indication of the percentage errors likely to have occurred due to sampling, and thus avoids the need to refer to the size of the estimate. Relative standard errors are shown in the Relative Standard Error table in this section. RSEs for all data included in this release (including data cube content) are available upon request.

10 To annotate proportion estimates, a value of 50% has been used in the denominator of the RSE calculation rather than the estimated proportion from the survey data. This avoids inconsistencies between the way very low and very high proportions are annotated. Relative standard errors for estimates in this publication have therefore been calculated from the actual standard error of the survey estimate (referred to as x) in the following manner: RSE%(x) = (SE(x)*100)/50.

11 Using the previous example, the standard error for the estimated proportion of businesses placing orders via the internet was 1.2%. Multiplying this by 100 and dividing by 50 gives an RSE, calculated on this basis, of 2.4. It is these figures that appear in the table appended to this chapter.
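The fixed-denominator convention in paragraphs 10 and 11 amounts to a one-line calculation. A small sketch (the function name is illustrative, not ABS code):

```python
def rse_percent(se):
    """RSE for a proportion estimate, using the fixed 50% denominator
    described in this publication rather than the survey estimate itself."""
    return se * 100 / 50

# Worked example from the text: an SE of 1.2% gives an RSE of 2.4.
print(rse_percent(1.2))  # 2.4
```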

12 For the tables in this publication, estimates with RSEs between 10% and 25% are annotated with the symbol '^'. These estimates should be used with caution as they are subject to sampling variability too high for some purposes. Estimates with RSEs between 25% and 50% are annotated with the symbol '*', indicating that the estimates should be used with caution as they are subject to sampling variability too high for most practical purposes. Estimates with an RSE greater than 50% are annotated with the symbol '**', indicating that the sampling variability causes the estimates to be considered too unreliable for general use.

13 For estimates of proportion, the symbol '^' means that the estimate from full enumeration could lie more than a decile away, so the estimate should be used with caution. For example, a proportion estimate of 30% annotated with '^' means the full enumeration value could lie beyond the range 20% to 40%. The symbol '*' means the estimate from full enumeration could lie more than a quartile away and is subject to sampling variability too high for most practical purposes; a proportion estimate of 30% annotated with '*' means the full enumeration value could lie beyond the range 5% to 55%. Proportion estimates annotated with the symbol '**' have a sampling error that causes the estimates to be considered too unreliable for general use.
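The annotation rules in paragraphs 12 and 13 reduce to a threshold lookup on the RSE. A hedged sketch of that mapping (function name illustrative; treatment of values exactly on a boundary is an assumption, as the text does not specify it):

```python
def annotate(rse):
    """Return the annotation symbol this publication attaches to an
    estimate with the given RSE (%), per the thresholds in the text."""
    if rse > 50:
        return '**'  # too unreliable for general use
    if rse > 25:
        return '*'   # too high for most practical purposes
    if rse > 10:
        return '^'   # use with caution
    return ''        # no annotation needed

print(annotate(30.0))  # *
print(annotate(2.4))   # the worked example's RSE needs no annotation
```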

14 Readers should note that most of the data in this publication have an RSE of less than 10%.

Relative Standard Error - Summary of IT use and innovation, selected indicators(a), by employment size, 2011-12(b)

                                                            0-4       5-19      20-199    200 or more  Total
                                                            persons   persons   persons   persons
                                                            %         %         %         %            %

Total number of businesses(c)                               1.40      2.50      4.14      6.26         0.50

IT indicators
Businesses with:
    internet access                                         1.60      2.04      1.59      0.13         1.21
    web presence                                            2.59      3.80      4.57      0.71         2.08
Businesses that:
    placed orders via the internet                          3.07      3.14      4.82      5.03         2.37
    received orders via the internet                        2.55      3.43      4.52      6.65         1.83
Internet income                                             15.35     12.92     17.33     3.91         5.65
Businesses with internet access that reported broadband
    as their main connection                                1.63      2.16      1.82      0.14         1.27

Innovation indicators
Businesses with:
    introduced innovation                                   2.65      3.82      5.13      5.66         2.09
    innovation still in development                         2.27      3.16      5.65      5.83         1.88
    abandoned innovation                                    1.29      2.05      2.55      0.80         1.04
    any innovative activity (innovation-active businesses)  2.55      3.64      5.14      4.48         2.04

(a) RSEs for 2011-12 are on a proportions basis.
(b) Proportions are of all businesses in each output category.
(c) Business counts are provided for contextual information only. Refer to Explanatory Notes 20 and 21.




© Commonwealth of Australia 2014

Unless otherwise noted, content on this website is licensed under a Creative Commons Attribution 2.5 Australia Licence together with any terms, conditions and exclusions as set out in the website Copyright notice. For permission to do anything beyond the scope of this licence and copyright terms contact us.