1 When interpreting the results of a survey, it is important to take into account factors that may affect the reliability of the estimates. Estimates in this publication are subject to both non-sampling and sampling errors.
2 Non-sampling errors may arise as a result of errors in the reporting, recording or processing of the data and can occur even if there is a complete enumeration of the population. These errors can be introduced through inadequacies in the questionnaire, treatment of non-response, inaccurate reporting by respondents, errors in the application of survey procedures, incorrect recording of answers, and errors in data capture and processing.
3 The extent to which non-sampling error affects the results of the survey is difficult to measure. Every effort is made to reduce non-sampling error by careful design and testing of the questionnaire, efficient operating procedures and systems, and the use of appropriate methodology.
4 Some of the items collected in the BCS are dynamic in nature and the concepts measured are subject to evolution and refinement over time; it is not possible to measure the impact of these changes on data quality.
5 The approach to quality assurance for the BCS aims to make the best use of Australian Bureau of Statistics (ABS) resources to meet user-prioritised requirements, both in terms of data quality and timing of release. The approach specifies the level and degree to which each data item is quality assured, noting that only some of the total output from the BCS can be quality assured to the highest standards. Different priorities are assigned to groups of data items, with the highest priority assigned to key point-in-time data on business use of IT and innovation.
6 The 2016-17 BCS had a response rate of 94%.
7 The difference between estimates obtained from a sample of businesses, and the estimates that would have been produced if the information had been obtained from all businesses, is called sampling error. The expected magnitude of the sampling error associated with any estimate can be estimated from the sample results. One measure of sampling error is given by the standard error (SE), which indicates the degree to which an estimate may vary from the value that would have been obtained from a full enumeration (the 'true' figure). There are about two chances in three that a sample estimate differs from the true value by less than one standard error, and about nineteen chances in twenty that the difference will be less than two standard errors.
8 The following is an example of the use of the standard error, applied to the estimated proportion of businesses with a web presence. As presented in this release, the estimated proportion of businesses with a web presence was 51.1%, and the standard error of this estimate was 0.9. There would be approximately two chances in three that a full enumeration would have given a figure in the range 50.2% to 52.0%, and nineteen chances in twenty that it would be in the range 49.3% to 52.9%.
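The worked example above can be sketched in code. This is a minimal illustration of the one- and two-standard-error ranges, not ABS methodology code; the figures (51.1% estimate, 0.9 standard error) are taken from the example.

```python
def confidence_ranges(estimate: float, se: float) -> dict:
    """Approximate ranges around a survey estimate:
    ~two chances in three within 1 SE, ~nineteen in twenty within 2 SE."""
    return {
        "1_se": (round(estimate - se, 1), round(estimate + se, 1)),
        "2_se": (round(estimate - 2 * se, 1), round(estimate + 2 * se, 1)),
    }

# Web presence example: 51.1% estimate, standard error 0.9
ranges = confidence_ranges(51.1, 0.9)
print(ranges["1_se"])  # (50.2, 52.0)
print(ranges["2_se"])  # (49.3, 52.9)
```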
9 In this publication, indications of sampling variability are measured by relative standard errors (RSEs). The relative standard error is a useful measure in that it provides an immediate indication of the percentage errors likely to have occurred due to sampling, and thus avoids the need to refer to the size of the estimate. Relative standard errors are shown in the Relative Standard Error table in this section. RSEs for all data included in this release (including data cube content) are available upon request.
10 To annotate proportion estimates, a value of 50% has been used in the calculation of the RSE rather than the estimated proportion from the survey data. This avoids inconsistencies between the way very low and very high proportions are annotated. Relative standard errors for proportion estimates in this publication have therefore been calculated from the actual standard error of the estimate (referred to as x), with a fixed denominator of 50, in the following manner: RSE%(x) = (SE(x)*100)/50.
11 Using the previous example, the standard error for the estimated proportion of businesses with a web presence was 0.9%. Multiplying by 100 and dividing by 50 gives an RSE, calculated on this basis, of 1.8%. It is these figures that appear in the table appended to this chapter.
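The RSE convention described above can be expressed as a short function. This is a hedged sketch of the fixed-denominator calculation, not ABS code; the 0.9 standard error is taken from the web-presence example.

```python
def rse_percent(se: float, denominator: float = 50.0) -> float:
    """RSE%(x) = (SE(x) * 100) / 50 for proportion estimates.

    The denominator is fixed at 50 rather than the estimated
    proportion, so very low and very high proportions are
    annotated consistently."""
    return se * 100.0 / denominator

# Web presence example: standard error of 0.9 gives an RSE of 1.8%
print(rse_percent(0.9))  # 1.8
```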
12 For the tables in this publication, estimates with RSEs between 10% and 25% are annotated with the symbol '^'. These estimates should be used with caution as they are subject to sampling variability too high for some purposes. Estimates with RSEs between 25% and 50% are annotated with the symbol '*', indicating that the estimates should be used with caution as they are subject to sampling variability too high for most practical purposes. Estimates with an RSE greater than 50% are annotated with the symbol '**', indicating that the sampling variability causes the estimates to be considered too unreliable for general use.
13 For estimates of proportion, the symbol '^' means that the estimate from a full enumeration could lie more than 10 percentage points away, so the estimate should be used with caution. For example, a proportion estimate of 30% annotated with '^' means the full enumeration value could lie beyond the range 20% to 40%. The symbol '*' means the estimate from a full enumeration could lie more than 25 percentage points away and is subject to sampling variability too high for most practical purposes; a proportion estimate of 30% annotated with '*' means the full enumeration value could lie beyond the range 5% to 55%. Proportion estimates annotated with the symbol '**' have a sampling error that causes the estimates to be considered too unreliable for general use.
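The annotation rules above can be summarised as a threshold function. This is an illustrative sketch only; the treatment of the exact boundary values (10%, 25%, 50%) is an assumption, as the text does not specify which symbol applies at the boundaries themselves.

```python
def annotate(rse: float) -> str:
    """Map an RSE percentage to the reliability annotation used
    in this publication's tables (boundary handling assumed)."""
    if rse > 50:
        return "**"  # too unreliable for general use
    if rse > 25:
        return "*"   # too high for most practical purposes
    if rse > 10:
        return "^"   # use with caution
    return ""        # no annotation required

print(annotate(1.8))   # '' - the web-presence example needs no annotation
print(annotate(30.0))  # '*'
```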
14 Readers of this release should note that most of the data in this publication have an RSE of less than 10%.
Relative Standard Error - Summary of IT use and innovation, selected indicators, by employment size(a)(b) - 2016-17
200 or more persons

Estimated number of businesses(c)

IT indicators
Businesses with(d):
  internet access
  web presence
  social media presence
Businesses with internet access(d):
  broadband as main connection type
Businesses that:
  placed orders via the internet
  received orders via the internet
Internet income(e)

Innovation indicators
Businesses with:
  introduced innovation
  innovation still in development(d)
  abandoned innovation
  any innovative activity (innovation-active businesses)
(a) RSEs for 2016-17 are on a proportions basis.
(b) Proportions are of all businesses in each output category.
(c) Business counts are provided for contextual information only; please refer to Explanatory Notes 21 and 22.
(d) As at the end of the reference period, 30 June 2017.
(e) Refer to Explanatory Notes 16 to 18.