Part F - Methods of compilation

Australian System of Government Finance Statistics: Concepts, Sources and Methods

The description of the compilation methodology in this chapter is targeted more towards users of the statistics than compilers. It provides a broad overview rather than a detailed description of particular procedural or operational steps. Processes are described in logical terms that do not necessarily reflect the physical structure of the computer systems underlying the processes.


GFS compilation involves transforming the accounting data of public sector units into economic statistics. This is achieved through identification and classification of the units and analysis, classification and consolidation of economic flows and stocks recorded in the units’ accounting records. The following sequence of processes is involved:

  • Classification processes: 
    • GFS classification of units;
    • GFS classification of flows and stocks; 
  • Creating an input database containing unit level data;
  • Micro editing unit level data; 
  • Data aggregation, consolidation and derivation;
  • Macro editing the data; and 
  • Creating an output database containing aggregated data (used for dissemination of the statistics).


The compilation processes apply to all GFS data phases described earlier in this chapter (i.e. quarterly GFS and annual GFS). These processes are shown in Diagram 14.1 below. 

Diagram 14.1 - Schematic overview of GFS compilation process


Classification processes


The first processes in compilation involve transforming accounting data into GFS data. This begins with identifying the unit for which data are recorded, verifying that the unit qualifies as an institutional unit (as described in Chapter 2 of this manual) and applying the relevant GFS units classifications to the unit. The second major step is analysing the source data for the unit, which essentially amounts to linking the accounting records of flows and stocks of the unit to GFS flows and stocks classifications.

Application of GFS units classifications


As described in Chapter 2 of this manual, the main GFS unit classifications are Level of government (LOG), Jurisdiction (JUR), and Institutional sector (INST).


Unit classifications are first applied at the time a unit comes into the coverage of GFS. This usually happens when a unit is created by a government in Australia, or when an existing unit is split to form more than one unit or is combined with another unit to form a new unit. Once determined, unit classifications are reviewed only when major changes occur to the functions of the unit.


The classification process involves examining Acts of Parliament (where applicable) and the unit’s financial statements (i.e. operating statement and balance sheet). This process is intended to disclose the range of activities in which the unit engages and the legislative background to its creation. Such information is used to determine whether the unit qualifies as an institutional unit and whether it falls within the scope of GFS. The information, supplemented where necessary by information obtained directly from the unit, is used to determine the classification(s) applicable to the unit.

Application of GFS flows and stocks classifications


As discussed in Chapter 2 of this manual, the main GFS flows and stocks classification is the economic type framework (ETF). The following additional cross-classifications are applied to some ETF codes:

  • Classification of the function of government - Australia (COFOG-A) - a classification of the function or purpose of transactions;
  • Type of asset and liability classification (TALC) – a classification required to derive output items for stocks of assets and liabilities in the statement of stocks and flows;
  • Source / destination classification (SDC) - a classification that identifies: (i) for each transaction, the institutional sector and level of government (where applicable) of the unit (including non-government units) from which revenues are receivable (the source) or to which expenses are payable (the destination); and (ii) for each financial asset, the institutional sector and level of government of the unit against which the financial claim represented by the asset is held. The codes are used in the consolidation process and for producing output (e.g. grants to public non-financial corporations) that requires identification of the sector of the counterparties to transactions and stocks; and
  • Taxes classification (TC) - a classification required to produce taxation revenue data classified by type.


For the purpose of applying the classifications, ETF codes are grouped in the following categories. Note that the items included in each of the below classification components are explained in Chapter 5, Chapter 8, Chapter 11, Chapter 12, Appendix 1 Part A and Appendix 1 Part B of this manual and are not discussed in detail here:

  • Statement of operations items - input items required to derive output items in the operating statement;
  • Statement of sources and uses of cash items - input items required to derive output items in the cash flow statement;
  • Supplementary information - items of statistical interest that are not within the scope of the core GFS statements (e.g. a detailed breakdown of own-account capital formation);
  • Intra-unit transfers - input items identifying flows within a unit (e.g. transfers to reserves, certain provisions) other than revaluations and accrued transactions such as depreciation. Flows within a unit appear in accounting records and must be recorded in the system to ensure that a balance of debits and credits is maintained in the unit’s data. The flows cancel out in output. Revaluations and accrued transactions within units are required in output and so are not identified as intra-unit transfers;
  • Balance sheet items - input items required to derive output items for stocks of financial assets, liabilities and equity in the balance sheet and statement of stocks and flows.


Application of the flows and stocks classifications involves examining flow and stock items recorded in a unit’s accounting records and entering against each item the appropriate classification code(s) from each of the relevant classifications. A single item may have several codes entered against it. For example, an expense item will carry (at least) an ETF code to indicate the type of expense, an SDC code to indicate the destination code of the expense outflow, and a COFOG-A code to indicate the government purpose of the expense.
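The multi-code tagging described above can be pictured as a simple keyed record. In this sketch the code values are invented placeholders, not actual codes from the ABS classifications:

```python
# Hypothetical illustration only: the code values below are invented
# placeholders, not actual ETF, SDC or COFOG-A codes.
expense_item = {
    "description": "Grant paid to a public non-financial corporation",
    "value": 1_500_000,   # debit, stored with a positive sign
    "ETF": "1234",        # economic type of the expense
    "SDC": "PNFC",        # destination sector of the outflow
    "COFOG_A": "0499",    # government purpose of the expense
}

# A single item may carry several codes; all relevant classifications
# must be present before the item passes the later code existence edits.
required = {"ETF", "SDC", "COFOG_A"}
assert required <= expense_item.keys(), "item is missing classification codes"
```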


The classification process is applied initially to all flows and stocks of new units and to new flows and stock items of existing units. The process may also be re-applied to existing items that have changed description from the previous period, or have changed in value significantly and are suspected to have changed content.

Input of data for GFS processing


The next step in compiling the statistics is loading the analysed data into the GFS processing system and editing them there. Data are loaded by electronic processes or by manual intervention and are edited directly on the GFS input database. The electronic file supplied by each Treasury contains accounting data for each unit, comprising data item descriptions as they appear in source records, the data (values) for each item in each period, and the GFS classifications for each item.


The purposes of the input database are to: 

  • store up-to-date unit-level data; and
  • serve as the source for the output database.

Micro editing


Micro editing involves applying pre-specified edits to unit level data. The edits performed are unit edits, intra-sector edits and aggregate edits, each of which is described below. The process involves passing the unit data through editing programs, producing error reports, and making amendments to obtain a ‘clean’ data file.

Edits on units


Three main types of unit edits are applied in the system:

  • Classification edits,
  • Account balance edits, and 
  • Subtotalling edits.


Classification edits are edits designed to check the validity of the GFS classification codes assigned to flows and stocks. Four types of classification edits are applied:

  • Legality edits - these check that the unit and flows and stocks classification codes allocated actually exist in the classifications concerned;
  • Code combination edits - these check whether the combination of classification codes applied to each flow and stock item is valid within the GFS system;
  • Code existence edits - these check that where a given classification code has been allocated to a flow or stock item, codes from all of the relevant other classifications that are associated with that item have also been allocated; and
  • Level of coding edits - these check that the prescribed minimum level of coding has been observed.
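A minimal sketch of how the first three edit types might operate; the classification tables here are tiny invented stand-ins for the real ETF and SDC code lists, and the function names are hypothetical:

```python
# Invented stand-in code lists; the real classifications are far larger.
VALID_ETF = {"1100", "1200", "2100"}
VALID_SDC = {"GG", "PNFC", "PFC", "PRIV"}

def legality_edit(item):
    """Legality edit: the allocated codes must exist in the
    classifications concerned."""
    errors = []
    if item.get("ETF") not in VALID_ETF:
        errors.append("illegal ETF code")
    if item.get("SDC") not in VALID_SDC:
        errors.append("illegal SDC code")
    return errors

def code_existence_edit(item):
    """Code existence edit: where an ETF code has been allocated, the
    associated SDC code must also have been allocated."""
    if "ETF" in item and "SDC" not in item:
        return ["missing SDC code for ETF-coded item"]
    return []
```

An item carrying an unknown ETF value would surface on the error report through the legality edit, while an ETF-coded item with no SDC would be caught by the existence edit.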


Account balance edits are edits to check that the values for data items have been correctly entered, that data are not duplicated, and that data items entered into the system for each unit account for all items in that unit's source records.


As discussed previously (see Chapter 3 of this manual) under the double-entry convention, revenues, decreases in assets, and increases in liabilities and equity are treated as credits (Cr), and expenses, increases in assets, and decreases in liabilities and equity are treated as debits (Dr). Credits are stored in the GFS system with a negative sign and debits with a positive sign. The account balance edit checks that the total of debits for a unit equals the total of credits.
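Under that sign convention the account balance edit reduces to checking that a unit's signed values sum to zero. A minimal sketch, with invented figures:

```python
# Credits are stored negative and debits positive, so a balanced unit
# sums to zero. The records below are invented for illustration.
unit_records = [
    ("taxation revenue",  -500.0),  # credit (revenue)
    ("employee expenses",  300.0),  # debit (expense)
    ("increase in cash",   200.0),  # debit (increase in an asset)
]

def account_balance_edit(records, tolerance=0.005):
    """Total debits must equal total credits: the signed sum is (near) zero."""
    return abs(sum(value for _, value in records)) <= tolerance
```

If a stray or duplicated debit were loaded, the edit would fail and the unit would appear on the error report for amendment.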


To help locate account balance errors within a unit, data items are divided into balance groups for assets, liabilities, revenues and expenses. The system checks that the accounting identity, Assets = Liabilities + Net Worth, is satisfied.


Subtotalling edits are used with account balance edits to pinpoint balancing errors within a unit. These edits are used whenever a set of data items should sum to a subtotal and where a set of subtotals should add to a control total.

Intra-sector edits


Intra-sector edits are performed in order to identify flow (and stock) imbalances, using the SDC code assigned to most GFS items. The SDC identifies the source of the funds if a transaction is an operating revenue or a cash receipt, and the destination of the funds if the transaction is an operating expense or a cash payment. For balance sheet items the asset SDC identifies the sector in which the asset is held and the liability SDC identifies the sector to which the liability is owed.
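One way to sketch an intra-sector edit: both counterparties to a transaction record it against the same pair of sectors (via their SDC codes), so netting the revenue side against the expense side should leave zero. The sector labels, record layout and figures here are illustrative assumptions, not the actual system design:

```python
from collections import defaultdict

def flow_imbalances(flows, tolerance=0.005):
    """flows: (payer_sector, payee_sector, value, side) records, where the
    payee's record carries side 'revenue' and the payer's carries 'expense'.
    Returns the net imbalance for any sector pair whose two sides fail to
    cancel; an empty result means the flows reconcile."""
    net = defaultdict(float)
    for payer, payee, value, side in flows:
        net[(payer, payee)] += value if side == "revenue" else -value
    return {pair: v for pair, v in net.items() if abs(v) > tolerance}
```

A grant recorded as $100m of revenue by the recipient but $90m of expense by the payer would show a $10m imbalance for investigation.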


Identifying and reconciling flow imbalances is necessary in order to achieve reasonably accurate consolidated results. However, not all flow imbalances can be resolved within GFS publication deadlines, and the remaining imbalances contribute to the non-additivity of GFS measures across sectors and levels of government. A different approach is taken for other users of GFS. For example, because the national accounts and international bodies such as the IMF and the OECD require 'balanced' GFS output, adjustments are made to force both sides to align based on an accepted order of precedence (e.g. Commonwealth figures take precedence over those of lower levels of government, state figures take precedence over local, etc.). This process is called 'flow balancing' and is applied only to GFS data that feed into the national accounts and GFS data supplied to the IMF and OECD.

Aggregate edits


Aggregate edits are applied after unit and intra-sector edits have been completed and resultant amendments made. These edits generally involve checking period to period changes in aggregates relating to the main GFS classifications.


The purpose of the edits is to identify any significant or unusual movements in important aggregates (e.g. expenses, net acquisition of non-financial assets, revenues, debt) so as to provide a check on the consistency of coding.

Data aggregation, derivation and consolidation


When micro editing has been completed, aggregation, derivation and consolidation processes are undertaken. During this phase, the unit information is no longer relevant and so is removed. The resulting data are classification and sector based.


These processes are summarised below:

  • The aggregation of records with identical classification combinations;
  • The derivation of items not collected;
  • Consolidation, i.e. the elimination of flows within and between sectors;
  • Estimation for uncollected or missing data; and
  • The creation of a classification and sector-based output data store.


The aggregation step involves summing records with identical classifications within each of the output sectors (listed in Table 14.1). This step results in the generation of unique aggregated lines, i.e. there are no duplicates in the final data store.
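The aggregation step behaves like a group-and-sum over the full tuple of classification codes applied to each record. A sketch, using invented codes:

```python
from collections import defaultdict

def aggregate(records):
    """Sum records sharing an identical classification combination, so each
    combination appears exactly once (no duplicates) in the output store."""
    totals = defaultdict(float)
    for classification_key, value in records:
        totals[classification_key] += value
    return dict(totals)

# Invented (ETF, SDC, COFOG-A) combinations for illustration.
lines = [
    (("1100", "GG", "0499"), 100.0),
    (("1100", "GG", "0499"), 50.0),   # identical codes: merged with the above
    (("1200", "PNFC", "0731"), 70.0),
]
```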


Deriving special output data items involves creating, in unit records of general government and public non-financial corporations, those items required specifically for the Australian national accounts. Because no direct sources of data exist for these items, they are derived by applying selected ratios to the relevant aggregates. The ratios are obtained from external data (e.g. Commonwealth employment by state).
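Derivation by ratio can be sketched as apportioning a collected aggregate using externally sourced shares. The function name and share values below are hypothetical:

```python
def derive_state_split(national_aggregate, state_shares):
    """Apportion a national aggregate across states using external ratios
    (e.g. shares of Commonwealth employment by state). The shares are
    assumed to sum to one across all states; values here are invented."""
    return {state: national_aggregate * share
            for state, share in state_shares.items()}
```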


The consolidation process eliminates the flows and stock holdings that occur between units for each unique aggregate. It is the process whereby the two sides of the same transaction or stock holding are matched and eliminated to avoid double counting. Consolidation ensures assets, liabilities and transactions are correctly presented at the sector level. Consolidation is further discussed in Chapter 3 of this manual.
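Consolidation can be sketched as dropping any flow whose two counterparties both lie inside the sector being consolidated; the unit names below are invented:

```python
def consolidate(sector_units, flows):
    """Eliminate flows internal to the consolidated sector (both sides of
    the same transaction), keeping only flows with at least one
    counterparty outside it."""
    return [f for f in flows
            if not (f["payer"] in sector_units and f["payee"] in sector_units)]
```

An intra-sector grant disappears from the consolidated presentation, while a payment to a household outside the sector is retained.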


Estimation and imputation are the processes of generating data not collected and are relevant only for quarterly GFS where, due to time and cost constraints, some items are not collected and smaller units are not approached.


The creation of the output data store is the process of moving the disaggregated data from the unit-based input store to an aggregated and consolidated output store, formatted to enable the efficient production of GFS outputs.

Macro editing


While unit and intra-sector edits (the micro edits) check that (i) the classifications applied are legal; (ii) the accounts balance; and (iii) flows between units are reconciled, these edits cannot establish that the correct values have been recorded. For this reason, another level of editing (macro editing) is carried out prior to releasing the statistics. Macro editing involves examining the final results of the above processes to confirm that they are plausible and consistent with other information on public sector finances.


The first step in macro editing involves applying trend, revision and relationship edits to identify and correct errors. GFS tables are then examined to compare data trends and movements in GFS aggregates and GFS bottom-line measures with data published in Budget documents and other public sector financial reports.


Significant variations in trend, identified in percentage and/or in dollar value terms, are the main triggers for suspecting errors in output. However, the type of transaction must be taken into account. For example, because of their volatility, large or unusual movements in capital expenditures might be less likely to indicate a possible error than movements of similar magnitude in current expenditures. Nevertheless, significant movements are investigated to determine their cause and validity. Investigation involves retrieval of the unit record data and, if necessary, raising a query with the authorities responsible for supplying the data.
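A trend edit of this kind can be sketched as a dual threshold on the period-to-period movement; the percentage and dollar-value limits below are invented for illustration, not the parameters actually used:

```python
def trend_edit(previous, current, pct_limit=0.25, abs_limit=100.0):
    """Flag a movement that is large in both percentage and dollar-value
    terms; thresholds are illustrative only."""
    movement = current - previous
    pct = abs(movement) / abs(previous) if previous else float("inf")
    return pct > pct_limit and abs(movement) > abs_limit
```

Volatile series such as capital expenditure would warrant wider limits than current expenditures, consistent with the point above that the type of transaction must be taken into account.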


Relationships between aggregates are also examined. For example, increased borrowings generally lead to increased interest payments in subsequent periods. Thus, if marked increases in borrowings are not followed by commensurate increases in interest expenses, both the borrowings and the interest data are investigated.


Macro editing also aims to ensure that the statistics reflect the impact of changes in governments’ policies and overall trends in public sector finances. Current knowledge of changes in government policy, economic conditions and public sector finance issues is obtained from budget papers and press releases.


Where incorrect data are identified as a result of macro editing, the input data are corrected and a revised output store created to ensure that both stores remain consistent at the aggregate level.


Before aggregate data can be published or released outside the ABS in any form, they must be checked to ensure that they do not disclose any information that is confidential under the provisions of the Census and Statistics Act 1905.
