3. DATA LINKING METHODOLOGY
This project benefited from advances in methodology and technological resources to deliver improved integrated statistical outputs including:
3.2 DATA STANDARDISATION
Before records on two datasets are compared, the contents of each need to be as consistent as possible to facilitate comparison. This process is known as 'standardisation' and includes a number of steps such as verification, recoding and re-formatting variables, and parsing text variables (i.e. separating text variables into their components). Additionally, some variables such as name may require substantial repair prior to standardisation.
Some variables differ between the two datasets in a predictable way, and an adjustment is required to account for this variance. Variables may also be recoded or aggregated in order to obtain a more robust form of the variable. Standardisation takes place in conjunction with a broader evaluation of the dataset, in which potential linking variables are identified.
The standardisation procedure for the Death Registrations to Census linkage project involved coding imputed and invalid values for selected variables to a common missing value. These variables included name, address, day of birth, month of birth, year of birth, age, sex, year of arrival and marital status. Standardisation for hierarchical variables involved collapsing at higher levels of aggregation to allow for potential differences in the recording and coding of the variable. This was done to improve the quality of the linkage data for the purpose of increasing the likelihood that a link would be made. An example of this is country of birth. On the Death registration record a person may have been coded to 'Northern Europe' (two digit level of country of birth), while on the 2016 Census they may have reported a specific country such as 'England' or 'Norway' (four digit level of country of birth). If left in its original state, a comparison between 'Northern Europe' and 'England' would not agree, even though one is a sub-category of the other. To account for this all 2016 Census country of birth responses were coded to the two digit level to allow for accurate comparison.
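The recoding and collapsing steps above can be sketched as follows. This is a minimal illustration, not the production process: the missing-value code, the set of invalid values, and the four-digit country codes are all invented for the example; only the idea of recoding invalid values to a common missing value and truncating country of birth to the two-digit level comes from the text.

```python
MISSING = ""  # hypothetical common missing value

def standardise_field(value, invalid_values):
    """Recode imputed or invalid values to the common missing value."""
    if value is None or value in invalid_values:
        return MISSING
    return value

def collapse_country_of_birth(code):
    """Collapse a country of birth code to the two-digit (region) level,
    so a four-digit Census response (e.g. a specific country) can be
    compared with a two-digit death registration response."""
    if not code:
        return MISSING
    return code[:2]

# With hypothetical codes where 'England' is 2102 and 'Northern Europe' is 21,
# both records now agree at the two-digit level.
assert collapse_country_of_birth("2102") == collapse_country_of_birth("21")
```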
First Name and Surname
In the 2011 Death Registrations to Census project, Census name data was subjected to an automated repair process. Both first names and surnames were compared against corresponding master name indexes, with names being repaired when a suitably close match to a value on the index was found. The name repair process was repeated for the 2016 Census data, with the addition of a number of enhancements. These enhancements optimised both the number and accuracy of names repaired, and included the following:
After repair, first names were standardised by being compared against a nickname concordance, ensuring that different variations of a name would be grouped into a common form for the purposes of linkage. The standardised form of the same name value may also vary depending on the reported gender. For example, the name 'Jess' may be standardised to 'Jessica' for a female but to 'Jesse' for a male. Any first names that could not be matched to a nickname retained their original form.
Name data on the death registrations were of considerably better quality than those on the Census, and so did not require a repair process. However, the remainder of the first name standardisation process for death registrations was consistent with that applied to the Census.
Name information from both death registrations and 2016 Census was anonymised prior to being joined with other variables for linking.
To assist with linkage using name data, flags were created to define whether a name was common or uncommon. These name flags were used during linkage to identify how frequently a name appeared in the two datasets being linked, and influenced the assessment of the quality of links that agreed on name. For example, some links may match on names that are common (e.g. 'John Smith'), whereas others may match on name values that are rare. Assuming that agreement on all other variables is equal, the links that agree on rare name values are more likely to be 'true', as it is less likely that two different people with the same rare name have been linked. Therefore these links could be deemed as being of higher quality than links that agree on common name values.
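The nickname standardisation and common/uncommon name flags can be sketched together. The concordance entries and the frequency threshold below are invented for illustration; only the gender-dependent lookup and the common/uncommon flagging come from the text.

```python
from collections import Counter

# Hypothetical nickname concordance keyed by (name, reported gender).
NICKNAMES = {
    ("JESS", "F"): "JESSICA",
    ("JESS", "M"): "JESSE",
    ("BILL", "M"): "WILLIAM",
}

def standardise_first_name(name, sex):
    """Map a nickname to its standard form; unmatched names keep their form."""
    return NICKNAMES.get((name.upper(), sex), name.upper())

def name_flags(names, threshold=2):
    """Flag each name value as 'common' or 'uncommon' by its frequency
    in the file; the threshold here is arbitrary."""
    counts = Counter(names)
    return {n: ("common" if c >= threshold else "uncommon")
            for n, c in counts.items()}
```

Under this sketch, agreement on a name flagged 'uncommon' would contribute more to the assessed quality of a link than agreement on a name flagged 'common'.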
Geography / Address
Linking was conducted based on the usual residential address of Census records and death registrations. Death registrations where only a residential title was supplied (e.g. nursing home, hospital, etc.) underwent additional repair.
In addition to address repair, the following standardisation techniques were applied:
A number of standardisation processes were undertaken on other key linking variables including:
3.3 DATA PREPARATION
An additional data preparation technique was used in this linkage for Census records where multiple responses had been provided for key linking variables. A record may have had multiple responses for a single linking variable in the following situations:
The process for allowing the use of multiple responses for a linking variable involved restructuring the data for affected records; multiple rows were created for the affected record, with the number of rows generated equal to the number of different combinations that could be created from the linkage information. This is demonstrated in Tables 1a and 1b below. A respondent with two different anonymised first name values and two different mesh blocks would have four permuted rows generated. Meanwhile, the information that only had one stated value (in this example surname and date of birth) was duplicated across all of the generated rows. Structuring the data in this manner allowed for all combinations of a respondent's linkage information to be considered in a highly efficient manner while increasing the likelihood of finding the true link for the record.
TABLE 1A - EXAMPLE OF DATA RESTRUCTURE, Original Record
TABLE 1B - EXAMPLE OF DATA RESTRUCTURE, Restructured Record
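The restructure in Tables 1a and 1b amounts to taking the Cartesian product of the candidate values for each linking variable, duplicating single-valued fields across every generated row. A minimal sketch, with invented field names and anonymised placeholder values:

```python
from itertools import product

def restructure(record):
    """Expand a record with multi-response fields into one row per
    combination of responses; single-valued fields are duplicated."""
    fields = list(record)
    # Treat every field as a list of candidate values (singletons stay as-is).
    value_lists = [v if isinstance(v, list) else [v] for v in record.values()]
    return [dict(zip(fields, combo)) for combo in product(*value_lists)]

record = {
    "first_name": ["HASH_A", "HASH_B"],  # two anonymised first-name values
    "surname": "HASH_S",
    "date_of_birth": "1950-01-01",
    "mesh_block": ["MB1", "MB2"],        # two mesh-block responses
}
rows = restructure(record)
assert len(rows) == 4  # 2 first names x 2 mesh blocks, as in the example above
```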
3.4 RECORD PAIR COMPARISON
Death registrations data and the 2016 Census were brought together using a combination of deterministic and probabilistic data linkage techniques. Deterministic linkage methods were initially used to identify matches that could be used as part of a training dataset for the creation of m and u probabilities for probabilistic linking (see Section 3.4.2 Probabilistic Linking for further information). Probabilistic linking was then used to link records that would be accepted for the final linked file.
The two datasets were linked in a way that was independent of reported Indigenous status so that any future analysis (including use in compiling the Life Tables for Aboriginal and Torres Strait Islander Australians - 2015-17 (cat. no. 3302.0.55.003)) would not be affected by bias introduced in the linking process. For this reason, Indigenous status was not used as a linking variable.
3.4.1 DETERMINISTIC LINKING
Deterministic data linkage, also known as rule-based linkage, involves assigning record pairs (i.e. potential links) across two datasets that match exactly or closely on common variables. This type of linkage is most applicable where the records from different sources consistently report sufficient information to efficiently identify links. It is less applicable in instances where there are problems with data quality or where there are limited characteristics.
Initially, a deterministic linkage method was used to identify links to create a training dataset that could be used to inform the creation of m and u probabilities. This involved using selected personal and demographic characteristics (first name (anonymised), surname (anonymised), sex, date of birth/age, geography, year of arrival, marital status and country of birth), to identify record pairs.
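A minimal sketch of rule-based linking on exact agreement, using an index on the key fields so each record is not compared against the whole other file. The field names mirror those listed above, but the matching rule (exact agreement on all fields) is a simplification of the project's actual deterministic passes.

```python
KEY_FIELDS = ("first_name", "surname", "sex", "date_of_birth", "mesh_block")

def deterministic_links(file_a, file_b):
    """Return (a_id, b_id) pairs that agree exactly on all key fields."""
    index = {}
    for rec in file_b:
        key = tuple(rec[f] for f in KEY_FIELDS)
        index.setdefault(key, []).append(rec["id"])
    links = []
    for rec in file_a:
        key = tuple(rec[f] for f in KEY_FIELDS)
        for b_id in index.get(key, []):
            links.append((rec["id"], b_id))
    return links
```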
3.4.2 PROBABILISTIC LINKING
Probabilistic linking allows links to be assigned in spite of missing or inconsistent information, providing there is enough agreement on other variables to offset any disagreement. In probabilistic data linkage, records from two datasets are compared and brought together using several variables common to each dataset (Fellegi & Sunter, 1969).
A key feature of the methodology is the ability to handle a variety of linking variables and record comparison methods to produce a single numerical measure of how well two particular records match, referred to as the 'linkage weight'. This allows ranking of all possible links and optimal assignment of the link or non-link status (Solon and Bishop, 2009).
In probabilistic linkage, record pairs (consisting of one record from each file) can be compared to see whether they are likely to be a match, i.e. belong to the same person. However, if the files are even moderately large, comparing every record on File A with every record on File B is computationally infeasible. Blocking reduces the number of comparisons by only comparing record pairs where matches are likely to be found, namely records which agree on a set of blocking variables. Blocking variables are selected based on their reliability and discriminatory power. For instance, sex is partially useful as it is typically well reported; however, it is minimally informative because it divides the datasets into only two blocks, and therefore does not sufficiently reduce the computational intensity of larger linkages. Accordingly, it is generally not used alone but in conjunction with other variables.
Comparing only records that agree on one particular set of blocking variables means a record will not be compared with its match if it has missing, invalid or legitimately different information on a blocking variable. To mitigate this, the linking process is repeated a number of times ('passes'), using a range of different blocking strategies. For example, on the first pass, a block using a fine level of geography (mesh block) was used to capture the majority of Death registrations that had matching information with their corresponding 2016 Census record. The second pass blocked on repaired surname and sex, which allowed for mesh block to disagree but potentially link on street address information. The blocking variables used for each pass are outlined in Section 3.4.3 Blocking and Linking Strategy.
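The multi-pass blocking strategy can be sketched as follows: each pass indexes one file on the pass's blocking key and only generates candidate pairs within matching blocks, so a pair missed in one pass (for example because mesh block disagrees) can still be compared in a later pass. Field names are illustrative.

```python
def block_pairs(file_a, file_b, blocking_vars):
    """Yield (a, b) record pairs that agree on all blocking variables."""
    index = {}
    for b in file_b:
        index.setdefault(tuple(b[v] for v in blocking_vars), []).append(b)
    for a in file_a:
        for b in index.get(tuple(a[v] for v in blocking_vars), []):
            yield a, b

def multi_pass(file_a, file_b, passes):
    """Run several blocking passes and pool the candidate pair ids."""
    candidates = set()
    for blocking_vars in passes:
        for a, b in block_pairs(file_a, file_b, blocking_vars):
            candidates.add((a["id"], b["id"]))
    return candidates

# e.g. pass 1 blocks on mesh block; pass 2 on surname and sex, allowing
# mesh block to disagree between the two records.
```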
Within a blocking pass, records on the two files which agree on the specified blocking variables are compared on a set of linking variables. Each linking variable has associated field weights, which are calculated prior to comparison. Field weights indicate the amount of information (agreement, disagreement, or missing values) a linking variable provides about whether or not the records belong to the same person (match status). Field weights are based on two probabilities associated with each linking variable: first, the probability that the field values agree given that the two records belong to the same person (match); and second, the probability that the field values agree given the two records belong to different persons (non-match). These are called m and u probabilities (or match and non-match probabilities) and are defined as:
m = P(fields agree | records belong to the same person)
u = P(fields agree | records belong to different people)
Given that the m and u probabilities require knowledge of the true match status of record pairs, they cannot be known exactly, but rather must be estimated. The ABS calculated the m and u probabilities based on the training dataset, under the assumption that each deterministic link on the dataset was a match. The deterministic links used in this phase included (1) the highest quality links accepted in the deterministic linking passes, and (2) additional slightly lower quality links expected to be confirmed as accurate in the probabilistic linking phase. This method estimated the likelihood that a record would have a match by taking deaths and net overseas migration into account when estimating the m and u probabilities. This method also generated probabilities for disagreement, which can be referred to as md and ud probabilities:
md = P(fields disagree | records belong to the same person)
ud = P(fields disagree | records belong to different people)
Note that m and u probabilities were calculated separately for each pass, as the probabilities depend upon the characteristics of the pass' blocking variables. For example, the m probability for country of birth when blocking on mesh block will be different to the m probability for country of birth when blocking on sex.
Match (m) and non-match (u) probabilities are then converted to agreement and disagreement field weights. They are as follows:
Agree = log2(m/u)
Disagree = log2(md/ud)
These equations give rise to a number of intuitive properties of the Fellegi–Sunter framework (Fellegi & Sunter, 1969). First, in practice, agreement weights are always positive and disagreement weights are always negative. Second, the magnitude of the agreement weight is driven primarily by the likelihood of chance agreement. That is, a low probability of two random people agreeing on a variable (for example, date of birth) will result in a large agreement weight being applied when two records do agree.
The magnitude of the disagreement weight is driven by the stability and reliability of a variable. That is, if a variable is well reported and stable over time (for example, sex) then disagreement on the variable will yield a large negative weight. For each record pair comparison, the field weights from each linking variable are summed to form an overall record pair comparison weight or 'linkage weight'.
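The conversion from m and u probabilities to field weights can be illustrated with invented probabilities. This sketch assumes agreement and disagreement are complementary (md = 1 - m, ud = 1 - u), which ignores missing values; the figures are not those used in the project.

```python
from math import log2

def field_weights(m, u):
    """Return (agreement, disagreement) weights from m and u probabilities,
    assuming md = 1 - m and ud = 1 - u (no missing values)."""
    agree = log2(m / u)
    disagree = log2((1 - m) / (1 - u))
    return agree, disagree

# A variable with a low chance of random agreement (e.g. date of birth)
# yields a large positive agreement weight:
dob_agree, dob_disagree = field_weights(m=0.95, u=0.001)

# A stable, well-reported variable (e.g. sex) yields a small agreement
# weight but a large negative disagreement weight:
sex_agree, sex_disagree = field_weights(m=0.99, u=0.5)

# The linkage weight for a record pair is the sum of its field weights.
```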
Before calculating m and u probabilities for some variables it is first necessary to define what constitutes agreement. Typical comparison functions used in the linkage include:
For further details on comparison functions used for probabilistic linkage, see Christen & Churches (2005).
Near or partial agreement may also be factored into the linking process through calculation of m and u probabilities accounting for such agreement. For example, a person’s age on equivalent records will frequently be an exact match, and the m and u probabilities are calculated based on this definition. During linkage, however, a partial agreement weight was given for age within a one or two year difference to cater for persons who may have incorrectly reported age for a variety of reasons.
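The partial-agreement treatment of age can be sketched as a comparison function that returns the full agreement weight for an exact match, a reduced weight for a one- or two-year difference, and the disagreement weight otherwise. The weight values below are illustrative only, not those used in the linkage.

```python
def age_comparison_weight(age_a, age_b,
                          agree_w=4.0, partial_w=2.0, disagree_w=-3.0):
    """Return a field weight for age, allowing near agreement."""
    if age_a is None or age_b is None:
        return 0.0  # missing values contribute no evidence either way
    diff = abs(age_a - age_b)
    if diff == 0:
        return agree_w        # exact agreement
    if diff <= 2:
        return partial_w      # partial agreement within two years
    return disagree_w         # disagreement
```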
3.4.3 BLOCKING AND LINKING STRATEGY
The strategy employed for linking the 2016 Death Registrations to Census project builds on the 2011 linking strategy, using developments in linking methodology, software and available data to improve the approach. For further details on the 2011 linkage refer to Information Paper: Death registrations to Census linkage project - Methodology and Quality Assessment - 2011-12 (cat. no. 3302.0.55.004).
Table 2 displays the blocking and linking variables applied in this linking project for each pass.
TABLE 2 - BLOCKING AND LINKING VARIABLES, By Pass Number
(b) B - blocking variable
(c) L - linking variable
3.5 DECISION MODEL
In probabilistic linking, once record pairs are generated and weighted, a decision algorithm determines whether the record pair is linked, not linked, or requires further consideration as a possible link. The generation of record pairs from probabilistic linking can result in the records on one dataset linking to multiple records on the other, resulting in a file of ‘many-to-many’ links. The first phase of the decision process involves assigning a record to its best possible pairing. This process is known as one-to-one assignment. Ideally (and often true in practice) each record has a single, unique best pairing, which is its true match.
The 2011 Death Registrations to Census project used an auction algorithm to assign probabilistic links optimally from the pool of all possible links. The auction algorithm maximises the sum of all the record pair comparison weights through alternative assignment choices, such that if a record A1 on File A links well to records B1 and B2 on File B, but record A2 links well to B2 only, the auction algorithm will assign A1 to B1 and A2 to B2, to maximise the overall comparison weights for all record pairs.
For the 2016 project, a change was made to the assignment algorithm. Using the previous example, A1 may still link to B1, but A2 would only link to B2 if it was considered a better quality link than A1 to B2. This change ensured that links would only be assigned when they are the absolute best option for both records in the link, which subsequently improved the quality of the links output at this phase. The modified algorithm was also far more efficient than the auction method, with the assignment process completed in a matter of minutes compared to several hours or days when using the auction algorithm.
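The 'best option for both records' rule can be sketched as a mutual-best assignment: a pair (A, B) is accepted only when B is A's highest-weighted candidate and A is B's highest-weighted candidate. This is a simplified stand-in for the production algorithm, with invented record ids and weights.

```python
def mutual_best_assignment(weighted_pairs):
    """weighted_pairs: iterable of (a_id, b_id, weight).
    Return the set of pairs that are the best option for both records."""
    best_for_a, best_for_b = {}, {}
    for a, b, w in weighted_pairs:
        if a not in best_for_a or w > best_for_a[a][1]:
            best_for_a[a] = (b, w)
        if b not in best_for_b or w > best_for_b[b][1]:
            best_for_b[b] = (a, w)
    return {(a, b) for a, (b, _) in best_for_a.items()
            if best_for_b[b][0] == a}

# A1 links well to B1 and B2; A2 links to B2 only, but less well than A1-B2,
# so the A2-B2 pair is not assigned.
pairs = [("A1", "B1", 10.0), ("A1", "B2", 9.0), ("A2", "B2", 7.0)]
links = mutual_best_assignment(pairs)
```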
An additional change made for the linkage was that the one-to-one assignment was generated using the combined many-to-many results from all passes in the linkage (i.e. non-sequential approach), rather than running the assignment over the results from each pass individually and accepting links before moving to the next pass (sequential approach). This allowed the best links from all passes to be obtained from a single assignment procedure.
The second phase of the probabilistic decision rule stage takes the output of one-to-one assignment and decides which pairs should be retained as links, and which pairs should be rejected as non-links. The simplest decision rule uses a single ‘cut-off’ point, where all record pairs with a linkage weight at or above the cut-off are assigned as links, and all those pairs with a linkage weight below the cut-off are assigned as non-links. A more sophisticated decision rule was used in the 2016 Death Registrations to Census linkage project, employing lower and upper cut-offs. Record pairs with a weight at or above the upper cut-off were declared links while those with a weight below the lower cut-off were declared non-links. In order to establish the upper and lower cut-off values, a sample of the record pairs identified by the assignment algorithm was clerically reviewed. The upper cut-off was then set at a weight value such that no false links had been detected above the cut-off in the sample. The record pairs with weights between the upper and lower cut-offs were clerically reviewed to determine which links to retain for the final linked dataset.
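The two-threshold decision rule described above can be sketched directly; the cut-off values here are invented for illustration.

```python
def classify(weight, lower=8.0, upper=14.0):
    """Classify a one-to-one assigned record pair by its linkage weight:
    at or above the upper cut-off -> link; below the lower cut-off ->
    non-link; in between -> clerical review."""
    if weight >= upper:
        return "link"
    if weight < lower:
        return "non-link"
    return "clerical review"
```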
3.5.1 CLERICAL REVIEW OF RECORD PAIRS
Each record pair was manually inspected to resolve its match status (i.e. if the link was ‘true’ or ‘false’). As part of this process, a clerical reviewer was often able to use information which could not be captured in the automated comparison process, but could be identified by the reviewer, such as common transcription errors (e.g. 1 mistaken as 7) or transposed information, such as the day of birth reported as the month or vice versa.
In addition to the linking variables, supplementary information was also used to confirm a link as true. This included:
These supplementary variables helped to inform difficult decisions, especially on record pairs belonging to children, allowing for greater insight into whether a record pair was an actual match or just contained similar demographic and personal characteristics for two different individuals.
Clerical review was performed on 62,115 links, resulting in the confirmation of 35,820 matches. Initially, reviewers assessed the ‘best’ option for a link, that is, where Death registrations were matched to Census records based on the greatest level of agreement on linking variables. However, for Death registrations where the best option was rejected, subsequent clerical review also assessed the second-best and, if relevant, third-best option. This was further supplemented by a specific investigation into Aboriginal and Torres Strait Islander links. Following the inspection of first, second and third options for Death registrations, reviewers also assessed the remaining potential links identified as Aboriginal and Torres Strait Islander on either Census or Death registrations datasets. By this late stage of clerical review, fewer than 10% of those potential links had an agreement weight comparable to the other accepted links.
While the 2011 project applied high standards for precision, the 2016 linkage placed an even greater emphasis on ensuring as many links as possible in the final set of results were ‘true’ (i.e. the linked records do in fact belong to the same individual). This was achieved through the following processes:
3.5.2 QUALITY ASSURANCE OF CLERICALLY REVIEWED RECORD PAIRS
Clerical review relies upon judgment by a well-trained individual, therefore, while efforts are taken to minimise the risk, it is possible for a link to be incorrectly assigned as a match or non-match.
Quality assurance (QA) techniques were applied to clerical review to assess the accuracy of the clerical review decisions. The QA process involved having a sample of the clerical record pairs reviewed a second time by a different reviewer. If the decision for a record pair made by the QA reviewer conflicted with the decision made in the original clerical review, this was identified as an 'adjudication' pair. Adjudication results were used to update the original decisions made on clerically reviewed links.
Performing QA on clerically reviewed record pairs enabled a basic measure of quality, referred to as a 'clerical review consistency rate' (CR), to be obtained. This rate is calculated as the proportion of quality-assured record pairs that did not require adjudication, i.e. one minus the number of adjudication pairs divided by the total number of record pairs that were quality assured. Note that the CR is not strictly an estimate of clerical review accuracy; rather, it is a measure of the consistency with which different coders applied decisions to record pairs. The QA results were not used to supplement the final linked results. The quality assurance process produced a clerical review consistency rate of 95%, indicating the clerical review process was of high quality.
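The consistency rate calculation is a simple proportion; the counts below are invented for illustration (the document reports only the 95% rate, not the underlying counts).

```python
def consistency_rate(adjudication_pairs, total_qa_pairs):
    """Clerical review consistency rate: the share of quality-assured
    pairs where the QA reviewer agreed with the original decision."""
    return 1.0 - adjudication_pairs / total_qa_pairs

# e.g. 5 adjudication pairs out of 100 quality-assured pairs gives a 95% CR.
cr = consistency_rate(5, 100)
```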
3302.0.55.004 - Linking Death registrations to the 2016 Census, 2016-17
Latest ISSUE Released at 11:30 AM (CANBERRA TIME) 10/12/2018