4228.0 - Programme for the International Assessment of Adult Competencies, Australia, 2011-12 Quality Declaration 
Latest issue released at 11:30 am (Canberra time) 09/10/2013

APPENDIX SCORES AND SKILL LEVELS


CALCULATION OF SCORES

For each skill domain, proficiency scores are derived on a scale ranging from 0 to 500 points. Item Response Theory is used so that the score reflects the percentage of items in the skill domain that the respondent answered correctly, as well as the probability of the respondent (or persons with similar characteristics) successfully completing tasks with a similar level of difficulty. For PIAAC a response probability (RP) value of 0.67 was chosen, meaning that the respondent (or persons with similar characteristics) had a 67 per cent chance of successfully completing tasks with a similar level of difficulty.
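The response probability convention can be made concrete with a small sketch. The snippet below uses a simple two-parameter logistic IRT model to show how an RP value of 0.67 links an item's difficulty to a proficiency score; the discrimination value and the example difficulty are illustrative assumptions only, not PIAAC's actual scaling parameters.

```python
import math

def success_probability(theta, difficulty, discrimination=1.0):
    """Two-parameter logistic IRT model: probability that a person with
    proficiency `theta` successfully completes an item of the given difficulty."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

def proficiency_at_rp(difficulty, rp=0.67, discrimination=1.0):
    """The proficiency at which the success probability equals `rp`: under
    an RP-0.67 convention, this is the score associated with the item."""
    return difficulty + math.log(rp / (1.0 - rp)) / discrimination

# Illustrative values only: a person at this proficiency has a 67 per cent
# chance of completing a task of difficulty 250 on this assumed scale.
theta = proficiency_at_rp(difficulty=250.0, rp=0.67, discrimination=0.05)
```

Raising the RP value (as in the RP-0.8 convention used for earlier surveys) shifts the proficiency associated with a given item upward without changing the underlying model.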

For each respondent in PIAAC, ten plausible values (scores) were generated for the domains measured. While simple population estimates for any domain can be produced by choosing only one of the ten plausible values at random, this publication uses an average of the ten values. For example, to report an estimate of the total number of people at Level 1 for literacy, the weighted estimate of the number of respondents at Level 1 was calculated for each of the ten plausible values for literacy individually. The ten weighted estimates were then summed, and the result was divided by ten to obtain the estimate of the total number of people at Level 1 for literacy. The process was repeated for each skill level.

This process must be performed for each skill level by each variable category (e.g. males) when producing estimates for other tabulations. For example, to report an estimate of the total number of males at Level 1 for literacy, the weighted estimate of the number of males at Level 1 was calculated for each of the ten plausible values for literacy individually. The ten weighted estimates were then summed, and the result was divided by ten to obtain the estimate of the total number of males at Level 1 for literacy. The process was then repeated for each skill level.

All estimates presented in this publication are obtained by using all ten plausible values in combination, as described above.
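The averaging procedure described above can be sketched in a few lines. The function below is a minimal illustration, not the ABS estimation system: the microdata records, field names and survey weights are hypothetical, and only three plausible values are shown where PIAAC uses ten.

```python
def pv_estimate(respondents, level, pv_keys, weight_key="weight"):
    """Estimate a population count for one skill level by computing the
    weighted estimate under each plausible value in turn, then averaging."""
    estimates = []
    for pv in pv_keys:  # one pass per plausible value
        total = sum(r[weight_key] for r in respondents if r[pv] == level)
        estimates.append(total)
    return sum(estimates) / len(estimates)

# Hypothetical microdata: each record carries a survey weight and the
# skill level implied by each of (here, three) plausible values.
people = [
    {"weight": 100.0, "pv1": 1, "pv2": 1, "pv3": 2},
    {"weight": 150.0, "pv1": 1, "pv2": 2, "pv3": 1},
    {"weight": 200.0, "pv1": 2, "pv2": 2, "pv3": 2},
]
pv_estimate(people, level=1, pv_keys=["pv1", "pv2", "pv3"])
# → 166.67 (= (250 + 100 + 150) / 3)
```

The same function, filtered to a subpopulation such as males, reproduces the tabulation procedure described for other variable categories.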

In order to minimise respondent burden, respondents did not complete exercises in all three of the skill domains. Respondents completed exercise tasks in only one or two of these domains, depending on the assessment path they followed. Refer to the appendix titled Pathways through the self-enumerated exercise for further information about the possible assessment paths. To address this, PIAAC used multiple imputation methodology to obtain proficiency scores for each respondent in the skill domains for which the respondent was not required to complete an exercise. Problem solving in technology-rich environments scores were not imputed for respondents who were sequenced to the paper-based Core booklet (i.e. they had no computer experience, they did not agree to do the exercise on the computer, or they did not pass the computer-based Core Stage 1). The significant imputation variability, which arises from the use of multiple possible assessment tasks and the complex scaling procedures, can be reliably estimated and is included in the calculated standard errors (SEs). See the Data quality (Technical Note) for further information about the reliability of the estimates.
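How imputation variability enters the standard errors can be sketched with the standard rules for combining multiply imputed estimates (Rubin's rules). This is a simplified illustration with made-up numbers: in practice the sampling variances would themselves be derived from the survey's replicate weights, and PIAAC's actual variance estimation is more involved.

```python
import math

def total_standard_error(pv_estimates, pv_sampling_variances):
    """Combine sampling variance and imputation (between-plausible-value)
    variance across M plausible values into one standard error."""
    m = len(pv_estimates)
    mean_est = sum(pv_estimates) / m
    within = sum(pv_sampling_variances) / m            # average sampling variance
    between = sum((e - mean_est) ** 2 for e in pv_estimates) / (m - 1)
    return math.sqrt(within + (1 + 1 / m) * between)   # imputation term added

# Five hypothetical plausible-value estimates with an assumed sampling
# variance of 4.0 each; the spread across estimates inflates the SE.
total_standard_error([102.0, 98.0, 101.0, 99.0, 100.0], [4.0] * 5)
```

The between-plausible-value term is what captures the imputation variability referred to above; if all plausible values agreed exactly, the SE would reduce to the ordinary sampling SE.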

In this report, proficiency levels have a descriptive purpose. They are intended to aid the interpretation and understanding of the reporting scales by describing the attributes of the tasks that adults with particular proficiency scores can typically successfully complete. In particular, they have no normative element and should not be understood as “standards” or “benchmarks” in the sense of defining levels of proficiency appropriate for particular purposes (e.g. access to post-secondary education or fully participating in a modern economy) or for particular population groups.


LEVELS OF DIFFICULTY

Further information to assist with the interpretation of the skill levels is available in the OECD publication, The Survey of Adult Skills: Reader's Companion. The technical manual is available from the OECD website at www.oecd.org.

Literacy

Literacy is defined as the ability to understand, evaluate, use and engage with written texts to participate in society, to achieve one’s goals, and to develop one’s knowledge and potential. Literacy encompasses a range of skills from the decoding of written words and sentences to the comprehension, interpretation and evaluation of complex texts. It does not, however, involve the production of text (writing). Information on the skills of adults with low levels of proficiency is provided by an assessment of reading components that covers text vocabulary, sentence comprehension and passage fluency.

Below Level 1 (lower than 176)

The tasks at this level require the respondent to read brief texts on familiar topics to locate a single piece of specific information. There is seldom any competing information in the text and the requested information is identical in form to information in the question or directive. The respondent may be required to locate information in short continuous texts. However, in this case, the information can be located as if the text were non-continuous in format. Only basic vocabulary knowledge is required, and the reader is not required to understand the structure of sentences or paragraphs or make use of other text features. Tasks below Level 1 do not make use of any features specific to digital texts.

Level 1 (176 to 225)

Most of the tasks at this level require the respondent to read relatively short digital or print continuous, non-continuous, or mixed texts to locate a single piece of information that is identical to or synonymous with the information given in the question or directive. Some tasks, such as those involving non-continuous texts, may require the respondent to enter personal information onto a document. Little, if any, competing information is present. Some tasks may require simple cycling through more than one piece of information. Knowledge and skill in recognising basic vocabulary, determining the meaning of sentences, and reading paragraphs of text is expected.

Level 2 (226 to 275)

At this level, the medium of texts may be digital or printed, and texts may comprise continuous, non-continuous, or mixed types. Tasks at this level require respondents to make matches between the text and information, and may require paraphrasing or low-level inferences. Some competing pieces of information may be present. Some tasks require the respondent to:


    • cycle through or integrate two or more pieces of information based on criteria;
    • compare and contrast or reason about information requested in the question; or
    • navigate within digital texts to access and identify information from various parts of a document.

Level 3 (276 to 325)

Texts at this level are often dense or lengthy, and include continuous, non-continuous, mixed, or multiple pages of text. Understanding text and rhetorical structures becomes more central to successfully completing tasks, especially when navigating complex digital texts. Tasks require the respondent to identify, interpret, or evaluate one or more pieces of information, and often require varying levels of inference. Many tasks require the respondent to construct meaning across larger chunks of text or perform multi-step operations in order to identify and formulate responses. Often tasks also demand that the respondent disregard irrelevant or inappropriate content to answer accurately. Competing information is often present, but it is not more prominent than the correct information.

Level 4 (326 to 375)

Tasks at this level often require respondents to perform multiple-step operations to integrate, interpret, or synthesise information from complex or lengthy continuous, non-continuous, mixed, or multiple type texts. Complex inferences and application of background knowledge may be needed to perform the task successfully. Many tasks require identifying and understanding one or more specific, non-central idea(s) in the text in order to interpret or evaluate subtle evidence-claim or persuasive discourse relationships. Conditional information is frequently present in tasks at this level and must be taken into consideration by the respondent. Competing information is present and sometimes seemingly as prominent as correct information.

Level 5 (376 and higher)

At this level, tasks may require the respondent to search for and integrate information across multiple, dense texts; construct syntheses of similar and contrasting ideas or points of view; or evaluate evidence based arguments. Application and evaluation of logical and conceptual models of ideas may be required to accomplish tasks. Evaluating reliability of evidentiary sources and selecting key information is frequently a requirement. Tasks often require respondents to be aware of subtle, rhetorical cues and to make high-level inferences or use specialised background knowledge.
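The score ranges in the headings above define a straightforward mapping from a proficiency score to a skill level. The function below is a simple sketch of that mapping using the literacy cut points (the same cut points apply to numeracy; problem solving in technology-rich environments uses different ones).

```python
def literacy_level(score):
    """Map a 0-500 literacy proficiency score to its skill level using the
    cut points from the level headings: 176, 226, 276, 326 and 376."""
    if score < 176:
        return "Below Level 1"
    for level, upper in ((1, 226), (2, 276), (3, 326), (4, 376)):
        if score < upper:
            return f"Level {level}"
    return "Level 5"

literacy_level(300)  # → "Level 3"
```

Because the levels partition the scale, each plausible value for a respondent maps to exactly one level, which is what makes the per-level tabulations described earlier well defined.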

Numeracy

Numeracy is defined as the ability to access, use, interpret and communicate mathematical information and ideas in order to engage in and manage the mathematical demands of a range of situations in adult life. To this end, numeracy involves managing a situation or solving a problem in a real context, by responding to mathematical content/information/ideas represented in multiple ways.

Below Level 1 (lower than 176)

Tasks at this level require the respondent to carry out simple processes such as counting, sorting, performing basic arithmetic operations with whole numbers or money, or recognising common spatial representations in concrete, familiar contexts where the mathematical content is explicit with little or no text or distractors.

Level 1 (176 to 225)

Tasks at this level require the respondent to carry out basic mathematical processes in common, concrete contexts where the mathematical content is explicit with little text and minimal distractors. Tasks usually require one-step or simple processes involving counting, sorting, performing basic arithmetic operations, understanding simple per cents such as 50%, and locating and identifying elements of simple or common graphical or spatial representations.

Level 2 (226 to 275)

Tasks at this level require the respondent to identify and act on mathematical information and ideas embedded in a range of common contexts where the mathematical content is fairly explicit or visual with relatively few distractors. Tasks tend to require the application of two or more steps or processes involving calculation with whole numbers and common decimals, per cents and fractions; simple measurement and spatial representation; estimation; and interpretation of relatively simple data and statistics in texts, tables and graphs.

Level 3 (276 to 325)

Tasks at this level require the respondent to understand mathematical information that may be less explicit, embedded in contexts that are not always familiar and represented in more complex ways. Tasks require several steps and may involve the choice of problem-solving strategies and relevant processes. Tasks tend to require the application of number sense and spatial sense; recognising and working with mathematical relationships, patterns, and proportions expressed in verbal or numerical form; and interpretation and basic analysis of data and statistics in texts, tables and graphs.

Level 4 (326 to 375)

Tasks at this level require the respondent to understand a broad range of mathematical information that may be complex, abstract or embedded in unfamiliar contexts. These tasks involve undertaking multiple steps and choosing relevant problem-solving strategies and processes. Tasks tend to require analysis and more complex reasoning about quantities and data; statistics and chance; spatial relationships; and change, proportions and formulas. Tasks at this level may also require understanding arguments or communicating well-reasoned explanations for answers or choices.

Level 5 (376 and higher)

Tasks at this level require the respondent to understand complex representations and abstract and formal mathematical and statistical ideas, possibly embedded in complex texts. Respondents may have to integrate multiple types of mathematical information where considerable translation or interpretation is required; draw inferences; develop or work with mathematical arguments or models; and justify, evaluate and critically reflect upon solutions or choices.

Problem solving in technology-rich environments (PSTRE)

Problem solving in technology-rich environments is defined as using digital technology, communication tools and networks to acquire and evaluate information, communicate with others and perform practical tasks. PIAAC focuses on the ability to solve problems for personal, work and civic purposes by setting up appropriate goals and plans, and accessing and making use of information through computers and computer networks.

No computer experience

Adults in this category reported having no prior computer experience; therefore, they did not take part in computer-based assessment but took the paper-based version of the assessment, which does not include the problem solving in technology-rich environments domain.

Failed ICT core

Adults in this category had prior computer experience but failed the ICT core test, which assesses basic ICT skills, such as the capacity to use a mouse or scroll through a web page, needed to take the computer-based assessment. Therefore, they did not take part in computer-based assessment, but took the paper-based version of the assessment, which does not include the problem solving in technology-rich environments domain.

"Opted out" of taking computer-based assessment

Adults in this category opted to take the paper-based assessment without first taking the ICT core assessment, even if they reported some prior experience with computers. They also did not take part in the computer-based assessment, but took the paper-based version of the assessment, which does not include the problem solving in technology-rich environments domain.

Below Level 1 (lower than 241)

Tasks are based on well-defined problems involving the use of only one function within a generic interface to meet one explicit criterion without any categorical or inferential reasoning, or transforming of information. Few steps are required and no sub-goal has to be generated.

Level 1 (241 to 291)

At this level, tasks typically require the use of widely available and familiar technology applications, such as e-mail software or a web browser. There is little or no navigation required to access the information or commands required to solve the problem. The problem may be solved regardless of the respondent’s awareness and use of specific tools and functions (e.g. a sort function). The tasks involve few steps and a minimal number of operators. At the cognitive level, the respondent can readily infer the goal from the task statement; problem resolution requires the respondent to apply explicit criteria; and there are few monitoring demands (e.g. the respondent does not have to check whether he or she has used the appropriate procedure or made progress towards the solution). Identifying content and operators can be done through simple match. Only simple forms of reasoning, such as assigning items to categories, are required; there is no need to contrast or integrate information.

Level 2 (291 to 340)

At this level, tasks typically require the use of both generic and more specific technology applications. For instance, the respondent may have to make use of a novel online form. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g. a sort function) can facilitate the resolution of the problem. The task may involve multiple steps and operators. The goal of the problem may have to be defined by the respondent, though the criteria to be met are explicit. There are higher monitoring demands. Some unexpected outcomes or impasses may appear. The task may require evaluating the relevance of a set of items to discard distractors. Some integration and inferential reasoning may be needed.

Level 3 (340 and higher)

At this level, tasks typically require the use of both generic and more specific technology applications. Some navigation across pages and applications is required to solve the problem. The use of tools (e.g. a sort function) is required to make progress towards the solution. The task may involve multiple steps and operators. The goal of the problem may have to be defined by the respondent, and the criteria to be met may or may not be explicit. There are typically high monitoring demands. Unexpected outcomes and impasses are likely to occur. The task may require evaluating the relevance and reliability of information in order to discard distractors. Integration and inferential reasoning may be needed to a large extent.


COMPARABILITY OF TIME SERIES

Data released in the previous ALLS and SAL publications are not comparable with PIAAC data for the following reasons:


    • The literacy and numeracy scores previously published for ALLS and SAL have been remodelled to make them consistent with PIAAC. These scores were originally based on a model with a response probability (RP) value of 0.8 but are now based on a model with an RP value of 0.67. The latter value was used in PIAAC to achieve consistency with the OECD survey Programme for International Student Assessment (PISA) in describing what it means to perform at a particular level of proficiency. The new RP value does not affect the score that was calculated for a respondent; however, it does affect the interpretation of the score. Therefore, users of this data should refer to the new skill level descriptions provided (above) in this PIAAC publication when performing time-series comparisons.
    • The prose and document literacy scales from ALLS and SAL have been combined to produce a single literacy scale which is comparable to the PIAAC literacy scale.
    • The numeracy scores from ALLS have been recalculated using a model that incorporates the results of all countries that participated in ALLS. (The previous model was based only on countries that participated in the first round of ALLS). This has resulted in some minor changes to the ALLS numeracy scores. SAL did not collect a numeracy domain which is comparable with ALLS and PIAAC.

These remodelled literacy scores (from ALLS and SAL) and numeracy scores (from ALLS) are included in additional data cubes.