Faculty of Education - Theses


Now showing 1 - 2 of 2
  • Item
    Using data from computer-delivered assessments to improve construct validity and measurement precision
    Ramalingam, Dara (2016)
    Rapid uptake of information and communication technologies (ICTs) has influenced every aspect of modern life, and the increased use of such technologies has brought many changes to the field of education. The current work focuses on educational assessment and, in particular, on some hitherto unexplored implications of the increased computer delivery of educational assessments. When an assessment is paper-delivered, what is collected is the final product of a test-taker's thinking. In form, this product might range from a choice of response to a multiple-choice item to an extended written response, but, regardless of form, it can offer only limited insight into the thought process that led to it. By contrast, when an assessment is computer-delivered, it is a trivial matter to collect detailed information about every student interaction with the assessment material. Such data are often called “process data”. The current work uses process data from the computer-delivered assessments of digital reading and problem solving included in the 2012 cycle of the Programme for International Student Assessment (PISA) to explore issues of construct validity and measurement precision. In previous work, process data have either been used in purely exploratory ways or, where a link to theory has been made, the central issues of the current work have been, at most, a peripheral focus. A review of the literature suggested four indicators derived from process data: navigation behaviour (used in relation to digital reading items) and total time, decoding time, and number of actions (used in relation to both digital reading and problem solving items). While all the indicators were derived directly from frameworks of digital reading and problem solving, the expected relationship between indicator and ability differed. In particular, while effective navigation behaviour is part of good digital reading across items with different demands, the relationship between ability and total time, decoding time, and number of actions may be expected to vary with the demands of an individual item. Therefore, two different approaches were needed in the current work. In the case of navigation behaviour, the indicator was included directly in the scoring of items, so that students received some credit for reaching the target page containing the information needed to answer the question even if they did not answer correctly. By including in scoring an indicator that is explicitly valued in digital reading, we can better assess the intended construct and therefore improve construct validity. In the case of total time, decoding time and number of actions, these indicators were included as regressors in the scoring models used, thereby increasing measurement precision. Results of the current work suggest that the new data arising from computer-delivered assessments can be used to improve our measurement of digital reading and problem solving, both by better measuring the intended construct and by increasing measurement precision. More generally, the current work suggests that process data can be used in a way that is responsible and well grounded in theory.
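    To make the first scoring approach concrete, here is a minimal sketch, in Python, of the partial-credit idea the abstract describes: a student who navigates to the target page earns some credit even when the final answer is wrong. The function name, page labels and credit levels are illustrative assumptions, not the thesis's actual scoring rules.

    def score_item(answer_correct, pages_visited, target_page="target"):
        """Hypothetical partial-credit rule for a digital reading item.

        2 = correct answer; 1 = incorrect answer, but the student reached
        the page holding the needed information (effective navigation);
        0 = neither.
        """
        if answer_correct:
            return 2
        if target_page in pages_visited:
            return 1  # credit for navigation behaviour alone
        return 0

    # Incorrect answer, but the target page was visited -> partial credit.
    print(score_item(False, ["home", "target", "index"]))  # prints 1

    The timing and action-count indicators, by contrast, would not enter the item scores at all; in the approach the abstract describes, they serve as person-level regressors in the latent (IRT) scoring model, sharpening ability estimates rather than redefining item credit.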
  • Item
    From log file analysis to item response theory: an assessment template for measuring collaborative problem solving
    Scoular, Claire (2017)
    Recent economic, educational and psychological research has highlighted shifting workplace requirements and the change required in education and training to equip the emerging workforce with skills for the 21st century. These demands underline the importance of new methods of assessment. An earlier study, ATC21S (Assessment and Teaching of 21st Century Skills), pioneered the assessment of individuals’ collaborative problem solving (CPS). That study represented a major advance in educational measurement, although issues of efficiency, reliability and validity remained to be resolved. This study addresses some of those issues by proposing and developing an assessment template for measuring CPS in online environments. The template, presented from conceptualisation through implementation, centres on generalisable application. Its first part outlines task design principles for the development of CPS tasks. Its second part presents a systematic process for identifying, coding and scoring behaviour patterns in the log file data generated by the assessment tasks; item response theory is then used to investigate the psychometric properties of these behaviour patterns. The behavioural indicators presented are generalisable across students, CPS tasks and assessment sets. The goal of this study is to present an approach that can inform new measurement practices for latent traits, and their underlying processes, that have previously received little attention. The assessment template provides an efficient approach to developing assessments that measure the social and cognitive subskills of collaborative problem solving.
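    A minimal sketch, again in Python, of the template's second part as the abstract describes it: raw log-file events are coded into simple dichotomous behavioural indicators that an item response model could then calibrate. The event names, indicators and thresholds here are hypothetical illustrations, not the study's actual coding scheme.

    from collections import Counter

    def code_indicators(log):
        """Code one student's event log into dichotomous indicators (0/1)."""
        counts = Counter(event["type"] for event in log)
        return {
            # social subskill: did the student initiate communication?
            "initiated_chat": int(counts["chat"] >= 1),
            # cognitive subskill: systematic exploration of resources
            "explored_resources": int(counts["resource_open"] >= 3),
        }

    log = [
        {"type": "chat"},
        {"type": "resource_open"},
        {"type": "resource_open"},
        {"type": "resource_open"},
    ]
    print(code_indicators(log))  # {'initiated_chat': 1, 'explored_resources': 1}

    Indicators coded this way across many students and tasks form a person-by-indicator matrix that standard IRT software can treat as item responses, which is how the template connects log file analysis to item response theory.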