Faculty of Education - Theses

Search Results

Now showing 1 - 8 of 8
  • Item
    Speech-language pathology intervention for young offenders
    Swain, Nathaniel Robert ( 2017)
    Young offenders are a vulnerable and marginalised group with critical speech, language, and communication needs. Fifty to sixty percent of male young offenders have a clinically significant developmental language disorder. Despite this, little research has focussed on the efficacy and feasibility of speech-language pathology (SLP) intervention in youth justice settings. A year-long study was undertaken in a youth justice facility in Victoria, Australia. Following an assessment study (n = 27), a language intervention trial was conducted using a series of four empirical single case studies. The study evaluated the extent to which one-to-one speech-language pathology intervention improved the language skills of male young offenders. The feasibility of delivering SLP services was also investigated using quantitative service efficiency data, and qualitative data gathered from a staff focus group and researcher field notes. Half of the sample in the assessment study qualified for a diagnosis of language disorder (more than 1 standard deviation below the mean on standardised measures), one third had social cognition deficits, and deficits in subskills of executive functioning affected between one quarter and three quarters of participants. Social cognition and executive functioning measures contributed significantly to variability in oral language skills. Individualised intervention programs were delivered for each of the four single case studies. There were medium to large improvements in the targeted communication skills, many of which were statistically significant. The data provided evidence that SLP services are feasible, in spite of considerable barriers, including a high frequency of disruptions and cancellations. This research makes a substantial contribution to the evidence supporting the efficacy of one-to-one SLP intervention for young offenders. It indicates that, despite substantial barriers, there are opportunities for effective and responsive SLP services with young offenders, as part of wider efforts to change the risk trajectories of these young people.
  • Item
    Using data from computer-delivered assessments to improve construct validity and measurement precision
    Ramalingam, Dara ( 2016)
    Rapid uptake of information and communication technologies (ICTs) has influenced every aspect of modern life. The increased use of such technologies has brought many changes to the field of education. The current work focuses on educational assessment, and in particular, on some hitherto unexplored implications of increased computer delivery of educational assessments. When an assessment is paper-delivered, what is collected is the final product of a test-taker's thinking. In form, this product might range from their choice of response to a multiple-choice item, to an extended written response, but, regardless of form, the final product can offer only limited insight into the thought process that led to it. By contrast, when an assessment is computer-delivered, it is a trivial matter to collect detailed information about every student interaction with the assessment material. Such data are often called “process data”. The current work uses process data from computer-delivered assessments of digital reading and problem solving included in the 2012 cycle of the Programme for International Student Assessment (PISA) to explore issues of construct validity and measurement precision. In previous work, process data have either been used in purely exploratory ways, or, while a link to theory has been made, the central issues in the current work have been, at most, a peripheral focus. A review of the literature suggested four indicators derived from process data: navigation behaviour (to be used in relation to digital reading items) and total time, decoding time, and number of actions (to be used in relation to both digital reading and problem solving items). While all the indicators were derived directly from frameworks of digital reading and problem solving, there were differences in the expected relationship between each indicator and ability. In particular, while effective navigation behaviour is part of good digital reading across items with different demands, the relationship between total time, decoding time and number of actions may be expected to vary depending on the demands of an individual item. Therefore, in the current work, two different approaches were needed. In the case of navigation behaviour, the indicator was included directly in the scoring of items, so that students received some credit for reaching the target page containing the information needed to answer the question even if they did not answer correctly. By including an indicator that is explicitly valued in digital reading in scoring, we can better assess the intended construct and therefore improve construct validity. In the case of total time, decoding time and number of actions, these indicators were included as regressors in the scoring models used, thereby increasing measurement precision. Results of the current work suggest that the new data arising from computer-delivered assessments can be used to improve our measurement of digital reading and problem solving by better measuring the intended construct, and by increasing measurement precision. More generally, the current work suggests that process data can be used in a way that is responsible, and well-linked to theory.
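    The abstract does not specify the exact scoring model, but a minimal sketch of the “indicators as regressors” idea, assuming a Rasch-type latent regression of the kind commonly used in PISA scaling (the notation θ_n, δ_i, x_n, β is illustrative rather than taken from the thesis):

      P(X_{ni} = 1 \mid \theta_n) = \frac{\exp(\theta_n - \delta_i)}{1 + \exp(\theta_n - \delta_i)}, \qquad \theta_n = \mathbf{x}_n^{\top}\boldsymbol{\beta} + \varepsilon_n, \quad \varepsilon_n \sim N(0, \sigma^2)

    Here x_n collects the process-data indicators for student n (total time, decoding time, number of actions), so conditioning the ability distribution on these indicators is what yields the gain in measurement precision.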
  • Item
    From log file analysis to item response theory: an assessment template for measuring collaborative problem solving
    Scoular, Claire ( 2017)
    Recent economic, educational and psychological research has highlighted shifting workplace requirements and the changes required in education and training to equip the emerging workforce with the skills for the 21st century. The emergence of these demands highlights the importance of new methods of assessment. An earlier study, ATC21S, pioneered assessment of individuals’ collaborative problem solving (CPS). The study represented a major advance in educational measurement, although issues of efficiency, reliability and validity remained to be resolved. This study addresses some of these issues by proposing and developing an assessment template for measuring CPS in online environments. The template is presented from conceptualisation to implementation, with a focus on its generalisable application. The first part of the template outlines task design principles for the development of CPS tasks. The second part of the template presents a systematic process of identifying, coding and scoring behaviour patterns in log file data generated from the assessment tasks. Item response theory is used to investigate the psychometric properties of these behaviour patterns. Behavioural indicators are presented that are generalisable across students, CPS tasks and assessment sets. The goal of this study is to present an approach that can inform new measurement practices in relation to previously unaddressed latent traits and their underlying processes. The assessment template provides an efficient approach to the development of assessments that measure the social and cognitive subskills of collaborative problem solving.
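    The abstract does not name the specific item response model applied to the coded behaviour patterns; as one hedged illustration, assuming the indicators are polytomously scored and calibrated with a partial credit model (the notation θ_n, δ_ik is illustrative rather than taken from the thesis):

      P(X_{ni} = x) = \frac{\exp \sum_{k=0}^{x} (\theta_n - \delta_{ik})}{\sum_{h=0}^{m_i} \exp \sum_{k=0}^{h} (\theta_n - \delta_{ik})}, \qquad x = 0, 1, \ldots, m_i

    where θ_n is the student's latent CPS subskill, δ_ik are the step difficulties of behavioural indicator i, and the sum for x = 0 is taken as zero by convention. Indicator difficulties and fit statistics from such a calibration are the kind of psychometric evidence used to judge whether indicators generalise across students, tasks and assessment sets.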
  • Item
    Bridging the data literacy gap for evidence-informed education policy and practice: the impact of visualization
    Van Cappelle, Frank ( 2017)
    Data literacy comprises an important set of competencies in today’s society. Its rise in prominence can be traced to several developments: the exponential increase in data leading to unprecedented possibilities for transforming society; the global Open Data movement as a driving force in making data more accessible; and the evidence-informed policy movement. In the education sector, the latter is linked to the data-driven decision making movement, which refers to the use of data to inform education policy and practice at all levels. Because of these developments, data literacy is becoming embedded as an integral part of professional competencies for educators and education leaders. The purpose of the study was twofold: first, to investigate whether data literacy can be measured on a single scale of increasing proficiency, and second, to investigate the effect of different data presentation formats on data literacy within the context of evidence-informed education policy and practice. A data literacy test was developed which required participants to answer multiple-choice questions based on a set of research briefs. Participants consisted mainly of graduate students enrolled in an education-related degree and education researchers. An experimental design was used in which the treatment condition was the presentation format of the research briefs. Test participants (N = 127) were randomly assigned to one of three presentation formats – text-only, text plus tabulated data, and text plus visualization – where tabulated data and visualizations were constructed from information in the text. The findings from the test calibration supported the hypothesis of a hierarchical unidimensional data literacy scale. The interpretation of data literacy competencies along a log-linear scale replicated the hypothesized hierarchical development of data literacy levels. It was also hypothesized that text plus visualization would lead to higher levels of data literacy compared to the other presentation formats. While previous research analysed differences in presentation formats through raw scores, this study used many-facet Rasch model analysis. Ordinal-level raw scores were transformed into linear, interval-level measures as an outcome of the interaction between three facets: person, item, and presentation format. In contrast to raw scores, Rasch model parameter estimates are sample independent, so the findings can be more objectively generalized beyond the sample and items used in the study. Rasch parameter estimates for the three presentation formats supported the hypothesis that the use of visualizations is associated with higher levels of data literacy. Item-level analysis of the effect of presentation format, based on the theories of cognitive fit, cognitive load, and the proximity compatibility principle, suggested that data presentations which emphasize relationships between variables matching the problem context increase data literacy levels. Those that do not may lower data literacy levels by acting as extraneous cognitive load that diverts limited cognitive resources, especially if they misdirect attention and subsequent analysis. Implications of these findings were discussed in terms of the conceptualization of a hierarchy of data literacy competencies vis-à-vis the requirements of educators and education leaders, the potential and caveats of using data presentations for communicating policy-relevant evidence, and future research on data presentation and visualization.
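    The abstract names the three facets explicitly, so a minimal sketch of a many-facet Rasch model for a dichotomously scored multiple-choice item under that design (the notation θ_n, δ_i, φ_f is illustrative; the thesis's exact parameterisation may differ):

      \log \frac{P_{nif}}{1 - P_{nif}} = \theta_n - \delta_i - \varphi_f

    where θ_n is the data literacy measure for person n, δ_i the difficulty of item i, and φ_f the effect of presentation format f (text-only, text plus tabulated data, or text plus visualization). Comparing the estimated φ_f parameters across the three formats is what allows the presentation-format effect to be separated from person ability and item difficulty.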
  • Item
    Teaching the live: the pedagogies of performance analysis
    Upton, Megan Joy ( 2016)
    Theatre as an artform is ephemeral in nature and offers a lived, aesthetic experience. Attending theatre and analysing theatre performance is a key component of the study of drama in senior secondary education systems in Australia, and in many international education systems. The senior secondary drama curriculum in Victoria offers a unique context for analysing live theatre performances. Lists of performances are prescribed for teachers and students to select from and attend. In the year prior to the lists being created, theatre companies are invited to submit productions for consideration. The written curriculum requires students to produce a written analysis of one production. This task assesses students’ knowledge, skills and understanding of what they experience at school level, and they are assessed again in an end-of-year ‘high-stakes’ examination, the results of which contribute to students’ overall graduating academic score. Methodologically, this study investigated the pedagogies of performance analysis through a collective case study of four cases. Over a period of fourteen months the study investigated how the lists of performances were generated, how teachers and students selected a performance to attend, and how teachers taught the analysis of live theatre performance to senior drama students in a high-stakes assessment environment, and it critically examined the role of theatre companies within these processes. The data comprised document analysis, participant observation, field notes, semi-structured individual and focus group interviews, and a researcher reflective journal. Specifically, the study examined pedagogy and how teachers’ pedagogical choices moved the written curriculum towards enacted and experienced curriculum. It explored what influenced and impacted these pedagogies in order to consider what constitutes effective pedagogies for teaching the analysis of live theatre performance within the research context and, more broadly, wherever the analysis of theatre performance is included in senior drama curricula. The findings indicate that while the teachers who participated in the study sought to create rich educational experiences for their senior drama students, they needed to take a reductive approach and employ teaching strategies that reinforced capacities relevant to the exam rather than those that engaged with the live arts experience or recognised and incorporated the embodied practices of drama education. Consequently, the study questions the purpose of examining performance analysis. The study also revealed how theatre company practices impact the teaching of performance analysis. As a way to structure an effective pedagogy for teaching performance analysis, the study recommends that a purposeful, structured and sustained community of practice be established between curriculum authorities, theatre companies and schools. Such a community of practice would acknowledge the four stages of pedagogy identified in the study, and the model has potential application in any curriculum where performance analysis is part of studying drama and theatre.
  • Item
    Using assessment of student learning outcomes to measure university performance: towards a viable model
    Martin, Linley Margaret ( 2016)
    This study investigates the possibility of developing a suite of performance indicators which could measure differences in universities’ performance in attainment by their students of specified institutional or course-based learning outcomes. The measurement of learning outcomes has been the subject of active interest in higher education for over 20 years, but to date no approach has led to a sustainable, generalised solution to this problem. A four-stage measurement model is proposed which explores the learning outcomes specified by universities, establishes a set of standards against which such outcomes could be assessed, and examines local assessment of students’ learning for these outcomes to identify what graduates have learned and can do by the end of their study. Data on the grades achieved by individual students in local assessment tasks are then considered for use in a suite of institutional indicators which are designed to differentiate between universities in terms of the knowledge and skills demonstrated by their students. The focus of the study was to investigate whether the model could be applied to measure learning outcomes and institutional performance for Australian university undergraduate degrees. The study showed that it was possible to derive a generalisable set of learning outcomes relevant to Australian universities and also a set of standards relating to each of these outcomes which could be used to grade assessments in a quantitative way for individual learning outcomes measurement. It was also possible to define a suite of quantitative performance indicators which appear to be valid for measuring differences in achievement for a subset of the specified learning outcomes. However, it was discovered that Australian universities’ current practice of describing and testing learning outcomes for subjects, rather than for courses or for the institution, is different to the approaches commonly used internationally, requiring an adjustment to the model. Universities’ practice in this respect is also different to the approach they espouse on their websites and in their assessment policies. The Australian approach requires a bottom-up model for measurement rather than the top-down model originally identified from international practice. Various options are presented for types of local achievement assessment that are likely to produce the greatest consistency of learning outcome results between different universities. The favoured option is a set of newly devised signature assessments to test achievement of cognitive learning outcomes which could be framed in a discipline context, but this is a contentious solution. The bottom-up model has face validity based on detailed analysis of the expected outputs from each of its stages, but it could not be fully tested because the assessment data held in universities’ repositories are not recorded at the level required. Implementation of such a model, while appearing feasible, would have implications for policy, pedagogy, scholarship and practice within universities, and it would require a strong commitment from government and the sector to be successful. The benefits to students, staff, employers and the government would be substantial and appear to outweigh the costs associated with implementation.
  • Item
    The alignment of valued performance types in assessment practices and curriculum in year 5 mathematics and science classrooms
    Ziebell, Natasha ( 2014)
    Curricular alignment can be defined as the degree to which the performance types valued in curriculum statements (intended curriculum), instruction (enacted curriculum) and assessment (assessed curriculum) at all levels form a coherent system. This thesis reports on six key performance type categories that were used to examine the alignment of assessment practices with the intended and enacted curriculum. The six categories are knowing, performing, communicating, reasoning, non-routine problem solving and making connections. The research was undertaken as a comparative case study of two science and two mathematics primary classrooms. The methods employed were video-recorded lessons and interviews, questionnaires, document analysis and classroom observations. This study sought to determine the scope of practice (variety of performance types) evident in mathematics and science classrooms by examining the vertical and horizontal alignment of performance types. The vertical alignment analysis determined the correspondence among valued performance types in assessments at different levels of the schooling system (national, state and school levels). The horizontal alignment analysis consisted of making comparisons of performance types between classrooms at the same level and across two domains: mathematics and science. Ultimately, the classroom implementation of assessment of the curriculum is the responsibility of the teacher, so it can be argued that the performance types valued in the classroom are determined by the teacher. However, the teacher will inevitably be influenced by factors beyond the classroom, such as the state-mandated curriculum, school curriculum requirements and high-stakes testing. The major assertion of this study is that if performance types are not evident in classroom practice, then they are not available for formative assessment purposes and should not be summatively assessed. The findings show that in mathematics, ‘knowing’ and ‘performing procedures’ are consistently privileged in the national assessment program and through school-based assessment practices. These performance types were dominant in the enacted and assessed curriculum at the classroom level. The science data analysis showed that the scope of practice in the science classrooms consisted of all six performance type categories: knowing, performing, communicating, reasoning, non-routine problem solving and making connections. The relative diversity of science performance types could reflect the nature of the science curriculum at the school level and the fact that it is not subjected to the same testing, monitoring and auditing processes as the mathematics curriculum. This provides teachers with the autonomy to select activities more frequently on the basis of their investigative appeal. Mathematics and English are the two domains that are assessed through the national standardised testing program and tend to dominate the primary school curriculum. Another key finding is that different school structures influence who has authoring responsibilities for the intended curriculum. The allocation of authorship responsibility for internal and external curriculum documents and assessments has significant implications for classroom practice and assessment. It is a recommendation of this study that monitoring programs, such as the national assessment program, be carefully aligned with the performance types valued in curriculum standards. The authority afforded to the intended curriculum and assessment documents, such as standardised testing, can be a restricting factor in the performance types that are evident in classroom practice.
  • Item
    Emerging identities: practice, learning and professional development of home and community care assessment staff
    Lindeman, Melissa Ann ( 2006-12)
    This thesis argues for greater recognition of assessment staff in community care/home and community care (HACC) and for a more comprehensive and considered approach to preparing such a workforce. By offering deeper insights into the practice of assessment and the individuals employed in these positions, the thesis makes the case that these are emerging identities: a new specialism in the emergent space of community care. This specialism has arisen to fill the gap which has developed as a result of changing socio-cultural practices in relation to care for the frail aged and people with disabilities, and of the inability of established disciplines to keep pace with the new demands of the contemporary world. The study employed a qualitative methodology, using in-depth interviews with key informants holding various stakeholder interests and expertise in the area of assessment and home and community care, and with workers employed in assessment roles in HACC services in Victoria. The conceptual framework draws on theoretical perspectives from current adult education scholarship: those that focus on professional disciplines (including multidisciplinary/interprofessional perspectives), those that focus on communities of practice, and those that focus on the workplace. The thesis shows that HACC assessment workers are a product of contemporary workplaces and systems of health and community care. The nature of their practice derives substantially from the local contexts in which they work; there is no single profession or discipline-based narrative that drives their practice. Instead they draw on a diverse range of knowledge sources, including their embodied practice. In this way, it is argued that they are emergent practitioners whose practice and identities share many elements with traditional professions in comparable work contexts (similar levels of autonomy, reflective practices, and the development and application of ‘know-how’ and tacit wisdom). The case is put that their embodied practice is the site of a robust professionalism which can provide the foundation for new approaches to the education, training and development of this increasingly important and growing occupational group. A model of learning is proposed which builds on authentic learning attained in daily work activities with clients and in the workplace as a social setting, and on developing the self as a resource for practice. This model is based on a hybrid approach that builds on the learning strengths of both educational institutions and the workplace.