Melbourne Graduate School of Education - Theses

  • Item
    Developing and validating an operationalisable model of critical thinking for assessment in different cultures
    SUN, Zhihong ( 2022)
Critical thinking has become an educational priority worldwide, as it is considered to play a fundamental role in problem-solving, decision-making and creativity. Yet the evidence is mixed about whether and how our education systems produce good critical thinkers, and this is particularly evident in studies of the relative performance of Chinese and Western students. This study began with the assumption that the mixed evidence might in part result from a mismatch between the expectations held of critical thinkers and the model of critical thinking adopted for its assessment. A review of the literature suggested that the mismatch might stem from difficulties in operationalising current theories of critical thinking in assessments. Drawing on a range of multidisciplinary studies of critical thinking, an operationalisable model of critical thinking was developed that includes a cognitive skill dimension and an epistemological belief dimension. Three assessment instruments were designed to validate the multidimensional model. The two dimensions of critical thinking were assessed both separately, as in existing assessment practices, and in an integrated manner. Performance on the three assessments was examined using data collected from a convenience sample of 480 higher education students in Australia (N=233) and China (N=247). Rasch analysis was conducted to examine the psychometric properties of the three instruments. Latent regression analysis with Rasch modelling and latent profile analysis were conducted to compare the performance patterns of critical thinking competency between the sampled groups. The results showed that the instruments measured the intended construct model reliably and performed in an unbiased manner across the sampled groups. The results produced by the two approaches (separate and integrated assessment) were consistent, and the two approaches can provide useful information for different purposes.
It was found that the students in the Chinese sample performed at a lower level than the students in the Australian sample on all of the assessment instruments, and the two samples showed different performance patterns across the two components of the model. The study concluded that the operationalisable model provides a way of understanding conflicting evidence about patterns of critical thinking found in different cultures, and may inform tailored strategies for teaching critical thinking.
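For readers unfamiliar with the Rasch analysis the abstract refers to: the dichotomous Rasch model expresses the probability of a correct response as a function of a person's ability and an item's difficulty on a shared logit scale. The sketch below is a minimal illustration of that relationship only; it is not the thesis's actual analysis pipeline, and the function name and parameters are chosen here for illustration.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Dichotomous Rasch model: probability that a person with
    ability `theta` answers an item of difficulty `b` correctly.
    Both parameters are on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the success probability is 0.5;
# higher ability relative to difficulty raises the probability.
print(rasch_probability(1.0, 1.0))  # → 0.5
```

In practice, instrument validation of the kind described (item fit, reliability, differential item functioning across groups) is carried out with dedicated psychometric software rather than a hand-rolled function like this.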
  • Item
    An analysis of evaluative reasoning in education program evaluations conducted in Australia between 2014 – 2019
    Meldrum, Kathryn Janet ( 2022)
The Australian government spends millions of dollars every year on grants that support new and innovative programs in the education sector. For example, in the 2020–2021 Australian budget, financial support for interventions in the primary and secondary school sectors equalled more than $72.9 million. Usually, and in order to account for spending the money, granting bodies ask for an evaluation of the intervention. One of the key activities of evaluation is to determine the value, merit or worth of a program. This is achieved by reaching an evaluative conclusion/judgement about the educational intervention that is credible, valid, and defensible to stakeholders. The defensibility of an evaluative conclusion/judgement relies partly on legitimate and justified arguments. In evaluation, legitimate arguments are made using the logic of evaluation; justified arguments are made using evaluative reasoning. The reasoning process underpinning the logic is doubly important because readers need to be convinced of the credibility, validity, and defensibility of the evaluative conclusion/judgement. This study investigated the presence of a legitimate and justified evaluative conclusion/judgement in publicly available education evaluations conducted in Australia between 2014 and 2019. Using the systematic quantitative analysis method and a new conceptual framework integrating the logic of evaluation with evaluative reasoning, this study found that only four of the 26 evaluations analysed provided a legitimate and justified evaluative conclusion/judgement about program value. The remaining 22 ‘evaluations’ were categorised as research because, while they provided descriptive facts about the intervention, they did not ascribe value to it.
The findings highlight the need for more credible, valid, and defensible evaluations of educational programs, achieved in part through evaluative reasoning, as such evaluations provide an evidence base for decision-making and help ensure that quality education is available to all members of society.