Faculty of Education - Theses


Now showing 1 - 10 of 16
  • Item
    Developing defensible criteria for public sector evaluations
    Roorda, Mathea Bendino Shulamith ( 2019)
    Criteria convey dimensions of quality and goodness as relevant for a program and its context. They also provide the first of two value premises from which one can reason to an evaluative judgement (the other value premise being standards). Selecting and justifying relevant criteria is critical to defensible evaluative reasoning, especially so in evaluations of publicly funded programs. Yet to date, much of the theory on evaluative reasoning has been at a general level, with little focus on the individual elements of reasoning, including the development of defensible criteria. The aim of this study was to identify characteristics of defensible criteria for program evaluations. It also sought to understand how criteria are currently managed in Australian and New Zealand program evaluations. The study exemplified research as an emergent process, with findings from an initial phase of the research informing the development of an evidence-informed tool for establishing defensible criteria. The study contributes to closing a significant gap in research on evaluation, specifically as it concerns valuing. Three characteristics were identified as important for developing defensible criteria. Two of these - inclusion of all relevant dimensions of value and authoritative sources - are required to justify criteria. A third, full description, has a role in supporting the first two characteristics, as it is only when abstract value terms are explicitly defined or described that criteria can be assessed for comprehensiveness and authoritativeness. The first phase of the study included an in-depth systematic examination of criteria development in Australian and New Zealand program evaluation. This occurred through a survey of 137 evaluators and a review of 47 published evaluation reports. It found that explicit criteria are not routinely included in evaluation reports. 
The survey research provided empirical evidence that a critical element of evaluative reasoning is weak in Australian and New Zealand program evaluations. The findings provided an evidence-based platform from which to develop a theory-informed framework for developing defensible criteria. In the second phase of the study, a conceptual framework was developed that makes several novel and significant contributions to the field of evaluation. It provides a way for practitioners to engage with value theory and specifically normative ethical principles which deal with conceptions of good and bad. The conceptual framework was developed into a criteria matrix tool, along with a handbook to support evaluation practitioners to engage with normative ethical perspectives. Initial field testing provided proof of concept that the tool could support evaluators to identify dimensions of value that might otherwise be ignored.
  • Item
    A Framework of Factors for Learning Environment Evaluation
    Oliver, Graeme John ( 2019)
    There is a common assumption that the provision of innovative learning environments in schools will lead to the subsequent implementation of appropriate innovative approaches to teaching and learning in these facilities. However, there is not a strong body of research that interrogates the nature of the relationships and outcomes arising from the complex interactions between new learning environments and education practices. This research developed a framework to facilitate the evaluation of innovative education practices in innovative learning environments. The purpose of the framework is to help practitioners identify their particular situation and circumstances when evaluating identified aspects of the relationship between learning environments and teaching and learning practices. This supports the premise that better judgements about evaluation will facilitate better understandings of issues related to the implementation of innovative education practices in innovative learning environments. The framework was developed using an approach based on Conceptual Modelling. The details of the framework were derived from a literature review that deliberately incorporated a cross-disciplinary perspective, drawing on the fields of architecture, education facility design and education practice, with a particular orientation to teaching and learning in innovative learning environments. The capacity of the framework to achieve its intended purposes was investigated through a research process of Expert Elicitation, which proved very effective in generating a valid pool of data from a small, focussed group of respondents.
Analysis of the data showed that experts from backgrounds in both architecture and education strongly agreed on the factors considered most significant for the implementation of innovative education practices in innovative learning environments. These factors centred on concepts of education principles, stakeholder connection and student engagement. Qualitative data analysis identified a revised structure for the framework that best represents the key findings of the research. The framework allows for dynamic interpretation of the declared set of key issues that were identified. Guidelines for making decisions about interpreting the evaluation framework are given through descriptions of the key purpose statements, guiding questions and consideration of the nature of the evaluation to be utilised. Consequently, the key factors in the framework may be adapted to cater for different contextual settings as well as differing interpretations of key ideas associated with the evaluation of innovative education practices in innovative learning environments. This study presents two significant outcomes: a) the framework developed through the research, which brings focus and coherence to the evaluative situation; and b) the questionnaire developed for use by specific groups to aid their own situation-specific interpretation of the framework. Both the framework and the questionnaire represent a balanced integration of the perspectives of architects and educators with respect to implementing innovative education practices in innovative learning environments.
  • Item
    Evaluation and value for money: development of an approach using explicit evaluative reasoning
    King, Julian Challis ( 2019)
    There is increasing scrutiny of social investments to determine whether they deliver value for money (VFM), but current approaches to assessing VFM are incomplete. The disciplines of economics and evaluation share an interest in valuing resource use, but tend to operate as complementary or rival disciplines rather than being integrated within an overarching logic. Cost-benefit analysis (CBA) is often regarded as the gold standard for evaluating VFM, but has recognised limitations. For example, collective values, distributive justice, power dynamics, public dialogue, and qualitative evidence are peripheral to the method. Conversely, program evaluation offers more capacious approaches to determining value but rarely includes costs, let alone a reconciliation of value added with value consumed. This disciplinary divide may diminish capacity for good resource allocation decisions. The aim of this theory-building research was to develop a model to guide the evaluation of VFM in social policies and programs. A conceptual model was developed through critical analysis of literature, proposing requirements for good evaluation of VFM. Gap analysis was conducted to determine the extent to which CBA can meet the requirements of the conceptual model. Cumulative findings from the first two studies were dissected into a series of theoretical propositions. A process model was developed, identifying a series of steps that should be followed to operationalise the conceptual model. Case studies of real-world VFM evaluations in two international development programs were analysed to assess the conceptual quality of the theoretical propositions. This research makes seven significant and novel contributions to the field of evaluation. First, VFM is an evaluative question, demanding a judgement based on logical argument and evidence.
Second, VFM is a shared domain of two disciplines, because it is concerned with merit, worth and significance (the domain of evaluation) and resource allocation (the domain of economics). Third, CBA is not a rival to evaluation; it is evaluation. It evaluates an important dimension of VFM (aggregate wellbeing) and can strengthen the validity of an evaluation. Fourth, CBA is not the whole evaluation; it is usually insufficient on its own because of limitations in its scope and warrants. Fifth, a stronger approach involves explicit evaluative reasoning, with methods tailored to context including judicious use of economic methods where feasible and appropriate. Sixth, program evaluation standards should guide economic evaluation, and this has implications for the way CBA is used including the nature and extent of stakeholder involvement, the use of CBA in conjunction with other methods, and decisions about when not to use CBA. Seventh, the case studies are themselves a contribution, modelling the use of probative inference to corroborate the propositions of the conceptual model. Ultimately, this thesis provides proof of concept for a practical theory to guide evaluation of VFM in social policies and programs.
  • Item
    The use of formal theory in evaluation: a review of evaluation practice drawn from outcome evaluations of programs assisting Indigenous Australians
    Grey, Kim ( 2018)
    Little information is available about how formal theory, particularly theory drawn from the social sciences, has been used in evaluations. To help address this gap, this study sought to understand variation in the use of formal theory in real-world evaluation practice. The study used a systematic analytical approach, applying qualitative coding to identify, classify and analyse patterns of theory use, followed by qualitative comparative analysis to examine the strengths and weaknesses of applying formal theories. It examined a sample of public outcome evaluation reports drawn from a cohesive set of programs assisting Indigenous Australians in complex cross-cultural contexts. This is a general exploration of the use of theory, grounded in a particular set of programs. The results provide insight into the range and uses of theoretical material. Borrowing and repurposing of theoretical material in this sample focused on post hoc explanation. Less common was upfront use of theoretical material in framing an evaluation, or iterative use throughout the design and conduct of evaluations. The study found a wide range of theoretical material, beyond formal theory, in the reports. The findings reveal the concrete approach suggested by the literature, of applying formal theory to measure and examine causal pathways to behavioural outcomes. The study also found tailored, layered use of various theoretical material, drawing on a repertoire that included formal theory, the building blocks of theories, and wider conceptual resources. The breadth of theoretical material reveals practice beyond that anticipated in the literature. A potential typology of functional uses of theory was developed, which shows options for more sophisticated and reflective approaches to using theory. This study may have implications for how evaluators use formal theories and wider theoretical material. The typology could contribute to the tool-kit of techniques available to evaluation practice.
  • Item
    Creating indicators for social change in public health
    Aston, Ruth Frances ( 2018)
    The goal of achieving social change is a pursuit that traverses sectors, disciplines and levels of society. However, measuring social change, defining the concept, and evaluating the impact of efforts to achieve it are areas of much debate, with little empirical knowledge about the effectiveness of social change efforts. This thesis explored and developed proxy indicators for measuring progression towards social change in complex interventions, through a mixed methods concurrent triangulation research design. A narrative literature review in the first phase of the research identified how social change is defined and what variables could inform the development of proxy indicators. The review findings indicated that intervention design and implementation characteristics are related to intervention effectiveness. From these findings, proxy indicators were developed and tested in a meta-analysis of community-based interventions addressing modifiable cardiac risk factor reduction by acting on the social determinants of health (SDOH). The meta-analysis demonstrated that the indicators could moderate the impact of community-based interventions addressing modifiable cardiac risk factor reduction. A case study of a public health systems intervention, Help Me Grow, then investigated the content validity of the indicators and the practicality of using them for monitoring and evaluation during intervention implementation. The Help Me Grow case demonstrated that the indicators were both practical and applicable for monitoring the implementation of the intervention, and could be incorporated into a continuous quality improvement system. This thesis has demonstrated that indicators associated with intervention design and implementation are appropriate proxy impact indicators for complex community-based public health interventions, particularly for interventions with long implementation periods aiming to achieve generational change.
Further research is required to test the reliability and other forms of validity of the indicators in sectors and settings outside public health, and to identify what measures could be used to gather data on these indicators.
  • Item
    Evaluating social accountability interventions: the case for mixed methods and program theory
    Cant, Suzanne ( 2014)
    Social accountability interventions that promote citizen-state engagement are a relatively recent phenomenon in international development. To date, the evidence of their impact is limited and insufficiently robust, but growing. A reasonably strong body of research exists in the international development literature spanning the past decade; however, it is only in recent years that researchers have investigated these interventions from an evaluation perspective. There remains a significant gap in the evaluation literature on guidance for evaluators specific to these interventions and on identification of the most suitable evaluation approaches. The majority of published evaluations of these interventions are conducted through Randomised Controlled Trials (RCTs). These studies have made a significant contribution to our understanding of the broad components that make up these interventions, including information, collective action and government response. However, to date the qualitative analysis, especially of political context, is lacking, and there are very few high-quality evaluations of these interventions using alternatives to RCTs. Based on its findings, this paper advocates theory-based evaluations that draw on the early evidence from RCTs, use mixed methods, and include political analysis through a trans-disciplinary approach.
  • Item
    Evaluating the constructivist potential of the multimedia software "Stage struck" in drama education
    Mansfield, Susan ( 2005-01)
    This project compares the educational aims of the designers of the educational title “Stage Struck” with those of two separate groups in the process of learning and teaching; namely, the aims of teachers and students. This method of comparing the observations of two key stakeholder groups with the objectives of the designer is part of an instrument proposed by Squires and McDougall (1994) called the Perspectives Interaction Paradigm. This instrument diverges from traditional models of software evaluation in that it encompasses multiple points of view in determining whether a piece of software is useful in educational environments. This project hypothesizes that the Squires/McDougall approach shows how software can be utilized in ways that are complementary or contrary to the intentions of designers. It is for this reason, and the multi-pronged method of data collection, that the instrument is considered a more appropriate tool for the project than other evaluative checklists. “Stage Struck” was chosen because a number of items in the education literature indicate clearly articulated aims that the software be used in a manner befitting a constructivist approach. Using interviews with two teachers and observations of two students within a controlled setting, this project will identify where and why aims are transformed or modified, and how expectations of the educational merit of “Stage Struck” correspond or fail to correspond.
  • Item
    The development of the Course Experience Questionnaire
    Elphinstone, Leonie J. ( 1990)
    This project describes the theoretical background and the empirical development of the Course Experience Questionnaire (CEQ), an instrument designed to measure students’ perceptions of tertiary courses. Two areas of research provide the background for the development of the CEQ. Student ratings research (Marsh, 1987) has provided evidence of the reliability and validity of student evaluations at the tertiary level and has revealed consistent dimensions of teaching effectiveness. The second area is research focused on a relational view of learning (Marton, Hounsell & Entwistle, 1984), which has demonstrated the influence of students’ perceptions of educational contexts on their approaches to learning and the quality of learning outcomes. Although students' perceptions have been recognized as important in evaluating tertiary teaching at an individual level, no suitable instrument has so far been included in packages of performance indicators concerned with evaluation at the aggregate level of departments. The CEQ has been designed as such an instrument. The development of the CEQ from the initial pilot stage to its administration to a range of tertiary course groups is described. Possible further developments and refinements of the instrument are discussed, together with limitations and considerations in its future use as a performance indicator.
  • Item
    Vocational education and training, impacts, values, disinterest, prejudice and the politics of employability
    Beck, Kevin R. ( 2003)
    This study questions the proposition that undertaking vocational education and training (VET) enhances the likelihood of getting a job. The research opines that the perceived impact of VET on the employment decision is ideological and based on circumstantial evidence that is not measurable in the absence of specifically focused, large-scale, longitudinal research. The research contends that unemployment, and the role of education and training in reducing unemployment, intersect in a highly politicised and manipulated environment lacking the data and research necessary to inform public policy, and that this environment has spawned a new class system in Australia - "Employability". The research states that, in Australia, too many individuals and employers exhibit little regard for the value of education and training, preferring experience and attitude, whilst the government makes little effort to instil a desire for lifelong learning, engaging instead in denigration of the unemployed and shaping of public perception whilst frustrating independent analysis of its claims of success in dealing with this social enigma. Work, according to this researcher, has been elevated to the point of becoming a religion, and humanity is now valued by business and government solely for its "employability", in an act-of-faith policy set that allocates education and training a role subordinate to, and supportive of, employability. Decisions on the value of education are left to individual choice within a narrowly focused set of policies shackled by economic overtones and a failure to promote lifelong learning. The research locates these and other assertions in a conceptual framework that explores the intersecting themes of society, economics, politics, and education and training, through hermeneutical interrogation of empirical, theoretical and philosophical research, using content, thematic and materialist semiotic analysis within an ethnographic-inductive design.
The study surfaces the proposition that research and balanced economic and social evaluation techniques, together with sophisticated debate and a set of values and policies for lifelong learning, do not shape public policy and action. Instead, the drivers are primarily narrow ideologies, acts of faith, unproven assumptions and the objectives of politics and capital. The researcher ultimately concludes that the decision to employ is more influenced by the external complex interactions of agency, practice and social structure, and by the singular conditions of the interview.
  • Item
    Meeting the needs of engineering students in an ESP EFL Thai university context
    Kaewpet, Chamnong ( 2008)
    This study is motivated by dissatisfaction with the English ability of Thai engineering students. Theoretical and practical perspectives suggest that outcomes of ESP instruction could be improved if students' communication needs and learning needs were seriously responded to, and if needs analysis were extended to curriculum development. These considerations led to three research questions: what are the students' communication needs and learning needs, as perceived by key stakeholders; how can the identified needs be built into a new curriculum; and how successfully were the needs incorporated once the curriculum was implemented? The investigation was based on a course run for civil engineering students and gave importance to the perspectives of all involved: employers, civil engineers, civil engineering lecturers, former and current civil engineering students, and ESP teachers. The stakeholders participated in different data collection procedures: individual interviews, class observations, collection of students' work samples, focus group interviews, and evaluation of instructional materials. Five communicative events were recommended and incorporated into a new curriculum, considering related communication needs and learning needs: talking about everyday tasks and duties, reading textbooks, using technical terms in professional Thai conversations, reading manuals, and writing periodic/progress reports. The curriculum design was underpinned by the view that ‘curriculum’ is a process, shaped by its context and consisting of an interrelated set of other processes. The evaluation was carried out in two action research cycles while the course was underway. It was found that incorporating the needs was successful when the communication needs were also learning needs. The findings suggest that meeting learner needs may not always be successful initially, because many variables are sensitive and changeable, but problems can be overcome with flexibility and responsiveness.
A significant contribution of this study is that it provides an example of how change for improvement can be made within an existing system in a way that does not arouse antagonism.