Faculty of Education - Theses

An investigation of Australian OECD PISA trend results
Urbach, Daniel (2009)
This thesis investigates a range of equating-related issues for the Australian data collected under the Organisation for Economic Co-operation and Development (OECD) Programme for International Student Assessment (PISA). The implications for Australia's reported trend results are considered in detail. Following an exploration of differential item functioning (DIF) and of the dimensionality of the Australian PISA scales, a single scale spanning all three PISA cycles (2000, 2003 and 2006) was constructed for each major PISA domain (Reading, Mathematics and Science). Previously published PISA results have employed a common reporting scale across all cycles for Reading; however, scales common to all cycles have not been utilised for Mathematics or Science. Two further classes of equating issues are considered in this thesis. First, four different approaches to equating were used: two different treatments of missing data were combined with two different item sets (all items and link items only) for each scale, and for each approach the implications for trends were discussed. Second, the equating approaches studied here used item parameters set at the country level rather than at the international level, allowing an examination of the impact of country DIF on the Australian trend results. Australian PISA trends were first explored in terms of means and standard deviations, and then in terms of the overall shape of the estimated performance distribution, using Q-Q (quantile-quantile) plots. Where applicable, comparisons were made with published trends. While the results showed many similarities between models and published results, some differences were found. In PISA cycles 2003 and 2006, Australian Reading means were statistically significantly lower when all omitted (or missing) responses were treated as not administered at the item calibration stage than when embedded omitted responses were treated as incorrect and trailing omitted responses as not administered. For cycles 2003 and 2006, the published Australian Mathematics means were significantly lower than those found in this study: the published results showed a decline in means between 2003 and 2006, whereas the results reported here showed no change in the Australian means between these two cycles. The published Australian Reading distributions showed a decline from 2000 to 2003 and from 2003 to 2006 in the number of Australian students located at the top end of the performance distribution. Between cycles 2000 and 2003 there was a decline from around the 70th percentile onwards, and between cycles 2003 and 2006 the decline was even more severe, extending from around the 20th percentile onwards and growing larger for higher ability groups. These estimated changes in the distribution shape were not replicated here, where the Australian data were analysed independently of the international data. The reanalysis undertaken here found a decline between the first two PISA cycles, but, remarkably, only in the bottom 15 per cent of the distribution. Between cycles 2003 and 2006, an almost constant decline across the whole proficiency distribution was found, rather than a decline limited to the top end of the distribution. The reported results highlight some of the potentially important differences that can occur when different analysis methods are used.
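The two treatments of omitted responses contrasted in the abstract can be illustrated with a short sketch. This is an illustration only: the 0/1 response coding, the None marker for omissions, and the function names are assumptions made for demonstration, not the data layout or software actually used in the thesis.

```python
# Hedged sketch of the two omitted-response treatments described in the abstract.
# Assumes a per-student list of item responses in administration order, with
# None marking an omitted item; these conventions are illustrative assumptions.
NOT_ADMIN = None   # excluded from item calibration
INCORRECT = 0

def recode_all_not_administered(responses):
    """Treatment A: every omitted response is treated as not administered."""
    return [NOT_ADMIN if r is None else r for r in responses]

def recode_embedded_incorrect(responses):
    """Treatment B: embedded omits scored incorrect; trailing omits not administered."""
    answered = [i for i, r in enumerate(responses) if r is not None]
    last_answered = answered[-1] if answered else -1
    recoded = []
    for i, r in enumerate(responses):
        if r is not None:
            recoded.append(r)
        elif i < last_answered:
            recoded.append(INCORRECT)   # embedded omission
        else:
            recoded.append(NOT_ADMIN)   # trailing omission (not reached)
    return recoded

# Example: the third item is an embedded omit, the last two are trailing omits.
responses = [1, 0, None, 1, None, None]
print(recode_all_not_administered(responses))   # [1, 0, None, 1, None, None]
print(recode_embedded_incorrect(responses))     # [1, 0, 0, 1, None, None]
```

The Q-Q comparison of cycle-to-cycle performance distributions can likewise be sketched. The simulated scores, the PISA-like scale parameters and the plotting details below are illustrative assumptions; the thesis itself works with the actual Australian PISA proficiency estimates.

```python
# Hedged sketch of a Q-Q (quantile-quantile) comparison between two cycles,
# using simulated proficiency scores on a PISA-like scale (mean ~500, sd ~100).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
scores_2003 = rng.normal(loc=505, scale=95, size=10_000)   # hypothetical cycle 1
scores_2006 = rng.normal(loc=500, scale=100, size=10_000)  # hypothetical cycle 2

# Match the two distributions percentile by percentile (1st to 99th).
percentiles = np.arange(1, 100)
q_2003 = np.percentile(scores_2003, percentiles)
q_2006 = np.percentile(scores_2006, percentiles)

fig, ax = plt.subplots(figsize=(5, 5))
ax.plot(q_2003, q_2006, marker=".", linestyle="none", label="matched percentiles")

# Identity line: points below it mark parts of the distribution where the later
# cycle sits lower than the earlier one.
lims = [min(q_2003.min(), q_2006.min()), max(q_2003.max(), q_2006.max())]
ax.plot(lims, lims, color="grey", linewidth=1, label="no change")

ax.set_xlabel("PISA 2003 quantiles")
ax.set_ylabel("PISA 2006 quantiles")
ax.set_title("Q-Q comparison of cycle-to-cycle proficiency distributions")
ax.legend()
plt.show()
```

In a plot of this kind, points falling below the identity line only at the upper percentiles would correspond to the top-end decline described in the published results, whereas a roughly parallel offset across all percentiles would correspond to the near-constant decline reported in the reanalysis.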