Faculty of Education - Theses

  • Item
    The use of supplementary tests to improve selection for entry to engineering and applied science courses
    Redman, Ian H. (1982)
    Selection into science-based courses of tertiary education is generally dependent upon the score achieved by applicants in the examinations conducted at the conclusion of their year twelve studies. When the scores produced for the selection process are based on different courses of study, the question arises as to whether these scores are of equal validity in predicting the success of students in the first year of the department's courses. If the scores are not of equal validity, then some of the applicants for admission to the courses are being disadvantaged in the selection process. One method of reducing this disadvantage is to require all applicants to take an additional test (or tests), and to base the selection process on the scores achieved on those tests. Superficially, it could appear that the type of test to be administered should be a "content-free" one, since it is claimed that such tests allow access to tertiary education courses for lower socio-economic and other disadvantaged groups. However, since all science-based tertiary education courses are designed to build on a specific core of knowledge assumed from the students' secondary studies, it was felt that a content-based test was more appropriate, particularly if it used the core knowledge to test at least one of the abilities needed for success in the first year of a particular course. Two such tests were administered to 122 of the 128 students in the 1980 intake to the courses conducted by the Department of Communication and Electronic Engineering, RMIT. One of these tests was the Mechanical Reasoning Test (MRT) of the ACER; the other was one designed by the academic staff of the Department. Since the MRT of the ACER appeared to be a test of puzzle-solving ability set in a mechanical background, it was expected that the scores achieved on this test would correlate satisfactorily with the first year subject MP101 (Engineering Processes).
    The department's test was designed as a test of puzzle-solving ability, set in the core electrical material upon which the major subject in the first year of the course, C0136 (Electronic Engineering 1), was built. Because it was designed to measure the puzzle-solving ability of a student, it was hoped that the scores achieved on this test would, in addition to correlating satisfactorily with C0136, also correlate well with the other major engineering subject in the first year, C0137 (Digital Systems 1). The three main sub-groups of students in the department's annual intake are: (a) those who have completed the Victorian HSC course of study, typically numbering approximately sixty; (b) those who have completed a Victorian TOP course of study, typically numbering twenty-five; and (c) those students selected by the RAAF to become engineering officers. Of the thirty cadets in the annual intake, approximately twenty-five have completed an interstate year twelve course of study. Contingency tables were used to test the hypothesis that the scores achieved by the members of each of these sub-groups were not of equal validity in predicting the success of students in the first year of the department's courses. For each student in this study, the data collected were: (a) Predictors: the Anderson-type score, the scores on the two additional tests, and, in the case of Victorian students, the scores on the individual subjects of their year twelve courses; and (b) Criteria: the scores achieved on each subject of the department's first year course, plus the weighted mean score of the first year's work (not all subjects are considered to be of equal weight in the course). The effectiveness of the additional tests was investigated by comparing their predictive validity with that of the other predictors used in the study.
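    The contingency-table analysis described above can be sketched in a few lines. The chi-square statistic is computed by hand here, and the pass/fail counts for the three sub-groups are invented for illustration; they are not the study's data.

    ```python
    # Chi-square statistic for an r x c contingency table of counts.
    # Used to ask: do the intake sub-groups differ in first-year success rates?
    def chi_square(table):
        row_totals = [sum(row) for row in table]
        col_totals = [sum(col) for col in zip(*table)]
        grand = sum(row_totals)
        stat = 0.0
        for i, row in enumerate(table):
            for j, observed in enumerate(row):
                expected = row_totals[i] * col_totals[j] / grand
                stat += (observed - expected) ** 2 / expected
        return stat

    # Rows: hypothetical HSC, TOP, RAAF sub-groups; columns: passed / failed.
    counts = [
        [45, 15],   # HSC
        [15, 10],   # TOP
        [20, 10],   # RAAF
    ]
    stat = chi_square(counts)
    df = (len(counts) - 1) * (len(counts[0]) - 1)  # degrees of freedom
    print(round(stat, 3), df)
    ```

    A statistic large relative to the chi-square distribution with `df` degrees of freedom would, as in the study, count against the sub-groups' scores being of equal validity.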
    The predictive validities were produced via: (a) Pearson correlation coefficients, (b) Spearman correlation coefficients, and (c) conditional probability statements. The contingency table analysis confirmed the hypothesis being tested with respect to the weighted mean score (WMS), which is the best single criterion of first year performance used in this study. This analysis also demonstrated that the two criterion subjects, Digital Systems 1 (C0137) and Engineering Processes (MP101), gave highly significant statistical support to the hypothesis, and that their contribution to the WMS was largely responsible for the significant result obtained for this criterion. It is noted that of all the criterion subjects in the first year courses, these two are the ones which draw least upon the material previously presented to students during their secondary school studies. With respect to the predictive validity of the alternative tests used in the study, the investigation demonstrated that: (a) the department's test was at least equal to the aggregate score with respect to C0136, C0137 and the WMS; and (b) the MRT was superior to the aggregate score with respect to MP101. It should also be borne in mind that the times taken to administer these predictor tests are: (a) department's test, 45 minutes; (b) MRT, 20 minutes; and (c) aggregate score, 15 hours. When considering the three ways by which the predictive validity of a predictor was established in this study, evidence is presented which suggests that, with respect to the selection process, the graphical summary of the conditional probability statements (which has been called a success profile) is more relevant than the two which involve the computation of correlation coefficients.
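    The three measures of predictive validity named above can be sketched as follows. All scores and the threshold are hypothetical, and the conditional probability function computes just one point of what the study calls a success profile.

    ```python
    # Pearson r between a predictor and a criterion.
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # Ranks, with ties given the mean of the tied positions.
    def ranks(x):
        order = sorted(range(len(x)), key=lambda i: x[i])
        r = [0.0] * len(x)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
                j += 1
            mean_rank = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = mean_rank
            i = j + 1
        return r

    # Spearman rho is Pearson r computed on the ranks.
    def spearman(x, y):
        return pearson(ranks(x), ranks(y))

    # P(success | predictor >= threshold): one point on a success profile.
    def success_profile(predictor, passed, threshold):
        selected = [p for s, p in zip(predictor, passed) if s >= threshold]
        return sum(selected) / len(selected)

    test_score = [40, 55, 62, 70, 48, 80, 66, 52]  # hypothetical predictor
    wms        = [45, 60, 58, 75, 50, 85, 70, 49]  # hypothetical criterion
    passed     = [0, 1, 1, 1, 0, 1, 1, 0]          # success: WMS >= 50
    print(round(pearson(test_score, wms), 2))
    print(round(spearman(test_score, wms), 2))
    print(success_profile(test_score, passed, 60))
    ```

    Plotting `success_profile` over a sweep of thresholds would give the graphical summary the study argues is the most relevant of the three for selection decisions.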
    An unexpected result occurred when the predictive validities of the principal predictors used in this study were produced (using Pearson correlation coefficients) for those subjects in the second year of the department's courses for which corresponding first year subjects existed. In several cases, the predictive validity of the alternative test predictors actually rose for the second year subjects, whereas in every case the predictive validity of the aggregate score fell for the second year subjects. This result, if confirmed by an analysis of additional cases from the department's 1981-84 intakes, would seem to confirm the hypothesis that the alternative tests are measuring an ability needed for success in the department's courses which is not being measured by the subjects from which the aggregate score is produced, and that this ability (puzzle-solving?) is needed more in the later years of the department's courses than in the first year courses.
  • Item
    Sex bias in ASAT?
    Adams, Raymond J. (1959-) (1984)
    Since 1977, when the Australian Scholastic Aptitude Test was first used in the ACT as a moderating device, there have been differences in the average performance of males and females on the test. This difference in mean group performance has been referred to as a "sex bias". This report investigated the nature and the origins of those observed sex differences in ASAT mean scores. The study focused on five key issues:
    1. Retention
    2. Attitudes
    3. Preparation
    4. Item Bias
    5. Differential Coursework
    Retention rates were investigated to determine the effects of different retention patterns for male and female students on their ASAT scores. Students' attitudes were explored to examine the relationships between sex, attitudes and performance on ASAT. Students' preparation was investigated. The problem of bias in the ASAT items was investigated using both classical and latent trait theory, and the effects of course type on ASAT performance were investigated. The findings indicated a significant relationship between English ability, time spent in the study of mathematics, confidence in success and ASAT performance. It was also found that differences in retention rate may explain a substantial part of the observed differences in male and female mean scores. Although a range of factors was found to be related to ASAT performance, no significant sex effect was found after taking into account English ability, experience in mathematics and confidence in success.
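    The classical side of the item-bias investigation mentioned above can be sketched as a comparison of an item's facility (proportion answering correctly) between the two groups. The 0/1 response vectors and the screening threshold below are invented for illustration; the study's latent trait analysis is not reproduced here.

    ```python
    # Facility: proportion of correct (1) responses to a single item.
    def facility(responses):
        return sum(responses) / len(responses)

    # Hypothetical 0/1 responses of male and female candidates to one item.
    male_responses   = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
    female_responses = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0]

    gap = facility(male_responses) - facility(female_responses)
    flagged = abs(gap) > 0.15   # arbitrary screening threshold, an assumption
    print(round(gap, 2), flagged)
    ```

    An item flagged this way would then be examined further, since a raw facility gap can reflect a group difference on the underlying trait rather than bias in the item itself.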