Melbourne Law School - Research Publications

Search Results

Now showing 1 - 4 of 4
  • Helping and not Harming Animals with AI
    Coghlan, S ; Parker, C (Springer, 2024-03-01)
    Ethical discussions about Artificial Intelligence (AI) often overlook its potentially large impact on nonhuman animals. In a recent commentary on our paper about AI’s possible harms, Leonie Bossert argues for a focus not just on the possible negative impacts but also on the possible beneficial outcomes of AI for animals. We welcome this call to increase awareness of AI that helps animals: developing and using AI to improve animal wellbeing and promote positive dimensions in animal lives should be a vital ethical goal. Nonetheless, we argue that there is some value in focusing on technology-based harms in the context of AI ethics and policy discourses. A harms framework for AI can inform some of our strongest duties to animals and can guide regulation and risk assessments designed to prevent serious harms to humans, the environment, and animals.
  • Harm to Nonhuman Animals from AI: a Systematic Account and Framework
    Coghlan, S ; Parker, C (Springer, 2023-06-01)
    This paper provides a systematic account of how artificial intelligence (AI) technologies could harm nonhuman animals and explains why animal harms, often neglected in AI ethics, should be better recognised. After giving reasons for caring about animals and outlining the nature of animal harm, interests, and wellbeing, the paper develops a comprehensive ‘harms framework’ which draws on scientist David Fraser’s influential mapping of human activities that impact on sentient animals. The harms framework is fleshed out with examples inspired by both scholarly literature and media reports. This systematic account and framework should help inform ethical analyses of AI’s impact on animals and serve as a comprehensive and clear basis for the development and regulation of AI technologies to prevent and mitigate harm to nonhumans.
  • Using public data to measure diversity in computer science research communities: A critical data governance perspective
    Bosua, R ; Cheong, M ; Clark, K ; Clifford, D ; Coghlan, S ; Culnane, C ; Leins, K ; Richardson, M (Elsevier Advanced Technology, 2022-04)
  • Good Proctor or "Big Brother"? Ethics of Online Exam Supervision Technologies
    Coghlan, S ; Miller, T ; Paterson, J (Springer Science and Business Media LLC, 2021)
    Online exam supervision technologies have recently generated significant controversy and concern. Their use is now booming due to growing demand for online courses and for off-campus assessment options amid COVID-19 lockdowns. Online proctoring technologies purport to effectively oversee students sitting online exams by using artificial intelligence (AI) systems supplemented by human invigilators. Such technologies have alarmed some students who see them as a "Big Brother-like" threat to liberty and privacy, and as potentially unfair and discriminatory. However, some universities and educators defend their judicious use. Critical ethical appraisal of online proctoring technologies is overdue. This essay provides one of the first sustained moral philosophical analyses of these technologies, focusing on ethical notions of academic integrity, fairness, non-maleficence, transparency, privacy, autonomy, liberty, and trust. Most of these concepts are prominent in the new field of AI ethics, and all are relevant to education. The essay discusses these ethical issues. It also offers suggestions for educational institutions and educators interested in the technologies about the kinds of inquiries they need to make and the governance and review processes they might need to adopt to justify and remain accountable for using online proctoring technologies. The rapid and contentious rise of proctoring software provides a fruitful ethical case study of how AI is infiltrating all areas of life. The social impacts and moral consequences of this digital technology warrant ongoing scrutiny and study.