Computing and Information Systems - Research Publications
47 results
Search Results
Now showing 1 - 10 of 47
- Cloze Evaluation for Deeper Understanding of Commonsense Stories in Indonesian. Koto, F; Baldwin, T; Lau, JH (Association for Computational Linguistics, 2022-01-01)
- One Country, 700+ Languages: NLP Challenges for Underrepresented Languages and Dialects in Indonesia. Aji, AF; Winata, GI; Koto, F; Cahyawijaya, S; Romadhony, A; Mahendra, R; Kurniawan, K; Moeljadi, D; Prasojo, RE; Baldwin, T; Lau, JH; Ruder, S (Association for Computational Linguistics, 2022)
- The patient is more dead than alive: exploring the current state of the multi-document summarization of the biomedical literature. Otmakhova, Y; Verspoor, K; Baldwin, T; Lau, JH (Association for Computational Linguistics, 2022)
- Can Pretrained Language Models Generate Persuasive, Faithful, and Informative Ad Text for Product Descriptions? Koto, F; Lau, JH; Baldwin, T (Association for Computational Linguistics, 2022)
- Optimising Equal Opportunity Fairness in Model Training. Shen, A; Han, X; Cohn, T; Baldwin, T; Frermann, L (Association for Computational Linguistics, 2022)
- MultiSpanQA: A Dataset for Multi-Span Question Answering. Li, H; Vasardani, M; Tomko, M; Baldwin, T (Association for Computational Linguistics, 2022)
- What does it take to bake a cake? The RecipeRef corpus and anaphora resolution in procedural text. Fang, B; Baldwin, T; Verspoor, K (Association for Computational Linguistics, 2022)
- Evaluating Debiasing Techniques for Intersectional Biases. Subramanian, S; Han, X; Baldwin, T; Cohn, T; Frermann, L (Association for Computational Linguistics, 2021-01-01)
  Bias is pervasive in NLP models, motivating the development of automatic debiasing techniques. Evaluation of NLP debiasing methods has largely been limited to binary attributes in isolation, e.g., debiasing with respect to binary gender or race; however, many corpora involve multiple such attributes, possibly with higher cardinality. In this paper we argue that a truly fair model must consider 'gerrymandering' groups which comprise not only single attributes, but also intersectional groups. We evaluate a form of bias-constrained model which is new to NLP, as well as an extension of the iterative nullspace projection technique which can handle multiple protected attributes.
- MultiLexNorm: A Shared Task on Multilingual Lexical Normalization. van der Goot, R; Ramponi, A; Zubiaga, A; Plank, B; Muller, B; San Vicente Roncal, I; Ljubešić, N; Çetinoğlu, Ö; Mahendra, R; Çolakoğlu, T; Baldwin, T; Caselli, T; Sidorenko, W (Association for Computational Linguistics, 2021)
- Fairness-aware Class Imbalanced Learning. Subramanian, S; Rahimi, A; Baldwin, T; Cohn, T; Frermann, L (Association for Computational Linguistics, 2021-01-01)
  Class imbalance is a common challenge in many NLP tasks, and has clear connections to bias, in that bias in training data often leads to higher accuracy for majority groups at the expense of minority groups. However, there has traditionally been a disconnect between research on class-imbalanced learning and mitigating bias, and only recently have the two been looked at through a common lens. In this work we evaluate long-tail learning methods for tweet sentiment and occupation classification, and extend a margin-loss based approach with methods to enforce fairness. We empirically show through controlled experiments that the proposed approaches help mitigate both class imbalance and demographic biases.