
Bias and Mitigation in Large Language Models: Addressing Inequalities and Promoting Ethical AI Development

Publisher: IEEE

Authors:
P Nagaraj, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Tiruchirappalli, Tamil Nadu, India
V Muneeswaran, Department of Electronics and Communication Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Virudhunagar, India
Amer Ayman, Faculty of Engineering, Zarqa University, Jordan
Hafez Mohamed, INTI-IU-University; Shinawatra University
Muthiah Raja, Department of Computer Science and Engineering, Kalasalingam Academy of Research and Education, Krishnankoil, Virudhunagar, India
Islam Mohammad Tahidul, School of IT and Engineering, Melbourne Institute of Technology, Melbourne, Australia
Ijaz Muhammad Fazal, Torrens University, Australia


Abstract:

Large Language Models (LLMs) are used extensively in natural language processing, yet they can encode social biases and therefore yield unfair outputs. In this paper, the bias present in four prominent models (BERT, XLNet, RoBERTa, and ALBERT) is examined using the CrowS-Pairs dataset, a benchmark designed to identify biased language patterns. The paper discusses how each model works and the types of bias it exhibits. Bias mitigation methods address different sources of bias through techniques such as Counterfactual Data Augmentation (CDA), Adversarial Debiasing, and the AI Fairness 360 Toolkit (AIF360), all of which aim to ensure fairness in AI systems. The study seeks to create more balanced and dependable AI systems, leading to the development of ethical and unbiased language models. It also shows how training data, model architecture, and interventions outside the model can be employed to prevent bias. By outlining how to identify and avoid bias, the article sets the stage for the future development of responsible AI. The research finds a need to continually probe and improve models in light of changing requirements, so that equitable AI can be developed for all applications.
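As an illustration of one of the mitigation techniques the abstract names, the sketch below shows Counterfactual Data Augmentation (CDA) in its simplest form: each training sentence is paired with a counterfactual twin in which gendered terms are swapped. The word-pair list and example sentences are illustrative assumptions, not taken from the paper, which may use a different term inventory and augmentation pipeline.

```python
# Minimal CDA sketch (illustrative only; the paper's actual word list may differ).
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his",
         "man": "woman", "woman": "man"}

def counterfactual(sentence: str) -> str:
    """Return the sentence with each gendered term replaced by its counterpart,
    preserving capitalization and trailing punctuation."""
    out = []
    for tok in sentence.split():
        core = tok.strip(".,").lower()
        swap = SWAPS.get(core)
        if swap:
            if tok[0].isupper():
                swap = swap.capitalize()
            trailing = tok[len(tok.rstrip(".,")):]  # keep ".", "," etc.
            out.append(swap + trailing)
        else:
            out.append(tok)
    return " ".join(out)

def augment(corpus):
    """CDA: train on each original sentence plus its counterfactual twin."""
    return [s for sent in corpus for s in (sent, counterfactual(sent))]

print(augment(["He is a doctor.", "She asked her manager."]))
# ['He is a doctor.', 'She is a doctor.',
#  'She asked her manager.', 'He asked his manager.']
```

Training on the augmented corpus exposes the model to both versions of each sentence equally often, which is the intuition behind using CDA to reduce stereotypical associations in the learned representations.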


Keywords: Large Language Models (LLMs), bias, mitigation, BERT, RoBERTa, XLNet, ALBERT, stereotypical, anti-stereotypical.

Published in: 2024 Asian Conference on Communication and Networks (ASIANComNet)

Date of Publication: --

DOI: -
