
Token 1.23: Mitigating Bias in Foundation Models/LLMs

Your guide to identifying bias, with a few debiasing techniques and a collection of tools and libraries for detection and mitigation

Introduction

From its inception, Artificial Intelligence (AI) has been plagued by bias. Early AI systems were trained on inherently limited datasets that often reflected existing societal inequalities. This resulted in algorithms that perpetuated stereotypes and discriminatory outcomes for certain groups. One famous example is a 1988 investigation that found a UK medical school's AI admissions software discriminated against women and applicants with non-European names.

Another famous example came much later. In 2016, Microsoft released Tay, a chatbot designed to learn from Twitter conversations. Within hours, Tay's language turned offensive and racist, reflecting the toxic content it had been exposed to online. The avalanche of racist statements and obscene narratives was a powerful demonstration of how quickly AI can absorb harmful biases from its environment. For some time afterwards, companies were wary of setting their chatbots loose. That all changed on November 30, 2022, when the world was introduced to ChatGPT. Today we explore what we have learned about bias since Tay's failure. Our focus will be on:

  • What is bias, and why it is a big problem for foundation models/LLMs?

  • Does bias only come from data?

  • How to identify biases?

  • Countermeasures: debiasing techniques

  • Tools and libraries for bias detection and mitigation 

  • Actions for different stakeholders

  • Conclusion

  • Research papers

What is bias, and why it is a big problem for foundation models/LLMs?

Foundation models, including large language models (LLMs), represent a significant advancement in AI systems capable of processing, generating, and understanding human-like language. These models have recently gained immense popularity, driven by their ability to perform a wide array of tasks with remarkable accuracy. Beyond text classification, sentiment analysis, machine translation, and question answering, foundation models extend their utility to fields such as image recognition, autonomous systems, and even the creative arts, showcasing their versatility and broad impact.

At the heart of these models lies a question: where do the billions of parameters that guide their "understanding" and outputs get their knowledge? Foundation models are trained on extensive datasets encompassing text, images, and sometimes audio from the internet, books, articles, and other media. This vast, diverse corpus of human knowledge enables them to learn patterns, relationships, and contexts, forming the basis for their intelligence and capabilities.

However, the breadth and depth of knowledge these models can access also introduce challenges. The data used to train them reflects the biases, inconsistencies, and varied quality of the source materials. These biases can become catastrophic when models are deployed at scale to make life-defining decisions for real people, each with their own unique story.
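
To see how data-borne bias surfaces in practice, here is a minimal sketch that probes a masked language model's learned associations. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint, neither of which is prescribed by this article; they simply make the idea concrete.

```python
# Minimal sketch: probing a masked language model for learned associations.
# Assumes `pip install transformers torch`; the model choice is illustrative.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# Compare which pronouns the model ranks highest for two professions.
for sentence in [
    "The doctor said that [MASK] would be late.",
    "The nurse said that [MASK] would be late.",
]:
    predictions = unmasker(sentence, top_k=5)
    top_tokens = [(p["token_str"], round(p["score"], 3)) for p in predictions]
    print(sentence, "->", top_tokens)

# If "he" is consistently ranked higher for "doctor" and "she" for "nurse",
# that skew was learned from the training corpus, not written by a developer.
```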

When these models are deployed in areas such as loan assessment, hiring, law enforcement, healthcare, customer service, and social media moderation, and as organizations move ever more of their content production and hiring pipelines onto AI, even a bias rate in the ballpark of ~0.1% can cost thousands or even millions of people opportunities they deserve.
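
To make the "~0.1%" figure concrete, here is a back-of-the-envelope sketch. The decision volumes below are purely hypothetical and serve only to show how a small skew compounds at scale.

```python
# Back-of-the-envelope: how a small bias rate scales with deployment volume.
# All volumes are hypothetical and only illustrate the order of magnitude.
decisions_per_year = {
    "loan applications": 10_000_000,
    "job applications screened": 50_000_000,
    "content moderation actions": 1_000_000_000,
}

bias_rate = 0.001  # ~0.1% of decisions skewed against a group

for domain, volume in decisions_per_year.items():
    affected = int(volume * bias_rate)
    print(f"{domain}: ~{affected:,} people wrongly disadvantaged per year")
```

At these (hypothetical) volumes, a 0.1% skew already translates into tens of thousands to a million adverse decisions per year.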

Does bias only come from data?

The rest of this article, loaded with useful details, is available to our Premium users only.

Thank you for reading; please feel free to share it with your friends and colleagues. In the next couple of weeks, we will be announcing our referral program 🤍
