Overview

The development and use of large language models (LLMs) in India have the potential to drive access to crucial information and services, further equity in knowledge access and production, and fuel homegrown innovation. Yet, as with any technology, the use of these models carries significant risks, including bias, discrimination, exclusion, and informational harms. With respect to gender in particular, LLMs are known to reproduce many of the gender biases that exist in the world around us.

However, the majority of the research on gender biases in LLMs focuses on the English language and often limits itself to narrow definitions of what constitutes such bias. Moreover, while governments and civil society organisations are increasingly leveraging LLMs for critical social sectors such as healthcare or agriculture, very little is known about the potential implications of such efforts, especially from a gender equity perspective.

To bridge this knowledge gap, we undertook a year-long research study examining gender-related issues across the development lifecycle of LLM applications, particularly those deployed in critical social sectors. We focus specifically on chatbots, given their predominance among LLM use cases in India.

Drawing on our research insights, we also provide a diverse set of recommendations and tools for various stakeholders, including AI developers, government, and philanthropic organisations. These aim to foster more equitable and inclusive LLMs within critical social sectors while recognising the nuances involved in building gender-responsive technologies and the rapid pace of advancements within the LLM space.

Please note that in this study, we limit our analysis to women specifically, while acknowledging that many of these concerns may also apply to non-binary individuals and communities, and may indeed be more pronounced for them.


Research Outcomes

Over the past year, Digital Futures Lab conducted extensive research, including interviews, field visits, and expert workshops, to understand gender-related concerns in Indian language LLMs for critical social sectors. This was complemented by on-ground user research conducted by Quicksand Studios, which focused on understanding the needs and preferences of women from disadvantaged backgrounds who were either new users or non-users of LLM-based chatbots.

Drawing insights from each of these research activities, we present our key findings, recommendations, and tools in the form of a comprehensive guidebook, which serves as a primer on addressing gender biases in LLMs within the Indian context.

In addition, we are also releasing individual components of this guidebook in the form of in-depth, standalone research outputs.

Given the rapidly evolving nature of LLMs, the research and knowledge shared in these outputs are not meant to be definitive. We anticipate that many of the issues, recommendations, and tools detailed in our research will need to be continuously updated as this space evolves. However, we hope our research can serve as a starting point for further investigations into gender equity issues associated with LLMs, both within and outside of India.

The Outputs



From Code to Consequence: Interrogating Gender Biases in LLMs within the Indian Context

These reports were produced by Digital Futures Lab and supported by the Bill & Melinda Gates Foundation. The views expressed in this publication are those of the authors and do not necessarily represent the perspectives of any organisation involved in supporting or enabling this research.

Digital Futures Lab is an India-based, interdisciplinary research network that examines the complex relationship between technology and society in the Majority World. Through evidence-based research, participatory foresight, and public engagement, we identify pathways toward equitable, safe, and caring futures.

The research for this project was conducted between August 2023 and July 2024.

Research Team: Urvashi Aneja, Aarushi Gupta, Anushka Jain, and Sasha John

Production: Quicksand Design Studio

Report design: Kyra Pereira and Quicksand Design Studio

Landing Page Design: Quicksand Design Studio

Illustrations: Pāus

Project by: Digital Futures Lab

Supported by: Bill & Melinda Gates Foundation

Design by: Quicksand Design Studio