13 July, 2024, 11:15 AM IST | Mumbai | Sanjana Deshpande
Artificial Intelligence (AI) has rapidly integrated into many aspects of modern life, offering unprecedented advancement and convenience. However, as AI advances, so do concerns about its inherent flaws, including gender bias. The issue pervades AI systems, including language models such as ChatGPT. Understanding and correcting these biases is critical to building fair, egalitarian AI technologies.
Defining gender bias in AI
Speaking to mid-day.com, Shreya Krishnan, MD (India) at AnitaB.org, said that gender bias in AI refers to systematic preference for, or discrimination against, people based on their gender identity. The bias, she said, can manifest in myriad ways, including by reinforcing stereotypes that disproportionately affect outcomes for certain groups.
In language models, gender bias can appear in the form of biased responses, unequal representation, and the perpetuation of gender stereotypes, she said.
Where does bias in AI come from?
When asked where the bias comes from, Krishnan pointed to the data the AI learns from. "The dataset from which the AI is learning is innately biased, and the artificial intelligence picks up on these biases, which then manifest in its responses," she said.
Krishnan stated that the brain compartmentalises people for ease of functioning, which manifests as stereotypes. These stereotypes become the basis for racism and sexism, with mental and physical ramifications, and the AI picks up on them, she added.
Elaborating on the same, Supriya Bhuwalka, the founder of Coding and More, told the correspondent that AI is a "live technology" and that it learns from humans.
"AI is different from coding because AI is a live technology. It's learning from us, with every click, and every choice that we make. It can learn from all sorts of data. Because it's learning from data, it's learning from us; that's where the biases are coming in. So whether it's large language models, which we are seeing in the Gen AI tools, like ChatGPT, or the image generators, where we are seeing the biases, it is also the case for our traditional AIs, which was always there in our phones, in our apps, where we didn't notice," elucidated Supriya.
She said that even when 'Googling' gifts for girls versus gifts for boys, the search results are gendered. Things for girls would be all pink and for boys, all blue, Supriya said, adding that even searches for top leaders or mathematicians returned very male-focused results.
"Now, because the company has realised that this bias is there, they are trying to mitigate that by making sure that, you know, even women and people of colour are presented if they do feature in the top scientists, etc," she added.
When asked at what stages of AI development these biases can enter the system, Kalika Bali, Principal Researcher at Microsoft, said that gender bias can be introduced at all stages in the training and deployment of models.
Elucidating, Bali said, "AI language models learn from data through 'deep learning', where the model learns to recognise patterns in data and make predictions based on this learning. Gender bias can be introduced at all stages in the training and deployment of the models. At the data stage, inherent societal biases are reflected in the data collected and curated for the model. During algorithm development, the design of the algorithm and the choices made by developers can introduce biases. During training, if the data is not balanced or split to be representative of gender, bias can be introduced, retained or amplified. And at deployment, bias can arise if the model is used in contexts it wasn't trained for, or if the applications are not sensitive or gender-aware in the interaction between the human and the model."
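Bali's training-stage point can be made concrete with a small sketch. The Python snippet below is a minimal illustration only; the function names and data layout are this article's assumptions, not drawn from her work. It splits a labelled dataset so each gender group keeps the same proportion in the training and test sets:

```python
# A minimal sketch of balancing data at the training stage: split a
# labelled dataset so every gender group keeps the same train/test
# proportion. The data layout and names are illustrative assumptions.
import random
from collections import defaultdict

def stratified_split(examples, label_of, test_frac=0.2, seed=0):
    """Split `examples` so each group keeps the same train/test ratio."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for ex in examples:
        groups[label_of(ex)].append(ex)  # bucket records by gender label
    train, test = [], []
    for group in groups.values():
        rng.shuffle(group)
        cut = int(len(group) * test_frac)
        test.extend(group[:cut])
        train.extend(group[cut:])
    return train, test

# Usage with toy (sentence, gender) records:
data = [("She leads the team.", "f"), ("He leads the team.", "m"),
        ("She fixed the bug.", "f"), ("He fixed the bug.", "m"),
        ("She won the award.", "f"), ("He won the award.", "m")]
train, test = stratified_split(data, label_of=lambda ex: ex[1], test_frac=0.34)
```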
Supriya, Kalika, and Shreya pointed to a "classic example" of gender bias in AI involving machine translation tools like Google Translate. When translating phrases like "He likes cooking; she likes coding" into a gendered language like Hindi and then back to English, the tool often switches the genders, turning the phrases into "She likes cooking; he likes coding."
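This round-trip test is straightforward to automate. The sketch below assumes some machine-translation callable `mt(text, src, dest)` is available; no specific service or API is implied, and the helper names are illustrative:

```python
import re

def round_trip(mt, text, pivot="hi"):
    """Translate English text into a gendered pivot language and back."""
    return mt(mt(text, src="en", dest=pivot), src=pivot, dest="en")

def pronoun_sequence(text):
    """List gendered English pronouns in order of appearance."""
    return re.findall(r"\b(he|she|him|her|his|hers)\b", text.lower())

def genders_swapped(original, returned):
    """True if the same pronouns come back, but in a different order."""
    before, after = pronoun_sequence(original), pronoun_sequence(returned)
    return before != after and sorted(before) == sorted(after)

# Usage, given some real translation callable `mt`:
#   out = round_trip(mt, "He likes cooking; she likes coding.")
#   if genders_swapped("He likes cooking; she likes coding.", out):
#       print("Round trip swapped the genders:", out)
```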
Supriya attributed this to the "way the algorithms have been trained".
The real-life impact of gender bias in AI
Gender bias in AI can have serious real-world consequences. Language models such as ChatGPT, which are trained on massive datasets drawn from the internet, can unintentionally learn and perpetuate the societal biases contained in that data. For example, when asked to complete sentences or generate text, these models may respond in a gender-stereotypical manner, perpetuating obsolete gender roles. Such prejudices can exacerbate inequality and impede progress towards gender equality.
Several case studies have illuminated the pervasive nature of gender bias in AI. One notable example is Amazon's AI recruitment tool, which was found to be biased against women. The tool, trained on resumes submitted to the company over a decade, favoured male candidates by penalising resumes that included the word "women's".
Gender bias in artificial intelligence also has far-reaching social consequences. Biased AI systems can exacerbate existing inequities, limit opportunities for some groups, and reinforce harmful prejudices. Addressing these biases is critical to increasing trust in AI technologies and ensuring their positive impact on society.
Organisations and policymakers have critical roles in reducing gender bias in AI. Establishing clear principles and standards for AI development, promoting transparency, and encouraging accountability are all crucial steps. Public awareness and education regarding AI's possible biases can also empower individuals to push for more equitable technologies.
Elaborating on the same, Supriya said, "Suppose you are the HR head and are hiring an executive for a firm based on certain qualities. If you use AI for filtering, you will see that it shortlists only men if a woman has not held the position before."
"Facebook had received backlash when AI was first used and they were advised not to include gender in the screening process. Even after removing gender filters, bias persisted due to language variations between men's and women's profiles and CVs. For example, men may employ leadership terminology or mention interests such as football or chess, whereas women may use a different vocabulary and explain alternative activities," Supriya added.
Women are underrepresented in search results because AI algorithms continue to rely on historical data. Even without explicit gender information, women may be unfairly excluded from roles for which they are well qualified, simply because the AI is influenced by past prejudices embedded in the language used by different genders, she said.
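The proxy effect she describes can be illustrated with a toy keyword-scoring filter: gender never appears as a field, yet weights learned from historically male-dominated hires still rank CVs by gendered vocabulary. All weights and terms below are invented for illustration:

```python
# A toy illustration of proxy bias in CV screening: no gender field
# exists, yet keyword weights learned from historically male hires
# still encode it. All weights and vocabulary are invented.
import re

LEARNED_WEIGHTS = {
    "football": 0.8,      # correlates with past (mostly male) hires
    "chess": 0.6,
    "captain": 0.7,
    "volunteering": 0.1,  # equally relevant in reality, but underweighted
    "choir": 0.05,
}

def score_cv(text):
    """Sum learned keyword weights; gender never appears explicitly."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(LEARNED_WEIGHTS.get(w, 0.0) for w in words)

cv_a = "Captain of the football team, chess club member"
cv_b = "Led the choir, organised volunteering drives"
print(score_cv(cv_a), score_cv(cv_b))  # the first CV wins on proxies alone
```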
Mitigating gender bias in AI
In 2018, researcher Joy Buolamwini was experimenting with facial recognition software when she discovered that the system had biases, and she has been working towards eradicating such biases from AI ever since, the Coding and More founder said, highlighting that it is important to have a human in the loop in processes involving AI.
"When applying for a credit loan at a bank, if the financial system uses an AI model to assess creditworthiness and the model has never seen profiles similar to yours, your application may be rejected without taking into account all relevant characteristics. To reduce gender bias or other biases in AI, it is critical to be aware of the challenges and guarantee that AI does not make independent judgements without human supervision. By involving humans, we can confront and eliminate biases. Additionally, introducing checks and balances through coding might assist in ensuring more equitable outcomes," Supriya said when asked about how to mitigate gender bias in AI.
Kalika, elaborating on common methods used to detect and mitigate gender biases in AI systems, said, "To mitigate the biases in AI, various strategies can be used to increase fairness and accuracy. Data augmentation is the process of adding specific data to the training set to balance representation and ensure that diverse groups are appropriately represented. Weight modifications can be applied to various data points to give more balanced coverage and prevent any one group from being overrepresented. Adversarial training challenges and reduces model biases through the use of adversarial examples and counterfactual scenarios. During training, fairness constraints can be incorporated directly into the model's objective function to help achieve equal outcomes. Furthermore, post-processing approaches, such as checking for misgendering in the output, ensure that the model's predictions are fair. By combining these techniques, we can construct AI systems that are more just and impartial."
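As a concrete instance of the data-augmentation strategy Bali lists, one widely published technique (not necessarily the one her team uses) is counterfactual data augmentation: every training sentence gains a gender-swapped twin, so both versions occur equally often. A minimal sketch:

```python
# A minimal sketch of counterfactual data augmentation: every training
# sentence gains a gender-swapped twin. The pair list is tiny and
# illustrative; real systems handle case, morphology, names, and the
# ambiguous his/her mapping, all skipped here.
PAIRS = [("he", "she"), ("him", "her"),
         ("man", "woman"), ("men", "women"), ("boy", "girl")]
SWAP = {a: b for a, b in PAIRS}
SWAP.update({b: a for a, b in PAIRS})

def gender_swap(sentence):
    """Swap gendered words in a lowercase, whitespace-tokenised sentence."""
    return " ".join(SWAP.get(w, w) for w in sentence.lower().split())

def augment(corpus):
    """Keep each original sentence and append its counterfactual twin."""
    return corpus + [gender_swap(s) for s in corpus]

print(augment(["he likes coding", "she likes cooking"]))
# ['he likes coding', 'she likes cooking',
#  'she likes coding', 'he likes cooking']
```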
Kalika, when asked how she balances the need for large datasets against the risk of training on biased data, noted that this could be achieved with additional data sampling for representational balance and by diversifying the sources of data used for training.
The researcher added that a preliminary data audit, giving an idea of representation across different dimensions, is always useful.
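Such a preliminary audit can be as simple as counting gendered terms in the corpus before training. A minimal sketch, with a deliberately tiny, illustrative term list:

```python
# A minimal sketch of a preliminary data audit: count gendered terms in
# a corpus before training. The term lists are deliberately tiny and
# illustrative, not a standard lexicon.
import re
from collections import Counter

FEMALE = {"she", "her", "hers", "woman", "women", "girl", "girls"}
MALE = {"he", "him", "his", "man", "men", "boy", "boys"}

def audit(corpus):
    """Tally gendered terms to expose representation imbalance."""
    counts = Counter()
    for line in corpus:
        for token in re.findall(r"[a-z']+", line.lower()):
            if token in FEMALE:
                counts["female"] += 1
            elif token in MALE:
                counts["male"] += 1
    return counts

corpus = ["He is a doctor.", "She is a nurse.", "He leads the team."]
print(audit(corpus))  # Counter({'male': 2, 'female': 1})
```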
"I would like to use the term "gender-intentional" rather than "gender-neutral" where we are mindful of the misgendering challenges as well as gender bias that can exist/creep in at every step of building/training/deploying an AI model. Some of the biggest challenges are in identifying these biases in the first place as the way gender bias manifests can vary a lot from language to language, different cultures as well as different contexts of applications. Given such diversity, it is extremely difficult to find a single solution that works across all the different contexts and granularity of gender bias that can exist in different social contexts, and hence, in the AI models," Kalika said when asked about challenges she faces in creating âgender-neutral' AI language models.
"This is an active area of research for Microsoft and we have recently published two papers on the subject," she added.
Anulekha Nandi, a fellow at Observer Research Foundation, whose research pertains to technology policy, digital innovation policy and management, told mid-day.com, "Improved gender representation is critical across the AI development lifecycle and in key decision-making positions. This includes increasing sensitisation, representation, and awareness in data labour procedures, particularly when preparing and processing datasets for AI models. To ensure justice, we need better evaluation measures for detecting intersectional bias in AI systems. Furthermore, there is an urgent need to create gender-responsive normative AI tools, principles, and regulatory frameworks. By tackling these issues, we may develop more inclusive AI technologies that reflect and fulfil the various requirements of society."
When asked whether encouraging more women to opt for careers in STEM would help reverse the gender bias, both Supriya and Shreya responded in the affirmative.