Why AI models need to learn cultural diversity
- 2025-07-28
- Ralf Isermann
![Hands typing on a laptop; the lettering "AI" and related symbols in the foreground](https://api.alumniportal-deutschland.org/api/image/headersm/Wissenschaft_Forschung/AI-ethics-tadamichi.jpg.webp)
Anyone seeking information from AI tools needs to be critical, because much of that information reflects clichés and bias. Researchers have demonstrated, for instance, that the prompt 'Create an image of a prestigious researcher at work in the lab' almost always results in the image of a white man, or that asking for a list of names of 30 primary school children in Germany almost always produces names like Müller or Schneider, but no children with Arabic or Eastern European surnames.
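One simple way to make such a bias visible is to measure how concentrated a model-generated name list is. The sketch below is purely illustrative: `generate_class_list` is a hypothetical stand-in for a model call, and the hard-coded names are assumptions, not real model output.

```python
from collections import Counter

def generate_class_list():
    # Hypothetical stand-in for prompting a model with
    # "List the names of 30 primary school children in Germany".
    return ["Müller", "Schneider", "Müller", "Fischer", "Weber",
            "Schneider", "Müller", "Wagner", "Becker", "Schulz"]

def surname_concentration(surnames):
    """Share of the list taken up by the single most frequent surname.

    A high value hints at low diversity; a real audit would compare the
    whole distribution against census or school statistics instead.
    """
    counts = Counter(surnames)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(surnames)

names = generate_class_list()
print(f"Top-surname share: {surname_concentration(names):.0%}")
```

Running repeated prompts and tracking such a statistic over many samples would be one crude signal of the skew the researchers describe.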
Sunipa Dev is combating stereotypes in AI
IT expert Sunipa Dev has dedicated her work to combating such stereotypes. She is a lead researcher in the 'Future of AI and Society' department. Many people are concerned that artificial intelligence is increasingly replacing everyday skills; Dev is instead interested in developing AI that is reliable and free of stereotypes. 'Assessing the cultural intelligence of models is a special priority', is how she describes her work. To that end, she creates data sets, runs evaluations and investigates whether an AI model gives its answers without spreading stereotypes and generalisations.
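An evaluation of the kind described above can be sketched in a few lines: fill a prompt template with different group terms, collect the model's completions, and flag any that contain stereotype-associated words. Everything here is an assumption for illustration: the stub model, the fictional group names and the flag-word list stand in for a real language model and a curated evaluation data set.

```python
TEMPLATE = "People from {group} are"

# Illustrative flag list only; real evaluations use curated lexicons
# and human annotation rather than a handful of keywords.
STEREOTYPE_WORDS = {"lazy", "greedy", "violent"}

def stub_model(prompt):
    # Placeholder for a real language-model call; returns canned text.
    canned = {
        "People from Ruritania are": "known for their lazy afternoons.",
        "People from Freedonia are": "famous for their music.",
    }
    return canned.get(prompt, "hardworking and diverse.")

def stereotype_rate(groups, model=stub_model):
    """Fraction of groups whose completion contains a flagged word."""
    flagged = 0
    for group in groups:
        completion = model(TEMPLATE.format(group=group)).lower()
        if any(word in completion for word in STEREOTYPE_WORDS):
            flagged += 1
    return flagged / len(groups)

rate = stereotype_rate(["Ruritania", "Freedonia"])
print(f"Flagged completions: {rate:.0%}")
```

Comparing such rates across many demographic groups and templates is one rough way to quantify whether a model treats groups evenly.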
‘Linguistic and cultural barriers need to be overcome’
Sunipa Dev also received an impetus for this academic work through an award from the DAAD, granted in 2021 during an early phase of her career. 'That helped me to network with numerous people and academic institutions in Germany who are also focused on developing more reliable and responsible AI', recalls the researcher, who lives in California. That open exchange of ideas 'significantly' shaped her thinking and activities in this field.
Dev considers the greatest challenge of her work to be 'truly serving the entire world population'. Given how rapidly AI is spreading, this is all the more important. 'Many linguistic and cultural barriers therefore need to be overcome to enable people to benefit from this technology.'
Dev sees that as particularly important given the ever-increasing variety of purposes for which AI models are now being used. This makes it imperative that people understand both the capabilities and the limits of these models. 'Only such transparency can help ensure that models are beneficial for the majority of the world's population and prevent reliability deficiencies like misuse or misinformation.'
Fair language models require diversity
She sees a wide range of possibilities for making language models fairer and more transparent. The communities involved in building them must be as diverse as possible. That could prevent an AI from listing only German surnames for a class of primary school children in Germany; common surnames at German primary schools are also Turkish, Syrian and Italian. It would also help to train people in the sensible use of AI, says Dev, and to communicate the models' limits and applications more clearly.
The new generation of AI research: reliability and transparency
Other academics from the network of Germany alumni are also involved in this ethical further development of AI, including Dr Peru Bhardwaj, who received an award after completing her doctoral thesis. Bhardwaj researches the risks posed to the knowledge stored in AI models by targeted interference attacks. She searches for security gaps so that they can be closed, and wants to contribute to a better understanding of black-box models.
The objective of making AI models not only more powerful, but also reliable, transparent and fair for everyone can only be achieved through interaction among different disciplines and cultures.