An article co-authored by Nickhil Sharma, Visiting Faculty, SIAS, titled "Intersectional analysis of visual generative AI: the case of stable diffusion," has been published in AI & Society.
The article critically examines Stable Diffusion, a widely used open-source visual generative AI tool, through an intersectional lens. It explores how AI-generated imagery can perpetuate existing social hierarchies, including sexism, racism, heteronormativity, and ableism, by defaulting to representations that are often white, able-bodied, and masculine-presenting. It also highlights the dominance of Euro- and North America-centric aesthetic norms in these outputs.
Importantly, the article moves beyond questions of bias in outputs to interrogate the institutional and ideological frameworks that inform the design and deployment of these tools. As digital technologies increasingly shape our political and social lives, the work underscores the importance of analysing the structures and ideas that produce them.

The authors advocate for a reparative and social justice-oriented approach to visual generative AI: one that actively addresses the injustices these systems can reinforce and imagines more inclusive and equitable technological futures.
The team invites readers to explore the full article and engage in the broader conversation on equity and accountability in AI development.
Read the full article here.