
Provocations from the Humanities for Generative AI Research

© 2025 by IJEMR

Volume 1 Issue 1

Year of Publication : 2025

Authors : Devadharshini G, Kishalini C

Citation :

Devadharshini G, Kishalini C, 2025. "Provocations from the Humanities for Generative AI Research," ESP International Journal of Emerging Multidisciplinary Research [ESP-IJEMR], Volume 1, Issue 1: 43-52.

Abstract :

The swift progress of generative artificial intelligence (AI) has already disrupted the production of language, imagery, music and multimodal artefacts, increasingly blurring the line between algorithmic synthesis and human creativity. Engineering-centric research on generative models, focused heavily on optimisation, scale and benchmark performance, typically overlooks the interpretive, historical and cultural dimensions that the humanities have long studied. This paper advances several humanistic provocations for generative AI research, arguing that interpretive modes such as literary theory, philosophy, archival studies and critical cultural analysis can illuminate significant blind spots in contemporary AI practice. All of these provocations challenge the persistent notion that AI models are objective, context-free technologies. Instead, they frame generative systems as interpretive agents embedded in complex social networks, replicating biases, values and hierarchies that must be subjected to critical scrutiny. The paper accordingly suggests that generative AI should be approached as an interlocutor of meaning rather than a mere code-generating object, and that it can be examined through hermeneutics, historiography and critical epistemology. Existing evaluation metrics do not account for provenance, authorship, chronology or rhetorical framing; here the humanities offer methodological tools for investigating such factors. For example, archival theory exposes data collection methodologies that produce structural silences and selective memory, while close readings of model outputs reveal the narrative and ideological patterns enacted in representation. In this way the humanities ground critique in historically and materially specific conditions, moving the conversation beyond abstract notions such as "bias" or "ethics". Through nine provocations, the article reconceptualises generative AI as an interpretive space rather than mechanical output. These include calls to embrace interpretive multiplicity, recognise labour and institutional power, attend to historical contingency, foreground data provenance and treat models as interpretive objects, among others. Further provocations draw attention to archival reflexivity, hermeneutic interpretability and the ethical dimensions of evaluation. Together, they advocate the integration of humanistic methodologies into AI research pipelines, provenance-aware data infrastructures, participatory evaluation with and by affected communities, and the "close reading" of models. The study concludes that there can be no responsible development of generative AI without a role for the humanities: ethical deliberation should form the basis of model generation, interpretation and deployment rather than an afterthought, and humanistic inquiry should inform model-making. By defining generative systems as interpretive and cultural agents, not only computational ones, researchers can build machine learning that respects historical context, multiple meanings and human creativity. Rather than stifling innovation, the provocations put forward here work towards alternative, more reflexive, democratic and culturally attuned AI futures.
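As one illustration of what a provenance-aware data infrastructure might record, the minimal Python sketch below attaches source, authorship, chronology and curatorial commentary to each corpus item. The class and field names are hypothetical, loosely inspired by the datasheets-for-datasets proposal cited in [12]; this is not an established schema, only one possible shape such a record could take.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProvenanceRecord:
    """Hypothetical per-item provenance record for a training corpus."""
    source: str                 # where the item was collected from
    author: str | None          # original creator, if known
    collected_on: date          # when the item entered the corpus
    licence: str | None         # usage terms attached to the item
    notes: list[str] = field(default_factory=list)  # curatorial commentary

    def flag(self, concern: str) -> None:
        """Record an interpretive concern (e.g. a structural silence)."""
        self.notes.append(concern)

# Example: even an unknown author is itself a fact worth recording.
record = ProvenanceRecord(
    source="digitised newspaper archive",
    author=None,
    collected_on=date(2025, 1, 15),
    licence="public domain",
)
record.flag("archive over-represents urban, English-language sources")
```

The design point is modest: making provenance and curatorial judgement first-class fields, rather than leaving them implicit in file paths or crawl logs, is one way the abstract's call for archival reflexivity could surface in an ordinary data pipeline.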

References :

[1] Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.

[2] Birhane, A. (2021). Algorithmic injustice: A relational ethics approach. Patterns, 2(2), 100205.

[3] Bolukbasi, T., Chang, K.-W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. NeurIPS.

[4] Borgman, C. L. (2015). Big Data, Little Data, No Data: Scholarship in the Networked World. MIT Press.

[5] Bourdieu, P. (1991). Language and Symbolic Power. Harvard University Press.

[6] boyd, d., & Crawford, K. (2012). Critical questions for Big Data. Information, Communication & Society, 15(5), 662–679.

[7] Chun, W. H. K. (2016). Updating to Remain the Same: Habitual New Media. MIT Press.

[8] Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

[9] Crenshaw, K. (1991). Mapping the margins: Intersectionality, identity politics, and violence against women of color. Stanford Law Review, 43(6), 1241–1299.

[10] Daston, L., & Galison, P. (2007). Objectivity. Zone Books.

[11] Foucault, M. (1972). The Archaeology of Knowledge. Pantheon Books.

[12] Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92.

[13] Green, B. (2021). The flaws of policies requiring human oversight of government algorithms. Computer Law & Security Review, 41, 105528.

[14] Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575–599.

[15] Heidegger, M. (1977). The Question Concerning Technology. Harper & Row.

[16] Hutchinson, B., Smart, A., Hanna, A., Denton, E., Greer, C., Kjartansson, O., Barnes, P., & Mitchell, M. (2021). Towards accountability for machine learning datasets: Practices from software engineering and infrastructure. Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.

[17] Iliadis, A., & Russo, F. (2016). Critical data studies: An introduction. Big Data & Society, 3(2).

[18] Kitchin, R. (2014). The Data Revolution: Big Data, Open Data, Data Infrastructures & Their Consequences. Sage.

[19] Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-Theory. Oxford University Press.

[20] Marcus, G. (2022). Deep Learning Is Hitting a Wall. Nautilus.

[21] Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.

[22] O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown.

[23] Parisi, L. (2019). Critical computation: Digital automata and general artificial thinking. Theory, Culture & Society, 36(2), 89–121.

[24] Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.

[25] Raji, I. D., & Buolamwini, J. (2019). Actionable auditing: Investigating the impact of publicly naming biased performance results of commercial AI products. AIES.

[26] Ricoeur, P. (1981). Hermeneutics and the Human Sciences. Cambridge University Press.

[27] Suchman, L. (2007). Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge University Press.

[28] Suresh, H., & Guttag, J. V. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. Equity and Access in Algorithms, Mechanisms, and Optimization.

[29] Tufekci, Z. (2015). Algorithmic harms beyond Facebook and Google: Emergent challenges of computational agency. Colorado Technology Law Journal, 13, 203–218.

[30] Winner, L. (1986). The Whale and the Reactor: A Search for Limits in an Age of High Technology. University of Chicago Press.

Keywords :

Generative Artificial Intelligence, Humanities, Hermeneutics, Critical Theory, Data Provenance, Interpretive Plurality, Archival Studies, Rhetorical Analysis, Epistemic Justice, Cultural Context, Digital Humanities, AI Ethics, Sociotechnical Systems, Historical Contingency, Algorithmic Power.