Student research has evolved alongside advances in AI and the digital humanities. Early projects explored creative applications — theater set design, musical composition, game design — alongside pioneering computational text analysis. Beginning in 2018, students conducted groundbreaking work auditing language models years before ChatGPT's release, developing novel sentiment analysis frameworks for analyzing Supreme Court opinions and social media discourse.
Human-centered AI research at Kenyon spans many domains, each organized around a central question. Students choose their own path, and many projects draw on several domains. Every project rests on a single guiding premise: AI is most powerful when shaped by humanistic and social scientific thinking.
AI and Creativity
Can AI create?
Faculty and students investigate whether and how AI systems produce creative work, and what that production reveals about human creativity itself. Elkins’s GPT-3 Turing test study (Journal of Cultural Analytics, 2020) is now a canonical case in debates about machine authorship, cited by ethicists and philosophers asking whether language models can be regarded as quasi-agents. Student projects use generative AI to create original text, images, music, and video, then evaluate the results against humanistic standards. Work in this domain has included AI-generated screenwriting and storyboards, narrative convergence in AI-generated fiction, generative AI for cultural heritage restoration, and deep learning analysis revealing visual trends across decades of magazine covers.
Computational Humanities and Social Sciences
Can AI help us research?
This is the program’s largest research domain: faculty and students use AI as a method for investigating questions across literature, philosophy, religious studies, political science, economics, and public health. Elkins and Chun’s SentimentArcs methodology, central to The Shapes of Stories (Cambridge University Press, 2022), has been independently adopted by researchers across novels, fan fiction, games, film, TV scripts, medical narratives, and election sentiment analysis. Student projects have applied SentimentArcs to map emotional architecture across literary traditions, used NLP on the Septuagint to analyze biblical domestic terms, traced political discourse across 316,000 tweets, investigated private equity’s impact on healthcare quality, analyzed convertible bond investing, studied immigration policy, and conducted financial sentiment analysis.
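The core idea behind sentiment-arc analysis can be sketched in a few lines. The following is a minimal, illustrative toy (not the published SentimentArcs implementation, which ensembles many sentiment models): score each sentence of a text, then smooth the resulting series so the narrative's emotional shape becomes visible. The lexicon and the smoothing window here are hypothetical placeholders.

```python
# Toy sketch of a sentiment-arc pipeline: lexicon-score each sentence,
# then smooth the series with a moving average to expose the arc.
import re

# Tiny illustrative lexicon -- a real pipeline would use trained models.
LEXICON = {"joy": 1.0, "love": 1.0, "hope": 0.5,
           "grief": -1.0, "fear": -0.5, "loss": -1.0}

def sentence_scores(text):
    """Split text into sentences and score each by summed lexicon hits."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    scores = []
    for s in sentences:
        words = re.findall(r"[a-z]+", s.lower())
        scores.append(sum(LEXICON.get(w, 0.0) for w in words))
    return scores

def moving_average(scores, window=3):
    """Smooth the raw per-sentence series so the overall arc is visible."""
    if len(scores) < window:
        return scores[:]
    return [sum(scores[i:i + window]) / window
            for i in range(len(scores) - window + 1)]

demo = ("Hope filled the town. Then grief and loss arrived. "
        "Fear spread. Joy and love returned.")
arc = moving_average(sentence_scores(demo))
```

Plotting `arc` for a full novel, rather than this four-sentence demo, is what yields the "shapes of stories" the methodology is named for.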
AI and Society
Can AI solve real-world problems?
Faculty and students design and build working systems that address real-world needs, and investigate how AI intersects with justice, governance, and social welfare. The IBM/Notre Dame Tech Ethics Lab grant ($60,000, one of eleven internationally) benchmarked AI decision-making in juvenile recidivism contexts. This domain also includes entrepreneurship: students don’t just study problems; they build solutions. Projects include analysis of surveillance capitalism in social media Terms of Service, the opioid epidemic in Ohio, algorithmic decision-making in criminal justice and healthcare, a retrieval-augmented film recommendation system, an AI voice coach for high-stakes fields, multi-agent debate systems for MLB front office analytics, and chatbots designed for disability services. In 2025, students won Most Impactful Project at HackOH/IO and received a Y Combinator invitation.
AI Safety and Governance
Can AI be trusted?
One of the program’s most visible research areas. Prof. Elkins serves as PI for the U.S. AI Safety Institute (NIST CAISI), and the open-source AI risk taxonomy (ICML 2024, oral presentation) has been adopted by policy scholars to structure risk assessments for the EU AI Act. The comparative typology of AI regulation across the EU, China, and the US (with the Oxford Witt Lab) is used as a starting framework in law and governance scholarship. Students contribute directly to this work, analyzing 17,000 ChatGPT conversations to study patterns in human-AI interaction, profiling LLM decision-making under varying conditions, auditing models for bias and ethical reasoning, testing negation sensitivity and robustness in high-stakes AI reasoning, and developing frameworks for responsible and secure deployment.
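A negation-sensitivity audit of the kind mentioned above can be sketched simply: build minimal pairs that differ only by a negation, query a model on both, and measure how often its verdict flips. Everything below is an assumed illustration, not the program's actual test harness; the `keyword_model` stub stands in for a call to a real language model.

```python
# Illustrative negation-sensitivity probe: a robust model should flip
# its verdict on (nearly) every negated minimal pair.

def negate(statement):
    """Crude minimal-pair builder: insert 'not' after the first 'is'/'was'."""
    for verb in (" is ", " was "):
        if verb in statement:
            return statement.replace(verb, verb.rstrip() + " not ", 1)
    return "It is not the case that " + statement

def flip_rate(model, statements):
    """Fraction of statements whose verdict changes under negation."""
    flips = sum(model(s) != model(negate(s)) for s in statements)
    return flips / len(statements)

# Stub standing in for an LLM call: a naive keyword matcher that ignores
# negation entirely, so it fails the audit with a flip rate of 0.0.
def keyword_model(s):
    return "unsafe" if "risk" in s else "safe"

statements = ["The deployment is a risk.", "The audit was a risk."]
rate = flip_rate(keyword_model, statements)  # 0.0 for this stub
```

Swapping the stub for real model calls, and the crude `negate` for curated minimal pairs, turns this skeleton into a genuine robustness audit.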
AI and Human Experience
What does AI reveal about us?
Faculty and students investigate what AI systems reveal about human cognition, creativity, and self-understanding. When a machine can write, what does that tell us about authorship? When an algorithm translates, what gets lost? Elkins’s work on AI and narrative cognition has been taken up by cognitive scientists comparing how humans and AI models process story structure. This domain bridges the program’s humanistic foundations with its computational work, asking how AI reshapes our understanding of consciousness, memory, representation, and what it means to know another mind.
AI and Human Futures
How will AI transform how we live, learn, and work?
Faculty and students examine how AI is reshaping institutions, economies, and human possibilities. The AI DH curriculum article (Edinburgh University Press, 2023) has been cited across higher education, AI ethics, and information science as an epistemological framework for what human-centered AI education requires. This domain asks the forward-looking questions: What happens to the university when AI can write essays? What happens to labor markets when AI can do knowledge work? How do societies govern a technology that evolves faster than the institutions designed to regulate it? Projects address AI and the future of higher education, economic disruption, institutional adaptation, and the broader question of how human institutions respond to transformative technology.
Student research is published on Digital Kenyon at digital.kenyon.edu/dh, where it has attracted almost 100,000 readers from institutions including Stanford, MIT, Oxford, Cambridge, Berkeley, the Max Planck Institute, the Chinese Academy of Social Sciences, and the World Bank.