Katherine Elkins came to Kenyon as a scholar of consciousness, embodied experience, and the phenomenology of time. Her research in comparative literature centered on how modernist writers (Proust, Woolf, Beckett) investigated the texture of lived time, the gap between what can be measured and what can be known, and the ways narrative form encodes perceptual experience. That work on memory, cognition, and the limits of representation continues to be cited in scholarship on literary memory, classical reception, and intertextuality, and it turned out to be unusually good preparation for the age of AI. The same questions about what it means to understand another mind now animate her computational research.
 
In 2016, Elkins co-founded Kenyon’s AI CoLab with Jon Chun. The intuition was that the deepest questions about artificial intelligence are not technical but humanistic: What does it mean for a machine to “understand” a text? Can a language model capture what a novel knows about grief, or time, or moral complexity? Where does computation end and interpretation begin? Elkins designed courses that place these questions at the center, teaching students to use AI as a tool for humanistic and social scientific inquiry while thinking critically about what AI can and cannot do.
 
Her early empirical work tested these questions directly, growing out of collaborative experiments with students in the AI CoLab. “Can GPT-3 Pass a Writer’s Turing Test?” (Journal of Cultural Analytics, 2020), co-authored with Chun just four months after GPT-3’s release, found that readers judged GPT-3’s stories to be equal or superior to human-written work. The paper is now treated as a canonical case in AI ethics debates about machine authorship. Her book The Shapes of Stories (Cambridge University Press, 2022) brought computational methods to narrative theory, drawing on student research conducted through the CoLab. The SentimentArcs methodology at the heart of that book has been independently adopted by researchers working across novels, fan fiction, games, film, medical narratives, election analysis, and economic crisis studies.
 
Elkins has since published in PMLA, Poetics Today, Narrative, and the Journal of Cultural Analytics on how large language models destabilize the traditional author function and force universities to rethink how they teach reading and writing. In AI safety, she co-authored a taxonomy of open-source AI risks that was selected for oral presentation at ICML 2024 and has been adopted by scholars working on the EU AI Act and model governance. She serves as Principal Investigator for the U.S. AI Consortium and leads the Schmidt Sciences HAVI project, “Archival Intelligence: Rescuing New Orleans’ Endangered Heritage.”
 
Elkins directs IPHS and continues to investigate the questions that have animated her work from the beginning: how humans perceive, remember, and make meaning, and what happens when machines attempt to do the same.

Areas of Expertise

Human-centered AI, Multimodal and Multilingual Generative AI, Affective AI, Narrative, Translation, Explainable AI, Bias and Fairness, AI Regulation, AI Ethical Auditing and AI Safety

Education

2002 — Doctor of Philosophy from University of California, Berkeley

1990 — Bachelor of Arts from Yale University

Courses Recently Taught

This course equips students with computational methods spanning the humanities, social sciences, and data science. Through Python programming, data visualization, and modeling, students analyze everything from literary texts to social networks. The course examines how digital tools transform our understanding of human behavior and society while tackling crucial questions about AI, surveillance, automation, and transhumanism. By combining quantitative methods with critical analysis, the course prepares students to both understand and shape our increasingly algorithmic world. This course serves as the gateway course in the IPHS AI curriculum. We recommend that students without prior data science or programming experience take this course before enrolling in more advanced AI courses.

This course explores artificial intelligence through both technical implementation and humanistic inquiry. Building on the programming foundations from IPHS 200, students learn to build and critically evaluate AI systems, from classical machine-learning approaches to cutting-edge deep neural networks and large language models. Through hands-on projects, students create AI systems that generate music, analyze text, classify images and more. The course pairs technical training with readings from philosophy, ethics and critical theory to examine fundamental questions about creativity, intelligence, and what it means to be human in an age of artificial minds. The course emphasizes both technical competency and critical thinking, preparing students to be thoughtful practitioners and critics in our AI-driven future. Prerequisite: COMP 118, IPHS 200 or IPHS 391 (fall 2025).

This course, designed as a research and/or studio workshop, allows students to pursue their own interdisciplinary projects. Students are encouraged to take thoughtful, creative risks in developing their ideas and themes. Those engaged in major long-term projects may continue with them during the second semester. This course does not count toward the completion of any diversification requirement. No course prerequisite; junior standing required.