"Human-centered AI" means different things in different places. For some, it is a design philosophy for building reliable, safe systems. For others, it is an institutional strategy for getting humanists in the room with engineers. At Kenyon, it starts from a different question entirely: what should be left to humans?

AI raises two kinds of questions. Some are old questions in a new medium — language, creativity, consciousness, justice, governance — where centuries of humanistic and social scientific thought turn out to be essential. Others are genuinely new: What happens to authorship when a machine can write? What does fairness mean when an algorithm decides? How do you govern a technology that evolves faster than the institutions designed to regulate it? Human-centered AI, as practiced here, means pursuing both kinds of questions at once, with both technical fluency and humanistic depth.

Students learn Python, machine learning, deep learning, and generative AI. They build chatbots, RAG systems, and multi-agent simulations. They also learn to identify unstated assumptions in a model's design and to ask whether the question being answered is the right question. The technical and the humanistic develop together from the first course. Central to this is the many-model approach, developed by faculty members Kate Elkins and Jon Chun: students work across multiple models and architectures, developing the judgment to determine which tool is right for which question — a capability that purely technical programs do not develop.

The program has been developing this approach since 2016, years before generative AI made these questions urgent. The curriculum and methodology are documented in "The Crisis of Artificial Intelligence: A New Digital Humanities Curriculum for Human-Centered AI" (Edinburgh University Press, 2023).

Partnerships and Recognitions

  • Schmidt Sciences: Selected for the Humanities & AI Virtual Institute, 1 of 23 worldwide from 600+ applications. Funding "Archival Intelligence: Rescuing New Orleans' Endangered Heritage."
  • NIST CAISI: Prof. Elkins serves as PI for the U.S. AI Safety Institute.
  • OpenAI: Invited to the Higher Education Forum (1 of 6 talks from 1,000+ applicants).
  • Forbes: Profiled as a model for human-centered AI education.
  • RALLY Innovation: Elkins invited to speak on human algorithms.
  • Bloomberg: AI Strategy Course (Elkins as industry expert).
  • UNESCO: Collaboration on international AI education initiatives.
  • IBM/Notre Dame Tech Ethics Lab: $60,000 grant for technology ethics research.
  • Meta: Faculty participant in the Transparency Working Group (dates TBD).
  • Public AI: Contributing to the debate over AI for the public good.

Scholarly Impact

Faculty and student work has been taken up across multiple fields, from AI ethics and governance to cognitive science and digital humanities.

AI and Creativity: The GPT-3 Turing test study (Journal of Cultural Analytics, 2020) is now treated as a canonical case in debates about machine authorship. Ethicists and philosophers cite it when debating whether language models can be treated as quasi-agents. Researchers on the "AI ghostwriter effect" build on its findings about reader deception.

Computational Narrative Methods: The SentimentArcs methodology, central to "The Shapes of Stories" (Cambridge University Press, 2022), has been independently adopted by researchers across novels, fan fiction, games, film, TV scripts, medical narratives, election sentiment analysis, and economic crisis studies. Cognitive scientists have drawn on it when comparing how humans and AI models process story structure.

AI Safety and Governance: The open-source AI risk taxonomy (ICML 2024, oral presentation) has been adopted by policy scholars to structure risk assessments for the EU AI Act and model governance. The comparative typology of AI regulation across the EU, China, and the US (with the Oxford Witt Lab) is used as a starting framework in law and governance scholarship. The ethics-based audit of commercial LLMs demonstrated that different systems embed distinct normative patterns.

AI and Higher Education: The AI DH curriculum article (Edinburgh University Press, 2023) is cited across higher education, AI ethics, and information science as an epistemological framework for what human-centered AI education requires.

Publication venues include Cambridge University Press, Oxford University Press, Edinburgh University Press, the International Conference on Machine Learning (ICML), PMLA, Poetics Today, Narrative, Journal of Cultural Analytics, Frontiers in Computer Science, and the International Journal of Digital Humanities.