Jon Chun came to Kenyon from Silicon Valley, where he designed and patented network security and privacy technologies and co-founded successful startups, including the world’s largest anonymity service, backed by In-Q-Tel. After relaunching the first web-based VPN appliance as Director of Development at the world’s largest security company, he chose to bring that experience to a humanities program. Over more than a decade at Kenyon, he co-developed the first interdisciplinary human-centered AI curriculum, motivated by a conviction that has only grown more urgent: too many of the most consequential decisions about technology are made by engineers alone. The questions AI raises about consciousness, creativity, justice, and governance require more voices at the table, and those voices need training that the technical disciplines do not provide.

At Kenyon, Chun co-founded the AI CoLab with Katherine Elkins in 2016. He created SentimentArcs, an open-source methodology for mapping the emotional architecture of narrative texts, developed in collaboration with students whose projects tested and extended the approach across genres and disciplines. Researchers worldwide have independently adopted SentimentArcs as a template for studying novels, fan fiction, games, film, TV scripts, end-of-life medical narratives, and economic crisis narratives, and cognitive scientists have drawn on the methodology when comparing how humans and AI models process story structure. His early experiments with students on GPT-2 for AI-generated story writing (2019) and DivaBot human-AI improv (2021) were among the first attempts to use transformer models for creative narrative, predating the current wave of generative AI by several years.

Chun’s more recent work extends into explainable AI and empirical ethics. His framework for using GPT-4 in story analysis and generation (International Journal of Digital Humanities, 2023) is cited by AI researchers and digital humanists as a model for tailoring explainable AI methods to interpretive tasks. His ethics-based audit of eight major commercial language models (2024) demonstrated that different LLMs embed distinct and sometimes inconsistent normative patterns; researchers on value alignment and AI ethics have adopted its scenario-based auditing methodology. With Elkins and collaborators at the Oxford Witt Lab, he co-authored a comparative typology of AI regulation across the EU, China, and the US that governance scholars now use as a starting framework for analyzing emerging regulatory models.

Chun’s publication record reflects the range of his thinking: work appearing in venues from Cambridge University Press and Oxford University Press to the International Conference on Machine Learning, spanning AI security and benchmarking, human alignment, affective AI, storytelling and narrative, AI policy and regulation, and machine psychology. His earlier research in medicine and physics produced publications on medical informatics, genomics, and semiconductor intellectual property. In industry, he has worked in the US, Asia, and the EU in enterprise software, fintech, insurtech, and healthtech in roles as CEO, CTO, and sales engineer.

He has mentored approximately 400 student research projects and built the infrastructure that enables non-STEM undergraduates to conduct sophisticated computational research, closing what Forbes called the “STEM lab-opportunity gap” that typically excludes humanities and social science students from hands-on research experience. He serves as co-PI on the Schmidt Sciences HAVI grant.

Areas of Expertise

Research in human-centered AI, AI agents, affective computing, narrative, security/privacy, generative AI benchmarking, eXplainable AI (XAI), AI fairness, bias, transparency, and explainability (FATE), ethical and compliance auditing, and AI policy/regulation. Domain expertise in HealthTech, FinTech, InsurTech, Security, and Entrepreneurship.

Education

1995 — Master of Science from the University of Texas at Austin

1989 — Bachelor of Science from the University of California, Berkeley

Courses Recently Taught

This course explores artificial intelligence through both technical implementation and humanistic inquiry. Building on the programming foundations from IPHS 200, students learn to build and critically evaluate AI systems, from classical machine-learning approaches to cutting-edge deep neural networks and large language models. Through hands-on projects, students create AI systems that generate music, analyze text, classify images, and more. The course pairs technical training with readings from philosophy, ethics, and critical theory to examine fundamental questions about creativity, intelligence, and what it means to be human in an age of artificial minds. The course emphasizes both technical competency and critical thinking, preparing students to be thoughtful practitioners and critics in our AI-driven future. Prerequisite: COMP 118, IPHS 200, or IPHS 391 (fall 2025).