Designing AI systems that understand, adapt, and align with human intelligence.
I’m a Ph.D. candidate in the Language and Learning Analytics (LaLA) Lab at the University of California, Irvine, advised by Dr. Nia Nixon, where I study how humans and intelligent systems learn, collaborate, and make sense of complex problems together. My work brings together methods from AI, cognitive science, and human–computer interaction to explore what effective human–AI teaming looks like in practice—across classrooms, research settings, and professional environments.
I lead the design and development of TRAIL (Team Research and AI Integration Lab), a research platform that simulates team problem-solving with configurable AI teammates. Through TRAIL, I study how personality, trust, and communication shape collaboration between people and AI systems.
My research explores the intersection of human-centered AI, cognitive science, and alignment, asking how AI systems can understand not just what we ask, but what we mean.
Human–AI Alignment in Teams: I study how AI teammates align with human intentions, communication patterns, and values through persona design, behavioral tuning, and adaptive proactivity.
Each AI agent in TRAIL can vary along three dimensions (illustrated in the sketch after this list):
Persona traits (e.g., conscientiousness, agreeableness, dominance) that affect tone, empathy, and assertiveness.
Behavioral alignment parameters that control responsiveness and conversational pacing.
Proactivity settings that determine when the AI initiates dialogue, offers solutions, or prompts reflection.
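As a rough illustration, here is a minimal configuration sketch in Python. The dataclass and field names are hypothetical stand-ins for how these three parameter families could be encoded; they are not TRAIL's actual schema.

```python
# Hypothetical sketch of an AI-teammate configuration; field names are
# illustrative, not TRAIL's actual schema.
from dataclasses import dataclass

@dataclass
class AgentConfig:
    # Persona traits on a 0-1 scale, shaping tone, empathy, and assertiveness.
    conscientiousness: float = 0.5
    agreeableness: float = 0.5
    dominance: float = 0.5
    # Behavioral alignment: responsiveness and conversational pacing.
    response_delay_seconds: float = 2.0
    max_messages_per_turn: int = 1
    # Proactivity: when the agent speaks up without being addressed.
    initiates_dialogue: bool = False
    offers_solutions: bool = True
    prompts_reflection: bool = False

# Example: a warm, low-dominance teammate that nudges the team to reflect.
reflective_peer = AgentConfig(agreeableness=0.9, dominance=0.2, prompts_reflection=True)
```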
This work advances a framework of socio-cognitive alignment, where AI systems learn to maintain shared mental models and communicate in ways that promote collaboration, equity, and understanding.
Cognitive and Social Dynamics: Using temporal, linguistic, and behavioral modeling, I analyze how reasoning and shared understanding unfold in human and human–AI teams. These studies reveal how conversational structure, turn-taking, and semantic alignment influence collective cognition and decision quality.
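One concrete way to operationalize semantic alignment, sketched below under simplifying assumptions, is to score each turn in a transcript against the turn before it. The TF-IDF representation here is a lightweight stand-in for the richer embeddings such analyses typically rely on.

```python
# Minimal sketch: turn-to-turn semantic alignment in a team transcript.
# TF-IDF cosine similarity is used as a simple stand-in for learned embeddings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def turn_alignment(turns: list[str]) -> list[float]:
    """Cosine similarity between each turn and the one immediately before it."""
    vectors = TfidfVectorizer().fit_transform(turns)
    return [
        float(cosine_similarity(vectors[i - 1], vectors[i])[0, 0])
        for i in range(1, len(turns))
    ]

transcript = [
    "Let's list the constraints before proposing a design.",
    "Agreed, the main constraints are budget and latency.",
    "Latency under 100 ms rules out the batched approach.",
]
print(turn_alignment(transcript))  # higher values = tighter semantic alignment
```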
Human–Computer Interaction for AI Systems: I design and evaluate interfaces that make AI collaboration visible and interpretable, such as dashboards for interaction memory, real-time reflection prompts, and adaptive feedback loops. These tools aim to make AI teammates more transparent, explainable, and contextually grounded.
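To make "interaction memory" concrete, the sketch below shows the kind of record such a dashboard might surface; the fields are illustrative assumptions, not the platform's actual data model.

```python
# Hypothetical sketch of an interaction-memory entry a dashboard could display.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class MemoryEntry:
    timestamp: datetime
    speaker: str                  # human participant or AI teammate
    utterance: str
    referenced_turns: list[int] = field(default_factory=list)  # links to earlier turns
    tags: list[str] = field(default_factory=list)              # e.g., "decision", "open question"

# A dashboard might filter entries tagged "decision" to show how agreements formed.
entry = MemoryEntry(datetime.now(), "AI-teammate",
                    "Shall we revisit the budget constraint?", tags=["open question"])
```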
Model Welfare and Reflective AI: My current work extends alignment to the concept of model welfare—how AI systems reason about their own limitations, uncertainty, and ethical alignment. I explore how reflective models can monitor their internal state, manage ambiguity, and maintain value consistency during long-term collaboration.
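A toy example of this kind of reflective monitoring is sketched below: the agent gates its own answer on an estimated confidence and surfaces its uncertainty explicitly when that estimate is low. The confidence signal itself is assumed rather than specified here.

```python
# Hypothetical sketch of a reflective check: below a confidence threshold,
# the agent states its uncertainty instead of answering flatly.
def respond_reflectively(question: str, draft_answer: str, confidence: float,
                         threshold: float = 0.6) -> str:
    """Return the draft answer, or an explicit statement of uncertainty."""
    if confidence >= threshold:
        return draft_answer
    return (f"I'm not confident about this (confidence={confidence:.2f}). "
            f"Here is my tentative view, but we should verify it together: {draft_answer}")

print(respond_reflectively("What deadline did we agree on?", "Friday", confidence=0.4))
```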
I view alignment as an ongoing dialogue between human and machine—a process of mutual adaptation rather than one-way control. In my view, an aligned AI does not simply follow instructions; it interprets them contextually, adjusting tone, initiative, and reasoning to fit human intent and team dynamics. By tuning an AI’s persona, behavior, and proactivity, I model how humans naturally adapt to one another in communication. This makes AI teammates capable of shared mental modeling, reflective reasoning, and ethical self-awareness—core ingredients for trustworthy collaboration. Ultimately, my goal is to build AI systems that are not just efficient but attentive, interpretive, and reflective partners in human cognition—helping us think better together.
Selected Work
Minds and Machines Unite: Deciphering Team Social and Cognitive Dynamics with AI. LAK’24
The AI Collaborator: Bridging Human–AI Interaction in Educational and Professional Settings. arXiv, 2024
Read the Room or Lead the Room: Understanding Socio-Cognitive Dynamics in Human-AI Teaming. arXiv, 2025
Balancing the Scales: Using GPT-4 for Robust Data Augmentation. LAK’25
Multi-Agent Framework for Math Problem Generation. EDM’25