Poster Sessions - ToM4AI Workshop at AAAI 2026
📊 Poster Sessions
Total Posters: 55
Session I (Set A): 24
Session II (Set B): 31
📊 Poster Session I - Set A: Human & Cognitive Aspects
10:05 - 10:30
| # | Title | Research Group |
|---|---|---|
| #4 | NAIL: A Neuropsychological Approach to Interpretability in Large Language Agents: Applications to Theory of Mind | Group 4: Cognitive Architectures & Multimodal ToM |
| #5 | The AI Tipping Point: How Design and Repeated Use Shape Beliefs about Machine Minds | Group 1: Human-AI Interaction & Trust |
| #6 | The Resonance Corpus: A Large-Scale Chinese Parent–Child Conversation Dataset | Group 1: Human-AI Interaction & Trust |
| #8 | RToMA: Recursive Theory of Mind Alignment for Large Language Models | Group 4: Cognitive Architectures & Multimodal ToM |
| #11 | Toward Theory of Mind: BERT Learns and Uses Emotion Geometry in Two Phases | Group 1: Human-AI Interaction & Trust |
| #14 | Temporal Localization Improves Video Theory of Mind in Multimodal LLMs | Group 4: Cognitive Architectures & Multimodal ToM |
| #15 | "Tell Me Something About Yourself": Setting Appropriate Perceptions and Expectations on AI Systems | Group 1: Human-AI Interaction & Trust |
| #17 | How Social Environments Shape Brains: Modelling Developmental Adversity using Neural Networks | Group 1: Human-AI Interaction & Trust |
| #21 | Aesthetic Theory of Mind: Using Artistic Conception Computation as a Litmus Test for Machine ToM | Group 4: Cognitive Architectures & Multimodal ToM |
| #22 | Morals and Reasoning: Formalizing Moral Influence on Reasoning and AI Systems Alignment | Group 1: Human-AI Interaction & Trust |
| #27 | Do Language Models Understand Social Minds? A ToM-based Probe Through Norm Detection | Group 1: Human-AI Interaction & Trust |
| #28 | AI Alignment Demands Better Emotion Recognition and Social Understanding Capabilities | Group 4: Cognitive Architectures & Multimodal ToM |
| #29 | Artificial Theory of Mind in Human-in-the-Loop | Group 1: Human-AI Interaction & Trust |
| #32 | Theory of Mind for Explainable Human-Robot Interaction | Group 1: Human-AI Interaction & Trust |
| #35 | Visual Theory of Mind through LLM-based Semantic Extraction | Group 4: Cognitive Architectures & Multimodal ToM |
| #36 | Theory of Mind through Partially Ordered Plans | Group 4: Cognitive Architectures & Multimodal ToM |
| #38 | Inside Deception: How to Exploit a Target | Group 1: Human-AI Interaction & Trust |
| #39 | Learning User Boredom Thresholds for a Conversational Robot | Group 1: Human-AI Interaction & Trust |
| #40 | Language-Informed Synthesis of Rational Agent Models for Grounded Theory-of-Mind Reasoning On-The-Fly | Group 4: Cognitive Architectures & Multimodal ToM |
| #47 | Sign-Based World Model as a Basis of Cognitive Modeling: Imitation in Human-Robot Interaction | Group 4: Cognitive Architectures & Multimodal ToM |
| #48 | Beyond VAGUE: Attention Analysis for Probing How VLMs Ground Ambiguity | Group 4: Cognitive Architectures & Multimodal ToM |
| #56 | Connectome-Based Alignment between Brain and Large Language Models via Gromov-Wasserstein Barycenters | Group 4: Cognitive Architectures & Multimodal ToM |
| #57 | Explanation-first Explainable AI | Group 1: Human-AI Interaction & Trust |
| #60 | Theoretical Framework for a Quantum Brain Model | Group 4: Cognitive Architectures & Multimodal ToM |
📊 Poster Session II - Set B: Computational Agents & Systems
12:00 - 12:30
| # | Title | Research Group |
|---|---|---|
| #1 | A Practical Sufficient Test of Consciousness for Language Models | Group 2: LLM Theory of Mind & Evaluation |
| #2 | How Uninformed Reports Impact Trust: A Formal Model Involving Implicit Intention | Group 3: Multi-Agent Systems & Game Theory |
| #7 | HiVAE: Hierarchical Latent Variables for Scalable Theory of Mind | Group 2: LLM Theory of Mind & Evaluation |
| #9 | Exploration Through Introspection: A Self-Aware Reward Model | Group 2: LLM Theory of Mind & Evaluation |
| #12 | Latent Theory of Mind in World Models for Multi-Agent Reinforcement Learning | Group 3: Multi-Agent Systems & Game Theory |
| #13 | Predicting Emergent Capabilities Using Sparse Features | Group 2: LLM Theory of Mind & Evaluation |
| #16 | On the Evolution of Multi-Agent Communication in Non-Cooperative Games | Group 3: Multi-Agent Systems & Game Theory |
| #18 | A Computable Game-Theoretic Framework for Multi-Agent Theory of Mind | Group 3: Multi-Agent Systems & Game Theory |
| #19 | On the Interplay of Training Population Diversity, Theory of Mind, and Zero-Shot Coordination | Group 3: Multi-Agent Systems & Game Theory |
| #23 | Four Decision-Heads are Better Than One: Augmenting Decision Making with Collective Cognition in Small Neural Networks | Group 2: LLM Theory of Mind & Evaluation |
| #24 | From Theory of Mind to Theory of Environment: Counterfactual Simulation of Latent Environmental Dynamics | Group 3: Multi-Agent Systems & Game Theory |
| #26 | Routing Belief States: A Meta-Cognitive Architecture for Theory of Mind in Language Models | Group 2: LLM Theory of Mind & Evaluation |
| #30 | Geometric Belief Spaces: A Topological Framework for Scalable Multi-Agent Theory of Mind | Group 3: Multi-Agent Systems & Game Theory |
| #31 | Who Knows Who Knows? A Step Toward Common Knowledge in Multi-Agent Systems | Group 3: Multi-Agent Systems & Game Theory |
| #33 | SUITE: Scaling Up Individualized Theory-of-Mind Evaluation in Large Language Models | Group 2: LLM Theory of Mind & Evaluation |
| #34 | Theory of Mind and Optimistic Beliefs Emerge in a Sequential Dilemma with Incremental Rewards | Group 3: Multi-Agent Systems & Game Theory |
| #37 | Decomposing Theory of Mind: How Emotional Processing Mediates ToM Abilities in LLMs | Group 2: LLM Theory of Mind & Evaluation |
| #41 | Complementarity of Developmental Motivation and Learned Intrinsic Rewards in Multi-Agent Reinforcement Learning | Group 3: Multi-Agent Systems & Game Theory |
| #42 | Recursive Bayesian Theory of Mind for Sparse-Observation Multi-Agent Gridworlds | Group 3: Multi-Agent Systems & Game Theory |
| #43 | Reasoning About Bias: Theory of Mind for Trustworthy Knowledge Distillation | Group 2: LLM Theory of Mind & Evaluation |
| #45 | Do LLMs Possess Theory of Mind in Pokémon Battle Paradigm | Group 2: LLM Theory of Mind & Evaluation |
| #46 | Investigating the Effects of Translation Quality on LLM Performance in Machine-Translated Theory of Mind Benchmarks | Group 2: LLM Theory of Mind & Evaluation |
| #49 | Semantic Encoders Enable Robust Communication-Aware Reinforcement Learning Policies | Group 3: Multi-Agent Systems & Game Theory |
| #50 | Faithful Theory of Mind Distillation: Why Preference Based Refinement Improves Imitation | Group 2: LLM Theory of Mind & Evaluation |
| #51 | Introducing Dialogue-Act Framework for Multi-Agent LLM Negotiation | Group 3: Multi-Agent Systems & Game Theory |
| #52 | A Model-Based Approach for Recognizing Unknown Goal Combinations | Group 3: Multi-Agent Systems & Game Theory |
| #53 | The Curse of Knowledge in Language Models: Perfect Theory of Mind or Missing Human Biases? | Group 2: LLM Theory of Mind & Evaluation |
| #54 | A Multi-Game MARL Framework for Evaluating Social Reasoning | Group 3: Multi-Agent Systems & Game Theory |
| #55 | Correcting LLM Errors: A Metacognitive Architecture for ToM Adaptation in AI Agents | Group 2: LLM Theory of Mind & Evaluation |
| #58 | Belief-Desire-Intention Dynamics in Language Models via the p-Beauty Contest | Group 2: LLM Theory of Mind & Evaluation |
| #59 | A Mechanistic Investigation of Theory-of-Mind in a Large Language Model | Group 2: LLM Theory of Mind & Evaluation |
