Name
Artificial Social Intelligence and Theory of Mind in Interdisciplinary Human-AI Teams
Authors

Jessica Williams, University of Central Florida
Rhyse Bendell, University of Central Florida
Stephen M. Fiore, University of Central Florida

Date
Saturday, May 9, 2026
Time
8:45 AM - 9:00 AM (PDT)
Presentation Category
Team Science and Industry
Description

Artificial intelligence is increasingly transitioning from a passive analytical tool to an active collaborator in scientific work, raising fundamental questions about this technology’s potential in the Science of Team Science. We conceptualize AI not merely as decision support, but as a scientific teammate embedded within human collectives and contributing to sensemaking, hypothesis generation, and coordination. Within interdisciplinary scientific teams, effective collaboration depends on reciprocal modeling capabilities: humans rely on Theory of Mind (ToM) to infer the intentions, beliefs, knowledge, competencies, and limitations of others, including AI systems; conversely, AI must approximate an Artificial Theory of Mind (AToM) to interpret human intentions, beliefs, understanding, and contextual cues and to adapt its contributions accordingly. In addition to AI functioning as an independent teammate not tied to any specific team member, we consider the place of human–AI centaur dyads in interdisciplinary teams, in which humans and AI systems form tightly coupled, interdependent units that may outperform either alone.

We argue that advances in artificial social intelligence, particularly in modeling AToM as well as individual and team states, are central to developing AI systems that function as credible scientific teammates. Rather than treating the Macrocognition in Teams Model (MITM) simply as a framework to be extended to AI, we use it to specify the macrocognitive conditions under which AI can participate meaningfully in scientific teamwork. From this perspective, AToM is not merely an added capability, but a key mechanism through which AI may enter team-level processes such as knowledge construction, coordination, negotiation, sensemaking, and adaptation in ways that are contingent on the goals, expertise, and evolving cognitive states of human teammates. This is especially important in interdisciplinary science teams, where effective collaboration depends not only on pooling diverse knowledge, but also on recognizing differences in assumptions, representations, priorities, and epistemic standards across domains.

In this context, AI may function as more than an analytical assistant. It may act as a socially situated teammate that helps identify gaps in shared understanding, translate across disciplinary perspectives, anticipate coordination breakdowns, surface overlooked dependencies, and adapt its support to the informational and cognitive needs of the team. At the same time, positioning AI in this role introduces new challenges concerning transparency, authority, responsibility, and trust calibration, particularly when the AI’s inferences about human states or team needs are incomplete, inaccurate, or difficult to interpret. Drawing together perspectives from cognitive science, team psychology, computer science, and human–AI interaction research, we argue that AI scientific teammates should be studied not only in terms of performance support, but also in terms of how they model, influence, and participate in team cognition. By linking AToM to MITM, this paper advances a principled account of how hybrid human–AI teams can be designed, measured, and optimized to support collaborative science.

Abstract Keywords
Human-AI Teams, Artificial Social Intelligence, Artificial Theory of Mind