Anirban Mukhopadhyay, Virginia Tech
As AI agents become embedded within collaborative settings and problem-solving teams, they are increasingly positioned as teammates rather than tools (Wang et al., 2025). However, most approaches to human-AI collaboration prioritize task performance while overlooking how AI reshapes teamwork processes. This gap is critical in the context of team science, where outcomes depend on mediating processes such as communication, coordination, shared mental models, and trust (Kerr & Tindale, 2004). We argue that AI does not simply assist teams but actively mediates teamwork. This shift requires rethinking team evaluation to account for how AI alters the processes through which interdisciplinary teams function, rather than focusing solely on outputs, and positions AI as a central factor in the evolving science of collaboration.
Grounded in organizational and team science theory, we adopt the Input–Mediator–Output–Input (IMOI) framework to examine how AI influences team effectiveness through mediators rather than inputs or outcomes alone (Ilgen et al., 2005). Mediators, including planning, structuring, adaptation, and transactive memory systems, explain why teams succeed or fail. However, these processes remain fragile in hybrid human-AI settings. We extend this perspective by positioning AI as both a contributor to and a disruptor of these mediators, and by emphasizing the need for evaluation approaches that capture dynamic team processes over time. In doing so, we contribute to SciTS discussions by integrating human-AI collaboration into established team science frameworks and advancing a process-oriented lens for studying emerging forms of interdisciplinary teamwork.
To ground this argument empirically, we draw on our prior mixed-methods, within-subjects lab study of 4-member teams (n = 24) engaged in time-sensitive collaborative problem-solving with proactive AI agents embedded in their workflows (Mukhopadhyay et al., 2026). Teams interacted with two agent roles: a facilitator agent that periodically provided summaries and coordination cues, and a peer agent that contributed ideas and responded to queries. We collected data through surveys, task performance, and qualitative focus group interviews, enabling us to examine how agent interventions shaped performance, coordination, communication flow, workload distribution, and reliance patterns.
We found that AI agents simultaneously enhanced and disrupted team processes. Peer agents improved problem-solving by offering timely hints and supporting memory offloading, but also increased cognitive load, fragmented communication, and fostered over-reliance. Facilitator agents supported early coordination and shared focus but were often marginalized when their contributions were poorly timed or redundant. Across conditions, breakdowns could be explained by disruptions in mediators such as shared mental models, trust calibration, and conversational flow. Teams exhibited divergent trajectories, including shifts from curiosity to dependence and from engagement to disengagement, underscoring the dynamic and emergent nature of AI-mediated teamwork.
These findings have direct implications for team evaluation research. We argue for a process-centered approach to human-AI team science that treats mediators as key units of analysis and evaluation. We also highlight the need to design AI systems that are process-aware and sensitive to timing, team state, and interaction dynamics. Future work should develop evaluation frameworks, training approaches, and design methods that enable more resilient and effective human-AI teams in interdisciplinary collaboration.