Savitha Sangameswaran, Sage Bionetworks
Amber Nelson, Sage Bionetworks
Ashley Clayton, Sage Bionetworks
Orion Banks, Sage Bionetworks
Angie Bowen, Sage Bionetworks
Aditi Gopalan, Sage Bionetworks
Jineta Banerjee, Sage Bionetworks
Susheel Varma, Sage Bionetworks
Coordinating centers play a crucial role in fostering collaboration and resource-sharing across large-scale research initiatives. These centers help integrate expertise across disciplines, facilitate data-sharing, and build sustainable research infrastructures (Trochim et al., 2008; Trentham-Dietz et al., 2022). Funded by the NIH/NCI Division of Cancer Biology (DCB), the Multi-Consortia Coordinating (MC2) Center initiative serves as a central hub for six DCB-funded cancer research consortia, aiming to enhance collaboration and create an impactful, diverse scientific ecosystem. However, evaluating the effectiveness of such complex, interdisciplinary efforts presents significant challenges, as traditional evaluation approaches often fail to capture the emergent and dynamic nature of team science collaborations (Hall et al., 2018; Rolland et al., 2021). This abstract outlines the development and implementation of the MC2 Center evaluation framework, designed to assess collaboration dynamics, knowledge integration, and broader scientific contributions.
The evaluation framework was developed using a human-centered, participatory approach, incorporating input from stakeholders across the MC2 Center-supported consortia and funding partners. Key evaluation categories were identified through iterative discussions, literature reviews, and alignment with established team science evaluation principles. The framework encompasses both qualitative and quantitative indicators, enabling a mixed-methods assessment of the effectiveness of the MC2 Center's collaborative structures and scientific outcomes.
Implementation of the framework involved multiple stages, including the identification of relevant data sources, the development of data collection protocols, and integration with existing project workflows. Structured interviews, surveys, and bibliometric analyses were selected as the primary data collection methods to ensure a comprehensive assessment. The framework also emphasizes adaptability, allowing for refinement based on ongoing feedback and emerging collaboration patterns.
Preliminary applications of the framework suggest that structured evaluation can help identify strengths and gaps in coordination efforts (e.g., the need for targeted strategies to enhance involvement in collaborative activities), highlight successful knowledge integration strategies (e.g., the establishment of interdisciplinary working groups to facilitate cross-domain learning), and provide actionable insights for improving team science initiatives (e.g., enhanced communication pathways). Future iterations of the framework will incorporate network analysis techniques and longitudinal assessments to track collaboration trends over time.
By integrating complex systems and Science of Team Science (SciTS)-based evaluation approaches, this framework provides a more holistic perspective on research collaboration, offering valuable insights for evaluating similar collaborative initiatives in the future. The findings from this work will contribute to the growing body of knowledge on team science evaluation and serve as a model for assessing other large-scale, interdisciplinary research efforts.