Biljana Birac, University of Georgia
Karen DeMeester, University of Georgia
Erik Thompson, University of Georgia
The science of team science is an evolving field as scholars continue to study how different types of interactions, processes, and organizational structures influence and moderate team success. Methods for evaluating that success must also evolve to better capture the complex synergies within and across teams that underlie success at the organizational level, where the whole is more than the sum of its individual parts or teams. In the proposed presentation, we will describe methods used to design and implement a utilization-focused, developmental evaluation (Patton, 2008) of the Center for Chemical Currencies of a Microbial Planet (C-CoMP), including (a) the use of logic models to facilitate co-development of center-level outcomes and indicators, (b) the strategic use of mixed-methods approaches, and (c) ongoing collaboration with the C-CoMP leadership team through bi-monthly meetings to share evaluation findings in time to inform adaptive management and to enhance evaluation usability as the Center evolves and matures.
Hall et al. (2018) found that the success of team science was measured mainly with pre-existing data (e.g., archival data) and bibliometrics, and that evaluations often relied on single methods that limited understanding of the developmental characteristics of science teams and hampered timely assessment of team success. Scholars recommend more sophisticated and varied evaluation methods for assessing complex science teams (Börner et al., 2010), and while some have proposed frameworks and indicators for evaluating teams (e.g., Hall et al., 2012; Marchand & Hilpert, 2019), evidence remains scarce on how best to evaluate team science outcomes at the organizational level and how to provide science-center leadership with data to optimize decision-making. Team science provides a foundation for large, collaborative science centers like C-CoMP to pursue their research goals, but a comprehensive evaluation must also assess progress toward and achievement of the Center's other overarching goals, including knowledge transfer, data sharing/open science, education and career development, and increased inclusivity and equity.
To foster evaluation usability, the C-CoMP evaluation design, guided by the principles of utilization-focused and developmental evaluation, combines outcome and process evaluations and quantitative and qualitative methods. Progress toward and achievement of outcomes are measured primarily through C-CoMP members' annual activity report data (e.g., publications and presentations, internal and external collaborations, data shared through open-science platforms, and education and career advancement). Formative, process data come from annual focus groups with members at different career stages (principal investigators (PIs)/faculty, postdocs and graduate students, and technical staff), evaluator observation of monthly All-Center and PI meetings, and an annual climate survey assessing members' perceptions of the Center's culture and its promotion of team science principles, inclusivity, innovation, and productivity. Data collection and reporting are timed to critical decision-making and development points: before the Center's annual NSF site visit, the Advisory Committee meeting, and the C-CoMP members' meeting.
We will also discuss challenges related to evaluating team science and its effect on the overall achievement of C-CoMP's goals, understanding how teams evolve as the Center matures, and maximizing the benefits of evaluation within budget constraints.