Name
Paper Session: Impact
Date
Wednesday, July 26, 2023
Time
9:00 AM - 10:30 AM (EDT)
Description

Presentation 1 
20 - Cross-cutting themes and opportunities for global collaborative research and networks

Presented by: Jane Payumo, Michigan State University
Authored by: David DeYoung, Michigan State University
Joseph Huesing, Michigan State University; Purdue University
Jane Payumo, Michigan State University


In development work, cross-cutting themes are issue areas, such as gender inclusion, that the project sponsor identifies as important and that affect not only the specific project but also the sponsor's broader development goals. These themes are seen as key to attaining broader development impact across programs by acting as a system-wide ‘force multiplier’ that extends the reach of any single investment. The U.S. Agency for International Development (USAID), an independent agency of the U.S. federal government, is responsible for administering global aid and development assistance. USAID uses cross-cutting themes across all its programming to create a greater and more lasting impact on development. Experience has shown, however, that despite the commonality of cross-cutting themes across programs and projects, the level of successful implementation is less than ideal. First, the idea of multiple goals loosely tied to a project's central focus is antithetical to good project management practice. Second, because each project necessarily has its own interpretations of and approaches to implementing and measuring progress on cross-cutting goals, those goals become secondary for the project management team.

This paper will highlight the experience of the USAID Feed the Future Innovation Lab for Legume Systems Research (Legume Systems Innovation Lab for short) and its research partners in mainstreaming cross-cutting themes for global research collaborations and networks, and the opportunities and issues involved in doing so. The Legume Systems Innovation Lab, managed by Michigan State University, is one of the 21 Feed the Future Innovation Labs (https://www.feedthefuture.gov/feed-the-future-innovation-labs/), which draw on the expertise of U.S. universities and developing-country research institutions to design and implement solutions to a wide array of agricultural and food security issues. Specifically, we will highlight the Legume Systems Innovation Lab's experience in integrating overarching concepts and topics, including gender, capacity development, youth, nutrition, and resilience, across its projects. We will also present the Lab's approach to addressing knowledge and implementation gaps and offer insight into pivotal elements for designing, implementing, monitoring, and evaluating cross-cutting themes. Finally, we will present the results of a qualitative study using document reviews, surveys, and group interviews that highlights where cross-cutting themes need to reside in a program's theory of change, what formative processes (inputs and outputs) are required, and the emerging lessons and insights for integrating cross-cutting themes into collaborative research and networks in an international development context. While qualitative studies have limitations, this study offers a new, shared understanding of managing cross-cutting themes that can serve as a basis for other organizations and development practitioners working in similar environments.

Presentation 2
53 - Team Satisfaction and Network Integration: A Case Study of Two NSF Funded Science Teams with Varying Levels of Success

Presented by: Anneloes Mook, Colorado State University
Authored by: Jennifer Cross, Colorado State University
Verena Knerich, Colorado State University
Anneloes Mook, Colorado State University


As the world becomes more complex, interdisciplinary team-based research is required to understand wicked problems such as climate change. Yet despite the extensive focus on the scientific competencies essential to undertaking these ambitious research projects, educational institutions overlook the importance of collaboration skills. In this paper, we use social network, quantitative, and qualitative data to evaluate the team satisfaction, network integration, and productivity of two National Science Foundation funded teams over a two-year period. One team performed well and continued to grow and integrate, while also reporting high levels of satisfaction and productivity. The other team, however, remained similar in size and level of network integration. Although the leaders of this team, who are situated in the core of the network, reported high levels of satisfaction, the team members in the periphery were dissatisfied, and overall the team was less productive than the first. As both teams were led by established scientists, the difference in team satisfaction, network integration, and productivity can be explained by varying levels of collaboration skills. We argue that effective team science training, as well as collaborative leadership structures, explains the trajectories of the two teams. Given the importance of collaboration skills in scientific activities, more team science training is necessary to prepare the next generation of scholars for interdisciplinary careers.
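
The abstract does not specify how network integration was operationalized. As a purely illustrative sketch (not the authors' method), one common way to summarize integration and to separate core from peripheral members in a collaboration network is via density and k-core membership, for example with Python's networkx:

```python
# Hypothetical sketch: one way "network integration" might be quantified for a
# science team from its collaboration ties. Density and k-core membership are
# common proxies; the abstract does not state the authors' exact measures.
import networkx as nx

def integration_summary(edges):
    """Summarize integration of a team's collaboration network.

    edges: iterable of (member_a, member_b) collaboration ties.
    Returns overall density plus the core and periphery member sets.
    """
    g = nx.Graph()
    g.add_edges_from(edges)
    density = nx.density(g)              # 0 = no ties, 1 = fully connected
    core_numbers = nx.core_number(g)     # k-core index per member
    k_max = max(core_numbers.values())
    core = {n for n, k in core_numbers.items() if k == k_max}
    periphery = set(g.nodes) - core
    return {"density": density, "core": core, "periphery": periphery}

# Toy example: leaders A-C collaborate with each other; D and E connect to one leader each.
team = [("A", "B"), ("B", "C"), ("A", "C"), ("A", "D"), ("B", "E")]
print(integration_summary(team))
```

In this toy example the densely connected leaders form the core while members tied to only one leader fall in the periphery, mirroring the core/periphery pattern the abstract describes.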

Presentation 3
60 - Using Matched Controls to Measure the Impact of Small Research Pilot Grants

Presented by: Griffin Weber, Harvard Medical School
Authored by: Griffin Weber, Harvard Medical School


Introduction
Each year, the NIH Clinical and Translational Science Award (CTSA) program spends millions of dollars to fund hundreds of small pilot grant projects at academic medical centers across the country. The NIH tracks how many projects were funded and the percent of projects with at least one publication. However, the impact of the pilot grant program on Team Science is unknown. It is unclear whether the teams that assembled for pilot grant projects, and the publications they wrote, would have happened anyway without funding. Furthermore, it is critical to realize that CTSA pilot grant funding might have had an effect on the thousands of investigators who came together to collaborate on proposals that were not funded.

Methods
To better understand the impact of pilot grant funding, we developed a novel method called Random Teams, which matches both funded and unfunded teams to appropriate controls in order to separate the effects of the pilot grant program from baseline scientific collaboration and productivity. The controls are thousands of “virtual teams” consisting of randomly selected investigators who were eligible to collaborate on a proposal but did not. These virtual teams were matched to actual teams in various ways. Characteristics of the virtual collaborations provide null distributions that can be used to statistically test hypotheses about pilot grant funding.
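
As a hedged illustration of the Random Teams idea described above (the function names and the size-only matching are assumptions, not the authors' implementation), the following sketch samples virtual teams of eligible investigators, builds a null distribution of an outcome, and compares an actual team against it:

```python
# Illustrative sketch of the Random Teams approach: assemble random "virtual
# teams" of eligible investigators (matched here only on team size, for
# simplicity), collect their outcomes as a null distribution, and compute an
# empirical p-value for an actual team. Names are hypothetical.
import random

def null_distribution(eligible_ids, team_size, outcome_fn, n_samples=10_000, seed=0):
    """Outcome values for randomly assembled virtual teams of a given size."""
    rng = random.Random(seed)
    return [outcome_fn(rng.sample(eligible_ids, team_size)) for _ in range(n_samples)]

def empirical_p_value(actual_outcome, null_outcomes):
    """Fraction of virtual teams whose outcome is at least as large as the actual team's."""
    at_least = sum(1 for x in null_outcomes if x >= actual_outcome)
    return (at_least + 1) / (len(null_outcomes) + 1)   # add-one to avoid p = 0

# Usage (with a made-up outcome function counting joint publications):
# null = null_distribution(eligible_faculty_ids, team_size=4,
#                          outcome_fn=count_joint_publications)
# p = empirical_p_value(count_joint_publications(actual_team), null)
```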

Results
We applied our Random Teams approach retrospectively to one year of CTSA pilot grants at Harvard University, with a five-year follow-up window. Out of 37,266 faculty who were eligible to apply, a total of 1,469 investigators assembled into 458 teams that submitted proposals, of which 65 were funded. We found: (1) Teams that applied for funding were more open to new collaborations. On average, 50% of the authors on publications at Harvard were prior co-authors on other publications, while only 20% of the investigators on pilot grant teams were prior co-authors. (2) Funded teams published, on average, 2.6 articles citing CTSA funding. However, randomly matched teams published 0.9 articles citing the CTSA program for other reasons, such as using its consultation services. Thus, we need to consider this baseline when estimating the added publication impact of pilot grant funding. (3) After five years, funded pilot grant teams, on average, created 2.5 new lasting collaborations (people who co-authored papers together for the first time after receiving funding). However, on average, 1.7 people from the unfunded teams also later published together for the first time, perhaps through other funding mechanisms, a previously unrecognized Team Science benefit of the program. Surprisingly, since there were many more unfunded than funded teams, most of the total new collaborations came from the unfunded teams.
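
One way to read the baseline comparison in finding (2), offered here only as an illustrative calculation rather than the paper's stated analysis, is to subtract the matched-control rate from the funded-team rate:

```python
# Hedged illustration (my reading, not the paper's stated calculation) of the
# baseline adjustment described in the Results: subtract the matched-control
# rate of CTSA-citing articles from the funded-team rate.
funded_articles_per_team = 2.6   # articles citing CTSA funding, funded teams
matched_control_articles = 0.9   # articles citing CTSA for other reasons, matched virtual teams
added_publication_impact = funded_articles_per_team - matched_control_articles
print(f"Estimated added articles per funded team: {added_publication_impact:.1f}")  # 1.7
```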

Discussion
Having comparison teams is critical to understanding the impact of pilot grant funding on both scientific productivity and collaboration. A limitation of this study is that it was performed at a single institution and through one funding mechanism. However, our Random Teams method is generalizable and a useful tool for evaluating Team Science.

Presentation 4
62 - What Factors Facilitate Multidisciplinary Collaboration After NSF Awards?

Presented by: Chelsea Basore, Penn State University
Authored by: Chelsea Basore, Penn State University


Objectives 
Granting agencies such as the National Science Foundation (NSF) often require multidisciplinary research teams to be competitive for funding. However, to what extent do funded research teams have multidisciplinary authorships after they win an award, given that awards are not contracts? Would, for example, two computer scientists choose to continue working with a psychologist after winning an NSF award? Furthermore, what factors facilitate post-award cross-disciplinary collaboration?

This study examined the extent to which NSF-funded multidisciplinary teams collaborated with team members from different disciplines following their NSF award and the factors that facilitated continued collaboration.

Archival and Interview Research Methods
Our sample was 150 PIs and Co-PIs (67% male) from 58 NSF-funded projects that received EAGER (EArly-concept Grants for Exploratory Research) grants between 2013 and 2019. Using publicly available data, we collected the number of conference papers, publications, and grants the PIs and Co-PIs produced with each other. We also catalogued whether PI and Co-PI collaborations after the NSF award were unidisciplinary or multidisciplinary. Multidisciplinary collaboration consisted of multidivisional (Ph.D. disciplines across NSF divisions) and/or multidirectorate (Ph.D. disciplines across NSF directorates) authorship. We also conducted 30-minute to 1-hour interviews with 33 PIs and Co-PIs in our sample, which were transcribed and coded.
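
As an illustration of the coding scheme described above (the discipline-to-division mapping and function names are hypothetical, not the study's instruments), an output's authorship could be labeled as follows:

```python
# Hypothetical coding sketch: map each author's Ph.D. discipline to an NSF
# directorate and division, then label the collaboration. The tiny mapping
# below is illustrative only.
DISCIPLINE_MAP = {
    "computer science":     ("CISE", "IIS"),
    "sociology":            ("SBE",  "SES"),
    "cognitive psychology": ("SBE",  "BCS"),
    "political science":    ("SBE",  "SES"),
}

def code_collaboration(author_disciplines):
    """Return 'multidirectorate', 'multidivisional', or 'unidisciplinary'."""
    directorates = {DISCIPLINE_MAP[d][0] for d in author_disciplines}
    divisions    = {DISCIPLINE_MAP[d]    for d in author_disciplines}
    if len(directorates) > 1:        # disciplines span NSF directorates
        return "multidirectorate"
    if len(divisions) > 1:           # same directorate, different divisions
        return "multidivisional"
    return "unidisciplinary"

print(code_collaboration(["sociology", "computer science"]))               # multidirectorate
print(code_collaboration(["political science", "cognitive psychology"]))   # multidivisional
```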

Results
Of the 58 original EAGER PI and Co-PI teams, 74% continued to collaborate after the EAGER grant. Of the 74%, 87% were multidisciplinary, 79% of which were multidirectorate (e.g., sociology and computer science) and 8% were multidivisional (e.g., political science and cognitive psychology). Of the 327 collaboration outputs (conferences, journals, grants) produced after EAGER awards in our sample, 82% had multidisciplinary authorship, 67% of which were multidirectorate. 
To address why some PI/Co-PI teams collaborated post-EAGER and others did not, we analyzed pre-EAGER award collaboration, gender diversity, and PI/Co-PI university distribution (single versus multiple universities) as possible contributing factors, drawing on the archival and interview data. Although gender diversity and university distribution did not exhibit clear patterns, pre-EAGER award collaboration was predictive of authorship after the grant was won. Of the teams who had collaborated prior to their EAGER award, 92% continued to collaborate afterward; among post-EAGER award collaboration teams, 47% had authored together pre-EAGER.

Conclusion
Of the approximately 74% of PI and Co-PI teams who continued to collaborate after the EAGER grant, almost 9 out of 10 chose to work with a collaborator from a different discipline. NSF EAGER grants serve as a catalyst for continued multidisciplinary (especially multidirectorate) collaboration. Pre-EAGER award collaboration facilitated post-EAGER award collaboration.

Presentation 5
63 - What Knowledge Must Cross-Disciplinary Team Members Share for Effective Collaboration? A Mixed Methods Study

Presented by: Susan Mohammed, Penn State University
Authored by: Chelsea Basore, Penn State University
Susan Mohammed, Penn State University


Objective
Tackling “grand challenges” demands broad representation across multiple disciplines, but teams must also achieve substantial integration to make the best use of diverse expertise. Efficiently harnessing expertise in teams requires both knowledge divergence and convergence. What we do not know, however, is what knowledge should be shared and what knowledge should remain unique to individual members for interdisciplinary teams to be effective. Addressing these questions is crucial because building shared knowledge in teams imposes high coordination and communication costs, which can tax members' limited attentional and temporal resources. Therefore, converging when divergence is needed, or diverging when convergence is needed, mishandles team resources, impairing team processes and performance. However, the team cognition and interdisciplinary team literatures provide few answers and little guidance regarding the complex process of knowledge exchange in interdisciplinary collaborations or how knowledge convergence and divergence should be balanced.

To address these deficiencies in a foundational issue for both theory and practice, the objective of this research was to explore the effects of simultaneously held knowledge divergence and convergence in cross-disciplinary teams. Our research questions include: 1) What knowledge should be shared versus uniquely held by interdisciplinary team members? 2) How does the extent to which interdisciplinary team members report a more unique or more shared understanding of relevant team knowledge affect a) perceived collaboration outcomes and b) archivally measured productivity outcomes?

Mixed-Method Approach 
We first conducted 33 semi-structured interviews with PIs and co-PIs funded by the National Science Foundation, who reflected on what knowledge should be shared and what should be unique among interdisciplinary team members. Based on the coding of the interviews (using NVivo), we designed and administered a quantitative survey completed by 38 grant-funded PIs and co-PIs. We then examined the relationship between survey data and archival measures of research productivity (number of PI/co-PI grants, publications, and conference presentations).
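
The abstract does not state how shared versus unique knowledge was scored. As a minimal sketch under that caveat, one simple convergence index is the mean pairwise similarity of members' ratings on the same survey items:

```python
# Minimal sketch (an assumption, not the authors' measure) of a within-team
# knowledge-convergence score: mean pairwise similarity of members' ratings
# on the same Likert items, where 1.0 means identical responses.
from itertools import combinations

def convergence_score(member_ratings, scale_max=5):
    """member_ratings: list of per-member rating lists on the same items."""
    pair_scores = []
    for a, b in combinations(member_ratings, 2):
        # similarity = 1 - normalized mean absolute difference across items
        diffs = [abs(x - y) / (scale_max - 1) for x, y in zip(a, b)]
        pair_scores.append(1 - sum(diffs) / len(diffs))
    return sum(pair_scores) / len(pair_scores)

# Example: three members rating four items on a 1-5 scale
print(convergence_score([[5, 4, 4, 3], [5, 4, 3, 3], [4, 4, 4, 2]]))  # 0.875
```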

Results
Results revealed a consistent pattern across the interview, survey, and archival methodologies. Interview and survey data were skewed toward knowledge convergence rather than divergence. Participants especially favored knowledge similarity over uniqueness for what they planned to accomplish, what they produced, who did what, and when work should be done. However, research outcomes (where to publish) and research content (theory, methodology, analysis) showed less convergence than vision or teamwork items. Moreover, greater actual knowledge convergence was associated with higher collaboration satisfaction, trust, respect, and research impact, and trended in the direction of more grants, publications, and conference presentations.