A team, in the context of team science, typically means a group of two or more researchers collaborating to identify or define the empirical question to be explored and then working together toward a common goal. There are several ways these teams can form and function. They're often categorized as one of the following: unidisciplinary, multidisciplinary, interdisciplinary and transdisciplinary.
Let's use Wendy Austin's fruit metaphor to take a look at each of these kinds of team science. With unidisciplinary team science, researchers from a single discipline work together to address a common research problem. So let's say unidisciplinary team science is an orange.
In multidisciplinary team science, researchers in different disciplines work in a sequential, yet independent process in which each develops a discipline-specific perspective, with a goal of eventually combining efforts to address a common research problem. So multidisciplinary team science would be a fruit salad with oranges in it.
You can liken interdisciplinary team science to a fruit smoothie with oranges: the process is interactive, and researchers work jointly, each drawing from their own discipline-specific perspective to address a common research problem.
Transdisciplinary team science is an integrative process in which researchers work jointly to develop and use a shared conceptual framework that synthesizes and extends discipline-specific theories, concepts, methods, or all three to create new models and language to address a common research problem.
In this metaphor, transdisciplinary team science would be a Mexican-Asian fusion dish that includes the smoothie as part of the meal.
Did you get all of that?
Now, try categorizing each type on your own by dragging the terms on screen to their corresponding definitions.
In Structures of Scientific Collaboration, Shrum and colleagues examined bibliometric data and what it suggested about co-authorship patterns from 1981 to 1995. The data indicated an increase in co-authored papers over that period.
From the 1970s to the 1980s, the proportion of internationally co-authored papers doubled. International co-authorship then rose from 17 percent in 1981 to 29 percent in 1995 across all countries and fields.
Inter-sectoral collaboration has also grown: by 1995, about 25 percent of all papers published by academic authors involved co-authors in another sector, compared with 20 percent in 1981.
It is important to note that co-authorship data has a limit: it offers no insight into the internal dynamics of collaborations. The result is evidence only that collaboration occurred, divorced from its social organization and context.
It's like playing dominoes. When you line the dominoes up in a single row, they all fall down in a straight line. That's like sequential thinking: it follows one narrow path and displays just one perspective. But when you line the dominoes up in different patterns, they create many different shapes and formations. This is the way connective thinking works. The dominoes follow more than one path and make new connections with different sets of dominoes.
Connective thinking happens when teams brainstorm with members who come from different disciplines and perspectives. Dominoes are more fun when they don't just follow a straight line. They have more possibilities. That's how team thinking should be, too.
Teams that share leadership but bring different mindsets are best able to unlock innovation.
The greatest shift toward teamwork has occurred in Science and Engineering, but there has also been a strong shift toward teams in the Social Sciences. Although the Arts and Humanities and Patents show the weakest teamwork trend, they still display a positive shift toward teams.
What does all this mean for you?
Teamwork has become the norm in many different areas of study, not just one.
The prevalence of teams can't be seen as a simple passing trend, but should be viewed as the way of the future, which means that knowing how to work on a team is becoming more important than ever.
REFERENCE: Wuchty, S., Jones, B., & Uzzi, B. (2007). The increasing dominance of teams in production of knowledge. Science, 316(5827), 1036-1039. doi: 10.1126/science.1136099
In the past, a lone author was more likely than a team to publish a singularly influential paper.
But times have changed.
Now, team-authored papers have a much higher probability of being highly cited.
In fact, a team-authored paper in Science and Engineering is currently 6.3 times more likely than a solo-authored paper to receive at least 1000 citations.
The fields where single authors receive more citations than teams are almost non-existent in today's world of research. If you're looking to publish the most highly cited publications, a team may be your best bet.
You might be wondering what kind of results multi-university collaborations produce. Satisfactory? Outstanding? Just mediocre? Well, data shows that they often produce better results.
Today, papers co-authored across multiple universities are the fastest-growing authorship structure. According to a 2008 study by Jones, Wuchty, and Uzzi, when these papers include a top-tier university, they have the highest impact.
The research showed that, on average, a team of two authors from different universities will produce a higher-impact paper than a team of two authors from the same university.
As more and more universities start to work and publish papers together, they become interdependent and stratified by in-group university rank.
When and why is trust important in team collaborations?
In the book Structures of Scientific Collaboration, Shrum et al. examined how trust functions in collaborations.
Their analysis found that trust is inversely related to conflict, which is, in turn, positively associated with bureaucracy.
More Trust = Less Conflict = Less Bureaucracy.
Nevertheless, somewhat surprisingly, Shrum et al. did not find that high levels of trust always predicted collaboration success. Why?
Trust takes time to form, and teams often need to come together quickly despite lacking a long history of collaboration. In these cases, Shrum et al. argue that by creating formal structures for social practices (i.e., positive forms of bureaucracy), collaborations can minimize mutual dependencies and reduce the need for high levels of trust.
Despite the advantages of diverse resources and expertise, multi-university collaborations face higher coordination costs than single-university teams. These dispersed teams must pay close attention to several coordination factors that predict project success. Sources of high coordination costs include delayed, misinterpreted, or non-existent communication; geographic distance; institutional differences; and resource allocation. All of these factors can impede scientific discovery, team trust, and productivity, as well as significantly slow team consensus and the division of labor.
But how can highly dispersed teams overcome these coordination costs? Cummings and Kiesler (2007) examined 491 research collaborations and found several coordination factors that predict project outcomes.
Knowledge transfer was the most important predictor of project outcomes (it predicted all of them) and includes student exchanges, co-authorship, and presenting work to the project team.
Division of responsibility and labor involves dividing and assigning project tasks appropriately, delegating subgroup tasks, and getting faculty and post-docs to supervise these assignments.
Type of communication. A team's communication needs to take a form collaborators will trust. Whenever possible, it should happen in person, during meetings and spontaneous discussion, or through traveling to meet other members.
Synchronous communication is better than asynchronous. Use phone calls instead of email. Technology is an imperfect substitute for collocation.
Frequency of communication. The more frequent the communication, the better.
Shared resources and communication technology includes using a common website or intranet as a way to share information and communicate further.
These coordination factors have an impact on project success and help multi-university collaborations conquer high coordination costs.
The Collaboration Success Wizard is an online survey tool implemented by a research team at the University of California, Irvine, that is investigating the science of team science.
Researchers use the Wizard to collect data on collaborations and provide collaborators with an incentive to use the Wizard. Participating collaborators receive feedback in the form of strengths and limitations of their collaborations, based on their own Wizard input.
Additionally, the Wizard provides suggested solutions to improve the collaboration. There are three versions of the Wizard, one for each stage of collaboration. Collaborators can answer the survey for an initiative in the planning stage, an initiative already in action, or even completed initiatives.
The researchers operating the Wizard aim for the participation of as many team members as possible in a given collaboration, which results in a stronger collection of data.
The types of agents composing a team may be classified according to their experience. There are newcomers, who have little experience and unseasoned skills, and there are incumbents who are established persons with track records.
Therefore, there are four possible types of links within a team:
Newcomer-newcomer, newcomer-incumbent, incumbent-incumbent, and repeat incumbent-incumbent.
The distribution of links reflects the team's diversity. A 2005 study by Guimera, Uzzi, Spiro, and Amaral, on which this information is based, found that if teams have a preponderance of repeat incumbent-incumbent links, they are less likely to have innovative ideas because their shared experiences tend to homogenize their pool of knowledge.
When teams have a variety of links, they're more likely to have diverse perspectives and therefore more innovative solutions.
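To make the four link types concrete, here is a minimal illustrative sketch, not the study's actual method; the function names, data representation, and example team are assumptions for demonstration only.

```python
from itertools import combinations

# Hypothetical sketch: classify each pairwise link in a team by member
# experience, following the four categories described above.
# A member is a (name, is_incumbent) tuple; past_pairs is a set of
# frozensets naming pairs who have already collaborated.
def classify_link(a, b, past_pairs):
    (name_a, inc_a), (name_b, inc_b) = a, b
    if not inc_a and not inc_b:
        return "newcomer-newcomer"
    if inc_a != inc_b:
        return "newcomer-incumbent"
    # Both are incumbents: check whether this is a repeat collaboration.
    if frozenset({name_a, name_b}) in past_pairs:
        return "repeat incumbent-incumbent"
    return "incumbent-incumbent"

def link_distribution(team, past_pairs):
    """Count each link type across all pairs in the team."""
    counts = {}
    for a, b in combinations(team, 2):
        kind = classify_link(a, b, past_pairs)
        counts[kind] = counts.get(kind, 0) + 1
    return counts

# Example: two incumbents who have worked together before, plus one newcomer.
team = [("Ana", True), ("Ben", True), ("Caro", False)]
past = {frozenset({"Ana", "Ben"})}
print(link_distribution(team, past))
# {'repeat incumbent-incumbent': 1, 'newcomer-incumbent': 2}
```

A team dominated by repeat incumbent-incumbent links would show that single category swamping the distribution, which is the homogenizing pattern the study associates with less innovative teams.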
The study analyzed data from both artistic and scientific fields in which collaboration has faced pressures such as differentiation and specialization, internationalization, and commercialization. Specifically, in the artistic field, the researchers looked at all 2,258 Broadway productions from 1877 to 1990. Their findings are telling.
They found that network topology significantly affects artist performance, and that the more diverse the pools of talent and creative material (that is, the more new collaborators), the more likely artists were to experiment and create hits from new combinations of existing material.
They also found that artist teams that combined experienced with new artists were most successful, whereas artist teams composed of only people who had collaborated before had less successful productions.
Cyberinfrastructure consists of computing systems, data storage systems, advanced instruments and data repositories, visualization environments, and people, all linked together by software and high-performance networks to support scientific discoveries.
Cyberinfrastructure can serve a number of purposes in team science, including: helping researchers connect with new collaborators, improving communication within an existing team, and helping researchers share their data more effectively.
Here are a few categories of tools with examples of each.
The VIVO National Network enables the discovery of researchers across institutions. The information accessible through VIVO's search and browse capability resides and is controlled locally, within institutional VIVOs or other semantic-web-compliant applications.
Participants in the network include institutions with:
In order to set team member expectations and potentially mediate future team conflict, it's important that a team agrees upon how certain components of the project will be handled. Try answering these questions, which are based on questions used by the NIH Ombudsman's Office.
You may use these questions as a guide for team discussion.
Reference: Questions courtesy of and developed by the National Institutes of Health Office of the Ombudsman - Center for Cooperative Resolution
Examples of conflicts can be found at: http://ethics.od.nih.gov/procedures/COI-Protocol-Review-Guide.pdf
Conflicts are inevitable in collaboration. Conflict, in fact, isn't always bad, but it is a hindrance if it takes away from the development of the science rather than fueling scientific discussion among varying points of view. When conflict becomes a hindrance, there are tools available to help you manage it.
Although each project is unique, certain core issues are common to group conflicts. The National Institutes of Health's Office of the Ombudsman suggests that cross-disciplinary teams address a number of common sources of conflict before they arise. These sources of conflict fall under five main categories:
As you may imagine, many different kinds of questions fit into these broad categories. Your job is to identify questions that belong to each of these five big topics.
Bibliometrics are a simple method for evaluating a team science venture. Using papers, reports, and patents as tangible evidence of progress, bibliometrics study statistical trends in authorship across fields of science.
But like any method, it has limitations.
On their own, bibliometrics limit the interpretation of a project's results. They indicate neither the process behind a collaboration's formation and organization, nor information a team may deem relevant to the final product and its evaluation.
Bibliometrics don't measure the differing levels of relationships formed during and after collaboration. Relationships between teachers and students, colleagues, or supervisors and assistants, are of great interest to those who frequently collaborate on team science.
Another issue with bibliometrics is that evaluating only published team science studies leaves no accounting for unpublished work, whether the project failed or the findings supported a null hypothesis and never found their way to publication.
The file drawer effect, as coined by Robert Rosenthal, speaks to this unpublished data that gets shoved into a metaphorical (and sometimes literal) file drawer.
Omitting these studies, even if a fraction of the total work, creates a bias.
Case studies address the behind-the-scenes specifics of an initiative, which the limited scope of bibliometrics overlooks. Case studies can provide theoretical guidance and identify the social processes of team science through a narrative orientation. They are able to raise questions about organizational and cultural dimensions.
But where the scope of bibliometrics is too narrow, the scope of case studies is too broad.
Case studies lack systematic assessment of the relative importance of one process over another.
Knowing which factors matter most is important, yet when case studies are contrasted with one another, their results are too varied to generalize.
A case study can highlight a project's division of labor, technology, or communication, but it gives the scholar no way to rank one above another. This weakens the effectiveness of case studies as a method of evaluation.
Team science initiatives involve multiple kinds of group membership, which aren't limited to membership associated with academic discipline. Participating researchers and trainees, funding organizations, academic institutions, policymakers, translational partners in clinical settings and community organizations all have a stake in the outcome of team science programs. Therefore, evaluation of team science initiatives must incorporate multiple perspectives. As a result, a multi-method approach is best suited for evaluating team science.
Outcome assessments from participating scientists, trainees, and staff members should be in line with peer appraisals.
It has been postulated that more diverse teams, like those that are cross-disciplinary, are best equipped to drive innovation and ignite the creative spark. Often, the robust nature of cross-disciplinary teams complements the way ideas are developed better than a single innovator can. According to Donald Campbell's evolutionary theory of creativity, original ideas come to life in three main steps: variation, selection, and retention.
First, team members combine their existing knowledge to create a canvas for new ideas. Different team members will have different kinds of knowledge to share. The more varied the knowledge, the better.
Second, teams prioritize actions based on the probability of idea breakthrough or failure. Teams must discuss their idea plans with one another and come to a consensus on which ideas to move forward with and which to abandon.
Lastly, once teams have decided on a course of action, they will begin to replace old practices with new ideas. The more assorted the team, the more opportunities for creativity to take place.
Highly distributed or multi-university teams, especially those that span disciplines and large geographic distances, can face big challenges along the journey of collaboration.
If team members are far apart from one another, communication and consensus-making can slow. These teams may have higher collaboration costs than teams whose members are much closer together, and more effort and energy may be needed to understand one another.
Higher collaboration costs can further complicate the research itself, and project success may be compromised.
Sometimes large institutional differences can stand in the way of multi-university collaborations. Problems at one institution may go unnoticed at another. Different universities often have dissimilar structures, such as pay scales or requirements for joint appointments.
Even cultural norms can clash during collaboration. Researchers may have to negotiate where to publish, because the A-list journals and conferences where faculty members are expected to publish differ across institutions. These kinds of decisions must be negotiated, and it's not always easy.
But while multi-university collaborations may be tricky, there are vast incentives to bringing many heads together from many different institutions, even those that are quite far apart on the map.
When several institutions collaborate, the rewards tend to be greater. But so are the risks. There are many rewards to collaborating in such a big way; one of the biggest is the potential for long-term innovation.
The more experts on your team from different universities, the better your chances for new, amazing ideas. Having multiple institutions working on your project may increase your chances for funding, too. Funding agencies may give a project more consideration if it's being developed by a group of universities instead of just one.
But there are risks.
Sometimes multi-university collaboration will introduce barriers that you'll need to navigate.
Beware of those risks, but keep your sights focused on all the rewards of multi-university collaboration. Your path might not always be smooth, but it's sure to be worthwhile.