Finding Collaborators: Toward Interactive Discovery Tools for Research Network Systems


Table 8.
Means and standard deviations of scores for Likert questions regarding features within the prototype.


Overall, the participant reaction to the working prototype was positive. The combination of publication and grant information in a single timeline scored the highest (mean 4.65, SD 0.67) of all the prototype features. Participants felt the interface provided insights into the candidate's research interests and past history. The timeline format allowed users to examine researchers' publication history, including both the recency and frequency of publications associated with search keywords. The publication display allowed users to categorize a candidate as either a multidisciplinary researcher or a researcher with a single field of research. As with the SUS scores, responses from the 2 groups were roughly similar.

Several improvements to the application were suggested during the evaluations. Multiple participants requested hyperlinks to the PubMed and NIH RePORTER records corresponding to items found in collaborator profiles. One evaluator felt that information regarding candidates' academic training (eg, MD or PhD) was important for assembling collaborative teams. Another evaluator observed that papers and grants do not fully describe the value a candidate brings to collaborations, particularly unique expertise or access to crucial resources (eg, animal models, computing techniques). The addition of research resource information [38-40] was suggested as a potential solution to this problem.

Anecdotal feedback from participants suggested that interest in collaborator search tools might differ based on the context in which researchers work. Two participants, 1 from a country with a smaller number of universities and another from a small US medical school (fewer than 200 faculty), commented that the lack of resources at their institutions limited opportunities for local collaborations, potentially motivating greater interest in RNS tools.

Discussion

Principal Findings

Interactive visualizations may help researchers use RNSs to identify collaborators. Interviews with researchers used paper prototypes to stimulate discussion of desired functionality for collaborator search tools. A functional prototype providing many of these features, including chronological displays, bookmarking tools, and multiple keyword search, was well received by users. Additional development and evaluation will be needed to gauge the utility of RNS collaborator search tools.

Building Usable Collaborator Search Tools

Identifying appropriate collaborators is an important task in the increasingly interdisciplinary field of biomedical research. Although RNSs show great promise for aggregating and representing data describing researchers and their potential contributions, the success of these tools will require more than just infrastructure. If RNSs are to play a constructive role in facilitating collaboration, they will need to improve on the established method of using existing collegial contacts to find the “friend of a friend” who might provide needed expertise. To do this, they will need to provide easy access to high-quality data; in effect, they must provide added value unavailable through other means [6]. Furthermore, they must support the potentially different goals of different groups of users [12].

Although previous efforts have investigated collaborator search habits and preferences, relatively little attention has been paid to how interactive tools might meet these needs. Investigation of potential features and how they might be realized addressed requirements identified in earlier studies, including the importance of personal contact lists [6] and geographic location [14], along with others that were implied, if not explicitly discussed, such as the temporal histories of grants and publications. In contrast with computational methods that attempt to model researcher similarity [19,20], the designs considered in this study rely on term matching and visual displays, thus favoring clarity and simplicity at the potential expense of missing latent similarities. Further comparisons of this tradeoff might be an interesting area for future investigation.
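To make the term-matching approach concrete, the following is a minimal sketch in Python; the profile structure, names, and function are illustrative assumptions, not part of the systems studied. Note that this style of matching cannot surface latent similarities (eg, a "genetics" keyword will not match a "genomics" query), which is the tradeoff discussed above.

```python
def match_researchers(profiles, query_terms):
    """Score each researcher by the number of query keywords that
    appear among their publication keywords (plain term matching,
    with no latent-similarity modeling)."""
    query = {t.lower() for t in query_terms}
    scored = []
    for name, keywords in profiles.items():
        hits = query & {k.lower() for k in keywords}
        if hits:
            scored.append((name, len(hits)))
    # rank by number of matching terms, breaking ties alphabetically
    return sorted(scored, key=lambda kv: (-kv[1], kv[0]))

# Hypothetical profiles for illustration only
profiles = {
    "Researcher A": ["genomics", "cancer", "bioinformatics"],
    "Researcher B": ["imaging", "genomics"],
    "Researcher C": ["ethics"],
}
print(match_researchers(profiles, ["Genomics", "imaging"]))
# Researcher B matches both terms, Researcher A one, and C none.
```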

Participants in the qualitative inquiries did not respond enthusiastically to some features that were identified as potentially important in prior work [5,6]. In response to the prototype based on personal social networks, participants were not particularly interested either in the use of their personal contacts as seed points or in the use of geographical distance as a criterion for selecting collaborators. However, in both cases participants may have missed the salient point. In the case of personal social networks, participants’ reaction that “they know these people already” may have overshadowed the fact that existing colleagues are important “gateways” to people they do not know. In the case of geographical distance, participants may not have been aware of the potential impact of proximity on collaborative productivity and of the possibility of discovering neighboring, but unknown, collaborators. Whatever the etiology, these findings suggest the likelihood of a range of preferences and styles for searching for collaborators. More fully realized tools might provide users with a range of starting points, views, and filtering options.

Positive responses to the prototype suggest the design provided useful functionality for collaborator searches. Participants found the timeline-based display of publications and grants to be useful for a variety of tasks, including identifying central people in fields, assessing researchers' levels of activity, and finding multidisciplinary collaborators. Timeline displays of publication activity have also been explored in other RNSs, most notably Profiles [15] and SciVal [16].

Participants in the first phase of the study were inconsistent in their comments regarding the role of impact measures in collaboration search processes. Although these metrics were generally found to be of potential use, there was little agreement on which specific measures might prove most useful. It is possible that this lack of consensus is a reflection of the ongoing discussion of the relative merits of different measures [41,42]. Potential design solutions might include displaying multiple impact measures, along with tools for filtering and ranking along individual measures or potentially some weighted aggregate measure.
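One form the weighted-aggregate design solution might take is sketched below; the measure names, weights, and function are hypothetical illustrations, not features of the prototype. Each measure is min-max normalized across the candidate pool so that measures on different scales (eg, h-index vs raw citation counts) can be combined with user-chosen weights.

```python
def aggregate_impact(candidates, weights):
    """candidates: {name: {measure: value}}; weights: {measure: float}.
    Min-max normalizes each measure across candidates, then returns a
    weighted sum per candidate for ranking."""
    scores = {name: 0.0 for name in candidates}
    for m, w in weights.items():
        vals = [c[m] for c in candidates.values()]
        lo, hi = min(vals), max(vals)
        span = (hi - lo) or 1.0  # avoid division by zero when all equal
        for name, c in candidates.items():
            scores[name] += w * (c[m] - lo) / span
    return scores

# Hypothetical data: candidate B leads on h-index, A on citations
candidates = {
    "A": {"h_index": 20, "citations": 900},
    "B": {"h_index": 35, "citations": 400},
}
weights = {"h_index": 0.6, "citations": 0.4}
ranked = sorted(aggregate_impact(candidates, weights).items(),
                key=lambda kv: kv[1], reverse=True)
print(ranked)  # B ranks first under these weights
```

Exposing the weights as user controls would let each user encode their own view of the ongoing debate over which impact measures matter.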

The results for the SUS present both initial feedback on the usability of the functional prototype and indications of areas potentially in need of further work. The mean SUS score of 76.4 (SD 13.9) provides some validation of the usability of the tool, with particularly encouraging scores for questions regarding ease of learnability and confidence in using the system. Other questions suggest potential concerns regarding unnecessary complexity, potential need for technical support, inconsistency, and the need for training.
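For readers unfamiliar with how SUS scores such as the 76.4 above are derived, the standard scoring procedure [33] can be expressed as a short function; the implementation below is a sketch of that published procedure, not code from the study.

```python
def sus_score(responses):
    """Standard SUS scoring (Brooke, 1996): ten 1-5 Likert responses.
    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The sum is multiplied by 2.5 to yield a 0-100 score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A uniformly neutral respondent (all 3s) scores 50; strong agreement
# on the positive items and disagreement on the negative items scores 100.
print(sus_score([3] * 10))    # 50.0
print(sus_score([5, 1] * 5))  # 100.0
```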

A relatively low score for the question involving frequency of use ("I think that I would like to use this system frequently") is consistent with the earlier observation that most researchers do not use online tools to find collaborators (see Table 4) and with the observation that finding collaborators may not be seen as a discrete or frequent task. Further study, including empirical comparisons of metrics such as learnability, would be needed to better understand these preliminary usability results.

Additional Likert questions assessing satisfaction with specific design elements gave generally encouraging results. The lowest score was given to the representation of the documents within the system ("height of the publication bar"), which scored a mean of 2.90. During the design phase of the working prototype, several approaches for representing the documents were considered. The initial suggestion was to use an absolute scale, making the height of each bar proportional to the number of publications by a candidate that matched the keyword in each given year. This approach was rejected because it complicated the rendering of bars for keywords with low but nonzero publication counts. Furthermore, absolute counts might perpetuate biases against junior researchers, who might be less likely to have many publications matching a single topic in any given year.

Instead, we used a relative scaling approach normalizing the height of each bar to the percentage of that individual’s publications on the given topic for the given year. This design presents its own challenges because researchers with similar ratios but vastly different outputs on a given topic could be represented identically. Alternative representations with appropriate user controls might give users the option of selecting a preferred visual representation and further comparative user testing might be needed to better understand the usability implications of these different layouts.
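The relative scaling described above can be sketched as follows; the data structure (a map from year to per-paper keyword sets) is a hypothetical simplification for illustration. The example also demonstrates the stated challenge: researchers with very different outputs but identical ratios produce identical bars.

```python
def bar_heights(pubs_by_year, topic):
    """Bar height for a topic in a year = fraction of the researcher's
    publications that year whose keywords match the topic (the relative
    scaling described in the text)."""
    heights = {}
    for year, paper_keywords in pubs_by_year.items():
        matching = sum(1 for kws in paper_keywords if topic in kws)
        heights[year] = matching / len(paper_keywords) if paper_keywords else 0.0
    return heights

# Identical ratios despite a tenfold difference in output:
junior = {2013: [{"genomics"}, {"imaging"}]}
senior = {2013: [{"genomics"}] * 10 + [{"imaging"}] * 10}
print(bar_heights(junior, "genomics"))  # {2013: 0.5}
print(bar_heights(senior, "genomics"))  # {2013: 0.5}
```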

Questions regarding initial design elements also provide preliminary validation of the requirements identified in the qualitative investigation of the paper prototypes (Table 6). Positive responses to the timeline (requirement 1), the list of potential collaborators (requirement 3), and the multiple keyword search (requirement 4) suggest that these features might play important roles in production-quality collaborator search tools. However, the current list of requirements and themes is not definitive. Further exploration of user needs, involving a broader set of informants, is likely necessary to capture the possible variations in preferences and working styles for collaboration identification.

These inquiries identified several additional suggested features focusing on the presentation of richer information about potential collaborations. The addition of academic degrees, impact factors, and research resources [38-40] might provide additional perspective on the prominence of potential collaborators. Exploration of the relative utility of these comparative measures might be an interesting focus for future work.

Participant recruitment identified subpopulations of users with potentially different needs and goals for research collaborator search tools. Because recruitment involved a convenience sample [43] based on email solicitations to scientists interested in research networks and subsequent snowball sampling, participants are in no way representative. It is entirely possible that this convenience sample might have introduced biases in the results.

However, we did identify 2 distinct groups with different goals and perspectives. Although the nature of the sample limits generalizations that might be made, PIs appeared to rely more heavily on personal networks than RFs (Table 4). Because the facilitators are generally working on behalf of others, potentially in unfamiliar fields, they might benefit more from interactive tools. Other potential features, such as concept maps relating topics from different fields, might provide additional benefits for research facilitators.

Participants also suggested that contextual differences might make interactive tools more useful to certain classes of PIs. Researchers at institutions that lack opportunities for local collaborations and junior researchers, previously described as relatively impoverished with respect to personal networks of potential collaborators [5], may be those who stand to benefit most from research social network collaboration identification tools.

The concern that collaborator search is not a discrete task that users engage in is consistent with the observation that search engines may lead many users into RNS pages [11]. To be successful, collaborator search tools will have to work within this well-established dynamic, finding ways to engage users who arrive via search engines and providing value beyond simple ranked lists. The functional prototype provides an initial design exploration that might move in this direction, but additional work will be needed to fully integrate this vision within the context of functional RNSs.

Further work will be needed to develop a more complete understanding of the use of collaboration search tools. The small and nonrepresentative sample of participants limits the breadth, depth, and generality of these preliminary results. Specifically, this study does not address the very real possibility that collaboration search practices and preferences may differ across the wide range of biomedical research collaborations. Differences in researcher backgrounds (basic researchers vs clinical researchers), number of collaborations, size of collaborations, local funding climate and incentives, and the extent to which research is interdisciplinary are just a few of the factors that might influence how researchers might identify potential collaborators and, therefore, how interactive tools might best support this practice.

Limitations

This project’s small sample size limits the generalizability of the results. The convenience sample of 38 participants may not be representative of the greater research community. Generality of the results might also be limited by the diversity of the participant pool, which contained a relatively small number of researchers with medical degrees. Descriptions of collaborator search behavior are limited by reliance on recall-based measures and respondents’ definitions of the nature and extent of their collaborations. The limitations of the data used in the functional prototype (2 VIVO datasets) might have influenced users’ responses to the tool.

Conclusions

The landscape of RNSs continues to evolve as more systems are deployed throughout institutions, providing researchers novel opportunities for scientific collaborations. RNSs have the potential to play an important role in enabling interdisciplinary science. However, these benefits will not be realized without highly usable and useful end-user applications. Successful collaboration support tools must provide enough value to convince researchers to change established habits, including traditional networking and Web searches. Effectively converting the previously manual and socially complex task of identifying collaborators into a computer search system requires analysis of user needs and of how such tools might affect researchers' workflows.

This qualitative study used semistructured interviews with researchers to gauge responses to paper prototypes for collaboration search tools. This inquiry identified 2 distinct user groups (RFs and PIs), and 3 themes categorizing collaboration search software needs: measure impact, track candidates, and conduct complex searches. Four specific requirements—chronological display of research output, robust impact measures, tools for tracking promising candidates, and multiple keyword searches—were considered for inclusion in a functional prototype, which was reviewed by participants in a second round of qualitative inquiry. Responses on the SUS provided initial formative validation of the design.

Although further inquiry will be needed to understand the similarities and differences between these subgroups, these distinctions illustrate the importance of understanding user needs and of providing functionality that meets those needs.

Acknowledgments

CDB developed the paper prototypes, conducted the participant interviews, coded and interpreted responses, developed the functional system, and drafted the manuscript. HH discussed designs, reviewed data coding, reviewed analyses of qualitative data, and revised the manuscript. TKS and MJB contributed to designs of the paper prototypes and of the functional tool. All authors reviewed and contributed to the manuscript. We thank all the participants in the study for their time and insights. We would especially like to thank Holly Falk-Krzesinski, who was instrumental in our recruiting efforts. This publication was made possible, in part, by the Lilly Endowment, Inc Physician Scientist Initiative and by the University of Pittsburgh Clinical and Translational Science Institute, NIH Grants #5 UL1 TR000005-08 and #UL1 RR024153-06.

Conflicts of Interest

None declared.


Multimedia Appendix 1

Semi-structured interview questions used in Phase 1.

PDF File (Adobe PDF File), 4KB



Multimedia Appendix 2

Semi-structured interview instrument for Phase 2 evaluation of the functional prototype.

PDF File (Adobe PDF File), 15KB


References

  1. Wuchty S, Jones BF, Uzzi B. The increasing dominance of teams in production of knowledge. Science 2007 May 18;316(5827):1036-1039 [FREE Full text] [CrossRef] [Medline]
  2. Disis ML, Slattery JT. The road we must take: multidisciplinary team science. Sci Transl Med 2010 Mar 10;2(22):22cm9. [CrossRef] [Medline]
  3. Beaver DD. Reflections on scientific collaboration (and its study): past, present, and future. Scientometrics 2001;52(3):365-377. [CrossRef]
  4. Katz J, Martin B. What is research collaboration? Research Policy 1997;26(1):1-18. [CrossRef]
  5. Spallek H, Schleyer T, Butler BS. Good partners are hard to find: the search for and selection of collaborators in the health sciences. In: Proceedings of the IEEE Fourth International Conference on eScience. 2008 Presented at: IEEE Fourth International Conference on eScience; 2008; Indianapolis, IN p. 462-467. [CrossRef]
  6. Schleyer T, Butler BS, Song M, Spallek H. Conceptualizing and Advancing Research Networking Systems. ACM Trans Comput Hum Interact 2012 Mar 1;19(1):2:1-2:26 [FREE Full text] [CrossRef] [Medline]
  7. Northwestern University. 2010 NU RN Tool Comparison
      URL: http://www.vivoweb.org/files/NU%20RN%20Tool%20Comparison.pdf [accessed 2014-03-12]
    [WebCite Cache]
  8. VIVO. VIVO | connect – share – discover. 2014.
      URL: http://www.vivoweb.org/ [accessed 2014-03-12]
    [WebCite Cache]
  9. Schleyer T, Spallek H, Butler BS, Subramanian S, Weiss D, Poythress ML, et al. Facebook for scientists: requirements and services for optimizing how scientific collaborations are established. J Med Internet Res 2008;10(3):e24 [FREE Full text] [CrossRef] [Medline]
  10. Weber GM, Barnett W, Conlon M, Eichmann D, Kibbe W, Falk-Krzesinski H, Direct2Experts Collaboration. Direct2Experts: a pilot national network to demonstrate interoperability among research-networking platforms. J Am Med Inform Assoc 2011 Dec;18 Suppl 1:i157-i160 [FREE Full text] [CrossRef] [Medline]
  11. Kahlon M, Yuan L, Daigre J, Meeks E, Nelson K, Piontkowski C, et al. The use and significance of a research networking system. J Med Internet Res 2014;16(2):e46 [FREE Full text] [CrossRef] [Medline]
  12. Boland MR, Trembowelski S, Bakken S, Weng C. An initial log analysis of usage patterns on a research networking system. Clin Transl Sci 2012 Aug;5(4):340-347 [FREE Full text] [CrossRef] [Medline]
  13. Kraut R, Egido C, Galegher J. Patterns of contact and communication in scientific research collaboration. In: Proceedings of the 1988 ACM conference on computer-supported cooperative work. 1988 Presented at: ACM conference on computer-supported cooperative work; September 26-28, 1988; Portland, OR p. 1-12. [CrossRef]
  14. Olson G, Olson J. Distance matters. Human-Computer Interaction 2000 Sep 1;15(2):139-178. [CrossRef]
  15. Profiles Research Networking Software. Professional Networking and Expertise Mining for Research Collaboration
      URL: http://profiles.catalyst.harvard.edu/ [accessed 2014-03-20]
    [WebCite Cache]
  16. Elsevier. Elsevier Research Intelligence
      URL: http://www.elsevier.com/online-tools/research-intelligence [accessed 2014-03-20]
    [WebCite Cache]
  17. ResearchGate.
      URL: http://www.researchgate.net/ [accessed 2014-03-20]
    [WebCite Cache]
  18. Agle R. VIVO Searchlight. 2014.
      URL: about.vivosearchlight.org [accessed 2014-03-12]
    [WebCite Cache]
  19. Gollapalli SD, Mitra P, Giles CL. Similar researcher search in academic environments. In: Proceedings of the 12th ACM/IEEE-CS joint conference on Digital Libraries. 2012 Presented at: 12th ACM/IEEE-CS joint conference on Digital Libraries; June 10-14, 2012; Washington, DC p. 167-170. [CrossRef]
  20. Chen HH, Gou L, Zhang X, Giles CL. CollabSeer: a search engine for collaboration discovery. In: Proceedings of the 11th annual international ACM/IEEE joint conference on Digital libraries. 2011 Presented at: 11th annual international ACM/IEEE joint conference on Digital libraries; June 13-17, 2011; Ottawa, ON p. 231-240. [CrossRef]
  21. Beyer H, Holtzblatt K. Contextual Design: Defining Customer-Centered Systems. San Francisco, CA: Morgan Kaufmann Publishers; 1998.
  22. Rosson MB, Carroll JJ. Usability Engineering: Scenario-Based Development of Human Computer Interaction. San Francisco, CA: Academic Press; 2002.
  23. Gewin V. Collaboration: social networking seeks critical mass. Nature 2010 Dec 16;468(7326):993-994. [CrossRef]
  24. Bhavnani SK, Warden M, Zheng K, Hill M, Athey BD. Researchers’ needs for resource discovery and collaboration tools: a qualitative investigation of translational scientists. J Med Internet Res 2012;14(3):e75 [FREE Full text] [CrossRef] [Medline]
  25. Corbin JM, Strauss AL. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Los Angeles, CA: Sage Publications, Inc; 2008.
  26. Dow SP, Glassco A, Kass J, Schwarz M, Schwartz DL, Klemmer SR. Parallel prototyping leads to better design results, more divergence, and increased self-efficacy. ACM Trans Comput-Hum Interact 2010 Dec 01;17(4):1-24. [CrossRef]
  27. Spence R, Tweedie L. The Attribute Explorer: information synthesis via exploration. Interacting with Computers 1998 Dec 01;11(2):137-146. [CrossRef]
  28. Cisco. WebEx Conferencing, Online Meetings, Desktop Sharing, Video Conferencing
      URL: http://www.webex.com/ [accessed 2014-03-12]
    [WebCite Cache]
  29. Lazar J, Feng JH, Hochheiser H. Research Methods in Human-Computer Interaction. West Sussex: John Wiley & Sons; 2010.
  30. Openlink Software. Virtuoso Open-Source Wiki
      URL: http://www.openlinksw.com/wiki/main [accessed 2014-03-12]
    [WebCite Cache]
  31. W3C. SPARQL 1.1 Overview
      URL: http://www.w3.org/TR/sparql11-overview/ [accessed 2014-03-12]
    [WebCite Cache]
  32. Bostock M, Ogievetsky V, Heer J. D³: Data-Driven Documents. IEEE Trans Vis Comput Graph 2011 Dec;17(12):2301-2309. [CrossRef] [Medline]
  33. Brooke J. SUS-A quick and dirty usability scale. In: Jordan PW, Thomas B, Weerdmeester BA, McClelland AL, editors. Usability Evaluation in Industry. London: Taylor and Francis; 1996:189-194.
  34. Bangor A, Kortum PT, Miller JT. An empirical evaluation of the System Usability Scale. International Journal of Human-Computer Interaction 2008 Jul 30;24(6):574-594. [CrossRef]
  35. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A 2005 Nov 15;102(46):16569-16572 [FREE Full text] [CrossRef] [Medline]
  36. Liu Y, Coulet A, LePendu P, Shah NH. Using ontology-based annotation to profile disease research. J Am Med Inform Assoc 2012 Jun;19(e1):e177-e186 [FREE Full text] [CrossRef] [Medline]
  37. GitHub. RNS-Searcher
      URL: https://github.com/pittdbmivis/rns-searcher [accessed 2014-10-09]
    [WebCite Cache]
  38. Tenenbaum JD, Whetzel PL, Anderson K, Borromeo CD, Dinov ID, Gabriel D, et al. The Biomedical Resource Ontology (BRO) to enable resource discovery in clinical and translational research. J Biomed Inform 2011 Feb;44(1):137-145 [FREE Full text] [CrossRef] [Medline]
  39. Vasilevsky N, Johnson T, Corday K, Torniai C, Brush M, Segerdell E, et al. Research resources: curating the new eagle-i discovery system. Database (Oxford) 2012;2012:bar067 [FREE Full text] [CrossRef] [Medline]
  40. Torniai C, Brush M, Vasilevsky N, Segerdell E, Wilson M, Johnson T. Developing an application ontology for biomedical resource annotation and retrieval: challenges and lessons learned. 2011 Presented at: ICBO: International Conference on Biomedical Ontology; July 28-30, 2011; Buffalo, NY.
  41. Pan RK, Fortunato S. Author Impact Factor: tracking the dynamics of individual scientific impact. Sci Rep 2014;4:4880 [FREE Full text] [CrossRef] [Medline]
  42. Penner O, Pan RK, Petersen AM, Kaski K, Fortunato S. On the predictability of future impact in science. Sci Rep 2013;3:3052 [FREE Full text] [CrossRef] [Medline]
  43. Goodman L. Snowball sampling. The Annals of Mathematical Statistics 1961;32(1):148-170.



Abbreviations

NIH: National Institutes of Health
PI: principal investigator
RF: research facilitator
RNS: research networking system
SUS: System Usability Scale

Edited by G Eysenbach; submitted 02.04.14; peer-reviewed by M Kahlon, G Weber, L Johnson, Qi Li, M Dogan; comments to author 17.07.14; revised version received 13.08.14; accepted 30.08.14; published 04.11.14

©Charles D Borromeo, Titus K Schleyer, Michael J Becich, Harry Hochheiser. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 04.11.2014.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.


Journal of Medical Internet Research
ISSN 1438-8871

Copyright © 2014 JMIR Publications Inc.
