Category: UC3

Posts written by UC3 staff.

PIDapalooza is back!


Guess what!?!?  PIDapalooza is back!  This time we will meet up with the nerdiest PID folks on January 23-24, for a two-day celebration of persistent identifiers and networked research. 

Brought to you by California Digital Library, Crossref, DataCite, and ORCID.  Together, we will do the impossible – make a meeting about persistent identifiers and networked research fun! 

Perfect PID-sessions

This year’s sessions are organized around eight broad topics:


  • PID myths
  • Achieving persistence
  • PIDs for emerging uses
  • Legacy PIDs
  • Bridging worlds
  • PIDagogy
  • PID stories
  • Kinds of persistence

The program is final and there’s something for everyone! From Do Researchers Need to Care about PID Systems? to Stories from the PID Roadies: Scholix; from The Bollockschain and other PID Hallucinations to #ResInfoCitizenshipIs?

There will also be plenaries by Johanna McEntyre on As a [biologist] I want to [reuse and remix data] so that I can [do my research] and Melissa Haendel (title to be confirmed).

Get your tix!

Now’s the time to register – we hope to see you there!

 

Skills Training for Librarians: Expanding Library Carpentry

In today’s data-driven, online and highly interconnected world, librarians are key to supporting diverse information needs and leading best practices to work with and manage data. For librarians to be effective in a rapidly evolving information landscape, training and professional development opportunities in both computational and data skills must be available and accessible.

Over the past couple of years, an international Library Carpentry (LC) movement has emerged that seeks to emulate the success of the Carpentries — both the Data Carpentry and Software Carpentry initiatives — in providing librarians with the critical computational and data skills they need to serve their stakeholders and user communities, as well as to streamline repetitive workflows and apply data best practices within the library. The Library Carpentry community has already developed an initial curriculum and taught more than 40 workshops around the world.

We are excited to announce that the California Digital Library (CDL) has been awarded project grant funds from the Institute of Museum and Library Services (IMLS) to further advance the scope, adoption, and impact of Library Carpentry across the US.  CDL’s two-year project will be conducted by its digital curation team, the University of California Curation Center (UC3), and will focus on three main activities: 

  1. development and updates of core training modules optimized for the librarian community and based on Carpentries pedagogy
  2. regionally-organized training opportunities for librarians, leading to an expanding cohort of certified instructors available to train fellow librarians in critical skills and tools, such as the command line, OpenRefine, Python, R, SQL, and research data management
  3. community outreach to raise awareness of Library Carpentry and promote the development of a broad, engaged community of support to sustain the movement and to advance LC integration within the newly forming Carpentries organization

Why Library Carpentry?

Library Carpentry leverages the success of the Carpentries pedagogy, which is based on providing a goal-oriented, hands-on, trial-and-error approach to learning computational skills, and extends it to meet the specific needs of librarians.

It is often difficult to figure out which skills to learn or how to get started learning them. In Library Carpentry, we identify the fundamental skills librarians need and develop and teach them in hands-on, interactive workshops. Workshops are designed for people with little to no prior computational experience and use data relevant to librarians, so that participants work with material applicable to their own jobs. Workshops are also friendly learning environments with the sole aim of empowering people to use computational skills effectively and with more confidence.
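To give a flavor of the kind of hands-on exercise such a workshop might include, here is a minimal sketch in Python that scripts a repetitive cleanup task over a couple of invented bibliographic records; the data and field names are made up for illustration and are not taken from the Library Carpentry curriculum.

```python
# A hypothetical example of the sort of repetitive task workshop skills
# make scriptable: normalizing inconsistent ISBN and publisher fields.
records = [
    {"title": "Data Cleaning 101", "isbn": "978-0-123456-47-2 ", "publisher": "ACME press"},
    {"title": "Metadata Matters", "isbn": "9780123456489", "publisher": "Acme Press"},
]

def normalize(record):
    """Strip hyphens and stray whitespace from ISBNs; title-case publishers."""
    cleaned = dict(record)
    cleaned["isbn"] = cleaned["isbn"].replace("-", "").strip()
    cleaned["publisher"] = cleaned["publisher"].title()
    return cleaned

for record in map(normalize, records):
    print(record["isbn"], "|", record["publisher"])
```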

How does this relate to the Carpentries?

Two sister organizations, Software Carpentry and Data Carpentry, have focused on teaching computational best practices. Their ‘separate but collaborative’ organizational structure allowed the two groups to build a shared community of more than 1,000 certified instructors and 47 current Member Organizations around the world.  However, as Software Carpentry and Data Carpentry grew and developed, this structure did not scale. As a result, the governing committees of both organizations recognized that, as more mature organizations, they could be most effective under a unified governance model.

On August 30, 2017, the Software Carpentry and Data Carpentry Steering Committees met jointly and approved two motions that together form a strong commitment to continue moving forward with a merger.  As part of this merger, the new “Carpentries” organization will look to increase its reach into additional sectors and communities.  The nascent Library Carpentry community recently met and decided that it aims to join as a full-fledged ‘Carpentry’ in the coming year.

This grant will help LC solidify approaches to learning and community building, while also bringing resources to the table as we embark on future integration of LC within the merged Carpentries organization.

How does the Carpentries model work?

In the Carpentries model, instructors are trained and certified in the Carpentries approach to teaching, which is grounded in educational pedagogy, and are asked to commit to offering workshops in their regions and to reworking, improving, and maintaining lessons. These instructors teach two-day, hands-on workshops on the foundational skills needed to manage and work effectively with data. The goal is for learners to become practitioners during the workshop and then continue learning through online and in-person community interaction outside the classroom.

With the “train-the-trainer” model, the Carpentries are built to create learning networks and capacity for training through active communities and shared, collaborative lessons. They have used this model to scale through parallel approaches of developing lessons, offering workshops, and expanding the community. The LC community has also used this model, and our grant project aims to extend it further.

Next Steps

As an immediate next step, CDL has begun recruiting for a Library Carpentry Project Coordinator.  This will be a two-year, grant-funded position.  You can apply at the UC Office of the President website; applications are due November 30, 2017.

While this position will report to the Director of CDL’s University of California Curation Center (UC3), it will focus on extending LC activities in the US and working globally to build capacity and reach for the Library Carpentry community and Carpentries staff.

For more information on this project, please feel free to contact CDL’s UC3 team at uc3@ucop.edu. You can also follow UC3 on Twitter at @UC3CDL.  To learn more about Library Carpentry, visit https://librarycarpentry.github.io and follow @LibCarpentry on Twitter.

We look forward to these next steps for Library Carpentry and a growing network of data savvy librarians.

RFI for organizational identifier registry

Organizations/institutions are a key part of the scholarly communications ecosystem. However, we lack an openly licensed, independently run organizational identifier standard to use for common affiliation and citation use cases.

To define a solution to this problem, a group of interested parties drafted and shared a proposal at last year’s PIDapalooza.  Based on that discussion, earlier this year Crossref, DataCite, and ORCID announced the formation of an Organization Identifier Working Group, and UC3 has supported this effort through our director, John Chodacki, who serves as chair of the Working Group.


Scope of Work

The primary goal of our working group (loosely codenamed OrgID, or Open PIIR – Open Persistent Institutional Identifier Registry) is to build a plan for how best to fill this gap, with the main use case being the disambiguation of researcher affiliations.

The working group used a series of breakout groups to refine the structure, principles, and technology specifications for an open, independent, non-profit organization identifier registry. We worked in three interdependent areas: Governance, Product Definition, and Business Model, and recently released for public comment our findings and recommendations for governance and product requirements.

Summary of findings & recommendations

After nine months of work, the working group recommends the creation of an open, independent organization/institution identifier registry:

  • with capabilities for organizations/institutions to manage their own record,
  • seeded with and using open data,
  • overseen by an independent governance structure, and
  • incubated within a non-profit host organization/institution (providing technical development, operations and other support) during its initial start-up phase.

Request for Information

Our working group has now issued a Request for Information (RFI) to solicit comment and to hear from groups interested in hosting and/or developing this registry.

  • Are you interested in serving as the start-up host organization?
  • Do you have organization data you are willing to contribute?
  • Do you have other resources that could be helpful for the project?
  • Do you have advice, suggestions, and feedback on creating a sustainable business model for each phase of the Registry’s development?

We’d like to hear from you!  Please help spread the word!

Before drafting responses, please also see our original A Way Forward document for additional framing principles. Also, please note that all responses will be reviewed by a subgroup of the Organization Identifier Working Group (that will exclude any RFI respondents).

 

Update: revised November 1, 2017

As posted above, the working group issued a Request for Information (RFI) on 9 October 2017 to solicit comment and interest from the broader research community in developing the Registry. We have received a number of questions about the RFI. The purpose of this post is to clarify the RFI, the process for reviewing responses, and the next steps for developing the registry. Please use this template to respond to the RFI.

(1) When are the responses due?

We have extended the deadline for responses to 1 December 2017.

(2) Who should be responding?

Any organization interested in (i) providing open data, (ii) participating in a governance role, (iii) serving as technical and/or administrative host for the Registry organization, and/or (iv) providing technology, staffing, or marketing resources.

(3) How much detail should the response include?

A general description of your interest (see (2) above), and a short description of the resources you could bring to the Registry will suffice. We are not requesting a detailed cost proposal. While framing your responses, please see the Governance and Product documents for requirements and principles. Please use this template to respond to the RFI.

(4) How will the responses be reviewed?

Responses will be received by the Organization Identifier Steering Group.  In early December, they will develop a summary and list of respondents to share with the full Working Group and the Executive Committees of Crossref, DataCite, and ORCID boards for review. We propose a meeting of stakeholders in late January, potentially the day before the PIDapalooza meeting, to discuss options with the respondents for a collaborative approach to developing the Registry. From there, next steps will be proposed.

(5) Who do I contact if I have more questions?

Please email the Org ID steering group with any questions.  Or, if you have any other questions or comments about the involvement of CDL’s UC3 team, let us know at uc3@ucop.edu.

The Significance of Managing Research Data

Some of the most influential research tools of the last century were created to ensure the quality of beer and extrapolate the results of agriculture experiments conducted in the English countryside. Though ostensibly about the placement of a decimal point, an ongoing debate about the application of these tools also provides a window for understanding what it actually means to manage research data.

The p-value: A very quick introduction

Though now ubiquitous in experiment-based research, statistical techniques for extending inferences from a small sample (e.g., the participants in a research study) to a larger population are actually a relatively recent invention. The t-test, an early and still widely used example of “small sample” statistics, was developed by William Sealy Gosset in the early 20th century as an economical way of ensuring the quality of stout. Several years later, while assisting with long-term experiments on wheat and grass at Rothamsted Experimental Station, Ronald Fisher would build on the work of Gosset and others to develop a statistical framework based around the idea of comparing observations to the null hypothesis: the position that there is no significant difference between two or more specified sets of observations.

In Fisher’s significance testing framework, devices like t-tests are tests of the null hypothesis. The results of these tests indicate the likelihood of observing a result at least as extreme as the one obtained if the null hypothesis were true. The logic is a little tricky, but the core idea is that these tests give researchers a way of understanding the likelihood that their data are the result of sampling or experimental error. In quantitative terms, this likelihood is known as a p-value. In his highly influential 1925 book, Statistical Methods for Research Workers, Fisher would introduce an informal threshold for rejecting the null hypothesis: p < 0.05.

In one of the most influential sentences in modern research methodology, Ronald Fisher describes p = 0.05 as a convenient point for judging the significance of a statistical test. From: Fisher, R.A. (1925). Statistical Methods for Research Workers.
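To make the mechanics concrete, here is a minimal sketch of a two-sample t-test on made-up measurements (not data from any study discussed here), showing how a computed p-value is compared against Fisher’s informal 0.05 threshold.

```python
# A minimal sketch of a two-sample t-test on made-up measurements,
# illustrating how a p-value is computed and compared to 0.05.
from scipy import stats

# Hypothetical measurements from two groups (e.g., two field plots).
group_a = [4.8, 5.1, 5.3, 4.9, 5.0, 5.2, 4.7, 5.4]
group_b = [5.5, 5.7, 5.2, 5.9, 5.6, 5.8, 5.4, 5.3]

# Null hypothesis: both groups are drawn from populations with the same mean.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

# Fisher's informal convention: p < 0.05 is taken as grounds to reject
# the null hypothesis.
if p_value < 0.05:
    print("Reject the null hypothesis at the 0.05 level.")
else:
    print("Fail to reject the null hypothesis at the 0.05 level.")
```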

Despite the vehement objections of all three, Fisher’s work would later be synthesized with that of the statisticians Jerzy Neyman and Egon Pearson into a suite of tools that are still widely used in many fields of research. In practice, p < 0.05 has since become a one-size-fits-all indicator of success. For decades it has been acknowledged that work that meets this criterion is generally more likely to be reported in the scholarly literature, while work that doesn’t is generally relegated to the proverbial file drawer.

Beyond p < 0.05

The p < 0.05 threshold has become a flashpoint in the ongoing conversation about research practices, reproducibility, and replicability. Heated conversations about the use and misuse of p-values have been ongoing for decades, but over the summer a group of 72 influential researchers proposed a seemingly simple step forward: change the threshold from 0.05 to 0.005. According to the authors, “Reducing the p-value threshold for claims of new discoveries to 0.005 is an actionable step that will immediately improve reproducibility.”

As of this writing, two responses have been published. Both weigh the pros and cons of p < 0.005 and argue that the placement of a decimal point is less of a problem than the uncritical use of a single one-size-fits-all threshold across many different circumstances and fields of research. Both end on calls for greater transparency and stronger justifications for how decisions related to research design and statistical practice are made. If the initial paper proposed changing the answer from p < 0.05 to 0.005, both responses highlight the necessity of changing the question from one that is focused on statistics to one that incorporates research data management (RDM).

Ensuring that data can be used and evaluated in the future is one of the primary goals of RDM. For example, the RDM guide we’re developing does not have a space for assessing p-values. Instead, its focus is assessing and advancing practices related to planning for, saving, and documenting data and other research products. Such practices come with their own nuance, learning curves, and jargon, but are important elements to any effort to ensure that research decisions are transparent and justified.

Resources and Additional Reading

Benjamin, D. J., Berger, J. O., Johannesson, M., Nosek, B. A., Wagenmakers, E. J., Berk, R., … & Cesarini, D. (2017). Redefine statistical significance. Nature Human Behaviour. doi: 10.1038/s41562-017-0189-z

Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., … Zwaan, R. A. (2017). Justify your alpha: A response to “Redefine statistical significance”. PsyArXiv preprint. doi: 10.17605/OSF.IO/9S3Y6

McShane, B. B., Gal, D., Gelman, A., Robert, C., & Tackett, J. L. (2017). Abandon statistical significance. arXiv preprint. arXiv: 1709.07588.

Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa. Journal of the American Statistical Association, 54(285), 30-34. doi: 10.1080/01621459.1959.10501497

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638-641. doi: 10.1037/0033-2909.86.3.638

Dat-in-the-Lab: Announcing UC3 research collaboration

We are excited to announce that the Gordon and Betty Moore Foundation has awarded a research grant to the California Digital Library and Code for Science & Society (CSS) for the Dat-in-the-Lab project to develop practical new techniques for effective data management in the academic research environment.

Dat-in-the-Lab

The project will pilot the use of CSS’s Dat system to streamline data preservation, publication, sharing, and reuse in two UC research laboratories: the Evolution: Ecology, Environment lab at UC Merced, focused on basic ecological and evolutionary research under the direction of Michael Dawson; and the Center for Watershed Sciences at UC Davis, dedicated to the interdisciplinary study of water challenges.  UC researchers are increasingly faced with demands for proactive and sustainable management of their research data with respect to funder mandates, publication requirements, institutional policies, and evolving norms of scholarly best practice.  With the support of the UC Davis and UC Merced Libraries, the project team will conduct a series of site visits to the two UC labs in order to create, deploy, evaluate, and refactor Dat-based data management solutions built for real-world data collection and management contexts, along with outreach and training materials that can be repurposed for wider UC or non-UC use.  

What is Dat?

The Dat system enables effective research data management (RDM) through continuous data versioning, efficient distribution and synchronization, and verified replication.  Dat lets researchers continue to work with the familiar paradigm of file folders and directories yet still have access to rich, robust, and cryptographically-secure peer-to-peer networking functions.   You can think of Dat as doing for data what Git has done for distributed source code control.  Details of how the system works are explained in the Dat whitepaper.

Project partners

Dat-in-the-Lab is the latest expression of CDL’s longstanding interest in supporting RDM at the University of California, and is complementary to other initiatives such as the DMPTool for data management planning, the Dash data publication service, and active collaboration with local campus-based RDM efforts.  CSS is a non-profit organization committed to improving access to research data for the public good, and works at the intersection of technology with science, journalism, and government to promote openness, transparency, and collaboration.  Dat-in-the-Lab activities will be coordinated by Max Ogden, CSS founder and director; Danielle Robinson, CSS scientific and partnerships director; and Stephen Abrams, associate director of the CDL’s UC Curation Center (UC3).

Learn more

Stay tuned for monthly updates on the project. You can bookmark Dat-in-the-Lab on GitHub for access to code, curricula, and other project outputs.  Also follow along as the project evolves on our roadmap, chat with the project team, and keep up to date through the project Twitter feed.  For more information about UC3, contact us at uc3@ucop.edu and follow us on Twitter.

NSF EAGER Grant for Actionable DMPs

We’re delighted to announce that the California Digital Library has been awarded a 2-year NSF EAGER grant to support active, machine-actionable data management plans (DMPs). The vision is to convert DMPs from a compliance exercise based on static text documents into a key component of a networked research data management ecosystem that not only facilitates, but improves the research process for all stakeholders.

Machine-actionable “refers to information that is structured in a consistent way so that machines, or computers, can be programmed against the structure” (DDI definition). Through prototyping and pilot projects we will experiment with making DMPs machine-actionable.

Imagine if the information contained in a DMP could flow across other systems automatically (e.g., to populate faculty profiles, monitor grants, notify repositories of data in the pipeline) and reduce administrative burdens. What if DMPs were part of active research workflows, and served to connect researchers with tailored guidance and resources at appropriate points over the course of a project? The grant will enable us to extend ongoing work with researchers, institutions, data repositories, funders, and international organizations (e.g., Research Data Alliance, Force11) to define a vision of machine-actionable DMPs and explore this enhanced DMP future. Working with a broad coalition of stakeholders, we will implement, test, and refine machine-actionable DMP use cases. The work plan also involves outreach to domain-specific research communities (environmental science, biomedical science) and pilot projects with various partners (full proposal text).
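As a rough illustration of what “programmed against the structure” could look like, here is a minimal sketch of a DMP expressed as structured data and a function that reads it; the field names are invented for this example and do not represent the project’s actual data model or any published DMP standard.

```python
# A hypothetical machine-actionable DMP: structured fields instead of
# prose, so other systems can query it. Field names are invented for
# illustration only.
dmp = {
    "title": "Example project data management plan",
    "contact": {"name": "A. Researcher", "orcid": "0000-0000-0000-0000"},
    "datasets": [
        {
            "description": "Sensor readings from field sites",
            "expected_size_gb": 50,
            "planned_repository": "Dash",
            "expected_release": "2019-06-30",
        }
    ],
}

def repositories_to_notify(plan):
    """Collect the repositories named in a structured DMP so they could be
    alerted about data expected in the pipeline."""
    return {dataset["planned_repository"] for dataset in plan["datasets"]}

print(repositories_to_notify(dmp))  # {'Dash'}
```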

Active DMP community

Building on our existing partnership with the Digital Curation Centre, we look forward to incorporating new collaborators and aligning our work with wider community efforts to create a future world of machine-actionable DMPs. We’re aware that many of you are already experimenting in this arena and are energized to connect the dots, share experiences, and help carry things forward. These next-generation DMPs are a key component in the globally networked research data management ecosystem. We also plan to provide a neutral forum (not tied to any particular tool or project or working group) to ground conversations and community efforts.

Follow the conversation @ActiveDMPs #ActiveDMPs and activedmps.org (forthcoming). You can also join the active, machine-actionable DMP community (live or remote participation) at the RDA plenary in Montreal and Force11 meeting in Berlin to contribute to next steps.

Contact us to get involved!

cross-posted from https://blog.dmptool.org/2017/09/18/nsf-eager-grant-for-making-dmps-actionable/

Co-Author ORCiDs in Dash

Recently, the Dash team enabled ORCiD login. And while this feature is important for primary authors, the Dash team feels strongly that all contributors to data publications should get credit for their work.

All co-authors of a published dataset now have the ability to authenticate and attach their ORCiD in Dash.

How this works:

  1. Data are published by a corresponding author, who can authenticate their own ORCiD but cannot enter ORCiD iDs for co-authors. Bearing this in mind, Dash provides a space to enter co-author email addresses.
  2. If email addresses are entered for co-authors, the co-authors will receive an email notification upon publication of the data. This notification includes a note about ORCiD iDs and a URL that directs them to Dash.
  3. Co-authors who click this URL will be directed to a pop-up box over the dataset landing page that navigates them to ORCiD for login and authentication.
  4. After an ORCiD iD is entered and authenticated, the author is returned to the Dash landing page for their dataset, where their ORCiD iD will appear by their name.

 

Managing the new NIH requirements for clinical trials

As part of an effort to enhance transparency in biomedical research, the National Institutes of Health (NIH) has, over the last few years, announced a series of policy changes related to clinical trials. Though there is still a great deal of uncertainty about which studies do and do not qualify, these changes may have significant consequences for researchers who may not necessarily consider their work to be clinical or part of a trial.

Last September, the NIH announced a series of requirements for studies that meet the agency’s revised and expanded definition of a clinical trial. Soon after, it was revealed that many of these requirements may apply to large swaths of NIH-funded behavioral, social science, and neuroscience research that, historically, has not been considered clinical in nature. This was affirmed several weeks ago when the agency released a list of case studies that included, as an example of a clinical trial, a brain imaging study in which healthy participants completed a memory task.


NIH’s revised and expanded definition of clinical trials includes many approaches to human subjects research that have historically been considered basic research. (Source)

What exactly constitutes a clinical trial now?

Because many investigators doing behavioral, social science, and neuroscience research consider their work to be basic research and not a part of a clinical trial, it is worth taking a step back to consider how NIH now defines the term.

According to the NIH, clinical trials are “studies involving human participants assigned to an intervention in which the study is designed to evaluate the effect(s) of the intervention on the participant and the effect being evaluated is a health-related biomedical or behavioral outcome.” In an NIH context, intervention refers to “a manipulation of the subject or subject’s environment for the purpose of modifying one or more health-related biomedical or behavioral processes and/or endpoints.” Because the agency considers all of the studies it funds that investigate biomedical or behavioral outcomes to be health-related, this definition includes mechanistic or exploratory work that does not have direct clinical implications.

Basically, if you are working on an NIH-funded study that involves biomedical or behavioral variables, you should be paying attention to the new requirements about clinical trials.
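For readers who like a checklist, the definition above boils down to four yes/no questions; the sketch below paraphrases them informally and is illustrative only, not a substitute for NIH guidance or a conversation with your program officer.

```python
# An informal paraphrase of the NIH definition as a four-question
# checklist. Illustrative only; consult NIH guidance for real decisions.
def looks_like_nih_clinical_trial(
    involves_human_participants: bool,
    participants_assigned_to_intervention: bool,
    designed_to_evaluate_effect_of_intervention: bool,
    outcome_is_health_related_biomedical_or_behavioral: bool,
) -> bool:
    """Return True only if all four parts of the definition are met."""
    return all([
        involves_human_participants,
        participants_assigned_to_intervention,
        designed_to_evaluate_effect_of_intervention,
        outcome_is_health_related_biomedical_or_behavioral,
    ])

# A memory task with healthy volunteers and a behavioral outcome still
# answers "yes" to all four questions.
print(looks_like_nih_clinical_trial(True, True, True, True))  # True
```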

What do I need to do now that my study is considered a clinical trial?

If you think your work may be reclassified as a clinical trial, it’s probably worth getting a head start on meeting the new requirements. Here is some practical advice about getting started.


The new NIH requirements for clinical trials affect activity throughout the lifecycle of a research project. (Source)

Applying for Funding

NIH has specified new requirements about how research involving clinical trials can be funded. For example, NIH will soon require that any application involving a clinical trial be submitted in response to a funding opportunity announcement (FOA) or request for proposal (RFP) that explicitly states that it will accept clinical trials. This means that if you are a researcher whose work involves biomedical or behavioral measures, you may have to apply to funding mechanisms that your peers have argued are not necessarily optimal or appropriate. Get in touch with your program officer and watch this space.

Grant applications will also feature a new form that consolidates the human subjects and clinical trial information previously collected across multiple forms into one structured form. For a walkthrough of the new form, check out this video.

Human Subjects Training

Investigators involved in a clinical trial must complete Good Clinical Practice (GCP) training. GCP training addresses elements related to the design, conduct, and reporting of clinical trials and can be completed via a class or course, academic training program, or certification from a recognized clinical research professional organization.

In practice, if you have already completed human subjects training (e.g. via CITI) and believe your research may soon be classified as a clinical trial, you may want to get proactive about completing the few additional GCP modules.

Getting IRB Approval

Good news if you work on a multi-site study: NIH now expects that you will use a single Institutional Review Board (sIRB) for ethical review. This should help streamline the review process, since it will no longer be necessary to submit an application to each site’s individual IRB. This requirement also applies to studies that are not clinical trials.

Registration and Reporting

NIH-funded projects involving clinical trials must be registered on ClinicalTrials.gov. In practice, this means that the principal investigator or grant awardee is responsible for registering the trial no later than 21 days after the enrollment of the first participant and is required to submit results information no later than a year after the study’s completion date. Registration involves supplying a significant amount of information about a study’s planned design and participants, while results reporting involves supplying information about the participants recruited, the data collected, and the statistical tests applied. For more information about ClinicalTrials.gov, check out this paper.
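Because both deadlines are defined relative to study milestones, a quick date calculation can help keep them on the calendar; the milestone dates below are hypothetical and the one-year results window is approximated as 365 days.

```python
# A small sketch for tracking the two ClinicalTrials.gov deadlines noted
# above. Milestone dates are hypothetical; the one-year window is
# approximated as 365 days.
from datetime import date, timedelta

first_participant_enrolled = date(2018, 1, 15)  # hypothetical milestone
study_completion_date = date(2019, 3, 1)        # hypothetical milestone

registration_due = first_participant_enrolled + timedelta(days=21)
results_due = study_completion_date + timedelta(days=365)

print("Register trial by:", registration_due)  # 2018-02-05
print("Submit results by:", results_due)       # 2020-02-29
```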

If you believe your research may soon be reclassified as a clinical trial, now is probably a good time to take a hard look at how you and your lab handle research data management. The best way to relieve the administrative burden of these new requirements is to plan ahead and ensure that your materials are well organized, your data are securely saved, and your decisions are well documented. The more you think through how you’re going to manage your data and analyses now, the less you’ll have to scramble to get everything together when the report is due. If you haven’t already, now would be a good time to get in touch with the data management, scholarly communications, and research IT professionals at your institution.

Dash Enables ORCiD Login

The Dash team has now added a second way to log in and submit. In addition to using Single Sign-On, users now have the ability to log in with ORCiD. This means that not only can you authenticate with ORCiD, but once you have logged in this way, your ORCiD iD will be connected to your Dash account. The next time you submit to Dash, your ORCiD iD will auto-populate in your submission form.

To back up a little: ORCiD is a persistent identifier used to distinguish researchers from one another and to connect researchers with their research. If you are a researcher and do not currently have an ORCiD, sign up!

To connect your ORCiD:

  1. Log in using the button on the far right of the Dash homepage.
  2. Here you will see two options. Clicking the top ORCiD button will send you to the ORCiD authentication page and, after you correctly enter your ORCiD information, return you to Dash.
  3. Although you have now successfully authenticated with ORCiD, to ensure you are connected to your correct submitting instance (a campus, a department, DataONE, etc.) you will be asked to choose your Single Sign-On. This is the only time you will be asked to log in twice.
  4. After successfully logging in with Single Sign-On, your account will be connected to your ORCiD. In the future you will not need to repeat this process; you will either be able to save your login to your browser or choose one of the two options for logging in. If you have already submitted to Dash before, you may log out and go through the same steps above. This process will tie your ORCiD to your existing account and allow for either ORCiD or Single Sign-On login in the future.

What We Talk About When We Talk About Reproducibility

At the very beginning of my career in research I conducted a study which involved asking college students to smile, frown, and then answer a series of questions about their emotional experience. This procedure was based on several classic studies which posited that, while feeling happy and sad makes people smile and frown, smiling and frowning also makes people feel happy and sad. After several frustrating months of trying and failing to get this to work, I ended my experiment with no significant results. At the time, I chalked up my lack of success to inexperience. But then, almost a decade later, a registered replication report of the original work also showed a lack of significant results and I was left to wonder if I had also been caught up in what’s come to be known as psychology’s reproducibility crisis.


Campbell’s Soup Cans (1962) by Andy Warhol. Created by replicating an existing object and then reproducing the process at least 32 times.

While I’ve since left the lab for the library, my work still often intersects with reproducibility. Earlier this year I attended a Research Transparency and Reproducibility Training session offered by the Berkeley Initiative for Transparency in the Social Sciences (BITSS), and my projects involving brain imaging data, software, and research data management all invoke the term in some way.  Unfortunately, though it has always been an important part of my professional activities, it isn’t always clear to me what we’re actually talking about when we talk about reproducibility.

The term “reproducibility” has been applied to efforts to enhance or ensure the quality of the research process for at least 25 years. However, related conversations about how research is conducted, published, and interpreted have been ongoing for more than half a century. Ronald Fisher, who popularized the p-value that is so central to many modern reproducibility efforts, summed up the situation in 1935.

“We may say that a phenomenon is experimentally demonstrable when we know how to conduct an experiment which will rarely fail to give us statistically significant results.”

Putting this seemingly simple statement into action has proven to be quite complex. Some reproducibility-related efforts are aimed at how researchers share their results, while others are aimed at how they define statistical significance. There is now a burgeoning body of scholarship devoted to the topic. Even putting aside terms like HARKing, QRPs, and p-hacking, seemingly mundane objects like file drawers are imbued with particular meaning in the language of reproducibility.

So what actually is reproducibility?

Well… it’s complicated.

The best place to start might be the National Science Foundation, which defines reproducibility as “The ability of a researcher to duplicate the results of a prior study using the same materials and procedures used by the original investigator.” According to the NSF, reproducibility is one of three qualities that ensure research is robust. The other two, replicability and generalizability, are defined as “The ability of a researcher to duplicate the results of a prior study if the same procedures are followed but new data are collected” and “Whether the results of a study apply in other contexts or populations that differ from the original one,” respectively. The difference between these terms is in the degree of separation from the original research, but all three converge on the quality of research. Good research is reproducible, replicable, and generalizable and, at least in the context of the NSF, a researcher invested in ensuring the reproducibility of their work would deposit their research materials and data in a manner and location where they could be accessed and used by others.

Unfortunately, defining reproducibility isn’t always so simple. For example, according to the NSF’s terminology, the various iterations of the Reproducibility Project are actually replicability projects (muddying the waters further, the Reproducibility Project: Psychology was preceded by the Many Labs Replication Project). However, the complexity of defining reproducibility is perhaps best illustrated by comparing the NSF definition to that of the National Institutes of Health.

Like the NSF, the NIH invokes reproducibility in the context of addressing the quality of research. However, unlike the NSF, the NIH does not provide an explicit definition of the term. Instead, NIH grant applicants are asked to address rigor and reproducibility across four areas of focus: scientific premise, scientific rigor (design), biological variables, and authentication. Unlike the definition supplied by the NSF, the NIH’s conception of reproducibility appears to apply to an extremely broad set of circumstances and encompasses both replicability and generalizability. In the context of the NIH, a researcher invested in reproducibility must critically evaluate every aspect of their research program to ensure that any conclusions drawn from it are well supported.

Beyond the NSF and NIH, there have been numerous attempts to clarify what reproducibility actually means. For example, a paper out of the Meta-Research Innovation Center at Stanford (METRICS) distinguishes between “methods reproducibility”, “results reproducibility”, and “inferential reproducibility”. Methods and results reproducibility map onto the NSF definitions of reproducibility and replicability, while inferential reproducibility includes the NSF definition of generalizability and also the notion of different researchers reaching the same conclusion following reanalysis of the original study materials. Other approaches focus on methods by distinguishing between empirical, statistical, and computational reproducibility or specifying that replications can be direct or conceptual.

No really, what actually is reproducibility?

It’s everything.

The deeper we dive into defining “reproducibility”, the muddier the waters become. In some contexts, the term refers to very specific practices related to authenticating the results of a single experiment. In other contexts, it describes a range of interrelated issues related to how research is conducted, published, and interpreted. For this reason, I’ve started to move away from explicitly invoking the term when I talk to researchers. Instead, I’ve tried to frame my various research and outreach projects in terms of how they relate to fostering good research practice.

To me, “reproducibility” is about problems. Some of these problems are technical or methodological and will evolve with the development of new techniques and methods. Some of these problems are more systemic and necessitate taking a critical look at how research is disseminated, evaluated, and incentivized. But fostering good research practice is central to addressing all of these problems.

Especially in my current role, I am not particularly well equipped to speak to whether a researcher should define statistical significance as p < 0.05, p < 0.005, or K > 3. What I am equipped to do is help a researcher manage their research materials so they can be used, shared, and evaluated over time. It’s not that I think the term is not useful, but the problems conjured by reproducibility are so complex and context dependent that I’d rather just talk about solutions.

Resources for understanding reproducibility and improving research practice

Goodman A., Pepe A, Blocker A. W., Borgman C. L., Cranmer K., et al. (2014) Ten simple rules for the care and feeding of scientific data. PLOS Computational Biology 10(4): e1003542.

Ioannidis J. P. A. (2005) Why most published research findings are false. PLOS Medicine 2(8): e124.

Kitzes, J., Turek, D., & Deniz, F. (Eds.). (2017). The Practice of Reproducible Research: Case Studies and Lessons from the Data-Intensive Sciences. Oakland, CA: University of California Press.

Munafò, M. R., Nosek, B. A., Bishop, D. V., Button, K. S., Chambers, C. D., du Sert, N. P., et al. (2017). A manifesto for reproducible science. Nature Human Behaviour, 1, 0021.

Wilson G., Bryan J., Cranston K., Kitzes J., Nederbragt L., et al. (2017) Good enough practices in scientific computing. PLOS Computational Biology 13(6): e1005510.