Chapter 9 Orientation to Open Science

Screencasted Lecture Link

This lesson focuses on …

9.2 The Crises

9.2.1 The Replicability Crisis

Replicability refers to re-performing an experiment with new data and obtaining consistent results. Summarizing the results of replication studies, Stevens (2017) noted the following:

  • In medicine, only 25% of previously published studies replicated (i.e., confirmed statistically significant effects).
  • In behavioral economics, 61% replicated.
  • In psychology, 36% replicated (even though 97% of the original studies reported statistically significant effects).

What accounts for these discrepancies?

  • Differences in design and methods between the original and replication studies;
  • False negatives (Type II error) in the replication;
  • False positives (Type I error) in the original study;
  • Confirmation bias at any step in the original study, such as
    • Developing tests that attempt to confirm rather than disconfirm the hypothesis
    • Perceiving behaviors in a manner that aligns with expectations (rather than with actual outcomes)
    • Reporting the confirmatory results and ignoring the disconfirmatory ones (p-hacking)
  • The file-drawer problem: the inability to publish studies (original or replication) with non-significant results.
    • 24% of all NIH-funded trials aimed at evaluating the efficacy of psychological treatments for major depression were not published; when included in meta-analyses, this led to a 25% reduction in the estimated effect of psychotherapy (Hengartner, 2018).
    • For CBT-specific trials, when unpublished trials were included in a meta-analysis, there was a 37% reduction in the estimated efficacy of the intervention.

9.2.2 The Reproducibility Crisis

“An article […] in a scientific publication is not the scholarship itself, it is merely advertising of the scholarship. The actual scholarship is the complete software development environment and the complete set of instructions which generated the figures” (Buckheit & Donoho, 1995).

Reproducibility is re-performing the same analysis, with the same code and the same data, by a different analyst.

In most published research, this is not possible. In a survey of 441 biomedical articles (2000 to 2014), only one was fully reproducible.

9.3 Open Science: What & Why

Core principles of science (at a time when the integrity of science is being questioned) include (Alter & Gonzalez, 2018):

  • Research transparency: the methods are described in sufficient detail that the study can be replicated; codebooks and analytic/processing scripts (including those used to create tables and figures) are also made available.
  • Reproducibility: the published findings can be verified with the same dataset.
  • Replicability: similar findings are obtained with a different/new dataset.
  • Data sharing: some version of the data (raw/primary, de-identified, perhaps blurred) is made available freely or through an application/vetting process.

9.3.1 Research Transparency

Alter and Gonzalez (2018) suggested that research transparency involves more than sharing data; it also includes the procedures used to create and analyze the data. Why?

  • Data are often refined, corrected, and manipulated before analysis. These steps may be part of the process that makes a statistically significant finding possible; that is, other approaches to scrubbing/scoring data will produce different (unstable) results.
  • Transparency requires that these steps be clearly articulated – in order and with all details of the process (see the sketch below).
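
For instance, here is a minimal sketch (simulated data; hypothetical variable names) in which two defensible outlier rules are scripted side by side, so the consequences of the cleaning decision are visible to anyone re-running the analysis:

# Simulated data: 50 control and 50 treatment scores (hypothetical variables).
set.seed(123)
df <- data.frame(
  group = rep(c("control", "treatment"), each = 50),
  score = c(rnorm(50, mean = 50, sd = 10), rnorm(50, mean = 53, sd = 10))
)

# Rule A: drop cases more than 2 SDs from the sample mean.
z <- as.numeric(scale(df$score))
rule_a <- df[abs(z) < 2, ]

# Rule B: drop cases beyond 1.5 IQRs from the quartiles.
q <- quantile(df$score, c(.25, .75))
iqr <- diff(q)
rule_b <- df[df$score > q[1] - 1.5 * iqr & df$score < q[2] + 1.5 * iqr, ]

# The two rules can retain different cases, so the estimates (and p-values)
# may differ; scripting both makes the consequences of the choice transparent.
t.test(score ~ group, data = rule_a)
t.test(score ~ group, data = rule_b)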

9.3.2 Who owns the data?

It depends – and is worth careful consideration.

  • The institution? While the PI is often considered to be the custodian/steward of the data, it is often contractually understood that the institution owns the data when the research is externally funded (e.g., NIH, NSF).
  • The principal investigator (PI)/lab? Maybe. Even in the absence of ownership, the PI is likely the responsible party.
  • The community? At least as co-owner? From community participatory action or critical-ideological approaches, the community should likely have a say in data management.

9.4 Open Science: Where

9.4.1 Data Repositories

Data repositories are institutions that commit to preserving, maintaining, and distributing data over time. They also make data citable by assigning persistent identifiers (analogous to the DOI assigned to journal articles).

  • Domain specific: provide services for a specific field. They focus on a limited range of data types and invest heavily in curating those data for reuse.
    – Examples: ICPSR (quantitative data for social and behavioral research), Databrary (videos used by developmental psychologists).
  • General: serve a broad range of disciplines and provide fewer data curation services. They are designed to be self-service in that the depositor provides the documentation and metadata (e.g., “data about data,” such as the details in a codebook).
    – Examples: Mendeley, Figshare, Dataverse.
  • Institutional: operated by libraries with a broad mission to document and preserve all the research produced by faculty, staff, and students. The scope and services vary by institution; large universities hire staff who provide data management, documentation, and preservation services.

9.4.2 Journals: Supplemental files

Many journals provide the option of uploading supplemental files. These could be a variety of things: testing/interview protocols, supplemental tables/figures, and data/meta-data.

9.5 Open Science: Who are the Stakeholders?

9.5.1 Groups

Research data have the potential to harm individuals and their communities (Ross et al., 2018). We generally apply the ethical principle of beneficence to the individual; but we are wise to think of it as it applies to communities as well. Consider the case where a research team collected blood samples from members of the Havasupai tribe for a diabetes study. Later it was discovered that samples had been shared with other researchers to study issues including inbreeding, human migration, and schizophrenia. Separate from whether individual participants were identified, this resulted in social and psychological harm to the tribe as a whole.

9.5.2 Individual Research participants

A substantial objection to sharing research data is the concern that confidential information about research participants will not be protected.

Evaluating disclosure risk can be considered along two dimensions: the probability of re-identification (low vs. high) and the potential harm if disclosure occurs (low vs. high). Crossing these dimensions yields four quadrants.

Consider the following examples along each dimension.

Low potential for harm:

  • Opinions in a national poll, perception tasks in an experimental setting

High potential for harm:

  • Mental health, drug use, criminal activity, sexual behavior

High probability of re-identification:

  • Direct identifiers (name, phone number, SSN)
  • Geospatial locations
  • Longitudinal designs with repeated interviews and contextual info (grade, school)

Some data cannot be completely anonymized without destroying its research value.

9.5.3 The Common Rule

The Common Rule (Federal Policy for the Protection of Human Subjects, 2009) is the chartering document guiding IRB activity.

As the chartering document for institutional review boards (IRBs), it defines minimal risk as it relates to research activities: to the degree that the probability and magnitude of harm or discomfort in the research activity do not exceed those ordinarily encountered in daily life or during a routine physical or psychological evaluation, data can be shared without any additional precautions.

Historically, informed consent language said something like, “Your data will only be available to members of our research team…at the end it will be destroyed.” The revised Common Rule (Office for Human Research Protections [OHRP], 2017) includes guidelines for “broad consent,” which covers storage and future secondary research with identifiable private information. The Common Rule suggests that participants should be given a general description of the types of research that may be conducted with the information collected from them, informed about the types of identifiable private information that will be kept, and told which types of researchers may have access to that information.

This is all emerging – dynamic consent has been suggested as a way to allow patients to decide who can use their health data for research. Patients who have provided samples (e.g., blood, DNA) get texts/surveys through their phones to grant/deny consent for new uses.

9.5.3.1 Data creators

Among a variety of scientists, an NSF-funded study reported that psychologists were the most negative about data sharing, with only 30% indicating that data should be shared (Martone et al., 2018). What are the concerns (Martone et al., 2018)?

  • Fear for reputation: making errors and being called out.
  • Fear of being scooped: someone else will beat me to my next question.
  • Fear of liability: release of primary data for certain participants might lead to prosecution.
  • It’s a lot of work. I have to do this, too!?!?
  • My data are far too complex/sophisticated to be understood by anyone else on the planet.
  • My data are not useful beyond the purposes I have planned for them.
  • How will the field develop/evolve if we just reanalyze old data?

Yet, there are counterpoints (Martone et al., 2018):

“The issue of who is harmed by sharing data needs to be balanced against who is harmed by not sharing data?” (p. 117)
  • Reputation/Liability: What if errors (if any) remain undiscovered? Or the findings are spurious (and only because of biased approaches to data preparation)?
  • Being scooped: Curiously, in microbiology, the original data creators tended to publish two years after the data were made publicly available; other authors tended to publish five to six years after the data were made available.
  • Unrewarded effort: “If data are essential products of scholarship, those who create data must be appropriately acknowledged and rewarded” (Alter & Gonzalez, 2018, p. 153).
  • Data aren’t useful beyond the single project: Martone et al. (2018) recounted the results of the VISION-SCI project, in which the scientific community made individual datasets available (even those with non-significant effects – the “file-drawer” and “background” data). Although individual studies (usually with restricted samples) produced unstable and inconsistent findings, the combined data provided a “fuller sampling of …the ‘syndromic space’” (p. 114) and resulted in robust predictive models.
  • The field will stagnate with no new data being collected: Even the most amazing longitudinal datasets, such as the High School and Beyond set used in so many examples (High School & Beyond (HS&B) - Overview, n.d.), eventually become obsolete and are replaced by newer samples and improved measures/techniques.

Other practical considerations:

  • Authors may request (and journals may grant) an embargo period ranging from a few months to two years (or at least until the article is published) before the data is released (Martone et al., 2018).
  • When published data are part of larger datasets, only the variables used in the study are required to be made available.
  • “Citations” are the basis of scholarly recognition. PIs are wise to use a repository that provides a formal citation (five elements: author, title, date, publisher/distributor, and location, with a persistent identifier) so that the data will be included in the reference list.
    • When we make data (and/or scripts/code) available in a repository, we should list it on our resume/CV!

Similarly, scripts/code for all aspects of data management, analysis, and manipulation can be formally prepared, archived, and cited.
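
As an illustration only, a formal data citation containing those five elements typically follows a template like the one below; every field is a placeholder rather than a real citation:

Author, A. A. (Year). Title of the dataset [Data set]. Repository/Distributor Name. https://doi.org/xx.xxxx/placeholder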

9.6 Open Science: How

9.6.1 Regarding research transparency

Establish (now, while you are just learning) the regular practice of a standard research workflow. It should

  • Be ordered, such that it maps onto the results section of the published manuscript.
  • Contain all of the script/code/syntax (in our case, the .rmd file) that does everything: defines variables, cleans data, includes/excludes data, manages missingness, scores scales/subscales, tests assumptions, runs the analyses, creates figures/tables.
  • Include narration of what you are doing and your rationale for doing so.
  • Include a “bug log”: a document that notes errors and corrections.
  • Not be a “disorganized set of statistical commands” (Alter & Gonzalez, 2018, p. 148).

Alter and Gonzalez (2018) went further to suggest that the script/code should be keyed to the final publication (e.g., page or paragraph number).
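
A minimal sketch of what such a workflow might look like inside the .rmd file appears below; the file path, variable names, and scale items are hypothetical placeholders, not a prescribed structure:

# Hypothetical workflow chunk; the numbered steps are ordered so they map onto
# the results section of the manuscript.
library(psych)  # scale scoring and descriptive statistics

# 1. Import the raw (de-identified) data.
raw <- read.csv("data/raw_survey.csv")

# 2. Apply inclusion/exclusion criteria (narrate the rationale alongside).
included <- raw[raw$consented == 1 & raw$attention_check == "pass", ]

# 3. Manage missingness (here, listwise deletion on the scored items).
items <- paste0("item_", 1:10)
analytic <- included[complete.cases(included[, items]), ]

# 4. Score the scale.
analytic$wellbeing <- rowMeans(analytic[, items])

# 5. Check assumptions, run the analysis, and build tables/figures.
psych::describe(analytic$wellbeing)
model <- lm(wellbeing ~ condition, data = analytic)
summary(model)

Narrating each numbered step (and logging any bugs and their corrections) keeps the script readable to someone reproducing the results.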

9.6.2 Regarding data ownership

  • Get clarification on responsibilities and restrictions regarding the data that was used (note I did not write your data) in your projects (institutional, legal [FERPA, HIPAA, The Common Rule]).
  • When starting a new project, include as much detail as possible in the IRB application and informed consent about all the possible uses for data.

Below is an excerpt from a recent IRB application that projects how data might be used (credit to Tom Carpenter, PhD, for examples of language to use in the informed consent).

Confidentiality
The data we are using is completely de-identified and is already available to instructors as well as their chairs, deans, and members of tenure/promotion committees. All data that is retained for the purpose of the study may also be used in future research, presentations, and/or for teaching purposes by the Principal Investigator listed above.
Consistent with both journal/guild expectations and the ethical principles of open science, a fully anonymous and non-identifiable version of the dataset may be posted online (e.g., to the APA-endorsed “Open Science Framework” [www.osf.io] or to the journal, submitted with the research article). No data posted will contain any information that could identify participants in any way, either directly or indirectly. All data will be thoroughly inspected by the researcher prior to posting to confirm that no participant-provided responses could inadvertently identify or expose a participant.
Posting data (commonly referred to as “data sharing”) is necessary for reproducibility and replicability in science, allows peer reviewers and meta-analysts to check statistical assumptions, protects the field against data fraud, and is increasingly seen as an ethical obligation within psychological science.
Bikos RVT IRB application, November 2020

9.6.3 Regarding Data Sharing

Alter and Gonzalez (2018) recommended using domain-specific repositories whenever possible. Because they are more connected to the research community(ies) that will use them, they are the most likely to provide the discipline-specific curation services that will maintain the data’s value for future reuse.

Alter and Gonzalez (2018) discouraged attaching research data as supplementary materials associated with the publication, because publishers do not manage research data with the same best practices that a data repository would. For example, they may convert research data to text or PDF files, and the materials may not be as discoverable as they would be in a repository.

Best practices in data sharing are FAIR: Findable, Accessible, Interoperable, Reusable (Martone et al., 2018). Check out Table 2 in the Martone et al. article for the details and implications of these practices.

9.6.4 More Resources

Alter and Gonzalez (2018) provided a list of resources about data sharing and open science. The resources that are websites are likely to be updated as the field evolves. The Open Science Framework (OSF) is one that is frequently referenced in psychology. Its help pages have a section devoted to FAQs and best practices – especially the practical steps/logistics such as version control, file naming, and making a data dictionary: https://help.osf.io/hc/en-us
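
For example, a simple data dictionary can be drafted directly in R and deposited alongside the data; the sketch below uses hypothetical variable names and labels:

# A bare-bones data dictionary ("data about data") saved as a .csv that can be
# deposited alongside the data file.
dictionary <- data.frame(
  variable    = c("id", "condition", "item_1", "wellbeing"),
  description = c("Participant identifier (randomly assigned)",
                  "Experimental condition (0 = control, 1 = treatment)",
                  "First item of the well-being scale (1 = never, 5 = always)",
                  "Mean of item_1 through item_10"),
  type        = c("integer", "integer", "integer", "numeric")
)
write.csv(dictionary, "data_dictionary.csv", row.names = FALSE)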

9.7 And also, Preregistration

In pre-registration, researchers describe their hypotheses, methods, and analyses before a piece of research is conducted – in a way that can be externally verified (Veer & Giner-Sorolla, 2016).

This is emerging, but there are two general types:

  • Reviewed pre-registration or registered report: A cooperative model between reviewers and researchers in which feedback on the research design can be incorporated before the study is conducted. This can be accompanied by a guarantee of publication regardless of the outcome.
  • Unreviewed pre-registration: No review prior to data collection. The research plan is written and time-stamped before the study, and the authors then conduct the research as usual.

In both types, authors can still follow up the pre-registered analyses with exploratory research.

Advocates of pre-registration suggest (Veer & Giner-Sorolla, 2016):

  • It prioritizes theory and method over results
  • It distinguishes confirmatory from exploratory research
  • It reduces publication bias
  • It reduces reporting bias within a single study
  • It offers researchers additional input and review before they start
  • It can lead to faster dissemination (particularly with registered reports)

Concerns about pre-registration:

  • More work? For researchers and reviewers.
  • Will it dampen exploration? – van ’t Veer and Giner-Sorolla (2016) have suggested that exploratory work should still be reported.
  • What’s the value of a null literature (i.e., a bunch of studies with non-significance)?
    – It can save the scientific community time and effort.
    – It could help identify methodological problems, such as studies being underpowered.
  • Idea theft – will others steal designs? – Pre-registered studies can stay private (in OSF up to 4 years) until the project is finished.

van ’t Veer and Giner-Sorolla (2016) provided guidance in the form of templates/instructions for registering a study for social psychology. Let’s go take a look at the OSF to see their pre-registration model.

9.8 I’m a Graduate Student – What does it mean to me?

Most journals now require a persistent identifier (ORCID) for researchers. These are connected to all published articles. You can obtain yours at: https://orcid.org/register

You can talk with your RVT about trying out pre-registration with lab projects. APS (Association for Psychological Science) is a signatory to the Transparency and Openness Promotion (TOP) Guidelines. Let’s take a quick look at what that means for publishing: https://www.psychologicalscience.org/publications/open-science

In participating journals, articles can be awarded badges (https://www.psychologicalscience.org/publications/badges) for:

  • Pre-registered
  • Open Data
  • Open Materials

As you plan your research projects, think ahead to the journals to which you might submit and see if they are part of such a process. Although it may delay your start time, if they offer the registered reports process, you might be guaranteed publication if your pre-registration is reviewed and approved by the journal.

R version 4.0.0 (2020-04-24)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18363)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252 
[2] LC_CTYPE=English_United States.1252   
[3] LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C                          
[5] LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] psych_2.1.6

loaded via a namespace (and not attached):
 [1] rstudioapi_0.13 knitr_1.34      magrittr_2.0.1  mnormt_2.0.2   
 [5] lattice_0.20-41 R6_2.5.1        rlang_0.4.11    fastmap_1.1.0  
 [9] stringr_1.4.0   tools_4.0.0     parallel_4.0.0  grid_4.0.0     
[13] tmvnsim_1.0-2   nlme_3.1-147    xfun_0.25       jquerylib_0.1.4
[17] htmltools_0.5.2 yaml_2.2.1      digest_0.6.27   bookdown_0.24  
[21] sass_0.4.0      evaluate_0.14   rmarkdown_2.10  stringi_1.7.4  
[25] compiler_4.0.0  bslib_0.3.0     jsonlite_1.7.2 

References

Alter, G., & Gonzalez, R. (2018). Responsible practices for data sharing. American Psychologist, 73(2), 146–156. https://doi.org/10.1037/amp0000258

Buckheit, J. B., & Donoho, D. L. (1995). Wavelab and Reproducible Research (Technical Report No. 474). Stanford University. https://statistics.stanford.edu/sites/g/files/sbiybj6031/f/EFS%20NSF%20474.pdf

Federal Policy for the Protection of Human Subjects (‘Common Rule’). (2009). HHS.gov. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/common-rule/index.html

High School & Beyond (HS&B) - Overview. (n.d.). Retrieved August 7, 2021, from https://nces.ed.gov/surveys/hsb/

Martone, M. E., Garcia-Castro, A., & VandenBos, G. R. (2018). Data sharing in psychology. American Psychologist, 73(2), 111–125. https://doi.org/10.1037/amp0000242

Office for Human Research Protections (OHRP). (2017). Revised Common Rule. HHS.gov. https://www.hhs.gov/ohrp/regulations-and-policy/regulations/finalized-revisions-common-rule/index.html

Ross, M. W., Iguchi, M. Y., & Panicker, S. (2018). Ethical aspects of data sharing and research participant protections. American Psychologist, 73(2), 138–145. https://doi.org/10.1037/amp0000240

Stevens, J. R. (2017). Replicability and reproducibility in comparative psychology. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.00862

Veer, A. E. van ’t, & Giner-Sorolla, R. (2016). Pre-registration in social psychology—A discussion and suggested template. Journal of Experimental Social Psychology, 67, 2–12. https://doi.org/10.1016/j.jesp.2016.03.004