Journal:Restricted data management: The current practice and the future

{{Infobox journal article
|title        = Restricted data management: The current practice and the future
|journal      = ''Journal of Privacy and Confidentiality''
|authors      = Jang, Joy B.; Pienta, Amy; Levenstein, Margaret; Saul, Joe
|affiliations = Inter-university Consortium for Political and Social Research (ICPSR) at University of Michigan
|contact      = Email: oyjang at umich dot edu
|year         = 2023
|volume       = 13(2)
|pages        = 1–9
|doi          = 10.29012/jpc.844
|issn         = 2575-8527
|license      = Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International
|website      = [https://journalprivacyconfidentiality.org/index.php/jpc/article/view/844 https://journalprivacyconfidentiality.org/index.php/jpc/article/view/844]
|download     = [https://journalprivacyconfidentiality.org/index.php/jpc/article/view/844/753 https://journalprivacyconfidentiality.org/index.php/jpc/article/view/844/753] (PDF)
}}
==Abstract==
Many [[Information privacy|restricted data]] managing organizations across the world have adapted the [[Five Safes]] framework (i.e., safe data, projects, people, setting, and output) for their [[Information management|management]] of restricted and confidential data. While the Five Safes have been well integrated throughout the data life cycle, organizations observe several unintended challenges in making that data [[Journal:The FAIR Guiding Principles for scientific data management and stewardship|FAIR]] (findable, accessible, interoperable, and reusable). In the current study, we review current practice in restricted data management and discuss challenges and future directions, focusing especially on data use agreements, disclosure risk review, and training. In the future, restricted data managing organizations may need to proactively consider reducing inequalities in access to scientific development, preventing unethical use of data, and managing various types of data.


'''Keywords''': confidentiality, data governance, FAIR, training


==Introduction==
Since the introduction of the [[Five Safes]] in the mid-2010s<ref>{{Cite journal |last=Desai, T.; Ritchie, F.; Welpton, R. |year=2016 |title=Five Safes: Designing data access for research |url=https://www2.uwe.ac.uk/faculties/bbs/documents/1601.pdf |format=PDF |journal=Economics Working Paper Series |issue=1601 |pages=1–27}}</ref><ref>{{Cite journal |last=Ritchie |first=Felix |date=2017-09-20 |title=The 'Five Safes': A Framework For Planning, Designing And Evaluating Data Access Solutions |url=https://zenodo.org/record/897821 |journal=Zenodo |doi=10.5281/ZENODO.897821}}</ref>, many organizations [[Information management|managing]] [[Information privacy|restricted data]] have adopted the framework for the management of restricted and confidential data. The Five Safes framework helps organizations set guidelines for safe data created by data providers, safe projects for public good, safe people who are authenticated data users, safe settings in which data are being used, and safe outputs from [[Data analysis|analyzing data]]. The Five Safes have been well integrated throughout the data life cycle and have led to good stewardship practices that make scientific data [[Journal:The FAIR Guiding Principles for scientific data management and stewardship|FAIR]] (findable, accessible, interoperable, and reusable). The framework also helps multiple stakeholders balance data utilization with protection of subject privacy and data confidentiality. Despite successful implementation of the Five Safes, organizations encounter unintended challenges. In this paper, we review the current practice of restricted data management and discuss challenges and future directions, focusing on data use agreements, disclosure risk review, and training for data users.


While organizations implement multiple modes of data access (e.g., virtual data enclaves [VDEs], physical data enclaves [PDEs], secure [[Encryption|encrypted]] file downloads), our discussion may apply mostly to VDE and PDE. Further, our discourse is centered around quantitative data, although we do not restrict the implications to only that type of data. In other words, even though our discussion on current practices may be largely reliant on our experience with quantitative data accessible via VDE or PDE, the implications of our study may extend to newly emerged data types such as [[research]] notes, video, and electroencephalography.


==Data use agreements==
Data use agreements (DUAs) are risk mitigation tools that clarify expectations among multiple stakeholders.<ref name=":0">{{Cite book |last=O'Hara, A. |year=2020 |editor-last=Cole, S.; Dhaliwal, I.; Sautmann, A. et al. |title=Handbook on Using Administrative Data for Research and Evidence-based Policy |url=https://admindatahandbook.mit.edu/book/v1.0-rc4/dua.html |chapter=Chapter 3. Model Data Use Agreements: A Practical Guide |publisher=Massachusetts Institute of Technology}}</ref> DUAs must be entered into before any use of or access to data by users, and may require periodic updates. DUAs may contain all Five Safes components: safe data (a description of how data have been and will be treated to protect against disclosure risks); safe people (data users’ credentials); safe projects (research proposals demonstrating the intended data use); safe setting (plans for safe data access and handling); and safe outputs (procedures or rules on output publication and release). For some organizations, DUAs are stand-alone documents containing all five components. Other organizations require quite short DUAs accompanied by separate materials, such as a detailed research proposal, approval or exemption from an Institutional Review Board (IRB), and CVs from participants in the research project. The involvement of multiple stakeholders in DUAs means that DUAs allow for negotiation and the pursuit of consensus among parties.
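
To make these components concrete, the following minimal sketch models the kind of information a DUA record might capture for each of the Five Safes. The field names and renewal logic are illustrative assumptions, not a standard DUA schema.

<syntaxhighlight lang="python">
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

# Illustrative sketch only: field names are assumptions, not a standard DUA schema.
@dataclass
class DataUseAgreement:
    safe_data: str               # how data have been treated for disclosure risks
    safe_people: List[str]       # authenticated data users' credentials
    safe_project: str            # research proposal demonstrating intended use
    safe_setting: str            # plan for safe data access and handling
    safe_output: str             # procedures for output publication and release
    effective: date = field(default_factory=date.today)
    expires: Optional[date] = None   # many DUAs require periodic renewal

    def needs_renewal(self, today: Optional[date] = None) -> bool:
        """True if the agreement has lapsed and must be updated before further access."""
        today = today or date.today()
        return self.expires is not None and today > self.expires

dua = DataUseAgreement(
    safe_data="de-identified extract, disclosure-treated by the provider",
    safe_people=["pi@example.edu"],
    safe_project="approved proposal demonstrating intended use",
    safe_setting="locked single-occupancy office, VDE access only",
    safe_output="all outputs vetted before release",
    expires=date(2024, 1, 1),
)
print(dua.needs_renewal(date(2024, 6, 1)))  # True: access requires an updated DUA
</syntaxhighlight>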


Many organizations are bound by federal, state, and local laws, regulations, or policies reflecting their capability to access direct identifiers in the datasets. DUAs specify terms and conditions for data access and use, and clarify liability issues in advance. This upfront emphasis on DUAs helps mitigate confusion regarding liability in case of data breaches or suspected security incidents. DUAs require data users’ authenticated credentials; some organizations additionally ask for involvement of the researchers’ institutions in DUAs as leverage to enforce consequences for the institution.<ref name=":1">{{Cite web |last=Levenstein, H. |date=March 2020 |title=Addressing Challenges of Restricted Data Access |work=Deep Blue Documents |url=https://deepblue.lib.umich.edu/handle/2027.42/156407 |publisher=University of Michigan Library}}</ref> Beyond legal leverage, the involvement of institutional representatives in DUAs also helps promote safe use of data by researchers. Research shows that many data users care more about personal penalties (loss of access and funding, the opinions of colleagues) than legal ones when an incident happens.<ref name=":2">{{Cite web |last=Green, E.; Ritchie, F.; Newbam, J. et al. |date=2017 |title=Lessons learned in training ‘safe users’ of confidential data |work=Work Session on Statistical Data Confidentiality |url=https://pdfs.semanticscholar.org/548f/4ad0434c0f67183d557fed9661bd8baa2c07.pdf |format=PDF |publisher=UWE Bristol}}</ref> Having multiple layers of liability may safeguard against data breaches or protocol violations by users. However, involvement of the institutions in DUAs may impose a hurdle for research teams with collaborators from multiple institutions or from different countries. DUAs for research projects of this nature may have to accommodate heterogeneous requirements with regard to data privacy, confidentiality, and liability, which may cause significant delays in the process of data use.


Below, we discuss four distinctive challenges that organizations encounter with regard to restricted data management: limited opportunities for data access by certain groups of individuals; DUAs for research projects involving multiple institutions; limitations of binding laws against DUA non-compliance; and costs to access data.


===Limited opportunities for data access by certain groups===
As described, institutional involvement may help enforce consequences for both the institution and individual researchers. Data users affiliated with typical research institutions (e.g., universities, government agencies, research institutes) have an institutional representative involved in the DUA process and work with organizations without substantial challenges. Most of the process is seamless unless stakeholders raise concerns. (Even with concerns, the most serious challenge may be a delay in the process.) However, a requirement of institutional involvement can impose an insurmountable hurdle for those without an institutional affiliation, such as freelance journalists, or for students without academic advisors or from institutions with no experience in such agreements. Researchers and institutions negotiate details in DUAs and pursue consensus with data managing organizations, which can be a tremendous burden for small institutions. While institutional involvement is meant to help keep safe people safer, it may unintentionally exclude researchers without that leverage. An exemption may need to be considered for those who have already been authorized and have been good users at other organizations, and a template agreement that mitigates these burdens should be available.<ref name=":0" /><ref name=":1" /> Effective user training for ethical and scientific use of data may help alleviate concerns regarding data misuse by those with limited experience.


DUAs (or other supplemental materials) require safe settings for accessing restricted data. A safe setting in a DUA designates a space in which no unauthorized views are possible, for instance, an office space with a door that lacks a window. Some organizations do not accept shared space as a secure setting. Again, this requirement may impose a barrier for those with limited resources, such as students who would access restricted data in a shared office or cubicle. Organizations may need to consider embracing those who have limited resources by accommodating their needs (e.g., using a privacy screen for those who access data in a shared office).


===DUAs for research projects involving multiple institutions===
When researchers from multiple institutions collaborate on a single research project, each institution enters into the DUA. DUAs clarify expectations and responsibilities for each institution according to the research plan. The process is often complicated when institutions are located in different countries (e.g., the legitimacy of credential authentication or of IRB approval in different languages). O’Hara<ref name=":0" /> suggests considering other forms of documentation in multi-site research projects, such as a memorandum of understanding (MOU) and identification of conflicts of interest. In some cases, requiring identical DUAs with all participating institutions, although time-consuming to complete, may reduce confusion compared to differing DUAs across institutions. Ultimately, to streamline the process of multi-site research projects, it may be helpful for organizations to consider incentives for good data users across projects or even across organizations. For example, the Researcher Passport of the Inter-university Consortium for Political and Social Research (ICPSR) expedites access to restricted data by giving researchers credit and visibility for “safe” actions in their past experiences with restricted data.<ref>{{Cite web |last=Levenstein, M.C.; Tyler, A.R.B.; Davidson Bleckman, J. |date=16 May 2018 |title=The Researcher Passport: Improving Data Access and Confidentiality Protection |work=Deep Blue Documents |url=https://deepblue.lib.umich.edu/handle/2027.42/143808 |publisher=University of Michigan Library}}</ref> This type of verification of users’ cumulative “safe” actions would greatly streamline DUA procedures across multiple institutions.


===Limitations on binding laws against DUA non-compliance===
Failure to comply with a DUA may result in immediate termination of data access and further actions depending on the severity of the failure. Organizations establish procedures to respond to [[Information security|data security]] and breach incidents; some funders require reporting within one hour, along with procedures to minimize the damage of a data breach or confidentiality disclosure. In the United States, violation of the [[Health Insurance Portability and Accountability Act]] (HIPAA) privacy standards can result in a civil monetary penalty imposed on the individual by the [[United States Department of Health and Human Services|Department of Health and Human Services]]. Organizations bound by specific laws such as HIPAA must operate within that high-level legal boundary. Nonetheless, most data security incidents are unintentional or inadvertent violations of protocol. They may pose minimal risk to subjects in datasets, and are thus better handled with effective user training. Organizations may do better to consider DUAs as a tool for all stakeholders to share responsibility for data confidentiality (e.g., a community model)<ref name=":2" /> rather than as a tool for policing or punishing one party (e.g., a policing model).<ref name=":2" />


===Costs to access restricted data===
Even marginal costs of accessing data can be burdensome to researchers, but such costs are also important to consider for organizations. Data access costs include staff efforts to set up the access and to create datasets for users. The costs could unintentionally exclude some groups of researchers, such as junior scholars without research funds. Organizations and funding agencies could proactively intervene by waiving the costs of data access for researchers with limited resources. Doing so would help achieve Open Science<ref>{{Cite journal |last=OECD |date=2015-10-15 |title=Making Open Science a Reality |url=https://www.oecd-ilibrary.org/science-and-technology/making-open-science-a-reality_5jrs2f963zs1-en |journal=OECD Science, Technology and Industry Policy Papers |language=en |issue=25 |doi=10.1787/5jrs2f963zs1-en}}</ref>—aiming to share data with minimal barriers for all researchers from different backgrounds.


==Disclosure review practices==
Safe output refers to statistical products created from restricted and/or sensitive data that have been vetted and approved as non-disclosive. Organizations help researchers utilize restricted data as effectively as possible without compromising data privacy and confidentiality. Safe output by safe people must go through a vetting process for disclosure risks. Disclosure review rules and procedures are set up in earlier steps of the data access process, such as in the DUAs. Some data providers prefer to establish standards for disclosure avoidance rules and procedures with organizations during the data depositing process. Data providers and organizations also often discuss dissemination modes and tiers of access when establishing disclosure avoidance rules and procedures.


Disclosure review rules and procedures vary by type of data and access mode. Alves and Ritchie<ref name=":3">{{Cite journal |last=Alves |first=Kyle |last2=Ritchie |first2=Felix |date=2020-11-25 |title=Runners, repeaters, strangers and aliens: Operationalising efficient output disclosure control |url=https://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/SJI-200661 |journal=Statistical Journal of the IAOS |volume=36 |issue=4 |pages=1281–1293 |doi=10.3233/SJI-200661}}</ref> articulate two approaches to managing output vetting: “rules-based” and “principle-based” approaches. The rules-based approach establishes a certain set of strict rules regarding disclosive information and scrutinizes research outputs created from restricted data against those rules. The principle-based approach, on the other hand, allows flexible negotiation between researchers and output vetting staff. The goal of organizations is to implement efficient and effective procedures that protect data confidentiality and minimize disclosure risks while maximizing data utilization.<ref name=":4">{{Cite web |last=Griffiths, E.; Greci, C.; Kotrotsios, Y. et al. |date=July 2019 |title=Handbook on Statistical Disclosure Control for Outputs |url=https://ukdataservice.ac.uk/app/uploads/thf_datareport_aw_web.pdf |format=PDF |publisher=Safe Data Access Professionals Working Group}}</ref><ref>{{Cite web |last=Levenstein, M. |title=Managing research and data for reproducibility and transparency |work=Office of Planning, Research and Evaluation 2019 Open Science Methods Meeting |url=https://opremethodsmeeting.org/wp-content/uploads/2019/10/Reproducibility_Levenstein_presentation.pdf |format=PDF |publisher=Institute for Social Research, University of Michigan}}</ref> Most organizations apply the rules-based output vetting approach, with a certain level of flexibility, to various data types.

We review below the current practice and future directions in four domains: common output vetting requirements at organizations; reviewers of statistical outputs; automated disclosure review procedures; and self-vetting that relies on “safe setting” and “safe people.”
===Output vetting requirements===
Organizations set up a standardized procedure for output vetting, including but not limited to output format, contents, and the timeline to process each request. To illustrate, Table 1 summarizes output vetting requirements and considerations currently in place at many data archives at ICPSR. Most organizations have their own requirements and considerations in the restricted data use process. While standardizing the process and requirements could help streamline procedures, it seems implausible given the differing requirements of funders and data providers.
{|
| style="vertical-align:top;" |
{| class="wikitable" border="1" cellpadding="5" cellspacing="0" width="80%"
|-
  | colspan="3" style="background-color:white; padding-left:10px; padding-right:10px;" |'''Table 1.''' Output vetting requirements and considerations at ICPSR.
|-
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |Item
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |Requirements
  ! style="background-color:#e2e2e2; padding-left:10px; padding-right:10px;" |Examples
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |'''Format'''
  | style="background-color:white; padding-left:10px; padding-right:10px;" |* Presentation-ready format required/preferred (.pdf, .docx, .xlsx).
  | style="background-color:white; padding-left:10px; padding-right:10px;" |* Raw outputs from statistical packages (e.g., SAS log, Statalog-files, M-Plus log) not accepted.
|- 
  | style="background-color:white; padding-left:10px; padding-right:10px;" |'''Contents'''
  | style="background-color:white; padding-left:10px; padding-right:10px;" |* A description of the sample, sub-sample, analytic approach, and definitions of variables used in the analyses.<br />* Summary statistics for variables used in the analysis.<br />* Checklist (help self-vet before sending it to the vetting staff).<br />* Supporting documents (programming files).
  | style="background-color:white; padding-left:10px; padding-right:10px;" |* Minimum cell size threshold is clearly described in the output vetting instruction.<br />* Minimum cell size threshold differs by type of dataset and linkage capability.
|-   
  | style="background-color:white; padding-left:10px; padding-right:10px;" |'''Timeline'''
  | style="background-color:white; padding-left:10px; padding-right:10px;" |* Depends on the output, but most vetting is completed within 10 business days.
  | style="background-color:white; padding-left:10px; padding-right:10px;" |* Missing requirements, insufficient supporting documents or materials would significantly extend the timeline.
|-   
|}
|}
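
As a rough illustration of how the format requirement in Table 1 could be enforced before human review, the sketch below screens a list of submitted files against presentation-ready extensions. Only the accepted extensions come from Table 1; the function and file names are illustrative assumptions.

<syntaxhighlight lang="python">
from pathlib import Path

# The accepted extensions follow Table 1; everything else is an illustrative assumption.
ACCEPTED = {".pdf", ".docx", ".xlsx"}   # presentation-ready formats per Table 1

def screen_submission(filenames):
    """Partition submitted files into those that may proceed to vetting and those
    returned to the user (e.g., raw .log files from statistical packages)."""
    result = {"proceed": [], "return_to_user": []}
    for name in filenames:
        key = "proceed" if Path(name).suffix.lower() in ACCEPTED else "return_to_user"
        result[key].append(name)
    return result

print(screen_submission(["tables.docx", "model.log", "figures.pdf"]))
# {'proceed': ['tables.docx', 'figures.pdf'], 'return_to_user': ['model.log']}
</syntaxhighlight>
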
===Reviewers of the statistical outputs===
It is preferred, though not required, that organizations have output vetting reviewers with a background in [[statistics]] or the relevant subject areas. More important are: 1) the independence of the reviewers; 2) the four eyes principle; and 3) a manageable workload without excessive pressure.<ref name=":4" />

Most organizations have designated individuals responsible for output vetting. For example, ICPSR has at least five experts at all times, with two or three back-ups, who vet outputs created in the VDE or PDE. These experts are mostly ICPSR staff members who are not affiliated with any of the users' research projects (i.e., ensuring independence). To bolster confidence in decisions about whether to release output, organizations maintain a group of reviewers (supporting the four eyes principle and a manageable workload). Some organizations operate a committee that discusses the risks to data confidentiality and privacy from research outputs. The committee usually consists of a group of experts who oversee data confidentiality and evaluate disclosure risks from the use of restricted data. For example, the ICPSR Disclosure Review Board (DRB) fills a leadership and scholarly role in the disclosure avoidance community and serves as a decision-making body within ICPSR with regard to disclosure risks and exceptions to existing policies. The ICPSR DRB consists of a Chair (the ICPSR Privacy and Security Officer), a Vice-Chair, and 10 experts from within and outside the organization. Individual ICPSR reviewers can query the DRB about disclosure risks on outputs and defer the approval decision to the DRB. Further, the DRB reviews the ICPSR disclosure rules in light of new regulations and changes to the wider data environment, assesses new disclosure reduction methods and technologies for possible adoption, and develops rules around them. The ICPSR DRB convenes every month.
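
The review conditions just described (reviewer independence and the four eyes principle) can be expressed as a simple release rule. The following minimal sketch is illustrative only, assuming a set-based encoding of reviewers and project members; it does not represent ICPSR's actual workflow.

<syntaxhighlight lang="python">
# Minimal sketch of the release rule described above; the rule encoding is an
# illustrative assumption, not ICPSR's actual workflow.
def may_release(approvers, project_members):
    """Release an output only if at least two reviewers approve (four eyes),
    none of whom is affiliated with the requesting project (independence)."""
    independent = set(approvers) - set(project_members)
    return len(independent) >= 2

assert may_release({"reviewer_a", "reviewer_b"}, {"pi", "analyst"})  # two independent approvers
assert not may_release({"reviewer_a", "pi"}, {"pi", "analyst"})      # only one independent approver
</syntaxhighlight>
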
Having a group of experts (e.g., a committee) who can provide a second set of eyes on disclosure risks is beneficial for confidentiality and privacy protection, but it can create frustration for data users on a tight timeline. It is important for organizations to keep the procedure for committee involvement flexible, e.g., with an ''ad hoc'' subcommittee available for immediate consultation on specific requests.

===Automated disclosure review===
Organizations try to standardize the disclosure review process, although disparate requirements across data types, funding agencies, and data depositors hamper progress. High-level standardization of the disclosure review process helps streamline the vetting process and may shorten the vetting timeline. In terms of vetting guidelines, standardization would be easy under the rules-based approach (setting common strict rules across datasets and organizations), but it could diminish data utilization if some outputs were unnecessarily deemed risky. Standardizing output vetting under the principle-based approach may be easier to implement: reviewers apply a rule of thumb to each output and release it if the risks are negligible.<ref name=":4" /> One caveat regarding standardization of the principle-based approach is that organizations may want highly qualified expert reviewers to assess the disclosure risks of statistical outputs.

Most organizations support a pool of experts to perform disclosure risk reviews, which is often time- and resource-consuming. Instead, organizations may consider an automated disclosure review system, since output checking for disclosure risks is not necessarily a statistical matter but an operational one.<ref name=":3" /> In fact, some organizations have already implemented machine-driven output checking for relatively simple matters such as minimum cell thresholds, although other organizations still rely on humans for output checking. Stocchi and Bujnowska<ref>{{Cite journal |last=Stocchi, M.; Bujnowska, A. |year=2021 |title=Automatic checking of research outputs |url=https://unece.org/sites/default/files/2021-12/SDC2021_Day2_Stocchi_AD.pdf |format=PDF |journal=Proceedings of the 2021 Conference of European Statisticians |pages=1–7}}</ref> summarized the automated Stata tooling developed by Ritchie ''et al.''<ref name=":5">{{Cite book |last=Ritchie, F.; Green, E.; Smith, J. |date=2021 |title=Automatic Checking of Research Outputs (ACRO): A tool for dynamic disclosure checks : 2021 edition. |url=https://data.europa.eu/doi/10.2785/75954 |publisher=European Commission, Statistical Office of the European Union |place=LU |doi=10.2785/75954}}</ref>, suggesting that automated checking may work more effectively as a joint effort with expert personnel. Ritchie ''et al.''<ref name=":5" /> also pointed out that automated tools may over-protect data by treating every possible case as an actual risk (which might compromise the utilization of restricted data). The tool may also over- or under-protect against disclosure risks due to its inability to determine the context of data use.<ref name=":5" /> A combination of the automated review process with expert check-ups might be most effective. Further, safe output created by safe users may help an automated disclosure review system work best. Organizations may invest in user training for good output preparation and checking behaviors, which ultimately saves reviewers’ effort and other resources.
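
For instance, the minimum cell size check mentioned above lends itself well to automation. The sketch below flags frequency-table cells that fall below a configurable threshold; the threshold value and table layout are illustrative assumptions, and real thresholds differ by dataset and linkage capability (see Table 1).

<syntaxhighlight lang="python">
# Illustrative rules-based check: flag frequency-table cells below a minimum size.
# The threshold of 10 and the table layout are assumptions for illustration only.
MIN_CELL_SIZE = 10

def flag_small_cells(table, threshold=MIN_CELL_SIZE):
    """Return labels of cells whose unweighted count falls below the threshold.
    A non-empty result means the output needs suppression or expert review."""
    return [label for label, count in table.items() if count < threshold]

crosstab = {"age<18, rural": 4, "age<18, urban": 37, "age>=18, rural": 122}
print(flag_small_cells(crosstab))  # ['age<18, rural'] -> refer to a human reviewer
</syntaxhighlight>
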
===Self-vetting that relies on "safe setting" and "safe people"===
Outputs created within a VDE or PDE must go through a vetting process before retrieval, either by experts or by an automated vetting system. On the other hand, organizations have to rely on output self-vetting by data users who access data via a secure download method. Organizations do not scrutinize each output created from securely downloaded data but still strive to ensure a “safe setting” and “safe people” by providing training and guidelines. Many organizations also conduct audits of data management and use in safe settings by safe people. However, given the greater disclosure risks of dissemination via secure [[Encryption|encrypted]] data download, additional efforts to keep data safe may be required.

==Training==
Organizations require user training before data access; the training includes, but is not limited to, data confidentiality, data use procedures (e.g., steps in the restricted data application, the output review process), and sanctions for violations of data use protocols. Training may include passive materials (e.g., print-outs or videos), interactive materials (e.g., one-on-one phone or video sessions), or quizzes. While written training materials may work better for helping users follow procedures, animated and interactive materials also provide benefits in translating the training into practice.<ref>{{Cite journal |last=Palmiter |first=Susan |last2=Elkerton |first2=Jay |last3=Baggett |first3=Patricia |date=1991-05 |title=Animated demonstrations vs written instructions for learning procedural tasks: a preliminary investigation |url=https://linkinghub.elsevier.com/retrieve/pii/0020737391900194 |journal=International Journal of Man-Machine Studies |language=en |volume=34 |issue=5 |pages=687–701 |doi=10.1016/0020-7373(91)90019-4}}</ref> A combination of passive and interactive training approaches would work best. Training requirements depend on the type of data, funding agency, data provider, and method of data access (e.g., VDE/PDE or secure download); thus, user training and staff training may vary within an organization.

Recently, there has been growing consensus that user training should follow a “community model,” not a “policing model.”<ref name=":2" /> Training based on the policing model operates as a tool to make sure researchers obey rules, assuming data users to be potential rule-breakers. The community model, on the other hand, considers data users as colleagues with a shared goal of data confidentiality. (Details about the training theory are available in Green ''et al.''<ref name=":2" />) In fact, many organizations rarely encounter substantial data breach incidents; most common incidents result from inadvertent mistakes and researchers' ignorance of protocols. Effective training may better catalyze attitudinal shifts by focusing less on punishment.<ref name=":2" />

From an organizational perspective, effective training requires extensive resources. Some restricted data access mechanisms require yearly updates to all research materials, such as DUAs, IRB approval/exemption, and training. For organizations with diverse datasets and various types of users, tracking yearly progress for every researcher and team may be burdensome. While community model training can work effectively, finding a good facilitator may not be easy for some organizations, and updating the materials frequently may be a hurdle for many. Some organizations are moving toward automated, routine training for data users and staff, which may resolve some of these issues. Standardized training that authorizes users to access data across organizations may also help reduce the burden imposed on organizations.

While from a user perspective a condensed, succinct version of training is effective, the content of training keeps being extended. For example, there have been growing concerns among data providers and managing organizations that data are being misused. The conclusions of research using restricted data are sometimes harmful or stigmatizing to certain groups of individuals. Organizations now consider including [[Big data ethics|data ethics]] in training materials, although how to incorporate ethics issues in a way compatible with the community model is still in question.

==Conclusions==
In the past few decades, multiple stakeholders (e.g., researchers, organizations, publishers, and funders of scientific research) have made efforts to make scientific data FAIR. Technological advances such as search tools, vocabularies, and infrastructures have assisted in the discovery and reuse of scientific data. Many organizations have implemented the Five Safes framework in their data management to protect the confidentiality of human subjects, as well as to promote reproducibility and transparency. Despite these efforts, we observe that the safeguards can generate unintended challenges for certain groups of individuals (e.g., institutional approval that could exclude researchers without an institutional affiliation) or in certain areas (e.g., rigorous output checking that requires extensive insight from experts). This raises questions for organizations about the future direction of data management under the Five Safes; for example, whether and how organizations address inequalities in access to scientific development and prevent unethical use of data (such as exploitation of indigenous populations, or group harm to underrepresented or minority groups), which is one of the essentials of Open Science.<ref>{{Cite journal |last=UNESCO |date=2021 |title=UNESCO Recommendation on Open Science |url=https://unesdoc.unesco.org/ark:/48223/pf0000379949 |doi=10.54677/mnmh8546}}</ref> Furthermore, organizations now face additional challenges with newly emerged data types. Organizations may need to consider a more streamlined and standardized data management policy while allowing a greater degree of flexibility to incorporate such data in the future.
==Acknowledgements==
None.


==References==
<references />


==Notes==
This presentation is faithful to the original, with only a few minor changes to presentation, though grammar and word usage were substantially updated for improved readability. In some cases important information was missing from the references, and that information was added. Nothing else was changed in accordance with the NoDerivatives portion of the license.


<!--Place all category tags here-->

Latest revision as of 22:37, 29 April 2024

Full article title Restricted data management: The current practice and the future
Journal Journal of Privacy and Confidentiality
Author(s) Jang, Joy B.; Pienta, Amy; Levenstein, Margaret; Saul, Joe
Author affiliation(s) Inter-university Consortium for Political and Social Research (ICPSR) at University of Michigan
Primary contact Email: oyjang at umich dot edu
Year published 2023
Volume and issue 13(2)
Page(s) 1–9
DOI 10.29012/jpc.844
ISSN 2575-8527
Distribution license Creative Commons Attribution-NonCommercial-NoDeriv 4.0 International
Website https://journalprivacyconfidentiality.org/index.php/jpc/article/view/844
Download https://journalprivacyconfidentiality.org/index.php/jpc/article/view/844/753 (PDF)

Abstract

Many restricted data managing organizations across the world have adapted the Five Safes framework (i.e., safe data, projects, people, setting, and output) for their management of restricted and confidential data. While the Five Safes have been well integrated throughout the data life cycle, organizations observe several unintended challenges regarding making that data be FAIR (findable, accessible, interoperable, and reusable). In the current study, we review the current practice on restricted data management and discuss challenges and future directions, especially focusing on data use agreements, disclosure risks review, and training. In the future, restricted data managing organizations may need to proactively take into consideration reducing inequalities in access to scientific development, preventing unethical use of data in their management of restricted and confidential data, and managing various types of data.

Keywords: confidentiality, data governance, FAIR, training

Introduction

Since the introduction of the Five Safes in the mid-2010s[1][2], many organizations managing restricted data have adopted the framework for the management of restricted and confidential data. The Five Safes framework helps organizations set guidelines for safe data created by data providers, safe projects for public good, safe people who are authenticated data users, safe settings in which data are being used, and safe outputs from analyzing data. The Five Safes have been well-integrated throughout the data life cycle, and have led to good stewardship practices to make scientific data FAIR (findable, accessible, interoperable, and reusable). It also helps multiple stakeholders balance data utilization with protection of subject privacy and data confidentiality. Despite successful implementation of the Five Safes, organizations encounter unintended challenges. In this paper, we review the current practice of restricted data management and discuss challenges and future directions, focusing on data use agreements, disclosure risk review, and training for data users.

While organizations implement multiple modes of data access (e.g., virtual data enclaves [VDEs], physical data enclaves [PDEs], secure encrypted file downloads), our discussion may apply mostly to VDE and PDE. Further, our discourse is centered around quantitative data, although we do not restrict the implications to only that type of data. In other words, even though our discussion on current practices may be largely reliant on our experience with quantitative data accessible via VDE or PDE, the implications of our study may extend to newly emerged data types such as research notes, video, and electroencephalography.

Data use agreements

Data use agreements (DUAs) are risk mitigation tools that clarify expectations among multiple stakeholders.[3] DUAs must be entered into before any use or access to data by users, and may require periodic updates. DUAs may contain all Five Safes components: safe data (description of how data have been and will be treated for protection of any disclosure risks); safe people (data users’ credentials); safe projects (research proposals demonstrating the intended data use); safe setting (plans for safe data access and handling); and safe outputs (procedures or rules on output publication and release). For some organizations, DUAs are stand-alone documents containing all five components. Other organizations require quite short DUAs, accompanied by separate materials such as a detailed research proposal, approval or exemption from an Institutional Review Board (IRB), and CVs from participants in the research project. Involvement of multiple stakeholders in DUAs means that DUAs allow for negotiations and pursuit of consensus among parties.

Many organizations are bound by federal, state, and local laws, regulations, or policies reflecting their capability to access direct identifiers in the datasets. DUAs specify terms and conditions for data access and use, and clarify liability issues in advance. This upfront emphasis on DUAs would help mitigate confusion regarding liability in case of data breaches or suspected security incidents. DUAs require data users’ authenticated credentials; some organizations additionally ask for involvement of the researchers’ institutions in DUAs as a leverage to enforce consequences for the institution.[4] Not only for legal leverage, but also the involvement of institutional representatives in DUAs would help implement safe use of data by researchers. Research shows that many data users care more about their personal penalties (loss of access and funding, opinions of colleagues) rather than legal ones, if any incident happens.[5] Having multiple layers of liability may safeguard data breaches or protocol violations by users. However, involvement of the institutions in the DUAs may impose a hurdle for research teams with collaborators from multiple institutions or from different countries. DUAs for research projects of this nature may have to consider heterogeneous requirements with regard to data privacy, confidentiality, and liability issues, which may cause significant delays in the process of data use.

Below, we discuss four distinctive challenges that organizations encounter with regard to restricted data management: limited opportunities of data access for certain groups of individuals; DUAs for research projects involving multiple institutions; limitations on binding laws against failure to DUA compliance; and costs to access data.

Limited opportunities for data access by certain groups

As described, institutional involvement may help enforce consequences for both the institution and individual researchers. Data users who are affiliated with so-called typical research institutions (e.g., universities, government agencies, research institutes) have an institutional representative involved in the DUA process, and work with organizations without substantial challenges. Most of the processes are seamless, unless stakeholders raise concerns. (Even with concerns, the most serious challenge may be a delay in the process.) However, a requirement of institutional involvement can impose an insurmountable hurdle for those without an institutional affiliation, such as freelance journalists or students without academic advisors or from institutions with no experience. Researchers and institutions negotiate details in DUAs and pursue consensus with data managing organizations, which could be a tremendous burden for small institutions. While institutional involvement is meant to help keep safe people safer, it may have unintentionally excluded researchers without that leverage. An exemption for those who have been authorized and been good users at other organizations may need to be considered, and a template agreement that may mitigate the burdens should be available.[3][4] Effective user training for ethical and scientific use of data may be helpful to alleviate concerns regarding data misuse by those with limited experience.

DUAs (or other supplement materials) require safe settings to access restricted data. Safe setting in DUAs designates a space in which no authorized views are allowable, for instance, an office space with a door that lacks a window. Shared space is not accepted by some organizations as a secure setting. Again, this requirement may impose a barrier for those with limited resources, such as students who would access restricted data in a shared office or cubicle. Organizations may need to consider embracing those who have limited resources by accommodating their needs (e.g., using a privacy screen for those who access data in a shared office).

DUAs for research projects involving multiple institutions

When researchers from multiple institutions collaborate in a single research project, each institution would enter into the DUAs. DUAs clarify expectations and responsibilities for each institution according to the research plan. The process is often complicated when institutions are located in different countries (e.g., legitimacy of credential authentication or IRB approval in different languages). O’Hara[3] suggests considering other forms of documentation in multi-site research projects, such as a memorandum of understanding (MOU) and identification of conflicts of interest. In some cases, requiring identical DUAs with all participating institutions, although requiring extensive time to complete, may reduce confusion as compared to differing DUAs across institutions. Ultimately, to streamline the process of multi-site research projects, it may be helpful for organizations to consider incentives for good data users in different projects or even in different organizations. For example, the Research Passport of the Inter-university Consortium for Political and Social Research (ICPSR) expedites access to restricted data by giving researchers credits and visibility for “safe” actions in their past experiences with restricted data.[6] This type of verification on users’ cumulative “safe” actions would tremendously help the procedures of DUAs across multiple institutions.

Limitations on binding laws against DUA non-compliance

Failure to comply with a DUA may result in immediate termination of data access and further actions that depend on the severity of the failure. Organizations establish procedures to respond to data security and breach incidents; some funders require a one-hour reporting and procedures to minimize the damage of the data breach or confidentiality disclosure. In the United States, violation of the Health Insurance Portability and Accountability Act (HIPAA) privacy standards can impose a civil monetary penalty on the individual by the Department of Health and Human Services. Organizations bound by specific laws such as HIPAA must follow the high-level legal boundary. Nonetheless, most data security incidents are unintentional or inadvertent violations of the protocol. They may pose minimal risk for subjects in datasets, and thus better be handled with effective user training. Organizations may better consider DUAs as a tool for all stakeholders to share responsibilities for data confidentiality (e.g., a community model)[5] rather than the one for policing or punishing one party (e.g., a policing model).[5]

Costs to access restricted data

Even marginal costs of accessing data can be burdensome to researchers, but such costs are also important to consider for organizations. Data access costs include staff efforts to set up the access and to create datasets for users. The costs could unintentionally exclude some groups of researchers, such as junior scholars without research funds. Organizations and funding agencies could proactively intervene by waiving the costs of data access for researchers with limited resources. Doing so would help achieve Open Science[7]—aiming to share data with minimal barriers for all researchers from different backgrounds.

Disclosure review practices

Safe output refers to statistical products created from the restricted and/or sensitive data that are being vetted and approved as non-disclosive. Organizations help researchers utilize restricted data as effectively as possible without compromising data privacy and confidentiality. Safe output by safe people must go through a vetting process for disclosure risks. Disclosure review rules and procedures are set up in earlier steps of the data access process, such as the DUAs. Some data providers prefer to set up standards of disclosure avoidance rules and procedures with organizations in the data depositing process. Data providers and organizations also often discuss dissemination modes and tiers of access to establish the disclosure avoidance rules and procedures.

Disclosure review rules and procedures vary by data type and access mode. Alves and Ritchie[8] articulate two approaches to managing output vetting: “rules-based” and “principle-based.” The rules-based approach establishes a fixed set of strict rules regarding disclosive information and scrutinizes research outputs created from restricted data against those rules. The principle-based approach, on the other hand, allows flexible negotiation between researchers and output vetting staff. The goal for organizations is to implement efficient and effective procedures that protect data confidentiality and minimize disclosure risks while maximizing data utilization.[9][10] Most organizations apply the rules-based output vetting approach, with a certain level of flexibility, to various data types.
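
To illustrate the difference, the following is a minimal sketch, in Python, of a rules-based check on a table of cell counts. The threshold and rule are hypothetical, for illustration only; real organizations define their own rules (see Table 1 below for ICPSR's considerations).

```python
# Minimal sketch of a rules-based output check; the threshold is a
# hypothetical example, not any organization's actual rule.
MIN_CELL_SIZE = 10  # hypothetical minimum cell size threshold

def check_frequency_table(cell_counts):
    """Flag any nonzero cell below the minimum cell size threshold.

    Returns a list of violations; an empty list means the output passes
    this purely rules-based check."""
    violations = []
    for i, count in enumerate(cell_counts):
        if 0 < count < MIN_CELL_SIZE:
            violations.append(f"cell {i}: count {count} is below {MIN_CELL_SIZE}")
    return violations

print(check_frequency_table([25, 40, 3, 118]))
# ['cell 2: count 3 is below 10']
```

Under the principle-based approach, the same flagged table might still be released after negotiation if, in context, the small cell poses negligible re-identification risk.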

We review below the current practice and future directions in four domains: common output vetting requirements at organizations; reviewers of statistical outputs; automated disclosure review procedures; and self-vetting that relies on “safe setting” and “safe people.”

Output vetting requirements

Organizations set up a standardized procedure for output vetting, including but not limited to output format, contents, and the timeline to process each request. To illustrate, Table 1 summarizes output vetting requirements and considerations currently in place at many data archives at ICPSR. Most organizations have their own requirements and considerations in the restricted data use process. While standardizing the process and requirements could help streamline procedures, it seems implausible due to differing requirements from funders and data providers.

Table 1. Output vetting requirements and considerations at ICPSR.

Format
* Requirements: Presentation-ready format required/preferred (.pdf, .docx, .xlsx).
* Examples: Raw outputs from statistical packages (e.g., SAS logs, Stata log files, M-Plus logs) are not accepted.

Contents
* Requirements: A description of the sample, sub-sample, analytic approach, and definitions of variables used in the analyses; summary statistics for variables used in the analysis; a checklist (to help users self-vet before sending output to the vetting staff); and supporting documents (programming files).
* Examples: The minimum cell size threshold is clearly described in the output vetting instructions; the minimum cell size threshold differs by type of dataset and linkage capability.

Timeline
* Requirements: Depends on the output, but most vetting is completed within 10 business days.
* Examples: Missing requirements or insufficient supporting documents or materials would significantly extend the timeline.
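
As a usage illustration of the format requirement in Table 1, the following minimal Python sketch screens output files by extension before submission. The rejected-extension list is an assumption extrapolated from the examples in the table (raw statistical-package logs), not an official ICPSR list.

```python
# Minimal sketch of a pre-submission format screen based on Table 1; the
# rejected-extension list is an illustrative assumption, not an official list.
from pathlib import Path

ACCEPTED = {".pdf", ".docx", ".xlsx"}   # presentation-ready formats (Table 1)
REJECTED = {".log", ".lst", ".out"}     # raw statistical-package outputs (assumed)

def check_format(filename: str) -> str:
    """Classify an output file by its extension."""
    ext = Path(filename).suffix.lower()
    if ext in ACCEPTED:
        return "accepted: presentation-ready format"
    if ext in REJECTED:
        return "rejected: raw statistical-package output is not accepted"
    return "rejected: format is not on the accepted list"

print(check_format("results_table1.xlsx"))  # accepted: presentation-ready format
print(check_format("analysis.log"))         # rejected: raw statistical-package output
```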

Reviewers of the statistical outputs

It is preferred, though not required, that organizations have output vetting reviewers with a background in statistics or the relevant subject areas. More important aspects are: 1) the independence of the reviewers; 2) the four eyes principle; and 3) a manageable workload without excessive pressure.[9]

Most organizations have designated individuals responsible for output vetting. For example, ICPSR maintains at least five experts at all times, with two or three back-ups, who vet outputs created in the VDE or PDE. These experts are mostly ICPSR staff members who are not affiliated with any of the users' research projects (i.e., ensuring independence). To bolster confidence in decisions about whether to release output, organizations use a group of reviewers (the four eyes principle; managing workload). Some organizations operate a committee that discusses the risks to data confidentiality and privacy from research outputs. The committee usually consists of a group of experts who oversee data confidentiality and evaluate disclosure risks from the use of restricted data. For example, the ICPSR Disclosure Review Board (DRB) fills a leadership and scholarly role in the disclosure avoidance community, and serves as a decision-making body within ICPSR with regard to disclosure risks and exceptions to existing policies. The ICPSR DRB consists of a Chair (the ICPSR Privacy and Security Officer), a Vice-Chair, and 10 experts from within and outside the organization. Individual ICPSR reviewers can query the DRB about disclosure risks on outputs and defer the approval decision to the DRB. Further, the DRB reviews the ICPSR disclosure rules in light of new regulations and changes to the wider data environment, assesses new disclosure reduction methods and technologies for possible adoption, and develops rules around them. The ICPSR DRB convenes every month.
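
The review logic described above (independent reviewers, the four eyes principle, and deferral to a board) can be sketched as a simple decision rule. The following Python sketch is an illustrative assumption about how such a workflow could be encoded, not ICPSR's actual procedure.

```python
# Minimal sketch of a "four eyes" release rule with board deferral;
# statuses and structure are illustrative assumptions.
def release_decision(reviews, project_team):
    """reviews: list of (reviewer_id, verdict) tuples, where verdict is
    'approve', 'reject', or 'defer_to_board'.
    project_team: set of IDs affiliated with the user's research project."""
    # Independence: only count reviewers not affiliated with the project.
    independent = [(r, v) for r, v in reviews if r not in project_team]
    if any(v == "defer_to_board" for _, v in independent):
        return "escalate to disclosure review board"
    if any(v == "reject" for _, v in independent):
        return "withhold output"
    approvals = sum(1 for _, v in independent if v == "approve")
    # Four eyes principle: at least two independent approvals to release.
    return "release" if approvals >= 2 else "await second independent review"

print(release_decision([("rev1", "approve"), ("rev2", "approve")], {"pi1"}))
# release
```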

Having a group of experts (e.g., a committee) who can provide a second set of eyes on disclosure risks is beneficial for confidentiality and privacy protection, but it could create frustration for data users on a tight timeline. It is important for organizations to keep the procedure for committee involvement flexible, e.g., by making an ad hoc subcommittee available for immediate consultation on specific requests.

Automated disclosure review

Organizations try to standardize the process of disclosure review, despite disparate requirements by data type, funding agency, and data depositor that hamper progress. High-level standardization of the disclosure review process helps streamline vetting and may shorten the vetting timeline. In terms of vetting guidelines, standardization would be easy under the rules-based approach (setting common strict rules across datasets and organizations), but it could diminish the utilization of data if some outputs were unnecessarily deemed risky. Standardizing output vetting under the principle-based approach may be easier to implement: vetting each output with a rule of thumb and releasing it if risks are negligible.[9] One caveat regarding standardization of the principle-based approach is that organizations need highly qualified expert reviewers to assess the disclosure risks of statistical outputs.

Most organizations support a pool of experts to perform disclosure risk reviews, which is often time- and resource-consuming. Instead, organizations may consider an automated disclosure review system, since output checking for disclosure risks is not necessarily a statistical matter but an operational one.[8] In fact, some organizations have already implemented machine-driven output checking for disclosure risks with regard to relatively simple matters, such as minimum cell thresholds, although other organizations still rely on humans for output checking. Stocchi and Bujnowska[11] summarized the automated Stata programming developed by Ritchie et al.[12], suggesting that automated checking may work more effectively as a joint effort with expert personnel. Ritchie et al.[12] also pointed out that automated tools may over-protect data by treating every possible case as an actual risk (which might compromise the utilization of restricted data). The tool may also over- or under-protect against disclosure risks due to its inability to determine the context of data use.[12] A combination of the automated review process with expert check-ups might be most effective. Further, safe output created by safe users may help an automated disclosure review system work best. Organizations may invest in user training for good output preparation and checking behaviors, which eventually saves reviewers’ effort and other resources.
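
The hybrid division of labor described above can be sketched as a triage rule: automation clears clearly safe outputs, blocks clearly unsafe ones, and refers ambiguous cases to an expert, avoiding the blanket over-protection noted by Ritchie et al.[12] The thresholds below are hypothetical, and this sketch is not the ACRO tool itself.

```python
# Minimal sketch of hybrid automated/expert triage for output checking;
# thresholds are hypothetical and for illustration only.
SAFE_THRESHOLD = 10   # counts at or above this are treated as clearly safe
BLOCK_THRESHOLD = 3   # nonzero counts below this are treated as clearly unsafe

def triage(cell_counts):
    """Route an output to auto-release, auto-block, or expert review."""
    nonzero = [c for c in cell_counts if c > 0]
    smallest = min(nonzero) if nonzero else None
    if smallest is None or smallest >= SAFE_THRESHOLD:
        return "auto-release"
    if smallest < BLOCK_THRESHOLD:
        return "auto-block"
    # The ambiguous middle band goes to a human, who can judge the context
    # of data use that an automated tool cannot.
    return "refer to expert reviewer"

print(triage([52, 31, 17]))  # auto-release
print(triage([52, 31, 5]))   # refer to expert reviewer
print(triage([52, 31, 1]))   # auto-block
```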

Self-vetting that relies on "safe setting" and "safe people"

Outputs created within a VDE or PDE must go through a vetting process before retrieval, either by experts or by an automated vetting system. On the other hand, organizations have to rely on output self-vetting by data users who access data via a secure download method. Organizations do not scrutinize each output created from the secure download but still strive to ensure a “safe setting” and “safe people” by providing training and guidelines. Many organizations also conduct audits of data management and use in safe settings by safe people. However, given the greater disclosure risks of dissemination via secure encrypted data download, additional efforts to keep data safe may be required.

Training

Organizations require user training before data can be accessed, which includes, but is not limited to, data confidentiality, data use procedures (e.g., steps in the restricted data application, the output review process), and sanctions for violations of data use protocols. Training may include passive materials (e.g., print-outs or videos), interactive materials (e.g., one-on-one phone or video sessions), or quizzes. While written training materials may work better for helping users follow procedures, animated and interactive materials also provide benefits in terms of translating training into practice.[13] A combination of passive and interactive training approaches would work best. Training requirements depend on the type of data, funding agency, data provider, and method of data access (e.g., VDE/PDE or secure download); thus, user training and staff training may vary within an organization.

Recently, there has been growing consensus that user training should focus on a “community model,” not a “policing model.”[5] Training based on the policing model operates as a tool to ensure that researchers obey rules, assuming data users are potential rule-breakers. The community model, on the other hand, considers data users to be colleagues with a shared goal of data confidentiality. (Details about the training theory are available in Green et al.[5]) In fact, organizations rarely encounter substantial data breach incidents; most common incidents result from inadvertent mistakes and ignorance of protocols by researchers. Effective training may better catalyze attitudinal shifts by focusing less on punishment.[5]

From an organizational perspective, effective training requires extensive resources. Some restricted data access mechanisms require yearly updates to all research materials, such as DUAs, IRB approvals/exemptions, and training. For organizations with diverse datasets and various types of users, tracking this yearly progress for every researcher and team may be burdensome. While community model training would work effectively, finding a good facilitator may not be easy for some organizations, and updating the materials frequently may be a hurdle for many. Some organizations are moving toward automated and routine training for data users and their staff, which may resolve some of these issues. Standard training that authorizes users to access data across organizations may also help reduce the burden imposed on organizations.
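
As a sketch of the bookkeeping burden described above, the following Python example flags researchers whose training, DUA, or IRB materials are older than one year. The record structure and one-year period are illustrative assumptions; actual renewal cycles vary by organization and access mechanism.

```python
# Minimal sketch of yearly-renewal tracking; record fields and the renewal
# period are illustrative assumptions.
from datetime import date, timedelta

RENEWAL_PERIOD = timedelta(days=365)

def items_due(record, today):
    """record: dict mapping an item ('training', 'dua', 'irb') to the date
    it was last completed. Returns the items due for renewal."""
    return [item for item, done in record.items()
            if today - done > RENEWAL_PERIOD]

team = {
    "researcher_a": {"training": date(2022, 1, 10), "dua": date(2022, 11, 2)},
    "researcher_b": {"training": date(2022, 10, 5), "dua": date(2022, 10, 5)},
}
for name, record in team.items():
    due = items_due(record, today=date(2023, 3, 1))
    if due:
        print(f"{name}: renew {', '.join(due)}")
# researcher_a: renew training
```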

While a condensed, succinct version of training is most effective from a user perspective, the content of training keeps expanding. For example, there have been growing concerns among data providers and managing organizations that data are being misused. The conclusions of research using restricted data are sometimes harmful to, or stigmatizing of, certain groups of individuals. Organizations now consider including data ethics in training materials, although how to incorporate ethics issues in a way compatible with the community model is still in question.

Conclusions

In the past few decades, there have been efforts by multiple stakeholders (e.g., researchers, organizations, publishers, and funders of scientific research) to make scientific data FAIR. Technological advances such as search tools, vocabularies, and infrastructures have assisted in the discovery and reuse of scientific data. Many organizations have implemented the Five Safes framework in their data management to protect the confidentiality of human subjects, as well as to promote reproducibility and transparency. Despite these efforts, we observe that the safeguards can generate unintended challenges for certain groups of individuals (e.g., institutional approval that could exclude researchers without an institutional affiliation) or in certain areas (e.g., rigorous output checking that requires extensive insight from experts). This raises questions for organizations regarding the future direction of data management under the Five Safes: for example, whether and how organizations govern inequalities in access to scientific development and prevent unethical uses of data (such as exploitation of indigenous populations, or group harm to underrepresented or minority groups), which are among the essentials of Open Science.[14] Furthermore, organizations now face additional challenges with newly emerging data types. Organizations may need to consider a more streamlined and standardized data management policy, while allowing a greater degree of flexibility to incorporate such data in the future.

Acknowledgements

None.

References

  1. Desai, T.; Ritchie, F.; Welpton, R. (2016). "Five Safes: Designing data access for research" (PDF). Economics Working Paper Series (1601): 1–27. https://www2.uwe.ac.uk/faculties/bbs/documents/1601.pdf.
  2. Ritchie, Felix (20 September 2017). "The 'Five Safes': A Framework For Planning, Designing And Evaluating Data Access Solutions". Zenodo. doi:10.5281/ZENODO.897821. https://zenodo.org/record/897821.
  3. O'Hara, A. (2020). "Chapter 3. Model Data Use Agreements: A Practical Guide". In Cole, S.; Dhaliwal, I.; Sautmann, A. et al. Handbook on Using Administrative Data for Research and Evidence-based Policy. Massachusetts Institute of Technology. https://admindatahandbook.mit.edu/book/v1.0-rc4/dua.html.
  4. Levenstein, H. (March 2020). "Addressing Challenges of Restricted Data Access". Deep Blue Documents. University of Michigan Library. https://deepblue.lib.umich.edu/handle/2027.42/156407.
  5. Green, E.; Ritchie, F.; Newbam, J. et al. (2017). "Lessons learned in training 'safe users' of confidential data" (PDF). Work Session on Statistical Data Confidentiality. UWE Bristol. https://pdfs.semanticscholar.org/548f/4ad0434c0f67183d557fed9661bd8baa2c07.pdf.
  6. Levenstein, M.C.; Tyler, A.R.B.; Davidson Bleckman, J. (16 May 2018). "The Researcher Passport: Improving Data Access and Confidentiality Protection". Deep Blue Documents. University of Michigan Library. https://deepblue.lib.umich.edu/handle/2027.42/143808.
  7. OECD (15 October 2015). "Making Open Science a Reality". OECD Science, Technology and Industry Policy Papers (25). doi:10.1787/5jrs2f963zs1-en. https://www.oecd-ilibrary.org/science-and-technology/making-open-science-a-reality_5jrs2f963zs1-en.
  8. Alves, Kyle; Ritchie, Felix (25 November 2020). "Runners, repeaters, strangers and aliens: Operationalising efficient output disclosure control". Statistical Journal of the IAOS 36 (4): 1281–1293. doi:10.3233/SJI-200661. https://www.medra.org/servlet/aliasResolver?alias=iospress&doi=10.3233/SJI-200661.
  9. Griffiths, E.; Greci, C.; Kotrotsios, Y. et al. (July 2019). "Handbook on Statistical Disclosure Control for Outputs" (PDF). Safe Data Access Professionals Working Group. https://ukdataservice.ac.uk/app/uploads/thf_datareport_aw_web.pdf.
  10. Levenstein, M. "Managing research and data for reproducibility and transparency" (PDF). Office of Planning, Research and Evaluation 2019 Open Science Methods Meeting. Institute for Social Research, University of Michigan. https://opremethodsmeeting.org/wp-content/uploads/2019/10/Reproducibility_Levenstein_presentation.pdf.
  11. Stocchi, M.; Bujnowska, A. (2021). "Automatic checking of research outputs" (PDF). Proceedings of the 2021 Conference of European Statisticians: 1–7. https://unece.org/sites/default/files/2021-12/SDC2021_Day2_Stocchi_AD.pdf.
  12. Ritchie, F.; Green, E.; Smith, J. (2021). Automatic Checking of Research Outputs (ACRO): A tool for dynamic disclosure checks: 2021 edition. LU: European Commission, Statistical Office of the European Union. doi:10.2785/75954. https://data.europa.eu/doi/10.2785/75954.
  13. Palmiter, Susan; Elkerton, Jay; Baggett, Patricia (1 May 1991). "Animated demonstrations vs written instructions for learning procedural tasks: a preliminary investigation". International Journal of Man-Machine Studies 34 (5): 687–701. doi:10.1016/0020-7373(91)90019-4. https://linkinghub.elsevier.com/retrieve/pii/0020737391900194.
  14. UNESCO (2021). UNESCO Recommendation on Open Science. doi:10.54677/mnmh8546. https://unesdoc.unesco.org/ark:/48223/pf0000379949.

Notes

This presentation is faithful to the original, with only a few minor changes to presentation, though grammar and word usage were substantially updated for improved readability. In some cases important information was missing from the references, and that information was added. Nothing else was changed in accordance with the NoDerivatives portion of the license.