Smart information systems in cybersecurity: An ethical analysis
Full article title | Smart information systems in cybersecurity: An ethical analysis |
---|---|
Journal | ORBIT Journal |
Author(s) | Macnish, Kevin; Fernandez-Inguanzo, Ana; Kirichenko, Alexey |
Author affiliation(s) | University of Twente, F-Secure |
Primary contact | Email: k dot macnish at utwente dot nl |
Year published | 2019 |
Volume and issue | 2(2) |
Page(s) | 105 |
DOI | 10.29297/orbit.v2i2.105 |
ISSN | 2515-8562 |
Distribution license | Creative Commons Attribution 4.0 International |
Website | https://www.orbit-rri.org/ojs/index.php/orbit/article/view/105 |
Download | https://www.orbit-rri.org/ojs/index.php/orbit/article/view/105/117 (PDF) |
Abstract
This report provides an overview of the current implementation of smart information systems (SIS) in the field of cybersecurity. It also identifies the positive and negative aspects of using SIS in cybersecurity, including ethical issues which could arise while using SIS in this area. One company working in the telecommunications industry (Company A) is analysed in this report. Specific ethical issues that arise when using SIS technologies at Company A are then critically evaluated. Finally, conclusions are drawn on the case study, and areas for improvement are suggested.
Keywords: cybersecurity, ethics, smart information systems, big data
Introduction
Increasing numbers of items are becoming connected to the internet. Cisco—a global leader in information technology, networking, and cybersecurity—estimates that more than 8.7 billion devices were connected to the internet by the end of 2012, a number that will likely rise to over 40 billion in 2020.[1] Cybersecurity has therefore become an important concern both publicly and privately. In the public sector, governments have created and enlarged cybersecurity divisions such as the U.S. Cyber Command and the Chinese “Information Security Base,” whose mission is to provide security to critical national security assets.[1]
In the private sphere, companies are struggling to keep up with the need for security in the face of increasingly sophisticated attacks from a variety of sources. In 2017, there were “over 130 large-scale, targeted breaches [by hackers of computer networks] in the U.S.,” and “between January 1, 2005 and April 18, 2018 there have been 8,854 recorded breaches.”[2] Furthermore, cyberattacks affect not only the online world, but also lead to vulnerabilities in the physical world, particularly when an attack threatens industries such as healthcare, communications, energy, or military networks, putting large swathes of society at risk. Indeed, it has been argued that some cyberattacks could constitute legitimate grounds for declarations of (physical) war.[3]
Cybersecurity is therefore a complex and multi-disciplinary issue. Security has been defined in the international relations and security studies spheres both as “the absence of threats to acquired values”[4] and as “the absence of harm to acquired values.”[5] Within the profession, cybersecurity is more commonly defined in terms of confidentiality, integrity, and availability of information.[6] A 2014 literature review on the meanings attributed to cybersecurity has led to the broader definition of cybersecurity as “the organization and collection of resources, processes, and structures used to protect cyberspace and cyberspace-enabled systems.”[7]
Cybersecurity therefore can be seen to encompass property rights of ownership of networks that could come under attack, as well as other concerns associated with these, such as issues of access, extraction, contribution, removal, management, exclusion, and alienation.[8] Hence cybersecurity fulfills a similar role to physical security in protecting property from some level of intrusion. Craigen et al. also argue that cybersecurity refers not only to a technical domain, but also that the values underlying that domain should be included in the description of cybersecurity.[7] Seen this way, ethical issues and values form the bedrock of cybersecurity research, as they identify the values which cybersecurity seeks to protect.
The case study is divided into four main sections. The next two sections focus on the technical aspects of cybersecurity and a literature review of academic articles concerning ethical issues in cybersecurity, respectively. Then the practice of cybersecurity research is presented through an interview conducted with four employees at a major telecommunications software and hardware company, Company A. Finally, the last section critically evaluates ethical issues that have arisen in the use of SIS technologies in cybersecurity.
The use of smart information systems in cybersecurity
The introduction of big data and artificial intelligence (smart information systems, or SIS) in cybersecurity is still in its early phase. Currently there is comparatively little work carried out on cybersecurity using SIS for several reasons. These include the remarkable diversity of cyberattacks (e.g., different approaches to hacking systems and introducing malware), the danger of false positives and false negatives, and the relatively low intelligence of existing SIS.
Taking these in turn, the diversity of attacks—in the source, the focus, and the motivation of the attack—is significant. Attacks can be launched from outside an organization (e.g., from a hacking collective, such as Anonymous) or from an insider (e.g., a disaffected employee looking to damage a system). They may come from a single source, typically masked through use of the darknet, or from a source who has engaged in a number of “hops” (moving from one computer on a network to another, thus masking the original source) such that the originator could appear to be in a hospital or in a military base. If an attack were to appear to come from a military base, this might encourage the attacked party to “hack back.” However, if the military base were an artificial screen presented in front of a hospital, the reverse hack could bring down that hospital’s computer networks. The focus of the attack could be on imitating a user or system administrator (local IT expert) or on exploiting a security flaw in unpatched code (programming in a network that has a flaw which has not yet been fixed, also known as a zero-day exploit). The motivation of the attack can range from state security and intelligence gathering (e.g., U.S. Intelligence spying on Chinese military installations), to financial incentives through blackmail (e.g., encrypting a company’s files and agreeing to decrypt them only when the company has paid the hacker a certain sum of money). This diversity means that it is extremely difficult to develop a SIS that will effectively recognize an attack for what it is.
Secondly, the danger of false positives and false negatives is significant in light of the difficulty of recognizing an attack. If a SIS fails to recognize an attack (a false negative), the attack may be successful. This is particularly the case if security personnel have come to place undue trust in the automation and do not provide quality assurance of the SIS, a behavior known as “automation bias.”[9][10] By contrast, the SIS could be so cautious that it leads to an excessive number of false positives, in which a legitimate interaction is falsely labelled an attack and not permitted to continue. This leads to frustration and could entail the eventual disabling of the SIS.[11]
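To make the trade-off concrete, the following minimal sketch (not drawn from any particular product; the anomaly scores and labels are invented purely for illustration) shows how a single detection threshold moves errors between the two failure modes: lower it and legitimate activity is blocked (false positives); raise it and attacks slip through (false negatives).

```python
# Illustrative only: a single threshold on an anomaly score trades
# false positives against false negatives. All data below is made up.

def confusion_counts(scores, labels, threshold):
    """Count errors at a given threshold.

    scores: anomaly scores, higher = more suspicious
    labels: True for a real attack, False for legitimate activity
    """
    fp = sum(1 for s, attack in zip(scores, labels) if s >= threshold and not attack)
    fn = sum(1 for s, attack in zip(scores, labels) if s < threshold and attack)
    return fp, fn

# Hypothetical events: (anomaly score, is_attack)
events = [(0.1, False), (0.3, False), (0.45, False), (0.5, True),
          (0.62, False), (0.7, True), (0.8, False), (0.95, True)]
scores = [s for s, _ in events]
labels = [a for _, a in events]

for threshold in (0.4, 0.6, 0.9):
    fp, fn = confusion_counts(scores, labels, threshold)
    print(f"threshold={threshold}: {fp} false positives, {fn} false negatives")
```

Running the sweep shows there is no threshold with zero errors of both kinds on this toy data, which is the bind the paragraph describes: an over-cautious setting frustrates users, a permissive one lets attacks through.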
Thirdly, and despite some hype in the media, SIS are still at a relatively unintelligent stage of development. Computer vision systems designed to identify people loitering, for example, recognize that a person has not left a circle of radius x within y seconds, but they cannot determine why the person is there or what their intent may be. As such, the inability to determine intentions from actions renders automated systems relatively impotent.
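A minimal sketch of that loitering rule makes the point about intent plain. The track format, units, and numbers below are hypothetical; a real computer vision pipeline is far more involved:

```python
# Illustrative sketch of the rule "has not left a circle of radius x
# within y seconds." Note that the rule captures geometry and time only;
# nothing here says anything about why the person is there.
import math

def is_loitering(track, radius_m, window_s):
    """track: list of (timestamp_s, x_m, y_m) positions for one person."""
    if not track:
        return False
    t0, x0, y0 = track[0]  # anchor the circle at the first observation
    for t, x, y in track:
        if math.hypot(x - x0, y - y0) > radius_m:
            return False  # left the circle: not loitering by this rule
    # Still inside the circle; loitering only if observed long enough.
    return track[-1][0] - t0 >= window_s

# A person drifting within about a metre of their start point for 2 minutes
track = [(t, 0.5 * math.sin(t / 10), 0.5 * math.cos(t / 10)) for t in range(0, 121, 5)]
print(is_loitering(track, radius_m=2.0, window_s=60))  # True
```

The function returns True for a waiting friend and for a pickpocket scouting targets alike; distinguishing the two is exactly what such systems cannot do.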
Despite these concerns, there are some potential grounds for use of SIS in cybersecurity. The most effective use is in scanning systems for known attacks, or for known abnormal patterns of behavior that have a very high likelihood of being an attack. When coupled with a human operator who scans any alerts and so determines whether to take action, the combined human-machine security system can prove to be effective, albeit still facing the above problems of automation bias and excessive false positives.[12]
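A minimal sketch of this human-in-the-loop pattern, under invented signatures and log lines (no real product's rule set is implied), might look as follows:

```python
# Illustrative sketch: an automated scanner flags events matching known
# attack signatures, but only queues alerts; a human analyst decides
# whether to act, which mitigates automation bias.
KNOWN_ATTACK_SIGNATURES = [
    "' OR 1=1",          # crude SQL injection marker
    "../../etc/passwd",  # path traversal attempt
    "<script>",          # reflected XSS probe
]

def scan(log_lines):
    """Yield (line_number, line, matched_signature) for known-bad patterns."""
    for i, line in enumerate(log_lines, start=1):
        for sig in KNOWN_ATTACK_SIGNATURES:
            if sig.lower() in line.lower():
                yield i, line, sig

log = [
    "GET /index.html 200",
    "GET /search?q=' OR 1=1-- 200",
    "GET /download?file=../../etc/passwd 403",
]

for lineno, line, sig in scan(log):
    print(f"ALERT line {lineno}: matched {sig!r} -> needs analyst review: {line}")
```

The design choice is that the machine handles the high-volume, well-understood patterns while the human retains the judgment call, which is where the combined system earns its effectiveness.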
Literature review: Ethical issues of using SIS in cybersecurity
In this section we conduct a literature review of the most fundamental ethical issues in cybersecurity raised in the academic literature. Our goal is to compare these with the interview conducted at a major telecommunications software and hardware company, Company A, in order to give an overview of the ethical issues in cybersecurity.
The literature review was carried out through a combination of online search using generic engines, such as Google and Google Scholar, and discipline-specific search engines on websites such as PhilPapers.org and The Philosopher's Index. Selected papers were then read and, where appropriate, the bibliographic references were used to locate further literature. Generic search on Google also provided links to trade publications and websites that were a further source of background information.
The ethical issues arising from the literature review were informed consent, protection from harm, privacy and control of data, vulnerabilities and disclosure, competence of research ethics committees, security issues, trust and transparency, risk, responsibility, and business interests and codes of conduct.
Informed consent
Acquiring informed consent is an important activity for cybersecurity, and one that has been at the heart of research ethics and practice for decades.[13][14] Consent is valued variously as respect for autonomy[14] or as the minimization of harm.[15] As such, the justification for informed consent is a considerable challenge for data analytics, where anonymized data may be used without the explicit consent of the person from whom it originates. This is also true within global cybersecurity, where a number of complicating issues arise, such as the complexity of informing users about detailed technical aspects in order to provide necessary information, as well as language barriers.[16] This, though, is the case for many other areas of research, such as the medical or social sciences, and the scripts need not be different in cybersecurity.[17]
Nonetheless, challenges of complexity, and of conveying that complexity in a manner that is sufficiently informative for a non-expert to make a decision, remain. Wolter Pieters notes that information provision does not correspond merely to the amount of information communicated, but to how it is presented and to whether the type of information given is justified and appropriate. “One cannot speak about informed consent if one gives too little information, but one cannot speak about informed consent either if one gives too much. Indeed, giving too much information might lead to uninformed dissent, as distrust is invited by superfluous information.”[18]
Protection from harm
Cybersecurity has the potential to cause harm to its users, even when that harm is not intended. Concerns exist regarding the disclosure of vulnerabilities (such as a flaw in a security program which would allow a hacker to break into the network with relative ease), for example whether they should be disclosed publicly once a company has failed to address them. If not, then the vulnerability entails that a person may be at risk of attack, which is particularly concerning if the device at risk is medical in nature, such as a pacemaker.[19][20] However, disclosure could bring the vulnerability to the awareness of potential attackers who had not considered it previously. This is true of cybersecurity generally, whether involving SIS or not.
Privacy and control of data
Privacy is a central issue in cybersecurity, as increasing amounts of personal data are gathered and stored in the cloud. Furthermore, these data can be highly sensitive, such as health or bank records.[21] While the data at risk from attack is private, in order to identify an attack, particularly when SIS are involved, an effective cybersecurity system must maintain an awareness of “typical” behavior so that “atypical” behavior stands out more obviously. However, doing this requires ongoing development of personal profiles of users of a particular system, which in turn involves monitoring their behavior online. In cases of both attack and prevention of attack, users’ privacy risks being compromised.
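A minimal sketch illustrates why the privacy concern is built into the technique itself: to flag “atypical” logins, the system must first construct and retain a per-user behavioural profile, which is itself a small dossier on the user. The field names and data below are hypothetical:

```python
# Illustrative sketch: profile-based anomaly detection. The profile that
# makes detection possible is precisely the ongoing record of user
# behaviour that raises the privacy concern discussed above.
from collections import defaultdict

profiles = defaultdict(lambda: {"hours": set(), "countries": set()})

def observe(user, hour, country):
    """Grow the user's profile of 'typical' behaviour (the monitoring step)."""
    profiles[user]["hours"].add(hour)
    profiles[user]["countries"].add(country)

def is_atypical(user, hour, country):
    """Flag behaviour falling outside the stored profile."""
    p = profiles[user]
    return hour not in p["hours"] or country not in p["countries"]

# Build a profile from ordinary weekday activity...
for h in range(8, 18):
    observe("alice", h, "NL")

# ...then a 3 a.m. login from another country stands out.
print(is_atypical("alice", 3, "RU"))   # True
print(is_atypical("alice", 10, "NL"))  # False
```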
A related issue is that of control of data, which may be seen as an aspect of privacy[22][23] or as additional to privacy concerns.[24][25] In either case, the control of data is a critical factor, as once an attack is successful, control is lost. The data may then be used for a variety of ends, not only relating to violations of privacy but also for political or other gain, as was the case with Cambridge Analytica[26], where the problem was not only privacy concerns, but also the control of users’ data, which enabled discrete, targeted political advertising concerning the U.K.’s referendum on membership of the European Union and the United States presidential election, both in 2016.[27]
While the E.U. has sought to resolve concerns with privacy and control of data through the introduction of the General Data Protection Regulation[28], this has raised its own concerns. While European companies must follow strict regulations in developing SIS-related algorithms when it comes to accessing personal data, the same only applies to non-European companies when they practice in Europe. This leads to a concern of “data dumping, in which research is carried out in countries with lower barriers for use of personal data, rather than jump through bureaucratic hurdles in Europe. The result is that the data of non-European citizens is placed at higher risk than that of Europeans.”[17]
Incidental findings also fall under this category, as data derived from regular scans with the goal of profile-building can uncover new information about an individual which they did not want to reveal. Decisions should be made in advance on how to reveal that information and to whom it should be revealed; for example, the discovery that an employee is looking for another job.
Vulnerabilities and disclosure
An awareness of, or a duty to find, vulnerabilities in a network which leave it open to attack can help cybersecurity professionals understand the magnitude of a particular attack. However, disclosure of vulnerabilities to a particular authority, such as the company responsible, also risks the leak of that vulnerability from the responsible authority to communities of hackers, so that the network or others may be exploited.[17] If vulnerabilities are made public, then the public visibility of a system, and therefore its commercial viability, may be threatened. For example, Wolter Pieters has pointed out the challenge of exposing vulnerabilities in e-voting systems: disclose them prior to an election and the systems will not be trusted; disclose them after an election and the election result will be called into question. However, if the vulnerability is not disclosed, then an attack may occur which genuinely compromises the election. A related issue here is whether cybersecurity researchers looking at the techniques and practices of hackers should have a duty to expose vulnerabilities as an act of professional whistle-blowing. By rendering this a duty, there is less pressure on the professional to decide what is the right thing to do in a particular case, such as when competing financial interests may argue against such revelations.[29] As noted above, ethical issues arising from vulnerability disclosure apply to cybersecurity generally, whether involving SIS or not.
Competence of research ethics committees
Within universities and many research institutions, research ethics committees (RECs) or institutional review boards oversee applications for research to provide protection for research participants. However, RECs are often composed of experts in ethics who have limited awareness of cybersecurity practice, or of computer scientists who lack ethical expertise. An example of this occurred when potentially harmful research was carried out on non-consenting individuals in totalitarian states, research which effectively tested the firewalls of those states.[16] While this research clearly put individuals at risk without their consent, at least two RECs determined that the research was not of relevance for ethical review because it did not concern human participants or personal data. It did, however, concern IP addresses, which could easily be linked to a person, putting that person at risk.[17] In the case of research using SIS, the potential obscurity of the data could render the link with individuals still more difficult to recognize. Furthermore, it should be noted that these are concerns which arise in institutions with access to an REC. As pointed out by Macnish and van der Ham[17], many private companies do not have any ethical oversight facilities.
References
- ↑ 1.0 1.1 Singer, P.W.; Friedman, A. (2014). Cybersecurity and Cyberwar: What Everyone Needs to Know (1st ed.). Oxford University Press. ISBN 9780199918119. https://books.google.com/books?id=9VDSAQAAQBAJ.
- ↑ Sobers, R. (18 May 2018). "60 Must-Know Cybersecurity Statistics for 2018". Varonis Blog. Archived from the original on 08 November 2018. https://web.archive.org/web/20181108122758/https://www.varonis.com/blog/cybersecurity-statistics/. Retrieved 17 December 2018.
- ↑ Smith, P.T. (2018). "Cyberattacks as Casus Belli: A Sovereignty‐Based Account". Journal of Applied Philosophy 35 (2): 222–41. doi:10.1111/japp.12169.
- ↑ Wolfers, A. (1952). ""National Security" as an Ambiguous Symbol". Political Science Quarterly 67 (4): 481–502. doi:10.2307/2145138.
- ↑ Baldwin, D.A. (1997). "The Concept of Security". Review of International Studies 23 (1): 5–26. https://www.cambridge.org/core/journals/review-of-international-studies/article/concept-of-security/67188B6038200A97C0B0A370FDC9D6B8.
- ↑ Lundgren, B.; Möller, N. (2019). "Defining Information Security". Science and Engineering Ethics 25 (2): 419–41. doi:10.1007/s11948-017-9992-1.
- ↑ 7.0 7.1 Craigen, D.; Diakun-Thibault, N.; Purse, R. (2014). "Defining Cybersecurity". Technology Innovation Management Review 4 (10): 13–21. doi:10.22215/timreview/835.
- ↑ Hess, C.; Ostrom, E. (2006). Understanding Knowledge as a Commons: From Theory to Practice. MIT Press. ISBN 9780262083577.
- ↑ Bainbridge, L. (1983). "Ironies of automation". Automatica 19 (6): 775–79. doi:10.1016/0005-1098(83)90046-8.
- ↑ Goddard, K.; Roudsari, A.; Wyatt, J.C. (2012). "Automation bias: A systematic review of frequency, effect mediators, and mitigators". JAMIA 19 (1): 121–7. doi:10.1136/amiajnl-2011-000089. PMC 3240751. PMID 21685142. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3240751.
- ↑ Tucker, E. (July 2018). "Cyber security – why you’re doing it all wrong". Computer Weekly. https://www.computerweekly.com/opinion/Cyber-security-why-youre-doing-it-all-wrong. Retrieved 17 December 2018.
- ↑ Macnish, K. (2012). "Unblinking eyes: The ethics of automating surveillance". Ethics and Information Technology 14 (2): 151–67. doi:10.1007/s10676-012-9291-0.
- ↑ Johnson, M.L.; Bellovin, S.M.; Keromytis, A.D. (2012). "Computer Security Research with Human Subjects: Risks, Benefits and Informed Consent". In Danezis, G.; Dietrich, S.; Sako, K. Financial Cryptography and Data Security. Springer. pp. 131–37. doi:10.1007/978-3-642-29889-9_11. ISBN 9783642298899.
- ↑ 14.0 14.1 Miller, F.; Wertheimer, A., ed. (2009). The Ethics of Consent. Oxford University Press. ISBN 9780195335149.
- ↑ Manson, N.C.; O'Neill, O. (2007). Rethinking Informed Consent in Bioethics. Cambridge University Press. doi:10.1017/CBO9780511814600. ISBN 9780511814600.
- ↑ 16.0 16.1 Burnett, S.; Feamster, N. (2015). "Encore: Lightweight Measurement of Web Censorship with Cross-Origin Requests". Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication: 653–67. doi:10.1145/2785956.2787485.
- ↑ 17.0 17.1 17.2 17.3 17.4 van der Ham, J. (14 September 2018). "jeroenh/Ethics-and-Cyber-Security/template.tex". GitHub. https://github.com/jeroenh/Ethics-and-Cyber-Security/blob/master/template.tex.
- ↑ Pieters, W. (2011). "Explanation and trust: what to tell the user in security and AI?". Ethics and Information Technology 13 (1): 53–64. doi:10.1007/s10676-010-9253-3.
- ↑ Nichols, S. (7 September 2016). "St Jude sues short-selling MedSec over pacemaker 'hack' report". The Register. https://www.theregister.co.uk/2016/09/07/st_jude_sues_over_hacking_claim/. Retrieved 04 July 2018.
- ↑ Spring, T. (31 August 2016). "Researchers: MedSec, Muddy Waters Set Bad Precedent With St. Jude Medical Short". Threat Post. https://threatpost.com/researchers-medsec-muddy-waters-set-bad-precedent-with-st-jude-medical-short/120266/. Retrieved 04 July 2018.
- ↑ Manjikian, M. (2017). Cybersecurity Ethics. Routledge. pp. 81–112. ISBN 9781138717527.
- ↑ Moore, A.D. (2015). Privacy, Security and Accountability: Ethics, Law and Policy. Rowman and Littlefield. ISBN 9781783484768.
- ↑ Moore, A.D. (2003). "Privacy: Its Meaning and Value". American Philosophical Quarterly 40: 215–27. https://ssrn.com/abstract=1980880.
- ↑ Allen, A.L. (1999). "Privacy-as-Data Control: Conceptual, Practical, and Moral Limits of the Paradigm". Connecticut Law Review 32: 861–75. https://heinonline.org/HOL/LandingPage?collection=journals&handle=hein.journals/conlr32&div=35.
- ↑ Macnish, K. (2018). "Government Surveillance and Why Defining Privacy Matters in a Post‐Snowden World". Journal of Applied Philosophy 35 (2): 417–32. doi:10.1111/japp.12219.
- ↑ Cadwalladr, C.; Graham-Harrison, E. (17 March 2018). "Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach". The Guardian. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election.
- ↑ Ienca, M.; Vayena, E. (30 March 2018). "Cambridge Analytica and Online Manipulation". Scientific American. https://blogs.scientificamerican.com/observations/cambridge-analytica-and-online-manipulation/. Retrieved 10 July 2018.
- ↑ Council of the European Union, European Parliament (27 April 2016). "Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance)". European Union. https://publications.europa.eu/en/publication-detail/-/publication/3e485e15-11bd-11e6-ba9a-01aa75ed71a1/language-en.
- ↑ Davis, M. (1991). "Thinking like an engineer: The place of a code of ethics in the practice of a profession". Philosophy & Public Affairs 20 (2): 150–67. https://www.jstor.org/stable/2265293.
Notes
This presentation is faithful to the original, with only a few minor changes to presentation, grammar, and punctuation. In some cases important information was missing from the references, and that information was added. The 2018 article by Sobers on 60 must-know cybersecurity facts has been updated in 2019; an archived version from 2018 is used in this version. The Lundgren and Möller citation has changed since the original article published online; this version represents the new information. The original cites an article by Macnish and van der Ham, but the research doesn't appear to be published yet; found a draft on GitHub to cite. Non-figured "flavor" images from the original were not included here.