Smart information systems in cybersecurity: An ethical analysis
| Full article title | Smart information systems in cybersecurity: An ethical analysis |
|---|---|
| Journal | ORBIT Journal |
| Author(s) | Macnish, Kevin; Fernandez-Inguanzo, Ana; Kirichenko, Alexey |
| Author affiliation(s) | University of Twente, F-Secure |
| Primary contact | Email: k dot macnish at utwente dot nl |
| Year published | 2019 |
| Volume and issue | 2(2) |
| Page(s) | 105 |
| DOI | 10.29297/orbit.v2i2.105 |
| ISSN | 2515-8562 |
| Distribution license | Creative Commons Attribution 4.0 International |
| Website | https://www.orbit-rri.org/ojs/index.php/orbit/article/view/105 |
| Download | https://www.orbit-rri.org/ojs/index.php/orbit/article/view/105/117 (PDF) |
Abstract
This report provides an overview of the current implementation of smart information systems (SIS) in the field of cybersecurity. It also identifies the positive and negative aspects of using SIS in cybersecurity, including ethical issues which could arise while using SIS in this area. One company working in the industry of telecommunications (Company A) is analysed in this report. Further specific ethical issues that arise when using SIS technologies in Company A are critically evaluated. Finally, conclusions are drawn on the case study, and areas for improvement are suggested.
Keywords: cybersecurity, ethics, smart information systems, big data
Introduction
Increasing numbers of devices are becoming connected to the internet. Cisco, a global leader in information technology, networking, and cybersecurity, estimated that more than 8.7 billion devices were connected to the internet by the end of 2012, a number expected to rise to over 40 billion by 2020.[1] Cybersecurity has therefore become an important concern in both the public and private sectors. In the public sector, governments have created and enlarged cybersecurity divisions such as the U.S. Cyber Command and the Chinese “Information Security Base,” whose mission is to protect critical national security assets.[1]
In the private sphere, companies are struggling to keep up with the need for security in the face of increasingly sophisticated attacks from a variety of sources. In 2017, there were “over 130 large-scale, targeted breaches [by hackers of computer networks] in the U.S.,” and “between January 1, 2005 and April 18, 2018 there have been 8,854 recorded breaches.”[2] Furthermore, cyberattacks affect not only the online world but also create vulnerabilities in the physical world, particularly when an attack threatens industries such as healthcare, communications, energy, or military networks, putting large swathes of society at risk. Indeed, it has been argued that some cyberattacks could constitute legitimate grounds for declarations of (physical) war.[3]
Cybersecurity is therefore a complex and multi-disciplinary issue. Security has been defined in the international relations and security studies spheres both as “the absence of threats to acquired values”[4] and as “the absence of harm to acquired values.”[5] Within the profession, cybersecurity is more commonly defined in terms of confidentiality, integrity, and availability of information.[6] A 2014 literature review on the meanings attributed to cybersecurity led to the broader definition of cybersecurity as “the organization and collection of resources, processes, and structures used to protect cyberspace and cyberspace-enabled systems.”[7]
Cybersecurity can therefore be seen to encompass property rights over networks that could come under attack, as well as related concerns such as issues of access, extraction, contribution, removal, management, exclusion, and alienation.[8] Hence cybersecurity fulfills a role similar to that of physical security in protecting property from some level of intrusion. Craigen et al. also argue that cybersecurity refers not only to a technical domain, but that the values underlying that domain should be included in the description of cybersecurity.[7] Seen this way, ethical issues and values form the bedrock of cybersecurity research by identifying the values that cybersecurity seeks to protect.
The case study is divided into four main sections. The next two sections focus on the technical aspects of cybersecurity and a literature review of academic articles concerning ethical issues in cybersecurity, respectively. Then the practice of cybersecurity research is presented through an interview conducted with four employees at a major telecommunications software and hardware company, Company A. Finally, the last section critically evaluates ethical issues that have arisen in the use of SIS technologies in cybersecurity.
The use of smart information systems in cybersecurity
The introduction of big data and artificial intelligence (smart information systems, or SIS) into cybersecurity is still in its early phase. Comparatively little work currently applies SIS to cybersecurity, for several reasons. These include the remarkable diversity of cyberattacks (e.g., different approaches to hacking systems and introducing malware), the danger of false positives and false negatives, and the relatively low intelligence of existing SIS.
Taking these in turn, the diversity of attacks (in their source, their focus, and their motivation) is significant. Attacks can be launched from outside an organization (e.g., by a hacking collective such as Anonymous) or from an insider (e.g., a disaffected employee looking to damage a system). They may come from a single source, typically masked through the darknet, or from a source that has engaged in a number of “hops” (moving from one computer on a network to another, thus masking the original source), such that the originator could appear to be in a hospital or on a military base. If an attack were to appear to come from a military base, this might encourage the attacked party to “hack back.” However, if the military base were an artificial screen placed in front of a hospital, the reverse hack could bring down that hospital’s computer networks. The focus of the attack could be on imitating a user or system administrator (local IT expert) or on exploiting a security flaw in unpatched code (code containing a flaw for which no fix has yet been applied, the basis of a zero-day exploit). The motivation of the attack can range from state security and intelligence gathering (e.g., U.S. intelligence spying on Chinese military installations) to financial gain through blackmail (e.g., encrypting a company’s files and agreeing to decrypt them only when the company has paid the hacker a certain sum of money). This diversity means that it is extremely difficult to develop a SIS that will effectively recognize an attack for what it is.
Secondly, the danger of false positives and false negatives is significant in light of the difficulty of recognizing an attack. If an attack goes unrecognized by a SIS (a false negative), it may succeed. This is particularly the case if security personnel have come to place undue trust in the automation and do not provide quality assurance of the SIS, a behavior known as “automation bias.”[9][10] By contrast, a SIS could be so cautious that it produces an excessive number of false positives, in which a legitimate interaction is falsely labelled an attack and not permitted to continue. This leads to frustration and could result in the eventual disabling of the SIS.[11]
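The asymmetry between false negatives and false positives can be made concrete with a toy calculation (all numbers below are invented for illustration, not drawn from the article):

```python
# Illustrative only: assumed figures for a detector screening 100,000 events.
attacks = 50                     # true attacks hidden in the traffic
benign = 100_000 - attacks       # legitimate events

recall = 0.90                    # fraction of attacks the SIS catches
false_positive_rate = 0.02       # fraction of benign events wrongly flagged

missed_attacks = attacks * (1 - recall)       # false negatives
false_alarms = benign * false_positive_rate   # false positives

print(f"missed attacks (false negatives): {missed_attacks:.0f}")
print(f"false alarms (false positives): {false_alarms:.0f}")
# Even a 2% false-positive rate buries the ~45 caught attacks under
# roughly 2,000 false alarms, inviting the alert fatigue described above.
```

Even with assumed rates this favorable, analysts would review about forty false alarms for every real detection, which is the frustration that tempts operators to disable the SIS.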
Thirdly, and despite some hype in the media, SIS are still at a relatively unintelligent stage of development. Computer vision systems designed to identify people loitering, for example, recognize that a person has not left a circle of radius x within y seconds, but they cannot determine why the person is there or what their intent may be. This inability to infer intentions from actions renders automated systems relatively impotent.
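The loitering rule described above amounts to a simple geometric check. A minimal sketch, assuming position tracks have already been extracted from video (the `is_loitering` helper and its track format are hypothetical, not the system the article describes):

```python
import math

def is_loitering(track, radius, seconds):
    """Flag a track if the person stayed within `radius` metres of their
    starting point for at least `seconds` seconds.

    `track` is a list of (timestamp, x, y) tuples, sorted by time.
    """
    if not track:
        return False
    t0, x0, y0 = track[0]
    for _, x, y in track:
        if math.hypot(x - x0, y - y0) > radius:
            return False  # left the circle, so not loitering by this rule
    # Stayed inside the circle: loitering if enough time has elapsed.
    return track[-1][0] - t0 >= seconds
```

Note that the check says only that someone stayed put; nothing in it, or in any straightforward extension of it, reveals why they did.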
Despite these concerns, there are some promising uses of SIS in cybersecurity. The most effective is in scanning systems for known attacks, or for known abnormal patterns of behavior that have a very high likelihood of being an attack. When coupled with a human operator who reviews any alerts and determines whether to take action, the combined human-machine security system can prove effective, albeit still facing the above problems of automation bias and excessive false positives.[12]
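Scanning for known attack patterns with a human in the loop can be sketched as follows (the signatures, sample events, and function names are invented for illustration; production systems use far richer signature languages and event formats):

```python
# Minimal sketch of signature-based scanning with human review.
KNOWN_BAD_PATTERNS = [
    "etc/passwd",     # path-traversal attempt
    "' OR '1'='1",    # classic SQL-injection probe
    "<script>",       # reflected XSS probe
]

def scan(event: str) -> bool:
    """Return True if the event matches any known-attack signature."""
    return any(sig in event for sig in KNOWN_BAD_PATTERNS)

def triage(events):
    """Queue matching events for a human analyst rather than auto-blocking,
    mitigating both automation bias and false-positive lockouts."""
    return [e for e in events if scan(e)]

alerts = triage([
    "GET /index.html",
    "GET /../../etc/passwd",
    "login name=' OR '1'='1",
])
print(alerts)  # the two suspicious requests go to the analyst's queue
```

Keeping the final block/allow decision with the analyst is precisely the human-machine pairing the paragraph describes: the SIS narrows the stream, and the human provides the judgment the SIS lacks.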
Literature review: Ethical issues of using SIS in cybersecurity
In this section we review the most fundamental ethical issues in cybersecurity discussed in the academic literature. Our goal is to compare them with the interview conducted at a major telecommunications software and hardware company, Company A, in order to give an overview of the ethical issues in cybersecurity.
The literature review was carried out through a combination of online search using generic engines, such as Google and Google Scholar, and discipline-specific search engines on websites such as PhilPapers.org and The Philosopher's Index. Selected papers were then read and, where appropriate, the bibliographic references were used to locate further literature. Generic search on Google also provided links to trade publications and websites that were a further source of background information.
The ethical issues arising from the literature review were informed consent, protection from harm, privacy and control of data, vulnerabilities and disclosure, competence of research ethics committees, security issues, trust and transparency, risk, responsibility, and business interests and codes of conduct.
Informed consent
Acquiring informed consent is an important activity in cybersecurity, and one that has been at the heart of research ethics and practice for decades.[13][14] Consent is variously valued for respecting autonomy[14] or for minimizing harm.[15] As such, justifying informed consent is a considerable challenge for data analytics, where anonymized data may be used without the explicit consent of the person from whom it originates. This is also true within global cybersecurity, where a number of complicating issues arise, such as the complexity of informing users about detailed technical aspects in order to provide the necessary information, as well as language barriers.[16] This, though, is the case for many other areas of research, such as the medical or social sciences, and the scripts need not be different in cybersecurity.[17]
References
- ↑ 1.0 1.1 Singer, P.W.; Friedman, A. (2014). Cybersecurity and Cyberwar: What Everyone Needs to Know (1st ed.). Oxford University Press. ISBN 9780199918119. https://books.google.com/books?id=9VDSAQAAQBAJ.
- ↑ Sobers, R. (18 May 2018). "60 Must-Know Cybersecurity Statistics for 2018". Varonis Blog. Archived from the original on 08 November 2018. https://web.archive.org/web/20181108122758/https://www.varonis.com/blog/cybersecurity-statistics/. Retrieved 17 December 2018.
- ↑ Smith, P.T. (2015). "Cyberattacks as Casus Belli: A Sovereignty‐Based Account". Journal of Applied Philosophy 35 (2): 222–41. doi:10.1111/japp.12169.
- ↑ Wolfers, A. (1952). ""National Security" as an Ambiguous Symbol". Political Science Quarterly 67 (4): 481–502. doi:10.2307/2145138.
- ↑ Baldwin, D.A. (1997). "The Concept of Security". Review of International Studies 23 (1): 5–26. https://www.cambridge.org/core/journals/review-of-international-studies/article/concept-of-security/67188B6038200A97C0B0A370FDC9D6B8.
- ↑ Lundgren, B.; Möller, N. (2019). "Defining Information Security". Science and Engineering Ethics 25 (2): 419–41. doi:10.1007/s11948-017-9992-1.
- ↑ 7.0 7.1 Craigen, D.; Diakun-Thibault, N.; Purse, R. (2014). "Defining Cybersecurity". Technology Innovation Management Review 4 (10): 13–21. doi:10.22215/timreview/835.
- ↑ Hess, C.; Ostrom, E. (2006). Understanding Knowledge as a Commons: From Theory to Practice. MIT Press. ISBN 9780262083577.
- ↑ Bainbridge, L. (1983). "Ironies of automation". Automatica 19 (6): 775–79. doi:10.1016/0005-1098(83)90046-8.
- ↑ Goddard, K.; Roudsari, A.; Wyatt, J.C. (2012). "Automation bias: A systematic review of frequency, effect mediators, and mitigators". JAMIA 19 (1): 121–7. doi:10.1136/amiajnl-2011-000089. PMC 3240751. PMID 21685142. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3240751.
- ↑ Tucker, E. (July 2018). "Cyber security – why you’re doing it all wrong". Computer Weekly. https://www.computerweekly.com/opinion/Cyber-security-why-youre-doing-it-all-wrong. Retrieved 17 December 2018.
- ↑ Macnish, K. (2012). "Unblinking eyes: The ethics of automating surveillance". Ethics and Information Technology 14 (2): 151–67. doi:10.1007/s10676-012-9291-0.
- ↑ Johnson M.L.; Bellovin S.M.; Keromytis A.D. (2012). "Computer Security Research with Human Subjects: Risks, Benefits and Informed Consent". In Danezis G.; Dietrich S.; Sako K.. Financial Cryptography and Data Security. Springer. pp. 131–37. doi:10.1007/978-3-642-29889-9_11. ISBN 9783642298899.
- ↑ 14.0 14.1 Miller, F.; Wertheimer, A., ed. (2009). The Ethics of Consent. Oxford University Press. ISBN 9780195335149.
- ↑ Manson, N.C.; O'Neill, O. (2007). Rethinking Informed Consent in Bioethics. Cambridge University Press. doi:10.1017/CBO9780511814600. ISBN 9780511814600.
- ↑ Burnett, S.; Feamster, N. (2015). "Encore: Lightweight Measurement of Web Censorship with Cross-Origin Requests". Proceedings of the 2015 ACM Conference on Special Interest Group on Data Communication: 653–67. doi:10.1145/2785956.2787485.
- ↑ van der Ham, J. (14 September 2018). "jeroenh/Ethics-and-Cyber-Security/template.tex". GitHub. https://github.com/jeroenh/Ethics-and-Cyber-Security/blob/master/template.tex.
Notes
This presentation is faithful to the original, with only a few minor changes to presentation, grammar, and punctuation. In some cases important information was missing from the references, and that information was added. The 2018 article by Sobers on 60 must-know cybersecurity statistics was updated in 2019; an archived version from 2018 is used in this version. The Lundgren and Möller citation has changed since the original article was published online; this version reflects the new information. The original cites an article by Macnish and van der Ham, but the research does not appear to have been published yet; a draft on GitHub is cited instead.