Difference between revisions of "Template:Article of the week"
Shawndouglas (talk | contribs) (Updated article of the week text)
<!--<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig9 Brown JMIRMedInfo2020 8-9.png|240px]]</div>//-->
'''"[[Journal:Explainability for artificial intelligence in healthcare: A multidisciplinary perspective|Explainability for artificial intelligence in healthcare: A multidisciplinary perspective]]"'''

Explainability is one of the most heavily debated topics when it comes to the application of [[artificial intelligence]] (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet, explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice. Taking AI-based [[clinical decision support system]]s as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using Beauchamp and Childress' ''Principles of Biomedical Ethics'' (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI. ('''[[Journal:Explainability for artificial intelligence in healthcare: A multidisciplinary perspective|Full article...]]''')<br />
<br />
''Recently featured'':
{{flowlist |
* [[Journal:Secure record linkage of large health data sets: Evaluation of a hybrid cloud model|Secure record linkage of large health data sets: Evaluation of a hybrid cloud model]]
* [[Journal:Risk assessment for scientific data|Risk assessment for scientific data]]
* [[Journal:Methods for quantification of cannabinoids: A narrative review|Methods for quantification of cannabinoids: A narrative review]]
}}
Revision as of 17:05, 22 November 2021
"Explainability for artificial intelligence in healthcare: A multidisciplinary perspective"
Explainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet, explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice. Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using Beauchamp and Childress' Principles of Biomedical Ethics (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI. (Full article...)
Recently featured: