Difference between revisions of "Template:Article of the week"

From LIMSWiki
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Signoroni NatComm23 14.png|240px]]</div>
'''"[[Journal:Hierarchical AI enables global interpretation of culture plates in the era of digital microbiology|Hierarchical AI enables global interpretation of culture plates in the era of digital microbiology]]"'''


Full [[laboratory automation]] is revolutionizing work habits in an increasing number of clinical [[microbiology]] facilities worldwide, generating huge streams of [[Imaging|digital images]] for interpretation. Contextually, [[deep learning]] (DL) architectures are leading to paradigm shifts in the way computers can assist with difficult visual interpretation tasks in several domains. At the crossroads of these epochal trends, we present a system able to tackle a core task in clinical microbiology, namely the global interpretation of diagnostic [[Bacteria|bacterial]] [[Cell culture|culture]] plates, including presumptive [[pathogen]] identification. This is achieved by decomposing the problem into a hierarchy of complex subtasks and addressing them with a multi-network architecture we call DeepColony ... ('''[[Journal:Hierarchical AI enables global interpretation of culture plates in the era of digital microbiology|Full article...]]''')<br />
<br />
''Recently featured'':
{{flowlist |
* [[Journal:Critical analysis of the impact of AI on the patient–physician relationship: A multi-stakeholder qualitative study|Critical analysis of the impact of AI on the patient–physician relationship: A multi-stakeholder qualitative study]]
* [[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
}}

Latest revision as of 15:02, 3 June 2024
