Template:Article of the week

<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1_Bellary_PerspectivesClinRes2014_5-4.jpg|220px]]</div>
<div style="float: left; margin: 0.5em 0.9em 0.4em 0em;">[[File:Fig1 Niszczota EconBusRev23 9-2.png|240px]]</div>
'''"[[Journal:Basics of case report form designing in clinical research|Basics of case report form designing in clinical research]]"'''
'''"[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Judgements of research co-created by generative AI: Experimental evidence]]"'''


The introduction of [[ChatGPT]] has fuelled a public debate on the appropriateness of using generative [[artificial intelligence]] (AI) ([[large language model]]s or LLMs) in work, including a debate on how they might be used (and abused) by researchers. In the current work, we test whether delegating parts of the research process to LLMs leads people to distrust researchers and devalues their scientific work. Participants (''N'' = 402) considered a researcher who delegates elements of the research process to a PhD student or LLM and rated three aspects of such delegation. Firstly, they rated whether it is morally appropriate to do so. Secondly, they judged whether—after deciding to delegate the research process—they would trust the scientist (who decided to delegate) to oversee future projects ... ('''[[Journal:Judgements of research co-created by generative AI: Experimental evidence|Full article...]]''')<br />
 
''Recently featured'':
{{flowlist |
* [[Journal:Geochemical biodegraded oil classification using a machine learning approach|Geochemical biodegraded oil classification using a machine learning approach]]
* [[Journal:Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study|Knowledge of internal quality control for laboratory tests among laboratory personnel working in a biochemistry department of a tertiary care center: A descriptive cross-sectional study]]
* [[Journal:Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study|Sigma metrics as a valuable tool for effective analytical performance and quality control planning in the clinical laboratory: A retrospective study]]
}}
