1.
JMIR Med Educ; 10: e51757, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39137029

ABSTRACT

BACKGROUND: ChatGPT was not designed for health care, but its potential benefits depend on end-user understanding and acceptance, which makes health care students a crucial group. Research in this area remains limited.

OBJECTIVE: The primary aim of our study was to assess the frequency of ChatGPT use; the perceived knowledge, risks, and ethical issues associated with its use; and attitudes toward its use in health education, and to examine whether these differed across demographic groups. The second part of the study assessed frequency of use, perceived knowledge, perceived risk, and perceived ethics as predictors of participants' attitudes toward the use of ChatGPT.

METHODS: A cross-sectional survey was conducted from May to June 2023 among students of medicine, nursing, dentistry, nutrition, and laboratory science across the Americas. Descriptive statistics, chi-square tests, and ANOVA were used to assess differences across categories. Several ordinal logistic regression models were fitted to estimate the effect of the predictive factors (frequency of use, perceived knowledge, perceived risk, and ethics perception scores) on attitude as the dependent variable, adjusted for gender, institution type, major, and country. All analyses were conducted in Stata.

RESULTS: Of 2661 health care students, 42.99% (n=1144) were unaware of ChatGPT. The median knowledge score was "minimal" (median 2.00, IQR 1.00-3.00). Most respondents regarded ChatGPT as neither ethical nor unethical (median 2.61, IQR 2.11-3.11). Most participants "somewhat agreed" (median 3.89, IQR 3.44-4.34) that ChatGPT (1) benefits health care settings, (2) provides trustworthy data, (3) is a helpful tool for accessing clinical and educational medical information, and (4) makes work easier. In total, 70% (7/10) of users used it for homework. As perceived knowledge of ChatGPT increased, attitudes toward it became more favorable. Higher ratings of perceived ethics increased the likelihood of considering ChatGPT a source of trustworthy health care information (odds ratio [OR] 1.620, 95% CI 1.498-1.752), beneficial in medical issues (OR 1.495, 95% CI 1.452-1.539), and useful for medical literature (OR 1.494, 95% CI 1.426-1.564; P<.001 for all results).

CONCLUSIONS: Over 40% of health care students in the Americas (1144/2661, 42.99%) were unaware of ChatGPT despite its extensive use in the health field. Our data revealed positive attitudes toward ChatGPT and a desire to learn more about it. Medical educators must explore how chatbots can be incorporated into undergraduate health care curricula.
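As a rough illustration of the modeling approach described in the methods (the authors used Stata, which is not shown here), the following is a minimal ordinal logistic regression sketch in Python's statsmodels; the file name and column names are hypothetical stand-ins for the survey variables.

    # Minimal sketch (not the authors' Stata code) of an ordinal logistic
    # regression of attitude on the study's predictors, with the reported
    # covariate adjustments. File and column names are hypothetical.
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    df = pd.read_csv("survey.csv")  # hypothetical survey export

    # Attitude on an ordered 5-point Likert scale.
    df["attitude"] = pd.Categorical(
        df["attitude"], categories=[1, 2, 3, 4, 5], ordered=True
    )

    # Predictors (frequency of use, perceived knowledge, risk, ethics scores),
    # adjusted for gender, institution type, major, and country via dummies.
    exog = pd.get_dummies(
        df[["use_freq", "knowledge", "risk", "ethics",
            "gender", "institution_type", "major", "country"]],
        columns=["gender", "institution_type", "major", "country"],
        drop_first=True,
    ).astype(float)

    result = OrderedModel(df["attitude"], exog, distr="logit").fit(
        method="bfgs", disp=False
    )
    # Predictor coefficients exponentiate to odds ratios (thresholds excluded).
    print(result.summary())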


Subjects
Health Knowledge, Attitudes, Practice; Humans; Cross-Sectional Studies; Female; Male; Adult; Surveys and Questionnaires; Students, Health Occupations/psychology; Students, Health Occupations/statistics & numerical data; Attitude of Health Personnel; Young Adult; Students, Medical/psychology; Students, Medical/statistics & numerical data
2.
J Endourol; 38(8): 763-777, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38874270

ABSTRACT

Background: Among emerging AI technologies, the Chat Generative Pre-trained Transformer (ChatGPT) stands out as a notable language model developed through artificial intelligence research. Its proven versatility across domains, from language translation to health care data processing, underscores its promise for medical documentation, diagnostics, research, and education. This comprehensive review aimed to investigate the utility of ChatGPT in urology education and practice and to highlight its potential limitations.

Methods: The authors conducted a systematic review of the literature on ChatGPT and its applications in urology education, research, and practice, with a search strategy using databases such as PubMed and Embase. We analyzed the advantages and limitations of using ChatGPT in urology and evaluated its potential impact.

Results: A total of 78 records were eligible for inclusion. The benefits of ChatGPT were frequently cited across contexts. Educational and academic benefits were mentioned in 21 records (87.5%), in which ChatGPT showed the ability to assist urologists by offering precise information and responding to inquiries derived from patient data analysis, thereby supporting decision making. In 18 records (75%), reported advantages included personalized medicine, predictive capabilities for disease risks and outcomes, streamlined clinical workflows, and improved diagnostics. Nevertheless, concerns were expressed regarding potential misinformation, underscoring the need for human supervision to guarantee patient safety and address ethical issues.

Conclusion: The potential applications of ChatGPT could bring transformative changes to urology education, research, and practice. AI technology can serve as a useful tool to augment human intelligence, but it must be used responsibly and ethically.


Subjects
Artificial Intelligence; Urology; Humans; Urology/education; Delivery of Health Care
3.
J Hand Surg Glob Online; 6(2): 164-168, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38903829

ABSTRACT

Purpose: There is a paucity of prior investigations examining applications of artificial intelligence (AI) in upper-extremity (UE) surgical education. The purpose of this investigation was to assess the performance of a novel AI tool (ChatGPT) on UE questions from the Orthopaedic In-Training Examination (OITE) and to compare it with the examination performance of hand surgery residents.

Methods: We selected questions from the 2020-2022 OITEs covering the hand and UE as well as the shoulder and elbow content domains. These questions were divided into two categories: those with text-only prompts (text-only questions) and those including supplementary images or videos (media questions). Two authors (B.K.F. and G.S.M.) converted the accompanying media into text-based descriptions. Included questions were entered into ChatGPT (version 3.5) to generate responses. Each OITE question was entered into ChatGPT three times: (1) as an open-ended prompt requesting a free-text response; (2) as a multiple-choice prompt without justification; and (3) as a multiple-choice prompt with justification. We used the OITE scoring guide for each year to compare the percentage of correct AI responses with correct resident responses.

Results: A total of 102 UE OITE questions were included; 59 were text-only and 43 were media-based. ChatGPT correctly answered 46 of 102 questions (45%) using the multiple-choice-without-justification prompt (42% for text-only and 44% for media questions). By comparison, postgraduate year 1 orthopaedic residents achieved an average score of 51%, and postgraduate year 5 residents answered 76% of the same questions correctly.

Conclusions: ChatGPT answered fewer UE OITE questions correctly than hand surgery residents at all training levels.

Clinical relevance: Further development of AI tools may be necessary if this technology is to play a role in UE education.
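As context for the three prompt formats, here is a hedged sketch of how such a protocol might be scripted against the OpenAI API; the study pasted questions into the ChatGPT (version 3.5) web interface rather than the API, so the model name and prompt wording below are assumptions.

    # Hypothetical scripting of the study's three prompt formats; the authors
    # used the ChatGPT 3.5 web interface, not the API, so this is only a sketch.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    FORMATS = {
        "open_ended": "Answer this question in free text:\n{q}",
        "mc_no_justification": (
            "Choose the single best answer (letter only):\n{q}\n{choices}"
        ),
        "mc_with_justification": (
            "Choose the single best answer and justify your choice:\n{q}\n{choices}"
        ),
    }

    def ask(question: str, choices: str, fmt: str) -> str:
        """Send one OITE question to the model in the requested format."""
        prompt = FORMATS[fmt].format(q=question, choices=choices)
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed stand-in for ChatGPT 3.5
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content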

4.
JMIR Med Educ; 10: e54507, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38801706

ABSTRACT

Large language models (LLMs) such as ChatGPT are transforming the landscape of medical education. They offer a vast range of applications, including tutoring (personalized learning), patient simulation, generation of examination questions, and streamlined access to information. The rapid advancement of medical knowledge and the need for personalized learning underscore the relevance of exploring innovative strategies for integrating artificial intelligence (AI) into medical education. In this paper, we propose coupling evidence-based learning strategies, such as active recall and memory cues, with AI to optimize learning. These strategies include the generation of tests, mnemonics, and visual cues.
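A minimal sketch of the kind of coupling the paper proposes: generating active-recall questions and a mnemonic from study notes with an LLM. The prompt wording and model name are assumptions, not taken from the paper.

    # Illustrative only: generating active-recall questions and a memory cue
    # from lecture notes with an LLM. Prompt and model name are assumptions.
    from openai import OpenAI

    client = OpenAI()

    def make_study_aids(notes: str, n_questions: int = 5) -> str:
        prompt = (
            f"From these medical lecture notes, write {n_questions} active-recall "
            "questions with brief answers, then one mnemonic as a memory cue:\n\n"
            f"{notes}"
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # assumed model
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    print(make_study_aids("The cranial nerves are: olfactory, optic, oculomotor..."))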


Subjects
Artificial Intelligence; Education, Medical; Humans; Education, Medical/methods; Learning; Evidence-Based Medicine/education; Evidence-Based Medicine/methods
5.
Adv Med Educ Pract; 15: 393-400, 2024.
Article in English | MEDLINE | ID: mdl-38751805

ABSTRACT

Introduction: This study compared the capabilities of ChatGPT-4 with those of medical students in answering multiple-choice questions (MCQs), using the revised Bloom's taxonomy as a benchmark.

Methods: A cross-sectional study was conducted at The University of the West Indies, Barbados. ChatGPT-4 and medical students were assessed on MCQs from various medical courses using computer-based testing.

Results: The study included 304 MCQs. Students demonstrated good knowledge, with 78% correctly answering at least 90% of the questions. However, ChatGPT-4 achieved a higher overall score (73.7%) than the students (66.7%). Course type significantly affected ChatGPT-4's performance, but revised Bloom's taxonomy level did not. A check for association between program level and Bloom's taxonomy level for ChatGPT-4's correct answers showed a highly significant relationship (p<0.001), reflecting a concentration of "remember"-level questions in preclinical courses and "evaluate"-level questions in clinical courses.

Discussion: The study highlights ChatGPT-4's proficiency on standardized tests but indicates limitations in clinical reasoning and practical skills. This performance discrepancy suggests that the effectiveness of artificial intelligence (AI) varies with course content.

Conclusion: While ChatGPT-4 shows promise as an educational tool, its role should be supplementary, with strategic integration into medical education to leverage its strengths and address its limitations. Further research is needed to explore AI's impact on medical education and student performance across educational levels and courses.
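The association check reported above is, in statistical terms, a test of independence on a contingency table; a hedged sketch follows, with invented counts, since the abstract reports only the p value.

    # Sketch of the reported association check: a chi-square test of
    # independence between program level and Bloom's taxonomy level for
    # ChatGPT-4's correct answers. Counts below are invented for illustration.
    import numpy as np
    from scipy.stats import chi2_contingency

    # Rows: preclinical, clinical; columns: remember, understand, apply, evaluate.
    table = np.array([
        [40, 25, 10, 5],   # hypothetical preclinical counts
        [8, 20, 30, 35],   # hypothetical clinical counts
    ])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2g}")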

7.
JMIR Med Educ; 10: e55048, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38686550

ABSTRACT

Background: The deployment of OpenAI's ChatGPT-3.5 and its subsequent versions, ChatGPT-4 and ChatGPT-4 With Vision (4V; also known as "GPT-4 Turbo With Vision"), has notably influenced the medical field. Having demonstrated remarkable performance on medical examinations globally, these models show potential for educational applications. However, their effectiveness in non-English contexts, particularly on Chile's medical licensing examination, a critical step for medical practitioners in Chile, is less explored. This gap highlights the need to evaluate ChatGPT's adaptability to diverse linguistic and cultural contexts.

Objective: This study aims to evaluate the performance of ChatGPT versions 3.5, 4, and 4V on the EUNACOM (Examen Único Nacional de Conocimientos de Medicina), a major medical examination in Chile.

Methods: Three official practice drills (540 questions) from the University of Chile, mirroring the EUNACOM's structure and difficulty, were used to test ChatGPT versions 3.5, 4, and 4V. Each version was given 3 attempts at each drill. Responses from each attempt were systematically categorized and analyzed to assess accuracy.

Results: All versions of ChatGPT passed the EUNACOM drills. Versions 4 and 4V outperformed version 3.5, achieving average accuracy rates of 79.32% and 78.83%, respectively, compared with 57.53% for version 3.5 (P<.001). Version 4V did not outperform version 4 (P=.73), despite its additional visual capabilities. Across the EUNACOM's medical areas, versions 4 and 4V consistently outperformed version 3.5. Version 3.5 was most accurate in psychiatry (69.84%), while versions 4 and 4V were most accurate in surgery (90.00% and 86.11%, respectively). Versions 3.5 and 4 performed worst in internal medicine (52.74% and 75.62%, respectively), and version 4V performed worst in public health (74.07%).

Conclusions: This study shows that ChatGPT can pass the EUNACOM, with distinct proficiencies across versions 3.5, 4, and 4V. Notably, advances in artificial intelligence (AI) have not significantly improved performance on image-based questions. The variation in proficiency across medical fields suggests the need for more nuanced AI training. The study also underscores the importance of exploring innovative ways to use AI to augment human cognition and enhance learning. Such advances could significantly influence medical education, fostering not only knowledge acquisition but also critical thinking and problem-solving skills among health care professionals.
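For concreteness, the accuracy rates reduce to correct responses over total graded responses (540 questions x 3 attempts per version); the sketch below uses hypothetical correct-answer counts, back-calculated only to reproduce the reported percentages.

    # Accuracy = correct responses / (540 questions x 3 attempts). The counts
    # are hypothetical, chosen to match the reported accuracy rates.
    TOTAL = 540 * 3  # 1620 graded responses per version
    correct = {"ChatGPT-3.5": 932, "ChatGPT-4": 1285, "ChatGPT-4V": 1277}

    for version, n in correct.items():
        print(f"{version}: {n / TOTAL:.2%}")  # 57.53%, 79.32%, 78.83%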


Subjects
Educational Measurement; Licensure, Medical; Female; Humans; Male; Chile; Clinical Competence/standards; Educational Measurement/methods; Educational Measurement/standards
8.
Neurourol Urodyn; 43(4): 935-941, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38451040

ABSTRACT

INTRODUCTION: Artificial intelligence (AI) shows immense potential in medicine, and the Chat Generative Pre-trained Transformer (ChatGPT) has been used for various purposes in the field. However, it may not match the complexity and nuance of certain medical scenarios. This study evaluates the accuracy of ChatGPT 3.5 and 4 in providing recommendations on the management of postprostatectomy urinary incontinence (PPUI), using the Incontinence After Prostate Treatment: AUA/SUFU Guideline as the best-practice benchmark.

MATERIALS AND METHODS: A set of questions based on the AUA/SUFU Guideline was prepared, comprising 10 conceptual questions and 10 case-based questions. All questions were open-ended and were entered into ChatGPT with a request to limit each answer to 200 words, for greater objectivity. Responses were graded as correct (1 point), partially correct (0.5 point), or incorrect (0 points). The performance of ChatGPT versions 3.5 and 4 was analyzed overall and separately for the conceptual and case-based questions.

RESULTS: ChatGPT 3.5 scored 11.5 of 20 points (57.5% accuracy), while ChatGPT 4 scored 18 (90.0%; p = 0.031). On the conceptual questions, ChatGPT 3.5 gave six correct, one partially correct, and three incorrect answers, for a score of 6.5; ChatGPT 4 gave eight correct and two partially correct answers, scoring 9.0. On the case-based questions, ChatGPT 3.5 scored 5.0 and ChatGPT 4 scored 9.0. The domains where ChatGPT performed worst were evaluation, treatment options, surgical complications, and special situations.

CONCLUSION: ChatGPT 4 outperformed ChatGPT 3.5 in providing recommendations for the management of PPUI against the AUA/SUFU Guideline benchmark. Continuous monitoring is essential to evaluate the development and precision of AI-generated medical information.
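The grading scheme above maps directly to a weighted sum; a minimal sketch follows, using the conceptual-question tallies reported for ChatGPT 3.5.

    # Minimal sketch of the grading scheme: correct = 1, partially correct = 0.5,
    # incorrect = 0, summed over the 20 questions.
    GRADE_POINTS = {"correct": 1.0, "partial": 0.5, "incorrect": 0.0}

    def score(grades: list[str]) -> float:
        return sum(GRADE_POINTS[g] for g in grades)

    # ChatGPT 3.5 on the conceptual questions: 6 correct, 1 partial, 3 incorrect.
    conceptual_35 = ["correct"] * 6 + ["partial"] + ["incorrect"] * 3
    assert score(conceptual_35) == 6.5
    print(f"Overall ChatGPT 3.5 accuracy: {11.5 / 20:.1%}")  # 57.5%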


Subjects
Artificial Intelligence; Urinary Incontinence; Male; Humans; Social Behavior; Pelvis; Prostatectomy; Repressor Proteins
9.
Rev. colomb. anestesiol; 52(1), Mar 2024.
Article in English | LILACS-Express | LILACS | ID: biblio-1535710

ABSTRACT

Introduction: In recent months, ChatGPT has attracted considerable interest for its ability to perform complex tasks through natural language and conversation. However, its use in clinical decision-making is limited, and its application in anesthesiology is unknown.

Objective: To assess ChatGPT's basic and clinical reasoning and its learning ability in a performance test on general and specific anesthesia topics.

Methods: A three-phase assessment was conducted. Basic knowledge of anesthesia was assessed in the first phase, followed by a review of difficult airway management and, finally, a measurement of decision-making ability in ten clinical cases. The second and third phases were conducted before and after feeding ChatGPT the 2022 American Society of Anesthesiologists guidelines on difficult airway management.

Results: On average, ChatGPT succeeded 65% of the time in the first phase and 48% of the time in the second phase. Agreement in the clinical cases was 20%, with 90% relevance and a 10% error rate. After learning, ChatGPT improved in the second phase, answering correctly 59% of the time, and agreement in the clinical cases increased to 40%.

Conclusions: ChatGPT showed acceptable accuracy on the basic knowledge test, high relevance in the management of specific difficult airway clinical cases, and the ability to improve after learning.


10.
Rev. colomb. anestesiol; 52(1), Mar 2024.
Article in English | LILACS-Express | LILACS | ID: biblio-1535712

ABSTRACT

The rapid advance of artificial intelligence (AI) has taken the world by "surprise" owing to the lack of regulation of this technological innovation, which, while promising application opportunities in many fields of knowledge, including education, simultaneously generates concern, rejection, and even fear. In health sciences education, clinical simulation has transformed educational practice; however, its formal integration is still heterogeneous, and we now face a new technological revolution in which AI has the potential to transform the way we conceive of its application.


