Ying Wang, Xiu-Liang Zhu, Mohamad Wasil Peeroo, Zi-Hua Qian, Dan Shi, Shu-Mei Wei, Ri-Sheng Yu. Korean Society of Radiology, 2015. Korean Journal of Radiology Vol.16 No.1
To describe the imaging features of pelvic solitary plasmacytoma and to correlate them with the pathologic grade. A retrospective study was performed on the imaging features of 10 patients with a histological diagnosis of pelvic solitary plasmacytoma. The imaging studies were assessed for bone expansion, cortical destruction, signal intensity/density of the soft tissue mass, and enhancement pattern, which were then correlated with the pathologic grade. The imaging features of pelvic solitary plasmacytoma revealed 3 types on computed tomography and MRI: multilocular (n = 5), unilocular (n = 2), and complete osteolytic destruction (n = 3). Pathologically, the tumors were classified into low, intermediate, and high grades. Features such as multilocular change, perilesional osteosclerosis, slight expansion, local bone cortex disruption, and masses confined within the area of bone destruction often suggest a low-grade solitary plasmacytoma, whereas complete osteolytic destruction, a huge soft tissue mass, and osseous defects imply a higher pathologic grade. Pelvic solitary plasmacytoma has various imaging manifestations, but a slightly expansile osteolytic lesion with multilocular change or homogeneous enhancement is highly suggestive of the diagnosis. The distinctive imaging features of pelvic solitary plasmacytoma correlate well with the pathologic grade.
Bashar Zaidat, Nancy Shrestha, Ashley M. Rosenberg, Wasil Ahmed, Rami Rajjoub, Timothy Hoang, Mateo Restrepo Mejia, Akiro H. Duey, Justin E. Tang, Jun S. Kim, Samuel K. Cho. Korean Spinal Neurosurgery Society, 2024. Neurospine Vol.21 No.1
Objective: Large language models, such as chat generative pre-trained transformer (ChatGPT), have great potential for streamlining medical processes and assisting physicians in clinical decision-making. This study aimed to assess the potential of ChatGPT's 2 models (GPT-3.5 and GPT-4.0) to support clinical decision-making by comparing their responses for antibiotic prophylaxis in spine surgery to accepted clinical guidelines. Methods: ChatGPT models were prompted with questions from the North American Spine Society (NASS) Evidence-based Clinical Guidelines for Multidisciplinary Spine Care for Antibiotic Prophylaxis in Spine Surgery (2013). Their responses were then compared to the guidelines and assessed for accuracy. Results: Of the 16 NASS guideline questions concerning antibiotic prophylaxis, 10 responses (62.5%) were accurate with the GPT-3.5 model and 13 (81%) were accurate with GPT-4.0. Twenty-five percent of GPT-3.5 answers were deemed overly confident, while 62.5% of GPT-4.0 answers directly used the NASS guideline as evidence for their responses. Conclusion: ChatGPT demonstrated an impressive ability to accurately answer clinical questions. The GPT-3.5 model's performance was limited by its tendency to give overly confident responses and its inability to identify the most significant elements in its responses. The GPT-4.0 model's responses had higher accuracy and cited the NASS guideline as direct evidence many times. While GPT-4.0 is still far from perfect, it has shown an exceptional ability, relative to GPT-3.5, to extract the most relevant available research. Thus, while ChatGPT has shown far-reaching potential, scrutiny should still be exercised regarding its clinical use at this time.
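The percentages in the Results above follow directly from raw counts over the 16 guideline questions. A minimal sketch (not the authors' code; the count of 4 overly confident GPT-3.5 answers is inferred here from the reported 25%, and is an assumption):

```python
# Reproduce the reported percentages from the counts given in the abstract.
# All counts except gpt35_overconfident_count are stated in the abstract;
# 4 overconfident answers is inferred from "Twenty-five percent" of 16.

TOTAL_QUESTIONS = 16  # NASS guideline questions on antibiotic prophylaxis

def pct(count: int, total: int = TOTAL_QUESTIONS) -> float:
    """Share of responses as a percentage, rounded to one decimal."""
    return round(100 * count / total, 1)

gpt35_accuracy = pct(10)          # 62.5  -> "62.5%" in the abstract
gpt40_accuracy = pct(13)          # 81.2  -> reported rounded as "81%"
gpt35_overconfident = pct(4)      # 25.0  -> "Twenty-five percent" (inferred count)
gpt40_cited_guideline = pct(10)   # 62.5  -> "62.5% ... directly used the NASS guideline"

print(gpt35_accuracy, gpt40_accuracy, gpt35_overconfident, gpt40_cited_guideline)
```

This only checks the arithmetic consistency of the abstract's figures; the underlying grading of each response is described in the Methods, not reproduced here.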
Mateo Restrepo Mejia, Juan Sebastian Arroyave, Michael Saturno, Laura Chelsea Mazudie Ndjonko, Bashar Zaidat, Rami Rajjoub, Wasil Ahmed, Ivan Zapolsky, Samuel K. Cho. Korean Spinal Neurosurgery Society, 2024. Neurospine Vol.21 No.1
Objective: Large language models like chat generative pre-trained transformer (ChatGPT) have found success in various sectors, but their application in the medical field remains limited. This study aimed to assess the feasibility of using ChatGPT to provide accurate medical information to patients, specifically evaluating how well ChatGPT versions 3.5 and 4 aligned with the 2012 North American Spine Society (NASS) guidelines for lumbar disk herniation with radiculopathy. Methods: ChatGPT's responses to questions based on the NASS guidelines were analyzed for accuracy. Three new categories (overconclusiveness, supplementary information, and incompleteness) were introduced to deepen the analysis: overconclusiveness referred to recommendations not mentioned in the NASS guidelines, supplementary information denoted additional relevant details, and incompleteness indicated crucial information omitted from the NASS guidelines. Results: Of the 29 clinical guidelines evaluated, ChatGPT-3.5 was accurate in 15 responses (52%) and ChatGPT-4 in 17 responses (59%). ChatGPT-3.5 was overconclusive in 14 responses (48%) and ChatGPT-4 in 13 responses (45%). Additionally, ChatGPT-3.5 provided supplementary information in 24 responses (83%) and ChatGPT-4 in 27 responses (93%). In terms of incompleteness, ChatGPT-3.5 displayed this in 11 responses (38%) and ChatGPT-4 in 8 responses (23%). Conclusion: ChatGPT shows promise for clinical decision-making, but both patients and healthcare providers should exercise caution to ensure safety and quality of care. While these results are encouraging, further research is necessary to validate the use of large language models in clinical settings.
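The grading scheme in the Methods above (an accuracy verdict plus three independent category flags per response) can be sketched as a simple tallying routine. This is a hypothetical illustration, not the study's instrument; the `GradedResponse` fields and the demo data are assumptions made for the example:

```python
# Hypothetical sketch of the per-response grading scheme described in the
# Methods: each ChatGPT response is graded for accuracy against the NASS
# guideline and flagged for overconclusiveness, supplementary information,
# and incompleteness, then tallied into percentages as in the Results.

from dataclasses import dataclass

@dataclass
class GradedResponse:
    question: str
    accurate: bool          # matches the NASS recommendation
    overconclusive: bool    # recommends beyond what NASS states
    supplementary: bool     # adds relevant extra detail
    incomplete: bool        # omits crucial NASS information

def tally(responses: list[GradedResponse]) -> dict[str, int]:
    """Percentage of responses carrying each label, rounded to whole percent."""
    n = len(responses)
    return {
        "accurate": round(100 * sum(r.accurate for r in responses) / n),
        "overconclusive": round(100 * sum(r.overconclusive for r in responses) / n),
        "supplementary": round(100 * sum(r.supplementary for r in responses) / n),
        "incomplete": round(100 * sum(r.incomplete for r in responses) / n),
    }

# Toy illustration with two fabricated-for-illustration responses:
demo = [
    GradedResponse("Q1", accurate=True, overconclusive=False,
                   supplementary=True, incomplete=False),
    GradedResponse("Q2", accurate=False, overconclusive=True,
                   supplementary=True, incomplete=True),
]
print(tally(demo))
```

In the study, the same tallying would run over the 29 graded guideline responses per model; here only the structure of the bookkeeping is shown.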