RISS Academic Research Information Service

      • KCI-indexed

        New horizons in the evaluation of medical education in Korea

        백상호 한국보건의료인국가시험원 2005 보건의료교육평가 Vol.2 No.1

        Over the last two decades, there have been a number of significant changes in the evaluation system in medical education in Korea. One major improvement in this respect has been the listing of learning objectives at medical schools and the construction of a content outline for the Korean Medical Licensing Examination that can be used as a basis of evaluation. Item analysis has become a routine method for obtaining information that often provides valuable feedback concerning test items after the completion of a written test. The use of item response theory in analyzing test items has been spreading in medical schools as a way to evaluate performance tests and computerized adaptive testing. A series of recent studies have documented an upward trend in the adoption of the objective structured clinical examination (OSCE) and clinical practice examination (CPX) for measuring skill and attitude domains, in addition to tests of the knowledge domain. There has been an obvious increase in regional consortiums involving neighboring medical schools that share the planning and administration of the OSCE and CPX; this includes recruiting and training standardized patients. Such consortiums share common activities, such as case development and program evaluation. A short history and the pivotal roles of four organizations that have brought about significant changes in the examination system are discussed briefly.
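The item analysis mentioned above typically reduces to two per-item statistics computed after a written test. A minimal Python sketch, using an invented 0/1 response matrix (the abstract does not prescribe particular formulas; proportion correct for difficulty and the point-biserial correlation for discrimination are the conventional choices):

```python
def item_analysis(responses):
    """Classical item analysis for a 0/1 response matrix.

    responses: list of per-examinee lists of 0/1 item scores.
    Returns one (difficulty, discrimination) pair per item.
    """
    n = len(responses)
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n
    sd_t = (sum((t - mean_t) ** 2 for t in totals) / n) ** 0.5
    stats = []
    for j in range(len(responses[0])):
        scores = [row[j] for row in responses]
        p = sum(scores) / n  # difficulty index: proportion correct
        if 0 < p < 1 and sd_t > 0:
            # point-biserial correlation between the item and the total score
            mean_correct = sum(t for t, s in zip(totals, scores) if s) / sum(scores)
            r_pb = (mean_correct - mean_t) / sd_t * (p / (1 - p)) ** 0.5
        else:
            r_pb = 0.0  # undefined for items everyone passed or failed
        stats.append((p, r_pb))
    return stats
```

An item with an extreme `p`, or a near-zero or negative `r_pb`, is the kind of item this feedback flags for review.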

      • KCI-indexed

        A proposal for the future of medical education accreditation in Korea

        임기영 한국보건의료인국가시험원 2020 보건의료교육평가 Vol.17 No.-

        For the past 20 years, the medical education accreditation program of the Korean Institute of Medical Education and Evaluation (KIMEE) has contributed significantly to the standardization and improvement of the quality of basic medical education in Korea. It should now contribute to establishing and promoting the future of medical education. The Accreditation Standards of KIMEE 2019 (ASK2019) have been adopted since 2019, with the goal of achieving world-class medical education by applying a learner-centered curriculum using a continuum framework for the 3 phases of formal medical education: basic medical education, postgraduate medical education, and continuing professional development. ASK2019 will also be able to promote medical education that meets community needs and employs systematic assessments throughout the education process. These are important changes that can be used to gauge the future of the medical education accreditation system. Furthermore, globalization, inter-professional education, health systems science, and regular self-assessment systems are emerging as essential topics for the future of medical education. It is time for the medical education accreditation system in Korea to observe and adopt new trends in global medical education.

      • KCI-indexed

        Nurse educators' experiences with student incivility: a meta-synthesis of qualitative studies

        박은준,강현욱 한국보건의료인국가시험원 2020 보건의료교육평가 Vol.17 No.-

        This study aimed to synthesize the best available qualitative research evidence on nurse educators' experiences with student incivility in undergraduate nursing classrooms. A meta-synthesis of qualitative evidence using thematic synthesis was conducted. A systematic search was performed of 12 databases for relevant literature published by March 31, 2019. Two reviewers independently conducted critical quality appraisals using the checklist for qualitative research developed by the Joanna Briggs Institute. Eleven studies that met the inclusion criteria were selected for review. From the pooled study findings, 26 descriptive themes were generated and categorized into the following 5 analytical themes: (1) factors contributing to student incivility, (2) management of student incivility, (3) impact: professional and personal damage, (4) impact: professional growth, and (5) initiatives for the future. Many nurse educators became confident in their role of providing accountability as both educators and gatekeepers and experienced professional growth. However, others experienced damage to their personal and professional life and lost their motivation to teach. Nurse educators recommended the following strategies for preventing or better managing student incivility: institutional efforts by the university, unified approaches for student incivility within a nursing program, a faculty-to-faculty network for mentoring, and better teaching and learning strategies for individual educators. These strategies would help all nurse educators experience professional growth by successfully preventing and managing student incivility.

      • KCI-indexed

        Is accreditation in medical education in Korea an opportunity or a burden?

        정한나,전우택,안신기 한국보건의료인국가시험원 2020 보건의료교육평가 Vol.17 No.-

        The accreditation process is both an opportunity and a burden for medical schools in Korea. The line that separates the two is based on how medical schools recognize and utilize the accreditation process. In other words, accreditation is a burden for medical schools if they view the accreditation process as merely a formal procedure or a means to maintain accreditation status for medical education. However, if medical schools acknowledge the positive value of the accreditation process, accreditation can be both an opportunity and a tool for developing medical education. The accreditation process has educational value by catalyzing improvements in the quality, equity, and efficiency of medical education and by increasing the available options. For the accreditation process to contribute to medical education development, accrediting agencies and medical schools must first be recognized as partners of an educational alliance working together towards common goals. Secondly, clear guidelines on accreditation standards should be periodically reviewed and shared. Finally, a formative self-evaluation process must be introduced for institutions to utilize the accreditation process as an opportunity to develop medical education. This evaluation system could be developed through collaboration among medical schools, academic societies for medical education, and the accrediting authority.

      • KCI-indexed

        Introduction to the LIVECAT web-based computerized adaptive testing platform

        서동기 한국보건의료인국가시험원 2020 보건의료교육평가 Vol.17 No.-

        This study introduces LIVECAT, a web-based computerized adaptive testing platform. This platform provides many functions, including writing item content, managing an item bank, creating and administering a test, reporting test results, and providing information about a test and examinees. LIVECAT provides examination administrators with an easy and flexible environment for composing and managing examinations. It is available at http://www.thecatkorea.com/. Several tools were used to program LIVECAT, as follows: operating system, Amazon Linux; web server, nginx 1.18; WAS, Apache Tomcat 8.5; database, Amazon RDS (MariaDB); and languages, Java 8, HTML5/CSS, JavaScript, and jQuery. The LIVECAT platform can be used to implement several item response theory (IRT) models, such as the Rasch and 1-, 2-, and 3-parameter logistic models. The administrator can choose a specific model of test construction in LIVECAT. Multimedia data such as images, audio files, and movies can be uploaded to items in LIVECAT. Two scoring methods (maximum likelihood estimation and expected a posteriori) are available in LIVECAT, and the maximum Fisher information item selection method is applied to every IRT model in LIVECAT. The LIVECAT platform showed equal or better performance compared with a conventional test platform. The LIVECAT platform enables users without psychometric expertise to easily implement and perform computerized adaptive testing at their institutions. The most recent LIVECAT version only provides a dichotomous item response model and the basic components of CAT. In the near future, LIVECAT will include advanced functions, such as polytomous item response models, a weighted likelihood estimation method, and content balancing methods.
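The adaptive loop the abstract describes (an IRT model, maximum Fisher information item selection, and expected a posteriori scoring) can be sketched compactly. This is not LIVECAT's code, just a minimal illustration under a 2-parameter logistic model with invented item parameters:

```python
import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    """Item information at ability theta: a^2 * p * (1 - p)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, bank, used):
    """Maximum Fisher information selection: pick the unused item
    that is most informative at the current ability estimate."""
    return max((i for i in range(len(bank)) if i not in used),
               key=lambda i: fisher_info(theta, *bank[i]))

def eap(responses, grid=None):
    """Expected a posteriori ability estimate under a N(0, 1) prior.

    responses: list of (a, b, score) triples, score in {0, 1}.
    """
    grid = grid or [g / 10.0 for g in range(-40, 41)]
    post = []
    for t in grid:
        like = math.exp(-0.5 * t * t)  # unnormalized standard-normal prior
        for a, b, x in responses:
            p = p_2pl(t, a, b)
            like *= p if x else (1.0 - p)
        post.append(like)
    z = sum(post)
    return sum(t * w for t, w in zip(grid, post)) / z
```

One cycle is then: estimate theta with `eap`, choose the next item with `next_item`, administer it, and repeat until a stopping rule is met.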

      • KCI-indexed

        Similarity of the cut score in test sets with different item amounts using the modified Angoff, modified Ebel, and Hofstee standard-setting methods for the Korean Medical Licensing Examination

        박장희,임미경,김나진,안덕선,김영민 한국보건의료인국가시험원 2020 보건의료교육평가 Vol.17 No.-

        Purpose: The Korean Medical Licensing Examination (KMLE) typically contains a large number of items. The purpose of this study was to investigate whether there is a difference in the cut score between evaluating all items of the exam and evaluating only some items when conducting standard setting. Methods: We divided the item sets that appeared on the 3 most recent KMLEs into 4 subsets per year of 25% each, based on their item content categories, discrimination index, and difficulty index. The entire panel of 15 members assessed all the items (360 items, 100%) of the year 2017. In split-half set 1, each item set contained 184 (51%) items of the year 2018, and each set from split-half set 2 contained 182 (51%) items of the year 2019, using the same method. We used the modified Angoff, modified Ebel, and Hofstee methods in the standard-setting process. Results: Less than a 1% cut score difference was observed when the same method was used to stratify item subsets containing 25%, 51%, or 100% of the entire set. When rating fewer items, higher rater reliability was observed. Conclusion: When the entire item set was divided into equivalent subsets, assessing the exam using a portion of the item set (90 out of 360 items) yielded cut scores similar to those derived using the entire item set. There was a higher correlation between panelists' individual assessments and the overall assessments.
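The subset construction in the Methods section (splitting a full form into equivalent parts balanced on content category, difficulty, and discrimination) can be approximated by a stratified round-robin deal. A sketch with hypothetical field names, not the study's actual procedure:

```python
from collections import defaultdict

def split_into_subsets(items, n_subsets=4):
    """Deal items into n_subsets roughly equivalent subsets.

    items: list of dicts with 'category', 'difficulty', and
    'discrimination' keys (illustrative field names). Within each
    content category, items are ordered by difficulty and
    discrimination, then dealt round-robin so every subset mirrors
    the composition of the full set.
    """
    by_cat = defaultdict(list)
    for it in items:
        by_cat[it["category"]].append(it)
    subsets = [[] for _ in range(n_subsets)]
    k = 0
    for cat in sorted(by_cat):
        pool = sorted(by_cat[cat],
                      key=lambda it: (it["difficulty"], it["discrimination"]))
        for it in pool:
            subsets[k % n_subsets].append(it)
            k += 1
    return subsets
```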

      • KCI-indexed

        Using the Angoff method to set a standard on mock exams for the Korean Nursing Licensing Examination

        임미경,신수진 한국보건의료인국가시험원 2020 보건의료교육평가 Vol.17 No.-

        Purpose: This study explored the possibility of using the Angoff method, in which panel experts determine the cut score of an exam, for the Korean Nursing Licensing Examination (KNLE). Two mock exams for the KNLE were analyzed. The Angoff standard-setting procedure was conducted and the results were analyzed. We also aimed to examine the procedural validity of applying the Angoff method in this context. Methods: For both mock exams, we set a pass-fail cut score using the Angoff method. The standard-setting panel consisted of 16 nursing professors. After the Angoff procedure, the procedural validity of establishing the standard was evaluated by investigating the responses of the standard setters. Results: The descriptions of the minimally competent person for the KNLE were presented at the levels of general and subject performance. The cut scores of the first and second mock exams were 74.4 and 76.8, respectively. These were higher than the traditional cut score (60% of the total score of the KNLE). The panel survey showed very positive responses, with scores higher than 4 out of 5 points on a Likert scale. Conclusion: The scores calculated for both mock tests were similar, and were much higher than the existing cut scores. In the second simulation, the standard deviation of the Angoff ratings was lower than in the first simulation. According to the survey results, procedural validity was acceptable, as shown by a high level of confidence. The results show that determining cut scores by an expert panel is an applicable method.
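The core Angoff computation is simple: each panelist estimates, item by item, the probability that a minimally competent examinee answers correctly, and the cut score aggregates those estimates. A minimal sketch with invented ratings:

```python
def angoff_cut_score(ratings):
    """Angoff cut score from panel ratings.

    ratings: one list per panelist of per-item probabilities (0..1)
    that a minimally competent examinee answers the item correctly.
    Each panelist's mean rating is their expected score (as a
    percentage) for the borderline examinee; the cut score is the
    panel mean of those expected scores.
    """
    per_panelist = [100.0 * sum(r) / len(r) for r in ratings]
    return sum(per_panelist) / len(per_panelist)
```

For example, two panelists rating a two-item test as `[0.8, 0.6]` and `[0.7, 0.9]` produce a cut score of 75% of the maximum score.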

      • KCI-indexed

        A psychometric approach to setting the pass-fail cut score for the Korean Medical Licensing Examination

        이규민 한국보건의료인국가시험원 2004 보건의료교육평가 Vol.1 No.1

        The National Health Personnel Licensing Examination Board (hereafter NHPLEB) has used 60% correct responses on the overall test and 40% correct responses on each subject-area test as the criterion for granting physician licenses to satisfactory candidates. The 60%-40% criterion seems reasonable to laypersons without psychometric or measurement knowledge, but it may cause several severe problems from a psychometrician's perspective. This paper pointed out several problematic cases that can be encountered by using the 60%-40% criterion, and provided several psychometric alternatives that could overcome these problems. A fairly new approach, the Bookmark standard-setting method, was introduced and explained in detail as an example. The paper concluded with five considerations for when the NHPLEB decides to adopt a psychometric standard-setting approach to set a cut score for a licensure test such as the medical licensing examination.
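The fixed 60%-40% criterion discussed above is easy to state as code, which also makes its rigidity visible: a candidate can fail on a single weak subject despite a strong overall score, regardless of how hard that year's test happened to be. A sketch with hypothetical subject names:

```python
def passes_fixed_criterion(subject_scores):
    """Apply the fixed 60%-40% pass-fail rule.

    subject_scores: dict mapping subject name -> (n_correct, n_items).
    Pass requires >= 60% correct overall AND >= 40% correct in every
    subject area.
    """
    total_correct = sum(c for c, _ in subject_scores.values())
    total_items = sum(n for _, n in subject_scores.values())
    overall_ok = total_correct >= 0.6 * total_items
    per_subject_ok = all(c >= 0.4 * n for c, n in subject_scores.values())
    return overall_ok and per_subject_ok
```

Because the thresholds are fixed percentages of raw scores, the effective standard drifts with test difficulty from year to year, which is exactly the problem standard-setting methods such as the Bookmark method address.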
