RISS Academic Research Information Service

      • KCI-indexed

        Differences in Writing Performance on Independent and Integrated Writing Tasks

        Kim Boram, Institute of Foreign Studies, Chung-Ang University, 2014, The Journal of Foreign Studies Vol. - No. 27

        The present study examined two types of integrated tasks (reading-writing and listening-writing) by comparing them with the independent (writing-only) task to probe two questions: (1) whether performance on the integrated tasks differs from that on the independent task, and (2) how reading and listening skills are respectively associated with performance on the integrated tasks. Three types of writing (144 written pieces) by advanced-level participants were scored by two raters using an analytical scoring rubric. The data, both overall and analytical scores, were analyzed with ANOVAs, and linear regression analyses were conducted between integrated-task scores and proficiency (RC and LC) scores. The results showed a significant difference between the writing-only and listening-writing tasks, but not between the writing-only and reading-writing tasks. Furthermore, the participants performed significantly better on the writing-only task than on the listening-writing task in terms of content, and better on the reading-writing task than on the listening-writing task in terms of vocabulary and expression. The findings suggest that integrated writing tasks elicit not only multiple language skills but also complex cognitive skills. The study also indicates that listening proficiency is a strong predictor of performance on the listening-writing integrated task, whereas reading proficiency may not have predictive value. Pedagogical implications based on the findings are discussed with respect to how to prepare students for integrated writing tasks.
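
        The analysis pipeline described in this abstract, a one-way ANOVA across the three task types followed by a linear regression of integrated-task scores on listening proficiency, can be sketched as follows. This is a minimal illustration on simulated data: the sample size, score distributions, and variable names are hypothetical stand-ins, since the study's dataset is not public.

```python
# Minimal sketch of the ANOVA-plus-regression analysis, on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 48  # hypothetical number of participants

# Hypothetical overall scores for the three task types.
writing_only = rng.normal(4.2, 0.6, n)
reading_writing = rng.normal(4.0, 0.6, n)
listening_writing = rng.normal(3.7, 0.6, n)

# One-way ANOVA across the three task types.
f_stat, p_val = stats.f_oneway(writing_only, reading_writing, listening_writing)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Linear regression of listening-writing scores on listening proficiency (LC).
lc_scores = rng.normal(60, 10, n)  # hypothetical listening-comprehension scores
slope, intercept, r, p, se = stats.linregress(lc_scores, listening_writing)
print(f"LC -> listening-writing: r^2 = {r**2:.3f}, p = {p:.4f}")
```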

      • KCI-indexed

        Investigating Complex Interaction Effects Among Facet Elements in an ESL Writing Test Consisting of Integrated and Independent Tasks

        Yong-Won Lee, Robert Kantor, Language Education Institute, Seoul National University, 2015, Language Research Vol. 51 No. 3

        The main purposes of the current study are to (a) examine the interaction effects among test-takers, tasks, and raters, as well as the main effects of these facets, in an ESL writing test consisting of both integrated and independent writing tasks, and (b) thereby identify additional sources of score variability and error in the rating of test-taker responses. A total of 162 test-takers with 29 different L1 backgrounds participated in the study, each of whom took the same six writing tasks: three listening-writing (LW) tasks, two reading-writing (RW) tasks, and one independent writing (IW) task. Each essay was rated by each of the six trained raters to obtain a completely crossed data matrix for test-takers, tasks, and raters. A computer program, FACETS (Linacre, 1998), was used to calibrate the test-takers, tasks, and raters and to conduct interaction analysis on 970 essays. Results of the analyses revealed that raters seemed to have slight difficulty maintaining a consistent level of severity across all six tasks, and a close inspection of the rating patterns of selected test-takers demonstrated the usefulness of interaction analysis in pinpointing particular combinations of facet elements with unusual interaction patterns. The implications of these findings for writing assessment are discussed, along with avenues for further investigation.
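
        FACETS implements a many-facet Rasch model in which the log-odds of each rating category depend additively on test-taker ability, task difficulty, and rater severity; interaction analysis then flags facet combinations whose observed ratings deviate strongly from the model's expectation. The sketch below illustrates that underlying idea with hypothetical parameter values, not estimates from this study.

```python
# Minimal sketch of the many-facet Rasch (rating-scale) model behind FACETS.
# All parameter values are hypothetical, not calibrated from the study's data.
import numpy as np

def rating_probs(ability, difficulty, severity, thresholds):
    """Probabilities of rating categories 0..K under a rating-scale model."""
    # Category k's log-odds accumulate (ability - difficulty - severity - tau_j)
    # over thresholds j = 1..k; category 0 is the reference with log-odds 0.
    steps = ability - difficulty - severity - np.asarray(thresholds)
    logits = np.concatenate(([0.0], np.cumsum(steps)))
    expz = np.exp(logits - logits.max())  # softmax, numerically stable
    return expz / expz.sum()

thresholds = [-1.5, -0.5, 0.5, 1.5]  # hypothetical category thresholds
p = rating_probs(ability=1.0, difficulty=0.2, severity=-0.3,
                 thresholds=thresholds)
categories = np.arange(len(p))
expected = (categories * p).sum()
print(f"expected rating: {expected:.2f}")

# Interaction analysis flags facet combinations whose observed rating
# deviates strongly from expectation (large standardized residual).
observed = 4
variance = ((categories - expected) ** 2 * p).sum()
z = (observed - expected) / np.sqrt(variance)
print(f"standardized residual: {z:.2f}")
```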

      • KCI-indexed

        Reliability and Validity of Automated Scores for Independent Writing Task Essays: Focusing on Generic, Hybrid, and Prompt-Specific Scoring Models

        Yong-Won Lee, Institute of British and American Studies, Hankuk University of Foreign Studies, 2016, The Journal of British and American Studies Vol. 36 No. -

        The current study aims to examine the reliability and validity of automated essay scores from substantively different types of scoring models for e-rater® in the context of scoring TOEFL independent writing tasks. Six different variants of generic and hybrid models were created based on transformed writing data from three different samples of TOEFL® CBT prompts. These generic models (along with prompt-specific models) were used to score a total of 61,089 essays written for seven TOEFL® CBT prompts. The results of the data analysis showed that (a) similar levels of score agreement were achieved between automated-human scorer pairs and human-human rater pairs, although automated scoring significantly increased rating consistency across scorers (or scoring models), and (b) the human rater scores turned out to be somewhat better indicators of test-takers' overall ESL language proficiency than the automated scores, particularly when TOEFL CBT section scores were used as validity criteria. The implications of the findings are discussed in relation to the future use of automated essay scoring in the context of scoring ESL/EFL learners' essays.
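
        The agreement comparison reported here, automated-human versus human-human scorer pairs, is conventionally quantified with exact or adjacent agreement rates and quadratic weighted kappa. The sketch below illustrates that computation on simulated 1-6 ratings; the rating process and sample size are hypothetical, not the study's data.

```python
# Minimal sketch of rater-agreement statistics, on simulated 1-6 ratings.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
true_quality = rng.integers(1, 7, 500)           # hypothetical essay quality
noise = lambda: rng.integers(-1, 2, 500)         # rater disagreement of +/-1

human1 = np.clip(true_quality + noise(), 1, 6)
human2 = np.clip(true_quality + noise(), 1, 6)
automated = np.clip(true_quality + noise(), 1, 6)  # stand-in for a scoring model

# Adjacent agreement and quadratic weighted kappa, as commonly reported
# in automated-scoring studies.
for name, a, b in [("human-human", human1, human2),
                   ("human-automated", human1, automated)]:
    adjacent = np.mean(np.abs(a - b) <= 1)
    qwk = cohen_kappa_score(a, b, weights="quadratic")
    print(f"{name}: adjacent agreement = {adjacent:.2f}, QWK = {qwk:.2f}")
```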

      • KCI-indexed

        Investigating the feasibility of generic scoring models of e-rater for TOEFL iBT independent writing tasks

        Yong-Won Lee, Pan-Korea English Teachers Association, 2016, English Language Teaching Vol. 28 No. 1

        The current study reports the findings from Phase 2 of a larger research study undertaken to investigate the feasibility of using generic scoring models for e-rater in the context of scoring essays for independent writing tasks for TOEFL CBT and TOEFL iBT. In Phase 1, six different variants of generic and hybrid scoring models of e-rater were created based on transformed writing data from three different samples of TOEFL CBT prompts (n1 = 20, n2 = 20, n3 = 40) with the help of ETS (Educational Testing Service) staff and then evaluated on a separate sample of seven TOEFL CBT prompts (Lee, 2016). In the present investigation, these six generic/hybrid models were used, along with prompt-specific models, to score a total of 3,126 essays written for two TOEFL iBT independent writing tasks from a field study, and their performance was evaluated. Results of the analysis showed that (a) there were relatively small score variations among different automated scoring models and (b) similar levels of score agreement were achieved between the human-human rater pair and various human-automated rater pairs, although the prompt-specific model behaved most similarly to the human raters. In terms of criterion-related validity of scores, the human rater scores turned out to be somewhat better indicators of test-takers' overall ESL (English as a Second Language) language proficiency than the automated scores in general. Nevertheless, the comparative advantage in validity of human rater scores (over automated scores) seemed to diminish significantly when more direct writing measures, such as scores for TOEFL CBT independent writing tasks, were used as criterion measures.
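
        The criterion-related validity check described here amounts to correlating each score source (human raters, generic models, prompt-specific models) with an external proficiency criterion and comparing the correlations. A minimal sketch on simulated data follows; the noise levels, and hence the resulting effect sizes, are illustrative assumptions, not the study's results.

```python
# Minimal sketch of a criterion-related validity comparison, on simulated data.
import numpy as np

rng = np.random.default_rng(2)
n = 1000
proficiency = rng.normal(0, 1, n)  # hypothetical external criterion measure

# Each score source tracks the criterion with a different (assumed) error level.
human = proficiency + rng.normal(0, 0.6, n)            # human rater score
generic = proficiency + rng.normal(0, 0.8, n)          # generic scoring model
prompt_specific = proficiency + rng.normal(0, 0.7, n)  # prompt-specific model

for name, scores in [("human raters", human),
                     ("generic model", generic),
                     ("prompt-specific model", prompt_specific)]:
    r = np.corrcoef(proficiency, scores)[0, 1]
    print(f"{name}: criterion correlation r = {r:.2f}")
```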
