RISS 학술연구정보서비스 (Academic Research Information Service)


      KCI등재 (KCI-indexed)

      스토리 기반의 정보 검색 연구 (A Study on Story-based Information Retrieval)


      https://www.riss.kr/link?id=A99885236


      부가정보 (Additional Information)

      다국어 초록 (Multilingual Abstract)


      Video information retrieval has become a very important issue because of the explosive increase in video data driven by the growth of Web content. Content-based video analysis using visual features has been the main approach to video information retrieval and browsing: content in a video can be represented with content-based analysis techniques that extract various features from audio-visual data, such as frames, shots, colors, texture, or shape, and similarity between videos can be measured through such analysis. However, a movie, one of the typical types of video data, is organized by story as well as by audio-visual data. When content-based video analysis that uses only low-level audio-visual data is applied to movie information retrieval, a semantic gap therefore arises between the information people recognize as significant and the information produced by content-based analysis. The reason for this gap is that the story line of a movie is high-level information, with relationships in the content that change as the movie progresses. Information retrieval related to the story line of a movie cannot be performed by content-based analysis techniques alone; a formal model that can determine relationships among movie contents, or track changes in meaning, is needed to retrieve story information accurately.

      Recently, story-based video analysis techniques using a social network concept have emerged for story information retrieval. These approaches represent a story through the relationships between characters in a movie, but they have problems. First, they do not express dynamic changes in the relationships between characters as the story develops. Second, they miss deeper information, such as emotions indicating the identities and psychological states of the characters; emotion is essential to understanding a character's motivation, conflict, and resolution. Third, they do not take into account the events and background that contribute to the story.

      Accordingly, this paper reviews the importance and weaknesses of previous video analysis methods, ranging from content-based approaches to story analysis based on social networks, and suggests the necessary elements, such as character, background, and events, based on narrative structures introduced in the literature. First, we extract characters' emotional words from the script of the movie Pretty Woman by using the hierarchical structure of WordNet, an extensive English thesaurus that offers relationships between words (e.g., synonyms, hypernyms, hyponyms, antonyms), and present a method to visualize the emotional pattern of a character over time. Second, a character's inner nature must be determined in order to model a character arc that can depict the character's growth and development. To this end, we analyze the amount of each character's dialogue in the script and track the character's inner nature using social network concepts such as in-degree (incoming links) and out-degree (outgoing links); we also propose a method that tracks a character's inner nature by tracing indices such as the degree, in-degree, and out-degree of the character network as the movie progresses. Finally, the spatial background where characters meet and where events take place is an important element of the story. We use the movie script to extract significant spatial backgrounds and suggest a scene map describing spatial arrangements and distances in the movie. Important places where main characters first meet, or where they stay for long periods of time, can be extracted through this scene map. In view of the aforementioned three elements (character, event, background), we extract a variety of story-related information and evaluate the performance of the proposed method. We can track the extracted story information over time and detect changes in a character's emotion or inner nature, spatial movement, ...
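The WordNet-based extraction of emotional words described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes NLTK's WordNet interface and uses the noun synset emotion.n.01 as a hypothetical root of the emotion hierarchy, flagging a word as emotional if any of its noun senses has that synset on a hypernym path.

```python
# Minimal sketch: flag script words that fall under WordNet's "emotion" hierarchy.
# Assumes NLTK with the WordNet corpus downloaded (nltk.download("wordnet")).
from nltk.corpus import wordnet as wn

# Hypothetical seed: the noun synset for "emotion" acts as the root of the hierarchy.
EMOTION_ROOT = wn.synset("emotion.n.01")

def is_emotional_word(word: str) -> bool:
    """Return True if any noun sense of `word` has emotion.n.01 among its hypernyms."""
    for synset in wn.synsets(word, pos=wn.NOUN):
        # hypernym_paths() lists every chain from the synset up to the WordNet root.
        for path in synset.hypernym_paths():
            if EMOTION_ROOT in path:
                return True
    return False

# Toy usage on words that might appear in a screenplay's dialogue or directions.
for w in ["anger", "joy", "taxi", "fear", "hotel"]:
    print(w, is_emotional_word(w))
```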
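The character-network indices mentioned in the abstract (degree, in-degree, and out-degree traced over the movie's progression) can be sketched with networkx. The speaker-to-listener edges below are invented toy data, not taken from the paper; the idea is simply to rebuild the cumulative network up to each scene and read off the degrees.

```python
# Minimal sketch of tracking a character's in-/out-degree as a movie progresses.
# Edges point from speaker to listener; weights count lines of dialogue.
# The scene data below is invented toy input, not taken from the paper.
import networkx as nx

# (scene_index, speaker, listener) triples in story order.
dialogue = [
    (1, "Vivian", "Edward"), (1, "Edward", "Vivian"),
    (2, "Edward", "Stuckey"), (2, "Vivian", "Kit"),
    (3, "Edward", "Vivian"), (3, "Vivian", "Edward"), (3, "Kit", "Vivian"),
]

def degrees_up_to(scene: int, character: str):
    """Build the cumulative character network up to `scene` and report degrees."""
    g = nx.DiGraph()
    for s, speaker, listener in dialogue:
        if s <= scene:
            # Accumulate a 'weight' attribute per speaker->listener pair.
            w = g.get_edge_data(speaker, listener, default={"weight": 0})["weight"]
            g.add_edge(speaker, listener, weight=w + 1)
    return g.in_degree(character), g.out_degree(character)

for scene in (1, 2, 3):
    print(scene, degrees_up_to(scene, "Vivian"))
```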
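The scene map built from the script's spatial backgrounds can likewise be approximated: standard screenplay slug lines (INT./EXT.) name each scene's location, so counting them gives a rough picture of where the story spends its time. The sample text and regular expression below are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: pull spatial backgrounds from a screenplay's slug lines.
# Scene headings start with INT. or EXT.; counting locations gives a rough
# "scene map" of where the story takes place. The sample text is invented.
import re
from collections import Counter

script = """\
INT. HOTEL SUITE - NIGHT
Edward studies the contracts.
EXT. HOLLYWOOD BOULEVARD - NIGHT
Vivian waits by the curb.
INT. HOTEL SUITE - DAY
Breakfast is laid out.
"""

SLUG = re.compile(r"^(INT\.|EXT\.)\s+(.+?)\s+-\s+\w+", re.MULTILINE)

# Location -> number of scenes set there, ordered by frequency.
locations = Counter(match.group(2).title() for match in SLUG.finditer(script))
print(locations.most_common())  # e.g. [('Hotel Suite', 2), ('Hollywood Boulevard', 1)]
```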


      참고문헌 (References)

      1 Park, S.-B., "Annotation of Dialogue Context Information Using Speaker Recognition" Korea Multimedia Society 12 (12): 1252-1261, 2009

      2 Park, S.-B., "Automatic Boundary Extraction of a Movie Story's Exposition for Effective Management of Storytelling Content" Korea Intelligent Information Systems Society 17 (17): 279-292, 2011

      3 Lee, Y.-H., "A Collaborative Video Annotation and Browsing System Using Linked Data" Korea Intelligent Information Systems Society 17 (17): 203-219, 2011

      4 Hung, H., "Using audio and video features to classify the most dominant person in a group meeting" 835-838, 2007

      5 Gong, Y. H., "Summarizing audio-visual contents of a video program" 2003 (2003): 160-169, 2003

      6 Chatman, S., "Story and Discourse: Narrative Structure in Fiction and Film" Minumsa 1990

      7 McKee, R., "Story: Substance, Structure, Style and the Principles of Screenwriting" Golden Bough 2002

      8 Kaminski, J., "Social networks in movies" 1-3, 2011

      9 Park, S.-B., "Social Network Analysis in a Movie using Character-net" 59 (59): 601-627, 2012

      10 Wasserman, S., "Social Network Analysis: Methods and Applications" Cambridge University Press 1994

      11 Son, D. W., "Social Network Analysis" Kyungmunsa 2002

      12 Nothelfer, C. E., "Shot Structure in Hollywood Film" 4: 103-113, 2009

      13 Yang, S. G., "Semantic home photo categorization" 17 (17): 324-335, 2007

      14 Park, S.-B., "Semantic Multimedia Browsing System based on Character-net" INHA University 2011

      15 Cowgill, L., "Secrets of Screenplay Structure" Sigongart 2003

      16 Weng, C. Y., "RoleNet: movie analysis from the perspective of social network" 11 (11): 256-271, 2009

      17 Rasheed, Z., "On the use of computable features for film classification" 15 (15): 52-64, 2005

      18 Jung, B., "Narrative abstraction model for story-oriented video" 828-835, 2004

      19 Laptev, I., "Learning realistic human actions from movies" 1-8, 2008

      20 Ding, L., "Learning Relations Among Movie Characters: A Social Network Perspective" 410-423, 2010

      21 Marks, D., "Inside Story" Three Mountain Press 2007

      22 Hauptmann, A., "How many high-level concepts will fill the semantic gap in video retrieval?" 627-634, 2007

      23 Calic, J., "Efficient layout of comic-like video summaries" 17 (17): 931-936, 2007

      24 Rienks, R., "Detection and application of influence rankings in small group meetings" 257-264, 2006

      25 Roth, V., "Content-based retrieval from digital video" 17: 531-540, 1999

      26 Brodbeck, F., "Cinemetrics"

      27 Peker, K. A., "Broadcast video program summarization using face tracks" 1053-1056, 2006

      28 Xie, X.-N., "Automatic video summarization by affinity propagation clustering and semantic content mining" 203-208, 2008

      29 Ekin, A., "Automatic soccer video analysis and summarization" 12 (12): 796-807, 2003

      30 Wan, K., "Automatic mobile sports highlights" 638-641, 2005

      31 Otsuka, I., "A highlight scene detection and video summarization system using audio feature for a personal video recorder" 51 (51): 112-116, 2005


      학술지 이력 (Journal History)

      Date | Event | Details | Index status
      2027 | Evaluation scheduled | Subject to re-accreditation evaluation (re-accreditation) |
      2021-01-01 | Evaluation | KCI-indexed journal status maintained (re-accreditation) | KCI-indexed
      2018-01-01 | Evaluation | KCI-indexed journal status maintained (maintained) | KCI-indexed
      2015-03-25 | Society name change | English name: not registered -> Korea Intelligent Information Systems Society | KCI-indexed
      2015-03-17 | Journal title change | Foreign-language title: not registered -> Journal of Intelligence and Information Systems | KCI-indexed
      2015-01-01 | Evaluation | KCI-indexed journal status maintained (maintained) | KCI-indexed
      2011-01-01 | Evaluation | KCI-indexed journal status maintained (maintained) | KCI-indexed
      2009-01-01 | Evaluation | KCI-indexed journal status maintained (maintained) | KCI-indexed
      2008-02-11 | Journal title change | Korean title: 한국지능정보시스템학회 논문지 -> 지능정보연구 | KCI-indexed
      2007-01-01 | Evaluation | KCI-indexed journal status maintained (maintained) | KCI-indexed
      2004-01-01 | Evaluation | Selected as KCI-indexed journal (2nd candidate review) | KCI-indexed
      2003-01-01 | Evaluation | Passed 1st candidate review (1st candidate review) | KCI candidate
      2001-07-01 | Evaluation | Selected as KCI candidate journal (new evaluation) | KCI candidate

      학술지 인용정보 (Journal Citation Information)

      Base year: 2016
      WOS-KCI combined IF (2-year): 1.51 | KCI IF (2-year): 1.51 | KCI IF (3-year): 1.99
      KCI IF (4-year): 1.78 | KCI IF (5-year): 1.54 | Centrality index (3-year): 2.674 | Immediacy index: 0.38