RISS (Academic Research Information Service)

      A Comparative Study of LSTM and Transformer Models in Music Melody Generation


      https://www.riss.kr/link?id=A108955969



Multilingual Abstract


      [Background] In recent years, using deep learning models to generate music has become the mainstream direction in AI music. However, the main models for music generation still face several issues, the biggest of which is the inability to effectively simulate musical structure, hindering computers from creating compositions that conform to musical structures. [Objective] To address this, we need to explore which models can effectively simulate the structure of music and create more human-like music. [Method] We conduct comparative experiments, analyzing the advantages and disadvantages of music generated by LSTM and Transformer models, and propose improvements based on the findings. [Results] Experimental results demonstrate that LSTM performs better than Transformer in simulating musical structure in shorter sequences, but struggles with longer sequences; whereas Transformer outperforms LSTM in handling longer sequences and, after improvements, can effectively simulate musical structure in longer sequences, creating compositions that align with human musical perception. [Conclusion] Therefore, we believe that the Transformer model is more suitable for AI music composition tasks, and improving its attention mechanism to enhance recognition of musical structure will be the mainstream direction for music generation in the future.
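The short-vs-long-sequence contrast in the abstract comes down to how each architecture carries musical context: an LSTM-style recurrence compresses the entire history into one fixed-size hidden state (so distant context can fade over long melodies), while a Transformer's causal self-attention lets every note attend directly to all earlier notes. A minimal NumPy sketch of the two mechanisms, purely illustrative: the paper's actual models, layer sizes, and melody token vocabulary are not specified here, and the function names below are hypothetical.

```python
import numpy as np

def rnn_step(h, x, Wh, Wx):
    # RNN-style recurrence (the core of an LSTM, minus its gates):
    # the whole history is squeezed into one hidden vector h,
    # so information about distant notes can fade over long melodies.
    return np.tanh(Wh @ h + Wx @ x)

def causal_self_attention(X):
    # Transformer-style attention over a sequence of note embeddings X (T, d):
    # every position attends directly to all earlier positions, which is
    # why longer-range musical structure is easier to capture.
    T, d = X.shape
    scores = X @ X.T / np.sqrt(d)                   # pairwise similarity
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[mask] = -np.inf                          # causal: no peeking ahead
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax per row
    return weights @ X                              # context-mixed embeddings
```

The causal mask makes the first position attend only to itself, so its output equals its input; later positions mix in progressively more of the preceding melody, regardless of how far back it is.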


Table of Contents

      • 1. Introduction
      • 2. Research Background
      • 3. Research Methods
      • 4. Experiments
      • 5. Conclusion
      • References
