RISS (Academic Research Information Service)

      • Multiple Constrained Dynamic Path Optimization based on Improved Ant Colony Algorithm

        Seng Dewen, Tang Meixia, Wu Hao, Fang Xujian, Xu Haitao. Science and Engineering Research Support Society (SERSC), 2014. International Journal of u- and e-Service, Science and Technology, Vol.7 No.6

        Vehicle navigation systems can effectively alleviate traffic congestion, reduce pollution, and lower travel costs. Traditional systems, however, perform only static path planning, which is of limited effectiveness and offers no sound criterion for choosing an optimal path. They usually provide a single path representing the shortest time or the shortest distance and ignore the actual demands of the drivers. Based on historical traffic data, upcoming traffic flows can be estimated. With the help of the improved ant colony algorithm, the dynamic optimal path planning results meet the needs of travelers under multiple practical constraints.
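
As context for the constrained path-planning idea described above, here is a minimal, hypothetical ant colony sketch on a toy road graph. It is not the paper's improved algorithm; the graph, the distance constraint, and all parameters are illustrative assumptions.

```python
# Hypothetical sketch of ant colony path search on a small road graph.
# NOT the paper's improved algorithm; edge costs, the constraint handling,
# and the parameters are illustrative assumptions only.
import random

# Toy road network: node -> {neighbor: (travel_time, distance)}
GRAPH = {
    "A": {"B": (4, 2), "C": (2, 3)},
    "B": {"D": (5, 4)},
    "C": {"B": (1, 1), "D": (8, 2)},
    "D": {},
}
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 1.0   # pheromone weight, heuristic weight, evaporation, deposit
pheromone = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}

def build_path(src, dst, max_distance):
    """One ant walks from src to dst, rejecting moves that break the distance constraint."""
    path, time_cost, dist = [src], 0.0, 0.0
    node = src
    while node != dst:
        choices = [(v, c) for v, c in GRAPH[node].items()
                   if v not in path and dist + c[1] <= max_distance]
        if not choices:
            return None, float("inf")
        weights = [pheromone[(node, v)] ** ALPHA * (1.0 / c[0]) ** BETA for v, c in choices]
        v, (t, d) = random.choices(choices, weights=weights)[0]
        path.append(v); time_cost += t; dist += d; node = v
    return path, time_cost

best_path, best_cost = None, float("inf")
for _ in range(200):                      # repeated single-ant path construction
    path, cost = build_path("A", "D", max_distance=7)
    if path is None:
        continue
    if cost < best_cost:
        best_path, best_cost = path, cost
    for u, v in zip(path, path[1:]):      # evaporate then deposit pheromone on the edges the ant used
        pheromone[(u, v)] = (1 - RHO) * pheromone[(u, v)] + Q / cost
print(best_path, best_cost)
```

In this toy setting the ant minimizes travel time subject to a maximum-distance constraint; a dynamic variant would periodically refresh the edge costs from estimated traffic flows before the next construction round.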

      • KCI-indexed

        Visual Analysis of Deep Q-network

        Dewen Seng, Jiaming Zhang, Xiaoying Shi. Korean Society for Internet Information (KSII), 2021. KSII Transactions on Internet and Information Systems, Vol.15 No.3

        In recent years, deep reinforcement learning (DRL) models have attracted great interest owing to their success in a variety of challenging tasks. The Deep Q-Network (DQN) is a widely used deep reinforcement learning model that trains an intelligent agent to execute optimal actions while interacting with an environment. This model is well known for its ability to surpass skilled human players across many Atari 2600 games. Although DQN has achieved excellent performance in practice, a clear understanding of why the model works is still lacking. In this paper, we present a visual analytics system for understanding the deep Q-network in a non-blind manner. Based on the stored data generated during the training and testing process, four coordinated views are designed to expose the internal execution mechanism of DQN from different perspectives. We report the system performance and demonstrate its effectiveness through two case studies. Using our system, users can learn the relationship between states and Q-values, the function of the convolutional layers, the strategies learned by DQN, and the rationality of the decisions made by the agent.
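
To illustrate the kind of data such a visual analytics system would consume, here is a small, hypothetical sketch that logs states, Q-values, and chosen actions during a rollout. The linear stand-in for the Q-network and the random environment are assumptions for the example, not the paper's DQN or its Atari setup.

```python
# Minimal sketch of the per-step records a Q-value visual analytics tool could
# consume: states, Q-values, and chosen actions logged during a rollout.
# The linear "Q-network" and the random transitions are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS = 8, 4
W = rng.normal(size=(N_ACTIONS, STATE_DIM))     # stand-in for a trained Q-network

def q_values(state):
    """Return the Q-value of every action for one state."""
    return W @ state

log = []                                        # records to be visualized later
state = rng.normal(size=STATE_DIM)
for step in range(100):
    q = q_values(state)
    # epsilon-greedy action selection with epsilon = 0.05
    action = int(np.argmax(q)) if rng.random() > 0.05 else int(rng.integers(N_ACTIONS))
    log.append({"step": step, "state": state.copy(), "q": q.copy(), "action": action})
    state = rng.normal(size=STATE_DIM)          # placeholder environment transition

# A state-versus-Q-value view could, for example, plot the Q-value spread per step.
spreads = [rec["q"].max() - rec["q"].min() for rec in log]
print(f"mean Q-value spread over {len(log)} steps: {np.mean(spreads):.3f}")
```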

      • Process Backtracking and Reconstruction based on Task Chain Model

        Jing Chen, Dewen Seng, Xujian Fang. Science and Engineering Research Support Society (SERSC), 2016. International Journal of Grid and Distributed Computing, Vol.9 No.4

        Fundamental information about each design unit is described using the correlations between the nodes in the task chain. Data transfer between design units is formulated as fundamental data transfer, design rule transfer, and path scheme transfer, respectively. The design process is stored node by node using an algorithm that decomposes correlated nodes. A reconstruction method is employed to eliminate the redundant nodes left over from previous design processes, reducing the degree of coupling. The performance of the proposed scheme is verified by applying it to the development of a low-voltage appliance.
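
As a rough illustration of a task-chain representation with redundant-node elimination, here is a hypothetical Python sketch. The node fields and the redundancy rule (identical payload and predecessors) are assumptions made for the example, not the paper's actual model.

```python
# Illustrative sketch of a task-chain node structure and a reconstruction pass
# that drops redundant nodes. The Node fields and the redundancy rule are
# assumptions for illustration, not the paper's task chain model.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    payload: str                       # e.g. fundamental data / design rule / path scheme
    predecessors: tuple = field(default_factory=tuple)

def reconstruct(chain):
    """Remove nodes whose payload and predecessors duplicate an earlier node,
    rewiring later nodes to point at the surviving copy."""
    seen, alias, result = {}, {}, []
    for node in chain:
        preds = tuple(alias.get(p, p) for p in node.predecessors)
        key = (node.payload, preds)
        if key in seen:                # redundant: reuse the earlier equivalent node
            alias[node.name] = seen[key].name
            continue
        kept = Node(node.name, node.payload, preds)
        seen[key] = kept
        result.append(kept)
    return result

chain = [
    Node("n1", "load spec"),
    Node("n2", "route wiring", ("n1",)),
    Node("n3", "load spec"),                  # duplicate of n1
    Node("n4", "check clearance", ("n3",)),   # gets rewired to depend on n1
]
for node in reconstruct(chain):
    print(node)
```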
