RISS Academic Research Information Service

      • A Novel Social Context-Aware Data Forwarding Scheme in Mobile Social Networks

Fang Xu, Huyin Zhang, Min Deng, Ning Xu, Zhiyong Wang, Zenggang Xiong, Conghuan Ye — SERSC, 2016, International Journal of Smart Home Vol.10 No.6

Routing in disconnected delay-tolerant mobile ad hoc networks (MANETs) remains a challenging issue. Several works have addressed routing by exploiting the social behavior of each node. Mobile Social Networks (MSNs) are an increasingly popular type of Delay Tolerant Network (DTN). Routing performance improves when knowledge of the expected topology and the social context of the network is available. In this paper, we introduce a new metric for data forwarding based on social context information: a node's social context is used to calculate the encounter utility between the node and the destination, and the social relationships among network nodes are used to calculate a node's betweenness centrality utility. We combine the two utility functions to derive the social strength among users and their importance. We also present a social context-based data forwarding algorithm for routing decisions. Extensive simulations on real traces show that the proposed algorithm is more efficient than existing algorithms.
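The combined forwarding metric described in the abstract can be sketched in Python. The linear weighting `alpha`, the `Node` class, and the toy utility values below are illustrative assumptions, not the paper's exact formulation:

```python
from dataclasses import dataclass

@dataclass
class Node:
    # encounter_prob: estimated probability of meeting each destination
    encounter_prob: dict
    # betweenness: normalized betweenness centrality in the social graph
    betweenness: float

    def encounter(self, dest):
        return self.encounter_prob.get(dest, 0.0)

def social_strength(node, dest, alpha=0.5):
    """Combine encounter utility and centrality utility into one score.
    alpha is an assumed weighting parameter for illustration."""
    return alpha * node.encounter(dest) + (1 - alpha) * node.betweenness

def should_forward(carrier, candidate, dest, alpha=0.5):
    """Hand the message over only if the candidate's combined utility
    toward the destination exceeds the current carrier's."""
    return social_strength(candidate, dest, alpha) > social_strength(carrier, dest, alpha)

carrier = Node({"D": 0.2}, 0.1)
relay = Node({"D": 0.7}, 0.4)
print(should_forward(carrier, relay, "D"))  # True: the relay scores higher
```

The comparison mirrors the common DTN pattern of forwarding only to nodes with a strictly higher utility toward the destination, which limits redundant copies.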

• KCI-indexed

        TransNav: spatial sequential transformer network for visual navigation

Zhou Kang, Zhang Huyin, Li Fei — Korean Society of Computational Design and Engineering, 2022, Journal of Computational Design and Engineering Vol.9 No.5

The visual navigation task is to steer an embodied agent to find a given target based on its observations. An effective transformation from the agent's observations to a visual representation determines the navigation actions and promotes a more informed navigation policy. In this work, we propose a spatial sequential transformer network (SSTNet) for learning informative visual representations in deep reinforcement learning. SSTNet is composed of a spatial attention probability fused model (SAF) and a sequential transformer network (STNet). SAF fuses the cross-modal state into visual clues for reinforcement learning: it encodes semantic information about observed objects as well as spatial information about their locations, jointly exploiting inter-image relations. STNet generates (imagines) the next observations and infers actions from the aspects most relevant to the target, decoding intra-image relations. This way, the agent learns to understand the causality between navigation actions and dynamic changes in its observations. SSTNet is conditioned auto-regressively on the desired reward, past states, actions, and a knowledge graph. The whole navigation framework considers local and global visual information as well as temporal sequence information, allowing the agent to navigate toward the sought-after object effectively. Evaluations on the AI2THOR framework show that our method attains at least a 10% improvement in average success rate over most state-of-the-art models. Code and datasets can be found at https://github.com/zhoukang123/SDTNet_2022.
