RISS Academic Research Information Service

      • KCI Indexed

        Real-Time Stock Market Prediction Using Knowledge Accumulation

        Jinhwa Kim, Kwang Hun Hong, Jin Young Min — Korea Intelligent Information Systems Society 2011 Journal of Intelligence and Information Systems Vol.17 No.4

        One of the major problems in data mining is data volume, as most data sets today are huge. Streams of data are continuously accumulated into storage systems or databases: transactions on the internet, on mobile devices, and in ubiquitous environments all produce continuous data streams. Some data sets are simply buried unused inside huge storage because of their size; others are lost as soon as they are created because they are never saved. How to use such large data sets, and how to mine streaming data efficiently, are challenging questions in data mining. Stream data is a data set that accumulates continuously into storage from a data source, and in many cases its size grows ever larger over time. Mining information from this massive data consumes too many resources, such as storage, money, and time. These characteristics of stream data make it difficult and expensive to store every data set accumulated over time; yet if one mines only recent or partial data, valuable information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information and patterns over time in the form of a rule set. A rule set is mined from each data set in the stream and merged into a master rule set, which also serves as a model for real-time decision making. One main advantage of this method is that it requires far less storage than the traditional approach of saving the whole data set. Another is that the accumulated rule set doubles as a prediction model: because the rule set is ready at any time, prompt responses to user requests and real-time decision making are possible, which is the greatest advantage of this method.
        Based on the theory of ensemble approaches, combining many different models can produce a better-performing prediction model, and the consolidated rule set effectively covers the entire data set, whereas the traditional sampling approach covers only part of it. This study uses stock market data, a heterogeneous data set whose characteristics vary over time: stock market indexes fluctuate whenever an event influences the market, so the variance of each variable is large compared to a homogeneous data set, and prediction is correspondingly more difficult. The study compares two general mining approaches with the suggested method. The first induces a rule set from only the recent data to predict new data; the second induces a rule set from all the data accumulated since the beginning every time new data must be predicted. Neither performs as well as the accumulated rule set. The study further experiments with two prediction models: one built only from the more important rule sets, and one using all the rule sets, with weights assigned to the rules based on their performance. The weighted approach shows better performance. The experiments also show that the suggested method can be an efficient approach for mining information and patterns from stream data. A limitation of this study is that its application is bounded to stock market data; a more dynamic real-time stream data set would be desirable for further application of this method.
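The accumulated-rule-set idea described in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `Rule` and `MasterRuleSet` names, the Laplace-smoothed weight, and the threshold rules are all assumptions made for the example. Each stream window yields rules; rules are merged into a master set and vote on new instances, weighted by their observed accuracy.

```python
# Hypothetical sketch of accumulating mined rules into a master rule set
# that doubles as a weighted-vote prediction model (names are illustrative).
from dataclasses import dataclass


@dataclass
class Rule:
    condition: callable   # maps a feature dict to True/False
    prediction: int       # e.g. 1 = index up, 0 = index down
    hits: int = 0
    trials: int = 0

    def weight(self) -> float:
        # Laplace-smoothed accuracy, so an untested rule starts near 0.5
        return (self.hits + 1) / (self.trials + 2)


class MasterRuleSet:
    def __init__(self):
        self.rules: list[Rule] = []

    def accumulate(self, new_rules):
        # Store only the mined rules, not the raw stream window
        self.rules.extend(new_rules)

    def predict(self, features) -> int:
        # Weighted vote over all rules that fire on this instance
        votes = {0: 0.0, 1: 0.0}
        for r in self.rules:
            if r.condition(features):
                votes[r.prediction] += r.weight()
        return max(votes, key=votes.get)

    def update(self, features, outcome):
        # Track each firing rule's performance to adjust its weight
        for r in self.rules:
            if r.condition(features):
                r.trials += 1
                if r.prediction == outcome:
                    r.hits += 1


master = MasterRuleSet()
master.accumulate([
    Rule(lambda x: x["volume"] > 100, prediction=1),
    Rule(lambda x: x["volume"] <= 100, prediction=0),
])
print(master.predict({"volume": 150}))  # -> 1
```

Because only the (small) rule set is retained between windows, the storage cost stays bounded even as the stream grows, which is the core trade-off the abstract describes.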

      • KCI Indexed

        Design and Methods of the Korean National Investigations of 70,000 Suicide Victims Through Police Records (The KNIGHTS Study)

        Eun Jin Na, Jinhwa Choi, Dajung Kim, Heeyoun Kwon, Yejin Lee, Gusang Lee, Maurizio Fava, David Mischoulon, Jihoon Jang, Hong Jin Jeon — Korean Neuropsychiatric Association 2019 PSYCHIATRY INVESTIGATION Vol.16 No.10

        Objective: The suicide rate in South Korea was the second highest among the Organization for Economic Cooperation and Development countries in 2017. The purpose of this study is to understand the characteristics of people who died by suicide in Korea from 2013 to 2017 and to better prevent suicide. Methods: This study was performed by the Korea Psychological Autopsy Center (KPAC), an affiliate of the Korea Ministry of Health and Welfare. According to the Korea National Statistical Office, the number of suicide victims nationwide was estimated at about 70,000 from 2013 to 2017. Comprehensive suicide records from all 254 police stations in South Korea were evaluated by 32 investigators who completed a 14-day didactic training program. We then evaluated the characteristics of suicide victims in association with disease data from the National Health Insurance Database (NHID), which is anonymously linked to the personal information of suicide victims. Results: Thirty-one of the 254 police stations, in the Seoul metropolitan area, were analyzed by August 10, 2018. Findings showed that the characteristics of suicide victims differed according to the nature of the region. Conclusion: Our results suggest that different strategies and methods, tailored to regional groups, are needed to prevent suicide.

      • KCI Indexed

        De Novo Assembly of Large Genomes Using NGS Data

        JungIm Won, Sangkyoon Hong, JinHwa Kong, Sun Huh, JeeHee Yoon — Korean Institute of Information Scientists and Engineers 2013 Journal of KIISE: Databases Vol.40 No.1

        De novo assembly is a method that reconstructs reads into a putative original sequence without using a reference sequence. Advanced studies in many areas use NGS (Next Generation Sequencing) data owing to its ultra-high throughput and low cost. However, research on de novo assembly using NGS data is not yet sufficient, because de novo assembly requires a great deal of execution time and memory to process large, complex-structured NGS data, and because analysis tools and know-how for handling massive numbers of short reads have not been fully developed. This paper discusses a novel analysis method for de novo assembly of large genomes using NGS data and verifies its effectiveness and accuracy through various experiments. To overcome the resource overhead of de novo assembly, we propose a method that aligns read data to the sequences of similar species. We also propose a hybrid method that combines two different de novo assembly algorithms in order to increase the effectiveness and accuracy of assembly.
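The core mechanism behind de novo assembly can be illustrated with a minimal de Bruijn graph, the data structure most short-read assemblers build on. This toy sketch is an assumption-laden illustration, not the paper's pipeline: reads are split into k-mers, each k-mer becomes an edge between (k-1)-mers, and a walk along unambiguous edges reconstructs a contig. Real assemblers add error correction, branch resolution, and scaffolding.

```python
# Toy de Bruijn graph assembly: k-mers become edges; a non-branching
# walk over the graph reconstructs one contig from overlapping reads.
from collections import defaultdict


def build_graph(reads, k):
    graph = defaultdict(set)             # (k-1)-mer -> set of successor (k-1)-mers
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph


def walk(graph, start):
    # Follow only unambiguous (single-successor) edges to form a contig;
    # branching nodes end the walk, as they would require resolution.
    contig, node = start, start
    while len(graph.get(node, ())) == 1:
        node = next(iter(graph[node]))
        contig += node[-1]
    return contig


reads = ["ATGGC", "TGGCG", "GGCGT"]      # overlapping short reads
g = build_graph(reads, k=4)
print(walk(g, "ATG"))                    # -> "ATGGCGT"
```

The memory cost of holding such a graph for a mammalian-scale genome is exactly the resource overhead the abstract describes, which motivates the authors' use of alignment against similar species and a hybrid of two assembly algorithms.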

      • SCI/SCIE/SCOPUS

        Fair-share scheduling in single-ISA asymmetric multicore architecture via scaled virtual runtime and load redistribution

        Kim, Myungsun; Noh, Soonhyun; Hyeon, Jinhwa; Hong, Seongsoo — Elsevier 2018 Journal of Parallel and Distributed Computing Vol.111

        Performance-asymmetric multicore processors have been increasingly adopted in embedded systems due to their architectural benefits in improved performance and power savings. While fair-share scheduling is a crucial kernel service for such applications, it is still at an early stage with respect to performance-asymmetric multicore architecture. In this article, we first propose a new fair-share scheduler by adopting the notion of scaled CPU time, which reflects the performance asymmetry between different types of cores. Using the scaled CPU time, we revise the virtual runtime of the completely fair scheduler (CFS) of the Linux kernel and extend it into the scaled virtual runtime (SVR). In addition, we propose an SVR balancing algorithm that bounds the maximum SVR difference of tasks running on the same core types. The SVR balancing algorithm periodically partitions the tasks in the system into task groups and allocates them to the cores in such a way that tasks with smaller SVR receive larger SVR increments and thus proceed more quickly. We formally show the fairness property of the proposed algorithm. To demonstrate the effectiveness of the proposed approach, we implemented it in Linaro's scheduling framework on ARM's Versatile Express TC2 board and performed a series of experiments using the PARSEC benchmarks. The experiments show that the maximum SVR difference is only 4.09 ms in our approach, whereas it diverges indefinitely with time in the original Linaro scheduling framework. In addition, our approach incurs a run-time overhead of only 0.4% with an increased energy consumption of only 0.69%.

        Highlights:
        • A new fair-share scheduler is proposed for performance-asymmetric multicore systems.
        • Scaled virtual runtime (SVR) is introduced to capture the asymmetry among cores.
        • A task migration policy is proposed to balance SVRs among cores.
        • The approach bounds the SVR differences between tasks in a cluster by a constant.
        • Our approach incurs only negligible run-time and energy overhead.
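The scaled-CPU-time idea at the heart of this abstract can be sketched in a few lines. The performance factors and weight model below are assumptions for illustration (the paper's measured big.LITTLE ratios are not given here): CPU time consumed on a fast "big" core is scaled up by that core's relative performance before being charged to the task's virtual runtime, so tasks placed on slow cores are not unfairly penalized.

```python
# Illustrative SVR increment: scale raw CPU time by the core's relative
# performance, then apply the CFS-style weight normalization.
NICE_0_WEIGHT = 1024                    # CFS weight of a nice-0 task

# Assumed relative performance per core type (little core = 1.0)
PERF = {"little": 1.0, "big": 2.0}


def svr_delta(exec_time_ms, core_type, task_weight=NICE_0_WEIGHT):
    """SVR increment for exec_time_ms of CPU time on the given core type."""
    scaled_cpu_time = exec_time_ms * PERF[core_type]
    return scaled_cpu_time * NICE_0_WEIGHT / task_weight


# 10 ms on a big core is charged like 20 ms of little-core time, so a task
# that did the same amount of *work* accrues the same SVR on either core.
print(svr_delta(10, "big"), svr_delta(20, "little"))  # -> 20.0 20.0
```

Under this accounting, picking the runnable task with the smallest SVR (as CFS does with vruntime) equalizes work received rather than wall-clock CPU time, which is what the balancing algorithm's bounded SVR difference then guarantees across cores.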

      • An Experimental Study on the Crash Characteristics and Deformation Patterns of Large Passenger Cars

        Jihyun Song, Byeongchang Hong, Gyuhyun Kim, Jinhwa Song, Sangwook Seo — Korean Society of Automotive Engineers 2008 Proceedings of the KSAE Spring/Autumn Conference Vol.- No.-

        The purpose of this study is to conduct various crash tests of large passenger cars and to measure vehicle deformation, occupant injuries, and body decelerations. The relationship between impact speed and vehicle damage data, such as deformation and body decelerations, is analyzed. The crash-test characteristics of domestic cars are obtained and will be used to develop a precise accident reconstruction program using data recorded by the Automatic Accident Recording System (AARS). In this study, the results of full frontal barrier crash and side impact tests are reviewed and examined. The results show that the decelerations of large passenger cars decrease as vehicle weight increases, while the duration of deceleration increases with vehicle weight.

      • Mussel-Inspired Anchoring of Polymer Loops That Provide Superior Surface Lubrication and Antifouling Properties

        Kang, Taegon,Banquy, Xavier,Heo, Jinhwa,Lim, Chanoong,Lynd, Nathaniel A.,Lundberg, Pontus,Oh, Dongyeop X.,Lee, Han-Koo,Hong, Yong-Ki,Hwang, Dong Soo,Waite, John Herbert,Israelachvili, Jacob N.,Hawker, American Chemical Society 2016 ACS NANO Vol.10 No.1

        We describe robustly anchored triblock copolymers that adopt loop conformations on surfaces and endow them with unprecedented lubricating and antifouling properties. The triblocks have two end blocks with catechol anchoring groups and a looping poly(ethylene oxide) (PEO) midblock. The loops mediate strong steric repulsion between two mica surfaces. When sheared at constant speeds of ~2.5 μm/s, the surfaces exhibit an extremely low friction coefficient of ~0.002–0.004 without any signs of damage up to pressures of ~2–3 MPa, close to those of most biological bearing systems. Moreover, the polymer loops enhance inhibition of cell adhesion and proliferation compared to polymers in random coil or brush conformations. These results demonstrate that strongly anchored polymer loops are effective for high lubrication and low cell adhesion and represent a promising candidate for the development of specialized high-performance biomedical coatings.
