RISS (Academic Research Information Service)

      • KCI-indexed

        Finding Weighted Sequential Patterns over Data Streams Using a Gap-Based Weighting Technique

        장중혁(Joong Hyuk Chang) 한국지능정보시스템학회 2010 지능정보연구 Vol.16 No.3

        Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in various application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of data elements in a sequence is considered, so it can easily find simple sequential patterns but is limited in finding more interesting sequential patterns that are widely used in real-world applications. One of the essential research topics to compensate for this limitation is weighted sequential pattern mining. In weighted sequential pattern mining, not only the generation order of data elements but also their weights are considered to get more interesting sequential patterns. Recently, data has increasingly taken the form of continuous data streams rather than finite stored data sets in various application fields, and the database research community has begun focusing its attention on processing over data streams. A data stream is a massive unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once to analyze the data stream, and the memory usage for data stream analysis should be restricted finitely although new data elements are continuously generated in a data stream. Moreover, newly generated data elements should be processed as fast as possible to produce the up-to-date analysis result of a data stream, so that it can be instantly utilized upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering the changes in the form of data generated in real-world application fields, many studies have been actively performed to find various kinds of knowledge embedded in data streams. They mainly focus on efficient mining of frequent itemsets and sequential patterns over data streams, which have been proven to be useful in conventional data mining for a finite data set. In addition, mining algorithms have also been proposed to efficiently reflect the changes of data streams over time into their mining results. However, they have targeted finding naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, taking no interest in mining novel interesting patterns that express the characteristics of the target data streams better. Therefore, it can be a valuable research topic in the field of mining data streams to define novel interesting patterns and develop a mining method finding the novel patterns, which will be effectively used to analyze recent data streams. This paper proposes a gap-based weighting approach for a sequential pattern and a mining method of weighted sequential patterns over sequence data streams via the weighting approach. A gap-based weight of a sequential pattern can be computed from the gaps of data elements in the sequential pattern without any pre-defined weight information. That is, in the approach, the gaps of data elements in each sequential pattern as well as their generation orders are used to get the weight of the sequential pattern; therefore, it can help to get more interesting and useful sequential patterns. Recently, most computer application fields generate data in the form of data streams rather than finite data sets. Considering this change in the form of data, the proposed method mainly focuses on sequence data streams.
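
        The gap-based weighting idea can be illustrated with a small sketch. The abstract does not give the exact weighting formula, so the exponential decay over the total gap used below is only an assumed, illustrative choice, and match_positions / gap_based_weight are hypothetical helper names.

        # Hypothetical sketch of a gap-based weight for a sequential pattern.
        # The paper derives the weight from the gaps between data elements in a
        # matched sequence; the exact formula is not given in the abstract, so
        # the exponential decay used here is only an illustrative assumption.

        def match_positions(sequence, pattern):
            """Return positions in `sequence` where `pattern` elements occur in
            order, using the earliest (leftmost) occurrence of each element,
            or None if the pattern does not occur."""
            positions, start = [], 0
            for item in pattern:
                try:
                    idx = sequence.index(item, start)
                except ValueError:
                    return None
                positions.append(idx)
                start = idx + 1
            return positions

        def gap_based_weight(sequence, pattern, decay=0.9):
            """Weight in (0, 1]: 1.0 when pattern elements are adjacent, smaller
            as the gaps between consecutive matched elements grow."""
            pos = match_positions(sequence, pattern)
            if pos is None:
                return 0.0
            total_gap = sum(b - a - 1 for a, b in zip(pos, pos[1:]))
            return decay ** total_gap

        # Example: the pattern <a, c> matches with one intervening element (gap 1).
        print(gap_based_weight(list("abcd"), list("ac")))   # 0.9
        print(gap_based_weight(list("acbd"), list("ac")))   # 1.0 (adjacent)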

      • Stream Data Mining: Platforms, Algorithms, Performance Evaluators and Research Trends

        Bakshi Rohit Prasad, Sonali Agarwal 보안공학연구지원센터 2016 International Journal of Database Theory and Application Vol.9 No.9

        Streaming data are a potentially infinite sequence of incoming data arriving at very high speed that may evolve over time. This causes several challenges in mining large-scale, high-speed data streams in real time, and the field has therefore gained a lot of attention from researchers in recent years. This paper discusses various challenges associated with mining such data streams. Several available stream data mining algorithms for classification and clustering are specified along with their key features and significance. The significant performance evaluation measures relevant to streaming data classification and clustering are also explained, and their comparative significance is discussed. The paper illustrates various streaming data computation platforms that have been developed, discussing each of them chronologically along with its major capabilities, and clearly specifies the potential research directions open in high-speed, large-scale data stream mining from the points of view of algorithms, evolving data, and performance evaluation measurement. Finally, the Massive Online Analysis (MOA) framework is used as a use case to show the results of key streaming data classification and clustering algorithms on a sample benchmark dataset, and their performances are critically compared and analyzed based on the performance evaluation parameters specific to streaming data mining.

      • Challenges and Issues in DATA Stream: A Review

        보안공학연구지원센터(IJHIT) 보안공학연구지원센터 2015 International Journal of Hybrid Information Technology Vol.8 No.3

        A data stream is a continuous, time-varying, massive, and infinitely ordered sequence of data elements. Because streaming data change rapidly over time, it is impossible to retain all the elements of a data stream; therefore, each data element should be examined at most once. Memory usage for mining a data stream should also be limited, because new data elements are continuously generated by the stream. At the same time, results over newly arrived data should be immediately available whenever they are requested. These constraints make the task challenging, yet it is necessary for fraud detection over streams, knowledge extraction, business improvement, and other applications where data arrive in streams. This paper highlights the important issues and research challenges of data streams by means of a comprehensive review.
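
        The single-pass and bounded-memory constraints described above are commonly met with approximate counting. Below is a minimal sketch of lossy counting (Manku and Motwani), one well-known technique of this kind; the review surveys such approaches but does not prescribe this particular algorithm, so the sketch is illustrative only.

        from math import ceil

        class LossyCounter:
            """Single-pass approximate frequency counting with bounded memory
            (Manku-Motwani lossy counting). Counts are under-estimated by at
            most epsilon * N, where N is the number of elements seen so far."""

            def __init__(self, epsilon=0.01):
                self.epsilon = epsilon
                self.width = ceil(1.0 / epsilon)     # bucket width
                self.n = 0                           # elements seen so far
                self.entries = {}                    # item -> (count, max_error)

            def add(self, item):
                self.n += 1
                bucket = ceil(self.n / self.width)
                count, err = self.entries.get(item, (0, bucket - 1))
                self.entries[item] = (count + 1, err)
                if self.n % self.width == 0:         # prune at bucket boundary
                    self.entries = {k: (c, e) for k, (c, e) in self.entries.items()
                                    if c + e > bucket}

            def frequent(self, support):
                """Items whose true frequency may reach support * n."""
                threshold = (support - self.epsilon) * self.n
                return [k for k, (c, _) in self.entries.items() if c >= threshold]

        lc = LossyCounter(epsilon=0.1)
        for x in "aababcabcdaaab":
            lc.add(x)
        print(lc.frequent(0.3))   # ['a', 'b']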

      • KCI-indexed

        Finding Frequent Sequential Patterns over Sequence Data Streams Using a Gap Constraint

        장중혁(Joong-Hyuk Chang) 한국컴퓨터정보학회 2010 韓國컴퓨터情報學會論文誌 Vol.15 No.9

        Sequential pattern mining is one of the essential data mining tasks, and it is widely used to analyze data generated in various application fields such as web-based applications, E-commerce, bioinformatics, and USN environments. Recently, data generated in these application fields has been taking the form of continuous data streams rather than finite stored data sets. Considering this change in the form of data, many studies have been actively performed to efficiently find sequential patterns over data streams. However, conventional research focuses on reducing processing time and memory usage in mining sequential patterns over a target data stream, so that little attention has been paid to mining more interesting and useful sequential patterns that efficiently reflect the characteristics of the data stream. This paper proposes a mining method for sequential patterns over data streams with a gap constraint, which can help to find more interesting sequential patterns over the data streams. First, the meaning of the gap for a sequential pattern and the concept of gap-constrained sequential patterns are defined, and subsequently a mining method for finding gap-constrained sequential patterns over a data stream is proposed.
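
        A gap constraint of the kind described above can be checked as follows. The sketch assumes the constraint bounds the number of elements allowed between consecutive matched pattern elements; the paper's precise gap definition and its stream-oriented counting structures are not reproduced here.

        # Illustrative check of a maximum-gap constraint, assuming the constraint
        # limits the number of elements allowed between consecutive pattern
        # elements; the paper's precise gap definition may differ.

        def contains_with_max_gap(sequence, pattern, max_gap):
            """True if `pattern` occurs in `sequence` in order with at most
            `max_gap` elements between consecutive matched positions."""
            def search(seq_idx, pat_idx):
                if pat_idx == len(pattern):
                    return True
                limit = len(sequence) if pat_idx == 0 else min(
                    len(sequence), seq_idx + max_gap + 1)
                for i in range(seq_idx, limit):
                    if sequence[i] == pattern[pat_idx] and search(i + 1, pat_idx + 1):
                        return True
                return False
            return search(0, 0)

        def gap_constrained_support(sequences, pattern, max_gap):
            """Fraction of sequences containing `pattern` under the gap constraint."""
            hits = sum(contains_with_max_gap(s, pattern, max_gap) for s in sequences)
            return hits / len(sequences)

        db = [list("axbyc"), list("abzc"), list("azzbc")]
        print(gap_constrained_support(db, list("abc"), max_gap=1))  # 2 of 3 sequences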

      • KCI-indexed

        Real-Time Stock Market Prediction Using Knowledge Accumulation

        김진화(Jinhwa Kim), 홍광헌(Kwang Hun Hong), 민진영(Jin Young Min) 한국지능정보시스템학회 2011 지능정보연구 Vol.17 No.4

        One of the major problems in the area of data mining is the size of the data, as most data sets have huge volumes these days. Streams of data are normally accumulated into data storages or databases. Transactions on the internet, mobile devices, and ubiquitous environments produce streams of data continuously. Some data sets are just buried unused inside huge data storage due to their size. Some data sets are quickly lost as soon as they are created because they are not saved, for many reasons. How to use such large data sets and how to use data on streams efficiently are challenging questions in the study of data mining. Stream data is a data set that is continuously accumulated into data storage from a data source. The size of this data set, in many cases, becomes increasingly large over time. Mining information from this massive data takes too many resources, such as storage, money, and time. These unique characteristics of the stream data make it difficult and expensive to store all the stream data sets accumulated over time. Otherwise, if one uses only recent or partial data to mine information or patterns, valuable information can be lost. To avoid these problems, this study suggests a method that efficiently accumulates information or patterns in the form of a rule set over time. A rule set is mined from a data set in the stream, and this rule set is accumulated into a master rule set storage, which is also a model for real-time decision making. One of the main advantages of this method is that it takes much smaller storage space compared to the traditional method, which saves the whole data set. Another advantage of using this method is that the accumulated rule set is used as a prediction model. Prompt response to user requests is possible at any time, as the rule set is always ready to be used to make decisions. This makes real-time decision making possible, which is the greatest advantage of this method. Based on theories of ensemble approaches, a combination of many different models can produce a prediction model with better performance. The consolidated rule set actually covers all the data, while the traditional sampling approach only covers part of the whole data set. This study uses stock market data, which is a heterogeneous data set in which the characteristics of the data vary over time. The indexes in stock market data can fluctuate in different situations whenever there is an event influencing the stock market index. Therefore, the variance of the values in each variable is large compared to that of a homogeneous data set. Prediction with a heterogeneous data set is naturally much more difficult than with a homogeneous data set, as it is more difficult to predict in unpredictable situations. This study tests two general mining approaches and compares their prediction performances with the method we suggest in this study. The first approach is inducing a rule set from the recent data set to predict a new data set. The second one is inducing a rule set from all the data accumulated from the beginning, every time one has to predict a new data set. We found that neither of these two is as good in performance as the accumulated rule set method. Furthermore, the study shows experiments with different prediction models. The first approach builds a prediction model only with the more important rule sets, and the second approach uses all the rule sets by assigning weights to the rules based on their performance. The second approach shows better performance than the first one. The experiments also show that the method suggested in this study can be an efficient approach for mining information and patterns from stream data. This method has the limitation that its application here is bounded to stock market data; a more dynamic real-time stream data set is desirable for applying this method. There is also another problem in this study.
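
        The accumulate-rules-then-vote idea can be sketched roughly as below. The rule representation, the performance-based weights, and the voting scheme are assumptions for illustration, not the authors' exact design.

        # Illustrative sketch: rules mined from each incoming chunk are added to
        # a master rule set, and predictions are made by weighted voting. The
        # rule format and weights here are assumed, not the paper's design.

        class MasterRuleSet:
            def __init__(self):
                self.rules = []          # list of (predicate, label, weight)

            def accumulate(self, chunk_rules):
                """Add rules mined from the latest data chunk to the master set."""
                self.rules.extend(chunk_rules)

            def predict(self, record):
                votes = {}
                for predicate, label, weight in self.rules:
                    if predicate(record):
                        votes[label] = votes.get(label, 0.0) + weight
                return max(votes, key=votes.get) if votes else None

        # Toy rules mined from two hypothetical chunks of stock-market features.
        chunk1 = [(lambda r: r["momentum"] > 0, "up", 0.7),
                  (lambda r: r["volume"] < 1.0, "down", 0.4)]
        chunk2 = [(lambda r: r["momentum"] > 0.5, "up", 0.9)]

        model = MasterRuleSet()
        model.accumulate(chunk1)
        model.accumulate(chunk2)
        print(model.predict({"momentum": 0.8, "volume": 0.5}))   # "up"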

      • SCIE, SCOPUS

        Sliding window based weighted erasable stream pattern mining for stream data applications

        Yun, U., Lee, G. North-Holland 2016 Future Generation Computer Systems Vol.59 No.-

        As one of the variations in frequent pattern mining, erasable pattern mining discovers patterns with benefits lower than or equal to a user-specified threshold from a product database. Although traditional erasable pattern mining algorithms can perform their own mining operations on static mining environments, they are not suitable for dealing with dynamic data stream environments. In such dynamic data streams, algorithms have to process them immediately with only one database scan in order to consider the characteristics of data stream mining. However, previous tree-based erasable pattern mining methods have difficulty in processing dynamic data streams because they need two or more database scans to construct their own tree structures. In addition, they do not consider specific information of each item within a product database, although such additional item information needs to be considered in order to find more useful erasable pattern results. For this reason, in this paper, we propose a weighted erasable pattern mining algorithm suitable for sliding window-based data stream environments. The algorithm employs tree and list data structures for more efficient mining processes and solves the problems of previous erasable pattern mining approaches by using a sliding window-based stream processing technique and an item weight-based pattern pruning method. We compare the performance of the proposed algorithm to state-of-the-art tree-based approaches with respect to various real and synthetic datasets. Experimental results show that our method is more efficient and scalable than the competitors in terms of runtime, memory, and pattern generation.
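
        The erasable-pattern notion over a sliding window can be illustrated with a small sketch: a pattern of components is erasable if the profit of the products that depend on it stays at or below a threshold. The class and method names below are hypothetical, and the item weighting of the actual algorithm is omitted.

        from collections import deque

        # Minimal sketch of the erasable-pattern idea over a sliding window.
        # Window handling and item weighting are simplified assumptions here.

        class SlidingWindowProducts:
            def __init__(self, window_size):
                self.window = deque(maxlen=window_size)   # (components, profit)

            def add(self, components, profit):
                self.window.append((frozenset(components), profit))

            def loss_if_erased(self, pattern):
                """Profit lost if every component in `pattern` were removed."""
                pattern = set(pattern)
                return sum(p for comps, p in self.window if comps & pattern)

            def is_erasable(self, pattern, threshold_ratio):
                total = sum(p for _, p in self.window)
                return self.loss_if_erased(pattern) <= threshold_ratio * total

        db = SlidingWindowProducts(window_size=3)
        db.add({"m1", "m2"}, 100)
        db.add({"m2", "m3"}, 50)
        db.add({"m4"}, 10)
        print(db.is_erasable({"m4"}, 0.1))   # True: losing 10 of 160 in profit
        print(db.is_erasable({"m1"}, 0.1))   # False: losing 100 of 160 in profit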

      • KCI-indexed

        A Utility-Based Web Access Pattern Mining Technique for Data Stream Environments

        아메드 파한, 최호진, 정병수 한국정보과학회 2010 데이타베이스 연구 Vol.26 No.2

        Web access sequence mining can discover the web pages frequently accessed by users. Utility-based web access sequence mining, in which the utility of a visit is defined from the importance of a web page and the time a user stays on it, handles non-binary occurrences of web pages and extracts more useful knowledge from web logs. However, the existing utility-based web access sequence mining approach considers web access sequences from the very beginning of the web logs, and therefore it is not suitable for mining data streams, where the volume of data is huge and unbounded. At the same time, it cannot adaptively find recent changes of knowledge in data streams. The existing approach has many other limitations: it considers only forward references of web access sequences, suffers from the level-wise candidate generation-and-test methodology, and needs several database scans. In this paper, we propose a new approach for utility-based web access sequence mining over data streams with a sliding window method. Our approach can not only handle large-scale data but also efficiently discover recently generated information from data streams. Moreover, it can solve the other limitations of the existing algorithms over data streams. Extensive performance analysis shows that our approach is very efficient and outperforms the existing algorithms.
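
        A minimal sketch of the utility computation described above, assuming the utility of a visit is the page importance multiplied by the dwell time and the utility of a sequence is the sum over its visits; the page weights and names below are invented for illustration, and the sliding-window bookkeeping of the proposed method is not shown.

        # Assumed page-importance weights for illustration only.
        PAGE_IMPORTANCE = {"home": 1.0, "product": 2.0, "checkout": 5.0}

        def sequence_utility(visits):
            """visits: list of (page, dwell_seconds) in access order."""
            return sum(PAGE_IMPORTANCE.get(page, 1.0) * dwell for page, dwell in visits)

        session = [("home", 10), ("product", 40), ("checkout", 20)]
        print(sequence_utility(session))   # 1*10 + 2*40 + 5*20 = 190.0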

      • KCI-indexed

        Mining Web Traversal Patterns with a Single Scan

        김낙민(Nakmin Kim), 정병수(Byeong-Soo Jeong), 아메드 파한(Chowdhury Farhan Ahmed) 한국정보과학회 2010 정보과학회논문지 : 데이타베이스 Vol.37 No.5

        Along with the rapid growth of Internet use, many studies have been actively conducted to provide more convenient Internet services. Techniques for mining frequently occurring web page traversal sequences from web log data have also been widely studied for the purpose of designing effective web sites. However, existing methods all require multiple database scans, which makes it difficult to mine web page traversal sequences quickly and in real time from continuously generated web log data. In addition, incremental and interactive mining capabilities are also required to handle continuously generated web log data. This paper proposes a method for mining frequently occurring web page traversal sequences from continuously generated web log data in an incremental and interactive manner with a single scan. The proposed method uses a WTS (web traversal sequence)-tree structure, and various experiments show that it outperforms existing methods and is effective.
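
        The single-scan, incremental idea can be illustrated with a simple prefix tree that registers each traversal sequence as it arrives. This is not the WTS-tree itself; the node layout and the prefix-based counting below are simplifying assumptions.

        # Minimal prefix-tree sketch of incrementally registering web traversal
        # sequences in a single pass; illustrative, not the WTS-tree structure.

        class TrieNode:
            __slots__ = ("children", "count")
            def __init__(self):
                self.children = {}
                self.count = 0

        class TraversalTree:
            def __init__(self):
                self.root = TrieNode()

            def insert(self, traversal):
                """Register one page-visit sequence; each prefix count grows by one."""
                node = self.root
                for page in traversal:
                    node = node.children.setdefault(page, TrieNode())
                    node.count += 1

            def support(self, prefix):
                node = self.root
                for page in prefix:
                    if page not in node.children:
                        return 0
                    node = node.children[page]
                return node.count

        tree = TraversalTree()
        for t in [["A", "B", "C"], ["A", "B"], ["A", "C"]]:
            tree.insert(t)
        print(tree.support(["A", "B"]))   # 2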

      • KCI-indexed

        A Fuzzy Window Technique for Differentiating the Importance of Information in Data Stream Mining

        장중혁(Chang, Joong-Hyuk) 한국산학기술학회 2011 한국산학기술학회논문지 Vol.12 No.9

        Considering the characteristics of a data stream, whose data elements are continuously generated and may change over time, many techniques have been proposed to differentiate the importance of data elements in a data stream by their generation time. The conventional techniques are efficient for getting an analysis result focused on the recent information in a data stream, but they are limited in differentiating the importance of information in more flexible and varied ways. An information differentiation technique based on the concept of a fuzzy set can be an alternative way to compensate for this limitation. The concept of a fuzzy set has been widely used in various data mining fields, as it can overcome the sharp boundary problem and give analysis results that better reflect the requirements of real-world applications. In this paper, a fuzzy window mechanism is proposed, which adopts the concept of a fuzzy set and can be used efficiently to differentiate the importance of information in mining data streams. Basic concepts including fuzzy calendars are described first, and subsequently details of mining weighted patterns over data streams using the fuzzy window technique are described.
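
        A fuzzy window can be sketched as a membership function over the age of a data element, replacing the sharp in/out boundary of a conventional window. The trapezoidal shape and the parameters below are assumptions for illustration, not the paper's fuzzy calendar definitions.

        # Illustrative fuzzy window: each element gets a membership weight in
        # [0, 1] based on its age instead of a sharp in/out window boundary.

        def fuzzy_window_weight(age, full_until=100, zero_after=200):
            """1.0 for recent elements, linearly decreasing to 0.0 for old ones."""
            if age <= full_until:
                return 1.0
            if age >= zero_after:
                return 0.0
            return (zero_after - age) / (zero_after - full_until)

        def fuzzy_weighted_support(occurrence_ages):
            """Support of a pattern as the sum of memberships of its occurrences."""
            return sum(fuzzy_window_weight(a) for a in occurrence_ages)

        print(fuzzy_window_weight(50))    # 1.0  (inside the core of the window)
        print(fuzzy_window_weight(150))   # 0.5  (partially remembered)
        print(fuzzy_weighted_support([10, 120, 250]))   # 1.0 + 0.8 + 0.0 = 1.8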

      • SCOPUS, KCI-indexed

        Mining Frequent Itemsets with Normalized Weight in Continuous Data Streams

        Kim, Young-Hee, Kim, Won-Young, Kim, Ung-Mo Korea Information Processing Society 2010 Journal of Information Processing Systems Vol.6 No.1

        A data stream is a massive unbounded sequence of data elements continuously generated at a rapid rate. The continuous characteristic of streaming data necessitates the use of algorithms that require only one scan over the stream for knowledge discovery. Data mining over data streams should support the flexible trade-off between processing time and mining accuracy. In many application areas, weighted frequent itemset mining has been suggested to find important frequent itemsets by considering the weights of itemsets. In this paper, we present an efficient algorithm WSFI (Weighted Support Frequent Itemsets)-Mine with normalized weight over data streams. Moreover, we propose a novel tree structure, called the Weighted Support FP-Tree (WSFP-Tree), that stores compressed crucial information about frequent itemsets. Empirical results show that our algorithm outperforms comparative algorithms under the windowed streaming model.
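
        A rough sketch of weighted support with normalized item weights is shown below, using one common formulation in which the itemset weight is the mean of its normalized item weights multiplied by the itemset's relative frequency; the WSFI-Mine algorithm and the WSFP-Tree structure themselves are not reproduced, and the item weights are invented for illustration.

        # Assumed raw item weights for illustration only.
        ITEM_WEIGHTS = {"a": 0.9, "b": 0.6, "c": 0.3}

        def normalized_weights(weights):
            m = max(weights.values())
            return {k: v / m for k, v in weights.items()}   # scale into (0, 1]

        def weighted_support(itemset, transactions, weights):
            itemset = set(itemset)
            count = sum(itemset <= set(t) for t in transactions)
            w = sum(weights[i] for i in itemset) / len(itemset)
            return w * count / len(transactions)

        tx = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a"}]
        nw = normalized_weights(ITEM_WEIGHTS)
        print(round(weighted_support({"a", "b"}, tx, nw), 3))   # 0.417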
