The purpose of this study is to identify the big data characteristics that influence decision satisfaction and utilization behavior, analyze the extent of their influence, and derive differences from existing studies. To summarize the results: First, among the three categories that classify the characteristics of big data, qualitative attributes such as representation, purpose, interpretability, and innovation in the value-innovation category greatly enhance the decision confidence and decision effectiveness of decision makers who use big data. Second, the individuality attributes in the social-impact category also improve decision confidence and decision effectiveness; however, the collectivity and bias characteristics were shown to increase decision confidence but not decision effectiveness. Third, attributes such as inclusiveness and realism in the integrity category greatly improve both decision confidence and decision effectiveness. Fourth, the analysis showed that as decision makers' confidence and, ultimately, their decision effectiveness increase, the use of big data in organizational decision making has a positive impact on the behavior of big data users.
Spatiotemporal data are records of the spatial changes of moving objects over time. Most data in corporate databases have a spatiotemporal nature, but they are typically treated as merely descriptive semantic data, without considering their potential visual (or cartographic) representation. Businesses such as geographical CRM and location-based services, and technologies such as GPS and RFID, depend on the storage and analysis of spatiotemporal data. The data analysis process can be handled effectively through a spatiotemporal data warehouse and spatial OLAP. This paper proposes a multidimensional model for spatiotemporal data analysis and cartographically represents the results of the analysis.
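As a hedged sketch of what such a multidimensional model can look like in code (the fact-table fields, region names, and SUM measure below are illustrative assumptions, not taken from the paper), a spatiotemporal fact table can be rolled up along its spatial and temporal dimensions much like a spatial-OLAP cube:

```python
from collections import defaultdict

# Hypothetical fact records: (object_id, region, hour, measure).
facts = [
    ("truck1", "gangnam", 9, 12.0),
    ("truck1", "gangnam", 10, 8.0),
    ("truck2", "jongno", 9, 5.0),
    ("truck2", "gangnam", 9, 3.0),
]

def roll_up(facts, by_region=True, by_hour=True):
    """Aggregate a spatiotemporal fact table along the chosen dimensions,
    mimicking a spatial-OLAP roll-up with a SUM measure."""
    cube = defaultdict(float)
    for _obj, region, hour, value in facts:
        key = (region if by_region else "*", hour if by_hour else "*")
        cube[key] += value
    return dict(cube)

# Roll up over time: total measure per region, all hours merged.
print(roll_up(facts, by_hour=False))
# {('gangnam', '*'): 23.0, ('jongno', '*'): 5.0}
```

Each roll-up collapses one dimension to "*"; the cartographic step of the paper would then map the per-region totals onto geometry, which is omitted here.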
This study intends to link agricultural machinery history data with related organizations, collect it through IoT sensors, receive input from agricultural machinery users and managers, and analyze it with AI algorithms. The goal is to track and manage history data throughout all stages of production, purchase, operation, and disposal of agricultural machinery. First, a deep learning algorithm is built in which LSTM (Long Short-Term Memory) is used to estimate oil consumption and recommend maintenance from the historical data of agricultural machines such as tractors and combines, and C-LSTM (Convolutional Long Short-Term Memory) is used to diagnose and determine failures. Second, to collect the historical data of agricultural machinery, IoT sensors including a GPS module, gyro sensor, acceleration sensor, and temperature/humidity sensor are attached to the machinery to collect data automatically. Third, event-type data such as production, purchase, and disposal are collected automatically from related organizations, and an interface is designed that can integrate and collect the entire life-cycle history data.
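To make the LSTM component concrete, here is a minimal numpy sketch of a single LSTM cell step (the gate equations are the standard ones; the input size of 3 sensor features and hidden size of 4 are illustrative assumptions, not the study's actual model):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One forward step of a standard LSTM cell.
    x: input vector; h, c: previous hidden/cell state;
    W, U: input/recurrent weights stacked for the 4 gates; b: bias."""
    n = h.shape[0]
    z = W @ x + U @ h + b                 # pre-activations for all gates
    i = 1 / (1 + np.exp(-z[0:n]))         # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))     # output gate
    g = np.tanh(z[3*n:4*n])               # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Illustrative shapes: 3 sensor features in, hidden size 4.
rng = np.random.default_rng(0)
x = rng.normal(size=3)
h = np.zeros(4); c = np.zeros(4)
W = rng.normal(size=(16, 3)); U = rng.normal(size=(16, 4)); b = np.zeros(16)
h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

In the study's setting, x would hold one timestep of the collected sensor readings, and a trained stack of such cells would regress oil consumption or feed a failure classifier.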
It is urgent to prepare countermeasures for the traffic congestion problems of Korea's metropolitan area, where central functions such as the economy, society, culture, and education are excessively concentrated. Most users of public transportation in metropolitan areas, including Seoul, use traffic cards. If various kinds of information are extracted from the traffic big data produced by these cards, they can provide basic data for transport policies, land use, or facility plans. Therefore, in this study, we extract valuable information, such as subway passengers' frequent travel patterns, from the big traffic data provided by the Seoul Metropolitan Government Big Data Campus. For this, we use Hadoop (High-Availability Distributed Object-Oriented Platform) to preprocess the big data and store it in a MongoDB database, and then analyze it with a sequential pattern mining technique. Since we analyze actual big data, that is, the traffic card data provided by the Seoul Metropolitan Government Big Data Campus, the results can serve as important reference data when the Seoul government plans metropolitan traffic policies.
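As a hedged illustration of the sequential pattern mining step (a toy PrefixSpan over hypothetical station-visit sequences; the study's actual Hadoop/MongoDB pipeline is not reproduced here):

```python
def prefixspan(sequences, min_support, prefix=()):
    """Return frequent sequential patterns (as tuples) with their support,
    using prefix projection over single-item events."""
    patterns = {}
    # Count, per sequence, which items can extend the current prefix.
    counts = {}
    for seq in sequences:
        for item in set(seq):
            counts[item] = counts.get(item, 0) + 1
    for item, sup in counts.items():
        if sup < min_support:
            continue
        new_prefix = prefix + (item,)
        patterns[new_prefix] = sup
        # Project: keep each sequence's suffix after the first occurrence.
        projected = [seq[seq.index(item) + 1:] for seq in sequences
                     if item in seq]
        patterns.update(prefixspan(projected, min_support, new_prefix))
    return patterns

# Toy traffic-card trips: ordered station visits (hypothetical names).
trips = [
    ["Gangnam", "Jamsil", "Seoul Stn"],
    ["Gangnam", "Jamsil"],
    ["Jamsil", "Seoul Stn"],
]
freq = prefixspan(trips, min_support=2)
print(freq)
```

A pattern such as ("Gangnam", "Jamsil") with support 2 means two card holders traveled through those stations in that order; over millions of real trips, such patterns reveal the frequent travel flows the abstract mentions.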
This study sheds light on source data quality in big data systems. Previous studies on big data success have called for future research and further examination of quality factors and the importance of source data. This study extracted the quality factors of source data from the user's viewpoint and empirically tested the effects of source data quality on the usefulness and utilization of big data analytics results. Based on previous research and a focus group evaluation, four quality factors were established: accuracy, completeness, timeliness, and consistency. After setting up 11 hypotheses on how the quality of the source data contributes to the usefulness, utilization, and ongoing use of big data analytics results, an e-mail survey was conducted at the level of independent departments using big data in domestic firms. The results of the hypothesis tests identified the characteristics and impact of source data quality in big data systems and yielded some meaningful findings about big data characteristics.
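As a sketch of how such quality factors can be operationalized on raw records (the field names, domain rule, and cutoff date below are hypothetical examples, not the study's survey instrument):

```python
# Hypothetical source records; ISO date strings compare lexicographically.
records = [
    {"id": 1, "amount": 100.0, "updated": "2024-01-02"},
    {"id": 2, "amount": None,  "updated": "2024-01-01"},
    {"id": 3, "amount": -5.0,  "updated": "2023-06-01"},
]

def completeness(records, field):
    """Share of records with a non-null value for the field."""
    return sum(r[field] is not None for r in records) / len(records)

def accuracy(records, field, valid):
    """Share of non-null values that pass a domain rule."""
    vals = [r[field] for r in records if r[field] is not None]
    return sum(valid(v) for v in vals) / len(vals)

def timeliness(records, field, cutoff):
    """Share of records updated on or after a cutoff date."""
    return sum(r[field] >= cutoff for r in records) / len(records)

print(completeness(records, "amount"))                # 2 of 3 non-null
print(accuracy(records, "amount", lambda v: v >= 0))  # 1 of 2 values >= 0
print(timeliness(records, "updated", "2024-01-01"))   # 2 of 3 up to date
```

Consistency, the fourth factor, would typically compare the same attribute across two source systems and is omitted here for brevity.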
Recently, big data services have been used in various fields. Against this background, this research studied users' intention to provide the personal information that is necessary for useful big data services. A survey was conducted of college students and ordinary people familiar with big data services, and path analysis was performed with structural equation modeling in AMOS. The study found that privacy risk, trust in service providers, individual innovativeness, service incentives, social influence, and service design are the major variables influencing the intention to provide personal information, and that trust in service providers plays a mediating role in this influence. In addition, big data services were classified into information-acquisition types and purchase-related types, and further analysis examined whether the major variables differ in the paths affecting the intention to provide personal information, yielding new implications. Companies that actually develop and provide big data services should establish different strategies reflecting these results depending on the type of big data service provided.
Most enterprises depend on a data modeler when developing their management information systems. In formulating business requirements for information systems, interviews between a data modeler and a field worker are widely and naturally used. However, the discrepancy between the two parties inevitably causes information loss and distortion, leading to systems that are not faithful to the real business work. To improve or avoid the modeler-dependent data modeling process, many automated data design CASE tools have been introduced. However, since most traditional CASE tools merely support drawing for conceptual data design, a data modeler could not generate an ERD faithful to the real business work, and a user could not use them without database knowledge. Even the CASE tools that did support conceptual data design still required too much preliminary database knowledge from the user. In contrast to these traditional CASE tools, we propose a Requirement-Oriented Entity Relationship Model for an automated data design tool, called ROERM. Based on the Non-Stop Methodology, ROERM adopts internal systematic modules, composed of user interaction modules, schema operation rules, and sentence translation rules, to produce a complete and sound ERD that is faithful to real field work. In addition to the structural design of ROERM, we also devise detailed algorithms and perform an experiment on a case study.
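As a toy illustration of the sentence-translation idea (the regex pattern and sentence form below are hypothetical and far simpler than ROERM's actual modules):

```python
import re

def sentence_to_er(sentence):
    """Translate a simple 'A <verb> many Bs' requirement sentence into a
    toy ER fragment: two entities and one relationship.
    Only this one pattern is handled, for illustration."""
    m = re.match(r"(?:a|an|each)\s+(\w+)\s+(\w+)\s+(?:a|an|many)\s+(\w+)s\b",
                 sentence.lower())
    if not m:
        return None
    subject, verb, obj = m.groups()
    return {
        "entities": [subject.capitalize(), obj.capitalize()],
        "relationship": verb,
    }

print(sentence_to_er("A customer places many orders"))
# {'entities': ['Customer', 'Order'], 'relationship': 'places'}
```

A real sentence-translation module would need linguistic analysis and interactive disambiguation with the user, which is exactly the gap ROERM's interaction modules are described as filling.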
Trust in the data created, processed, and transferred in e-Science environments can be estimated with provenance. The information that forms provenance, which records how the data was created and reached its current state, grows as the data evolves. Tracing and verifying this massive provenance in order to trust the data is a heavy burden; how to trust the verification itself is another issue. This paper proposes a fast and exact verification of inter-domain data transfer and data origin for e-Science environments based on PKI. The verification, called two-way verification, cuts down the overhead of tracking the data along the causality presented in the Open Provenance Model by exploiting the domain specialty of e-Science environments supported by the Grid Security Infrastructure (GSI). The proposed scheme is easy to apply without extra infrastructure, scalable irrespective of the number of provenance records, transparent, and, through cryptography, secure as well as low-overhead.
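As a hedged sketch of chained provenance verification (a SHA-256 hash chain stands in for the PKI/GSI signatures the paper relies on; the record fields are illustrative):

```python
import hashlib

def record_hash(prev_hash, actor, action, data):
    """Chain hash of one provenance record (a toy stand-in for a
    PKI-signed record; real GSI verification would check certificates)."""
    h = hashlib.sha256()
    h.update(prev_hash + actor.encode() + action.encode() + data.encode())
    return h.digest()

def build_chain(events):
    """events: list of (actor, action, data); returns provenance records."""
    chain, prev = [], b"\x00" * 32
    for actor, action, data in events:
        prev = record_hash(prev, actor, action, data)
        chain.append({"actor": actor, "action": action,
                      "data": data, "hash": prev})
    return chain

def verify_chain(chain):
    """Recompute every link; any tampered record breaks the chain."""
    prev = b"\x00" * 32
    for rec in chain:
        prev = record_hash(prev, rec["actor"], rec["action"], rec["data"])
        if prev != rec["hash"]:
            return False
    return True

chain = build_chain([("domainA", "create", "raw"),
                     ("domainB", "transform", "clean")])
print(verify_chain(chain))   # True
chain[0]["data"] = "tampered"
print(verify_chain(chain))   # False
```

The paper's two-way verification avoids walking the full causality chain like this; the sketch only shows why naive full-chain verification grows with the number of provenance records.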
The purpose of this study is to examine the impact of media symbol variety on group performance in virtual teams. Symbol variety is defined as the number of ways in which information can be communicated and includes Daft and Lengel's multiplicity of cues and language variety. According to media richness theory and media synchronicity theory, the use of media with high symbol variety is assumed to facilitate and promote communication among virtual team members. Therefore, media symbol variety is expected to be positively associated with group performance in virtual teams. Furthermore, online relationship building is expected to mediate the impact of symbol variety on performance. To test these suppositions, a controlled lab experiment was conducted with 60 undergraduate students as subjects. In the experimental virtual teams, subjects were allowed to communicate with other members using a text-based messenger with emoticons; subjects in the control virtual teams could communicate using only a text-based messenger. The direct impact of symbol variety on group performance was found to be insignificant. However, online relationship building was found to completely mediate the positive impact of symbol variety on group performance. The implications and limitations of this study are also discussed for future research.
We propose a new transaction recovery scheme for a flash memory database environment based on a flash media file system. We implement recovery by reusing old data pages that would otherwise be invalidated in the course of writing a new data page in the flash file system. To reuse these data pages, we exploit a flash memory shadow paging (FMSP) scheme. FMSP removes the additional storage overhead of keeping shadow pages and minimizes the I/O performance degradation of traditional shadow paging schemes. We also propose a simulation model to show the performance of FMSP. Based on the results of the performance evaluation, we conclude that FMSP outperforms the traditional schemes.
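The page-reuse idea can be sketched as a toy model (the data structures below are illustrative assumptions, not the paper's implementation): because flash updates are out-of-place, the page invalidated by a write is already a pristine old version, so it can serve as the shadow copy without allocating anything extra.

```python
class FlashStore:
    """Toy model of flash memory shadow paging (FMSP-style reuse)."""

    def __init__(self):
        self.pages = {}    # logical page -> current content
        self.shadows = {}  # logical page -> pre-update content (reused page)

    def write(self, page, content):
        """Out-of-place update: the old version becomes the shadow for free."""
        if page in self.pages:
            self.shadows[page] = self.pages[page]  # reuse invalidated page
        self.pages[page] = content

    def abort(self):
        """Roll back uncommitted writes from the reused shadow pages."""
        for page, old in self.shadows.items():
            self.pages[page] = old
        self.shadows.clear()

    def commit(self):
        # Invalidated pages can now be reclaimed by garbage collection.
        self.shadows.clear()

store = FlashStore()
store.write("p1", "v1"); store.commit()
store.write("p1", "v2")   # shadow of p1 kept at zero extra storage cost
store.abort()
print(store.pages["p1"])  # v1
```

Classic shadow paging would have copied p1 to a separately allocated shadow page before the update; here the flash file system's own out-of-place write supplies that copy, which is the storage saving the abstract claims.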