Christine Musanase, Anthony Vodacek, Damien Hanyurwimfura, Alfred Uwitonze, Aloys Fashaho, Adrien Turamyemyirijuru. Korean Institute of Intelligent Systems, 2023. International Journal of Fuzzy Logic and Intelligent Systems, Vol.23 No.2
The ability to estimate soil quality has great value for agriculture, especially for low-income regions with minimal agricultural and financial resources. This prediction provides users with information that is useful in determining whether the soil is suitable for a specific crop, such as potato (Solanum tuberosum). Farmers in Rwanda lack information on soil quality. There are not enough soil laboratories to perform the requisite measurements of NPK, pH, and organic carbon, nor are there enough experts to analyze the data and provide farmers with timely results. The prime objective of the proposed study is to develop a predictive framework that can estimate soil quality for the ideal cultivation of potato (Solanum tuberosum), considering a case study of Rwanda. In this study, bootstrapping is used to augment the small soil dataset, and fuzzy logic is used to label soil data into four classes of soil suitability, with verification of the labeling by soil experts. Several machine learning methods are then tested on the labeled data, resulting in the classification of suitability for the augmented dataset and an assessment of their performance as a way to support experts in predicting soil quality. All machine learning methods applied were viable, with the best performance achieved using an artificial neural network. The quantified outcome showed that the adoption of a neural-network-based scheme has an average accuracy of 32% in contrast to other learning schemes. However, 70%-80% accuracy was achieved upon the adoption of fuzzy logic.
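The pipeline described above can be sketched minimally as follows. This is an illustration only: the membership functions, nutrient thresholds, and class boundaries below are hypothetical placeholders, not the paper's calibrated values, and the full NPK/pH/organic-carbon rule base is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical soil samples: columns are N, P, K, pH, organic carbon.
soil = np.array([
    [45.0, 30.0, 28.0, 5.8, 2.1],
    [60.0, 22.0, 35.0, 6.4, 1.5],
    [30.0, 18.0, 20.0, 4.9, 0.9],
])

def bootstrap(data, n_samples, rng):
    """Augment a small dataset by resampling rows with replacement."""
    idx = rng.integers(0, len(data), size=n_samples)
    return data[idx]

def triangular(x, a, b, c):
    """Triangular fuzzy membership: peaks at b, zero outside [a, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

def suitability_class(sample):
    """Map fuzzy memberships to one of four suitability labels.
    Thresholds and weights here are illustrative, not the paper's."""
    ph_ok = triangular(sample[3], 4.5, 5.5, 6.5)   # potato prefers mildly acidic soil
    n_ok = triangular(sample[0], 20.0, 50.0, 80.0)
    score = 0.5 * ph_ok + 0.5 * n_ok
    bins = [0.25, 0.5, 0.75]
    labels = ["unsuitable", "marginal", "suitable", "highly suitable"]
    return labels[int(np.digitize(score, bins))]

augmented = bootstrap(soil, 1000, rng)
labels = [suitability_class(row) for row in augmented]
print(len(augmented))  # 1000
```

The labeled `augmented` array could then be fed to any scikit-learn classifier or a small neural network, mirroring the paper's comparison of learning schemes.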
A Lexicon-based Approach for Hate Speech Detection
Njagi Dennis Gitari, Zhang Zuping, Hanyurwimfura Damien, Jun Long. Science and Engineering Research Support Center, 2015. International Journal of Multimedia and Ubiquitous Engineering, Vol.10 No.4
We explore the idea of creating a classifier that can be used to detect the presence of hate speech in web discourses such as web forums and blogs. In this work, the hate speech problem is abstracted into three main thematic areas: race, nationality, and religion. The goal of our research is to create a model classifier that uses sentiment analysis techniques, in particular subjectivity detection, not only to detect that a given sentence is subjective but also to identify and rate the polarity of sentiment expressions. We begin by whittling down the document size by removing objective sentences. Then, using subjectivity and semantic features related to hate speech, we create a lexicon that is employed to build a classifier for hate speech detection. Experiments with a hate corpus show significant practical application to real-world web discourse.
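A toy sketch of the lexicon-based pipeline above: filter out objective sentences, then score the remainder against a polarity lexicon. The cue words, lexicon entries, and threshold here are hypothetical stand-ins, not the lexicons the authors built.

```python
import re

# Illustrative word lists; the paper derives its lexicon from subjectivity
# and semantic features related to hate speech.
SUBJECTIVITY_CUES = {"hate", "awful", "love", "terrible", "great", "disgusting"}
HATE_LEXICON = {"hate": 3, "disgusting": 2, "vermin": 3, "terrible": 1}

def is_subjective(sentence, cues=SUBJECTIVITY_CUES):
    """Crude subjectivity test: keep a sentence if it contains any cue word."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    return any(t in cues for t in tokens)

def hate_score(document, threshold=2):
    """Drop objective sentences, then rate the remainder against the lexicon."""
    sentences = re.split(r"[.!?]+", document)
    subjective = [s for s in sentences if is_subjective(s)]
    score = sum(HATE_LEXICON.get(t, 0)
                for s in subjective
                for t in re.findall(r"[a-z']+", s.lower()))
    return score, score >= threshold

doc = "The meeting is at noon. I hate those people, they are disgusting."
score, flagged = hate_score(doc)
print(score, flagged)  # 5 True
```

Note how the first, objective sentence is discarded before scoring, which is the "whittling down" step the abstract describes.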
Efficient Document Similarity Detection Using Weighted Phrase Indexing
Papias Niyigena, Zhang Zuping, Mansoor Ahmed Khuhro, Damien Hanyurwimfura. Science and Engineering Research Support Center, 2016. International Journal of Multimedia and Ubiquitous Engineering, Vol.11 No.5
Document similarity techniques mostly rely on single-term analysis of the documents in the data set. To improve the efficiency and effectiveness of document similarity detection, more informative feature terms have been developed and presented by many researchers. In this paper, we present a phrase weight index, which indexes documents in the data set based on important phrases. Phrasal indexing aims to reduce the ambiguity inherent in words considered in isolation, and thereby improve the effectiveness of document similarity computation. The method presented in this paper inherits the tf-idf weighting scheme for computing important phrases in the collection. It computes the weight of phrases in the document collection and, according to a given threshold, identifies and indexes the important phrases. The data dimensionality that hinders the performance of document similarity for different methods is addressed by offline creation of an index of important phrases for every document. The evaluation experiments indicate that the presented method is very effective at document similarity detection, and its quality surpasses both the traditional phrase-based approach, in which the reduction of dimensionality is ignored, and other methods that use single-word tf-idf.
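The scheme above can be sketched as follows, under two simplifying assumptions that are ours rather than the paper's: a "phrase" is taken to be a word bigram, and the importance threshold is an arbitrary illustrative value.

```python
import math
from collections import Counter

docs = {
    "d1": "machine learning improves document similarity detection",
    "d2": "document similarity detection with phrase indexing",
    "d3": "weather patterns in east africa",
}

def bigrams(text):
    """Treat consecutive word pairs as candidate phrases (an assumption here)."""
    toks = text.lower().split()
    return [" ".join(toks[i:i + 2]) for i in range(len(toks) - 1)]

# Phrase tf-idf; the paper's exact weighting formula may differ.
tf = {d: Counter(bigrams(t)) for d, t in docs.items()}
df = Counter(p for counts in tf.values() for p in counts)
N = len(docs)

def tfidf(d):
    return {p: f * math.log(N / df[p]) for p, f in tf[d].items()}

THRESHOLD = 0.1  # illustrative cutoff for "important" phrases
index = {}       # phrase -> {doc: weight}, built offline, keeping only important phrases
for d in docs:
    for p, w in tfidf(d).items():
        if w > THRESHOLD:
            index.setdefault(p, {})[d] = w

def similarity(a, b):
    """Cosine similarity computed over indexed (important) phrases only."""
    wa = {p: posting[a] for p, posting in index.items() if a in posting}
    wb = {p: posting[b] for p, posting in index.items() if b in posting}
    dot = sum(wa[p] * wb.get(p, 0.0) for p in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

print(similarity("d1", "d2") > similarity("d1", "d3"))  # True
```

Because the index is built once offline and prunes low-weight phrases, each pairwise comparison touches far fewer dimensions than raw single-word tf-idf, which is the dimensionality-reduction point the abstract makes.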
Block-Based Scheme for Database Integrity Verification
Lancine Camara, Junyi Li, Renfa Li, Faustin Kagorora, Damien Hanyurwimfura. Science and Engineering Research Support Center, 2014. International Journal of Security and Its Applications, Vol.8 No.6
Databases play an important role in every modern organization today, so verifying their integrity is needed. Watermarking can be used to protect the integrity of a database. In this paper, we present a secure fragile watermark embedding technique to verify the authenticity of an outsourced numeric relational database. Our technique treats watermark embedding as an optimization problem, securely inserting a single watermark bit into each database partition, and the optimal threshold is computed for watermark detection. The approach partitions the database into groups forming square matrices and modifies the database while preserving the usability constraints on field values. The determinant of each database group is used to compute the position of the field to be marked. Furthermore, we evaluated our scheme on a real case study, and the results show its effectiveness. The proposed scheme can detect and localize malicious modifications made to the database. The proposed technique is highly resilient to common attacks and overcomes some limitations of previous approaches to fragile watermarking.
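A toy sketch of the group-determinant idea: partition rows into square matrices, derive a keyed bit per group, and let the group determinant pick which field carries the mark in its least-significant bit. The LSB embedding and the canonicalization trick (clearing LSBs before deriving the bit and position, so derivation is stable across embedding) are our simplifications, not the paper's exact optimization-based scheme.

```python
import hashlib
import numpy as np

rng = np.random.default_rng(1)
table = rng.integers(100, 1000, size=(8, 4)).astype(float)  # toy numeric relation

KEY = b"secret-key"  # hypothetical watermarking key

def partition(table, k=4):
    """Split rows into groups that form k x k square matrices."""
    return [table[i:i + k, :k] for i in range(0, len(table), k)]

def canonical(group):
    """Group with every field's integer LSB cleared, so bit and position
    derivation give the same answer before and after embedding."""
    return np.floor(group / 2) * 2

def watermark_bit(group, key=KEY):
    """Keyed watermark bit derived from a hash of the canonical group."""
    digest = hashlib.sha256(key + canonical(group).tobytes()).digest()
    return digest[0] & 1

def field_position(group):
    """Use the group determinant to choose which field carries the mark."""
    d = int(abs(round(np.linalg.det(canonical(group)))))
    return d % group.shape[0], d % group.shape[1]

def embed(group):
    g = group.copy()
    r, c = field_position(g)
    g[r, c] = np.floor(g[r, c] / 2) * 2 + watermark_bit(g)  # set LSB to the mark
    return g

def verify(group):
    """Recompute the keyed bit and check it against the marked field's LSB."""
    r, c = field_position(group)
    return int(group[r, c]) % 2 == watermark_bit(group)

marked = [embed(g) for g in partition(table)]
print(all(verify(g) for g in marked))  # True
```

Any tamper that changes a field beyond its LSB changes the canonical group, so the recomputed keyed bit and determinant-selected position no longer agree with the stored mark, localizing the modification to that group.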