Selective bit embedding scheme for robust blind color image watermarking
Huynh-The, Thien; Hua, Cam-Hao; Tu, Nguyen Anh; Hur, Taeho; Bang, Jaehun; Kim, Dohyeong; Amin, Muhammad Bilal; Kang, Byeong Ho; Seung, Hyonwoo; Lee, Sungyoung. Elsevier, 2018. Information Sciences, Vol.426, No.-
<P>In this paper, we propose a novel robust blind color image watermarking method, named SMLE, that embeds a gray-scale image as a watermark into a host color image in the wavelet domain. After decomposing the gray-scale watermark into component binary images, ordered from least significant bit (LSB) to most significant bit (MSB), the retrieved binary bits are embedded into wavelet blocks of two optimal color channels using an efficient quantization technique, in which the wavelet coefficient difference in each block is quantized toward one of two pre-defined thresholds corresponding to 0-bits and 1-bits. To optimize watermark imperceptibility, we split the coefficient modification equally across two middle-frequency sub-bands instead of only one, as in existing approaches. This improved embedding rule increases watermarked image quality by approximately 3 dB. An adequate trade-off between robustness and imperceptibility is controlled by a factor representing the embedding strength. For the extraction process, we exploit the 2D Otsu algorithm, which detects the watermark more accurately than 1D Otsu. Experimental results demonstrate the robustness of our SMLE watermarking model against common image processing operations, along with its efficient retention of watermark imperceptibility in the host image. Compared to state-of-the-art methods, our approach performs better in most robustness tests at the same high payload capacity. (C) 2017 Elsevier Inc. All rights reserved.</P>
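The two core steps of the abstract above, bit-plane decomposition of the watermark and threshold quantization of a coefficient difference, can be sketched as follows. This is a minimal illustration, not the paper's full SMLE scheme: the function names and the toy threshold values are assumptions, and the wavelet transform and channel selection are omitted.

```python
import numpy as np

def bit_planes(watermark):
    """Split an 8-bit gray-scale image into 8 binary planes, LSB first."""
    return [(watermark >> k) & 1 for k in range(8)]

def embed_bit(diff, bit, t0=-8.0, t1=8.0):
    """Quantize a coefficient difference to the threshold encoding the bit."""
    return t1 if bit == 1 else t0

def extract_bit(diff, t0=-8.0, t1=8.0):
    """Recover the bit by deciding which threshold the difference is nearer."""
    return 1 if abs(diff - t1) < abs(diff - t0) else 0

wm = np.array([[200, 5], [17, 130]], dtype=np.uint8)
planes = bit_planes(wm)
assert planes[0][0, 1] == 1          # LSB of 5 is 1
assert extract_bit(embed_bit(0.3, 1)) == 1
```

In the actual method the quantized difference is written back into the mid-frequency wavelet sub-bands of the host image; here the difference is only a scalar stand-in.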
NIC: A Robust Background Extraction Algorithm for Foreground Detection in Dynamic Scenes
Huynh-The, Thien; Banos, Oresti; Lee, Sungyoung; Kang, Byeong Ho; Kim, Eun-Soo; Le-Tien, Thuong. Institute of Electrical and Electronics Engineers, 2017. IEEE Transactions on Circuits and Systems for Video Technology, Vol. No.
<P>This paper presents a robust foreground detection method capable of adapting to different motion speeds in scenes. A key contribution of this paper is background estimation using a proposed novel algorithm, neighbor-based intensity correction (NIC), which identifies and modifies motion pixels from the difference between the background and the current frame. Concretely, the first frame is taken as an initial background that is updated with pixel intensities from each new frame based on an examination of neighboring pixels. These pixels form windows generated from the background and the current frame, which are used to identify whether a pixel belongs to the background or the current frame. The intensity modification procedure is based on comparing the standard deviation values calculated from the two pixel windows. The robustness of the current background is further measured using pixel steadiness as an additional condition for the updating process. Finally, the foreground is detected by a background subtraction scheme with an optimal threshold calculated by the Otsu method. The method is benchmarked on several well-known data sets in the object detection and tracking domain, such as CAVIAR 2004, AVSS 2007, PETS 2009, PETS 2014, and CDNET 2014. We also compare the accuracy of the proposed method with other state-of-the-art methods via standard quantitative metrics under different parameter configurations. In the experiments, the NIC approach outperforms several advanced methods in suppressing foreground detection errors caused by light artifacts, illumination changes, and camera jitter in dynamic scenes.</P>
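The window-comparison update described above can be sketched in a few lines. This is an illustrative simplification, assuming a 3x3 neighborhood and a keep-the-steadier-window rule; the paper's actual NIC algorithm adds a pixel-steadiness condition and an Otsu-thresholded subtraction step that are omitted here.

```python
import numpy as np

def nic_update(background, frame, win=1):
    """Update a background estimate pixel-by-pixel: adopt the current-frame
    pixel where its neighborhood varies less than the background's."""
    bg = background.astype(np.float64)
    fr = frame.astype(np.float64)
    out = bg.copy()
    h, w = bg.shape
    for y in range(win, h - win):
        for x in range(win, w - win):
            wb = bg[y - win:y + win + 1, x - win:x + win + 1]
            wf = fr[y - win:y + win + 1, x - win:x + win + 1]
            # compare standard deviations of the two pixel windows
            if wf.std() < wb.std():
                out[y, x] = fr[y, x]
    return out

bg = np.zeros((5, 5))
frame = np.zeros((5, 5))
frame[2, 2] = 50          # a moving-object pixel disturbs its neighborhood
updated = nic_update(bg, frame)
assert updated.shape == (5, 5)
```

Because the moving pixel raises the variance of its frame window, the rule leaves the stable background value in place rather than absorbing the object into the background.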
Huynh-The, Thien; Hua, Cam-Hao; Anh Tu, Nguyen; Hur, Taeho; Bang, Jaehun; Kim, Dohyeong; Amin, Muhammad Bilal; Kang, Byeong Ho; Seung, Hyonwoo; Shin, Soo-Yong; Kim, Eun-Soo; Lee, Sungyoung. Elsevier Science, 2018. Information Sciences, Vol.444, No.-
<P><B>Abstract</B></P> <P>Despite impressive achievements in image processing and artificial intelligence in the past decade, understanding video-based action remains a challenge. However, the intensive development of 3D computer vision in recent years has brought more potential research opportunities in pose-based action detection and recognition. Thanks to the advantages of depth camera devices like the Microsoft Kinect sensor, we developed an effective approach to in-depth analysis of indoor actions using skeleton information, in which skeleton-based feature extraction and topic model-based learning are the two major contributions. Geometric features, i.e., joint distance, joint angle, and joint-plane distance, are calculated in the spatio-temporal dimension. These features are merged into two types, called pose and transition features, and then provided to codebook construction, which converts sparse features into visual words by <I>k</I>-means clustering. An efficient hierarchical model is developed to describe the full feature-poselet-action correlation based on the Pachinko Allocation Model. This model has the potential to uncover more hidden poselets, which have been recognized as valuable information that helps to differentiate pose-sharing actions. The experimental results on several well-known datasets, such as MSR Action 3D, MSR Daily Activity 3D, Florence 3D Action, UTKinect-Action 3D, and NTU RGB+D Action Recognition, demonstrate the high recognition accuracy of the proposed method. Our method outperforms state-of-the-art methods in the field on most dataset benchmarks.</P> <P><B>Highlights</B></P> <P> <UL> <LI> 3D action recognition approach using a topic modeling technique. </LI> <LI> Pose and transition features for object posture and movement representation. </LI> <LI> A flexible hierarchical topic model to learn the feature-poselet-action correlation. </LI> <LI> Method sensitivity evaluation on five well-known 3D action recognition datasets. </LI> <LI> Accuracy improvement over existing methods that use only 3D skeleton data. </LI> </UL> </P>
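The three geometric features named in the abstract can be computed directly from 3D joint coordinates. The sketch below is a minimal illustration with a made-up toy skeleton; the paper additionally aggregates these quantities over time into pose and transition features, which is not shown.

```python
import numpy as np

def joint_distance(a, b):
    """Euclidean distance between two 3-D joints."""
    return np.linalg.norm(a - b)

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by segments b->a and b->c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def joint_plane_distance(p, a, b, c):
    """Distance from joint p to the plane spanned by joints a, b, c."""
    n = np.cross(b - a, c - a)
    return abs(np.dot(p - a, n)) / np.linalg.norm(n)

a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([1.0, 1.0, 0.0])
p = np.array([0.0, 0.0, 2.0])
assert joint_distance(a, b) == 1.0
assert np.isclose(joint_angle(a, b, c), np.pi / 2)
assert np.isclose(joint_plane_distance(p, a, b, c), 2.0)
```

Vectors of such features per frame are what the <I>k</I>-means codebook step quantizes into visual words.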
An Accurate ConvNet-Empowered Modulation Classification for OFDM Systems
Thien Huynh-The; Toan-Van Nguyen; Quoc-Viet Pham; Dong-Seong Kim (김동성). Korea Institute of Communications and Information Sciences (KICS), 2021. Proceedings of the KICS Conference, Vol.2021, No.2
In this paper, we propose a deep learning (DL)-based method to automatically classify the modulation of orthogonal frequency-division multiplexing (OFDM) signals in wireless communication systems. To this end, an efficient convolutional neural network is developed with a novel densely residual structure that incorporates skip connections and dense connections for convolutional block-wise association. Besides preventing vanishing gradients, this structure has the ability to selectively learn high-level radio features, i.e., the component correlation within a sample and the relation of multiple local samples in time. For performance evaluation, we create a synthetic dataset of OFDM signals under the channel impairments of multipath Rician fading and additive Gaussian noise. In the experiments, the proposed network achieves superior classification accuracy against several other DL models while maintaining low computational cost.
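The skip-plus-dense wiring described above can be illustrated with plain matrix layers standing in for convolutional blocks. This is a toy sketch of the connectivity pattern only; the shapes, the ReLU block, and the final concatenation rule are assumptions for illustration, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

def block(x, w):
    """Stand-in for one convolutional block: linear map + ReLU."""
    return np.maximum(w @ x, 0.0)

def densely_residual(x, weights):
    outputs = [x]
    for w in weights:
        h = block(outputs[-1], w)
        h = h + outputs[-1]          # skip connection: add the block's input
        outputs.append(h)
    # dense connection: expose every earlier block's output downstream
    return np.concatenate(outputs)

x = rng.standard_normal(4)
weights = [rng.standard_normal((4, 4)) for _ in range(3)]
features = densely_residual(x, weights)
assert features.shape == (16,)  # 4 inputs + 3 blocks of 4 features each
```

The skip path keeps gradients flowing through the identity term, while the dense path lets a classifier head see features at every depth.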
ML-HDP: A Hierarchical Bayesian Nonparametric Model for Recognizing Human Actions in Video
Tu, Nguyen Anh; Huynh-The, Thien; Khan, Kifayat Ullah; Lee, Young-Koo. Institute of Electrical and Electronics Engineers, 2019. IEEE Transactions on Circuits and Systems for Video Technology, Vol.29, No.3
<P>Action recognition from videos is an important area of computer vision research due to its various applications, ranging from visual surveillance to human–computer interaction. To address action recognition problems, this paper presents a framework that jointly models multiple complex actions and motion units at different hierarchical levels. We achieve this by proposing a generative topic model, namely, the multi-label hierarchical Dirichlet process (ML-HDP). The ML-HDP model formulates the co-occurrence relationship of actions and motion units, and enables highly accurate recognition. In particular, our topic model possesses a three-level representation of action understanding, in which low-level local features are connected to high-level actions via mid-level atomic actions. This allows the recognition model to work discriminatively. In our ML-HDP, atomic actions are treated as latent topics and automatically discovered from data. In addition, we incorporate the notion of class labels into our model in a semi-supervised fashion to effectively learn from and infer multi-labeled videos. Using discovered topics and inferred labels, which are jointly assigned to local features, we present straightforward methods to perform three recognition tasks: action classification, joint classification and segmentation of continuous actions, and spatiotemporal action localization. In experiments, we explore the use of three different features and demonstrate the effectiveness of our proposed approach for these tasks on four public datasets: KTH, MSR-II, Hollywood2, and UCF101.</P>
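The three-level representation described above (actions over atomic actions over local features) can be illustrated with a toy generative sampler. All the distributions below are made-up numbers for illustration; ML-HDP's mixtures are nonparametric and inferred from data, not fixed like this.

```python
import numpy as np

rng = np.random.default_rng(1)

# action -> atomic-action mixture (2 actions, 3 atomic actions)
action_topics = np.array([[0.7, 0.2, 0.1],
                          [0.1, 0.3, 0.6]])
# atomic action -> local-feature distribution (3 topics, 4 feature words)
topic_words = np.array([[0.6, 0.2, 0.1, 0.1],
                        [0.1, 0.6, 0.2, 0.1],
                        [0.1, 0.1, 0.2, 0.6]])

def generate_video(action, n_features=5):
    """Sample local-feature words for a video of the given action class:
    draw an atomic action per feature, then a word from that atomic action."""
    topics = rng.choice(3, size=n_features, p=action_topics[action])
    return [rng.choice(4, p=topic_words[t]) for t in topics]

words = generate_video(action=0)
assert len(words) == 5 and all(0 <= w < 4 for w in words)
```

Inference in the actual model runs this story in reverse: given the observed feature words, it recovers the latent atomic-action assignments and, semi-supervised, the action labels.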