Bijen Khagi, Goo-Rak Kwon, Hindawi, 2018, Journal of Healthcare Engineering, Vol.2018 No.-
<P>The proposed approach uses deep neural networks to segment an MRI image of heterogeneously distributed pixels into specific classes, assigning a label to each pixel. Segmentation is applied to a preprocessed MRI image, and the trained network can then be reused on other test images. Because labels are expensive assets in supervised training, few training images and training labels are used to obtain optimal accuracy. To validate the performance of the proposed approach, an experiment is conducted on test images from the same database that are not part of the training set; the result is of good visual quality in terms of segmentation and close to the ground-truth image. The average Dice similarity index computed on the test images is approximately 0.8, and the Jaccard similarity measure is approximately 0.6, which compares favorably with other methods. This implies that the proposed method can produce reference images that closely match the segmented ground-truth images.</P>
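The Dice and Jaccard indices reported above are standard overlap measures between a predicted segmentation mask and the ground truth. A minimal NumPy sketch (the function names are illustrative, not from the paper):

```python
import numpy as np

def dice_index(pred, truth):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0

def jaccard_index(pred, truth):
    """Jaccard similarity: |A∩B| / |A∪B| for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0
```

Note that Dice is always at least as large as Jaccard for the same pair of masks (Dice = 2J/(1+J)), consistent with the reported 0.8 versus 0.6.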
Bijen Khagi, Goo-Rak Kwon, Institute of Electronics and Information Engineers (IEIE), 2019, IEIE Transactions on Smart Processing & Computing, Vol.8 No.4
In this paper, we present the performance of a medical image classification model pretrained on natural images. For comparison, another model is trained from scratch on the available medical magnetic resonance images. We perform shallow tuning and fine-tuning of the pretrained models (AlexNet, GoogLeNet, and ResNet50) over groups of layers in order to find the impact of each section of layers on the classification result. We use 28 normal controls (NC) and 28 Alzheimer’s disease (AD) patients for classification, selecting 30 important slices from each patient. Once all the slices were collected, each model was trained, validated, and tested at a ratio of 6:2:2 on a random selection basis. The testing results are reported and analyzed so that the final CNN model can be built with a minimal number of layers for optimal performance.
Bijen Khagi, Goo-Rak Kwon, Korea Multimedia Society, 2022, The Journal of Multimedia Information System, Vol.9 No.3
A deep neural network (DNN) contains variables whose values keep changing during training until the network reaches convergence. These variables are the coefficients of a polynomial expression relating to the feature extraction process. In general, DNNs operate over multiple ‘dimensions’ depending on the number of channels and batches used for training. However, after feature extraction and before the SoftMax or another classifier, the features are converted from N dimensions into a single vector, where ‘N’ is the number of activation channels. This usually happens in a fully connected layer (FCL), or dense layer. This reduced feature representation is the subject of our analysis. For this, we use the FCL, so the trained weights of this FCL are used for the weight-class correlation analysis. The DNN models selected for our study are ResNet-101, VGG-19, and GoogLeNet. These models are either fine-tuned (with all trained weights initially transferred) or trained from scratch (with no weights transferred). The comparison is then made by plotting the feature distribution against the final FCL weights.
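The flattening step and the weight-class correlation described above can be sketched in NumPy. This is a toy illustration under assumed shapes (8 channels of 4x4 activations, 3 classes), not the paper's actual models or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activation map for one sample: 8 channels of 4x4 spatial features.
features = rng.standard_normal((8, 4, 4))

# Flatten the multi-channel features into a single vector, as done
# before the fully connected layer (FCL).
flat = features.reshape(-1)              # shape (128,)

# A toy FCL mapping 128 features to 3 classes: one weight row per class.
W = rng.standard_normal((3, flat.size))
b = np.zeros(3)
logits = W @ flat + b                    # shape (3,)

# Weight-class correlation: Pearson correlation between each class's
# weight row and the feature vector.
corr = np.array([np.corrcoef(W[c], flat)[0, 1] for c in range(3)])
```

Plotting `flat` against each row of `W` (or the `corr` values per class) reproduces the kind of feature-distribution versus FCL-weight comparison the paper describes.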