RISS Academic Research Information Service

      • Semisupervised Tripled Dictionary Learning for Standard-Dose PET Image Prediction Using Low-Dose PET and Multimodal MRI

        Wang, Yan; Shen, Dinggang; Ma, Guangkai; An, Le; Shi, Feng; Zhang, Pei; Lalush, David S.; Wu, Xi; Pu, Yifei; Zhou, Jiliu. IEEE, 2017. IEEE Transactions on Biomedical Engineering, Vol. 64, No. 3.

        Objective: To obtain a high-quality positron emission tomography (PET) image with low-dose tracer injection, this study attempts to predict the standard-dose PET (S-PET) image from both its low-dose PET (L-PET) counterpart and the corresponding magnetic resonance imaging (MRI). Methods: This was achieved by patch-based sparse representation (SR), using training samples with a complete set of MRI, L-PET and S-PET modalities for dictionary construction. However, the number of training samples with complete modalities is often limited. In practice, many samples have incomplete modalities (i.e., with one or two missing modalities) and thus cannot be used in the prediction process. In light of this, we develop a semisupervised tripled dictionary learning (SSTDL) method for S-PET image prediction, which can utilize not only the samples with complete modalities (called complete samples) but also the samples with incomplete modalities (called incomplete samples), to take advantage of the large number of available training samples and thus further improve the prediction performance. Results: Validation was done on a real human brain dataset consisting of 18 subjects, and the results show that our method is superior to the SR and other baseline methods. Conclusion: This paper proposed a new S-PET prediction method, which can significantly improve PET image quality with low-dose injection. Significance: The proposed method is favorable for clinical application since it can decrease the potential radiation risk for patients.
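As a rough illustration of the patch-based sparse-representation idea in this abstract (not the SSTDL method itself), the sketch below codes a low-dose patch over a paired dictionary and reuses the coefficients to synthesize the standard-dose patch. The OMP solver choice, the function names, and the toy dictionaries are our assumptions:

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse coding of x over
    dictionary D (columns are atoms)."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit coefficients on the selected atoms by least squares.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        coef[:] = 0
        coef[support] = sol
        residual = x - D @ coef
    return coef

def predict_spet_patch(lpet_patch, D_lpet, D_spet, n_nonzero=3):
    """Paired-dictionary prediction sketch: sparse-code the low-dose
    patch over the L-PET dictionary, then reuse the coefficients with
    the paired S-PET dictionary to synthesize the patch."""
    alpha = omp(D_lpet, lpet_patch, n_nonzero)
    return D_spet @ alpha
```

The key assumption shown is that paired dictionaries share sparse codes, which is the core of patch-based SR synthesis.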

      • Composite large margin classifiers with latent subclasses for heterogeneous biomedical data

        Chen, Guanhua; Liu, Yufeng; Shen, Dinggang; Kosorok, Michael R. Wiley Subscription Services, Inc. (A Wiley Company), 2016. Statistical Analysis and Data Mining, Vol. 9, No. 2.

        High-dimensional classification problems are prevalent in a wide range of modern scientific applications. Despite the large number of candidate classification techniques available, practitioners often face a dilemma in choosing between linear and general nonlinear classifiers. Specifically, simple linear classifiers have good interpretability, but may have limitations in handling data with complex structures. In contrast, general nonlinear classifiers are more flexible, but may lose interpretability and have a higher tendency to overfit. In this paper, we consider data with potential latent subgroups in the classes of interest. We propose a new method, namely the composite large margin (CLM) classifier, to address the issue of classification with latent subclasses. The CLM aims to find three linear functions simultaneously: one linear function to split the data into two parts, with each part being classified by a different linear classifier. Our method has comparable prediction accuracy to a general nonlinear classifier, and it maintains the interpretability of traditional linear classifiers. We demonstrate the competitive performance of the CLM through comparisons with several existing linear and nonlinear classifiers in Monte Carlo experiments. Analysis of the Alzheimer's disease classification problem using the CLM not only provides a lower classification error in discriminating cases and controls, but also identifies subclasses in controls that are more likely to develop the disease in the future.
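The piecewise-linear decision rule this abstract describes (one gating hyperplane routing each sample to one of two linear classifiers) can be sketched as follows. The weights are assumed to be already trained; the joint large-margin training, which is the paper's contribution, is not shown:

```python
import numpy as np

def clm_predict(X, w_gate, w_left, w_right):
    """Composite large margin prediction sketch: a gating hyperplane
    w_gate routes each sample to one of two linear classifiers, so the
    overall decision stays piecewise linear and interpretable."""
    gate = X @ w_gate >= 0            # which side of the split?
    score = np.where(gate, X @ w_right, X @ w_left)
    return np.sign(score)             # class label in {-1, +1}
```

Because each region's classifier is linear, its coefficients can still be inspected the way a single linear model's would be.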

      • Identifying informative imaging biomarkers via tree structured sparse learning for AD diagnosis.

        Liu, Manhua; Zhang, Daoqiang; Shen, Dinggang. Humana Press, Inc., 2014. Neuroinformatics, Vol. 12, No. 3.

        Neuroimaging provides a powerful tool to characterize neurodegenerative progression and therapeutic efficacy in Alzheimer's disease (AD) and its prodromal stage, mild cognitive impairment (MCI). However, since the disease pathology may cause different patterns of structural degeneration, which are not known in advance, it remains a challenging problem to identify the relevant imaging markers for facilitating disease interpretation and classification. Recently, sparse learning methods have been investigated in neuroimaging studies for selecting relevant imaging biomarkers and have achieved very promising results on disease classification. However, standard sparse learning methods often ignore the spatial structure, although it is important for identifying informative biomarkers. In this paper, a sparse learning method with tree-structured regularization is proposed to capture patterns of pathological degeneration from fine to coarse scale, helping identify the informative imaging biomarkers to guide disease classification and interpretation. Specifically, we first develop a new tree construction method based on hierarchical agglomerative clustering of voxel-wise imaging features in the whole brain, taking into account their spatial adjacency, feature similarity, and discriminability. In this way, the complexity of all possible multi-scale spatial configurations of imaging features is reduced to a single tree of nested regions. Second, we impose the tree-structured regularization on the sparse learning to capture the imaging structures, and then use them for selecting the most relevant biomarkers. Finally, we train a support vector machine (SVM) classifier with the selected features to perform the classification. We have evaluated the proposed method using the baseline MR images of 830 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, which includes 198 AD patients, 167 progressive MCI (pMCI), 236 stable MCI (sMCI), and 229 normal controls (NC). Our experimental results show that our method can achieve accuracies of 90.2%, 87.2%, and 70.7% for classifications of AD vs. NC, pMCI vs. NC, and pMCI vs. sMCI, respectively, demonstrating promising performance compared with other state-of-the-art methods.
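A heavily simplified stand-in for the group-structured selection step might look like the sketch below; note it replaces the paper's clustering tree with fixed-size contiguous feature blocks and its regularized learning with a class-mean separation score, so it only illustrates the "select spatially grouped features together" idea:

```python
import numpy as np

def select_groups(X, y, group_size=2, n_keep=1):
    """Toy group-structured feature selection: spatially adjacent
    features are grouped into contiguous blocks (a crude stand-in for
    tree nodes over adjacent voxels), each group is scored by the
    separation of class means, and only top-scoring groups are kept."""
    n_features = X.shape[1]
    groups = [list(range(i, min(i + group_size, n_features)))
              for i in range(0, n_features, group_size)]
    pos, neg = X[y == 1], X[y == -1]
    # Score each group by how far apart the class means are within it.
    scores = [np.linalg.norm(pos[:, g].mean(0) - neg[:, g].mean(0))
              for g in groups]
    keep = sorted(np.argsort(scores)[::-1][:n_keep])
    return [groups[i] for i in keep]
```

The selected groups would then feed a downstream classifier (an SVM in the paper).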

      • Detecting Anatomical Landmarks From Limited Medical Imaging Data Using Two-Stage Task-Oriented Deep Neural Networks

        Jun Zhang; Mingxia Liu; Dinggang Shen. IEEE, 2017. IEEE Transactions on Image Processing, Vol. 26, No. 10.

        One of the major challenges in anatomical landmark detection based on deep neural networks is the limited availability of medical imaging data for network learning. To address this problem, we present a two-stage task-oriented deep learning method to detect large-scale anatomical landmarks simultaneously in real time, using limited training data. Our method consists of two deep convolutional neural networks (CNNs), each focusing on one specific task. To alleviate the problem of limited training data, in the first stage we propose a CNN-based regression model using millions of image patches as input, aiming to learn inherent associations between local image patches and target anatomical landmarks. To further model the correlations among image patches, in the second stage we develop another CNN model, which includes (a) a fully convolutional network that shares the same architecture and network weights as the CNN used in the first stage and (b) several extra layers to jointly predict the coordinates of multiple anatomical landmarks. Importantly, our method can jointly detect large-scale (e.g., thousands of) landmarks in real time. We have conducted various experiments for detecting 1200 brain landmarks from the 3D T1-weighted magnetic resonance images of 700 subjects, and also 7 prostate landmarks from the 3D computed tomography images of 73 subjects. The experimental results show the effectiveness of our method regarding both accuracy and efficiency in anatomical landmark detection.

      • Relationship Induced Multi-Template Learning for Diagnosis of Alzheimer’s Disease and Mild Cognitive Impairment

        Liu, Mingxia; Zhang, Daoqiang; Shen, Dinggang. Institute of Electrical and Electronics Engineers, 2016. IEEE Transactions on Medical Imaging, Vol. 35, No. 6.

        As shown in the literature, methods based on multiple templates usually achieve better performance, compared with those using only a single template for processing medical images. However, most existing multi-template based methods simply average or concatenate multiple sets of features extracted from different templates, which potentially ignores important structural information contained in the multi-template data. Accordingly, in this paper, we propose a novel relationship induced multi-template learning method for automatic diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI), by explicitly modeling structural information in the multi-template data. Specifically, we first nonlinearly register each brain's magnetic resonance (MR) image separately onto multiple pre-selected templates, and then extract multiple sets of features for this MR image. Next, we develop a novel feature selection algorithm by introducing two regularization terms to model the relationships among templates and among individual subjects. Using these selected features corresponding to multiple templates, we then construct multiple support vector machine (SVM) classifiers. Finally, an ensemble classification is used to combine outputs of all SVM classifiers, for achieving the final result. We evaluate our proposed method on 459 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, including 97 AD patients, 128 normal controls (NC), 117 progressive MCI (pMCI) patients, and 117 stable MCI (sMCI) patients. The experimental results demonstrate promising classification performance, compared with several state-of-the-art methods for multi-template based AD/MCI classification.
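The final ensemble step, combining the outputs of the per-template classifiers, can be sketched with a simple majority vote. Here plain linear classifiers stand in for the trained SVMs, and the weights are assumed given:

```python
import numpy as np

def ensemble_vote(X, weight_list):
    """Majority vote across per-template linear classifiers: each
    template contributes its own decision, and the signs of those
    decisions are summed (an odd number of classifiers avoids ties)."""
    votes = np.sign(np.stack([X @ w for w in weight_list]))
    return np.sign(votes.sum(axis=0))
```

In the paper each vote comes from an SVM trained on features selected for one template; only the combination rule is illustrated here.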

      • Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching

        Yanrong Guo; Yaozong Gao; Dinggang Shen. Institute of Electrical and Electronics Engineers, 2016. IEEE Transactions on Medical Imaging, Vol. 35, No. 4.

        Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) the large shape variation across different patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method by unifying deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn the latent feature representation from prostate MR images by the stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation is used to integrate a sparse shape model with the prostate likelihood map for achieving the final segmentation. The proposed method has been extensively evaluated on a dataset that contains 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation. Moreover, our method shows superior performance compared with other state-of-the-art segmentation methods.

      • Incremental Learning With Selective Memory (ILSM): Towards Fast Prostate Localization for Image Guided Radiotherapy

        Yaozong Gao; Yiqiang Zhan; Dinggang Shen. IEEE, 2014. IEEE Transactions on Medical Imaging, Vol. 33, No. 2.

        Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment-guided radiotherapy, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to “personalize” the model to fit patient-specific appearance characteristics. The model is personalized with two steps: backward pruning that discards obsolete population-based knowledge and forward learning that incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively, compared to traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not have any parametric model assumption, hence allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ~0.89) and fast (~4 s), which satisfies the real-world clinical requirements of IGRT.
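A toy analogue of the two ILSM steps, applied to a raw sample memory rather than the paper's discriminative appearance model, could look like this. The pruning rule used here (agreement with the label of the nearest patient-specific sample) is our simplification, not the paper's criterion:

```python
import numpy as np

def ilsm_memory(pop_X, pop_y, pat_X, pat_y):
    """Sketch of the two ILSM steps on a sample memory:
    backward pruning discards population samples whose label conflicts
    with the nearest patient-specific sample (obsolete knowledge), and
    forward learning appends the patient-specific samples."""
    keep = []
    for x, label in zip(pop_X, pop_y):
        nearest = int(np.argmin(np.linalg.norm(pat_X - x, axis=1)))
        keep.append(pat_y[nearest] == label)   # prune on disagreement
    keep = np.array(keep)
    X = np.vstack([pop_X[keep], pat_X])        # forward learning
    y = np.concatenate([pop_y[keep], pat_y])
    return X, y
```

Any discriminative classifier could then be retrained on the personalized memory, matching the framework's classifier-agnostic claim.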

      • Graph-guided joint prediction of class label and clinical scores for the Alzheimer’s disease

        Yu, Guan; Liu, Yufeng; Shen, Dinggang. Springer Science + Business Media, 2016. Brain Structure and Function.

        Accurate diagnosis of Alzheimer's disease and its prodromal stage, i.e., mild cognitive impairment, is very important for early treatment. Over the last decade, various machine learning methods have been proposed to predict disease status and clinical scores from brain images. It is worth noting that many features extracted from brain images are correlated significantly. In this case, feature selection combined with the additional correlation information among features can effectively improve classification/regression performance. Typically, the correlation information among features can be modeled by the connectivity of an undirected graph, where each node represents one feature and each edge indicates that the two involved features are correlated significantly. In this paper, we propose a new graph-guided multi-task learning method incorporating this undirected graph information to predict multiple response variables (i.e., class label and clinical scores) jointly. Specifically, based on the sparse undirected feature graph, we utilize a new latent group Lasso penalty to encourage the correlated features to be selected together. Furthermore, this new penalty also encourages the intrinsic correlated tasks to share a common feature subset. To validate our method, we have performed many numerical studies using simulated datasets and the Alzheimer's Disease Neuroimaging Initiative dataset. Compared with the other methods, our proposed method has very promising performance.
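Group-lasso-type penalties such as the latent group Lasso mentioned above are typically handled with a proximal step in the optimizer. A minimal sketch of the group-lasso proximal operator (block soft-thresholding), with hypothetical group indices, is:

```python
import numpy as np

def prox_group_lasso(w, groups, lam):
    """Proximal operator of lam * sum_g ||w_g||_2 (group lasso):
    each group of coefficients is shrunk toward zero as a block, so
    weakly supported groups are zeroed out together."""
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out[g] = scale * w[g]
    return out
```

Zeroing whole groups at once is what lets correlated features (the graph's connected nodes) enter or leave the model together.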

      • Integrative analysis of multi-dimensional imaging genomics data for Alzheimer's disease prediction

        Zhang, Ziming; Huang, Heng; Shen, Dinggang. Frontiers Media S.A., 2014. Frontiers in Aging Neuroscience, Vol. 6.

        In this paper, we explore the effects of integrating multi-dimensional imaging genomics data for Alzheimer's disease (AD) prediction using machine learning approaches. Specifically, we compare our three recently proposed feature selection methods [i.e., multiple kernel learning (MKL), high-order graph matching based feature selection (HGM-FS), and sparse multimodal learning (SMML)] using four widely used modalities [i.e., magnetic resonance imaging (MRI), positron emission tomography (PET), cerebrospinal fluid (CSF), and the genetic modality of single-nucleotide polymorphisms (SNP)]. This study demonstrates the performance of each method using these modalities individually or integratively, and may be valuable for clinical tests in practice. Our experimental results suggest that for AD prediction, in general, (1) in terms of accuracy, PET is the best modality; (2) even though the discriminant power of genetic SNP features is weak, adding this modality to other modalities does help improve the classification accuracy; (3) HGM-FS works best among the three feature selection methods; and (4) some of the selected features are shared by all the feature selection methods, which may have high correlation with the disease. Using all the modalities on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, the best accuracies, described as (mean ± standard deviation)%, among the three methods are (76.2 ± 11.3)% for AD vs. MCI, (94.8 ± 7.3)% for AD vs. HC, (76.5 ± 11.1)% for MCI vs. HC, and (71.0 ± 8.4)% for AD vs. MCI vs. HC, respectively.

      • Multi-Tissue Decomposition of Diffusion MRI Signals via ℓ0 Sparse-Group Estimation

        Yap, Pew-Thian; Zhang, Yong; Shen, Dinggang. IEEE, 2016. IEEE Transactions on Image Processing, Vol. 25, No. 9.

        Sparse estimation techniques are widely utilized in diffusion magnetic resonance imaging (DMRI). In this paper, we present an algorithm for solving the ℓ0 sparse-group estimation problem and apply it to the tissue signal separation problem in DMRI. Our algorithm solves the ℓ0 problem directly, unlike existing approaches that often seek to solve its relaxed approximations. We include the mathematical proofs showing that the algorithm will converge to a solution satisfying the first-order optimality condition within a finite number of iterations. We apply this algorithm to DMRI data to tease apart signal contributions from white matter, gray matter, and cerebrospinal fluid with the aim of improving the estimation of the fiber orientation distribution function (FODF). Unlike spherical deconvolution approaches that assume an invariant fiber response function (RF), our approach utilizes an RF group to span the signal subspace of each tissue type, allowing greater flexibility in accounting for possible variations of the RF throughout space and within each voxel. Our ℓ0 algorithm allows for the natural groupings of the RFs to be considered during signal decomposition. Experimental results confirm that our method yields estimates of FODFs and volume fractions of tissue compartments with improved robustness and accuracy. Our ℓ0 algorithm is general and can be applied to sparse estimation problems beyond the scope of this paper.
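To illustrate what an ℓ0 sparse-group constraint does (as opposed to the relaxed group-lasso penalty), the sketch below shows the projection step used by hard-thresholding-style solvers: keep the k strongest groups, zero the rest. This illustrates the constraint only, not the paper's actual algorithm:

```python
import numpy as np

def hard_threshold_groups(w, groups, k):
    """Projection onto the l0 sparse-group constraint: keep the k
    groups with largest l2 norm intact and zero out all other groups.
    Unlike group-lasso shrinkage, kept coefficients are not biased."""
    norms = [np.linalg.norm(w[g]) for g in groups]
    keep = np.argsort(norms)[::-1][:k]     # indices of strongest groups
    out = np.zeros_like(w)
    for i in keep:
        out[groups[i]] = w[groups[i]]
    return out
```

In the DMRI setting, a group would collect the response-function atoms of one tissue type, so whole tissue compartments are switched on or off together.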
