RISS Academic Research Information Service

      • A mathematical model for the two-learners problem

        Müller, Jan Saputra; Vidaurre, Carmen; Schreuder, Martijn; Meinecke, Frank C.; von Bünau, Paul; Müller, Klaus-Robert. IOP, 2017. Journal of neural engineering Vol.14 No.3

        Objective. We present the first generic theoretical formulation of the co-adaptive learning problem and give a simple example of two interacting linear learning systems, a human and a machine. Approach. After the description of the training protocol of the two learning systems, we define a simple linear model where the two learning agents are coupled by a joint loss function. The simplicity of the model allows us to find learning rules for both human and machine that permit computing theoretical simulations. Main results. As seen in simulations, an astonishingly rich structure is found for this eco-system of learners. While the co-adaptive learners are shown to easily stall or get out of sync for some parameter settings, we can find a broad sweet spot of parameters where the learning system can converge quickly. It is defined by mid-range learning rates on the side of the learning machine, quite independent of the human in the loop. Despite its simplistic assumptions the theoretical study could be confirmed by a real-world experimental study where human and machine co-adapt to perform cursor control under distortion. Also in this practical setting the mid-range learning rates yield the best performance and behavioral ratings. Significance. The results presented in this mathematical study allow the computation of simple theoretical simulations and performance of real experimental paradigms. Additionally, they are nicely in line with previous results in the BCI literature.
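
        The coupled-update structure sketched in the abstract can be illustrated with a toy simulation. The following is an assumption-laden reading, not the paper's exact model: a "human" encoder w_h and a "machine" decoder w_m take gradient steps on one shared squared loss, with the dimensionality, the element-wise coupling, the scalar cursor target, and the learning rates eta_h and eta_m chosen purely for illustration.

          # Toy simulation of two coupled linear learners sharing one squared loss.
          import numpy as np

          rng = np.random.default_rng(0)
          d = 5
          w_h = rng.normal(size=d)            # "human" encoding weights (adapt slowly)
          w_m = rng.normal(size=d)            # "machine" decoding weights
          eta_h, eta_m = 0.01, 0.1            # illustrative mid-range machine learning rate

          for t in range(2000):
              x = rng.normal(size=d)          # intended target features
              y = x.sum()                     # scalar cursor target (hypothetical task)
              s = w_h * x + 0.1 * rng.normal(size=d)   # human-generated signal
              y_hat = w_m @ s                 # machine's decoded output
              err = y_hat - y                 # shared error drives both updates
              w_m -= eta_m * err * s          # machine gradient step on the joint loss
              w_h -= eta_h * err * (w_m * x)  # human gradient step on the same loss

        Sweeping eta_h and eta_m in such a toy setup is one way to see the stalling versus fast-convergence regimes the abstract describes.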

      • Evaluation of a Compact Hybrid Brain-Computer Interface System

        Shin, Jaeyoung; Müller, Klaus-Robert; Schmitz, Christoph H.; Kim, Do-Won; Hwang, Han-Jeong. Hindawi, 2017. BioMed research international Vol.2017

        We realized a compact hybrid brain-computer interface (BCI) system by integrating a portable near-infrared spectroscopy (NIRS) device with an economical electroencephalography (EEG) system. The NIRS array was located on the subjects' forehead, covering the prefrontal area. The EEG electrodes were distributed over the frontal, motor/temporal, and parietal areas. The experimental paradigm involved a Stroop word-picture matching test in combination with mental arithmetic (MA) and baseline (BL) tasks, in which the subjects were asked to perform either MA or BL in response to congruent or incongruent conditions, respectively. We compared the classification accuracies of each of the modalities (NIRS or EEG) with that of the hybrid system. We showed that the hybrid system outperforms the unimodal EEG and NIRS systems by 6.2% and 2.5%, respectively. Since the proposed hybrid system is based on portable platforms, it is not confined to a laboratory environment and has the potential to be used in real-life situations, such as in neurorehabilitation.
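
        One simple way to compare unimodal and hybrid classification as described above is feature-level fusion: concatenate EEG and NIRS features and feed them to one classifier. The sketch below is a generic illustration with placeholder random features and shrinkage LDA; it does not reproduce the paper's actual pipeline, feature dimensions, or labels.

          import numpy as np
          from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
          from sklearn.model_selection import cross_val_score

          n_trials = 120
          eeg_feats = np.random.randn(n_trials, 60)    # e.g. band-power features (placeholder)
          nirs_feats = np.random.randn(n_trials, 20)   # e.g. mean HbO/HbR changes (placeholder)
          y = np.random.randint(0, 2, n_trials)        # MA vs BL labels (placeholder)

          hybrid = np.hstack([eeg_feats, nirs_feats])  # feature-level fusion
          clf = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto')
          for name, X in [('EEG', eeg_feats), ('NIRS', nirs_feats), ('hybrid', hybrid)]:
              acc = cross_val_score(clf, X, y, cv=5).mean()
              print(f'{name}: {acc:.2f}')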

      • SCI / SCIE / SCOPUS

        Optimizing the regularization for image reconstruction of cerebral diffuse optical tomography

        Habermehl, Christina; Steinbrink, Jens; Müller, Klaus-Robert; Haufe, Stefan. SPIE - International Society for Optical Engineering, 2014. Journal of Biomedical Optics Vol.19 No.9

        Functional near-infrared spectroscopy (fNIRS) is an optical method for noninvasively determining brain activation by estimating changes in the absorption of near-infrared light. Diffuse optical tomography (DOT) extends fNIRS by applying overlapping high density measurements, and thus providing a three-dimensional imaging with an improved spatial resolution. Reconstructing brain activation images with DOT requires solving an underdetermined inverse problem with far more unknowns in the volume than in the surface measurements. All methods of solving this type of inverse problem rely on regularization and the choice of corresponding regularization or convergence criteria. While several regularization methods are available, it is unclear how well suited they are for cerebral functional DOT in a semi-infinite geometry. Furthermore, the regularization parameter is often chosen without an independent evaluation, and it may be tempting to choose the solution that matches a hypothesis and rejects the other. In this simulation study, we start out by demonstrating how the quality of cerebral DOT reconstructions is altered with the choice of the regularization parameter for different methods. To independently select the regularization parameter, we propose a cross-validation procedure which achieves a reconstruction quality close to the optimum. Additionally, we compare the outcome of seven different image reconstruction methods for cerebral functional DOT. The methods selected include reconstruction procedures that are already widely used for cerebral DOT [minimum l2-norm estimate (l2MNE) and truncated singular value decomposition], recently proposed sparse reconstruction algorithms [minimum l1- and a smooth minimum l0-norm estimate (l1MNE, l0MNE, respectively)] and a depth- and noise-weighted minimum norm (wMNE). Furthermore, we expand the range of algorithms for DOT by adapting two EEG-source localization algorithms [sparse basis field expansions and linearly constrained minimum variance (LCMV) beamforming]. Independent of the applied noise level, we find that the LCMV beamformer is best for single spot activations with perfect location and focality of the results, whereas the minimum l1-norm estimate succeeds with multiple targets.
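
        The core of the proposed parameter selection is cross-validating the regularization strength of an underdetermined linear inverse problem. The sketch below shows this idea for a minimum l2-norm (Tikhonov) estimate with a synthetic sensitivity matrix; the forward model, noise level, and lambda grid are placeholders, not the paper's simulation setup.

          import numpy as np

          rng = np.random.default_rng(1)
          n_meas, n_vox = 100, 2000
          J = rng.normal(size=(n_meas, n_vox))          # synthetic sensitivity (forward) matrix
          x_true = np.zeros(n_vox); x_true[500] = 1.0   # single focal activation
          y = J @ x_true + 0.05 * rng.normal(size=n_meas)

          def l2mne(J, y, lam):
              # Minimum l2-norm estimate: x = J^T (J J^T + lam * scale * I)^(-1) y
              G = J @ J.T
              scale = np.trace(G) / len(G)
              return J.T @ np.linalg.solve(G + lam * scale * np.eye(len(G)), y)

          lambdas = np.logspace(-6, 0, 13)
          folds = np.array_split(rng.permutation(n_meas), 5)
          cv_err = []
          for lam in lambdas:
              err = 0.0
              for test in folds:
                  train = np.setdiff1d(np.arange(n_meas), test)
                  x_hat = l2mne(J[train], y[train], lam)
                  err += np.mean((J[test] @ x_hat - y[test]) ** 2)   # held-out data misfit
              cv_err.append(err)
          best_lam = lambdas[int(np.argmin(cv_err))]    # lambda with lowest cross-validation error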

      • Ensembles of adaptive spatial filters increase BCI performance: an online evaluation

        Sannelli, Claudia; Vidaurre, Carmen; Müller, Klaus-Robert; Blankertz, Benjamin. IOP, 2016. Journal of neural engineering Vol.13 No.4

        Objective: In electroencephalographic (EEG) data, signals from distinct sources within the brain are widely spread by volume conduction and superimposed such that sensors receive mixtures of a multitude of signals. This reduction of spatial information strongly hampers single-trial analysis of EEG data as, for example, required for brain–computer interfacing (BCI) when using features from spontaneous brain rhythms. Spatial filtering techniques are therefore greatly needed to extract meaningful information from EEG. Our goal is to show, in online operation, that common spatial pattern patches (CSPP) are valuable to counteract this problem. Approach: Even though the effect of spatial mixing can be encountered by spatial filters, there is a trade-off between performance and the requirement of calibration data. Laplacian derivations do not require calibration data at all, but their performance for single-trial classification is limited. Conversely, data-driven spatial filters, such as common spatial patterns (CSP), can lead to highly distinctive features; however they require a considerable amount of training data. Recently, we showed in an offline analysis that CSPP can establish a valuable compromise. In this paper, we confirm these results in an online BCI study. In order to demonstrate the paramount feature that CSPP requires little training data, we used them in an adaptive setting with 20 participants and focused on users who did not have success with previous BCI approaches. Main results: The results of the study show that CSPP adapts faster and thereby allows users to achieve better feedback within a shorter time than previous approaches performed with Laplacian derivations and CSP filters. The success of the experiment highlights that CSPP has the potential to further reduce BCI inefficiency. Significance: CSPP are a valuable compromise between CSP and Laplacian filters. They allow users to attain better feedback within a shorter time and thus reduce BCI inefficiency to one-fourth in comparison to previous non-adaptive paradigms.
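
        The CSP building block underlying CSPP is a generalized eigendecomposition of the two class covariance matrices. The sketch below shows plain CSP only; the "patch" construction (CSP computed locally around Laplacian channels) is not reproduced, and the trial shapes are synthetic placeholders.

          import numpy as np
          from scipy.linalg import eigh

          def csp_filters(trials_a, trials_b, n_filters=3):
              # trials_*: arrays of shape (n_trials, n_channels, n_samples), band-pass filtered
              cov = lambda T: np.mean([X @ X.T / np.trace(X @ X.T) for X in T], axis=0)
              Ca, Cb = cov(trials_a), cov(trials_b)
              # Solve Ca w = lambda (Ca + Cb) w; extreme eigenvectors discriminate best
              evals, evecs = eigh(Ca, Ca + Cb)
              order = np.argsort(evals)
              picks = np.r_[order[:n_filters], order[-n_filters:]]
              return evecs[:, picks].T                  # rows are spatial filters

          rng = np.random.default_rng(0)
          A = rng.normal(size=(30, 16, 200))            # placeholder class-A trials
          B = rng.normal(size=(30, 16, 200))            # placeholder class-B trials
          W = csp_filters(A, B)                         # 6 spatial filters; features are log-variances of W @ X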

      • On robust parameter estimation in brain–computer interfacing

        Samek, Wojciech; Nakajima, Shinichi; Kawanabe, Motoaki; Müller, Klaus-Robert. IOP, 2017. Journal of neural engineering Vol.14 No.6

        <P> <I>Objective</I>. The reliable estimation of parameters such as mean or covariance matrix from noisy and high-dimensional observations is a prerequisite for successful application of signal processing and machine learning algorithms in brain–computer interfacing (BCI). This challenging task becomes significantly more difficult if the data set contains outliers, e.g. due to subject movements, eye blinks or loose electrodes, as they may heavily bias the estimation and the subsequent statistical analysis. Although various robust estimators have been developed to tackle the outlier problem, they ignore important structural information in the data and thus may not be optimal. Typical structural elements in BCI data are the <I>trials</I> consisting of a few hundred EEG samples and indicating the start and end of a task. <I>Approach</I>. This work discusses the parameter estimation problem in BCI and introduces a novel hierarchical view on robustness which naturally comprises different types of outlierness occurring in structured data. Furthermore, the class of minimum divergence estimators is reviewed and a robust mean and covariance estimator for structured data is derived and evaluated with simulations and on a benchmark data set. <I>Main results</I>. The results show that state-of-the-art BCI algorithms benefit from robustly estimated parameters. <I>Significance</I>. Since parameter estimation is an integral part of various machine learning algorithms, the presented techniques are applicable to many problems beyond BCI.</P>
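
        A trial-aware robust covariance estimate can be illustrated by iteratively downweighting trials whose covariance deviates strongly from the current estimate. This is a generic illustrative scheme, not the minimum-divergence estimator derived in the paper; shapes, the deviation measure, and the temperature parameter are assumptions.

          import numpy as np

          def robust_trial_cov(trials, n_iter=10, temp=1.0):
              # trials: (n_trials, n_channels, n_samples) band-pass filtered EEG
              covs = np.array([X @ X.T / X.shape[1] for X in trials])
              w = np.ones(len(covs)) / len(covs)
              for _ in range(n_iter):
                  C = np.tensordot(w, covs, axes=1)                     # weighted mean covariance
                  d = np.array([np.linalg.norm(Ci - C) for Ci in covs]) # deviation of each trial
                  w = np.exp(-d / (temp * d.mean()))                    # downweight outlier trials
                  w /= w.sum()
              return C

          trials = np.random.randn(50, 16, 200)         # placeholder EEG trials
          C_robust = robust_trial_cov(trials)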

      • SCI / SCIE / SCOPUS
      • sGDML: Constructing accurate and data efficient molecular force fields using machine learning

        Chmiela, Stefan; Sauceda, Huziel E.; Poltavsky, Igor; Müller, Klaus-Robert; Tkatchenko, Alexandre. Elsevier, 2019. Computer physics communications Vol.240

        We present an optimized implementation of the recently proposed symmetric gradient domain machine learning (sGDML) model. The sGDML model is able to faithfully reproduce global potential energy surfaces (PES) for molecules with a few dozen atoms from a limited number of user-provided reference molecular conformations and the associated atomic forces. Here, we introduce a Python software package to reconstruct and evaluate custom sGDML force fields (FFs), without requiring in-depth knowledge about the details of the model. A user-friendly command-line interface offers assistance through the complete process of model creation, in an effort to make this novel machine learning approach accessible to broad practitioners. Our paper serves as a documentation, but also includes a practical application example of how to reconstruct and use a PBE0+MBD FF for paracetamol. Finally, we show how to interface sGDML with the FF simulation engines ASE (Larsen et al., 2017) and i-PI (Kapil et al., 2019) to run numerical experiments, including structure optimization, classical and path integral molecular dynamics and nudged elastic band calculations.
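
        A brief usage sketch of the prediction side of the package, along the lines of the interface shown in the sGDML documentation: load a trained model and query energies and forces for flattened Cartesian coordinates. The model file name, molecule size (9 atoms), and coordinates are placeholders; actual models are produced beforehand via the package's training tools.

          import numpy as np
          from sgdml.predict import GDMLPredict

          model = np.load('m_ethanol.npz')      # a previously trained sGDML model (placeholder name)
          gdml = GDMLPredict(model)

          r = np.random.rand(1, 27)             # flattened Cartesian coordinates (9 atoms x 3), placeholder geometry
          e, f = gdml.predict(r)                # energy with shape (1,) and forces with shape (1, 27)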

      • Objective quality assessment of stereoscopic images with vertical disparity using EEG

        Avarvand, Forooz Shahbazi; Bosse, Sebastian; Müller, Klaus-Robert; Schäfer, Ralf; Nolte, Guido; Wiegand, Thomas; Curio, Gabriel; Samek, Wojciech. IOP, 2017. Journal of neural engineering Vol.14 No.4

        Objective. Neurophysiological correlates of vertical disparity in 3D images are studied in an objective approach using EEG technique. These disparities are known to negatively affect the quality of experience and to cause visual discomfort in stereoscopic visualizations. Approach. We have presented four conditions to subjects: one in 2D and three conditions in 3D, one without vertical disparity and two with different vertical disparity levels. Event related potentials (ERPs) are measured for each condition and the differences between ERP components are studied. Analysis is also performed on the induced potentials in the time frequency domain. Main results. Results show that there is a significant increase in the amplitude of P1 components in 3D conditions in comparison to 2D. These results are consistent with previous studies which have shown that P1 amplitude increases due to the depth perception in 3D compared to 2D. However the amplitude is significantly smaller for maximum vertical disparity (3D-3) in comparison to 3D with no vertical disparity. Our results therefore suggest that the vertical disparity in 3D-3 condition decreases the perception of depth compared to other 3D conditions and the amplitude of P1 component can be used as a discriminative feature. Significance. The results show that the P1 component increases in amplitude due to the depth perception in the 3D stimuli compared to the 2D stimulus. On the other hand the vertical disparity in the stereoscopic images is studied here. We suggest that the amplitude of P1 component is modulated with this parameter and decreases due to the decrease in the perception of depth.
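
        The kind of ERP readout described above amounts to averaging epochs per condition and extracting the P1 peak in an early time window. The sketch below uses synthetic placeholder epochs; the sampling rate, channel choice, and P1 window are assumptions, not the paper's analysis parameters.

          import numpy as np

          fs = 250                                       # sampling rate in Hz (assumed)
          times = np.arange(-0.2, 0.8, 1 / fs)           # epoch time axis in seconds

          def p1_amplitude(epochs, window=(0.08, 0.13)):
              # epochs: (n_trials, n_samples) for one occipito-parietal channel and condition
              erp = epochs.mean(axis=0)                  # event-related potential
              mask = (times >= window[0]) & (times <= window[1])
              return erp[mask].max()                     # P1 = early positive peak

          epochs_2d = np.random.randn(40, times.size)    # placeholder epochs, 2D condition
          epochs_3d = np.random.randn(40, times.size)    # placeholder epochs, a 3D condition
          print(p1_amplitude(epochs_2d), p1_amplitude(epochs_3d))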

      • Methods for interpreting and understanding deep neural networks

        Montavon, Grégoire; Samek, Wojciech; Müller, Klaus-Robert. Elsevier, 2018. Digital signal processing Vol.73

        This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions. It is based on a tutorial given at ICASSP 2017. As a tutorial paper, the set of methods covered here is not exhaustive, but sufficiently representative to discuss a number of questions in interpretability, technical challenges, and possible applications. The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which we provide theory, recommendations, and tricks, to make most efficient use of it on real data.
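
        The LRP-epsilon propagation rule for one dense ReLU layer can be written in a few lines: relevance arriving at the layer's outputs is redistributed to its inputs in proportion to their contributions to the stabilized pre-activations. The weights, activations, and relevance values below are illustrative placeholders; applying such a rule layer by layer, from the output back to the input, yields a feature-wise relevance map for one prediction.

          import numpy as np

          def lrp_epsilon(a, W, b, R_out, eps=1e-6):
              # a: activations entering the layer, shape (n_in,)
              # W, b: layer weights (n_in, n_out) and biases (n_out,)
              # R_out: relevance of the layer's outputs, shape (n_out,)
              z = a @ W + b                                       # forward pre-activations
              s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0)) # stabilized relevance per output
              return a * (W @ s)                                  # redistribute to the inputs

          a = np.random.rand(10)                     # placeholder input activations (e.g. ReLU outputs)
          W = np.random.randn(10, 4); b = np.random.randn(4)
          R_in = lrp_epsilon(a, W, b, R_out=np.maximum(a @ W + b, 0))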
