Semi-supervised regression based on support vector machine
Seok, Kyungha The Korean Data and Information Science Society 2014 한국데이터정보과학회지 Vol.25 No.2
In many practical machine learning and data mining applications, unlabeled training examples are readily available but labeled ones are fairly expensive to obtain. Therefore semi-supervised learning algorithms have attracted much attention. However, previous research mainly focuses on classification problems. In this paper, a semi-supervised regression method based on the support vector regression (SVR) formulation is proposed. The estimator is easily obtained via the dual formulation of the optimization problem. The experimental results with simulated and real data suggest superior performance of the proposed method compared with standard SVR.
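The abstract does not spell out the paper's dual-based estimator, so as a generic illustration of how unlabeled examples can enter a regression fit, here is a minimal self-training loop. This is an assumed stand-in, not the authors' formulation, and it uses ordinary least squares as the base learner:

```python
import numpy as np

def self_training_regression(X_lab, y_lab, X_unlab, n_rounds=3):
    """Generic self-training loop for semi-supervised regression.

    Fits ordinary least squares on the labeled data, pseudo-labels the
    unlabeled points with its own predictions, then refits on the union.
    Illustrative only; the paper derives its estimator from the SVR dual.
    """
    X, y = X_lab, y_lab
    w = None
    for _ in range(n_rounds):
        A = np.column_stack([X, np.ones(len(X))])  # design matrix with intercept
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        # pseudo-label the unlabeled points with the current fit
        pseudo = np.column_stack([X_unlab, np.ones(len(X_unlab))]) @ w
        X = np.vstack([X_lab, X_unlab])
        y = np.concatenate([y_lab, pseudo])
    return w  # [slope(s), intercept]
```

With a linear base learner the pseudo-labels lie on the fitted line, so the loop cannot move the estimate; with a nonlinear learner such as SVR the retraining step can.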
A Study on Support Vectors of Least Squares Support Vector Machine
Seok, Kyungha, Cho, Daehyun The Korean Statistical Society 2003 Communications for Statistical Applications and Methods Vol.10 No.3
LS-SVM (least squares support vector machine) has been used as a promising method for regression as well as classification. Suykens et al. (2000) used only the magnitude of the residuals to obtain SVs (support vectors). Suykens' method behaves well for homoscedastic models, but performs poorly in heteroscedastic ones. The present paper proposes a new method to obtain SVs that uses the variance of the noise as well as the magnitude of the residuals. A simulation study demonstrates the superiority of the proposed method.
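For context, the standard LS-SVM regression estimator that both Suykens' and the proposed SV-selection schemes start from is obtained from a single linear system in the bias and the dual coefficients. A minimal NumPy sketch (the SV-selection step, which is the paper's contribution, is not reproduced here):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM dual system
        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(M, rhs)
    return sol[0], sol[1:]          # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    """f(x) = sum_i alpha_i k(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

Note that in LS-SVM every training point gets a nonzero alpha, which is exactly why a separate rule (residual magnitude, or the paper's variance-aware criterion) is needed to pick SVs.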
Semisupervised support vector quantile regression
Seok, Kyungha The Korean Data and Information Science Society 2015 한국데이터정보과학회지 Vol.26 No.2
Unlabeled examples are easier and less expensive to obtain than labeled examples. In this paper a semisupervised approach is used to utilize such examples in an effort to enhance the predictive performance of nonlinear quantile regression problems. We propose a semisupervised quantile regression method named semisupervised support vector quantile regression (S2SVQR), which is based on the support vector machine. A generalized approximate cross validation method is used to choose the hyperparameters that affect the performance of the estimator. The experimental results confirm the successful performance of the proposed S2SVQR.
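Quantile regression methods such as S2SVQR are built on the check (pinball) loss, whose minimizer is the conditional tau-quantile rather than the conditional mean. A small sketch:

```python
import numpy as np

def pinball_loss(y, f, tau):
    """Check (pinball) loss for the tau-quantile:
    rho_tau(r) = tau * r        if r >= 0
               = (tau - 1) * r  otherwise,  with r = y - f.
    Under-predictions are penalized by tau, over-predictions by 1 - tau."""
    r = np.asarray(y) - np.asarray(f)
    return np.where(r >= 0, tau * r, (tau - 1) * r)
```

For tau = 0.5 this reduces (up to a factor of 2) to the absolute error, i.e. median regression.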
GACV for partially linear support vector regression
Shim, Jooyong, Seok, Kyungha The Korean Data and Information Science Society 2013 한국데이터정보과학회지 Vol.24 No.2
Partially linear regression is capable of providing a more complete description of the linear and nonlinear relationships among random variables. In support vector regression (SVR) the hyperparameters are known to affect the performance of the regression. In this paper we propose an iteratively reweighted least squares (IRWLS) procedure to solve the quadratic problem of partially linear support vector regression with a modified loss function, which enables us to use the generalized approximate cross validation function to select the hyperparameters. Experimental results are then presented which illustrate the performance of the partially linear SVR using the IRWLS procedure.
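The abstract's modified loss and GACV derivation are not given here, but the IRWLS idea itself, refitting a weighted least squares problem with weights recomputed from the current residuals, can be sketched generically. The Huber weight function below is an assumed stand-in for the paper's weights:

```python
import numpy as np

def irwls(X, y, c=1.345, n_iter=20):
    """Iteratively reweighted least squares with Huber weights.

    Each pass solves the weighted normal equations A^T W A beta = A^T W y,
    then recomputes the weights from the residuals of the new fit so that
    large residuals are downweighted. Illustrates the IRWLS idea; the
    paper applies it to the partially linear SVR quadratic problem."""
    A = np.column_stack([X, np.ones(len(y))])   # design matrix with intercept
    w = np.ones(len(y))
    for _ in range(n_iter):
        W = A * w[:, None]                       # rows scaled by weights
        beta = np.linalg.solve(A.T @ W, A.T @ (w * y))
        r = y - A @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust residual scale
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)             # Huber weights
    return beta
```

Because each pass is just a weighted least squares solve, the whole procedure stays inside linear algebra, which is what makes IRWLS attractive for SVR-type quadratic problems.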
Seok, Kyungha, Lee, Taewoo The Korean Data and Information Science Society 2013 한국데이터정보과학회지 Vol.24 No.6
Various data mining techniques have been applied to solve classification problems in areas such as database marketing, credit scoring and market forecasting. In this study, based on daily contact lens transaction data, we compare the classification performance of classical statistical techniques such as decision trees and logistic regression with more recently developed methods: bagging, boosting, LASSO, random forest and the support vector machine. For the comparison, the data went through cleaning, exploration, derived-variable generation and variable selection. In terms of the correct classification rate, the support vector machine was marginally better than the other models, but with a large standard deviation. Considering both the classification rate and its standard deviation, the random forest gave the best results. However, taking model interpretability, parsimony and training time into account, we conclude that the LASSO model is the most suitable.
Support Vector Machine for Linear Regression
Hwang, Changha, Seok, Kyungha The Korean Statistical Society 1999 Communications for Statistical Applications and Methods Vol.6 No.2
The support vector machine (SVM) is a new and very promising regression and classification technique developed by Vapnik and his group at AT&T Bell Laboratories. This article provides a brief overview of the SVM, focusing on linear regression. We explain from a statistical point of view why the SVM might be attractive and how it compares with other linear regression techniques. Furthermore, we explain model selection based on VC theory.
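The loss that distinguishes SVR from ordinary least squares regression is Vapnik's epsilon-insensitive loss, which a brief sketch makes concrete:

```python
import numpy as np

def eps_insensitive_loss(y, f, eps=0.1):
    """Vapnik's epsilon-insensitive loss: errors within the eps tube
    around the fit cost nothing; larger errors are penalized linearly."""
    return np.maximum(np.abs(np.asarray(y) - np.asarray(f)) - eps, 0.0)
```

The flat region inside the tube is what produces sparsity: only points on or outside the tube end up as support vectors.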
Censored varying coefficient regression model using Buckley-James method
Shim, Jooyong, Seok, Kyungha The Korean Data and Information Science Society 2017 한국데이터정보과학회지 Vol.28 No.5
The censored regression using the pseudo-response variable proposed by Buckley and James has been one of the most well-known models. Recently, the varying coefficient regression model has received a great deal of attention as an important tool for modeling. In this paper we propose a censored varying coefficient regression model using the Buckley-James method to consider situations where the regression coefficients of the model are not constant but change as the smoothing variables change. By using the formulation of the least squares support vector machine (LS-SVM), the coefficient estimators of the proposed model can be easily obtained from simple linear equations. Furthermore, a generalized cross validation function can be easily derived. We evaluate the proposed method and demonstrate its adequacy through simulated and real data sets.
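The Buckley-James pseudo-response replaces a censored observation with its fitted value plus the conditional expectation of the residual given that it exceeds the observed one. A crude illustration of one imputation step, which substitutes a plain tail mean for the Kaplan-Meier-weighted expectation used in the actual method:

```python
import numpy as np

def bj_pseudo_response(y, delta, f):
    """One Buckley-James imputation step (crude illustration).

    Uncensored points (delta = 1) keep their observed response; a
    censored point gets its fitted value plus the mean of the uncensored
    residuals that exceed its own residual -- a simple stand-in for the
    Kaplan-Meier-weighted conditional expectation E[e | e > e_i]."""
    y = np.asarray(y, float)
    f = np.asarray(f, float)
    delta = np.asarray(delta, bool)
    e = y - f                      # residuals under the current fit
    y_star = y.copy()
    for i in np.where(~delta)[0]:
        tail = e[delta & (e > e[i])]
        # if no uncensored residual lies beyond e_i, keep the observed value
        y_star[i] = f[i] + (tail.mean() if len(tail) else e[i])
    return y_star
```

In the full method these pseudo-responses feed back into the fit and the two steps are iterated until the coefficients stabilize.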
Monotone support vector quantile regression
Shim, Jooyong, Seok, Kyungha, Hwang, Changha Informa UK (Taylor & Francis) 2017 Communications in Statistics Vol.46 No.10
Quantile regression (QR) models have received a great deal of attention in both the theoretical and applied statistical literature. In this paper we propose support vector quantile regression (SVQR) with a monotonicity restriction, which is easily obtained via the dual formulation of the optimization problem. We also provide the generalized approximate cross validation method for choosing the hyperparameters which affect the performance of the proposed SVQR. The experimental results for the synthetic and real data sets confirm the successful performance of the proposed model.