
The Convergence Rate of a Kernel-Based Regularized Pairwise Learning Algorithm with a Quasiconvex Loss
Kernel-based regularized ranking algorithms have recently attracted much attention in machine learning theory, and pairwise learning is a generalization of the ranking problem. In this paper, a kernel-based regularized pairwise learning algorithm with a quasiconvex loss function is proposed. An error analysis of the algorithm is carried out using quasiconvex analysis, and an explicit convergence rate is derived. The analysis shows that the sample error depends on the choice of parameters in the loss function. Numerical experiments show that, compared with the ranking algorithm based on the least-squares loss, the proposed method has more robust learning performance.
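The regularization scheme described in the abstract can be sketched in code. The following is a minimal illustrative implementation, not the paper's exact construction: the Gaussian kernel, the specific bounded quasiconvex loss l(t) = t^2/(1 + t^2), and the gradient-descent solver on the representer coefficients are all assumptions made here for concreteness.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # Gram matrix of a Gaussian (RBF) kernel -- an illustrative kernel choice.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def quasiconvex_loss_grad(t):
    # Derivative of l(t) = t^2 / (1 + t^2): an even, bounded loss whose
    # sublevel sets are intervals, hence quasiconvex (but not convex).
    # Its boundedness is what makes the pairwise fit robust to outliers.
    return 2 * t / (1 + t ** 2) ** 2

def fit_pairwise(X, y, lam=0.05, sigma=1.0, lr=0.1, epochs=300):
    # Gradient descent on representer coefficients alpha for the objective
    #   (1/n^2) sum_{i,j} l(y_i - y_j - (f(x_i) - f(x_j))) + lam * ||f||_K^2,
    # where f(x) = sum_k alpha_k K(x, x_k) and ||f||_K^2 = alpha' K alpha.
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    alpha = np.zeros(n)
    for _ in range(epochs):
        f = K @ alpha
        G = (y[:, None] - y[None, :]) - (f[:, None] - f[None, :])
        Lp = quasiconvex_loss_grad(G)          # n x n matrix of l'(g_ij)
        # d g_ij / d alpha_k = -(K_ik - K_jk); summing over all pairs (i, j)
        # gives the gradient below (K is symmetric).
        grad = (-(Lp.sum(1) - Lp.sum(0)) @ K) / n ** 2 + 2 * lam * (K @ alpha)
        alpha -= lr * grad
    return alpha, K

def predict(alpha, X_train, X_new, sigma=1.0):
    # Evaluate the learned RKHS function at new points.
    return gaussian_kernel(X_new, X_train, sigma) @ alpha
```

On a simple monotone regression task, the learned function should reproduce the pairwise ordering of the targets; swapping `quasiconvex_loss_grad` for the identity recovers the least-squares pairwise baseline mentioned in the comparison.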
Keywords: pairwise learning / quasiconvex function / kernel regularized algorithm / convergence rate