The selection and optimization of a kernel function and its parameters are closely tied to the distribution of the data being analyzed: mapping the same data with different kernel functions, or mapping different feature subsets of the same data with the same kernel function, yields different degrees of class separability. To address the problem that a single kernel function achieves different predictive performance on different feature subsets of the same data, this paper combines information entropy theory and proposes an SVM data classification method based on an information-entropy feature-weighted kernel function. The method avoids both the blindness of ad hoc kernel design and nonlinear optimization pitfalls such as local optima, while improving classification accuracy. Experimental results show that, compared with similar discriminant algorithms, the SVM based on the information-entropy feature-weighted kernel function achieves higher classification accuracy and exhibits better stability and generalization.
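One plausible realization of the entropy-weighted kernel idea can be sketched as follows. This is only an illustrative reading, not the paper's exact construction: the weights here come from the mutual information between each feature and the class label (an entropy-based quantity), normalized to sum to one, and the weighted kernel is a Gaussian kernel applied after scaling each feature by the square root of its weight.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

# Example data; the paper's datasets are not specified here.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Entropy-based feature weights: mutual information I(feature; label),
# normalized so the weights sum to 1. This is an assumption standing in
# for the paper's information-entropy weighting scheme.
mi = mutual_info_classif(X_train, y_train, random_state=0)
w = mi / mi.sum()

def weighted_rbf(A, B, gamma=0.5):
    """Feature-weighted RBF kernel:
    k(x, z) = exp(-gamma * sum_j w_j * (x_j - z_j)^2),
    implemented by scaling each feature by sqrt(w_j) first."""
    A_w = A * np.sqrt(w)
    B_w = B * np.sqrt(w)
    sq_dists = ((A_w[:, None, :] - B_w[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# scikit-learn's SVC accepts a callable kernel that returns the Gram matrix.
clf = SVC(kernel=weighted_rbf).fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

Down-weighting low-entropy (uninformative) features in the kernel metric is what lets the same base kernel adapt to different feature subsets without a separate combinatorial feature-selection step.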