博士学位论文 (PhD Dissertation)
增量机器学习算法研究——基于模糊神经网络的增量学习
STUDY ON INCREMENTAL MACHINE LEARNING ALGORITHMS: INCREMENTAL LEARNING BASED ON THE FUZZY NEURAL NETWORK
Author: Rong Hu
Supervisor: Prof. Weihong Xu
Nanjing University of Science & Technology
January 2013

声明 (Declaration)

This dissertation presents research results that I obtained under the guidance of my supervisor. To the best of my knowledge, apart from the parts explicitly cited and acknowledged, it contains no research results previously published or made public by others, nor any material I have used to obtain a degree or diploma at any other educational institution. Contributions made to this dissertation by colleagues who worked with me are explicitly acknowledged in the text.

Postgraduate signature: (handwritten)    Date: (handwritten)

学位论文使用授权声明 (Statement of Authorization)

Nanjing University of Science & Technology has the right to keep electronic and paper copies of this dissertation; it may lend the dissertation or publish part or all of it online, and may submit it to relevant departments or institutions and authorize them to preserve, lend, or publish part or all of it online. Classified dissertations are handled according to the applicable confidentiality regulations and procedures.

Postgraduate signature: (handwritten)    Date: (handwritten)

摘 要

With the development of networks, acquiring new data has become easy in many application domains, but for traditional batch learning techniques, extracting useful information from the ever-growing stream of new data is a hard problem. As the scale of the data keeps increasing, the demands on time and space grow rapidly as well, until eventually learning can no longer keep pace with the rate at which the data are updated. Machine learning is an effective way to address this problem; however, traditional machine learning works in batch mode and requires all the data to be ready before learning starts. To serve online learning it must discard previous learning results and retrain from scratch, which is expensive in both time and space. There is therefore an urgent need for incremental learning methods that update knowledge gradually, can revise and reinforce previously learned knowledge, and adapt the updated knowledge to newly added data. This dissertation studies incremental learning for singular value decomposition and for fuzzy neural networks in depth. The main work and contributions are as follows:

1. A covariance-free incremental singular value decomposition. Traditional singular value decomposition (SVD) is computed in batch mode and needs all the data prepared before computation begins, so it cannot serve online processing. This dissertation proposes a candid covariance-free incremental SVD (CCISVD). The method estimates the sample covariance matrix from the current samples and incrementally obtains the first eigenvector of the covariance matrix from sequentially arriving samples, thereby avoiding the explicit solution of the covariance matrix; the feasibility of the method is analyzed both theoretically and intuitively. When solving for the remaining eigenvalues, samples are sought in the complementary space of the currently estimated eigenvectors, which keeps the computed eigenvectors orthogonal throughout and saves time and space.

2. A pruning-free incremental sequential learning fuzzy neural network model. Structure identification of a fuzzy neural network is time-consuming. To avoid generating redundant rules, the pruning idea is built into the rule-growing process itself to raise learning efficiency: this dissertation proposes a pruning-free incremental sequential learning algorithm that uses the error reduction rate to define a rule's contribution to the system output and takes it as the rule-growth criterion, so that no redundant rules are produced while rules are being grown. Because a rule's contribution to the system output is computed from the current input data, learning proceeds incrementally.

3. An optimally pruned incremental extreme learning fuzzy neural network algorithm. ELM (Extreme Learning Machine) is a simple yet effective algorithm for training single-hidden-layer feedforward networks (SLFNs) whose hidden neurons are generated randomly; theory and experiments show that ELM is accurate and fast. To enable online incremental learning, this dissertation extends ELM: the antecedent parameters of the fuzzy rules and the initial number of rules are generated randomly, SVD is then used to rank the rules by importance, the best number of fuzzy rules is selected by leave-one-out (LOO) validation, and finally the consequent parameters of the rules are computed analytically based on risk minimization. Simulation results show that, compared with other algorithms, the method is more robust and has advantages in both accuracy and computation speed.

4. A self-adaptive incremental fuzzy neural network model based on rule influence. In a fuzzy neural network, a fuzzy rule may be quite active at first and then gradually contribute little to the system. This dissertation proposes a self-adaptive incremental learning fuzzy neural network based on rule influence (SAILFNN): the concept of rule influence is introduced, and a fuzzy rule's influence on the system output, computed from the current data, serves as the criterion for growing or deleting rules. The growth criterion is tied to the accuracy of the system: a new rule is considered only when a fuzzy rule's contribution to the system exceeds a threshold; at the same time, the influence of every rule already in the rule base is checked, and a rule whose influence drops below a threshold is deemed no longer active and is deleted. The parameters of both new and existing rules are updated by the extended Kalman filter. Simulations show that the method obtains a simpler structure, shorter training time, and better generalization than other, more expensive techniques.

5. Face recognition based on the incremental fuzzy neural network and wavelets. To raise sample quality and thereby recognition accuracy, this dissertation proposes a new method for extracting facial features. The Haar wavelet first decomposes the face image; the high-frequency part of the transform, an important facial feature, is preserved as part of the feature vector. Fisher linear discriminant analysis (FLD) then further reduces the dimensionality of the low-frequency subimage. The reduced low-frequency vector, concatenated with the preserved high-frequency features, forms the training samples of the fuzzy neural network, which is trained with the self-adaptive incremental learning algorithm proposed in this dissertation. Simulations show that the recognition rate of the network learned after this preprocessing is higher than that of a network without Haar preprocessing.

关键词 (Keywords): incremental learning, singular value decomposition, fuzzy neural networks, Fisher linear discriminant analysis, wavelet transform, face recognition

Abstract

With the rapid development of the Internet, it has become very easy to acquire data in many applications. But how to get useful information from the ever-increasing data is a hard problem for traditional batch learning techniques. As the scale of the data keeps increasing, the demands on time and space increase rapidly too; the final result is that the speed of learning cannot catch up with the speed of updating. Machine learning is an effective way to solve this problem, but the traditional machine learning method works in batch mode: all data must be available before learning begins. To meet the demands of online learning, one may need to abandon former results and retrain the network, which costs much time and space. Incremental methods are therefore urgently needed; they update knowledge gradually, can modify and strengthen previous knowledge, and make the updated knowledge adapt to the newly added data. This dissertation probes in depth into incremental singular value decomposition and the incremental fuzzy neural network. The main contributions of the thesis are as follows:

1. Presents a candid covariance-free incremental singular value decomposition. The traditional approach to SVD is a batch method that requires all the training data to be available before computing begins, so it cannot meet online learning requirements. This dissertation develops a candid covariance-free incremental SVD. The covariance matrix is estimated from the current sample data. We analyze how to compute the first eigenvector from the currently arriving high-dimensional data, with both an intuitive and a theoretical explanation. The method generates the "observations" in a complementary space for the computation of the higher-order eigenvectors, so the orthogonality of the eigenvectors is kept all along. It reduces the cost of time and space.

2. Presents a pruning-free incremental sequential learning fuzzy neural network. Identification of the structure of a fuzzy neural network is time-consuming. To avoid producing redundant rules and to improve learning efficiency, the pruning idea is built into the rule-growing process. This dissertation presents an incremental sequential learning algorithm with no rule pruning, which uses the error rate of descent to define the contribution of a rule to the system and takes it as the rule-growing criterion, so no redundant rule is generated during the growing process. Since the contribution of a rule to the system is computed from the currently arriving data, the method is incremental.

3. Presents an optimal incremental extreme learning fuzzy neural network. The Extreme Learning Machine (ELM) is a simple yet effective learning algorithm for training SLFNs with random hidden nodes, and it has been shown to be accurate and fast both theoretically and experimentally. We extend ELM to an online incremental manner. First, a set of simple antecedents and random values for the parameters of the input membership functions is generated. Then SVD is used to rank the fuzzy basis functions, and the best number of fuzzy rules is selected by a fast computation of the leave-one-out validation error. Finally, the consequent parameters are determined analytically. A comparison against well-known neuro-fuzzy methods shows that the proposed method is robust and competitive in terms of accuracy and speed.

4. Presents a self-adaptive incremental learning fuzzy neural network based on the significance of a neuron. In a fuzzy neural network, a fuzzy rule may be active at the beginning and then become less important to the system. This dissertation presents a self-adaptive sequential incremental learning fuzzy neural network (SAILFNN) algorithm based on the influence of rules. The algorithm uses the concept of the "significance" of a neuron and links it to the learning accuracy. The "significance" of a neuron is defined by its contribution to the network output over the current input data. Only when the significance value of a neuron is larger than a threshold is a new neuron considered for addition. At the same time, all existing neurons are checked: if a neuron's significance value is less than a predefined value, that neuron is removed. The extended Kalman filter is then used to update the parameters. Simulation results indicate that the SAILFNN algorithm provides comparable generalization performance with a considerably reduced network size and training time.

5. Proposes a face recognition algorithm based on the incremental learning fuzzy neural network and the Haar wavelet. To improve the quality of samples and thereby the accuracy of recognition, this dissertation proposes a novel way of extracting facial features based on the incremental learning fuzzy neural network. First, the Haar wavelet is applied to decompose the face image; the high-frequency part, an important component of the facial features, is preserved as part of the feature vector. The low-frequency part is reduced in dimension by Fisher Linear Discriminant (FLD) analysis. The preserved high-frequency part, combined with the dimension-reduced low-frequency part, is used as the input sample of a fuzzy neural network, and the self-adaptive incremental learning algorithm proposed in this dissertation is applied to train the network. The simulation results show that the fuzzy neural network trained after Haar preprocessing achieves higher accuracy than the network without Haar.

Key words: incremental learning, singular value decomposition, fuzzy neural network, Fisher Linear Discriminant

Analysis, wavelet transform, face recognition

目 录 (Contents)

摘要 (Abstract in Chinese)
Abstract
1 Introduction
1.1 Research background and significance
1.2 State of the art of incremental SVD and incremental learning fuzzy neural networks
1.2.1 State of the art of incremental SVD
1.2.2 Overview of fuzzy neural networks
1.2.3 State of the art of incremental learning in fuzzy neural networks
1.3 Overview of the work in this dissertation
1.4 Organization of the dissertation
2 Covariance-free incremental singular value decomposition
2.1 Introduction
2.2 Overview of singular value decomposition (SVD)
2.3 Incremental singular value decomposition
2.3.1 Computing the first eigenvector
2.3.2 Intuitive explanation of the eigenvalue algorithm
2.3.3 Solving for the other eigenvectors
2.3.4 The case of equal eigenvalues
2.4 Incremental SVD and summary of the algorithm
2.5 Experimental results and analysis
2.6 Chapter summary
3 Pruning-free incremental sequential learning fuzzy neural network
3.1 Introduction
3.2 Structure of PFISLFNN
3.3 PFISLFNN learning algorithm
3.3.1 Rule generation criterion
3.3.2 Parameter adjustment
3.3.3 The complete PFISLFNN algorithm
3.4 Experimental results and analysis
3.4.1 Experiment 1: Hermite function approximation
3.4.2 Experiment 2: Nonlinear dynamic system identification
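Contribution 1 hinges on estimating the leading eigenvector of the sample covariance matrix from sequentially arriving samples without ever forming that matrix. A minimal sketch of this idea in the CCIPCA style the abstract alludes to; the function name and the plain 1/n averaging are illustrative assumptions, not the dissertation's exact CCISVD update:

```python
import numpy as np

def first_eigenvector(samples):
    """Incrementally estimate the leading eigenvector of the sample
    covariance without ever building the d x d covariance matrix.
    A CCIPCA-style sketch; names and weighting are illustrative."""
    v = None
    for n, x in enumerate(samples, start=1):
        if v is None:
            v = x.astype(float)           # bootstrap with the first sample
            continue
        u = v / np.linalg.norm(v)         # current unit eigenvector estimate
        # v accumulates eigenvalue * eigenvector: running average of
        # (x x^T) u, computed as (x . u) x so no matrix is ever formed
        v = ((n - 1) / n) * v + (1.0 / n) * (x @ u) * x
    return v / np.linalg.norm(v)
```

Higher-order eigenvectors would then be obtained, as the abstract describes, by deflating each sample into the complementary space of the eigenvectors already found and repeating the same update there.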
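Contribution 3 ranks the randomly generated fuzzy rules by importance with SVD and then keeps the rule count that minimizes a fast leave-one-out validation error. The selection step can be sketched as follows, under stated assumptions: the SVD-energy score and the PRESS identity for the LOO error are illustrative choices standing in for the dissertation's exact procedure, and the names are not its API.

```python
import numpy as np

def press_loo_rmse(H, y):
    """Exact leave-one-out RMSE of the linear model y ~ H @ beta via the
    PRESS identity e_loo_i = e_i / (1 - hat_ii); no model is refit."""
    Hp = np.linalg.pinv(H)
    residual = y - H @ (Hp @ y)
    hat_diag = np.einsum('ij,ji->i', H, Hp)   # diagonal of H @ pinv(H)
    return np.sqrt(np.mean((residual / (1.0 - hat_diag)) ** 2))

def select_rules(H, y):
    """Rank candidate rule activations (columns of H) by their energy in
    the leading singular directions, then keep the prefix of that ranking
    with the smallest LOO error."""
    _, s, vt = np.linalg.svd(H, full_matrices=False)
    # use the singular directions carrying 95% of the energy for scoring
    r = int(np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), 0.95)) + 1
    importance = (s[:r, None] ** 2 * vt[:r] ** 2).sum(axis=0)
    order = np.argsort(importance)[::-1]
    errors = [press_loo_rmse(H[:, order[:k]], y)
              for k in range(1, H.shape[1] + 1)]
    best_k = int(np.argmin(errors)) + 1
    return order[:best_k], errors[best_k - 1]
```

Because the LOO error comes from a closed-form identity rather than refitting, scanning all candidate rule counts stays cheap, which is what makes the "fast leave-one-out validation" of the abstract practical.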
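Contribution 5 starts by splitting a face image into a low-frequency approximation and high-frequency detail subbands with the Haar wavelet. One decomposition level can be sketched as below (averages/differences form); this is a minimal sketch of the step, and a real pipeline would likely use a wavelet library with proper normalization:

```python
import numpy as np

def haar2d_level1(img):
    """One level of the 2-D Haar wavelet transform: returns the
    low-frequency approximation LL and the high-frequency detail
    subbands LH, HL, HH, each half the size of the input."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # smooth in both directions
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, LH, HL, HH
```

In the scheme the abstract describes, the detail subbands would be flattened and kept as the high-frequency part of the feature vector, while LL would go on to FLD for further dimensionality reduction.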
