International Journal of Innovation and Applied Studies
ISSN: 2028-9324     CODEN: IJIABO     OCLC Number: 828807274     ZDB-ID: 2703985-7
 
 

Convergence of Offline Gradient Method with Smoothing L1/2 Regularization for Two-layer of Neural Network


Volume 9, Issue 3, November 2014, Pages 1056–1063


Khidir Shaib Mohamed1 and Yousif Shoaib Mohammed2

1 School of Mathematical Sciences, Dalian University of Technology, Dalian 116024, PR China
2 Department of Physics, College of Science & Art, Qassim University, Oklat Al-Skoor, P.O. Box 111, Saudi Arabia

Original language: English

Received 8 October 2014

Copyright © 2014 ISSR Journals. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract


In this paper, we study the convergence of an offline gradient method with a smoothing L_(1/2) regularization penalty for training multi-output feedforward neural networks. The usual L_(1/2) regularization term involves an absolute value and is not differentiable at the origin, which complicates the convergence analysis. The key point of this paper is to modify the usual L_(1/2) regularization term by smoothing it at the origin. Under this modification, the monotonicity of the error function and the boundedness of the weights during training are established, and the convergence results are proved. These results are meaningful for both theoretical research on and applications of multi-output feedforward neural networks.
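The abstract does not give the explicit smoothing function used by the authors. As an illustrative sketch only, one common choice in the smoothing-L_(1/2) literature replaces |x| near the origin with a quartic polynomial that matches |x| and its first derivative at |x| = a; the function names and the particular polynomial below are assumptions, not taken from this paper:

```python
import math

def smooth_abs(x, a):
    """Piecewise smoothing of |x| near the origin (illustrative choice).

    For |x| >= a this equals |x| exactly; for |x| < a it is a quartic
    polynomial picked so that the value and first derivative are
    continuous at x = +/- a, removing the kink of |x| at zero.
    """
    if abs(x) >= a:
        return abs(x)
    return -x**4 / (8 * a**3) + 3 * x**2 / (4 * a) + 3 * a / 8

def smoothed_l12_penalty(weights, a):
    """Smoothed L_(1/2) regularization term: sum of smooth_abs(w)^(1/2).

    Because smooth_abs(0, a) = 3a/8 > 0, the square root is
    differentiable everywhere, unlike the raw |w|^(1/2) penalty.
    """
    return sum(math.sqrt(smooth_abs(w, a)) for w in weights)
```

Outside the smoothing interval the penalty coincides with the usual |w|^(1/2) term, so the sparsity-promoting behavior of L_(1/2) regularization is preserved while the gradient stays well defined at zero weights.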

Author Keywords: feedforward neural network, offline gradient method, smoothing L_(1/2) regularization, boundedness, convergence.


How to Cite this Article


Khidir Shaib Mohamed and Yousif Shoaib Mohammed, “Convergence of Offline Gradient Method with Smoothing L1/2 Regularization for Two-layer of Neural Network,” International Journal of Innovation and Applied Studies, vol. 9, no. 3, pp. 1056–1063, November 2014.