[Submitted on 25 Nov 2024]
Effective Non-Random Extreme Learning Machine, by Daniela De Canditiis and Fabiano Veglianti
Abstract: The Extreme Learning Machine (ELM) is an increasingly popular statistical technique widely applied to regression problems. In essence, ELMs are single-hidden-layer neural networks in which the hidden layer weights are randomly sampled from a specified distribution, while the output layer weights are learned from the data. Two key challenges with this approach are the architecture design, specifically determining the optimal number of neurons in the hidden layer, and the method's sensitivity to the random initialization of the hidden layer weights.
This paper introduces a new and enhanced learning algorithm for regression tasks, the Effective Non-Random ELM (ENR-ELM), which simplifies the architecture design and eliminates the need for random hidden layer weight selection. The proposed method incorporates concepts from signal processing, such as basis functions and projections, into the ELM framework. We introduce two versions of the ENR-ELM: the approximated ENR-ELM and the incremental ENR-ELM. Experimental results on both synthetic and real datasets demonstrate that our method overcomes the problems of traditional ELM while maintaining comparable predictive performance.
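The following Python snippet is a minimal sketch of the standard ELM baseline the abstract describes, not the proposed ENR-ELM: hidden layer weights are drawn at random, output weights are fit in closed form by least squares. The toy data, the tanh activation, and the choice of 50 hidden neurons are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (assumed for illustration only)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

n_hidden = 50  # number of hidden neurons: the architecture choice the paper discusses

# Randomly sampled hidden layer weights and biases
# (the source of ELM's sensitivity to initialization)
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)

# Hidden layer activations
H = np.tanh(X @ W + b)

# Output layer weights solved by least squares (Moore-Penrose pseudoinverse)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)

# Predictions on new inputs reuse the same random W and b
X_new = np.linspace(-3, 3, 100).reshape(-1, 1)
y_hat = np.tanh(X_new @ W + b) @ beta
```

Rerunning this sketch with a different random seed or a different n_hidden changes the fit, which is exactly the sensitivity and architecture-design issue the ENR-ELM is designed to remove.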
Submission history
From: Daniela De Canditiis
[v1] Mon, 25 Nov 2024 09:42:42 UTC (2,561 KB)