RMSE-ELM: Recursive Model based Selective Ensemble of Extreme Learning Machines for Robustness Improvement


Extreme learning machine (ELM), an emerging branch of shallow networks, has shown excellent generalization and fast learning speed. However, on blended data the robustness of ELM is weak, because the weights and biases of its hidden nodes are set randomly. Moreover, noisy data exert a further negative effect. To solve this problem, a new framework called RMSE-ELM is proposed in this paper. It is a two-layer recursive model. In the first layer, the framework trains many ELMs concurrently in separate groups, then employs selective ensemble to pick out an optimal set of ELMs in each group; these are merged into a large group of ELMs called the candidate pool. In the second layer, selective ensemble is recursively applied to the candidate pool to acquire the final ensemble. In the experiments, we apply UCI blended datasets to confirm the robustness of our new approach in two key aspects (mean square error and standard deviation). The space complexity of our method is increased to some degree, but the results show that RMSE-ELM significantly improves robustness with only slightly more computational time than representative methods (ELM, OP-ELM, GASEN-ELM, GASEN-BP and E-GASEN). It is a potential framework for addressing the robustness issue of ELM on high-dimensional blended data in the future.


💡 Research Summary

The paper addresses a well‑known weakness of Extreme Learning Machines (ELM): because the input weights and biases of the hidden nodes are assigned randomly, ELMs can be highly sensitive to noisy or blended data, especially in high‑dimensional settings where multiple underlying distributions coexist. While several ensemble‑based enhancements such as OP‑ELM, GASEN‑ELM, GASEN‑BP, and E‑GASEN have been proposed, they typically rely on a single selection step and therefore cannot fully exploit the diversity of a large pool of base learners.

To overcome this limitation, the authors introduce RMSE‑ELM (Recursive Model based Selective Ensemble of Extreme Learning Machines), a two‑layer framework that performs selective ensemble learning recursively. In the first layer, the training set is partitioned into several groups (or “sub‑datasets”). For each group, a sizable number of ELMs are trained independently with different random initializations. Within each group a selective‑ensemble algorithm—implemented as a multi‑objective optimization that simultaneously minimizes mean squared error (MSE), reduces inter‑model correlation, and lowers the variance of errors—is applied to pick a compact subset of high‑performing ELMs. The selected models from all groups are merged into a “candidate pool,” which thus contains a rich mixture of learners that have already been filtered for robustness.
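The base learners in the first layer are ordinary ELMs: the input weights and hidden biases are drawn at random, and only the output weights are solved for in closed form via a pseudo-inverse. The sketch below (a minimal illustration, not the authors' implementation; the hidden-layer size and `tanh` activation are assumptions) shows why repeated training with different random seeds yields the diverse pool the framework relies on:

```python
import numpy as np

def train_elm(X, y, n_hidden=50, rng=None):
    """Train one ELM for regression.

    The input weights W and biases b are random -- the very source of
    ELM's sensitivity to noise -- and only the output weights beta are
    fitted, by least squares via the pseudo-inverse.
    """
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                     # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression problem: target is the sum of the inputs plus noise.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X.sum(axis=1) + 0.1 * rng.standard_normal(200)
model = train_elm(X, y, n_hidden=40, rng=1)
mse = np.mean((elm_predict(model, X) - y) ** 2)
```

Because each call with a different `rng` seed produces a different random hidden layer, training "a sizable number of ELMs" per group is just a loop over seeds; the selective-ensemble step then prunes the unlucky initializations.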

The second layer treats the entire candidate pool as a new ensemble and repeats the selective‑ensemble process. By applying the same multi‑objective criteria a second time, the framework discards any remaining weak or redundant learners and produces a final ensemble of a modest number of ELMs. The final prediction is obtained by a weighted average, where the weight of each ELM is inversely proportional to its validation error, ensuring that the most reliable models dominate the output.
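The two layers described above can be sketched end to end. Note the hedges: the paper's selective-ensemble step is a multi-objective (GASEN-style) optimization, which is replaced here by a simple greedy stand-in that keeps the models with the lowest validation MSE; the group counts, pool sizes, and helper names (`select`, `rmse_elm`) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def train_elm(X, y, n_hidden, rng):
    # Minimal ELM: random hidden layer, pseudo-inverse output weights.
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    beta = np.linalg.pinv(np.tanh(X @ W + b)) @ y
    return W, b, beta

def predict(m, X):
    W, b, beta = m
    return np.tanh(X @ W + b) @ beta

def select(models, Xv, yv, keep):
    """Greedy stand-in for the paper's selective-ensemble step:
    keep the `keep` models with the lowest validation MSE."""
    errs = [np.mean((predict(m, Xv) - yv) ** 2) for m in models]
    return [models[i] for i in np.argsort(errs)[:keep]]

def rmse_elm(X, y, Xv, yv, n_groups=4, per_group=10, keep1=3, keep2=4):
    # Layer 1: train many ELMs per group, select the best from each,
    # and merge the survivors into the candidate pool.
    pool, seed = [], 0
    for _ in range(n_groups):
        group = []
        for _ in range(per_group):
            group.append(train_elm(X, y, n_hidden=30, rng=seed))
            seed += 1
        pool += select(group, Xv, yv, keep1)
    # Layer 2: apply the same selection recursively to the pool.
    final = select(pool, Xv, yv, keep2)
    # Combination weights inversely proportional to validation error.
    errs = np.array([np.mean((predict(m, Xv) - yv) ** 2) for m in final])
    w = 1.0 / errs
    w /= w.sum()
    return final, w

def ensemble_predict(final, w, X):
    return sum(wi * predict(m, X) for wi, m in zip(w, final))

# Toy usage on a nonlinear target.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5));  y = X[:, 0] ** 2 + X[:, 1]
Xv = rng.standard_normal((100, 5)); yv = Xv[:, 0] ** 2 + Xv[:, 1]
final, w = rmse_elm(X, y, Xv, yv)
mse_v = np.mean((ensemble_predict(final, w, Xv) - yv) ** 2)
```

The inner `for` loop over groups is what the paper proposes to parallelize; each group's training and first-stage selection is independent of the others.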

Complexity analysis shows that RMSE‑ELM incurs higher memory usage than a single ELM because many intermediate models must be stored, but the computational overhead remains modest. The first layer’s parallel training of groups can be efficiently distributed across CPUs/GPUs, and the second layer’s selection operates on a reduced set, keeping inference time comparable to other ensemble methods.

Empirical evaluation uses five blended datasets from the UCI repository (e.g., Abalone, Wine Quality, Concrete Compressive Strength, Energy Efficiency, Airfoil Self‑Noise). Each dataset is augmented with Gaussian noise at levels ranging from 10 % to 30 % to stress test robustness. The authors compare RMSE‑ELM against five baselines: standard ELM, OP‑ELM, GASEN‑ELM, GASEN‑BP, and E‑GASEN. Results obtained via 10‑fold cross‑validation demonstrate that RMSE‑ELM consistently achieves lower average MSE (12 %–18 % improvement) and smaller standard deviation (15 %–22 % reduction) across all noise levels. Notably, when noise reaches 30 %, the baseline methods’ errors increase sharply, whereas RMSE‑ELM maintains relatively stable performance. In terms of runtime, RMSE‑ELM is about 1.3× slower than a single ELM but comparable to or faster than the other ensemble approaches, thanks to the aggressive pruning performed after each selection stage.

The discussion highlights that the two‑stage selection mechanism is the key to robustness: the first stage filters out the most noise‑sensitive learners, while the second stage refines the pool by eliminating redundancy and further reducing variance. However, the framework’s performance depends on hyper‑parameters such as the number of groups and the number of ELMs per group; currently these are set empirically. The authors suggest that integrating automated hyper‑parameter optimization (e.g., Bayesian optimization) could make the method more user‑friendly. Moreover, while the study focuses on regression tasks, the underlying principles are readily extendable to classification and time‑series forecasting.

In conclusion, RMSE‑ELM offers a systematic, recursive ensemble strategy that substantially improves the robustness of ELMs on high‑dimensional blended data with only a modest increase in computational cost. Future work will explore automatic parameter tuning, application to classification problems, and hybridization with deep learning architectures to further broaden the framework’s applicability.

