ParalESN: Enabling parallel information processing in Reservoir Computing
January 29, 2026
Authors: Matteo Pinna, Giacomo Lagomarsini, Andrea Ceni, Claudio Gallicchio
cs.AI
Abstract
Reservoir Computing (RC) has established itself as an efficient paradigm for temporal processing. However, its scalability remains severely constrained by (i) the necessity of processing temporal data sequentially and (ii) the prohibitive memory footprint of high-dimensional reservoirs. In this work, we revisit RC through the lens of structured operators and state space modeling to address these limitations, introducing the Parallel Echo State Network (ParalESN). ParalESN constructs efficient high-dimensional reservoirs from diagonal linear recurrences in the complex space, enabling parallel processing of temporal data. We provide a theoretical analysis demonstrating that ParalESN preserves the Echo State Property and the universality guarantees of traditional Echo State Networks, while admitting an equivalent representation of arbitrary linear reservoirs in complex diagonal form. Empirically, ParalESN matches the predictive accuracy of traditional RC on time series benchmarks while delivering substantial computational savings. On 1-D pixel-level classification tasks, ParalESN achieves accuracy competitive with fully trainable neural networks while reducing computational costs and energy consumption by orders of magnitude. Overall, ParalESN offers a promising, scalable, and principled pathway for integrating RC within the deep learning landscape.
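To make the central idea concrete, the sketch below illustrates (under assumptions, not the paper's actual implementation) why a complex diagonal recurrence removes the sequential bottleneck: the state update h_t = λ ⊙ h_{t-1} + (Bu)_t has the closed form h_t = Σ_{k≤t} λ^(t-k) (Bu)_k, which can be evaluated for all time steps at once instead of step by step. All names (`sequential_states`, `parallel_states`, the eigenvalue vector `lam`) are illustrative; a production version would use a parallel prefix scan or FFT-based convolution for numerical stability rather than this naive power-and-cumsum form.

```python
import numpy as np

def sequential_states(lam, Bu):
    """Classic step-by-step reservoir update: h_t = lam * h_{t-1} + Bu_t.

    lam : (N,) complex diagonal recurrence eigenvalues
    Bu  : (T, N) input sequence already projected into the state space
    """
    T, N = Bu.shape
    h = np.zeros(N, dtype=complex)
    states = np.empty((T, N), dtype=complex)
    for t in range(T):          # inherently sequential loop
        h = lam * h + Bu[t]
        states[t] = h
    return states

def parallel_states(lam, Bu):
    """Closed-form evaluation of the same recurrence, no time loop.

    Uses h_t = lam^t * sum_{k<=t} lam^{-k} Bu_k (valid for lam != 0).
    Naive and numerically delicate for long sequences / small |lam|;
    shown only to demonstrate that the diagonal recurrence is
    parallelizable across time steps.
    """
    T, _ = Bu.shape
    powers = lam[None, :] ** np.arange(T)[:, None]   # (T, N): lam^t
    prefix = np.cumsum(Bu / powers, axis=0)          # sum_{k<=t} lam^{-k} Bu_k
    return powers * prefix

# Demo: both evaluations agree on a random stable diagonal reservoir.
rng = np.random.default_rng(0)
N, T = 4, 50
lam = 0.9 * np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # |lam| < 1: stable
Bu = rng.standard_normal((T, N)) + 1j * rng.standard_normal((T, N))
assert np.allclose(sequential_states(lam, Bu), parallel_states(lam, Bu))
```

The design point is that diagonalization decouples the N state channels, so the only remaining dependency is along time, and that dependency has a closed form amenable to scans or convolutions on parallel hardware.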