Now showing 1 - 8 of 8

Reservoir Computing Universality With Stochastic Inputs

2019, Gonon, Lukas; Ortega, Juan-Pablo

The universal approximation properties, with respect to Lp-type criteria, of three important families of reservoir computers with stochastic discrete-time semi-infinite inputs are shown. First, it is proved that linear reservoir systems with either polynomial or neural network readout maps are universal. More importantly, the same property is proved for two families with linear readouts, namely trigonometric state-affine systems and echo state networks, the latter being the most widely used reservoir systems in applications. Linearity in the readouts is a key feature in supervised machine learning applications: it guarantees that these systems can be used in high-dimensional situations and in the presence of large datasets. The Lp criteria used in this paper allow the formulation of universality results that do not necessarily impose almost sure uniform boundedness on the inputs or the fading memory property on the filter that needs to be approximated.
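The echo state network architecture with a trained linear readout that the abstract refers to can be sketched as follows. This is a minimal illustration with arbitrary, hypothetical dimensions and a toy target functional, not the construction from the paper: only the readout weights W are trained, while the recurrent and input weights stay random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 1-d input, N-dimensional reservoir, T time steps.
N, T = 50, 300
A = rng.normal(size=(N, N))
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # scale for the echo state property
c = rng.normal(size=N)                           # random input weights (untrained)

z = rng.normal(size=T)                           # stochastic input sequence
y = np.roll(z, 1) * z                            # toy target: a functional of the input

# Run the reservoir: x_t = tanh(A x_{t-1} + c z_t)
X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(A @ x + c * z[t])
    X[t] = x

# Train only the linear readout by ridge regression on the reservoir states.
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)
y_hat = X @ W
```

The linearity of the readout is what keeps training a cheap least-squares problem even when the reservoir state is high-dimensional.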


The Universality Problem in Dynamic Machine Learning with Applications to Realized Covolatilities Forecasting

2019-06-07, Gonon, Lukas; Grigoryeva, Lyudmila; Ortega, Juan-Pablo


Discrete-time signatures and randomness in reservoir computing

Cuchiero, Christa; Gonon, Lukas; Grigoryeva, Lyudmila; Ortega Lahuerta, Juan-Pablo; Teichmann, Josef

A new explanation of the geometric nature of the reservoir computing phenomenon is presented. Reservoir computing is understood in the literature as the possibility of approximating input/output systems with randomly chosen recurrent neural systems and a trained linear readout layer. Light is shed on this phenomenon by constructing what are called strongly universal reservoir systems, obtained as random projections of a family of state-space systems that generate Volterra series expansions. This procedure yields a state-affine reservoir system with randomly generated coefficients whose dimension is logarithmically reduced with respect to that of the original system. This reservoir system is able to approximate any element in the class of fading memory filters simply by training a different linear readout for each filter. Explicit expressions for the probability distributions needed in the generation of the projected reservoir system are stated, and bounds for the committed approximation error are provided.
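The logarithmic dimension reduction via random projections mentioned above is in the spirit of Johnson-Lindenstrauss embeddings. The following sketch, with hypothetical dimensions and Gaussian states standing in for reservoir states, only illustrates that flavor of argument: a random Gaussian projection to a much lower dimension approximately preserves distances between states. It is not the paper's construction of the projected state-affine system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: D high-dimensional states of dimension n, projected to dimension k.
n, D, k = 2000, 100, 200
states = rng.normal(size=(D, n))

# Random Gaussian projection matrix, scaled so norms are preserved in expectation.
P = rng.normal(size=(k, n)) / np.sqrt(k)
projected = states @ P.T

# Pairwise distances are approximately preserved after projection.
d_orig = np.linalg.norm(states[0] - states[1])
d_proj = np.linalg.norm(projected[0] - projected[1])
```

The point of such concentration results is that k can grow only logarithmically in the number of quantities whose geometry must be preserved.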


Deep Hedging

2019-07-11, Buehler, Hans; Gonon, Lukas; Teichmann, Josef; Wood, Ben


Approximation bounds for random neural networks and reservoir systems

2023-02, Gonon, Lukas; Grigoryeva, Lyudmila; Ortega Lahuerta, Juan-Pablo


Reservoir kernels and Volterra series

2022-12-30, Gonon, Lukas; Grigoryeva, Lyudmila; Ortega Lahuerta, Juan-Pablo


Risk bounds for reservoir computing

2019-09-19, Gonon, Lukas; Grigoryeva, Lyudmila; Ortega, Juan-Pablo


Expressive Power of Randomized Signature

2021, Cuchiero, Christa; Gonon, Lukas; Grigoryeva, Lyudmila; Ortega Lahuerta, Juan-Pablo; Teichmann, Josef

We consider the question of whether the time evolution of controlled differential equations on general state spaces can be arbitrarily well approximated by (regularized) regressions on features generated by randomly chosen dynamical systems of moderately high dimension. This is motivated, on the one hand, by paradigms of reservoir computing and, on the other, by ideas from rough path theory and compressed sensing. Appropriately interpreted, this yields provable approximation and generalization results for generic dynamical systems, which are usually approximated by recurrent or LSTM networks, via regressions on the states of random, otherwise untrained dynamical systems. The results have important implications for transfer learning and for the energy efficiency of training.
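The pipeline described above, an untrained random dynamical system driven by a control path, followed by a regularized linear regression on its states, can be sketched as follows. All dimensions, the Euler discretization, the tanh vector fields, and the toy target functional are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: a 1-d control path drives a random k-dimensional system.
k, T = 64, 200
dz = rng.normal(size=T) * 0.1                 # increments of the control path
A0 = rng.normal(size=(k, k)) / np.sqrt(k)     # random drift matrix (untrained)
A1 = rng.normal(size=(k, k)) / np.sqrt(k)     # random matrix coupling the control
b0, b1 = rng.normal(size=k), rng.normal(size=k)

# Evolve the randomized feature state with a simple Euler scheme:
# x_{t+1} = x_t + tanh(A0 x_t + b0) dt + tanh(A1 x_t + b1) dz_t
X = np.zeros((T, k))
x = np.zeros(k)
dt = 1.0 / T
for t in range(T):
    x = x + np.tanh(A0 @ x + b0) * dt + np.tanh(A1 @ x + b1) * dz[t]
    X[t] = x

# Regularized (ridge) regression of a toy path functional on the random states.
y = np.cumsum(dz)                             # running integral of the control
W = np.linalg.solve(X.T @ X + 1e-4 * np.eye(k), X.T @ y)
mse = np.mean((X @ W - y) ** 2)
```

The dynamics are fixed once at random; only the linear map W is fitted, which is what makes training cheap compared to a fully trained recurrent network.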