Neural Network Theory Lecture Note 1 - Approximation Theorem
Prerequisites:
- It will be helpful to know some basic concepts from functional analysis, e.g. linear functionals, normed spaces, and the Hahn-Banach Theorem; in any case, they will be explained in my note.
- The basic form of a single-layer neural network.
After a happy summer vacation, I am starting the master's program at ETH Zurich. This is my lecture note for my first course there, Neural Network Theory.
Lecturer: Prof. Helmut Elbrächter
This first note is about the capacity of a single-layer neural network to approximate an arbitrary function whose domain lies in $\mathbb{R}^d$.
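As a small illustration of this approximation idea (my own sketch, not taken from the lecture note): a single-hidden-layer ReLU network, with randomly placed hidden units and only the output layer fitted by least squares, can already approximate a smooth one-dimensional function quite well. All parameter choices below (50 hidden units, the weight range, the target $\sin(2\pi x)$) are arbitrary assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden = 50

# Grid of sample points and a smooth target function on [0, 1]
x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x)

# Random hidden layer: phi_i(x) = ReLU(w_i * (x - c_i)),
# with each "kink" c_i placed uniformly in [0, 1]
w = rng.uniform(-10.0, 10.0, size=n_hidden)
c = rng.uniform(0.0, 1.0, size=n_hidden)
relu_features = np.maximum(0.0, np.outer(x, w) - w * c)

# Add constant and linear columns, then fit only the readout weights
features = np.column_stack([np.ones_like(x), x, relu_features])
coef, *_ = np.linalg.lstsq(features, target, rcond=None)
approx = features @ coef

max_err = np.max(np.abs(approx - target))
print(f"max approximation error on the grid: {max_err:.4f}")
```

The point of the sketch is that even this crude random-feature construction drives the error down as the number of hidden units grows; the note makes this precise for general functions on $\mathbb{R}^d$.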
To view this note, please download:
NNT_note1_tongyulu.pdf
Author: lucainiaoge
Link: https://lucainiaoge.github.io.git/2021/09/22/Neural_network_theory_course_1/
Copyright notice: This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Please cite the source when reposting!
