THE FUTURE IS HERE

Why Neural Networks can learn (almost) anything

A video about neural networks, how they work, and why they’re useful.

My Twitter: https://twitter.com/max_romana

SOURCES
Neural network playground: https://playground.tensorflow.org/
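
If you want to poke at this outside the playground, here's a minimal sketch (plain Python, all names my own) of what each playground node computes: a weighted sum of its inputs pushed through an activation function.

```python
import math

def neuron(inputs, weights, bias, activation):
    # A neuron is just: activation(w1*x1 + w2*x2 + ... + bias)
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

def relu(z):
    return max(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Same neuron, two different activation functions
print(neuron([0.5, -1.0], [2.0, 1.0], 0.25, relu))     # 0.25
print(neuron([0.5, -1.0], [2.0, 1.0], 0.25, sigmoid))  # ~0.562
```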

Universal Function Approximation:
Proof: https://cognitivemedium.com/magic_paper/assets/Hornik.pdf
Covering ReLUs: https://proceedings.neurips.cc/paper/2017/hash/32cbf687880eb1674a07bf717761dd3a-Abstract.html
Covering discontinuous functions: https://arxiv.org/pdf/2012.03016.pdf
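
A hands-on way to see what these papers formalize: a sum of ReLUs is exactly a piecewise-linear function, so with enough units you can bend it to match any continuous target. A toy sketch of that construction (my own, with hand-set rather than learned weights, approximating sin):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

f = np.sin  # target function to approximate

# Knots where the piecewise-linear approximation is allowed to bend
knots = np.linspace(0, 2 * np.pi, 20)
values = f(knots)
slopes = np.diff(values) / np.diff(knots)

def approx(x):
    # Start with the first linear segment...
    y = values[0] + slopes[0] * (x - knots[0])
    # ...then each ReLU unit contributes one change of slope at its knot.
    for k, ds in zip(knots[1:-1], np.diff(slopes)):
        y = y + ds * relu(x - k)
    return y

x = np.linspace(0, 2 * np.pi, 1000)
print("max error:", np.max(np.abs(approx(x) - f(x))))  # small; shrinks with more knots
```

This is exactly a one-hidden-layer ReLU network: the knots are the hidden units' biases, and the slope changes are the output weights.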

Turing Completeness:
Networks of infinite size are Turing complete: Neural Computability I & II (unfortunately behind a paywall, but cited in the following paper)
RNNs are Turing complete: https://binds.cs.umass.edu/papers/1992_Siegelmann_COLT.pdf
Transformers are Turing complete: https://arxiv.org/abs/2103.05247
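
These proofs hinge on the network carrying state from step to step. A toy illustration of that one ingredient (hand-wired, and emphatically not a Turing-completeness proof): a two-neuron recurrent net that tracks the running parity of a bit stream.

```python
def step(z):
    # Threshold activation: fires iff input is positive
    return 1 if z > 0 else 0

def parity_rnn(bits):
    h = 0  # hidden state: parity of the bits seen so far
    for x in bits:
        # XOR(h, x) built from two threshold neurons:
        # "at least one is on" minus "both are on"
        a = step(h + x - 0.5)   # h OR x
        b = step(h + x - 1.5)   # h AND x
        h = a - b               # h XOR x
    return h

print(parity_rnn([1, 0, 1, 1]))  # 1 (three ones -> odd parity)
print(parity_rnn([1, 1, 0, 0]))  # 0 (two ones -> even parity)
```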

More on backpropagation:
https://www.youtube.com/watch?v=Ilg3gGewQ5U
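
If you'd rather read the algorithm than watch it, here's backpropagation at its smallest: a self-contained numpy sketch (mine, not from the video) training a one-hidden-layer network on XOR, with the chain rule written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, randomly initialized
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: chain rule, layer by layer.
    # d(squared error)/d(out), folded with sigmoid'(z) = out * (1 - out)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # typically close to [0, 1, 1, 0] after training
```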

More on the mandelbrot set:
https://www.youtube.com/watch?v=NGMRB4O922I
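
The whole set comes from one short iteration rule (z → z² + c), which is what makes it such a fun target function for a network to learn. A minimal sketch:

```python
# Membership test: c is in the Mandelbrot set if z -> z^2 + c stays bounded
def in_mandelbrot(c, max_iter=50):
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # provably escapes to infinity once |z| > 2
            return False
    return True

# Coarse ASCII rendering of the set
for row in range(21):
    y = 1.2 - row * 0.12
    line = ""
    for col in range(64):
        x = -2.1 + col * 0.05
        line += "#" if in_mandelbrot(complex(x, y)) else " "
    print(line)
```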

Additional Sources:
Neat explanation of the universal function approximation proof: https://www.youtube.com/watch?v=Ijqkc7OLenI
Where I got the hard-coded parameters: https://towardsdatascience.com/can-neural-networks-really-learn-any-function-65e106617fc6

Reviewers:
Andrew Carr https://twitter.com/andrew_n_carr
Connor Christopherson

TIMESTAMPS
(0:00) Intro
(0:27) Functions
(2:31) Neurons
(4:25) Activation Functions
(6:36) NNs can learn anything
(8:31) NNs can’t learn anything
(9:35) …but they can learn a lot

MUSIC
https://www.youtube.com/watch?v=SmkUY_B9fGg