Stanford CS224N: NLP with Deep Learning | Winter 2021 | Lecture 1 – Intro & Word Vectors

For more information about Stanford’s Artificial Intelligence professional and graduate programs visit: https://stanford.io/3w46jar

This lecture covers:
1. The course (10 min)
2. Human language and word meaning (15 min)
3. Word2vec algorithm introduction (15 min)
4. Word2vec objective function gradients (25 min; see the sketch after this list)
5. Optimization basics (5 min)
6. Looking at word vectors (10 min or less)
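
Items 3 and 4 of this outline center on the skip-gram softmax and its gradient. As a rough preview, here is a toy numpy sketch under my own assumptions, not the course's code: the vocabulary size, dimension, and names like U_out and V_in are invented for illustration.

```python
import numpy as np

# Toy skip-gram softmax: P(o | c) = exp(u_o . v_c) / sum_w exp(u_w . v_c)
rng = np.random.default_rng(0)
V, d = 10, 4                     # tiny vocabulary size and vector dimension
V_in = rng.normal(size=(V, d))   # "center" word vectors v_w
U_out = rng.normal(size=(V, d))  # "outside" (context) word vectors u_w

def softmax_probs(center):
    """Probability of every word appearing in the context of `center`."""
    scores = U_out @ V_in[center]    # u_w . v_c for all words w
    scores -= scores.max()           # shift for numerical stability
    e = np.exp(scores)
    return e / e.sum()

def grad_v_center(center, outside):
    """Gradient of -log P(outside | center) w.r.t. the center vector v_c:
    the expected outside vector minus the observed one."""
    probs = softmax_probs(center)
    return probs @ U_out - U_out[outside]   # sum_w P(w|c) u_w  -  u_o

print(softmax_probs(center=3)[7])          # P(word 7 | word 3)
print(grad_v_center(center=3, outside=7))  # gradient for one training pair
```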

Key learning: The (really surprising!) result that word meaning can be represented rather well by a large vector of real numbers.
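
To see that claim in action, here is a minimal, hedged illustration using gensim's pretrained GloVe vectors (this assumes `pip install gensim` and a one-time model download; the example is mine, not from the lecture):

```python
import gensim.downloader as api

# Load 100-dimensional GloVe vectors trained on Wikipedia + Gigaword.
glove = api.load("glove-wiki-gigaword-100")

# Nearest neighbors by cosine similarity: similar words get similar vectors.
print(glove.most_similar("language", topn=5))

# The classic analogy: vector('king') - vector('man') + vector('woman')
# lands near vector('queen').
print(glove.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```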

This course will teach:
1. The foundations of effective modern methods for deep learning applied to NLP. Basics first, then key methods used in NLP: recurrent networks, attention, transformers, etc.
2. A big picture understanding of human languages and the difficulties in understanding and producing them
3. An understanding of, and the ability to build, systems (in PyTorch) for some of the major problems in NLP: word meaning, dependency parsing, machine translation, question answering.

To learn more about this course visit: https://online.stanford.edu/courses/cs224n-natural-language-processing-deep-learning
To follow along with the course schedule and syllabus visit: http://web.stanford.edu/class/cs224n/

Professor Christopher Manning
Thomas M. Siebel Professor in Machine Learning, Professor of Linguistics and of Computer Science
Director, Stanford Artificial Intelligence Laboratory (SAIL)