
Transfer Learning in Natural Language Processing (NLP): Open questions, current trends, limits, and future directions. Slides: https://tinyurl.com/FutureOfNLP
A walk through interesting papers and research directions in late 2019/early 2020 on:
– model size and computational efficiency,
– out-of-domain generalization and model evaluation,
– fine-tuning and sample efficiency,
– common sense and inductive biases.
by Thomas Wolf (Science lead at HuggingFace)

HuggingFace on Twitter: https://twitter.com/huggingface
Thomas Wolf on Twitter: https://twitter.com/Thom_Wolf

Comments

Niklas Muennighoff says:

A V2 of this on Apr 22, 2021 would be very awesome.

Florian says:

Great video, I hope you're going to make an update this year.

Salva Carrión says:

Outstanding presentation! 👏👏👏 (I can't understand why this doesn't have more views 🤷🏻‍♂️)
P.S.: It would be awesome if HuggingFace could make one video a year highlighting current trends and findings in NLP, as well as promising research directions.

FumaNet says:

57:55 Would training a model on a children's-book-like corpus help reduce this bias? The obvious is often stated in those books, which are specifically written to teach us that sheep are in fact white, that cows moo, and so on (although they usually include pictures). Perhaps you could add some sort of hierarchy, so the "basic" notions are the ones that matter most (sheep are usually white, as learned from the children's book). Just a thought.
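
One way to try out that thought would be to continue masked-LM pre-training on such a corpus and then re-run the bias probes. A rough sketch of the idea using the Transformers Trainer, assuming a hypothetical plain-text file `childrens_books.txt` (this illustrates the commenter's suggestion, not a recipe from the talk):

```python
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# childrens_books.txt is a hypothetical corpus of simple, explicit statements
# ("sheep are white", "cows moo", ...), one passage per line.
dataset = load_dataset("text", data_files={"train": "childrens_books.txt"})
tokenized = dataset["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-childrens-books",
                           num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=tokenized,
    # Randomly masks 15% of tokens so the model learns to fill them back in.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer),
)
trainer.train()
```

Whether such continued pre-training actually shifts the model's "common sense", rather than just its surface statistics, is exactly the kind of open question the talk raises.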

FumaNet says:

This was amazing: great explanation, in-depth enough, very up to date. Seeing the guy talking (instead of just hearing his voice) was very helpful: he's quite good looking and "talks with his hands" (moves his hands while explaining to better deliver his points), making it easier for me to stay focused. More please!

Thusitha Chandrapala says:

Really nice series! Thanks a lot.

Rahul Panicker says:

The progress in NLP has been incredible since the introduction of the transformer architecture. Here's a quick article on the advances in NLP with regard to T5 – let me know what you think! https://www.engati.com/blog/decoding-text-to-text-transformers

The Last Hacker says:

We need MORE CONTENT!

Dude Alex says:

Now GPT-3 ⚠️

Bryan Chen says:

Awesome video. I hope we could have some basic videos introducing the usage of the Transformers library, for example to build a classifier or a semantic representation, for beginners like me.
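
For readers with the same question, a minimal sketch of the two use cases the comment mentions, via the library's `pipeline` API (the example sentence and printed outputs are illustrative; with no model name, the classifier falls back to the pipeline's default checkpoint):

```python
from transformers import pipeline

# Text classification: a ready-made sentiment classifier.
classifier = pipeline("sentiment-analysis")
print(classifier("Transfer learning has transformed NLP."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# Semantic representation: feature extraction returns the model's hidden
# states, usable as token/sentence embeddings.
extractor = pipeline("feature-extraction", model="bert-base-uncased")
features = extractor("Transfer learning has transformed NLP.")
print(len(features[0]), len(features[0][0]))  # num tokens x hidden size (768 for BERT base)
```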

Luca Campanella says:

Thanks! Please keep on producing this kind of material.

Niklas Muennighoff says:

Thanks for the research – you mention at 1:02:30 that BERT will think the president of the US is the 2018 president – but is this type of knowledge really encoded in the model? I thought all it has are the params, i.e. the weights, so how does it know?
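
This kind of factual recall can be probed directly: a masked language model's weights assign probabilities to fill-in-the-blank completions, which is how work like LAMA (Petroni et al., 2019, "Language Models as Knowledge Bases?") tests what "knowledge" sits in the parameters. A minimal sketch with the fill-mask pipeline (the checkpoint choice is an assumption and the outputs are illustrative):

```python
from transformers import pipeline

# A masked LM scores candidate tokens for the [MASK] slot; factual
# "knowledge" shows up as high probability on the right completion.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

for result in unmasker("The president of the United States is [MASK]."):
    print(f"{result['token_str']:>12}  {result['score']:.3f}")
# Whichever completions score highest reflect the pre-training data's
# snapshot in time -- the model has no mechanism to update this afterwards.
```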

Franck Dernoncourt says:

Great presentation, well worth watching in its entirety. Thanks for sharing!

Akshay Bhardwaj says:

This is gold

zyxwvutsrqponmlkh says:

Are you controlling the cursor with a third appendage?

Chan Liah says:

Great work!

Merve Noyan says:

After watching this video, my skin is glowing, I feel fresher, and I have a better selection of papers to read. Please keep making videos!

Claudio Bottari says:

This was awesome, I hope this will become a monthly appointment with NLP SOTA… 😉

Giovanni Bonetta says:

Thank you. This is gold.

nhx nhx says:

Awesome, thank you so much

federico betti says:

Great! I've never seen a video like this one on YouTube: up to date, precise, well explained. It opens up a lot of questions and presents many problems that I did not know about. Thanks, keep going!

Tristan Wibberley says:

Think of all those emails where someone put in a hugging emoticon to comfort someone after a sad event, but the font on the recipient's device renders it as the happiest, most delighted emoticon of them all. Use ASCII emoticons, people.

kay fresh says:

So much important detail.
