Text By the Bay 2015: Richard Socher, Deep Learning for Natural Language Processing


Scale By the Bay 2019 is held on November 13-15 in sunny Oakland, California, on the shores of Lake Merritt: https://scale.bythebay.io. Join us!
-----

In this talk, I will describe deep learning algorithms that learn representations for language that are useful for solving a variety of complex language problems. I will focus on three tasks: fine-grained sentiment analysis; question answering to win trivia competitions (like Watson's Jeopardy! system, but with one neural network); and multimodal sentence-image embeddings (with a fun demo!) to find images that visualize sentences. I will also show some demos of how deep NLP can be made easy to use with MetaMind.io's software.

Richard Socher is the CTO and founder of MetaMind, a startup that seeks to improve artificial intelligence and make it widely accessible. He obtained his PhD from Stanford, working on deep learning with Chris Manning and Andrew Ng. He is interested in developing new AI models that perform well across multiple different tasks in natural language processing and computer vision. He was awarded the 2011 Yahoo! Key Scientific Challenges Award, the Distinguished Application Paper Award at ICML 2011, a Microsoft Research PhD Fellowship in 2012, a 2013 'Magic Grant' from the Brown Institute for Media Innovation, and the 2014 GigaOM Structure Award.

Comments

Mana Ammaii says:

At the start of the video you mentioned that deep learning needs large amounts of data. Can you please suggest the minimum dataset size we should use to work with deep learning?

Mana Ammaii says:

Are you feeding the dataset to the neural networks as the training data? I thought we give only raw data and it gets split into training, test, and dev datasets.
Correct me if I'm wrong? You showed labeled training datasets for the sentiment analysis, question answering, and picture-to-sentence problems in the video.
My doubt is: do we need to label the dataset by hand like we used to do for older text classifier models (Naive Bayes, SVM)?

Tahir Raza says:

Remarkable presentation, lots of knowledge.
Really worth watching!

Rajarshee Mitra says:

Brilliant! The intuitions are nice, especially the mapping of text and images into a shared vector space!

Katy Lee says:

An interesting talk!

James Bowery says:

It would be interesting to see how this performs on Matt Mahoney's Large Text Compression Benchmark. http://mattmahoney.net/dc/textrules.html
