## Mathematics for Machine Learning Full Course || Calculus for Machine Learning – Part 2

This course offers a brief introduction to the multivariate calculus required to build many common machine learning techniques. We start at the very beginning with a refresher on the “rise over run” formulation of a slope, before converting this to the formal definition of the gradient of a function. We then build up a set of tools for making calculus easier and faster. Next, we learn how to calculate vectors that point uphill on multidimensional surfaces, and even put this into action using an interactive game. We take a look at how calculus can be used to build approximations to functions, and to quantify how accurate we should expect those approximations to be. We also spend some time on where calculus comes up in the training of neural networks, before finally showing how it is applied in linear regression models. This course is intended to offer an intuitive understanding of calculus, as well as the language you need to look concepts up yourself when you get stuck. Hopefully, without going into too much detail, you’ll come away with the confidence to dive into more focused machine learning courses in the future.

Who is this class for: This class is for people who would like to learn more about machine learning techniques but don’t currently have the fundamental mathematics in place to go into much detail. This course includes some exercises that require you to work with code. If you’ve not had much experience with code before, DON’T PANIC — we will give you lots of guidance as you go.
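The “rise over run” formulation of a slope mentioned above can be sketched numerically. A minimal illustration (the function and names here are my own, not from the course):

```python
# Finite-difference approximation of the "rise over run" slope,
# compared against the exact derivative of a simple function.

def slope(f, x, h=1e-6):
    """Approximate f'(x) as rise over run: (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

f = lambda x: x ** 2      # f(x) = x^2, so the exact derivative is f'(x) = 2x
approx = slope(f, 3.0)    # should be close to 2 * 3 = 6
print(approx)
```

As h shrinks, this ratio approaches the formal definition of the gradient the course builds toward.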
Topics covered:
- Functions
- Definition of a derivative
- Differentiation examples & special cases
- Differentiating some functions
- Time-saving rules
- Product rule
- Chain rule
- Variables, constants & context
- Differentiating with respect to anything
- Jacobians – vectors of derivatives
- The Jacobian applied
- The Sandpit
- The Sandpit – Part 2
- The Hessian
- Multivariate chain rule
- Neural networks
- Simple neural networks
- Power series
- Visualising Taylor series
- Power series derivation
- Power series details
- Multivariable Taylor series
- Linearisation
- Multivariate Taylor
- Lagrange multipliers
- Constrained optimisation
- Simple linear regression
- Non-linear regression
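Several of the topics above (Jacobians, the Hessian) come down to vectors of partial derivatives. A minimal sketch of estimating a Jacobian numerically with central differences — the function and names are illustrative, not taken from the course materials:

```python
# Estimate the Jacobian (vector of partial derivatives) of a
# scalar-valued function f at a given point via central differences.

def jacobian(f, point, h=1e-5):
    """Return [df/dx_0, df/dx_1, ...] evaluated at `point`."""
    J = []
    for i in range(len(point)):
        up = list(point); up[i] += h
        dn = list(point); dn[i] -= h
        J.append((f(up) - f(dn)) / (2 * h))
    return J

f = lambda p: p[0] ** 2 * p[1]    # f(x, y) = x^2 * y
print(jacobian(f, [2.0, 3.0]))    # analytic Jacobian is (2xy, x^2) = (12, 4)
```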
******************************************************************
This course was created by Imperial College London.
If you like this video and the course explanation, feel free to take the
complete course and get a certificate from: https://www.coursera.org/specializations/mathematics-machine-learning

This video is provided here for research and educational purposes in the field of mathematics. No copyright infringement is intended. If you are the content owner and would like this video removed from YouTube, please contact me by email: ict_hanif@yahoo.com
*******************************************************************

Geek's Lesson says:

******************

Visit Website : https://datasciencedata.com/

Explore other top courses:

*************************************

Mathematics for Machine Learning: Part 1, 2, 3 : https://www.youtube.com/playlist?list=PLmAuaUS7wSOP-iTNDivR0ANKuTUhEzMe4

Practical Deep Learning all courses: https://www.youtube.com/playlist?list=PLmAuaUS7wSOM_EVfYQweNtBjEZ-5VvijC

How to pick ML model – Pro tips: https://www.youtube.com/playlist?list=PLmAuaUS7wSOM21UpbzS58_d3qs-mc_fXR

saipavankumar muppalaneni says:

Why c^2 + d^2 = 1?

fassstar says:

There's an error in the Newton-Raphson method equation (time mark 2:29:00).

It's written as f(x_i+1) = f(x_i) – f(x_i)/f'(x_i) ,

when it's supposed to be x_(i+1) = x_i – f(x_i)/f'(x_i),

i.e., we update the x value by the former x value minus the said ratio, and not the value of the function by the previous value of the function minus the ratio.
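The corrected update the comment describes can be sketched in a few lines — a minimal illustration of the Newton–Raphson method, not the course’s own code:

```python
# Newton-Raphson with the corrected update: x_{i+1} = x_i - f(x_i) / f'(x_i).
# We update x itself, not the value of f.

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x = x - step          # update the x value, as the comment points out
        if abs(step) < tol:
            break
    return x

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2)
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)
```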

fifa2k22 says:

2:51:30

pietro pozzati says:

Very good course!!!
The only suggestion I feel able to offer is this one: "Please don't write purple text on the blue background! It is almost unreadable!"
But setting aside the purple text, it is a superb course!

Arian Rahman says:

22:40 (ignore my comment , just keeping track of video progress )

Taurean Dials says:

There's a difference between mathematics and English for non-English speakers. The speaker diverges from the intended audience and indulges his own contrarian structure.

yop says:

At time 1:11:55 there is an error. The Jacobian at (-1, 1) is (0.27, -0.27) and not (-0.27, 0.27).

Neth BT says:

I dunno why I'm here but I'm severely dyslexic in numbers and complex calculations. I can't even solve a simple Algebraic equation 😁

Dan One says:

Are you writing right to left (mirrored)? Or is it a video editing trick?

Howard Daley says:

Dude, if my calculus teacher looked like you, I'd never miss a class.

golong son says:

A lot of fluff and very little substance. Oh, these British, they just talk and hype without any substance.

bo reddude says:

Has it ever occurred to you that light-colored letters on a light blue background may not be easy to read?

nick wu says:

Great example of a derivative.

Richard Andrews says:

This might be a very dumb question, but how do we build the initial curve (the training data — how does the training data form the curve) over which the approximation is worked?
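One hedged way to answer this question: in simple linear regression, the "curve" is whatever line minimises the squared error against the training data. A minimal sketch of the closed-form least-squares fit (my own illustration, not code from the video):

```python
# Fit a line y = a*x + b to training data by ordinary least squares.
# The training data itself defines the curve via the minimisation.

def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]     # data generated exactly by y = 2x + 1
a, b = fit_line(xs, ys)
print(a, b)
```

With noisy data the same formula gives the best-fitting line rather than an exact match; the course's calculus (minimising the squared error) is what justifies these formulas.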

James Dunkelfelder says:

The second presenter’s writing is so bad and confusing that it completely ruined this promising video.

Nate says:

24:33 is supposed to be minus.
