When reading up on artificial neural networks, you may have come across the term “bias.” Sometimes it’s referred to simply as bias; other times you may see it referenced as bias nodes, bias neurons, or bias units within a neural network. We’re going to break this bias down and see what it’s all about.

We’ll start with the most obvious question: what is bias in an artificial neural network? We’ll then see how bias is implemented within a network. Finally, to drive the point home, we’ll explore a simple example illustrating the impact that bias has when introduced to a neural network.
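As a preview of the implementation part, here is a minimal sketch of a single neuron with a bias term. This is an illustrative example, not code from the video: the bias `b` is added to the weighted sum before the activation function (ReLU here) is applied, which shifts the point at which the neuron starts to fire.

```python
import numpy as np

def neuron_output(x, w, b):
    """Compute a single neuron's activation: relu(w . x + b)."""
    z = np.dot(w, x) + b          # weighted sum, shifted by the bias
    return max(0.0, z)            # ReLU activation

x = np.array([1.0, 2.0])          # inputs to the neuron
w = np.array([0.5, -0.5])         # weights

# The weighted sum w . x is -0.5, so the neuron only fires if the
# bias pushes the sum above zero.
print(neuron_output(x, w, 0.0))   # 0.0 -> not firing
print(neuron_output(x, w, 1.0))   # 0.5 -> firing
```

With `b = 0.0` the neuron stays silent on this input; raising the bias to `1.0` is enough to activate it, which is exactly the effect the example in the video explores.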

👉 Check out the blog post and other resources for this video:


🎵 deeplizard uses music by Kevin MacLeod

❤️ Please use the knowledge gained from deeplizard content for good, not evil.



deeplizard says:

Machine Learning / Deep Learning Tutorials for Programmers playlist:

Keras Machine Learning / Deep Learning Tutorial playlist:

Data Science for Programming Beginners playlist:

Tu Yuyang says:

I feel dizzy when I watch this video

Szabolcs Ambrus says:

"With an activation output of 0, this neuron is considered to not be activated. Or not firing."

Does a not activated/not firing neuron still pass 0 as an output to the next layer?
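For what it’s worth, with a ReLU-style activation the answer is yes: the 0 is still passed as an input to the next layer; it simply contributes nothing to that layer’s weighted sums. A small illustrative sketch (the numbers here are made up):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Layer 1: the second neuron is "not firing" (activation 0).
layer1_out = relu(np.array([0.8, -0.3]))   # -> [0.8, 0.0]

# Layer 2 still receives that 0 as an input; it just adds
# nothing to the weighted sum.
w2 = np.array([0.5, 0.9])
z2 = np.dot(w2, layer1_out)                # 0.5*0.8 + 0.9*0.0
print(layer1_out, z2)
```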

Arindam Paul says:

Great explanation !!

Kranken heit says:

Late one here :).
Quick question
You mentioned that the bias will be readjusted at every backprop step along with the weights, except that we calculate the gradients w.r.t. the weights and biases individually.
Now the question: wouldn't it make sense to add the bias as a weight whose input neuron is fixed at one? With the exception that the weights of the previous layer are not connected to this bias neuron.
I hope I was clear.
Thanks a lot 🙂
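The trick this comment describes is a standard one: append a constant input of 1 to each layer and treat the bias as just another weight on that input. The two formulations produce identical outputs, so gradients w.r.t. that extra weight are exactly the bias gradients. A minimal sketch with illustrative values:

```python
import numpy as np

x = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])
b = 1.0

# Standard formulation: weighted sum plus a separate bias term.
z_separate = np.dot(w, x) + b

# Equivalent formulation: augment the input with a constant 1
# whose weight is the bias. No previous-layer weights feed into
# this constant "bias neuron".
x_aug = np.append(x, 1.0)         # [1.0, 2.0, 1.0]
w_aug = np.append(w, b)           # [0.5, -0.5, 1.0]
z_augmented = np.dot(w_aug, x_aug)

print(z_separate, z_augmented)    # the two values are equal
```

Many textbook treatments use exactly this augmentation so that backprop only ever has to handle weight gradients.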


Is bias the same as threshold? If not, what is the difference between them? Since bias determines whether a neuron is activated or not, it seems to be the same as a threshold.
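The two are closely related: with a step-style firing condition, a bias b is equivalent to a firing threshold of -b, since w·x + b > 0 is the same condition as w·x > -b. A quick check with illustrative values:

```python
import numpy as np

x = np.array([1.0, 2.0])
w = np.array([0.5, -0.5])
b = 0.75

# Bias form: fire when w . x + b > 0.
fires_bias_form = np.dot(w, x) + b > 0

# Threshold form: fire when w . x exceeds the threshold -b.
threshold = -b
fires_threshold_form = np.dot(w, x) > threshold

print(fires_bias_form, fires_threshold_form)   # the two always agree
```

The practical difference is just bookkeeping: treating it as a bias makes it one more additive parameter the network can learn by gradient descent.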

Ad Sd says:

Lol I thought my KSP was running in the background. Do you play it?

KingDav3 says:

This video seems to be really biased…

fredericfc says:

This is hurting my head 🤕

madison forsyth says:

the music in the background is so. distracting. omg. i have to turn captions on and mute it. why am i even on youtube???

Nirbhay Pandya says:

1. Since SGD also updates the biases while training, are they updated using backpropagation just like the weights?

2. Since bias changes affect the activation output, which in turn also depends on the weights, do bias updates conflict with weight updates?

Thank you lizzy!

Nirbhay Pandya says:

How can she be always awesome at the explanations? Thank you so much 🙂

Sobhan Haggı says:

But what actually are weights? Do we pick them randomly or there is a formula for that?

Saanvi Sharma says:

At 2:54, a bias (b) should be added to each neuron, but here they've added a single 'b'. Did they forget to insert a bracket?

Ziqiang Cheng says:

Just a suggestion: the recording volume could be turned up a notch; the ads are really loud compared to your voice haha

Joseph M'BIMBI-BENE says:

Some graphical example, like a line separating two "data clouds", and how not having a bias makes some configurations of the two clouds not separable, would have made the bias more understandable and the video clearer.

Dour Wolf Games says:

How do I adjust the bias? I'm pretty sure it's during backpropagation, after retrieving the negative cost gradient, but I don't know what the adjustments to the bias are based on. Does it have something to do with the changes to the weights? I'm still very much learning and I may be incorrect. =)

Yan Haeffner says:

I can't watch this video without thinking about building a rocket… Soundtrack related.

Helena Pereira says:

very helpful! Thank you

Daniel Schaefer says:

Great video! Just one comment though: the moving background is pretty distracting!

D S says:

All I could say is THANK YOU!

Mira G says:

Bias does not give the opportunity to measure the significance of the data and see which features are considered important, since the bias may help fire the next neuron without being subject to the weight values.

Lankanatha Ekanayake says:

How about adjusting the weights to activate the output neuron instead of adding an additional bias parameter?

Sanwal Yousaf says:

brilliant tutorial
