Asked by Luka Milivojevic on January 2, 2021
I just want to clear up one doubt: we use gradient descent to optimize the weights and biases of a neural network, and we use backpropagation for the step that requires calculating the partial derivatives of the loss function with respect to those parameters. Or am I misinterpreting something?
Yes, you are correct. Gradient descent (or one of its many variants) is the mechanism that moves the parameters toward a local minimum of the loss surface, taking steps scaled by a learning rate. Backpropagation is the procedure that computes the gradient of the loss function with respect to the network's weights and biases, i.e., the partial derivatives that gradient descent then uses to update the parameters.
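To make the split concrete, here is a minimal NumPy sketch of one training loop for a single-layer network with a squared-error loss (the toy data, layer size, and learning rate are illustrative assumptions, not part of the original question). The chain-rule block is the backpropagation step; the parameter update at the end is the gradient descent step.

```python
import numpy as np

# Hypothetical toy data: 8 samples, 3 features, binary targets.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
y = rng.integers(0, 2, size=(8, 1)).astype(float)

# Single-layer network: y_hat = sigmoid(x @ W + b)
W = rng.normal(size=(3, 1))
b = np.zeros((1, 1))
lr = 0.1  # learning rate used by gradient descent

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(100):
    # Forward pass: compute predictions and the loss
    z = x @ W + b
    y_hat = sigmoid(z)
    loss = np.mean((y_hat - y) ** 2)

    # Backpropagation: apply the chain rule to get dLoss/dW and dLoss/db
    dloss_dyhat = 2 * (y_hat - y) / len(x)   # derivative of mean squared error
    dyhat_dz = y_hat * (1 - y_hat)           # derivative of sigmoid
    delta = dloss_dyhat * dyhat_dz           # dLoss/dz
    dW = x.T @ delta                         # dLoss/dW
    db = delta.sum(axis=0, keepdims=True)    # dLoss/db

    # Gradient descent: use those gradients to update the parameters
    W -= lr * dW
    b -= lr * db
```

In an automatic-differentiation framework the same division of labor holds: the backward pass (e.g., a `backward()` call) performs backpropagation to fill in the gradients, and the optimizer's update step performs gradient descent with them.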
Correct answer by Oliver Foster on January 2, 2021