# Backward propagation of errors (Back propagation)

Last edited: 2024-01-20

This is the specific implementation of gradient descent applied to neural networks. There are two stages to this calculation:

1. A **forward pass**: the input is propagated through the network and the activations of every layer are computed and stored.
2. A **backward pass**: the error at the output is propagated back through the network, using the chain rule to compute the gradient of the loss with respect to every weight, which is then used for the gradient descent update.
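
A minimal sketch of both stages on a tiny one-hidden-layer network, assuming a sigmoid activation and a squared-error loss (the layer sizes, learning rate, and variable names here are illustrative, not from the original note):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# Tiny network: 2 inputs -> 3 hidden units -> 1 output
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

x = np.array([0.5, -1.0])   # one training example
y = np.array([1.0])         # its target

# Stage 1: forward pass, storing the intermediate activations
z1 = W1 @ x + b1
a1 = sigmoid(z1)
z2 = W2 @ a1 + b2
a2 = sigmoid(z2)
loss = 0.5 * np.sum((a2 - y) ** 2)

# Stage 2: backward pass, applying the chain rule layer by layer
delta2 = (a2 - y) * a2 * (1 - a2)         # error at the output layer
dW2, db2 = np.outer(delta2, a1), delta2
delta1 = (W2.T @ delta2) * a1 * (1 - a1)  # error propagated to the hidden layer
dW1, db1 = np.outer(delta1, x), delta1

# Gradient descent step on every parameter
lr = 0.1
W2 -= lr * dW2; b2 -= lr * db2
W1 -= lr * dW1; b1 -= lr * db1
```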

For this to work, all the perceptrons need to have a differentiable activation function.
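
To illustrate why: the backward pass needs the derivative of the activation at every unit. With sigmoid this derivative has a simple closed form, whereas a hard step function has a zero (or undefined) derivative everywhere, so no error signal could flow back through it. A small sketch, assuming the same sigmoid as above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Derivative used in the backward pass: sigma'(z) = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1 - s)

def step(z):
    # Non-differentiable alternative: the derivative is 0 almost everywhere,
    # so backpropagation would receive no gradient through this unit
    return np.where(z > 0, 1.0, 0.0)
```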