# Backward propagation of errors (Back propagation)
Last edited: 2026-02-05
Backpropagation is the specific implementation of gradient descent applied to neural networks. The calculation has two stages:
- A forward pass through the neural network, evaluating it on the training data to compute the error.
- Backward propagation of that error, applying the chain rule for differentiation at each perceptron.
For this to work, every perceptron needs a differentiable activation function.
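The two stages can be sketched on a toy case: a single sigmoid perceptron with one weight and bias, trained on one (input, target) pair. The values of the input, target, learning rate, and initial parameters are arbitrary choices for illustration; this is a minimal sketch, not a full multi-layer implementation.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

x, target = 1.5, 0.8   # assumed example input and target
w, b = 0.1, 0.0        # assumed initial parameters
lr = 0.5               # assumed learning rate

for _ in range(200):
    # Forward pass: evaluate the perceptron on the input.
    z = w * x + b
    y = sigmoid(z)
    loss = 0.5 * (y - target) ** 2
    # Backward pass: chain rule through the loss and the activation.
    # dloss/dy = y - target; dy/dz = y * (1 - y), the sigmoid derivative --
    # this step is why the activation must be differentiable.
    dz = (y - target) * y * (1 - y)
    # Gradient descent update on each parameter.
    w -= lr * dz * x
    b -= lr * dz

print(round(y, 2))  # output converges toward the 0.8 target
```

In a multi-layer network the same chain-rule step is repeated layer by layer, with each layer's `dz` propagated backward to compute the gradients of the layer before it.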