A neuron in a neural network is a mathematical function of its input vector $\mathbf{x}$:

$$y = \sigma\left(\mathbf{w} \cdot \mathbf{x} + b\right),$$

where $\sigma$ is a (potentially nonlinear) scalar activation function. The weight vector $\mathbf{w}$ and the (scalar) bias $b$ are learned through the backpropagation of errors during optimization against an objective.
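
As a minimal sketch of a single neuron in NumPy (the logistic sigmoid is chosen here purely as an example activation; the helper names `sigmoid` and `neuron` are illustrative, not from any particular library):

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid, one common choice of scalar activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b, activation=sigmoid):
    """Compute a single neuron's output: activation(w . x + b)."""
    return activation(np.dot(w, x) + b)

# Example: a neuron with three inputs
x = np.array([0.5, -1.2, 3.0])   # input vector
w = np.array([0.1, 0.4, -0.2])   # learned weight vector
b = 0.05                         # learned scalar bias
print(neuron(x, w, b))           # a single scalar activation
```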

In practice, the activation of a whole layer of neurons is typically computed at once, in which case we instead have:

$$\mathbf{y} = \sigma\left(W\mathbf{x} + \mathbf{b}\right),$$

where $W$ is a weight matrix with one row per neuron, $\mathbf{b}$ is a vector of biases, and $\sigma$ is applied elementwise.
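
The layer-at-once version is the same computation stacked row by row; a sketch under the same assumptions as above (NumPy, sigmoid activation, illustrative names):

```python
import numpy as np

def sigmoid(z):
    """Logistic sigmoid, applied elementwise to a vector of pre-activations."""
    return 1.0 / (1.0 + np.exp(-z))

def layer(x, W, b, activation=sigmoid):
    """Compute a whole layer at once: activation(W x + b)."""
    return activation(W @ x + b)

# Example: a layer of 4 neurons over a 3-dimensional input
rng = np.random.default_rng(0)
x = rng.normal(size=3)        # input vector
W = rng.normal(size=(4, 3))   # weight matrix, one row per neuron
b = rng.normal(size=4)        # bias vector, one entry per neuron
print(layer(x, W, b))         # 4 activations, one per neuron
```

Each row of $W$ plays the role of one neuron's weight vector $\mathbf{w}$, so the matrix-vector product computes all of the dot products in a single step.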