Multi-layer perceptrons (MLPs) are a subclass of fully-connected feedforward neural networks. An MLP is defined as any fully-connected feedforward network with at least one hidden layer in which every hidden layer applies a nonlinear activation function.
As perceptrons, they have historically been associated with the Heaviside step function; however, any nonlinear activation function (e.g., the logistic sigmoid, tanh, or ReLU) satisfies the definition.
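As a concrete illustration, the following minimal sketch shows the forward pass of such a network; tanh is chosen arbitrarily as the hidden nonlinearity, and the layer sizes are illustrative, not prescribed by the definition:

```python
import numpy as np

def mlp_forward(x, params):
    """Forward pass through a fully-connected MLP.

    params is a list of (W, b) pairs; every hidden layer applies a
    nonlinear activation (tanh here), and the output layer is left linear.
    """
    h = x
    for W, b in params[:-1]:
        h = np.tanh(W @ h + b)    # nonlinear hidden layer
    W_out, b_out = params[-1]
    return W_out @ h + b_out      # linear output layer

# One hidden layer of width 4 mapping R^2 -> R^1 (illustrative sizes).
rng = np.random.default_rng(0)
params = [
    (rng.normal(size=(4, 2)), np.zeros(4)),  # hidden layer
    (rng.normal(size=(1, 4)), np.zeros(1)),  # output layer
]
y = mlp_forward(np.array([0.5, -1.0]), params)
```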
MLPs were widely considered impractical to train until Rumelhart, Hinton, and Williams (1986) popularized backpropagation of errors, which made gradient-based training of multi-layer networks feasible.
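The idea is easiest to see on a one-hidden-layer network. The sketch below (with tanh activation and squared-error loss chosen purely for illustration) applies the chain rule layer by layer, which is the essence of backpropagation:

```python
import numpy as np

def backprop_step(x, y, W1, b1, W2, b2, lr=0.1):
    """One gradient step on squared error for a one-hidden-layer MLP.

    Forward: h = tanh(W1 x + b1), y_hat = W2 h + b2.
    Backward: gradients computed via the chain rule
    (backpropagation of errors), then a gradient descent update.
    """
    # Forward pass
    z1 = W1 @ x + b1
    h = np.tanh(z1)
    y_hat = W2 @ h + b2

    # Backward pass
    d_out = y_hat - y              # dL/dy_hat for L = 0.5 * ||y_hat - y||^2
    dW2 = np.outer(d_out, h)
    db2 = d_out
    d_h = W2.T @ d_out             # propagate the error to the hidden layer
    d_z1 = d_h * (1.0 - h ** 2)    # tanh'(z) = 1 - tanh(z)^2
    dW1 = np.outer(d_z1, x)
    db1 = d_z1

    # Gradient descent update
    return (W1 - lr * dW1, b1 - lr * db1, W2 - lr * dW2, b2 - lr * db2)
```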
Cybenko’s theorem states that for any continuous function $f$ on the unit hypercube $[0,1]^n$ and any $\varepsilon > 0$, there exists a network with a single hidden layer of sigmoidal units whose output $G(x) = \sum_{i=1}^{N} \alpha_i \, \sigma(w_i^{\top} x + b_i)$ satisfies $|G(x) - f(x)| < \varepsilon$ for all $x \in [0,1]^n$; in other words, single-hidden-layer MLPs are universal approximators of continuous functions.
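The flavor of this result can be checked numerically. In the sketch below (the target function, hidden width, and weight distribution are all illustrative assumptions, not part of the theorem), the hidden sigmoidal units are fixed at random and only the output weights $\alpha_i$ are fit by least squares; the maximum error typically shrinks as the hidden width grows:

```python
import numpy as np

# Approximate a continuous target on [0, 1] with one sigmoidal hidden layer.
rng = np.random.default_rng(0)
target = lambda x: np.sin(2 * np.pi * x)   # arbitrary continuous target

x = np.linspace(0.0, 1.0, 200)
N = 50                                     # hidden width (illustrative)
w = rng.normal(scale=10.0, size=N)         # fixed random hidden weights
b = rng.uniform(-10.0, 10.0, size=N)       # fixed random hidden biases

sigma = lambda z: 1.0 / (1.0 + np.exp(-z))
H = sigma(np.outer(x, w) + b)              # hidden activations, shape (200, N)
alpha, *_ = np.linalg.lstsq(H, target(x), rcond=None)

err = np.max(np.abs(H @ alpha - target(x)))
print(f"max approximation error: {err:.4f}")  # shrinks as N grows
```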