A single-layer perceptron (SLP) is a simple feedforward neural network consisting of a single layer of output neurons, typically with a Heaviside step activation function.
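As a minimal sketch of the forward pass (the names and shapes here are illustrative, not from the original text), the output is the Heaviside step function applied to a weighted sum of the inputs plus a bias:

```python
import numpy as np

def heaviside(z):
    # Heaviside step: 1 if z >= 0, else 0
    return np.where(z >= 0, 1, 0)

def predict(x, w, b):
    # Weighted sum of inputs plus bias, passed through the step function
    return heaviside(np.dot(w, x) + b)
```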

Because there are no hidden layers, backpropagation is not needed for training. Instead, the parameters (weights and bias) of a perceptron are learned with a simple update rule, the perceptron learning rule, which adjusts them in proportion to the error between the predicted and actual output.
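A sketch of the perceptron learning rule, assuming binary labels in {0, 1} and a fixed learning rate (the function name and defaults are illustrative): for each example, the weights are nudged by the learning rate times the error times the input, and the bias by the learning rate times the error.

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    # X: (n_samples, n_features) inputs; y: binary labels in {0, 1}
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            y_hat = 1 if np.dot(w, xi) + b >= 0 else 0
            error = yi - y_hat          # 0 when the prediction is correct
            w += lr * error * xi        # perceptron learning rule
            b += lr * error
    return w, b
```

For example, training on the logical AND function converges to a separating line, since AND is linearly separable:

```python
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])  # logical AND
w, b = train_perceptron(X, y)
```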

Single-layer perceptrons became much less important after Rumelhart, Hinton, and Williams (1986) popularized backpropagation, which made multi-layer perceptrons, and eventually a much wider range of neural networks, practical to train.