For gradient-based optimization methods, the learning rate plays an indirect role in regularization because it influences the model's tendency to overfit or underfit. Techniques such as early stopping and learning rate scheduling, though often presented as training heuristics rather than explicit regularization strategies, can meaningfully shift the model's bias and variance; early stopping in particular is widely regarded as a form of implicit regularization.
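As a concrete illustration, here is a minimal sketch of how both techniques typically fit into a training loop. It assumes PyTorch; the model architecture, the `train_loader` and `val_loader` data loaders, and the patience value are hypothetical placeholders, not part of the original text.

```python
import copy
import torch
import torch.nn as nn

# Hypothetical model and optimizer; the specific sizes are illustrative.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Learning rate scheduling: halve the learning rate every 10 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

best_val_loss = float("inf")
best_state = copy.deepcopy(model.state_dict())
patience, epochs_without_improvement = 5, 0  # assumed patience setting

for epoch in range(100):
    model.train()
    for x_batch, y_batch in train_loader:  # train_loader is assumed to exist
        optimizer.zero_grad()
        loss = criterion(model(x_batch), y_batch)
        loss.backward()
        optimizer.step()
    scheduler.step()  # decay the learning rate on the fixed schedule

    model.eval()
    with torch.no_grad():  # val_loader is assumed to exist
        val_loss = sum(criterion(model(x), y).item() for x, y in val_loader)

    # Early stopping: halt once validation loss stops improving,
    # keeping the weights from the best epoch seen so far.
    if val_loss < best_val_loss:
        best_val_loss = val_loss
        best_state = copy.deepcopy(model.state_dict())
        epochs_without_improvement = 0
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break

model.load_state_dict(best_state)
```

Stopping on validation loss rather than training loss is what gives early stopping its regularizing effect: training halts before the model has fully fit the noise in the training set, trading a small increase in bias for a reduction in variance.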