In sequential ensemble models, each additional base learner further reduces the training loss. At the extreme, if the training loss reaches zero, the ensemble has memorized the training data. The maximum number of base learners is therefore a hyperparameter that can be tuned to prevent overfitting. Although it is not a regularizer in the technical sense of L1 or L2 regularization, it is usually considered a key tool for regularizing such models, and it also directly affects the worst-case time complexity of training.
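A minimal sketch of treating the ensemble size as a tunable hyperparameter, assuming scikit-learn's AdaBoostClassifier and a synthetic dataset as stand-ins for a real model and data: the training score keeps climbing as learners are added, while the cross-validated score flattens or drops once the ensemble begins to overfit.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import validation_curve

# Synthetic stand-in data; any real dataset could be substituted here.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

param_range = [25, 50, 100, 200, 400]
train_scores, val_scores = validation_curve(
    AdaBoostClassifier(random_state=0),
    X, y,
    param_name="n_estimators",   # the ensemble size being tuned
    param_range=param_range,
    cv=5,
)

# Compare training accuracy with cross-validated accuracy for each size.
for n, tr, va in zip(param_range, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"n_estimators={n:4d}  train={tr:.3f}  cv={va:.3f}")
```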
For AdaBoost, it is common to start with a small number of learners (50-100) and then experiment with larger values. Gradient boosting, by design, lends itself to early stopping, and this is typically how the ensemble size is controlled for that algorithm.
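A sketch of early stopping in scikit-learn's GradientBoostingClassifier, again on synthetic stand-in data: a generous cap on n_estimators plus n_iter_no_change lets a held-out fraction of the training set decide when to stop adding trees, so the effective ensemble size is chosen by validation performance rather than fixed in advance.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

gb = GradientBoostingClassifier(
    n_estimators=1000,          # upper bound, not the expected final size
    validation_fraction=0.1,    # held out internally for early stopping
    n_iter_no_change=10,        # stop after 10 rounds without improvement
    random_state=0,
)
gb.fit(X, y)

# n_estimators_ reports how many boosting stages were actually fit
# before early stopping kicked in.
print("trees actually grown:", gb.n_estimators_)
```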
For parallel ensembles such as Random Forest, the tree count is only a matter of computational cost. Because these models average over the full set of base learners, additional trees do not increase the risk of overfitting; if anything, they may have a regularizing effect.
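A sketch illustrating this, based on scikit-learn's warm_start plus oob_score pattern (the dataset is again a synthetic placeholder): the out-of-bag accuracy settles as trees are added rather than degrading, so beyond a point extra trees mostly add compute time.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

rf = RandomForestClassifier(warm_start=True, oob_score=True, random_state=0)
for n_trees in (25, 50, 100, 200, 400):
    rf.set_params(n_estimators=n_trees)
    rf.fit(X, y)  # warm_start=True adds trees instead of refitting from scratch
    print(f"{n_trees:4d} trees  OOB accuracy = {rf.oob_score_:.3f}")
```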
Limiting tree depth and (for gradient boosting) using a small learning rate can counteract the overfitting risk of adding more trees; conversely, a smaller ensemble can offset the risk posed by deeper trees or a larger learning rate.
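A sketch of that trade-off with two illustrative (not recommended) gradient boosting configurations: many shallow, slow-learning trees versus fewer, deeper, fast-learning trees. Both can reach comparable test accuracy on this synthetic task; the point is that depth, learning rate, and tree count compensate for one another.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

configs = {
    "many shallow trees, small learning rate":
        GradientBoostingClassifier(n_estimators=500, max_depth=2,
                                   learning_rate=0.05, random_state=0),
    "few deeper trees, larger learning rate":
        GradientBoostingClassifier(n_estimators=100, max_depth=4,
                                   learning_rate=0.2, random_state=0),
}

for name, model in configs.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```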