Randomized hyperparameter search is a variant of grid search in which only a randomly sampled subset of the possible hyperparameter combinations is tried. This makes it more tractable than grid search for large hyperparameter spaces, though the speed gain trades off against the risk of missing good combinations.
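The idea can be sketched in a few lines. This is a minimal, self-contained illustration with a toy objective function standing in for an actual training-and-validation run; the parameter names (`lr`, `depth`), the search ranges, and the quadratic "loss" are all made up for the example.

```python
import random

def objective(params):
    # Toy stand-in for a validation loss; a real objective would train a
    # model with these hyperparameters and return its validation score.
    # This one is minimized at lr=0.1, depth=5 (an arbitrary choice).
    return (params["lr"] - 0.1) ** 2 + (params["depth"] - 5) ** 2

# Each entry maps a hyperparameter name to a sampler for its distribution.
space = {
    "lr": lambda: 10 ** random.uniform(-4, 0),  # log-uniform over [1e-4, 1]
    "depth": lambda: random.randint(1, 10),     # uniform integer
}

def random_search(objective, space, n_trials, seed=0):
    random.seed(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        # Draw every hyperparameter independently at random --
        # no information from previous trials is used.
        params = {name: sample() for name, sample in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

best_params, best_score = random_search(objective, space, n_trials=100)
```

Note that each trial is independent, which is what makes the method trivially parallelizable but also what the next paragraph criticizes.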

The method is completely naive: it does not iteratively move toward better parameters. It can be made somewhat more intelligent with successive halving, though that approach adds even more variance to an already high-variance method.

Ultimately, this method is still a waste of resources unless individual training runs are cheap enough that you don't care. In practice, you usually want Bayesian optimization instead.