We consider nonparametric least squares estimation of a shape-constrained (e.g., monotone or convex) regression function, with both univariate and multivariate predictors. We discuss the characterization, computation and consistency of the least squares estimator (LSE) in these problems. An appealing property of the LSE is that it is completely tuning-parameter-free.
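To make the tuning-free computation concrete, here is a minimal sketch of the monotone (nondecreasing) LSE in the univariate case via the classical pool-adjacent-violators algorithm; the function name `isotonic_lse` is illustrative, not from the paper.

```python
import numpy as np

def isotonic_lse(y):
    """Monotone (nondecreasing) least squares fit via pool-adjacent-violators.

    Projects y onto the cone {m : m[0] <= m[1] <= ... <= m[n-1]};
    note that no tuning parameter is involved.
    """
    means, weights = [], []
    for v in np.asarray(y, dtype=float):
        means.append(v)
        weights.append(1)
        # Merge adjacent blocks whose means violate monotonicity.
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            means[-2:] = [(weights[-2] * means[-2] + weights[-1] * means[-1]) / w]
            weights[-2:] = [w]
    return np.repeat(means, weights)

print(isotonic_lse([3.0, 1.0, 2.0]))  # → [2. 2. 2.]
```

The projection of (3, 1, 2) onto the monotone cone pools all three observations into one block with mean 2; data already in increasing order are returned unchanged.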
To quantify the accuracy of the LSE we study its risk under the squared error loss. We derive worst-case risk (upper) bounds in these problems and highlight the adaptive behavior of the LSE. In particular, we show that the LSE automatically adapts to "sparsity" in the underlying true regression function. Another interesting feature of the LSE in the multidimensional examples is that it adapts to the intrinsic dimension of the problem.
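The squared error risk of the monotone LSE can be illustrated with a small simulation on a "sparse" (piecewise-constant) truth; the `pava` routine and the specific choices of n and sigma below are illustrative assumptions, not from the paper. Because the isotonic cone is convex and contains the true signal, the projection contracts distances, so the fit's error never exceeds the raw noise on any realization.

```python
import numpy as np

def pava(y):
    # Pool-adjacent-violators for the nondecreasing least squares fit.
    means, weights = [], []
    for v in np.asarray(y, dtype=float):
        means.append(v)
        weights.append(1)
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            means[-2:] = [(weights[-2] * means[-2] + weights[-1] * means[-1]) / w]
            weights[-2:] = [w]
    return np.repeat(means, weights)

rng = np.random.default_rng(0)
n, sigma = 200, 1.0  # assumed sample size and noise level for illustration
# Piecewise-constant ("sparse") monotone truth with two pieces.
theta = np.where(np.arange(n) < n // 2, 0.0, 1.0)
y = theta + sigma * rng.standard_normal(n)
fit = pava(y)
mse_fit = np.mean((fit - theta) ** 2)
mse_raw = np.mean((y - theta) ** 2)
# Projection onto a convex set containing theta is a contraction,
# so mse_fit <= mse_raw holds for every realization.
print(mse_fit, mse_raw)
```

For such piecewise-constant signals the pooled blocks track the pieces, which is the mechanism behind the sparsity adaptation discussed above.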