Improved bounds for Square-Root Lasso and Square-Root Slope

Alexis Derumigny

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Extending the results of Bellec, Lecué and Tsybakov [1] to the setting of sparse high-dimensional linear regression with unknown variance, we show that two estimators, the Square-Root Lasso and the Square-Root Slope, can achieve the optimal minimax prediction rate (s/n)log(p/s), up to a constant factor, under mild conditions on the design matrix. Here, n is the sample size, p is the dimension and s is the sparsity parameter. We also prove optimality of the estimation error in the lq-norm, with q ∈ [1,2], for the Square-Root Lasso, and in the l2 and sorted l1 norms for the Square-Root Slope. Both estimators are adaptive to the unknown variance of the noise. The Square-Root Slope is also adaptive to the sparsity s of the true parameter. Next, we prove that any estimator depending on s that attains the minimax rate admits a version, adaptive to s, that still attains the same rate. We apply this result to the Square-Root Lasso. Moreover, for both estimators, we obtain valid rates for a wide range of confidence levels, as well as improved concentration properties, as in [1], where the case of known variance is treated. Our results are non-asymptotic.
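
For context, the two estimators named in the abstract admit the following standard definitions from the square-root-estimator literature (the tuning parameters λ and λ_1 ≥ … ≥ λ_p ≥ 0 below are notation of this sketch, not values from the paper). In LaTeX notation, the Square-Root Lasso and the Square-Root Slope are

  \hat{\beta}^{\mathrm{SRL}} \in \operatorname{arg\,min}_{\beta \in \mathbb{R}^p} \frac{\lVert Y - X\beta \rVert_2}{\sqrt{n}} + \lambda \lVert \beta \rVert_1,
  \qquad
  \hat{\beta}^{\mathrm{SRS}} \in \operatorname{arg\,min}_{\beta \in \mathbb{R}^p} \frac{\lVert Y - X\beta \rVert_2}{\sqrt{n}} + \sum_{j=1}^{p} \lambda_j \, \lvert \beta \rvert_{(j)},

where |β|_(1) ≥ … ≥ |β|_(p) are the entries of β sorted by decreasing magnitude (the sorted l1 penalty). Because the data-fit term is the square root of the residual sum of squares, rescaling the noise rescales the whole objective, so the tuning parameters can be chosen without knowledge of the noise variance; this is the mechanism behind the adaptivity to the unknown variance claimed above.

As a minimal numerical sketch (not the paper's implementation), the Square-Root Lasso objective is convex and can be solved with the cvxpy library; the function name sqrt_lasso, the synthetic data, and the constant 1.1 in the tuning parameter are illustrative assumptions:

  import numpy as np
  import cvxpy as cp

  def sqrt_lasso(X, y, lam):
      """Minimize ||y - X b||_2 / sqrt(n) + lam * ||b||_1 over b in R^p."""
      n, p = X.shape
      b = cp.Variable(p)
      objective = cp.norm2(y - X @ b) / np.sqrt(n) + lam * cp.norm1(b)
      cp.Problem(cp.Minimize(objective)).solve()  # convex problem: solver returns a global optimum
      return b.value

  # Illustrative use on synthetic data with s nonzero coefficients.
  rng = np.random.default_rng(0)
  n, p, s = 100, 200, 5
  X = rng.standard_normal((n, p))
  beta_true = np.zeros(p)
  beta_true[:s] = 1.0
  y = X @ beta_true + 0.5 * rng.standard_normal(n)
  lam = 1.1 * np.sqrt(2 * np.log(p) / n)  # scale-free: no estimate of the noise level needed
  beta_hat = sqrt_lasso(X, y, lam)

The key design point is that lam is of order sqrt(log(p)/n) and does not involve the noise standard deviation, in contrast with the ordinary Lasso, whose theoretical tuning parameter scales with it.
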
Original language: English
Pages (from-to): 741-766
Journal: Electronic Journal of Statistics
ISSN: 1935-7524
Publisher: Institute of Mathematical Statistics
Volume: 12
Issue number: 1
DOI: 10.1214/18-EJS1410
Publication status: Published - 27 Feb 2018
Externally published: Yes

Keywords

  • Sparse linear regression
  • minimax rates
  • high-dimensional statistics
  • adaptivity
  • square-root estimators

Cite this


Derumigny, Alexis. Improved bounds for Square-Root Lasso and Square-Root Slope. In: Electronic Journal of Statistics, Vol. 12, No. 1, 27.02.2018, p. 741-766. DOI: 10.1214/18-EJS1410.
