Topic: Estimation of Linear Functionals in High-Dimensional Linear Models: From Sparsity to Nonsparsity
Time: April 24, 2024, 10:00-11:30
Venue: Tencent Meeting ID: 438-014-7163
Host: Prof. Jiang Rong (姜荣)

Speaker Bio:
Zhao Junlong (赵俊龙) is a professor at the School of Statistics, Beijing Normal University. His research spans statistics and machine learning, including high-dimensional data analysis, statistical machine learning, and robust statistics. He has published nearly fifty SCI-indexed papers in statistics journals, with some results appearing in top international journals such as JRSSB, AoS, JASA, and Biometrika. He has led several National Natural Science Foundation of China (NSFC) projects and participated in a key NSFC project. He also serves as a director or executive director of several academic societies, including the High-Dimensional Data Branch of the Chinese Association for Applied Statistics and the Beijing Big Data Society.

Abstract:
High-dimensional linear models are commonly used in practice. In many applications, one is interested in linear transformations $\beta^\top x$ of the regression coefficients $\beta\in \mathbb{R}^p$, where $x$ is a specific point and is not required to follow the same distribution as the training data. One common approach is the plug-in technique, which first estimates $\beta$ and then plugs the estimator into the linear transformation for prediction. Despite its popularity, estimating $\beta$ can be difficult in high-dimensional problems. Common assumptions in the literature include that the signal in the coefficients $\beta$ is sparse and that the predictors are weakly correlated. These assumptions, however, may not be easy to verify and can be violated in practice. When $\beta$ is non-sparse or the predictors are strongly correlated, estimating $\beta$ can be very difficult. In this work, we propose a novel pointwise estimator for linear transformations of $\beta$. The new estimator greatly relaxes the common assumptions for high-dimensional problems and is adaptive to both the degree of sparsity of $\beta$ and the strength of the correlations among the predictors. In particular, $\beta$ can be sparse or non-sparse, and the predictors can be strongly or weakly correlated. The proposed method is simple to implement. Numerical and theoretical results demonstrate the competitive advantages of the proposed method for a wide range of problems.
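As a point of reference for the abstract's discussion, the conventional plug-in baseline (not the speaker's proposed estimator) can be sketched as follows: estimate $\beta$ from the training data, here with the Lasso as one common choice under a sparsity assumption, and then evaluate the linear functional $\beta^\top x$ at a specific point $x$. All data, dimensions, and the regularization level below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic high-dimensional setting (p > n), for illustration only.
rng = np.random.default_rng(0)
n, p = 100, 200
beta = np.zeros(p)
beta[:5] = 2.0                         # a sparse truth, an assumption of the Lasso step
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)

# Step 1 (plug-in approach): estimate beta from training data.
# The Lasso and alpha=0.1 are arbitrary illustrative choices.
beta_hat = Lasso(alpha=0.1).fit(X, y).coef_

# Step 2: plug the estimate into the linear functional at a new point x,
# which need not follow the training distribution.
x = rng.standard_normal(p)
functional_estimate = x @ beta_hat
```

The abstract's point is that Step 1 is the bottleneck: when $\beta$ is non-sparse or the predictors are strongly correlated, an accurate estimate of the full vector $\beta$ may be unattainable, even though the scalar $\beta^\top x$ itself may still be estimable, which is what motivates a direct pointwise estimator.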