High-Dimensional Learning with Concave Penalty
In statistical learning, high dimensionality refers to the setting in which the number of fitting parameters is (much) larger than the sample size. With the growing use of increasingly sophisticated learning models in modern data-driven applications, high dimensionality has become a pressing challenge for existing statistical theory and techniques: when solving a high-dimensional learning problem, traditional statistical tools frequently fail as a result of overfitting. This talk will introduce a recently developed regularization scheme, the folded concave penalty (FCP), as a remedy for high dimensionality. For FCP-based learning, two questions remain open: (i) whether efficiently computable local solutions can ensure the desired statistical performance, and (ii) whether that statistical performance can be made non-contingent on the specific design of the computing procedure. My answers to both questions are affirmative. This talk will present theoretical evidence and real-world applications to showcase the efficacy of the FCP.
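For concreteness, one widely studied member of the folded concave family (offered here only as an illustration; the abstract does not commit to a specific penalty) is the minimax concave penalty (MCP), applied coordinate-wise to a regression coefficient t:

\[
p_{\lambda,\gamma}(t) =
\begin{cases}
\lambda |t| - \dfrac{t^{2}}{2\gamma}, & |t| \le \gamma\lambda,\\[6pt]
\dfrac{\gamma\lambda^{2}}{2}, & |t| > \gamma\lambda,
\end{cases}
\qquad \lambda > 0,\ \gamma > 1.
\]

The penalty grows like the lasso near zero but flattens for large |t|, which reduces the bias that convex penalties impose on strong signals; the SCAD penalty behaves analogously.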