Under the assumptions above, three common cases arise:

(1) Approximation of $g_\ell(x)$ with a difference-of-convex (DC) structure, i.e. $g_\ell(x) = g_\ell^+(x) - g_\ell^-(x)$ with $g_\ell^+$ and $g_\ell^-$ both convex. Keeping the convex part and linearizing the subtracted convex part around the current point $y$ yields the convex upper bound

$\tilde g_\ell(x; y) = g_\ell^+(x) - g_\ell^-(y) - \nabla g_\ell^-(y)^\top (x - y) \ \ge\ g_\ell(x).$

(2) Approximation of $g_\ell(x)$ with a product-of-functions (PF) structure, i.e. $g_\ell(x) = f_1(x) f_2(x)$ with $f_1$ and $f_2$ convex and nonnegative. This can be rewritten in DC form as

$g_\ell(x) = \tfrac{1}{2}\big(f_1(x) + f_2(x)\big)^2 - \tfrac{1}{2}\big(f_1(x)^2 + f_2(x)^2\big),$

where both terms are convex, since the square of a nonnegative convex function is convex. As in the DC case, linearizing the subtracted term around $y$ gives the corresponding upper bound $\tilde g_\ell(x; y)$.

(3) Approximation of $U(x)$ with a PF structure, i.e. $U(x) = h_1(x) h_2(x)$ with $h_1$ and $h_2$ convex and nonnegative. A convex approximation that matches the gradient of $U$ at $y$ is

$\tilde U(x; y) = h_1(x)\, h_2(y) + h_1(y)\, h_2(x) + \tfrac{\tau}{2} (x - y)^\top H(y)\, (x - y),$

where $\tau > 0$ and $H(y)$ is a positive definite matrix.
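The DC upper bound in case (1) can be checked numerically. The sketch below uses an illustrative choice (an assumption, not from the original text): $g(x) = x^4 - x^2$, so $g^+(x) = x^4$ and $g^-(x) = x^2$, both convex. Here the gap $\tilde g(x;y) - g(x)$ works out to $(x-y)^2$, so the surrogate is a global upper bound that is tight and gradient-consistent at $x = y$.

```python
import numpy as np

# Illustrative DC split (assumed for this demo): g = g_plus - g_minus,
# with g_plus(x) = x^4 and g_minus(x) = x^2 both convex.
def g(x):
    return x**4 - x**2

def g_minus(y):
    return y**2

def grad_g_minus(y):
    return 2.0 * y

def surrogate(x, y):
    """DC surrogate: keep g_plus, linearize g_minus around y."""
    return x**4 - (g_minus(y) + grad_g_minus(y) * (x - y))

y = 0.7
xs = np.linspace(-2.0, 2.0, 401)

gap = surrogate(xs, y) - g(xs)
assert np.all(gap >= -1e-12)                  # global upper bound
assert abs(surrogate(y, y) - g(y)) < 1e-12    # tight at x = y

# First-order consistency: surrogate gradient at x = y equals g'(y).
h = 1e-6
num_grad = (surrogate(y + h, y) - surrogate(y - h, y)) / (2 * h)
assert abs(num_grad - (4 * y**3 - 2 * y)) < 1e-6
print("DC surrogate: tight, gradient-consistent upper bound")
```

These three properties (upper bound, tightness, and gradient match at the linearization point) are exactly what the SCA framework requires of a surrogate, so the same check applies after converting a PF term to DC form as in case (2).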