We consider the scenario where two variables must be optimized simultaneously: the minimization over one variable admits an analytical solution, while minimization over the other is intractable. Under the Lagrangian dual framework, we propose two iterative optimization algorithms that alternate between partial minimization over one variable and gradient descent over the other. The first algorithm is shown to converge to a KKT point under proper stepsize rules, requiring only that the augmented Lagrangian function be convex with respect to one of the variables. The second algorithm enjoys a local attraction property around the KKT point. Our algorithms provide a general solution for parallel and distributed optimization with objective functions that decompose as a sum of local terms. Simulation results on parallel and distributed logistic regression classification are presented, showing a faster convergence rate and lower computational complexity than competing methods.
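To make the alternating scheme concrete, the sketch below instantiates it on a hypothetical toy problem of the form min f(x) + g(z) subject to Ax + Bz = c. Everything in it is an illustrative assumption rather than the paper's setup: g is taken quadratic so the z-minimization of the augmented Lagrangian has a closed form, f is a logistic loss so the x-minimization has no analytical solution and is replaced by a single gradient-descent step, and the stepsize and penalty parameter are arbitrary.

```python
import numpy as np

# Hypothetical toy instance of  min_{x,z} f(x) + g(z)  s.t.  A x + B z = c,
# illustrating the alternating scheme described in the abstract: exact
# minimization of the augmented Lagrangian over z, one gradient-descent
# step over x, and a gradient-ascent update of the multipliers.

rng = np.random.default_rng(0)
n, m = 20, 5
A = rng.standard_normal((m, n))
B = np.eye(m)
c = rng.standard_normal(m)
D = rng.standard_normal((50, n))        # data matrix for the logistic term
y = rng.choice([-1.0, 1.0], size=50)    # binary labels

rho, alpha = 1.0, 0.05                  # penalty parameter and stepsize (assumed)

def grad_f(x):
    # gradient of the logistic loss f(x) = sum_i log(1 + exp(-y_i * d_i^T x))
    s = -y * (D @ x)
    return D.T @ (-y / (1.0 + np.exp(-s)))

x = np.zeros(n)
z = np.zeros(m)
lam = np.zeros(m)                       # Lagrange multipliers

for k in range(500):
    # analytical minimization of L_rho over z, with g(z) = ||z||^2 / 2:
    # (I + rho B^T B) z = -B^T (lam + rho (A x - c))
    z = np.linalg.solve(np.eye(m) + rho * B.T @ B,
                        -B.T @ (lam + rho * (A @ x - c)))
    # single gradient-descent step over x (no closed form for the logistic term)
    r = A @ x + B @ z - c
    x -= alpha * (grad_f(x) + A.T @ (lam + rho * r))
    # dual (gradient-ascent) update of the multipliers
    lam += rho * (A @ x + B @ z - c)

print("final constraint residual:", np.linalg.norm(A @ x + B @ z - c))
```

In a parallel or distributed setting with a sum-structured objective, the gradient of f splits across the local terms, so the x-step above could be computed by workers in parallel while the z- and multiplier updates act as the coordination step; the toy code keeps everything in one process for clarity.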