Computational Science Asked on August 29, 2020
For finite dimensional symmetric positive semidefinite matrices $A$ and $B$, I would like to solve
$$\begin{aligned}
&\min_X \ \|X - A\|_1 \\
&\text{subject to} \\
&\quad X \preceq B \\
&\quad 0 \preceq X
\end{aligned}$$
Here $\|\cdot\|_1$ is the nuclear norm/trace norm defined by $\|X\|_1 = \mathrm{tr}(\sqrt{X^*X})$, where $X^*$ is the conjugate transpose of $X$, and $A \preceq B$ means that $B - A$ is positive semidefinite.
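As a quick sanity check on the definition, the nuclear norm equals the sum of the singular values, which matches $\mathrm{tr}(\sqrt{X^*X})$. A minimal numpy sketch (matrix values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))  # arbitrary test matrix

# Nuclear norm as the sum of singular values.
nuc_from_svd = np.linalg.svd(M, compute_uv=False).sum()

# Same quantity via the defining formula tr(sqrt(M^* M)):
# the eigenvalues of M^* M are the squared singular values.
eigvals = np.linalg.eigvalsh(M.conj().T @ M)
nuc_from_trace = np.sqrt(np.clip(eigvals, 0.0, None)).sum()

print(abs(nuc_from_svd - nuc_from_trace))  # agrees to roundoff
```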
I have not been able to solve this analytically and am now considering a computational solution to help. Is this a semidefinite program and if so, how can one bring it to the standard form? If not, what is the best way to solve this computationally? I am familiar with MATLAB and Python.
This is easy to formulate in CVX, under MATLAB. A CVXPY solution, under Python, is similar.
CVX code:
cvx_begin sdp
variable X(n,n) hermitian semidefinite
minimize(norm_nuc(X-A))
X <= B
cvx_end
or alternatively
cvx_begin
variable X(n,n) hermitian semidefinite
minimize(norm_nuc(X-A))
B - X == semidefinite(n)
cvx_end
Edit 2: CVX is very fussy about semidefinite constraints: it only processes a constraint as semidefinite if the matrix being constrained to be psd is exactly hermitian (symmetric, if real). Therefore, the safe thing to do is to hermitianize (symmetrize) B before it appears in the semidefinite constraint, i.e., B = 0.5*(B+B'); this eliminates roundoff-level non-hermitianness (asymmetry), which can cause CVX to have a conniption.
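The same roundoff effect is easy to demonstrate outside MATLAB. A short numpy illustration (matrix values made up): a mathematically symmetric product comes out slightly asymmetric in floating point, and the averaging trick from the answer removes the asymmetry exactly:

```python
import numpy as np

rng = np.random.default_rng(2)
Q = rng.standard_normal((50, 50))
# Mathematically symmetric PSD, but the floating-point product
# Q @ D @ Q.T is generally not bitwise symmetric.
B = Q @ np.diag(rng.random(50)) @ Q.T

asym_before = np.abs(B - B.T).max()  # typically tiny but nonzero
B = 0.5 * (B + B.T)                  # the symmetrization from the answer
asym_after = np.abs(B - B.T).max()   # exactly zero
```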
You can see how CVX reformulates this under the hood by looking at the code for norm_nuc. The reformulation CVX applies is the dual problem formulation, equation 6.19 (further explained in equation 6.20), in the "Sum of singular values" subsection of section 6.2.4 "Singular value optimization" of the Mosek Modeling Cookbook. Edit 1: As you can see there, this can indeed be formulated as a (linear, convex) semidefinite program.
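For reference, the standard semidefinite representation of the nuclear norm used in such reformulations (sketched here from memory; the cookbook's notation may differ slightly) bounds $\|X\|_1 \le t$ via auxiliary hermitian matrices $U$ and $V$:

$$\|X\|_1 \le t \iff \exists\, U, V \text{ hermitian}:\ \begin{bmatrix} U & X^* \\ X & V \end{bmatrix} \succeq 0, \quad \mathrm{tr}(U) + \mathrm{tr}(V) \le 2t.$$

Minimizing $t$ subject to this linear matrix inequality, together with the original constraints $0 \preceq X \preceq B$, gives a single semidefinite program in standard form.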
If you have more detailed questions on CVX, you can ask at http://ask.cvxr.com/ (after reading the CVX Users' Guide and FAQ).
Edit 3: As a bonus, here is how to do it in YALMIP, under MATLAB. If you want extra practice, you can try implementing the reformulation of nuclear norm by equation 6.19 of the above-linked Mosek Modeling Cookbook, and verify you get the same optimal objective value (within tolerance) as you get by allowing YALMIP or CVX to do the (re)formulation for you.
X = sdpvar(3,3,'hermitian','complex') % note that unlike CVX, square matrices are symmetric (hermitian) by default in YALMIP, but I had to explicitly specify it, because 'complex' must be the 4th argument
optimize(0 <= X <= B,norm(X - A, 'nuc')) % Wow, a double-sided semidefinite constraint - I've never done that before. Also note that YALMIP is always in the equivalent of CVX's sdp mode.
It turns out that CVX, when in sdp mode, also allows double-sided semidefinite constraints (those come in handy for constraining 2-norm condition number)
cvx_begin sdp
variable X(n,n) hermitian
minimize(norm_nuc(X-A))
0 <= X <= B
cvx_end
Correct answer by Mark L. Stone on August 29, 2020