Performance guarantees for Schatten-p quasi-norm minimization in recovery of low-rank matrices


Signal Processing 114 (2015) 225–230

http://dx.doi.org/10.1016/j.sigpro.2015.02.025

Fast communication


Mohammadreza Malek-Mohammadi a,⁎, Massoud Babaie-Zadeh a, Mikael Skoglund b

a Electrical Engineering Department, Sharif University of Technology, Tehran 1458889694, Iran
b Communication Theory Lab, KTH - Royal Institute of Technology, Stockholm 10044, Sweden

Article history: Received 26 October 2014; Received in revised form 25 February 2015; Accepted 27 February 2015; Available online 9 March 2015.

Keywords: Affine rank minimization (ARM); Nuclear norm minimization (NNM); Restricted isometry property (RIP); Schatten-p quasi-norm minimization (pSNM)

☆ This work was supported in part by the Iran National Science Foundation under Contract 91004600. The work of the first author was supported in part by a travel scholarship from Ericsson Research during his visit at the Communication Theory Lab, KTH - Royal Institute of Technology.
⁎ Corresponding author. Tel.: +98 912 5123401.
E-mail addresses: [email protected] (M. Malek-Mohammadi), [email protected] (M. Babaie-Zadeh), [email protected] (M. Skoglund).

Abstract

We address some theoretical guarantees for Schatten-p quasi-norm minimization ($p \in (0,1]$) in recovering low-rank matrices from compressed linear measurements. Firstly, using null space properties of the measurement operator, we provide a sufficient condition for exact recovery of low-rank matrices. This condition guarantees unique recovery of matrices of ranks equal to or larger than what is guaranteed by nuclear norm minimization. Secondly, this sufficient condition leads to a theorem proving that all restricted isometry property (RIP) based sufficient conditions for $\ell_p$ quasi-norm minimization generalize to Schatten-p quasi-norm minimization. Based on this theorem, we provide a few RIP-based recovery conditions.

© 2015 Elsevier B.V. All rights reserved.

1. Introduction

Matrix rank minimization constrained to a set of underdetermined linear equations, known as affine rank minimization (ARM), has numerous applications in signal processing and control theory [1,2]. An important special case of this optimization problem is matrix completion (MC), in which one aims to recover a matrix from partially observed entries [2]. Applications of ARM and MC include collaborative filtering [2], machine learning [3], quantum state tomography [4], ultrasonic tomography [5], spectrum sensing [6], direction-of-arrival estimation [7], and radar [8], among others.

Rank minimization under affine equality constraints is generally formulated as
$$\min_X \operatorname{rank}(X) \quad \text{subject to} \quad \mathcal{A}(X) = b, \qquad (1)$$
where $X \in \mathbb{R}^{n_1 \times n_2}$, $\mathcal{A}: \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^m$ is a given linear operator (the measurement operator), and $b \in \mathbb{R}^m$ is the vector of measurements. In the case of incomplete measurements, $m$ is less than $n_1 n_2$ or, usually, $m \ll n_1 n_2$. Problem (1) is generally NP-hard [1], yet there are many efficient algorithms to solve relaxed or approximated versions of it. Nuclear norm minimization (NNM), proposed in [1], replaces the rank with its tightest convex relaxation, which leads to
$$\min_X \|X\|_* \quad \text{subject to} \quad \mathcal{A}(X) = b, \qquad (2)$$
where $\|X\|_* \triangleq \sum_{i=1}^{r} \sigma_i(X)$ denotes the matrix nuclear norm, in which $\sigma_i(X)$ is the $i$th largest singular value of $X$ and $r$ is the rank of $X$. It has been proven that, under some sufficient conditions, (1) and (2) share the same unique solution; see, e.g., [2,9].
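Problem (2) is convex and can be prototyped directly in an off-the-shelf modeling tool. The following is a minimal sketch (ours, assuming CVXPY and NumPy; the random Gaussian instance and all names are purely illustrative):

```python
import cvxpy as cp
import numpy as np

# Random instance: recover a rank-2 matrix X0 from m < n1*n2 measurements
# <A_i, X> = b_i, with Gaussian sensing matrices A_i.
rng = np.random.default_rng(0)
n1, n2, rnk, m = 10, 10, 2, 60
X0 = rng.standard_normal((n1, rnk)) @ rng.standard_normal((rnk, n2))
A_list = [rng.standard_normal((n1, n2)) for _ in range(m)]
b = np.array([np.sum(Ai * X0) for Ai in A_list])          # b = A(X0)

X = cp.Variable((n1, n2))
constraints = [cp.sum(cp.multiply(Ai, X)) == b[i] for i, Ai in enumerate(A_list)]
cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
print(np.linalg.norm(X.value - X0, "fro"))                # near 0 on success
```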

The nuclear norm of a matrix is equal to the $\ell_1$ norm of the vector formed by the singular values of the same matrix. Consequently, inspired by experimental observations and theoretical guarantees showing the superiority of $\ell_p$ quasi-norm minimization over $\ell_1$ minimization in compressive sampling (CS) [10], another approach [11,12] replaces the rank function with the Schatten-p quasi-norm, resulting in
$$\min_X \|X\|_p^p \quad \text{subject to} \quad \mathcal{A}(X) = b, \qquad (3)$$
where $\|X\|_p \triangleq \big(\sum_{i=1}^{r} \sigma_i^p(X)\big)^{1/p}$ for some $p \in (0,1)$ denotes the Schatten-p quasi-norm. While the above problem is nonconvex, it is observed that numerically efficient implementations of (3) outperform NNM [11–13].
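Both objectives in (2) and (3) depend on $X$ only through its singular values, so they are straightforward to evaluate numerically; a minimal sketch (ours, assuming NumPy):

```python
import numpy as np

def schatten_p(X, p, tol=1e-12):
    """Schatten-p quasi-norm: (sum_{i<=r} sigma_i(X)^p)^(1/p), summing over
    the r numerically nonzero singular values; p = 1 gives the nuclear norm."""
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    s = s[s > tol * s[0]]                    # keep sigma_1, ..., sigma_r
    return np.sum(s ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 10))  # rank 2
for p in (1.0, 0.5, 0.1):
    print(p, schatten_p(X, p) ** p)   # ||X||_p^p -> rank(X) = 2 as p -> 0
```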

In practice, there is often some noise in the measurements, so the measurement model is updated to $\mathcal{A}(X) + e = b$, where $e$ is the vector of measurement noise. To robustly recover a minimum-rank solution, the equality constraints are relaxed to $\|\mathcal{A}(X) - b\|_2 \le \epsilon$, where $\|\cdot\|_2$ denotes the $\ell_2$ norm of a vector and $\epsilon \ge \|e\|_2$ is some constant [2]. Therefore, (3) is modified to
$$\min_X \|X\|_p^p \quad \text{subject to} \quad \|\mathcal{A}(X) - b\|_2 \le \epsilon. \qquad (4)$$

Though there are several theoretical studies concerning $\ell_p$ quasi-norm minimization in the CS literature (see, for example, [14–17]), only a few papers deal with performance guarantees of Schatten-p quasi-norm minimization (pSNM). In [18], the authors propose a necessary and a sufficient condition for exact recovery of low-rank matrices using null space properties of $\mathcal{A}$. However, the sufficient condition is not sharp and seems to be stronger than that of NNM. In contrast, it is well known that finding the global solution of $\ell_p$ quasi-norm minimization in the CS scenario is superior to $\ell_1$ minimization [14–16]. Therefore, considering the strong parallels between CS and ARM (see [1] for a comprehensive discussion) and the superior experimental performance of pSNM in comparison to NNM, one expects weaker recovery conditions. We will show that this intuition is indeed correct by providing a sharp sufficient condition which is equivalent to the necessary condition in [18], and by proving that, using (3), one can uniquely recover matrices with ranks equal to or larger than those recoverable by NNM.

In addition, we further exploit this sufficient condition and extend a result from [18] to prove that all restricted isometry property (RIP) based results for recovery of sparse vectors using $\ell_p$ quasi-norm minimization generalize to Schatten-p quasi-norm minimization with no change. In particular, extending some results of [15], we will show that if $\delta_{2r} < 0.4531$, then all low-rank or approximately low-rank matrices with at most $r$ dominant singular values can be recovered accurately from noisy measurements via (4). This generalization also proves that, for some sufficiently small $p > 0$, if $\delta_{2r+2} < 1$, then program (4) recovers all matrices with at most $r$ large singular values from noisy measurements accurately. Furthermore, another RIP-based sufficient condition will be presented which is sharper than a threshold in [15] for sufficiently small values of $p$.

The rest of this paper is organized as follows. After introducing some notation, we present our performance analysis in Section 2. Section 3 is devoted to the proofs of the main results and is followed by the conclusion.

Notation: A vector is called $k$-sparse if it has $k$ nonzero components. $x^{\downarrow}$ denotes the vector obtained by sorting the elements of $x$ in descending order of magnitude, and $x^{(k)}$ designates a vector consisting of the $k$ largest elements (in magnitude) of $x$. Let $\langle x, y \rangle \triangleq x^T y$ be the inner product of $x$ and $y$, and let $\|x\|_2 \triangleq \langle x, x \rangle^{1/2}$ stand for the Euclidean norm. The $\ell_p$ quasi-norm of $x$ for $p \in (0,1)$ is defined as $\|x\|_p \triangleq \big(\sum_i |x_i|^p\big)^{1/p}$, where $x_i$ is the $i$th entry of $x$. For any matrix $X \in \mathbb{R}^{n_1 \times n_2}$, define $n \triangleq \min(n_1, n_2)$. It is always assumed that the singular values of matrices are sorted in descending order, and $\sigma(X) = (\sigma_1(X), \ldots, \sigma_n(X))^T$ is the vector of singular values of $X$. $\|X\|_F \triangleq \sqrt{\sum_{i=1}^{n} \sigma_i^2(X)}$ denotes the Frobenius norm. Furthermore, let $X = U \operatorname{diag}(\sigma(X)) V^T$ denote the singular value decomposition (SVD) of $X$, where $U \in \mathbb{R}^{n_1 \times n}$ and $V \in \mathbb{R}^{n_2 \times n}$. $X^{(r)} = U \operatorname{diag}(\sigma_1(X), \ldots, \sigma_r(X), 0, \ldots, 0) V^T$ represents the matrix obtained by keeping the $r$ largest singular values in the SVD of $X$ and setting the others to 0. For a linear operator $\mathcal{A}: \mathbb{R}^{n_1 \times n_2} \to \mathbb{R}^m$, let $\mathcal{N}(\mathcal{A}) \triangleq \{X \in \mathbb{R}^{n_1 \times n_2} : \mathcal{A}(X) = 0, X \neq 0\} = \operatorname{null}(\mathcal{A}) \setminus \{0\}$. For a set $S$, $|S|$ denotes its cardinality.
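To fix ideas, the truncation operators $x^{(k)}$ and $X^{(r)}$ defined above can be sketched as follows (ours, assuming NumPy; function names are illustrative):

```python
import numpy as np

def best_k_term(x, k):
    """x^(k): keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[::-1][:k]
    out[idx] = x[idx]
    return out

def best_rank_r(X, r):
    """X^(r): keep the r largest singular values in the SVD of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]
```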

2. Main results

2.1. A null space condition

In [18], exploiting null space properties of $\mathcal{A}$, a necessary and a sufficient condition for successful reconstruction of minimum-rank solutions via (3) are derived, yet there is a gap between these conditions. In this paper, we close this gap by introducing the following lemma, which is mainly based on a result from [19], and prove that the necessary condition in [18] is also sufficient. Moreover, we will show that, using pSNM, one can uniquely recover all matrices of rank equal to or larger than those uniquely recoverable by NNM.

Lemma 1. All matrices $X \in \mathbb{R}^{n_1 \times n_2}$ of rank at most $r$ can be uniquely recovered by (3), provided that, $\forall W \in \mathcal{N}(\mathcal{A})$,
$$\sum_{i=1}^{r} \sigma_i^p(W) < \sum_{i=r+1}^{n} \sigma_i^p(W).$$

It is worth mentioning that the sufficient condition in Lemma 1 is weaker than the corresponding sufficient condition in [18], which, in our notation, is formulated as
$$\sum_{i=1}^{2r} \sigma_i^p(W) < \sum_{i=2r+1}^{n} \sigma_i^p(W).$$
Since $\sum_{i=1}^{r} \sigma_i^p(W) \le \sum_{i=1}^{2r} \sigma_i^p(W)$ and $\sum_{i=r+1}^{n} \sigma_i^p(W) \ge \sum_{i=2r+1}^{n} \sigma_i^p(W)$, the sufficient condition in Lemma 1 is less restrictive than that of [18, Theorem 3]. Based on the above sufficient condition, we have the following proposition, which is a routine extension of [14, Theorem 5].

Proposition 1. Let $r^*_p(\mathcal{A})$ and $r^*_1(\mathcal{A})$ denote the maximum ranks such that all matrices $X$ with $\operatorname{rank}(X) \le r^*_p(\mathcal{A})$ and $\operatorname{rank}(X) \le r^*_1(\mathcal{A})$ can be uniquely recovered by (3) and (2), respectively. Then $r^*_p(\mathcal{A}) \ge r^*_1(\mathcal{A})$ for any $p \in (0,1)$.
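For a given null space element $W$, the condition of Lemma 1 amounts to comparing two partial sums of $\sigma_i^p(W)$; a minimal check (ours, assuming NumPy):

```python
import numpy as np

def lemma1_condition(W, r, p):
    """True iff sum_{i<=r} sigma_i(W)^p < sum_{i>r} sigma_i(W)^p."""
    s = np.linalg.svd(W, compute_uv=False) ** p   # sigma_i(W)^p, descending
    return s[:r].sum() < s[r:].sum()
```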

2.2. RIP-based conditions

Inspired by the strong parallels between CS and ARM, [18] simplifies the generalization of some results on $\ell_1$ norm minimization to nuclear norm minimization. Remarkably, it shows that all RIP-based conditions for stable and robust recovery of sparse vectors through $\ell_1$ norm minimization directly generalize to nuclear norm minimization. Furthermore, [18] proves a similar equivalence between RIP-based conditions for recovery of sparse vectors via $\ell_p$ quasi-norm minimization and recovery of low-rank matrices using pSNM. Nevertheless, the equivalence established in [18, Lemma 14] is not as strong as one might expect. In essence, it shows an equivalence between RIP conditions for recovery of $2k$-sparse vectors and RIP conditions for reconstruction of rank-$k$ matrices. However, it is natural to have the equivalence between sparsity and rank of the same order. Utilizing Lemma 1, we make the orders of sparsity and rank in the aforementioned equivalence equal to $k$. To that end, we first recall the formulation of $\ell_p$ quasi-norm minimization as well as the definitions of RIP for the vector and matrix cases.

In $\ell_p$ quasi-norm minimization, the program
$$\min_x \|x\|_p^p \quad \text{subject to} \quad \|Ax - b_v\|_2 \le \epsilon \qquad (5)$$
is used to estimate a sparse vector $x \in \mathbb{R}^{m_v}$ from noisy measurements $b_v = Ax + e_v$, in which $A \in \mathbb{R}^{n_v \times m_v}$ and $b_v \in \mathbb{R}^{n_v}$ are known and $e_v$ is a noise vector with $\|e_v\|_2 \le \epsilon$.

Definition 1 (Candès [20]). For a matrix $A$ and all integers $k \le m_v$, the restricted isometry constant (RIC) of order $k$ is the smallest constant $\delta_k(A)$ such that
$$(1 - \delta_k(A)) \|x\|_2^2 \le \|Ax\|_2^2 \le (1 + \delta_k(A)) \|x\|_2^2$$
holds for all vectors $x$ with sparsity at most $k$.

Definition 2 (Oymak et al. [18]). For a linear operator $\mathcal{A}$ and all integers $r \le n$, the RIC of order $r$ is the smallest constant $\delta_r(\mathcal{A})$ such that
$$(1 - \delta_r(\mathcal{A})) \|X\|_F^2 \le \|\mathcal{A}(X)\|_2^2 \le (1 + \delta_r(\mathcal{A})) \|X\|_F^2$$
holds for all matrices $X$ with rank at most $r$.
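Computing an RIC exactly is intractable in general, but Definition 2 suggests a simple Monte Carlo probe that lower-bounds $\delta_r(\mathcal{A})$ by sampling normalized rank-$r$ matrices; a sketch (ours, assuming NumPy, with $\mathcal{A}(X)$ realized as a matrix acting on $\operatorname{vec}(X)$):

```python
import numpy as np

def ric_lower_bound(A_mat, n1, n2, r, trials=2000, seed=0):
    """Monte Carlo lower bound on delta_r for A(X) = A_mat @ vec(X).
    The true RIC is a supremum over all rank-r X, so random sampling
    can only certify delta_r >= returned value, never an upper bound."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(trials):
        X = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
        X /= np.linalg.norm(X, "fro")                  # ||X||_F = 1
        worst = max(worst, abs(np.linalg.norm(A_mat @ X.ravel()) ** 2 - 1.0))
    return worst

m, n1, n2 = 80, 10, 10
A_mat = np.random.default_rng(1).standard_normal((m, n1 * n2)) / np.sqrt(m)
print(ric_lower_bound(A_mat, n1, n2, r=2))
```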

The following theorem formally shows how the results are extended to pSNM.

Theorem 1. Let $x_0 \in \mathbb{R}^{m_v}$ be any arbitrary vector, $b_v = A x_0 + e_v$, and let $x^*$ denote a solution to (5) to recover $x_0$. Likewise, let $X_0 \in \mathbb{R}^{n_1 \times n_2}$ be any arbitrary matrix, $b = \mathcal{A}(X_0) + e$, and let $X^*$ denote a solution to (4) to recover $X_0$. Assume that the RIP condition $f(\delta_{k_1}(A), \ldots, \delta_{k_u}(A)) < \delta_0$, for some function $f$, is sufficient to have
$$\|x_0 - x^*\|_p \le g_1(x_0^{\downarrow}, \epsilon), \qquad \|x_0 - x^*\|_2 \le g_2(x_0^{\downarrow}, \epsilon),$$
for some functions $g_1$ and $g_2$. Then, under the same RIP condition $f(\delta_{k_1}(\mathcal{A}), \ldots, \delta_{k_u}(\mathcal{A})) < \delta_0$, we have
$$\|X_0 - X^*\|_p \le g_1(\sigma(X_0), \epsilon), \qquad \|X_0 - X^*\|_F \le g_2(\sigma(X_0), \epsilon).$$

One of the best uniform thresholds on $\delta_{2k}$ for finding $k$-sparse vectors using $\ell_p$ quasi-norm minimization is given in [15]. This threshold works uniformly for any $p \in (0,1]$ and covers exact recovery conditions as well as robust and accurate reconstruction of sparse and nearly-sparse vectors from noisy measurements. Theorem 1 simply generalizes the results in [15] to low-rank matrix recovery by means of the following proposition and corollary. To have a more organized presentation, we use the inequality $\gamma_{2t} \ge (1 + \delta_{2t})/(1 - \delta_{2t})$, where $\gamma_{2t}$ is the asymmetric RIC defined in [15], to state our results in terms of $\delta_{2t}$ (the RIC defined herein).

Proposition 2. Let $X_0 \in \mathbb{R}^{n_1 \times n_2}$ be any arbitrary matrix and $\mathcal{A}(X_0) + e = b$, where $b \in \mathbb{R}^m$ is known and $e$ is noise with $\|e\|_2 \le \epsilon$. Suppose that $X^*$ is a solution to (4) to recover $X_0$ for some $p \in (0,1]$. If
$$\delta_{2t} < \frac{2(\sqrt{2}-1)(t/r)^{1/p - 1/2}}{2(\sqrt{2}-1)(t/r)^{1/p - 1/2} + 1} \qquad (6)$$
holds for some integer $t \ge r$, then
$$\|X_0 - X^*\|_p \le C_1 \|X_0 - X_0^{(r)}\|_p + D_1 r^{1/p - 1/2} \epsilon,$$
$$\|X_0 - X^*\|_F \le C_2 t^{1/2 - 1/p} \|X_0 - X_0^{(r)}\|_p + D_2 \epsilon.$$
The constants $C_1$, $C_2$, $D_1$, $D_2$ depend only on $p$, $\delta_{2t}$, and $t/r$, and are given in [15, Theorem 3.1]. In particular, when $\epsilon = 0$ and $\operatorname{rank}(X_0) \le r$, (6) implies that $X_0$ is the unique solution to (3).

Two important special cases of the above sufficient condition are summarized in the following corollary.

Corollary 1. The sufficient condition of Proposition 2 implies the following sufficient conditions too:

- $\delta_{2r} < 0.4531$ for any $p \in (0,1]$;
- knowing $r$ and $\delta_{2r+2} < 1$, it is possible to find some $p_0$ such that inequality (6) holds for all $0 < p < p_0$ (made concrete in the sketch below).
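The second item can be made concrete: for $t = r+1$, (6) can be solved for $p$ in closed form. A small sketch of the resulting $p_0$ (ours, assuming NumPy; $\delta \in (0,1)$ assumed):

```python
import numpy as np

def p0_uniform(delta, r):
    """Largest p0 such that (6) holds with t = r + 1 for all 0 < p < p0.
    Rearranging (6): (t/r)^(1/p - 1/2) > delta / ((1 - delta) * 2(sqrt(2)-1)),
    then solve for p."""
    t = r + 1
    c = delta / ((1.0 - delta) * 2.0 * (np.sqrt(2.0) - 1.0))
    if c <= 1.0:                 # condition holds for every p in (0, 1]
        return 1.0
    expo = 0.5 + np.log(c) / np.log(t / r)
    return min(1.0, 1.0 / expo)

print(p0_uniform(delta=0.9, r=5))   # small p needed for delta close to 1
```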

Theorem 1 also generalizes other recent RIP-based conditions in $\ell_p$ quasi-norm minimization (e.g., the conditions in [21,22]). In addition to the above conditions, we introduce below another sufficient condition which guarantees robust and accurate reconstruction of low-rank matrices.

Theorem 2. Under the assumptions of Proposition 2, if
$$\delta_{2t} < \frac{(t/r)^{2/p-1} - 1}{(t/r)^{2/p-1} + 1} \qquad (7)$$
holds for some integer $t \ge r$, then
$$\|X_0 - X^*\|_p \le C'_1 \|X_0 - X_0^{(r)}\|_p + D'_1 r^{1/p-1/2} \epsilon,$$
$$\|X_0 - X^*\|_F \le C'_2 t^{1/2-1/p} \|X_0 - X_0^{(r)}\|_p + D'_2 \epsilon,$$
where the constants $C'_1$, $C'_2$, $D'_1$, $D'_2$ depend only on $p$, $\delta_{2t}$, and $t/r$. In particular, when $\epsilon = 0$ and $\operatorname{rank}(X_0) \le r$, (7) implies that $X_0$ is the unique solution to (3).

Despite the fact that a uniform recovery threshold cannot be obtained from Theorem 2, substituting $t$ with $r+1$ in (7), we get
$$\delta_{2r+2} < \frac{(1 + 1/r)^{2/p-1} - 1}{(1 + 1/r)^{2/p-1} + 1}. \qquad (8)$$
Fixing $r$ and $\delta_{2r+2}$, let $p_0$ denote the maximum value such that all $p \in (0, p_0)$ satisfy (6) for $t = r+1$. Respectively, let $p'_0$ denote the maximum value such that all $p \in (0, p'_0)$ satisfy (8). Neglecting the constant terms, since, as $p$ decreases, the power of $(1 + 1/r)$ in (8) grows twice as fast as that in (6), it is expected that (8) guarantees accurate recovery for $p'_0 \approx \sqrt{p_0}$ when the thresholds on the right-hand sides of (6) and (8) tend to 1.

[Fig. 1. Recovery thresholds from Proposition 2 and Theorem 2 as a function of p when r is fixed to 5.]

Fig. 1 shows the $\delta_{2r+2}$ thresholds derived from Proposition 2 and Theorem 2 as a function of $p$ for $r = 5$. As is clear, the threshold given in Theorem 2 becomes sharper than that given in Proposition 2 once $p$ drops below about $0.22$. Furthermore, it reaches 1 at $p \approx 0.05$, while the one from Proposition 2 approaches 1 only at $p \approx 0.025$. Recall that $\delta_{2r} < 1$ is a sufficient condition for the success of the original rank minimization problem in (1) [1]. Consequently, the above result shows that, for a larger range of $p$'s, pSNM is almost optimal, since $\delta_{2r+2} < 1$ guarantees its success.
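Both thresholds are elementary to evaluate, so the comparison in Fig. 1 is easy to reproduce; a sketch (ours, assuming NumPy and Matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

def thr_prop2(p, t, r):
    """Right-hand side of (6)."""
    a = 2 * (np.sqrt(2) - 1) * (t / r) ** (1 / p - 0.5)
    return a / (a + 1)

def thr_thm2(p, t, r):
    """Right-hand side of (7)."""
    a = (t / r) ** (2 / p - 1)
    return (a - 1) / (a + 1)

r = 5
p = np.linspace(0.01, 1.0, 500)
plt.plot(p, thr_prop2(p, r + 1, r), label="Proposition 2, Eq. (6)")
plt.plot(p, thr_thm2(p, r + 1, r), label="Theorem 2, Eq. (7)")
plt.xlabel("p"); plt.ylabel(r"threshold on $\delta_{2r+2}$")
plt.legend(); plt.show()
```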

3. Proofs of results

3.1. Preliminaries

We begin with a definition and a few lemmas.

Definition 3 (Horn and Johnson [23]). A function $\Phi(x): \mathbb{R}^n \to \mathbb{R}$ is called a symmetric gauge if it is a norm on $\mathbb{R}^n$ and invariant under arbitrary permutations and sign changes of the elements of $x$.

Lemma 2 (Zhang and Qiu [19, Corollary 2.3]). Let $\Phi$ be a symmetric gauge function and $f: [0, \infty) \to [0, \infty)$ be a concave function with $f(0) = 0$. Then, for $A, B \in \mathbb{R}^{n_1 \times n_2}$,
$$\Phi\big(f(\sigma(A)) - f(\sigma(B))\big) \le \Phi\big(f(\sigma(A - B))\big),$$
where $f(x) = (f(x_1), \ldots, f(x_n))^T$.

Lemma 3. Let $A, B \in \mathbb{R}^{n_1 \times n_2}$. For any $p \in (0,1]$,
$$\sum_{i=1}^{n} \sigma_i^p(A - B) \ge \sum_{i=1}^{n} \big|\sigma_i^p(A) - \sigma_i^p(B)\big|. \qquad (9)$$

Proof. It is obvious that $\Phi(x) = \sum_{i=1}^{n} |x_i|$ and $f(x) = x^p$, $p \in (0,1)$, satisfy the conditions of Lemma 2. Thus, (9) is an immediate result for $p \in (0,1)$. Moreover, (9) holds for $p = 1$ [23]. □
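Inequality (9) is easy to spot-check numerically on random matrices; a quick sketch (ours, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
A_, B_ = rng.standard_normal((6, 7)), rng.standard_normal((6, 7))
for p in (0.3, 0.5, 1.0):
    sa = np.linalg.svd(A_, compute_uv=False) ** p
    sb = np.linalg.svd(B_, compute_uv=False) ** p
    sab = np.linalg.svd(A_ - B_, compute_uv=False) ** p
    assert sab.sum() >= np.abs(sa - sb).sum()   # inequality (9)
```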

Lemma 4. Let $W = U \operatorname{diag}(\sigma(W)) V^T$ denote the SVD of $W$. If, for some $X_0$, $\|X_0 + W\|_p \le \|X_0\|_p$, then, with $X_1 = -U \operatorname{diag}(\sigma(X_0)) V^T$, we have $\|X_1 + W\|_p \le \|X_1\|_p$.

Proof. The proof easily follows from [18, Lemma 2] by replacing $\|\cdot\|_*$ with $\|\cdot\|_p$ and applying inequality (9). □

3.2. Proofs

Proof of Lemma 1. If $\mathcal{A}(X) = b$, then all feasible solutions to (3) can be represented as $X + W$ for some $W \in \mathcal{N}(\mathcal{A})$. Consequently, to prove that $X$ is the unique solution to (3), we need to show that $\|X + W\|_p^p > \|X\|_p^p$ for all $W \in \mathcal{N}(\mathcal{A})$. Applying Lemma 3, it can be written that
$$\|X + W\|_p^p = \sum_{i=1}^{n} \sigma_i^p(X + W) \ge \sum_{i=1}^{n} \big|\sigma_i^p(X) - \sigma_i^p(W)\big|$$
$$= \sum_{i=1}^{r} \big|\sigma_i^p(X) - \sigma_i^p(W)\big| + \sum_{i=r+1}^{n} \sigma_i^p(W)$$
$$\ge \sum_{i=1}^{r} \sigma_i^p(X) - \sum_{i=1}^{r} \sigma_i^p(W) + \sum_{i=r+1}^{n} \sigma_i^p(W)$$
$$> \sum_{i=1}^{r} \sigma_i^p(X) = \|X\|_p^p.$$
Here, the split of the sum uses $\sigma_i(X) = 0$ for $i > r$ (as $\operatorname{rank}(X) \le r$), and the final strict inequality follows from the assumption of the lemma. This confirms that $X$ is the unique solution. □

Proof of Theorem 1. The proof is a direct consequence of combining Lemma 4 of this paper with Theorem 1 and Lemma 5 of [18]. □

Proof of Theorem 2. For the sake of simplicity, we prove this theorem for the vector case; by virtue of Theorem 1, the matrix case will follow. Let $x^*$ denote a solution to (5) and $v = x^* - x_0$, where $x_0$ is the arbitrary vector we want to recover. Furthermore, let $S_0 \subset \{1, \ldots, n_v\}$ with $|S_0| \le r$. We partition $S_0^c = \{1, \ldots, n_v\} \setminus S_0$ into $S_1, S_2, \ldots$ with $|S_i| = t$, except possibly for the last set. As a result, $v_{S_i}$, $i \ge 0$, denotes the vector obtained by keeping the entries of $v$ indexed by $S_i$ and setting all other elements to 0.

Our proof is the same as that of [15, Theorem 3.1] except for the way in which $\|v_{S_0}\|_2$ and $\|v_{S_1}\|_2$ are bounded. Hence, we use the same notation, focus only on the bounding, and omit other details. By applying the RIP definition, we get

$$\|v_{S_0} + v_{S_1}\|_2^2 \le \frac{1}{1 - \delta_{2t}} \|A(v_{S_0} + v_{S_1})\|_2^2$$
$$= \frac{1}{1 - \delta_{2t}} \Big\langle A\Big(v - \sum_{i \ge 2} v_{S_i}\Big), A\Big(v - \sum_{i \ge 2} v_{S_i}\Big) \Big\rangle$$
$$= \frac{1}{1 - \delta_{2t}} \Big[ \|Av\|_2^2 + 2 \sum_{i \ge 2} \langle Av, -A v_{S_i} \rangle + \sum_{i,j \ge 2} \langle A v_{S_i}, A v_{S_j} \rangle \Big]. \qquad (10)$$

Now, we find upper bounds for the terms in (10). Considering the second term in (10), it can be written that
$$\langle Av, -A v_{S_i} \rangle \le \sqrt{1 + \delta_{2t}} \, \|Av\|_2 \|v_{S_i}\|_2. \qquad (11)$$
Since $\langle v_{S_i}, v_{S_j} \rangle = 0$ for $i \ne j$, [20, Lemma 2.1] implies that
$$\langle A v_{S_i}, A v_{S_j} \rangle \le \delta_{2t} \|v_{S_i}\|_2 \|v_{S_j}\|_2, \quad \forall i \ne j. \qquad (12)$$
Also,
$$\langle A v_{S_i}, A v_{S_i} \rangle \le (1 + \delta_{2t}) \|v_{S_i}\|_2^2. \qquad (13)$$
Substituting (11)–(13) into (10) and letting $\Sigma = \sum_{i \ge 2} \|v_{S_i}\|_2$, we get

$$\|v_{S_0}\|_2^2 + \|v_{S_1}\|_2^2 \le \frac{1}{1 - \delta_{2t}} \Big[ \|Av\|_2^2 + 2\sqrt{1 + \delta_{2t}} \, \|Av\|_2 \Sigma + \delta_{2t} \sum_{\substack{i,j \ge 2 \\ i \ne j}} \|v_{S_i}\|_2 \|v_{S_j}\|_2 + (1 + \delta_{2t}) \sum_{i \ge 2} \|v_{S_i}\|_2^2 \Big]$$
$$= \frac{1}{1 - \delta_{2t}} \Big[ \|Av\|_2^2 + 2\sqrt{1 + \delta_{2t}} \, \|Av\|_2 \Sigma + \delta_{2t} \Sigma^2 + \sum_{i \ge 2} \|v_{S_i}\|_2^2 \Big]$$
$$\le \frac{1}{1 - \delta_{2t}} \Big[ \|Av\|_2^2 + 2\sqrt{1 + \delta_{2t}} \, \|Av\|_2 \Sigma + (\delta_{2t} + 1) \Sigma^2 \Big], \qquad (14)$$

where, for the last inequality, we use $\sum_{i \ge 2} \|v_{S_i}\|_2^2 \le \big(\sum_{i \ge 2} \|v_{S_i}\|_2\big)^2$. Inequality (14) can be reduced to
$$\|v_{S_0}\|_2 \le \frac{1}{\sqrt{1 - \delta_{2t}}} \Big[ \|Av\|_2 + \sqrt{1 + \delta_{2t}} \, \Sigma \Big],$$
$$\|v_{S_1}\|_2 \le \frac{1}{\sqrt{1 - \delta_{2t}}} \Big[ \|Av\|_2 + \sqrt{1 + \delta_{2t}} \, \Sigma \Big].$$

The rest of the proof is similar to [15, Theorem 3.1] with the new parameters $\lambda = 2/\sqrt{1 - \delta_{2t}}$ and $\mu = \big(\sqrt{1 + \delta_{2t}}/\sqrt{1 - \delta_{2t}}\big)(r/t)^{1/p - 1/2}$. Therefore, in this proof, from $\mu < 1$, we get
$$\delta_{2t} < \frac{(t/r)^{2/p-1} - 1}{(t/r)^{2/p-1} + 1},$$
and, after some simple algebraic manipulations, we obtain
$$\|x_0 - x^*\|_p \le C'_1 \|x_0 - x_0^{(r)}\|_p + D'_1 r^{1/p-1/2} \epsilon,$$
$$\|x_0 - x^*\|_2 \le C'_2 t^{1/2-1/p} \|x_0 - x_0^{(r)}\|_p + D'_2 \epsilon,$$

with constants
$$C'_1 = \frac{2^{2/p - 1}(1 + \mu^p)^{1/p}}{(1 - \mu^p)^{1/p}}, \qquad D'_1 = \frac{2^{2/p - 1} \lambda}{(1 - \mu^p)^{1/p}},$$
$$C'_2 = \left(1 + 2\sqrt{\frac{1 + \delta_{2t}}{1 - \delta_{2t}}}\right) \frac{2^{2/p - 1}}{(1 - \mu^p)^{1/p}},$$
$$D'_2 = 2\lambda + \left(1 + 2\sqrt{\frac{1 + \delta_{2t}}{1 - \delta_{2t}}}\right) \frac{2^{1/p - 1} \lambda}{(1 - \mu^p)^{1/p}}. \qquad \square$$
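For concreteness, these constants are simple to evaluate for given $p$, $\delta_{2t}$, $t$, and $r$; a small sketch (ours, assuming NumPy; the sample values are arbitrary):

```python
import numpy as np

def theorem2_constants(p, delta, t, r):
    """Evaluate lambda, mu and the constants C'1, D'1, C'2, D'2 above."""
    lam = 2.0 / np.sqrt(1.0 - delta)
    mu = np.sqrt((1.0 + delta) / (1.0 - delta)) * (r / t) ** (1.0 / p - 0.5)
    assert mu < 1.0, "Theorem 2 requires mu < 1, i.e., condition (7)."
    den = (1.0 - mu ** p) ** (1.0 / p)
    c1 = 2.0 ** (2.0 / p - 1.0) * (1.0 + mu ** p) ** (1.0 / p) / den
    d1 = 2.0 ** (2.0 / p - 1.0) * lam / den
    bracket = 1.0 + 2.0 * np.sqrt((1.0 + delta) / (1.0 - delta))
    c2 = bracket * 2.0 ** (2.0 / p - 1.0) / den
    d2 = 2.0 * lam + bracket * 2.0 ** (1.0 / p - 1.0) * lam / den
    return c1, d1, c2, d2

print(theorem2_constants(p=0.5, delta=0.2, t=6, r=5))
```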

4. Conclusion

In the affine rank minimization problem, it has been experimentally verified that Schatten-p quasi-norm minimization is superior to nuclear norm minimization. In this paper, we established a theoretical background for this observation and proved that, under a weaker sufficient condition than that of nuclear norm minimization, global minimization of the Schatten-p quasi-norm subject to compressed affine measurements leads to unique recovery of low-rank matrices. To show that this approach is robust to noise and to the matrix being only approximately low-rank, we generalized some RIP-based results on $\ell_p$ quasi-norm minimization to Schatten-p quasi-norm minimization.

References

[1] B. Recht, M. Fazel, P.A. Parrilo, Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, SIAM Rev. 55 (2010) 471–501.

[2] E.J. Candès, Y. Plan, Matrix completion with noise, Proc. IEEE 98 (6) (2010) 925–936.

[3] Y. Amit, M. Fink, N. Srebro, S. Ullman, Uncovering shared structures in multiclass classification, in: Proceedings of the 24th International Conference on Machine Learning, 2007.

[4] D. Gross, Y.K. Liu, S.T. Flammia, S. Becker, J. Eisert, Quantum state tomography via compressed sensing, Phys. Rev. Lett. 105 (15) (2010) 150401.

[5] R. Parhizkar, A. Karbasi, S. Oh, M. Vetterli, Calibration using matrix completion with application to ultrasound tomography, IEEE Trans. Signal Process. 61 (20) (2013) 4923–4933.

[6] A. Koochakzadeh, M. Malek-Mohammadi, M. Babaie-Zadeh, M. Skoglund, Multi-antenna assisted spectrum sensing in spatially correlated noise environments, Signal Process. 108 (2015) 69–76.

[7] M. Malek-Mohammadi, M. Jansson, A. Owrang, A. Koochakzadeh, M. Babaie-Zadeh, DOA estimation in partially correlated noise using low-rank/sparse matrix decomposition, in: IEEE Sensor Array and Multichannel Signal Processing Workshop, 2014.

[8] D. Kalogerias, A. Petropulu, Matrix completion in colocated MIMO radar: recoverability, bounds & theoretical guarantees, IEEE Trans. Signal Process. 62 (2) (2014) 309–321.

[9] B. Recht, W. Xu, B. Hassibi, Null space conditions and thresholds for rank minimization, Math. Program. 127 (1) (2011) 175–202.

[10] R.G. Baraniuk, Compressive sensing, IEEE Signal Process. Mag. 24 (4) (2007) 118–124.

[11] K. Mohan, M. Fazel, Iterative reweighted algorithms for matrix rank minimization, J. Mach. Learn. Res. 13 (2012) 3253–3285.

[12] G. Marjanovic, V. Solo, On ℓq optimization and matrix completion, IEEE Trans. Signal Process. 60 (11) (2012) 5714–5724.

[13] A. Majumdar, R. Ward, Some empirical advances in matrix completion, Signal Process. 91 (5) (2011) 1334–1338.

[14] R. Gribonval, M. Nielsen, Highly sparse representations from dictionaries are unique and independent of the sparseness measure, Appl. Comput. Harmon. Anal. 22 (2007) 335–355.

[15] S. Foucart, M.-J. Lai, Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1, Appl. Comput. Harmon. Anal. 26 (3) (2009) 397–407.

[16] R. Chartrand, V. Staneva, Restricted isometry properties and nonconvex compressive sensing, Inverse Probl. 24 (3) (2008).

[17] M. Wang, W. Xu, A. Tang, On the performance of sparse recovery via ℓp-minimization (0 ≤ p ≤ 1), IEEE Trans. Inf. Theory 57 (11) (2011) 7255–7278.

[18] S. Oymak, K. Mohan, M. Fazel, B. Hassibi, A simplified approach to recovery conditions for low-rank matrices, in: Proceedings of the IEEE International Symposium on Information Theory (ISIT), 2011, pp. 2318–2322.

[19] Y. Zhang, L. Qiu, From subadditive inequalities of singular values to triangle inequalities of canonical angles, SIAM J. Matrix Anal. Appl. 31 (2010) 1606–1620.

[20] E. Candès, The restricted isometry property and its implications for compressed sensing, C. R. Acad. Sci. Paris, Ser. I 346 (2008) 589–592.

[21] R. Wu, D. Chen, The improved bounds of restricted isometry constant for recovery via ℓp-minimization, IEEE Trans. Inf. Theory 59 (9) (2013).

[22] Y. Hsia, R. Sheu, On RIC bounds of compressed sensing matrices for approximating sparse solutions using ℓq quasi norms, arXiv preprint arXiv:1312.3379.

[23] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1990.