p-norms and sparse signals
l1, l2 and max norms
This section lists some simple and useful results on the relationships between different \(p\)-norms. We also discuss some interesting properties of the \(l_1\) norm specifically.
Let \(v \in \CC^N\). Let the entries in \(v\) be represented as
\[v_i = r_i e^{j \theta_i}\]
where \(r_i = | v_i |\), with the convention that \(\theta_i = 0\) whenever \(r_i = 0\).
The sign vector for \(v\), denoted by \(\sgn(v)\), is defined as
\[\sgn(v) = \begin{pmatrix} \sgn(v_1) & \dots & \sgn(v_N) \end{pmatrix}^T\]
where
\[\sgn(v_i) = \begin{cases} e^{j \theta_i} & \text{if } r_i \neq 0; \\ 0 & \text{if } r_i = 0. \end{cases}\]
For any \(v \in \CC^N\):
\[\langle v, \sgn(v) \rangle = \sum_{i=1}^N v_i \overline{\sgn(v_i)} = \sum_{i=1}^N r_i e^{j \theta_i} e^{-j \theta_i} = \sum_{i=1}^N r_i = \| v \|_1.\]
Note that whenever \(v_i = 0\), the corresponding \(0\) entry in \(\sgn(v)\) has no effect on the sum.
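As a quick numerical sanity check (an illustrative sketch using NumPy, not part of the original derivation), we can verify the identity \(\langle v, \sgn(v) \rangle = \| v \|_1\) on a random complex vector. The helper `sgn` below implements the text's convention explicitly, since NumPy's own `sign` has different semantics for complex inputs across versions:

```python
import numpy as np

def sgn(v):
    """Entrywise sign: v_i / |v_i|, with 0 wherever v_i = 0 (the text's convention)."""
    out = np.zeros_like(v, dtype=complex)
    nz = v != 0
    out[nz] = v[nz] / np.abs(v[nz])
    return out

rng = np.random.default_rng(0)
v = rng.normal(size=8) + 1j * rng.normal(size=8)
v[3] = 0  # include a zero entry to exercise the convention

# <v, sgn(v)> with the conjugate on the second argument:
# sum_i v_i * conj(sgn(v)_i) = sum_i r_i = ||v||_1
ip = np.sum(v * np.conj(sgn(v)))
print(ip.real, np.abs(v).sum())
```

The inner product comes out real and equal to the \(l_1\) norm, and the zero entry contributes nothing.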
Suppose \(v \in \CC^N\). Then
\[\| v \|_2 \leq \| v \|_1 \leq \sqrt{N} \| v \|_2.\]
For the lower bound, we proceed as follows:
\[\| v \|_1^2 = \left( \sum_{i=1}^N | v_i | \right)^2 \geq \sum_{i=1}^N | v_i |^2 = \| v \|_2^2.\]
This gives us
\[\| v \|_2 \leq \| v \|_1.\]
We can write the \(l_1\) norm as
\[\| v \|_1 = \langle v, \sgn(v) \rangle.\]
By the Cauchy-Schwarz inequality we have
\[\langle v, \sgn(v) \rangle \leq \| v \|_2 \| \sgn(v) \|_2.\]
Since \(\sgn(v)\) has at most \(N\) non-zero entries, each with magnitude 1,
\[\| \sgn(v) \|_2^2 \leq N \implies \| \sgn(v) \|_2 \leq \sqrt{N}.\]
Thus, we get
\[\| v \|_1 \leq \sqrt{N} \| v \|_2.\]
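The two-sided bound can be checked numerically on a random complex vector (an illustrative sketch using NumPy, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
v = rng.normal(size=N) + 1j * rng.normal(size=N)

l1 = np.abs(v).sum()      # ||v||_1
l2 = np.linalg.norm(v)    # ||v||_2

# ||v||_2 <= ||v||_1 <= sqrt(N) ||v||_2
print(l2 <= l1 <= np.sqrt(N) * l2)
```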
Let \(v \in \CC^N\). Then
\[\| v \|_{\infty}^2 = \max_i | v_i |^2 \leq \sum_{i=1}^N | v_i |^2 = \| v \|_2^2 \leq N \max_i | v_i |^2 = N \| v \|_{\infty}^2.\]
Thus
\[\| v \|_{\infty} \leq \| v \|_2 \leq \sqrt{N} \| v \|_{\infty}.\]
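Again as an illustrative NumPy sketch (not part of the original text), the \(l_{\infty}\)-\(l_2\) bound holds on a random complex vector:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16
v = rng.normal(size=N) + 1j * rng.normal(size=N)

l2 = np.linalg.norm(v)    # ||v||_2
linf = np.abs(v).max()    # ||v||_inf

# ||v||_inf <= ||v||_2 <= sqrt(N) ||v||_inf
print(linf <= l2 <= np.sqrt(N) * linf)
```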
Let \(v \in \CC^N\). Let \(1 \leq p \leq q \leq \infty\). Then
\[\| v \|_q \leq \| v \|_p \leq N^{\frac{1}{p} - \frac{1}{q}} \| v \|_q.\]
Let \(\OneVec \in \CC^N\) be the vector of all ones, i.e. \(\OneVec = (1, \dots, 1)\). Let \(v \in \CC^N\) be an arbitrary vector. Let \(| v |\) denote the vector of absolute values of entries in \(v\), i.e. \(|v|_i = |v_i| \; \Forall \; 1 \leq i \leq N\). Then
\[\OneVec^T | v | = \sum_{i=1}^N | v_i | = \| v \|_1.\]
Finally, since \(\OneVec\) has only real entries, its transpose and Hermitian transpose are the same:
\[\OneVec^H | v | = \OneVec^T | v | = \| v \|_1.\]
Let \(\OneMat \in \CC^{N \times N}\) be the square matrix of all ones. Let \(v \in \CC^N\) be an arbitrary vector. Then
\[| v |^T \OneMat | v | = \| v \|_1^2.\]
We know that
\[\OneMat = \OneVec \OneVec^T.\]
Thus,
\[| v |^T \OneMat | v | = | v |^T \OneVec \OneVec^T | v | = \left( \OneVec^T | v | \right)^2 = \| v \|_1^2.\]
We used the fact that \(\| v \|_1 = \OneVec^T | v |\).
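A quick NumPy check of this quadratic-form identity (an illustrative sketch, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6
v = rng.normal(size=N) + 1j * rng.normal(size=N)

abs_v = np.abs(v)           # the vector |v|
OneMat = np.ones((N, N))    # matrix of all ones = 1 1^T

# |v|^T OneMat |v| should equal ||v||_1^2
quad = abs_v @ OneMat @ abs_v
print(quad, abs_v.sum() ** 2)
```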
The \(k\)-th largest (in magnitude) entry of a vector \(x \in \CC^N\), denoted by \(x_{(k)}\), obeys
\[| x_{(k)} | \leq \frac{\| x \|_1}{k}.\]
Let \(n_1, n_2, \dots, n_N\) be a permutation of \(\{ 1, 2, \dots, N \}\) such that
\[| x_{n_1} | \geq | x_{n_2} | \geq \dots \geq | x_{n_N} |.\]
Thus, the \(k\)-th largest entry in \(x\) is \(x_{n_k}\). It is clear that
\[x_{(k)} = x_{n_k}.\]
Obviously
\[\| x \|_1 = \sum_{i=1}^N | x_{n_i} | \geq \sum_{i=1}^k | x_{n_i} |.\]
Similarly
\[\sum_{i=1}^k | x_{n_i} | \geq k | x_{n_k} | = k | x_{(k)} |.\]
Thus
\[k | x_{(k)} | \leq \| x \|_1 \implies | x_{(k)} | \leq \frac{\| x \|_1}{k}.\]
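This bound is easy to verify for every \(k\) at once (an illustrative NumPy sketch, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 10
x = rng.normal(size=N) + 1j * rng.normal(size=N)

mags = np.sort(np.abs(x))[::-1]   # |x_(1)| >= |x_(2)| >= ... >= |x_(N)|
l1 = np.abs(x).sum()

# |x_(k)| <= ||x||_1 / k for every k
ok = all(mags[k - 1] <= l1 / k for k in range(1, N + 1))
print(ok)
```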
Sparse signals
In this section we explore some useful properties of \(\Sigma_K\), the set of \(K\)-sparse signals in standard basis for \(\CC^N\).
We recall that
\[\Sigma_K = \{ x \in \CC^N : \| x \|_0 \leq K \}.\]
We established before that this set is a union of \(\binom{N}{K}\) subspaces of \(\CC^N\), each of which is constructed by an index set \(\Lambda \subset \{1, \dots, N \}\) with \(| \Lambda | = K\) choosing \(K\) specific dimensions of \(\CC^N\).
We first present some lemmas which connect the \(l_1\), \(l_2\) and \(l_{\infty}\) norms of vectors in \(\Sigma_K\).
Suppose \(u \in \Sigma_K\). Then
\[\frac{\| u \|_1}{\sqrt{K}} \leq \| u \|_2 \leq \sqrt{K} \| u \|_{\infty}.\]
We can write the \(l_1\) norm as
\[\| u \|_1 = \langle u, \sgn (u) \rangle.\]
By the Cauchy-Schwarz inequality we have
\[\langle u, \sgn (u) \rangle \leq \| u \|_2 \| \sgn (u) \|_2.\]
Since \(u \in \Sigma_K\), \(\sgn(u)\) has at most \(K\) non-zero entries, each with magnitude 1. Thus,
\[\| \sgn (u) \|_2^2 \leq K \implies \| \sgn (u) \|_2 \leq \sqrt{K}.\]
Thus we get the lower bound
\[\| u \|_1 \leq \sqrt{K} \| u \|_2 \implies \frac{\| u \|_1}{\sqrt{K}} \leq \| u \|_2.\]
Now \(| u_i | \leq \max_i | u_i | = \| u \|_{\infty}\), so we have
\[\| u \|_2^2 = \sum_{i= 1}^{N} | u_i |^2 \leq K \| u \|_{\infty}^2,\]
since there are only \(K\) non-zero terms in the expansion of \(\| u \|_2^2\).
This establishes the upper bound:
\[\| u \|_2 \leq \sqrt{K} \| u \|_{\infty}.\]
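Both bounds can be checked on a randomly generated \(K\)-sparse complex vector (an illustrative NumPy sketch, not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(5)
N, K = 32, 5
u = np.zeros(N, dtype=complex)
support = rng.choice(N, size=K, replace=False)   # K nonzero positions
u[support] = rng.normal(size=K) + 1j * rng.normal(size=K)

l1 = np.abs(u).sum()      # ||u||_1
l2 = np.linalg.norm(u)    # ||u||_2
linf = np.abs(u).max()    # ||u||_inf

# ||u||_1 / sqrt(K) <= ||u||_2 <= sqrt(K) ||u||_inf
print(l1 / np.sqrt(K) <= l2 <= np.sqrt(K) * linf)
```

Note that the factor is \(\sqrt{K}\), not \(\sqrt{N}\): for sparse vectors the norms are much more tightly coupled than the general bounds of the previous section suggest.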