Volume 21 Issue 1
Mar.  2023

Jue Lyu, Dong-Jie Bi, Bo Liu, Guo Yi, Xue-Peng Zheng, Xi-Feng Li, Li-Biao Peng, Yong-Le Xie, Yi-Ming Zhang, Ying-Li Bai. Compressive near-field millimeter wave imaging algorithm based on Gini index and total variation mixed regularization[J]. Journal of Electronic Science and Technology, 2023, 21(1): 100191. doi: 10.1016/j.jnlest.2023.100191

Compressive near-field millimeter wave imaging algorithm based on Gini index and total variation mixed regularization

doi: 10.1016/j.jnlest.2023.100191
More Information
  • Author Bio:

    Jue Lyu was born in Guizhou, China in 1989. He received the B.S. degree from University of Electronic Science and Technology of China (UESTC), Chengdu, China in 2011. He is currently pursuing the Ph.D. degree in instrument science and technology with the School of Automation Engineering, UESTC. His research interests include millimeter wave imaging, compressed sensing, and signal processing

    Dong-Jie Bi was born in Shandong, China in 1985. He received the B.S., M.S., and Ph.D. degrees from UESTC in 2009, 2011, and 2015, respectively. He is currently a senior engineer with the School of Automation Engineering, UESTC. His research interests include millimeter wave imaging, compressed sensing, and signal processing

    Bo Liu was born in Heilongjiang, China in 1989. He received the B.S. and Ph.D. degrees from Dalian University of Technology (DUT), Dalian, China in 2012 and 2018, respectively. He is currently a senior engineer with Shanghai Spaceflight Precision Machinery Institute, Shanghai, China. His research interests include test and inspection technology in aerospace

    Guo Yi was born in Hunan, China in 1986. He received the B.S. and M.S. degrees from Harbin Institute of Technology (HIT), Harbin, China in 2008 and 2010, respectively. He is currently a senior engineer with Shanghai Spaceflight Precision Machinery Institute. His research interests include test and inspection technology in aerospace

    Xue-Peng Zheng was born in Henan, China in 1994. He received the B.S. and M.S. degrees from Nanchang Hangkong University (NCHU), Nanchang, China in 2009 and 2011, respectively. He is currently an engineer with Shanghai Spaceflight Precision Machinery Institute. His main research direction is intelligent nondestructive testing

    Xi-Feng Li was born in Shaanxi, China in 1981. He received the B.S., M.S., and Ph.D. degrees from UESTC in 2005, 2008, and 2014, respectively. He is currently an associate professor with the School of Automation Engineering, UESTC. His research interests include statistical signal processing, machine learning, and information theory

    Li-Biao Peng was born in Hunan, China in 1993. He received the B.S. and Ph.D. degrees from UESTC in 2015 and 2022, respectively. He is currently a postdoctoral researcher with the School of Automation Engineering, UESTC. His research interests include neural networks, system dynamics, and machine learning

    Yong-Le Xie’s photograph and biography are not available at the time of publication

    Yi-Ming Zhang received the B.E. degree from UESTC in 2021. He is currently a Ph.D. candidate with the Electrical Engineering Department, Northwestern University, Evanston, USA. His research interests include millimeter wave imaging, wireless communications, and machine learning

    Ying-Li Bai was born in Hebei, China in 1984. He received the B.S. and M.S. degrees from UESTC in 2008 and 2011, respectively. He is currently an engineer with the Institute of Telecommunication Satellite, China Academy of Space Technology (CAST), Beijing, China. His current research interest focuses on signal processing

  • Authors’ information: xieyongle@uestc.edu.cn
  • Received Date: 2020-07-28
  • Accepted Date: 2023-03-29
• Revised Date: 2023-03-22
  • Available Online: 2023-04-08
  • Publish Date: 2023-03-25




Abstract: A compressive near-field millimeter wave (MMW) imaging algorithm is proposed. According to compressed sensing (CS) theory, the compressive near-field MMW imaging process can be regarded as reconstructing an image from under-sampled sparse data. The Gini index (GI) has been found to be the only sparsity measure that possesses all of the desirable sparsity attributes, which are called Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates, and Babies. By combining GI with the total variation (TV) operator, a compressive near-field MMW imaging model with GI-TV mixed regularization is proposed, together with a corresponding algorithm based on a primal-dual framework. Experimental results demonstrate that the proposed GI-TV mixed regularization algorithm has superior convergence and stability performance compared with the widely used l1-TV mixed regularization algorithm.

• In non-destructive testing and evaluation, the millimeter wave (MMW) has drawn significant attention because of its many advantages [1]. With a millimeter-level wavelength, MMW can easily penetrate composite materials and produce a high-resolution image by interacting with the internal structure of the materials. Compressed sensing (CS) [2,3] has demonstrated remarkable application potential in many areas, e.g., radar imaging [4–7] and medical imaging [8–10]. CS techniques enable MMW imaging systems to reconstruct an image from a highly limited amount of data sampled from the specimen under test (SUT), which significantly reduces the time cost of data acquisition. To solve the CS-based MMW imaging problem, regularization, which represents the signal sparsity, is required. The compressive MMW image reconstruction model typically combines the l1-norm and total variation (TV) operator for sparsity regularization [4,11,12]. The l1-norm is a widely applied measure of sparsity, but it does not satisfy all of the intuitive and desirable attributes of sparsity measures. These attributes, known as Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates, and Babies, are necessary for a comprehensive measure of data sparsity [13].

The Gini index (GI) was originally proposed in economics to measure the inequality of wealth [14,15]. From a signal processing perspective, inequality in wealth is equivalent to efficiency of representation, i.e., sparsity. In addition, GI is the only sparsity measure that possesses all of the attributes mentioned above [13,16]. GI is normalized for any vector, and its value does not depend on the vector size; thus, it allows us to compare the sparsity of vectors of different sizes. It is also scale-invariant and independent of the total energy of the signal. As a result, it is well suited to measuring signal sparsity in various transform domains. Moreover, GI induces the GI-norm, which can be considered a superior replacement for the l1-norm; thus, we use GI for sparsity regularization in this study.

The TV operator is also a widely used sparsity regularization technique in signal and image processing [17–20]. In this paper, a compressive MMW image reconstruction model with GI-TV mixed regularization is proposed, together with a corresponding reconstruction algorithm based on a primal-dual framework. Experimental results demonstrate that the proposed compressive near-field MMW imaging algorithm can reconstruct MMW images from data under-sampled at different rates. To verify its effectiveness, we compare the proposed GI-TV mixed regularization algorithm with the l1-TV mixed regularization algorithm under different under-sampling rates.

The remainder of this paper is organized as follows. We describe the compressive near-field MMW imaging model and the GI-TV mixed sparsity regularization in Sections 2 and 3, respectively. Section 4 presents the proposed algorithm for reconstructing an image from the GI-TV mixed regularization compressive MMW imaging model. The results of experiments conducted to verify the effectiveness of the proposed algorithm are discussed in Section 5.

• The target SUT, which was assumed to be pointwise characterized by the reflectivity function $f\,(x{\rm{,}}\;y{\rm{,}}\;{z_h})$, was scanned by a near-field MMW sampling antenna over a gridded plane at the vertical gap ${z_h}$. It was assumed that the attenuation of the MMW signal during transmission does not significantly affect near-field image focusing. Multiple reflections, dispersion, and polarization changes were not considered in the image reconstruction, because the sampled data were assumed to be a single reflection from the target [21]. Let ${k'_x}{\rm{,}}\;{k'_y} \in ( - 2{k_c}{\rm{,}}\;2{k_c})$ be the Fourier transform variables corresponding to $ x' $ and $ y' $, respectively, where ${k_c} = {\omega _c}/c$ is the wave number with the temporal angular frequency ${\omega _c}$ and light speed c. The backscattered complex-valued reflection coefficients $s(x'{\rm{,}}\;y'{\rm{,}}\;{\omega _c})$ are then expressed as follows:

        where $ \Delta x = x' - x $ and $ \Delta y = y' - y $. With the relationship between the spherical wave and plane wave, the exponential term in (1) can be decomposed as follows:

where ${k_z} = \sqrt {4k_c^2 - {{k'}_x}^{2} - {{k'}_y}^{2}}$. By substituting (2) into (1) and unifying the coordinates, the data acquisition process can be defined as follows:

where $ {F_{{\rm{2D}}}} $ is the 2-dimensional (2D) fast Fourier transform (FFT), and $F_{{\rm{2D}}}^{\,- 1}$ is the inverse 2D FFT. The inverse process of (3) is expressed as follows:

For the compressive MMW imaging system, the under-sampled data were collected by antennas located at randomly selected points over a 2D plane. The under-sampled data $ {\bf{s}} \in {\mathbb{C}^M} $ can be represented as follows:

where ${\bf{f}} \in {\mathbb{C}^N}$ is the MMW image, $\mathbb{C}$ stands for the complex domain, and M and N are the numbers of elements in ${\bf{s}}$ and ${\bf{f}}$, respectively. ${\Cambriabfont\text{Φ}} = {\bf{U}}F_{{\rm{2D}}}^{ \,- 1}\{ \mathcal{E}[{F_{{\rm{2D}}}}\{ \cdot \} ]\} :{\mathbb{C}^N} \mapsto {\mathbb{C}^M}\;(M \ll N)$ is the MMW measurement operator, $ {\bf{U}}:{\mathbb{C}^N} \mapsto {\mathbb{C}^M} $ is the random under-sampling operator, and $ \mathcal{E} $ is the phase compensation term in (3). In addition, $ {\bf{n}} \in {\mathbb{C}^M} $ is zero-mean complex Gaussian noise. According to the CS theory, MMW image reconstruction from the under-sampled data is an ill-posed linear inverse problem that can be treated as finding the sparse solution of a regularization-penalized least-squares problem:

where $ {\left\| \cdot \right\|_2} $ is the l2-norm, $ R({\bf{f}}) $ is the regularization term, and $\mathcal{C}$ is the feasible interval of the MMW image elements, which restricts all elements to the nonnegative range $[{f_{{\rm{min}}}}{\rm{,}}\;{f_{{\rm{max}}}}]$.
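To make the measurement model concrete, the following NumPy sketch builds a toy version of the operator ${\Cambriabfont\text{Φ}} = {\bf{U}}F_{{\rm{2D}}}^{ \,- 1}\{ \mathcal{E}[{F_{{\rm{2D}}}}\{ \cdot \} ]\}$ and its exact adjoint (the reverse process used later as ${\Cambriabfont\text{Φ}}^{\rm{T}}$) from a random under-sampling pattern and a precomputed phase compensation array. The function name, array shapes, and the way $\mathcal{E}$ is passed in are illustrative assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def make_mmw_operator(nx, ny, undersampling_rate, E, seed=None):
    """Toy MMW measurement operator Phi = U F2D^{-1}{ E [ F2D{.} ] } and its adjoint.

    E is an (nx, ny) complex array standing in for the phase compensation term in (3),
    assumed precomputed on the (k'_x, k'_y) grid; U keeps a random subset of
    M out of N = nx * ny samples. Shapes and names are illustrative.
    """
    rng = np.random.default_rng(seed)
    n = nx * ny
    m = int(round(undersampling_rate * n))
    keep = rng.choice(n, size=m, replace=False)          # random under-sampling pattern U

    def forward(f):                                       # Phi: C^N -> C^M
        F = np.fft.fft2(np.reshape(f, (nx, ny)))          # F2D
        s_full = np.fft.ifft2(E * F)                      # F2D^{-1}{ E [.] }
        return s_full.ravel()[keep]                       # U

    def adjoint(s):                                       # Phi^T: C^M -> C^N (exact adjoint of forward)
        s_full = np.zeros(n, dtype=complex)
        s_full[keep] = s                                  # zero-fill the missing samples
        F = np.fft.fft2(np.reshape(s_full, (nx, ny)))
        return np.fft.ifft2(np.conj(E) * F).ravel()

    return forward, adjoint
```

Given such a pair of callables, the noisy data in (5) can be simulated as s = forward(f) + n, and the gradient step used later in the MM approximation follows directly from adjoint.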

      • Given a vector ${\bf{v}} = [{v_1}{\rm{,}}\;{v_2}{\rm{,}}\; \cdots {\rm{,}}\;{v_N}]$ , with its elements re-ordered and represented by ${v_{(n)}}$ for $n = 1{\rm{,}}\;2{\rm{,}}\; \cdots {\rm{,}}\;N$, where $|{v_{(1)}}|\, \le \,|{v_{(2)}}| \le \cdots \le \,|{v_{(N)}}|$, the definition of GI is given as follows:

        where $ {\left\| \cdot \right\|_1} $ is the l1-norm.

From (7), it is obvious that the signal has a few relatively dominant elements when GI is large, whereas its elements show no apparent differences when GI is small. In other words, for a compressive MMW imaging problem, finding the sparse solution is equivalent to maximizing the GI value. Consequently, we define the GI-norm as

        where ${\bf{w}} = [{w_1}{\rm{,}}\;{w_2}{\rm{,}}\; \cdots {\rm{,}}\;{w_N}] = \left[ {\dfrac{{2N - 2{i_1} + 1}}{{N{{\left\| {\bf{v}} \right\|}_1}}}{\rm{,}}\;\dfrac{{2N - 2{i_2} + 1}}{{N{{\left\| {\bf{v}} \right\|}_1}}}{\rm{,}}\; \cdots {\rm{,}}\;\dfrac{{2N - 2{i_N} + 1}}{{N{{\left\| {\bf{v}} \right\|}_1}}}} \right]$, $ {i_n} $ is the index of the element ${v_{(n)}}$ in the vector $[{v_{(1)}}{\rm{,}}\;{v_{(2)}}{\rm{,}}\; \cdots {\rm{,}}\;{v_{(N)}}]$, and $ \odot $ is the Hadamard product. It is obvious that the GI-norm can be considered as a specifically weighted l1-norm, and it is significantly more convenient than the original GI, because (6) is a minimization problem. Here, the dual norm of the GI-norm is given in the following form:

        where ${\mathcal{B}_{{\rm{GIN}}}} = \{ {{\bf{u}}_1} \in {\mathbb{C}^N}:|{{{{\bf{u}}_1}(n)} \mathord{\left/ {\vphantom {{{{\bf{u}}_1}(n)} {{w_n}}}} \right. } {{w_n}}}| \le 1{\rm{,}}\;n = 1{\rm{,}}\;2{\rm{,}}\; \cdots {\rm{,}}\;N\}$, and $ {{\bf{u}}_1} $ is the dual variable of $ {\bf{v}} $.
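As a numerical companion to the definitions above, the sketch below evaluates GI (in the normalization of Hurley and Rickard [13]), the GI-norm weights ${\bf{w}}$, the GI-norm itself, and the projection onto $ {\mathcal{B}_{{\rm{GIN}}}} $. With ${\bf{w}}$ computed from ${\bf{v}}$ itself, the GI-norm equals $1 - {\rm{GI}}({\bf{v}})$, so minimizing it maximizes GI. The helper names and the small numerical guards are ours.

```python
import numpy as np

def gini_index(v):
    """GI of (7): close to 1 when a few entries dominate, 0 when all entries are equal."""
    a = np.sort(np.abs(np.ravel(v)))                      # |v_(1)| <= ... <= |v_(N)|
    N = a.size
    l1 = max(a.sum(), 1e-30)
    k = np.arange(1, N + 1)
    return 1.0 - 2.0 * np.sum((a / l1) * (N - k + 0.5) / N)

def gin_weights(v):
    """Weights w of (8): w_n = (2N - 2 i_n + 1) / (N ||v||_1), i_n = ascending rank of |v_n|."""
    v = np.ravel(v)
    N = v.size
    ranks = np.empty(N, dtype=int)
    ranks[np.argsort(np.abs(v))] = np.arange(1, N + 1)    # i_n
    return (2 * N - 2 * ranks + 1) / (N * max(np.sum(np.abs(v)), 1e-30))

def gi_norm(v, w=None):
    """GI-norm ||w ⊙ v||_1, a specifically weighted l1-norm; equals 1 - GI(v) when w = gin_weights(v)."""
    w = gin_weights(v) if w is None else np.ravel(w)
    return float(np.sum(np.abs(w * np.ravel(v))))

def project_B_GIN(u, w):
    """Projection onto B_GIN = {u : |u(n) / w_n| <= 1}: clip each entry's magnitude to w_n."""
    w = np.reshape(w, np.shape(u))
    scale = np.minimum(1.0, w / np.maximum(np.abs(u), 1e-30))
    return u * scale
```

For example, gini_index([0, 0, 0, 1]) = 0.75 whereas gini_index([1, 1, 1, 1]) = 0, consistent with the interpretation above.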

        The TV operator is a widely used edge-preserving regularization technique that represents the sparsity of an image in the gradient field:

        where $ {\rm{Grad}}( \cdot ):{\mathbb{C}^N} \mapsto {\mathbb{C}^N} \times {\mathbb{C}^N} $ is the discrete 2D gradient operator. The dual form of the TV operator is given as follows:

        where $ {\rm{Div}}( \cdot ):{\mathbb{C}^N} \times {\mathbb{C}^N} \mapsto {\mathbb{C}^N} $ is the discrete divergence operator, and ${\mathcal{B}_{{\rm{TV}}}} = \{ {{\bf{u}}_2} \in {\mathbb{C}^N} \times {\mathbb{C}^N}:||{{\bf{u}}_2}(n)|{|_2} \le 1{\rm{,}}\; n = 1{\rm{,}}\;2{\rm{,}}\; \cdots {\rm{,}}\;N\}$, and $ {{\bf{u}}_2} $ is the dual variable of $ {\bf{f}} $.
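For completeness, a standard finite-difference realization of these operators is sketched below: forward differences for Grad, the matching negative adjoint for Div (so that $\langle {\rm{Grad}}\,{\bf{f}}{\rm{,}}\;{\bf{u}} \rangle = - \langle {\bf{f}}{\rm{,}}\;{\rm{Div}}\,{\bf{u}} \rangle$), the isotropic TV value, and the pixel-wise projection onto $ {\mathcal{B}_{{\rm{TV}}}} $. The particular boundary handling is one common convention and an assumption of this sketch.

```python
import numpy as np

def grad2d(f):
    """Discrete 2D gradient Grad(f) via forward differences (zero at the last row/column)."""
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:-1, :] = f[1:, :] - f[:-1, :]
    gy[:, :-1] = f[:, 1:] - f[:, :-1]
    return gx, gy

def div2d(gx, gy):
    """Discrete divergence Div(u), chosen as the negative adjoint of grad2d."""
    dx = np.zeros_like(gx)
    dy = np.zeros_like(gy)
    dx[0, :] = gx[0, :]; dx[1:-1, :] = gx[1:-1, :] - gx[:-2, :]; dx[-1, :] = -gx[-2, :]
    dy[:, 0] = gy[:, 0]; dy[:, 1:-1] = gy[:, 1:-1] - gy[:, :-2]; dy[:, -1] = -gy[:, -2]
    return dx + dy

def tv(f):
    """Isotropic TV value: sum of per-pixel gradient magnitudes."""
    gx, gy = grad2d(f)
    return float(np.sum(np.sqrt(np.abs(gx) ** 2 + np.abs(gy) ** 2)))

def project_B_TV(gx, gy):
    """Pixel-wise projection onto B_TV = {u : ||u(n)||_2 <= 1 for every pixel n}."""
    mag = np.maximum(1.0, np.sqrt(np.abs(gx) ** 2 + np.abs(gy) ** 2))
    return gx / mag, gy / mag
```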

        With the definitions of the GI-norm and TV operator, the GI-TV mixed sparsity regularization function takes the following form:

where ${\Cambriabfont\text{ψ}}$ is the specific sparse transformation operator, and $ \alpha $ and $ \beta $ are parameters used to balance the contributions of the GI-norm and the TV operator. Because the mixed regularization combines multiple functions, solving (6) with the GI-TV mixed sparsity regularization directly is difficult, which motivates us to establish a specific algorithm (Section 4).
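Reusing gi_norm and tv from the sketches above, the mixed penalty of this section can be evaluated as follows; treating ${\Cambriabfont\text{ψ}}$ as a generic callable (identity, a wavelet transform, etc.) is a simplification of ours.

```python
def mixed_regularizer(f, psi, alpha, beta):
    """GI-TV mixed regularization: alpha * ||psi(f)||_GIN + beta * TV(f)."""
    return alpha * gi_norm(psi(f)) + beta * tv(f)
```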

      • To solve (6), we propose a primal-dual framework-based algorithm that can optimize (6) effectively. To avoid calculating the inverse form of the measurement operator ${\Cambriabfont\text{Φ}}$, we utilize the majorization minimization (MM) method [22,23] to convert the minimization problem in (6) to an MM-approximated problem:

        where ${{\bf{b}}^{(k)}} = {{{\hat {\bf{f}}}}^{(k)}} - {{{{\Cambriabfont\text{Φ}}^{\rm{T}}}({\Cambriabfont\text{Φ}}{{{{\hat {\bf{f}}}}}^{(k)}} - {\bf{s}})} \big/ \mu }$, $ {{{\hat {\bf{f}}}}^{(k)}} $ is the reconstructed image in the kth iteration, ${\Cambriabfont\text{Φ}}^{\rm{T}}$ is the reverse process of ${\Cambriabfont\text{Φ}}$, and $ \mu $ is the step size constant.

The Moreau envelope [24–26] is a convex approximation method defined as follows:
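In the notation of a generic convex function $g$ and smoothing parameter $\lambda > 0$ (symbols introduced here for illustration), a standard form of the Moreau envelope reads

${\mathcal{M}_{\lambda g}}({\bf{x}}) = \mathop {\min }\limits_{\bf{v}} \left\{ {g({\bf{v}}) + \dfrac{1}{{2\lambda }}\left\| {{\bf{x}} - {\bf{v}}} \right\|_2^2} \right\}$

and its minimizing argument is the proximal operator of $g$. When $g$ is the indicator function of a convex set, this proximal operator reduces to the projection onto that set, which is how the projections $ {\mathcal{P}_\mathcal{C}} $, $ {\mathcal{P}_{{\mathcal{B}_{{\rm{GIN}}}}}} $, and $ {\mathcal{P}_{{\mathcal{B}_{{\rm{TV}}}}}} $ used below arise.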

        In the proposed algorithm, the GI-norm and TV operator have approximation formulas generated by the Moreau envelope in a single iteration:

        With the Moreau approximations of the GI-norm and TV operator, and by combining the dual forms (9) and (11), the GI-TV mixed regularization can be approximated in two ways:

        where ${ {\Cambriabfont\text{ψ}}^{{*}}}$ is the inverse transform operator of ${\Cambriabfont\text{ψ}}$. Therefore, we can split the minimization problem of (13) into two subproblems:

        and the kth primal solutions of (19) and (20) can be calculated explicitly as follows:

        where $ {\mathcal{P}_\mathcal{C}}( \cdot ) $ is the projection on the set $ \mathcal{C} $.

        With the gradient projection method [19], the dual solutions of (19) and (20) can be calculated iteratively as follows:

        where $ {\mathcal{P}_{{\mathcal{B}_{{\rm{GIN}}}}}}( \cdot ) $ and $ {\mathcal{P}_{{\mathcal{B}_{{\rm{TV}}}}}}( \cdot ) $ are the projections on sets $ {\mathcal{B}_{{\rm{GIN}}}} $ and $ {\mathcal{B}_{{\rm{TV}}}} $, respectively, and $ {L_1} $ and $ {L_2} $ are constants that make the dual forms of (19) and (20) satisfy the Lipschitz continuous gradient.

        Since (12) can also be represented in the dual form, the explicit solution of (13) can be calculated with the dual solution obtained from (23) and (24) iteratively as follows:

The procedure of the proposed algorithm is presented in Algorithm 1 (Table 1). In Algorithm 1, we employ a Nesterov-like update strategy [27] to accelerate convergence: in each iteration, the dual solutions $ {\bf{y}}_1^{(k)} $ and $ {\bf{y}}_2^{(k)} $ are renewed with the variables $ {\bf{d}}_1^{(k)} $, $ {\bf{d}}_2^{(k)} $, $ {{\bf{r}}^{(k)}} $, and the auxiliary scalar $ {t_k} $ of the kth iteration.

        Inputs: $ {\mathbf{s}} \in {\mathbb{C}^M} $, ${\Cambriabfont\text{Φ}}:{\mathbb{C}^N} \mapsto {\mathbb{C}^M}$, $ \alpha > 0 $, $ \beta > 0 $, $ {\lambda _1} > 0 $, $ {\lambda _2} > 0 $, $ \mu > 0 $, $ {L_1} > 0 $, $ {L_2} > 0 $, $\varepsilon \in (0{\rm{,}}\;1)$, and $ {N_{{\text{iter}}}} \in {\mathbb{N}_ + } $, where $ {N_{{\text{iter}}}} $ is the maximum number of iterations, and $ {\mathbb{N}_ + } $ denotes the positive integer domain;
        Initialization: $ k = 0 $, $ {{{\hat {\bf{f}}}}^{(k)}} = 0 $, ${ {\mathbf{r} }^{(k)} } = { {{\hat {\bf{f}}} }^{(k + 1)} } = { {\Cambriabfont\text{Φ} }^{\text{T} } }{\mathbf{s} }$, ${\mathbf{y} }_1^{(k)} = {\Cambriabfont\text{ψ}}{ {\mathbf{r} }^{(k)} }$, $ {\mathbf{y}}_2^{(k)} = {\text{Grad(}}{{\mathbf{r}}^{(k)}}{\text{)}} $, and $ {t_k} = 1 $;
        while $ k < {N_{{\text{iter}}}} $ and $\left\| { { { {{\hat {\bf{f}}} } }^{(k + 1)} } - { { {{\hat {\bf{f}}} } }^{(k)} } } \right\|_2^2 > \varepsilon \left\| { { { {{\hat {\bf{f}}} } }^{(k)} } } \right\|_2^2$ do
        ${ {\mathbf{b} }^{(k)} } = { {\mathbf{r} }^{(k)} } - \dfrac{1}{\mu }{ {\Cambriabfont\text{Φ} }^{\text{T} } }({\Cambriabfont\text{Φ} }{ {\mathbf{r} }^{(k)} } - {\mathbf{s} })$;
        ${\mathbf{m} }_1^{(k)} = {\mathcal{P}_\mathcal{C} }\left( {\dfrac{ {\mu { {\mathbf{b} }^{(k)} } + \beta {\lambda _1}{ { {{\hat {\bf{f}}} } }^{(k)} } - \alpha { {\Cambriabfont\text{ψ} }^*}{\mathbf{y} }_1^{(k)} } }{ {\mu + \beta {\lambda _1} } } } \right)$;
        ${\mathbf{m} }_2^{(k)} = {\mathcal{P}_\mathcal{C} }\left( {\dfrac{ {\mu { {\mathbf{b} }^{(k)} } + \alpha {\lambda _2}{ { {{\hat {\bf{f}}} } }^{(k)} } + \beta {\text{Div} }\left({\mathbf{y} }_2^{(k)}\right) } }{ {\mu + \alpha {\lambda _2} } } } \right)$;
        ${\mathbf{d} }_1^{(k)} = {\mathcal{P}_{ {\mathcal{B}_{ {\text{GIN} } } } } }\left( { {\mathbf{y} }_1^{(k)} + \dfrac{\alpha }{ {(\mu + \beta {\lambda _1}){L_1} } }{\Cambriabfont\text{ψ} }{\mathbf{ m} }_1^{(k)} } \right)$;
        ${ {\bf{d} } }_{2}^{(k)} = {\mathcal{P}_{\cal{B}_{\text{TV} } } }\left({ {\bf{y} } }_{2}^{(k)}+\dfrac{\beta }{(\mu +\alpha {\lambda }_{2}){L}_{2} }\text{Grad}\left({{\bf{m}}}_{2}^{(k)}\right)\right)$;
        ${ {{\hat {\bf{f}}} }^{(k+1)} }\, = {\mathcal{P}_\mathcal{C} }\left( { { {\mathbf{b} }^{(k)} } - \dfrac{\alpha }{\mu }{ {\Cambriabfont\text{ψ} }^{*} }{\mathbf{d} }_1^{(k)} + \dfrac{\beta }{\mu }{\text{Div} }\left({\mathbf{d} }_2^{(k)}\right)} \right)$;
        ${t_{k + 1} } = 0.5\left(1 + \sqrt {1 + 4t_k^2} \right)$;
        ${ {\mathbf{r} }^{(k + 1)} } = { {{\hat {\bf{f}}} }^{(k{ { + } }1)} }\, + \dfrac{ { {t_k} - 1} }{ { {t_{k + 1} } } }\left({ {{\hat {\bf{f}}} }^{(k{ { + } }1)} }\, - { {{\hat {\bf{f}}} }^{(k)} }\right)\,$;
        ${\mathbf{y} }_1^{(k + 1)} = {\Cambriabfont\text{ψ}}{ {\mathbf{r} }^{(k + 1)} }$;
        ${\mathbf{y} }_2^{(k + 1)} = {\text{Grad} }\left({ {\mathbf{r} }^{(k + 1)} }\right)$;
        $ k \leftarrow k + 1 $;
        end while

        Table 1.  Algorithm 1: Compressive near-field MMW imaging algorithm based on GI-TV mixed regularization.
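To make the data flow of Table 1 concrete, the following NumPy transcription of the loop reuses the helpers sketched earlier (make_mmw_operator for ${\Cambriabfont\text{Φ}}$ and ${\Cambriabfont\text{Φ}}^{\rm{T}}$, gin_weights/project_B_GIN, and grad2d/div2d/project_B_TV). It is an illustrative sketch rather than the authors' MATLAB implementation: the realization of $ {\mathcal{P}_\mathcal{C}} $ as a magnitude clip, the recomputation of the GI-norm weights from the current iterate, the identity default for ${\Cambriabfont\text{ψ}}$, and the bookkeeping of the very first iterate are assumptions on our part.

```python
import numpy as np

def project_C(f, fmin=0.0, fmax=np.inf):
    """P_C: clip each element's magnitude to [fmin, fmax], keeping its phase (one possible choice)."""
    return np.clip(np.abs(f), fmin, fmax) * np.exp(1j * np.angle(f))

def gi_tv_reconstruct(s, forward, adjoint, shape,
                      psi=lambda x: x, psi_star=lambda x: x,
                      alpha=0.35, beta=0.01, lam1=1.0, lam2=1.0, mu=1.0,
                      eps=1e-4, n_iter=300, fmin=0.0, fmax=np.inf):
    """Sketch of Algorithm 1: primal-dual GI-TV reconstruction with Nesterov-like acceleration."""
    L1 = alpha ** 2 / (mu + beta * lam1) ** 2             # constants as set in the experiments
    L2 = 8.0 * beta ** 2 / (mu + alpha * lam2) ** 2
    nrg = lambda x: float(np.sum(np.abs(x) ** 2))
    f_old = np.zeros(shape, dtype=complex)                # \hat f^{(0)} = 0
    f = np.reshape(adjoint(s), shape)                     # \hat f^{(1)} = r^{(0)} = Phi^T s
    r = f.copy()
    y1, y2 = psi(r), grad2d(r)
    t, k = 1.0, 0
    while k < n_iter and nrg(f - f_old) > eps * nrg(f_old):
        b = r - np.reshape(adjoint(forward(r.ravel()) - s), shape) / mu
        m1 = project_C((mu * b + beta * lam1 * f - alpha * psi_star(y1)) / (mu + beta * lam1), fmin, fmax)
        m2 = project_C((mu * b + alpha * lam2 * f + beta * div2d(*y2)) / (mu + alpha * lam2), fmin, fmax)
        w = gin_weights(psi(r))                           # GI-norm weights from the current iterate (assumption)
        d1 = project_B_GIN(y1 + alpha / ((mu + beta * lam1) * L1) * psi(m1), w)
        g2x, g2y = grad2d(m2)
        step2 = beta / ((mu + alpha * lam2) * L2)
        d2 = project_B_TV(y2[0] + step2 * g2x, y2[1] + step2 * g2y)
        f_new = project_C(b - (alpha / mu) * psi_star(d1) + (beta / mu) * div2d(*d2), fmin, fmax)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        r = f_new + (t - 1.0) / t_new * (f_new - f)       # Nesterov-like extrapolation
        y1, y2 = psi(r), grad2d(r)
        f_old, f, t, k = f, f_new, t_new, k + 1
    return f
```

Under these assumptions, a call might look like forward, adjoint = make_mmw_operator(128, 128, 0.2, E) followed by f_hat = gi_tv_reconstruct(s, forward, adjoint, (128, 128)), with the regularization parameters defaulting to the values reported in the experiments below.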

• In our experiments, all SUTs were placed on a plastic board, as shown in Fig. 1 (a). The MMW antenna probe was located 40 mm above the SUTs and operated at 150 GHz. The antenna scanned the SUTs on a uniform 0.5 mm grid over a 64 mm × 64 mm area. Fig. 1 (b) shows the image reconstructed from fully-sampled MMW data, which was used as the reference for the image quality metrics. To obtain the best performance with the proposed algorithm, we set $ \alpha = 0.35 $, $ \beta = 0.01 $, $ {\lambda _1} = 1 $, $ {\lambda _2} = 1 $, $ \mu = 1 $, $ {L_1} = {{{\alpha ^2}} \mathord{\left/ {\vphantom {{{\alpha ^2}} {{{(\mu + \beta {\lambda _1})}^2}}}} \right. } {{{(\mu + \beta {\lambda _1})}^2}}} $, $ {L_2} = {{8{\beta ^2}} \mathord{\left/ {\vphantom {{8{\beta ^2}} {{{(\mu + \alpha {\lambda _2})}^2}}}} \right. } {{{(\mu + \alpha {\lambda _2})}^2}}} $, and $ \varepsilon = {10^{ - 4}} $. The Symlet wavelet transform with eight vanishing moments was applied as the sparse transform operator ${\Cambriabfont\text{ψ}}$. For comparison, we evaluated the l1-TV mixed regularization algorithm [11] against the proposed algorithm, and the structural similarity index measure (SSIM) [28] was adopted to measure the quality of the reconstructed images. SSIM takes a value in the range of 0 to 1, where a larger score indicates higher image quality. All algorithms were implemented in MATLAB (R2019b) on a personal computer with an Intel i5 processor (2.9 GHz) and 8 GB of random access memory.
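As an illustration of the scoring step only (not of the reported numbers), the snippet below shows one way to compute SSIM and PSNR of a reconstruction against the fully-sampled reference using scikit-image; scoring normalized magnitude images is an assumption of this sketch.

```python
import numpy as np
from skimage.metrics import structural_similarity

def score_reconstruction(f_hat, f_ref):
    """SSIM and PSNR of a reconstruction against the fully-sampled reference (complex inputs)."""
    a = np.abs(f_hat); b = np.abs(f_ref)
    a = a / max(a.max(), 1e-30); b = b / max(b.max(), 1e-30)   # normalize magnitudes (assumption)
    ssim = structural_similarity(a, b, data_range=1.0)
    psnr = 10.0 * np.log10(1.0 / np.mean((a - b) ** 2))        # peak value is 1 after normalization
    return ssim, psnr
```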

        Figure 1.  Photographs of (a) experimental SUTs and (b) reconstructed image from fully-sampled MMW data.

Fig. 2 shows the images reconstructed by the proposed algorithm with under-sampling rates of 10%, 20%, and 30%. It can be seen that the proposed algorithm can recover the MMW image successfully at a low under-sampling rate, and the details of the images improve significantly as the under-sampling rate increases. In addition, we compare SSIM versus the number of iterations for the proposed and l1-TV mixed regularization algorithms at different under-sampling rates, as shown in Fig. 3. The proposed algorithm yields much smoother and steadier curves than the l1-TV mixed regularization algorithm, indicating superior convergence and stability.

        Figure 2.  Images reconstructed by the proposed algorithm with different under-sampling rates: (a) 10%, (b) 20%, and (c) 30%.

        Figure 3.  SSIM of the proposed and l1-TV mixed regularization algorithms versus the number of iterations with different under-sampling rates: (a) 10%, (b) 20%, and (c) 30%.

Table 2 reports the SSIM, peak signal-to-noise ratio (PSNR), and computation time of the proposed and l1-TV mixed regularization algorithms under the break condition $ \varepsilon = {10^{ - 4}} $ with under-sampling rates of 10%, 20%, and 30%. As can be seen, the proposed algorithm achieves a much faster convergence speed and superior reconstruction performance.

Under-sampling rate (%) |          Proposed algorithm              |  l1-TV mixed regularization algorithm
                        | SSIM    PSNR (dB)  Time (s)/iterations   | SSIM    PSNR (dB)  Time (s)/iterations
          10            | 0.5695  25.1253    2.0112/160            | 0.5482  24.4285    2.3317/170
          20            | 0.6993  27.7949    1.1197/81             | 0.6642  26.5193    2.9914/234
          30            | 0.7752  29.4413    0.5168/34             | 0.7683  29.2348    0.6237/44

        Table 2.  Performance comparison of proposed and l1-TV mixed regularization algorithms under break condition $ \varepsilon = {10^{ - 4}} $ at different under-sampling rates.

• In this paper, we have proposed a compressive near-field MMW imaging model based on GI-TV mixed regularization, which combines the GI-norm and the TV operator. A corresponding algorithm has also been developed under a primal-dual framework with a Nesterov-like update strategy. Experimental results demonstrate that the proposed algorithm can reconstruct MMW images from data with a low under-sampling rate, and that it achieves superior convergence and stability compared with the l1-TV mixed regularization algorithm. Since this paper does not consider a sparsity measure for the TV operator, future work will investigate combining the TV operator with a sparsity measure criterion and then develop new models and algorithms.

      • The authors declare no conflicts of interest.
