Mingfeng Jiang
Zhejiang Sci-Tech University
Publications
Featured research published by Mingfeng Jiang.
IEEE Transactions on Biomedical Engineering | 2008
Guofa Shou; Ling Xia; Mingfeng Jiang; Qing Wei; Feng Liu; Stuart Crozier
The reconstruction of epicardial potentials (EPs) from body surface potentials (BSPs) can be characterized as an ill-posed inverse problem which generally requires a regularized numerical solution. Two kinds of errors/noise, geometric errors and measurement errors, exist in the ECG inverse problem and make its solution more difficult. In particular, geometric errors directly affect the calculation of the transfer matrix A in the linear system equation AX = B. In this paper, we have applied the truncated total least squares (TTLS) method to reconstruct EPs from BSPs. This method accounts for the noise/errors on both sides of the system equation and treats geometric errors in a new fashion. The algorithm is tested using a realistically shaped heart-lung-torso model with inhomogeneous conductivities. The h-adaptive boundary element method [h-BEM, a BEM mesh adaptation scheme which starts from preset meshes and then refines (adds/removes) the grid with a fixed order of interpolation function and prescribed numerical accuracy] is used for the forward modeling, and the TTLS is applied for inverse solutions; its performance is also compared with conventional regularization approaches such as Tikhonov and truncated singular value decomposition (TSVD) of zeroth, first and second order. The simulation results demonstrate that TTLS obtains similar results when only measurement noise is present but performs better than the Tikhonov and TSVD methods when geometric errors are involved, and that zeroth-order regularization is the optimal choice for the ECG inverse problem. This investigation suggests that TTLS is able to robustly reconstruct EPs from BSPs and is a promising alternative method for the solution of ECG inverse problems.
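The zeroth-order Tikhonov baseline referred to above can be illustrated on a toy ill-conditioned system; the matrix, data, and regularization parameter below are invented for the demo (this is not the paper's TTLS implementation):

```python
def solve_2x2(m, rhs):
    """Solve a 2x2 linear system via Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(rhs[0] * m[1][1] - m[0][1] * rhs[1]) / det,
            (m[0][0] * rhs[1] - rhs[0] * m[1][0]) / det]

def tikhonov(A, b, lam):
    """Zeroth-order Tikhonov: x = (A^T A + lam*I)^{-1} A^T b, 2x2 case."""
    AtA = [[A[0][0] ** 2 + A[1][0] ** 2 + lam,
            A[0][0] * A[0][1] + A[1][0] * A[1][1]],
           [A[0][0] * A[0][1] + A[1][0] * A[1][1],
            A[0][1] ** 2 + A[1][1] ** 2 + lam]]
    Atb = [A[0][0] * b[0] + A[1][0] * b[1],
           A[0][1] * b[0] + A[1][1] * b[1]]
    return solve_2x2(AtA, Atb)

A = [[1.0, 0.0], [0.0, 1e-3]]     # ill-conditioned toy "transfer matrix"
x_true = [1.0, 1.0]
b = [1.0, 2e-3]                   # second channel carries 100% relative noise

x_naive = tikhonov(A, b, 0.0)     # plain least squares: noise amplified
x_reg = tikhonov(A, b, 2e-6)      # lam picked by hand for the demo

def err(x):
    return sum((xi - ti) ** 2 for xi, ti in zip(x, x_true)) ** 0.5

print(err(x_naive), err(x_reg))   # regularization damps the amplified noise
```

The small singular value of A turns a tiny measurement error into a large solution error; the penalty term trades a little bias for that stability.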
Physics in Medicine and Biology | 2011
Guofa F Shou; Ling Xia; Feng Liu; Mingfeng Jiang; Stuart Crozier
The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing an L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noise was considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms on the normal derivative constraint, labelled L1TV and L1L2) were compared with L2-norm data term schemes (Tikhonov with zero-order and normal derivative constraints, labelled ZOT and FOT, and the total variation method, labelled L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have smaller relative errors. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore, the L1-norm data term-based solutions are generally less perturbed by measurement noise, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
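The robustness of an L1 data term can be sketched with the iteratively-reweighted-norm idea on a scalar toy problem; the data values below (including one corrupted "electrode") are invented and this is not the paper's implementation:

```python
# Minimize sum_i |a_i * x - b_i| for a scalar unknown x by repeatedly
# solving a weighted least-squares problem (iteratively reweighted norm).

def irls_l1(a, b, iters=50, eps=1e-8):
    # Start from the ordinary L2 (least-squares) solution.
    x = sum(ai * bi for ai, bi in zip(a, b)) / sum(ai * ai for ai in a)
    for _ in range(iters):
        # Weights w_i = 1/|residual_i| turn the L1 cost into weighted L2.
        w = [1.0 / max(abs(ai * x - bi), eps) for ai, bi in zip(a, b)]
        x = (sum(wi * ai * bi for wi, ai, bi in zip(w, a, b))
             / sum(wi * ai * ai for wi, ai in zip(w, a)))
    return x

a = [1.0] * 6
b = [2.0, 2.1, 1.9, 2.0, 2.05, 10.0]   # last measurement is grossly corrupted

x_l2 = sum(b) / len(b)                  # plain least squares: pulled toward 10
x_l1 = irls_l1(a, b)                    # L1 fit: stays near 2
print(x_l2, x_l1)
```

The outlier drags the L2 estimate well away from 2, while the L1 fit behaves like a median and is barely perturbed, mirroring the "signal lost in some electrodes" scenario above.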
Magnetic Resonance Imaging | 2013
Mingfeng Jiang; Jin Jin; Feng Liu; Yeyang Yu; Ling Xia; Yaming Wang; Stuart Crozier
Parallel imaging and compressed sensing have been arguably the most successful and widely used techniques for fast magnetic resonance imaging (MRI). Recent studies have shown that the combination of these two techniques is useful for solving the inverse problem of recovering the image from highly under-sampled k-space data. In sparsity-enforced sensitivity encoding (SENSE) reconstruction, the optimization problem involves a data fidelity (L2-norm) constraint and a number of L1-norm regularization terms (i.e. total variation or TV, and the L1 norm). This makes the optimization problem difficult to solve due to the non-smooth nature of the regularization terms. In this paper, to effectively solve the sparsity-regularized SENSE reconstruction, we utilize a new optimization method, called the fast composite splitting algorithm (FCSA), which was developed for compressed sensing MRI. By using a combination of variable splitting and operator splitting techniques, the FCSA algorithm decouples the large optimization problem into TV and L1 sub-problems, which are then solved efficiently using existing fast methods. The operator splitting separates the smooth terms from the non-smooth terms, so that both terms are treated in an efficient manner. The final solution to the SENSE reconstruction is obtained by weighting the solutions to the sub-problems through an iterative optimization procedure. The FCSA-based parallel MRI technique is tested on MR brain image reconstructions at various acceleration rates and with different sampling trajectories. The results indicate that, for sparsity-regularized SENSE reconstruction, the FCSA-based method is capable of achieving significant improvements in reconstruction accuracy when compared with the state-of-the-art reconstruction method.
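The L1 sub-problem that this kind of splitting solves in closed form is elementwise soft-thresholding; a minimal sketch with invented values (the TV sub-problem is handled analogously by its own proximal operator):

```python
# The L1 sub-problem  min_x 0.5*||x - z||^2 + t*||x||_1  has the
# closed-form solution x_i = sign(z_i) * max(|z_i| - t, 0).

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return [max(abs(v) - t, 0.0) * (1.0 if v > 0 else -1.0) for v in z]

z = [3.0, -0.5, 1.2, -2.0, 0.1]   # invented intermediate iterate
x = soft_threshold(z, 1.0)
print(x)   # small entries are zeroed, large ones shrink by the threshold
```

Because this prox step is exact and cheap, the expensive coupled problem reduces to fast alternating updates, which is the efficiency argument made in the abstract.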
Physics in Medicine and Biology | 2011
Mingfeng Jiang; Yaming Wang; Ling Xia; Guofa Shou; Feng Liu; Stuart Crozier
Non-invasively reconstructing the transmembrane potentials (TMPs) from body surface potentials (BSPs) constitutes one form of the inverse ECG problem that can be treated as a regression problem with multiple inputs and multiple outputs, and which can be solved using the support vector regression (SVR) method. In developing an effective SVR model, feature extraction is an important pre-processing task for the original input data. This paper proposes the application of principal component analysis (PCA) and kernel principal component analysis (KPCA) to the SVR method for feature extraction. Also, the genetic algorithm and simplex optimization methods are invoked to determine the hyper-parameters of the SVR. Based on the realistic heart-torso model, the equivalent double-layer source method is applied to generate the data set for training and testing the SVR model. The experimental results show that the SVR method with feature extraction (PCA-SVR and KPCA-SVR) performs better than that without feature extraction (single SVR) in terms of the reconstruction of the TMPs on the epi- and endocardial surfaces. Moreover, compared with PCA-SVR, KPCA-SVR features better approximation and generalization ability when reconstructing the TMPs.
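The PCA feature-extraction step can be sketched in two dimensions; the data points are invented, and a real pipeline would feed the projected features to the SVR:

```python
# PCA in 2-D: project inputs onto the leading eigenvector of their
# covariance matrix before regression. Points are invented toy data
# lying roughly along the direction (1, 1).

pts = [(1.0, 1.1), (2.0, 1.9), (3.0, 3.2), (4.0, 3.9), (5.0, 5.1),
       (-1.0, -0.9), (-2.0, -2.1), (-3.0, -3.0), (-4.0, -4.1), (-5.0, -4.9)]

n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n
cxx = sum((p[0] - mx) ** 2 for p in pts) / n
cyy = sum((p[1] - my) ** 2 for p in pts) / n
cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / n

# Leading eigenvalue of the 2x2 covariance matrix (quadratic formula).
lam = 0.5 * (cxx + cyy + ((cxx - cyy) ** 2 + 4 * cxy ** 2) ** 0.5)
# Corresponding (unnormalized) eigenvector: (cxy, lam - cxx).
vx, vy = cxy, lam - cxx
norm = (vx * vx + vy * vy) ** 0.5
vx, vy = vx / norm, vy / norm

# 1-D features: projections of the centred data onto the principal direction.
features = [(p[0] - mx) * vx + (p[1] - my) * vy for p in pts]
print((vx, vy), features[:3])
```

For the high-dimensional BSP inputs in the paper the same idea keeps only the leading components, shrinking the regression input while preserving most of the variance.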
International Conference of the IEEE Engineering in Medicine and Biology Society | 2006
Guofa Shou; Mingfeng Jiang; Ling Xia; Qing Wei; Feng Liu; Stuart Crozier
Calculating the potentials on the heart's epicardial surface from the body surface potentials constitutes one form of inverse problem in electrocardiography (ECG). Since these problems are ill-posed, one approach is to use zero-order Tikhonov regularization, where the squared norms of both the residual and the solution are minimized, with a relative weight determined by the regularization parameter. In this paper, we used three different methods to choose the regularization parameter in the inverse solutions of ECG: the L-curve, generalized cross validation (GCV) and the discrepancy principle (DP). Among them, the GCV method has received less attention in solutions to ECG inverse problems than the other methods. Since the DP approach needs knowledge of the noise norm, we used a model function to estimate the noise. The performance of the various methods was compared using a concentric sphere model and a real-geometry heart-torso model, with a distribution of current dipoles placed inside the heart model as the source. Gaussian measurement noise was added to the body surface potentials. The results show that all three methods produce good inverse solutions with little noise; but as the noise increases, the DP approach produces better results than the L-curve and GCV methods, particularly in the real-geometry model. Both the GCV and L-curve methods perform well in low to medium noise situations.
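The discrepancy principle can be sketched on a system already reduced to SVD/diagonal form; the singular values, data and noise below are invented for the demo:

```python
# Discrepancy principle for choosing the Tikhonov parameter: pick the
# largest lambda whose residual stays within the (known) noise norm.

s = [1.0, 0.5, 0.001]               # singular values; the last is tiny
x_true = [1.0, 1.0, 1.0]
noise = [0.01, -0.01, 0.005]        # invented "measurement noise" per channel
b = [si * xi + ni for si, xi, ni in zip(s, x_true, noise)]
delta = sum(ni * ni for ni in noise) ** 0.5   # known noise norm

def residual_norm(lam):
    # Tikhonov filter factors f_i = s_i^2 / (s_i^2 + lam)
    f = [si * si / (si * si + lam) for si in s]
    return sum(((1 - fi) * bi) ** 2 for fi, bi in zip(f, b)) ** 0.5

def solution_error(lam):
    f = [si * si / (si * si + lam) for si in s]
    x = [fi * bi / si for fi, bi, si in zip(f, b, s)]
    return sum((xi - ti) ** 2 for xi, ti in zip(x, x_true)) ** 0.5

# Grid search for simplicity; the residual grows monotonically with lambda.
grid = [10 ** (k / 4 - 6) for k in range(33)]   # 1e-6 ... 1e2
lam_dp = max(l for l in grid if residual_norm(l) <= delta)

print(lam_dp, solution_error(lam_dp), solution_error(1e-12))
```

The unregularized solution amplifies the noise on the tiny singular value by a factor of a thousand; the DP choice sacrifices that component and ends up far closer to the truth.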
International Conference on Intelligent Computing for Sustainable Energy and Environment | 2010
Guofa Shou; Ling Xia; Mingfeng Jiang
Electrocardiographic mapping (ECGM) estimates cardiac activity from the measured body surface potentials (BSPs); the epicardial potentials (EPs) are often the quantity reconstructed. One of the challenges in the ECGM problem is its ill-posedness, and regularization techniques are needed to obtain clinically reasonable solutions. The total variation (TV) method has been validated as preserving sharp edges and has found some preliminary applications in the ECG inverse problem. In this study, we applied and compared two algorithms, lagged diffusivity (LD) fixed-point iteration and the primal-dual interior point method (PD-IPM), to implement the TV regularization method in the ECGM problem. With a realistic heart-lung-torso model, the TV methods are tested and compared to the zeroth- and first-order L2-norm regularization methods. The simulation results demonstrate that the TV method can generate better EPs than the zeroth-order Tikhonov method. Compared to the first-order Tikhonov method, the TV results are much sharper. Of the two algorithms for the TV method, the LD algorithm seems more robust than the PD-IPM in the ECGM problem, though the PD-IPM converges faster.
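The lagged-diffusivity fixed-point iteration can be sketched for 1-D TV denoising; signal values and parameters are invented, and this toy is not the paper's ECGM implementation:

```python
# Lagged diffusivity for 1-D TV denoising: repeatedly solve the linear
# system (I + alpha * D^T W D) x = b, where D is the difference operator
# and W holds the lagged weights 1/sqrt((dx)^2 + beta).

def solve_tridiag(d, e, rhs):
    """Thomas algorithm for a symmetric tridiagonal system (e = off-diagonal)."""
    n = len(d)
    c, y = [0.0] * n, [0.0] * n
    c[0], y[0] = e[0] / d[0], rhs[0] / d[0]
    for i in range(1, n):
        denom = d[i] - e[i - 1] * c[i - 1]
        c[i] = (e[i] / denom) if i < n - 1 else 0.0
        y[i] = (rhs[i] - e[i - 1] * y[i - 1]) / denom
    for i in range(n - 2, -1, -1):
        y[i] -= c[i] * y[i + 1]
    return y

def tv_denoise_ld(b, alpha=0.2, beta=1e-6, iters=100):
    x = list(b)
    n = len(b)
    for _ in range(iters):
        # Lagged weights from the current iterate.
        w = [1.0 / ((x[i + 1] - x[i]) ** 2 + beta) ** 0.5 for i in range(n - 1)]
        d = [1.0 + alpha * ((w[i - 1] if i > 0 else 0.0)
                            + (w[i] if i < n - 1 else 0.0)) for i in range(n)]
        e = [-alpha * w[i] for i in range(n - 1)]
        x = solve_tridiag(d, e + [0.0], b)
    return x

noisy_step = [0.05, -0.03, 0.02, -0.04, 1.04, 0.97, 1.02, 0.96]
x = tv_denoise_ld(noisy_step)
# TV flattens each plateau while keeping the edge between samples 3 and 4.
print(x)
```

A first-order (L2) penalty on the same signal would smear the step across several samples; the TV penalty flattens the noise on each plateau but keeps the jump, which is the edge-preservation claim in the abstract.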
Physics in Medicine and Biology | 2008
Mingfeng Jiang; Ling Xia; Guofa Shou; Feng Liu; Stuart Crozier
In this paper, two hybrid regularization frameworks, LSQR-Tik and Tik-LSQR, which integrate the properties of the direct regularization method (Tikhonov) and the iterative regularization method (LSQR), are proposed and investigated for solving ECG inverse problems. The LSQR-Tik method is based on the Lanczos process, which yields a sequence of small bidiagonal systems to approximate the original ill-posed problem; the Tikhonov regularization method is then applied to stabilize the projected problem. The Tik-LSQR method is formulated as an iterative LSQR inverse augmented with a Tikhonov-like prior-information term. The performances of these two hybrid methods are evaluated using a realistic heart-torso model simulation protocol, in which the heart surface source method is employed to calculate the simulated epicardial potentials (EPs) from the action potentials (APs), and the acquired EPs are then used to calculate simulated body surface potentials (BSPs). The results show that the regularized solutions obtained by the LSQR-Tik method are close to those of the Tikhonov method; however, the computational cost of the LSQR-Tik method is much lower. Moreover, the Tik-LSQR scheme can reconstruct the epicardial potential distribution more accurately, particularly for BSPs with large noise. This investigation suggests that hybrid regularization methods may be more effective than separate regularization approaches for ECG inverse problems.
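The prior-augmented least-squares problem behind a Tik-LSQR-style scheme can be written in closed form for a tiny diagonal system; here it is solved directly rather than by LSQR iterations, and all numbers are invented:

```python
# Prior-augmented least squares:
#   min_x ||A x - b||^2 + lam * ||x - x0||^2
# whose minimizer is x = (A^T A + lam*I)^{-1} (A^T b + lam*x0).

def solve_2x2(m, rhs):
    """Cramer's rule for a 2x2 system."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(rhs[0] * m[1][1] - m[0][1] * rhs[1]) / det,
            (m[0][0] * rhs[1] - rhs[0] * m[1][0]) / det]

A = [[1.0, 0.0], [0.0, 1e-3]]   # ill-conditioned toy transfer matrix (diagonal)
x_true = [1.0, 1.0]
b = [1.0, 3e-3]                 # noisy data on the weak channel
x0 = [0.9, 0.9]                 # prior guess (e.g. from a cheap Tikhonov pass)
lam = 1e-5

# A is diagonal here, so A^T A and A^T b are computed componentwise.
AtA = [[A[0][0] ** 2 + lam, 0.0], [0.0, A[1][1] ** 2 + lam]]
rhs = [A[0][0] * b[0] + lam * x0[0],
       A[1][1] * b[1] + lam * x0[1]]
x = solve_2x2(AtA, rhs)

naive = [b[0] / A[0][0], b[1] / A[1][1]]   # unregularized: noise blown up
print(x, naive)
```

The prior term pulls the poorly determined component toward the supplied guess instead of letting the noise dominate it, which is the role the Tikhonov-like term plays inside the LSQR iterations.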
IEEE Transactions on Biomedical Engineering | 2009
Guofa Shou; Ling Xia; Mingfeng Jiang; Qing Wei; Feng Liu; Stuart Crozier
The boundary element method (BEM) is a commonly used numerical approach to solving biomedical electromagnetic volume conductor models such as ECG and EEG problems, in which only the interfaces between tissue regions need to be modeled. The quality of the boundary element discretization affects the accuracy of the numerical solution, and the construction of high-quality meshes is time-consuming and problem-dependent. Adaptive BEM (aBEM) has been developed and validated as an effective method for tackling such problems in electromagnetic and mechanical fields, but it has not been extensively investigated for the ECG problem. In this paper, the h aBEM, which produces refined meshes through adaptive adjustment of the elements' connections, is investigated for the ECG forward problem. Two different refinement schemes, adding one new node (SH1) and adding three new nodes (SH3), are applied for the h aBEM calculation. In order to save computational time, the h-hierarchical aBEM is also used, through the introduction of h-hierarchical shape functions for SH3. The algorithms were evaluated with a single-layer homogeneous sphere model with assumed dipole sources and a geometrically realistic heart-torso model. The simulations showed that the h aBEM can produce better meshes and is more accurate and effective than the traditional BEM for the ECG problem. With the same refinement scheme SH3, the h-hierarchical aBEM can reduce computational costs by about 9% compared with the standard h aBEM.
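The h-refinement loop can be sketched generically in 1-D: split any element whose local error indicator exceeds a tolerance. This illustrates only the adaptive loop, not a boundary element solver; the field function and tolerance are invented:

```python
# Generic h-adaptive refinement: start from a coarse mesh and split
# intervals whose linear-interpolation error estimate is too large.

import math

def f(t):
    return math.sin(4.0 * t)        # invented field to resolve

def local_error(a, b):
    # Error indicator: deviation of f at the midpoint from linear interpolation.
    mid = 0.5 * (a + b)
    return abs(f(mid) - 0.5 * (f(a) + f(b)))

def refine(mesh, tol=1e-3, max_passes=20):
    for _ in range(max_passes):
        new_mesh, changed = [], False
        for a, b in mesh:
            if local_error(a, b) > tol:
                mid = 0.5 * (a + b)
                new_mesh += [(a, mid), (mid, b)]   # h-refinement: split element
                changed = True
            else:
                new_mesh.append((a, b))
        mesh = new_mesh
        if not changed:
            break
    return mesh

coarse = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]
fine = refine(coarse)
print(len(coarse), "->", len(fine), "elements")
```

The mesh ends up dense only where the field varies quickly, which is the payoff of adaptivity over uniform refinement.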
International Conference of the IEEE Engineering in Medicine and Biology Society | 2006
Mingfeng Jiang; Ling Xia; Guofa Shou
Reconstruction of the epicardial potentials from the body surface potentials constitutes one form of the ill-posed inverse problem of electrocardiography (ECG). In this paper, we investigate the use of genetic algorithms (GAs) for regularizing the ill-posed ECG inverse problem. The results show that GAs cannot regularize the ill-posed problem without additional constraints, but combined with other methods or additional information about the solution, GAs are an efficient optimization technique for solving the ill-posed inverse problem. We adopt the Tikhonov regularized solutions as the additional information to construct the initial populations. This investigation suggests that GAs may provide a useful tool for ECG inverse problem studies.
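The seeding idea can be sketched with a tiny GA whose initial population is drawn around a supplied (e.g. Tikhonov) solution; the objective and all numbers are invented for illustration:

```python
# Toy genetic algorithm with the initial population seeded around a
# prior solution rather than drawn blindly from the search space.

import random

random.seed(0)

def fitness(x):
    # Invented smooth objective standing in for the regularized misfit.
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def ga_minimize(seed_point, pop_size=30, gens=60, spread=0.5):
    # Seed the population around the supplied (e.g. Tikhonov) solution.
    pop = [[g + random.gauss(0.0, spread) for g in seed_point]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # truncation selection (elitist)
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = [(ga + gb) / 2 + random.gauss(0.0, 0.05)  # crossover+mutation
                     for ga, gb in zip(a, b)]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

tik_seed = [0.8, -1.7]            # pretend this came from a Tikhonov pass
best = ga_minimize(tik_seed)
print(best, fitness(best))
```

Seeding near a regularized solution gives the GA the extra information the abstract says it needs; without it, the same search over an unconstrained space has no reason to converge to a stable solution.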
Computational and Mathematical Methods in Medicine | 2013
Mingfeng Jiang; Shanshan Jiang; Yaming Wang; Wenqing Huang; Heng Zhang
The typical inverse ECG problem is to noninvasively reconstruct the transmembrane potentials (TMPs) from body surface potentials (BSPs). In this study, the inverse ECG problem is treated as a regression problem with multiple inputs (body surface potentials) and multiple outputs (transmembrane potentials), which can be solved by the support vector regression (SVR) method. In order to obtain an effective SVR model with optimal regression accuracy and generalization performance, the hyperparameters of SVR must be set carefully. Three different optimization methods, namely the genetic algorithm (GA), the differential evolution (DE) algorithm, and particle swarm optimization (PSO), are proposed to determine the optimal hyperparameters of the SVR model. In this paper, we investigate which is the most effective way to reconstruct the cardiac TMPs from BSPs, and a full comparison of their performances is provided. The experimental results show that these three optimization methods perform well in finding the proper parameters of SVR and can yield good generalization performance in solving the inverse ECG problem. Moreover, compared with DE and GA, the PSO algorithm is more efficient in parameter optimization and performs better in solving the inverse ECG problem, leading to a more accurate reconstruction of the TMPs.
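A minimal PSO sketch for tuning two hyperparameters (think log10 C and log10 gamma of an SVR); the "validation error" surface below is invented, where a real run would evaluate SVR cross-validation:

```python
# Global-best particle swarm optimization over a 2-D hyperparameter space.

import random

random.seed(1)

def validation_error(p):
    # Invented smooth surrogate with minimum at (2, -1).
    return (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2

def pso(dim=2, particles=20, iters=80, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-4, 4) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [list(p) for p in pos]                       # personal bests
    gbest = min(pbest, key=validation_error)             # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                # Inertia + pull toward personal and global bests.
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if validation_error(pos[i]) < validation_error(pbest[i]):
                pbest[i] = list(pos[i])
                if validation_error(pbest[i]) < validation_error(gbest):
                    gbest = list(pbest[i])
    return gbest

best = pso()
print(best, validation_error(best))
```

Each hyperparameter evaluation here is a cheap function call; in the paper's setting it is a full SVR training run, which is why the relative efficiency of PSO, DE and GA matters.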