On epicardial potential reconstruction using regularization schemes with the L1-norm data term



Phys. Med. Biol. 56 (2011) 57–72 doi:10.1088/0031-9155/56/1/004


Guofa Shou1, Ling Xia1,4, Feng Liu2, Mingfeng Jiang3 and Stuart Crozier2

1 Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, Zhejiang University, Hangzhou 310027, People's Republic of China
2 The School of Information Technology & Electrical Engineering, The University of Queensland, St Lucia, Brisbane, Queensland 4072, Australia
3 The College of Electronics and Informatics, Zhejiang Sci-Tech University, Hangzhou 310018, People's Republic of China

E-mail: [email protected]

Received 6 September 2010, in final form 18 October 2010
Published 30 November 2010
Online at stacks.iop.org/PMB/56/57

Abstract
The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint, labelled as L1TV and L1L2) were compared with the L2-norm data term schemes (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method, labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have smaller relative error values. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noises, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.

4 Author to whom any correspondence should be addressed.


1. Introduction

The purpose of the electrocardiographic (ECG) inverse problem is to non-invasively detect the electrophysiological activity of the human heart from measured body surface potentials (BSPs). Compared with the standard 12-lead ECG, which is commonly used in diagnostic practice, the BSP-based ECG inverse solution is capable of providing more information about cardiac functionality. In recent years, several studies have demonstrated that this technique is capable of imaging a number of abnormalities in cardiac electrophysiological activity in both simulation and experimental environments (Ramanathan et al 2004, 2006, Barley et al 2009a, 2009b, Farina and Dossel 2009, Ghosh and Rudy 2009, Wang et al 2010).

In the ECG inverse problem, the reconstructed cardiac electrophysiological activities include dipoles (Barley et al 2009a, 2009b), cardiac potential distributions (such as epicardial potentials (EPs) (Martin and Pilkington 1972, Ramanathan et al 2004, 2006, Shou et al 2008, Ghosh and Rudy 2009) and transmembrane potentials (Farina and Dossel 2009, Wang and Arthur 2009, Wang et al 2010)), and cardiac activation/recovery isochrones (Aydın and Serinagaoglu 2009, Ruud et al 2009, van Dam et al 2009). Among them, the EP-targeted ECG inverse problem has been widely applied and validated in many clinical cases (Ramanathan et al 2004, 2006, Ghosh and Rudy 2009).

In this study, we will focus on the reconstruction of the EPs, from which the activation isochrones and other activities can be readily calculated. Under the quasi-static assumption, the ECG inverse problem with the EPs is governed by the Laplace equation with Cauchy boundary conditions:

$$\begin{cases} \nabla \cdot (\sigma \nabla \Phi) = 0 & \text{in } \Omega \\ (\sigma \nabla \Phi) \cdot n = 0 & \text{on } \Gamma_T \\ \Phi = \Phi_T & \text{on } \Gamma_T \\ \text{find } \Phi_E & \text{on } \Gamma_E, \end{cases} \qquad (1)$$

where Φ is the quasi-static potential, Φ_E and Φ_T are the potentials on the epicardial surface Γ_E and the body surface Γ_T, which enclose the volume conductor Ω, and σ is the tissue-dependent conductivity tensor. This boundary value problem can be solved using various numerical approaches, such as the finite element method (FEM) (Farina et al 2009) and the meshless method (Wang et al 2010). In this study, we considered the boundary element method (BEM) (Potse et al 2009, Shou et al 2009). The BEM implementation of the Laplace equation results in a final relationship between Φ_E and Φ_T as

$$A \Phi_E = \Phi_T, \qquad (2)$$

where A is the transfer matrix associated with the volume conductor model properties, including the geometry and the conductivities of the medium.

The system equation (2) is generally ill-posed; thus, small noises/errors in the measured BSPs Φ_T and inaccuracies in the construction of the transfer matrix A will lead to large perturbations in the reconstructed EPs. Therefore, a regularization process is usually used to approximate the ECG inverse solution, which can be expressed as a least-squares problem (Tikhonov and Arsenin 1977, Hansen 1998):

$$f(\Phi_E) = \tfrac{1}{2}\|A\Phi_E - \Phi_T\|_2^2 + \lambda C(\Phi_E). \qquad (3)$$


In the above expression, the first part is the data-fitting/residual term, while the second part is the constraint/penalty term used to constrain the solutions. The regularization parameter is λ > 0, which controls the trade-off between the two terms. In section 2, these two terms will be described in detail.

In the literature, and when using different spatio-temporal constraints, a variety of regularization methods have been proposed for the ECG inverse problem (Dossel 2000). These methods include direct regularization methods (Khoury 1994, Throne and Olson 2000), truncated total least-squares (TTLS) methods (Shou et al 2008), iterative methods (generalized minimal residual (GMRES) (Ramanathan et al 2003), Kalman filters (Aydın and Serinagaoglu 2009), least-squares QR (LSQR) (Jiang et al 2007) and level-set methods (Ruud et al 2009)), and statistical methods such as Bayesian estimation (Serinagaoglu et al 2005). Generally, all of these regularization schemes employ L2-norm constraint terms, which inherently provide smoothed solutions and therefore offer compromised accuracy in localizing and distinguishing proximal multiple cardiac sources. In many cases, the L2-norm data term regularization schemes are very sensitive to measurement errors. Non-quadratic (especially L1-norm) regularization techniques hold promise to overcome the above-mentioned issues of the L2-norm-based solutions. In the literature, the non-quadratic (especially L1-norm) constraint regularization technique, also known as total variation (TV) regularization in image science, has been proposed and applied to ill-posed inverse problems in the areas of image science (Rudin et al 1992, Strong and Chan 2003, Rodriguez and Wohlberg 2009), electromagnetic property imaging (Borsic 2002, Dai and Adler 2008, Borsic and Adler 2009, Borsic et al 2010) and bioelectric source imaging (Mocanu et al 2005, Bai et al 2007, Ding and He 2008, Ghosh and Rudy 2009, Milanic et al 2009), to recover the discontinuities and the much sharper features of the investigated problem. For the ECG inverse problem, the L1-norm constraint term (not in the data term) has also been previously investigated (Mocanu et al 2005, Ghosh and Rudy 2009, Milanic et al 2009).

Although the L1-norm data term regularization scheme has been successfully applied in the above-mentioned areas, it has not been explored for ECG studies. In this paper, regularization schemes with the L1-norm data term are proposed and applied to the reconstruction of the EPs from measured BSPs. Considering the various combinations of norms (L1- and L2-norms) and terms (data and penalty terms), two L1-norm data term-based schemes (L1L2 and L1TV) were compared with three L2-norm data term-based ones (Tikhonov with zero order (ZOT) and first order (FOT), and the TV method (L2TV)), using both simulated and measured BSPs within an experimental protocol (MacLeod and Johnson 1993).

2. Theory and method

2.1. The constraint term

In (3), the constraint/penalty term C(Φ_E) can be defined in different norms (L1- or L2-norm). The L2-norm-based method usually refers to the classical Tikhonov regularization (Tikhonov and Arsenin 1977), in which the penalty term is formulated as

$$C(\Phi_E) = \tfrac{1}{2}\|R\Phi_E\|_{L2}^2, \qquad (4)$$

where R is the regularization matrix, an application-dependent quantity that should be properly determined from a priori information on the investigated problem; it can be chosen as an identity matrix, a positive diagonal matrix, or an approximation of the first- or second-order differential operators (Hansen 1998).

When the matrix R is the identity matrix, the scheme is usually referred to as the ZOT. In the ECG inverse problem with the EPs, R has been expressed using the normal derivative operator of the EPs (∂Φ_E/∂n) (Khoury 1994, Throne and Olson 2000, Ghosh and Rudy 2009), which can be directly determined from the solution of (1). The use of ∂Φ_E/∂n as a first-order scheme is labelled as the FOT in this paper. The L2-norm guarantees that the regularization functional C(Φ_E) is differentiable, and the minimization problem can be easily implemented as

$$\Phi_E = (A^T A + \lambda R^T R)^{-1} A^T \Phi_T. \qquad (5)$$

However, it is known that the use of the L2-norm-based constraint has a spatial smoothing effect on the solution, and therefore some locally distributed diagnostic information might become blurred. Such an effect can be tackled in the L1-norm form.
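As an illustration only (not the authors' code), the closed-form solution (5) can be written in a few lines of NumPy; the names `tikhonov_solution`, `A`, `phi_T` and `R` below are hypothetical, with R standing for the identity matrix (ZOT) or the discrete normal-derivative operator (FOT).

```python
import numpy as np

def tikhonov_solution(A, phi_T, R, lam):
    """Closed-form Tikhonov estimate of the epicardial potentials, eq. (5).

    A     : (m, n) transfer matrix (body-surface leads x epicardial nodes)
    phi_T : (m,)   measured body surface potentials at one time instant
    R     : (p, n) regularization matrix (identity for ZOT,
                   normal-derivative operator for FOT)
    lam   : regularization parameter lambda > 0
    """
    lhs = A.T @ A + lam * (R.T @ R)   # regularized normal-equations matrix
    rhs = A.T @ phi_T
    return np.linalg.solve(lhs, rhs)  # Phi_E = (A^T A + lam R^T R)^{-1} A^T Phi_T
```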

The L1-norm constraint term is expressed as

$$C(\Phi_E) = \|R\Phi_E\|_{L1}. \qquad (6)$$

Compared to the L2-norm penalty (see (4)), the TV functional involves a differential operator, and the L1-norm scheme measures the total amplitude of the oscillations of the function:

$$C(\Phi_E) = \mathrm{TV}(\Phi_E) = \int_{\Gamma_E} |\nabla \Phi_E| \, \mathrm{d}\Gamma. \qquad (7)$$

Instead of using the spatial gradient of the EPs, ∂Φ_E/∂n is considered in the TV regularization (Ghosh and Rudy 2009):

$$\mathrm{TV}(\Phi_E) = \|\partial \Phi_E / \partial n\|_{L1}. \qquad (8)$$

This scheme is referred to as the L2TV. In the TV regularization scheme, the absolute value penalty leads to a non-differentiability problem when |∇Φ_E| = 0. In this case, the numerical implementation of the TV method becomes difficult; this is dealt with in section 2.3.

2.2. The data-fitting term

The data-fitting term is the focus of this ECG inverse problem study. In the classical Tikhonov regularization scheme or TV method, the data-fitting term is L2-norm based and is appropriate for handling Gaussian noise; however, the L2-norm data term is highly sensitive to data outliers (Clason et al 2010a). There have been certain successes in the implementation of second-order Tikhonov regularization for the ECG inverse problem involving geometrical noise and other errors (Dossel 2000, Jiang et al 2009). This research, however, attempts to explore alternative schemes to tackle the weakness of L2 data fitting; non-smooth L1 data fitting has been proposed in the regularization function (Alliney 1997, Nikolova 2002, Chan and Esedoglu 2005). This approach is motivated by the non-Gaussian nature of the noise, and statistically L1-norm data fitting is more robust with respect to outliers than L2-norm data fitting (Clason et al 2010a). Consequently, L1 data fitting has received growing interest in imaging science (Strong and Chan 2003, Rodriguez and Wohlberg 2009, Chan et al 2010, Clason et al 2010b), and recently a number of solutions superior to L2 data fitting have been found in EIT (Dai and Adler 2008, Borsic and Adler 2009). The advantage of the L1-norm data-fitting solution is also attractive for the ECG inverse solution, especially when both measurement and geometrical errors are involved. Therefore, the L1-norm data-fitting term-based regularization scheme is applied here to the ECG inverse problem. It is formulated as

$$\min_{\Phi_E}\ \|A\Phi_E - \Phi_T\|_{L1} + \lambda C(\Phi_E). \qquad (9)$$

When the L1-norm data term is combined with the L2 or L1 constraint term, the resulting schemes are referred to as L1L2 and L1TV, respectively, in which the matrix R is chosen to be the same as that in the L2TV scheme.


2.3. Solver for L1-norm regularization

Once the L1-norm is applied to the data residual or constraint term, the corresponding terms become non-differentiable and thus cannot be solved directly. In the literature, a variety of algorithms have been developed to solve L1-norm-based problems, including iteratively reweighted least-squares (IRLS) (Dai and Adler 2008), lagged diffusivity (LD) iteration (Borsic 2002, Ghosh and Rudy 2009, Borsic et al 2010) and the primal dual-interior point method (Borsic and Adler 2009, Borsic et al 2010). A recent review on this topic can be found in Rodriguez and Wohlberg (2009). In this study, we chose the iteratively reweighted norm (IRN) algorithm (Wohlberg and Rodriguez 2007, Rodriguez and Wohlberg 2009) to handle the problem. The IRN algorithm is summarized as follows.

IRN algorithm:

Initialization:
$$\Phi_E^{(0)} = (A^T A + \lambda R^T R)^{-1} A^T \Phi_T$$

For step k = 0, 1, . . .:
$$W_n^{(k)} = \operatorname{diag}\Big(\big(\big|A\Phi_E^{(k)} - \Phi_T\big| + \beta\big)^{p_n - 2}\Big)$$
$$W_{\Phi_E}^{(k)} = \operatorname{diag}\Big(\big(\big|R\Phi_E^{(k)}\big| + \beta\big)^{p_{\Phi_E} - 2}\Big)$$
$$\Phi_E^{(k+1)} = \big(A^T W_n^{(k)} A + \lambda R^T W_{\Phi_E}^{(k)} R\big)^{-1} A^T W_n^{(k)} \Phi_T$$

Stopping criterion:
If $\big[f(\Phi_E^{(k+1)})/f(\Phi_E^{(k)}) - 1\big] \leqslant \varepsilon$ (ε is a specified tolerance), then STOP; else set k = k + 1 and go to step k.

In the above algorithm, p_n and p_{Φ_E} control the norms used for the data and image terms (1 for L1 and 2 for L2). The specified tolerance ε is set to 0.01 for this work, and β is a small positive constant with an assigned value of 10^{-5} in the following calculations.
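A minimal NumPy sketch of the IRN iteration above is given below. It is illustrative rather than the authors' implementation, and it assumes that the objective f used in the stopping test has the generalized weighted-norm form (1/p_n)‖AΦ_E − Φ_T‖_{p_n}^{p_n} + (λ/p_{Φ_E})‖RΦ_E‖_{p_{Φ_E}}^{p_{Φ_E}}.

```python
import numpy as np

def irn(A, phi_T, R, lam, p_n=1.0, p_phi=2.0, beta=1e-5, eps=0.01, max_iter=50):
    """Iteratively reweighted norm (IRN) solver for the regularized inverse problem.

    p_n, p_phi select the norms of the data and image terms (1 -> L1, 2 -> L2);
    beta keeps the weights finite, eps is the relative-change stopping tolerance.
    """
    def objective(phi_E):
        data = np.abs(A @ phi_E - phi_T)
        pen = np.abs(R @ phi_E)
        return np.sum(data ** p_n) / p_n + lam * np.sum(pen ** p_phi) / p_phi

    # Initialization: plain Tikhonov solution
    phi_E = np.linalg.solve(A.T @ A + lam * (R.T @ R), A.T @ phi_T)

    for _ in range(max_iter):
        # Diagonal weights built from the current residual and penalty magnitudes
        w_n = (np.abs(A @ phi_E - phi_T) + beta) ** (p_n - 2.0)
        w_p = (np.abs(R @ phi_E) + beta) ** (p_phi - 2.0)

        # Weighted normal equations: (A^T W_n A + lam R^T W_p R) phi = A^T W_n phi_T
        lhs = A.T @ (w_n[:, None] * A) + lam * (R.T @ (w_p[:, None] * R))
        rhs = A.T @ (w_n * phi_T)
        phi_next = np.linalg.solve(lhs, rhs)

        # Stop when the objective changes by less than eps (relative change)
        if abs(objective(phi_next) / objective(phi_E) - 1.0) <= eps:
            return phi_next
        phi_E = phi_next
    return phi_E
```

Under these conventions, (p_n, p_{Φ_E}) = (2, 1), (1, 2) and (1, 1) would correspond to the L2TV, L1L2 and L1TV schemes, respectively, with R taken as in the L2TV scheme.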

In L2-norm regularization-based inverse problem studies, the selection of the regularization parameter λ is problem dependent, and several selection schemes, including the well-known L-curve technique (Hansen 1998, Ghosh and Rudy 2009), have been proposed. However, regularization parameter selection in L1-norm-based schemes has not yet been fully resolved. Therefore, in this study, we found the value of λ by calculating the inverse solutions for a wide range of regularization parameters and then selecting the one with the smallest relative error (RE, defined in section 2.4). This process minimized the dependence of the inverse solutions on the regularization parameter λ.
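This exhaustive search can be sketched as follows (hypothetical helper; because it scores candidates by RE, it needs the reference EPs and therefore applies only to simulation or validation studies).

```python
import numpy as np

def pick_lambda_by_re(solve, phi_ref, lambdas=None):
    """Scan candidate regularization parameters and keep the solution with
    the smallest relative error (RE) against the reference EPs.

    solve   : callable, lam -> reconstructed Phi_E (e.g. tikhonov_solution or irn)
    phi_ref : reference (measured) epicardial potentials
    """
    if lambdas is None:
        lambdas = np.logspace(-8, 0, 30)   # illustrative search grid
    best_lam, best_re, best_phi = None, np.inf, None
    for lam in lambdas:
        phi_hat = solve(lam)
        re = np.linalg.norm(phi_ref - phi_hat) / np.linalg.norm(phi_ref)
        if re < best_re:
            best_lam, best_re, best_phi = lam, re, phi_hat
    return best_lam, best_re, best_phi
```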

2.4. Evaluation procedure

We evaluated and compared the L1-norm data term-regularized (L1L2 and L1TV) solutions with those of the L2-norm data term-based regularization schemes (ZOT, FOT and L2TV) using both simulated and experimentally measured datasets. For the comparative studies, the simulated BSP dataset (BSPS) was calculated using the BEM forward calculation from the measured EPs, and the measured one (BSPM) was obtained directly; both the measured EPs and BSPs come from the 'Map3D' dataset recorded at the Nora Eccles Harrison Cardiovascular Research and Training Institute, University of Utah (MacLeod and Johnson 1993). In this dataset, the EPs were collected on 98 nodes at 96 time instants and the BSPs were measured on 352 leads.


The inverse solutions were evaluated with the RE and the correlation coefficient (CC), which can be computed as

$$\mathrm{RE} = \frac{\|\Phi_r - \Phi_t\|_{L2}}{\|\Phi_r\|_{L2}}, \qquad \mathrm{CC} = \frac{\sum_{i=1}^{n} \big[(\Phi_r)_i - \bar{\Phi}_r\big]\big[(\Phi_t)_i - \bar{\Phi}_t\big]}{\|\Phi_r - \bar{\Phi}_r\|_{L2}\,\|\Phi_t - \bar{\Phi}_t\|_{L2}}, \qquad (10)$$

where the subscript 'r' refers to the reference result, the subscript 't' corresponds to the test result and the overbar denotes mean values. Based on the reconstructed EPs at each time instant, the RE and CC values can be calculated.
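Equation (10) translates directly into a few lines of NumPy; the sketch below is illustrative and operates on one time instant at a time.

```python
import numpy as np

def re_cc(phi_r, phi_t):
    """Relative error and correlation coefficient of eq. (10).

    phi_r : reference EPs (measured), phi_t : test EPs (reconstructed).
    """
    re = np.linalg.norm(phi_r - phi_t) / np.linalg.norm(phi_r)
    dr = phi_r - phi_r.mean()
    dt = phi_t - phi_t.mean()
    cc = np.dot(dr, dt) / (np.linalg.norm(dr) * np.linalg.norm(dt))
    return re, cc
```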

3. Results

3.1. Simulation studies

To investigate the ECG inverse problem, the forward ECG modelling was first conducted. In the simulation, the BSPS data were obtained from the measured EP data by BEM calculations. The difference between the simulated and measured BSPs was evaluated in terms of RE and CC for each time instant, as shown in figure 1(a), with large CC values at most time instants. By checking the BSP distribution at time = 96 ms with minimal CC (see figure 1(b)) and the electrogram (see figure 1(c)), we concluded that, even though some errors exist because of the simplified volume conductor model, the current BEM forward simulation is acceptable for the inverse studies.

Using the obtained BSPS data, the above-mentioned five regularization schemes were compared in terms of the quality of the inverse solutions. Note that the EP data were reconstructed using the BSPS data obtained on all leads (352). To simulate real measurements, for each time instant, Gaussian white noise with a given signal-to-noise ratio (SNR) was added to the BSPS data. The five methods performed similarly for datasets with similar SNR noise levels, and the results with 30 dB noise are illustrated in figure 2 as an example. It can be seen that all five methods can reconstruct the EPs with a large CC (above 0.94), which indicates that the distribution of the EPs can be well recovered. In general, the FOT and L1L2 methods had the smallest RE values, the ZOT scheme produced the largest errors, and the L2TV and L1TV performances were intermediate. The activation-isochrone maps were also calculated from the epicardial electrograms by assigning the local activation time to the time instant of the maximum negative derivative of the electrogram (−dV/dt_max); see figure 3. Unlike the ranking shown in figure 2, the L2TV (panel (d)) and L1L2 (panel (e)) methods had large CC values (0.62 and 0.69, respectively) and produced results closer to the real data.
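The two processing steps described in this paragraph, adding Gaussian white noise at a prescribed SNR and extracting the local activation time as the instant of maximum negative derivative of each electrogram, might be sketched as follows; the helper names and the default sampling step are illustrative assumptions.

```python
import numpy as np

def add_gaussian_noise(signal, snr_db, rng=None):
    """Add white Gaussian noise so that the signal-to-noise ratio equals snr_db."""
    rng = rng if rng is not None else np.random.default_rng(0)
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(p_noise), size=signal.shape)

def activation_times(electrograms, dt=1.0):
    """Local activation time per node from -dV/dt_max of the electrogram.

    electrograms : (n_nodes, n_times) reconstructed EPs over one beat
    dt           : sampling interval (assumed; e.g. in ms)
    """
    dV = np.diff(electrograms, axis=1) / dt   # temporal derivative
    return np.argmin(dV, axis=1) * dt         # instant of the most negative slope
```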

During BSP measurement, sampling failures occur randomly, and some leads might record the potential data with larger noise than others because of a loose connection or other possible reasons. Without loss of generality, the electrodes' malfunction was simulated in a random manner: 50 (out of 352) leads were chosen and their values set to zero (case 1) or corrupted with 20 dB noise (case 2), while the other electrodes carried 30 dB noise. Considering all leads (352), the inverse solutions were first computed for the above two cases, and the corresponding results are displayed in figures 4(a) and (b). Then only the 302 leads (352 − 50) were used in the inverse calculation with a reduced transfer matrix A (from 352 × 98 to 302 × 98), and the results are shown in figure 4(c).
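A sketch of the two simulated malfunction cases, and of the lead removal used to build the reduced-size transfer matrix, is given below; the helper is hypothetical and reuses add_gaussian_noise from the previous sketch, with the problematic leads drawn at random.

```python
import numpy as np

def simulate_bad_leads(phi_T, A, n_bad=50, mode="zero", snr_db=20.0, rng=None):
    """Simulate malfunctioning electrodes in one BSP frame.

    mode="zero"  : case 1, the selected leads record zero potential
    mode="noisy" : case 2, the selected leads carry heavier (e.g. 20 dB) noise
    Returns the corrupted full-size data plus the reduced data and transfer
    matrix obtained by dropping the problematic leads.
    """
    rng = rng if rng is not None else np.random.default_rng(1)
    bad = rng.choice(phi_T.size, size=n_bad, replace=False)
    phi_bad = phi_T.copy()
    if mode == "zero":
        phi_bad[bad] = 0.0
    else:
        phi_bad[bad] = add_gaussian_noise(phi_T[bad], snr_db, rng)
    keep = np.setdiff1d(np.arange(phi_T.size), bad)
    return phi_bad, phi_T[keep], A[keep, :]
```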



Figure 1. The simulated BSPS data on the basis of the measured EPs. (a) The RE and CC curves. (b) The BSP distributions at time = 96 ms (with minimal CC): left panel, the measured BSPM; right panel, the simulated BSPS. (c) The simulated (black) and measured (blue) electrograms at the node shown in (b).

Comparing figures 4 and 2, it can be seen that the RE values of the five methods became larger as the corruption of the 50 electrodes increased from 20 dB noise to zero potentials. At the same time, the five methods had similar performances for the EP reconstruction when the 20 dB noise was added to the problematic electrodes. However, once


Figure 2. The RE and CC curves for the five employed regularization methods (ZOT, FOT, L2TV, L1L2 and L1TV). The data with 30 dB noise were used in the simulation.


Figure 3. Activation maps (posterior view) obtained from the inverse solutions. (a) Real data, (b)–(f) generated using the ZOT, FOT, L2TV, L1L2 and L1TV regularization schemes, respectively. The CC values are shown in each panel. The data with 30 dB noise were used in the simulation.

the noise level became much larger, for example when zero potentials were recorded in those 50 electrodes, the methods behaved rather differently; see figure 4(a). The L1L2 method still had a small RE value, while a large RE was obtained with the FOT method. Omitting from the matrix A the elements corresponding to the problematic electrodes led to little change in the results, and quite similar performance was obtained from the five methods (compare figure 4(c) with figure 2). In fact, it might be difficult to detect such problematic electrodes in a practical measurement. As shown in figure 4, for both cases, the L1L2 method was rather robust and consistently reached the smallest RE for recovering the EPs. Based on the reconstructed EP data, the activation map was computed and the CC values are summarized in table 1 for the BSPS data. In comparison, the CCs of the activation map were smaller than those of the reconstructed EPs, which might be caused by the small range of the activation times (only from



Figure 4. The RE and CC values produced by the five regularization schemes with 50 malfunctioning electrodes. (a) 50 problematic electrodes with zero voltage and a full-scale matrix A (352 × 98), (b) 50 problematic electrodes with 20 dB noise and a full-scale matrix A (352 × 98), (c) 302 leads and a reduced-scale matrix A (302 × 98). The five regularization methods are the same as those shown in figure 2.


Table 1. Summary of the CC data on the activation map for the two BSP datasets and problematic electrodes.

BSP (noise)    Matrix A    No. of problematic electrodes (noise)    ZOT    FOT    L2TV    L1L2    L1TV
BSPS (30 dB)   352 × 98    0                                        0.43   0.51   0.62    0.69    0.54
                           50 (20 dB)                               0.65   0.48   0.39    0.61    0.56
                           50 (0)                                   0.19   0.17   0.37    0.46    0.38
               302 × 98    50 (20 dB/0)                             0.53   0.54   0.54    0.51    0.53
BSPM (0)       352 × 98    0                                        0.49   0.85   0.64    0.66    0.63
                           15 (0)                                   0.43   0.55   0.48    0.52    0.73
                           50 (0)                                   0.25   0.13   0.15    0.70    0.65
               337 × 98    15 (0)                                   0.51   0.73   0.57    0.63    0.61
               302 × 98    50 (0)                                   0.62   0.57   0.71    0.56    0.67

1 to 15), which caused larger errors when calculating the activation time from the EP data. From table 1, it can be seen that, for the activation map, the L2TV, L1L2 and L1TV methods were much more robust, and the L1L2 method offered a large CC.

3.2. Experimental studies

The five regularization methods were also tested against the measured BSPM data. In this study, the possible zero-potential problem was considered only for three groups of electrodes, 0, 15 (~5%) and 50 (~15%), which were randomly altered over time. Both the full-scale and the scale-reduced matrix A were used in the inverse solution when 15 and 50 problematic electrodes were introduced. Figure 5 displays the RE and CC values for the EP distributions reconstructed from the obtained BSPM with the five methods. The CC values of the reconstructed activation map are summarized in table 1. As shown in figures 5(a)–(c), the FOT and L1L2 methods have the smallest RE and the largest CC, the solution obtained from the ZOT method has the largest errors, and the L1TV solution is slightly better than that of the L2TV scheme. Using the scale-reduced matrix A, all methods performed similarly to the BSPS data-based studies, as shown in figures 2 and 4(c) and also indicated by the CC values of the activation map shown in table 1. However, using the full-scale matrix A, these methods behaved rather differently. The L1L2 and L1TV methods, which use the L1-norm of the data-fitting term, performed better than the L2-norm-based ones, as can be seen from the reconstructed EPs and the activation map. From the BSPM results, a similar conclusion to that for the BSPS data can be made. For example, the EPs at 20 ms (largest RE found, see figure 5(a)) are shown in figure 6. It can be seen that the L1-norm penalty term can recover much sharper EP patterns (L2TV and L1TV), while the L2-norm-based solution is not capable of predicting these potentials.

4. Discussion

In this study, the regularization schemes with the L1-norm data term (L1L2 and L1TV) were investigated in the EP-based ECG inverse problem. The proposed schemes were compared with the L2-norm data term-based ones (ZOT, FOT and L2TV) using both the simulation and the experimental dataset. Previously, the methods with the L2-norm data term (ZOT, FOT) and the L2TV method with a constraint on the normal gradient of EPs had been investigated in the



Figure 5. The RE and CC values obtained using the five different regularization schemes for the measured BSPM data. (a) No problematic electrodes and whole-scale matrix A, (b) 15 'error' electrodes and scale-reduced matrix A (337 × 98), (c) 50 'error' electrodes and scale-reduced matrix A (302 × 98), (d) 15 'error' electrodes and whole-scale matrix A, and (e) 50 'error' electrodes and whole-scale matrix A.



ECG inverse studies. The results showed that the FOT approach offers better accuracy than the ZOT approach (Khoury 1994, Throne and Olson 2000). The L1-norm on the constraint term (that is, the TV regularization) has a 'non-smoothing' property, preserving the spatio-temporal features in a better manner (Mocanu et al 2005, Ghosh and Rudy 2009, Milanic et al 2009). These properties can also be found in this study through an examination of the reconstructed EP distribution (figure 6) and the activation map (figure 3). It can be seen that the constraint term in the L2-norm penalizes smooth transitions less than sharp transitions, while the L1-norm mainly penalizes the transition amplitude but not its slope. More importantly, the robustness of the L1-norm data term-based schemes (L1L2 and L1TV) was investigated and compared to that of the L2-norm-based ones. The simulation results on the problematic electrodes with no voltage or a larger noise input demonstrated the advantages of the L1-norm data term formulation. The superiority of the L1-norm data term solution is more obvious with a larger noise content on the problematic electrodes. We note that, owing to the random nature of the problematic electrodes, the study involving the scale-reduced matrix A is, pragmatically, less manageable, which demonstrates that the L1-norm data term-based schemes are more feasible



Figure 6. The EP patterns (posterior view) determined using the five regularization methods with measured BSPM data. The time instant corresponds to t = 20 ms, where the largest RE is obtained, as shown in figure 5(a). (a) Real data, (b)–(f) generated using the ZOT, FOT, L2TV, L1L2 and L1TV regularizations, respectively. The RE and CC values are shown in each panel.

in a clinical practice application. In addition, the L1-norm data term can also be combined with the L1-norm penalty term to gain further performance benefits.

The L1-norm formulation has two distinct advantages, as demonstrated above, in terms of edge preservation (in the penalty term) and error immunity (in the data residual term). However, the implementation of the L1-norm methods is much more complex than that of the L2-norm methods. We compared the LD algorithm, which has been used before in a cardiac inverse problem (Ghosh and Rudy 2009), with the IRN algorithm in the L2TV method. Figure 7 compares the results of the IRN and LD algorithms. It can be seen that the IRN algorithm offers a better solution with similar computational efficiency. Therefore, the IRN algorithm was chosen for this study because of its accuracy and efficiency.

In this study, we chose the regularization parameter λ by comparing a large number (30) of inverse solutions, in which the λ for the jth time step is determined based on that for the (j−1)st time step. This method can effectively find the optimal regularization parameter offering acceptable inverse solutions. However, this scheme is computationally expensive and might not be easy to implement in clinical practice. The selection of optimal regularization parameters has been extensively discussed in the L2-norm scenario (Hansen 1998); however, this issue for L1-norm regularization has not been fully resolved. In recent work, Ghosh and Rudy (2009) used the L-curve method to choose the λ value for the L2TV, and a balancing principle has been proposed for choosing the regularization parameter in the L1 data-fitting method (Clason et al 2010a, 2010b). An improved selection of the regularization parameter is under investigation.

All of the L1-norm-related regularization techniques (L2TV, L1L2 and L1TV) solved the problem using the IRN iterative procedure. Compared to the L2-norm techniques such as the ZOT and FOT methods, these L1-norm techniques require more computational time. For the simulation of one entire cardiac cycle, the computation time is as follows: 7 s for the


Figure 7. The RE and CC for the EPs calculated with the L2TV method using the IRN and LD algorithms.

ZOT and FOT methods, 60 s for the L2TV method, 40 s for the L1L2 method and 55 s for the L1TV method. These schemes were all implemented in Matlab on a modern PC (Intel Core (TM) i7 CPU 2.67 GHz and 12.0 GB RAM). These computation costs do not hinder their clinical application.

5. Conclusion

In this paper, the L1-norm data term-based regularization schemes (L1L2 and L1TV) were presented for the non-invasive imaging of the EPs in the ECG inverse problem. Simulations combined with experimental data have demonstrated that, compared to the L2-norm data term-based methods (ZOT, FOT and L2TV), the L1-norm data term methods (L1TV and L1L2) are capable of offering an improved inverse solution with localized feature preservation and error immunity. Therefore, they can be robustly applied to the ECG inverse problem towards clinical practice. In future work, these regularization approaches will be improved and applied to the study of various normal and abnormal cardiac electrical activities.

Acknowledgments

The authors would like to thank Dr Robert S MacLeod from the University of Utah (CVRTI and SCI Institute) for providing the data used in this study. This project is supported by the 973 National Basic Research & Development Program of China (2007CB512100), the National Natural Science Foundation of China (30900322), the Zhejiang Provincial Natural Science Foundation of China (Y2090398), the China Postdoctoral Science Foundation (20090461376), the Fundamental Research Funds for the Central Universities (KYJD09001) and the Australian Research Council.

References

Alliney S 1997 A property of the minimum vectors of a regularizing functional defined by means of the absolute norm IEEE Trans. Signal Process. 45 913–7
Aydın U and Serinagaoglu Y 2009 Use of activation time based Kalman filtering in inverse problem of electrocardiography IFMBE Proc. vol 22 (Berlin: Springer) pp 1200–3
Bai X, Towle V L, He E J and He B 2007 Evaluation of cortical current density imaging methods using intracranial electrocorticograms and functional MRI Neuroimage 35 598–608
Barley M, Choppu K J, Galea A M, Armoundas A A, Hirschman G B and Cohen R J 2009a Validation of a novel catheter guiding method for the ablative therapy of ventricular tachycardia in a phantom model IEEE Trans. Biomed. Eng. 56 907–10
Barley M E, Armoundas A A and Cohen R J 2009b A method for guiding ablation catheters to arrhythmogenic sites using body surface electrocardiographic signals IEEE Trans. Biomed. Eng. 56 810–9
Borsic A 2002 Regularisation methods for imaging from electrical measurements PhD Thesis School of Engineering, Oxford Brookes University
Borsic A and Adler A 2009 A primal dual-interior point framework for EIT reconstruction and regularization with 1-norm and 2-norm 10th Int. Conf. on Biomedical Applications of Electrical Impedance Tomography (EIT 2009) (Manchester, UK, 16–19 June) http://www.sce.carleton.ca/faculty/adler/publications/2009/borsic-EIT2009-PDIPM.pdf
Borsic A, Graham B M, Adler A and Lionheart W R 2010 In vivo impedance imaging with total variation regularization IEEE Trans. Med. Imaging 29 44–54
Chan R, Dong Y and Hintermuller M 2010 An efficient two-phase L1-TV method for restoring blurred images with impulse noise IEEE Trans. Image Process. 19 1731–9
Chan T F and Esedoglu S 2005 Aspects of total variation regularized L1 function approximation SIAM J. Appl. Math. 65 1817–37
Clason C, Jin B and Kunisch K 2010a A duality-based splitting method for L1-TV image restoration with automatic regularization parameter choice SIAM J. Sci. Comput. 32 1484–505
Clason C, Jin B and Kunisch K 2010b A semismooth Newton method for L1 data fitting with automatic choice of regularization parameters and noise SIAM J. Imaging Sci. 3 119–231
Dai T and Adler A 2008 Electrical impedance tomography reconstruction using L1 norms for data and image terms IEEE EMBC 2008 pp 2721–4
Ding L and He B 2008 Sparse source imaging in electroencephalography with accurate field modeling Hum. Brain Mapp. 29 1053–67
Dossel O 2000 Inverse problem of electro- and magnetocardiography: review and recent progress Int. J. Bioelectromagn. 2 (2)
Farina D and Dossel O 2009 Non-invasive model-based localization of ventricular ectopic centers from multichannel ECG Int. J. Appl. Electromagn. Mech. 30 289–97
Farina D, Jiang Y and Dossel O 2009 Acceleration of FEM-based transfer matrix computation for forward and inverse problems of electrocardiography Med. Biol. Eng. Comput. 47 1229–36
Ghosh S and Rudy Y 2009 Application of L1-norm regularization to epicardial potential solution of the inverse electrocardiography problem Ann. Biomed. Eng. 37 902–12
Hansen P C 1998 Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion (Philadelphia, PA: SIAM)
Jiang M, Xia L, Shou G and Tang M 2007 Combination of the LSQR method and a genetic algorithm for solving the electrocardiography inverse problem Phys. Med. Biol. 52 1277–94
Jiang Y, Qian C, Hanna R, Farina D and Dossel O 2009 Optimization of the electrode positions of multichannel ECG for the reconstruction of ischemic areas by solving the inverse electrocardiographic problem Int. J. Bioelectromagn. 11 27–37
Khoury D S 1994 Use of current density in the regularization of the inverse problem of electrocardiography Proc. IEEE EMBS 3–6 133–4
MacLeod R S and Johnson C R 1993 Map3d: interactive scientific visualization for bioengineering data IEEE EMBS 15 30–1
Martin R O and Pilkington T C 1972 Unconstrained inverse electrocardiography: epicardial potentials IEEE Trans. Biomed. Eng. 19 276–85
Milanic M, Jazbinsek V, Wang D, Sinstra J, MacLeod R, Brooks D and Hren R 2009 Evaluation of approaches to solving electrocardiographic imaging problem Comput. Cardiol. 36 177–80
Mocanu D, Kettenbach J, Eisenberg S and Morega A M 2005 Nonsmooth regularization in electrocardiographic imaging Rev. Roum. Sci. Tech. Electrotech. Energ. 50 249–60
Nikolova M 2002 Minimizers of cost-functions involving nonsmooth data-fidelity terms. Application to the processing of outliers SIAM J. Numer. Anal. 40 965–94
Potse M, Dube B and Vinet A 2009 Cardiac anisotropy in boundary-element models for the electrocardiogram Med. Biol. Eng. Comput. 47 719–29
Ramanathan C, Ghanem R N, Jia P, Ryu K and Rudy Y 2004 Noninvasive electrocardiographic imaging for cardiac electrophysiology and arrhythmia Nat. Med. 10 422–8
Ramanathan C, Jia P, Ghanem R, Calvetti D and Rudy Y 2003 Noninvasive electrocardiographic imaging (ECGI): application of the generalized minimal residual (GMRes) method Ann. Biomed. Eng. 31 981–94
Ramanathan C, Jia P, Ghanem R, Ryu K and Rudy Y 2006 Activation and repolarization of the normal human heart under complete physiological conditions Proc. Natl Acad. Sci. 103 6309–14
Rodriguez P and Wohlberg B 2009 Efficient minimization method for a generalized total variation functional IEEE Trans. Image Process. 18 322–32
Rudin L, Osher S J and Fatemi E 1992 Nonlinear total variation based noise removal algorithms Physica D 60 259–68
Ruud T S, Nielsen B F, Lysaker M and Sundnes J 2009 A computationally efficient method for determining the size and location of myocardial ischemia IEEE Trans. Biomed. Eng. 56 263–72
Serinagaoglu Y, Brooks D H and MacLeod R S 2005 Bayesian solutions and performance analysis in bioelectric inverse problems IEEE Trans. Biomed. Eng. 52 1009–20
Shou G, Xia L, Jiang M, Wei Q, Liu F and Crozier S 2008 Truncated total least squares (TTLS): a new regularization method for the solution of ECG inverse problems IEEE Trans. Biomed. Eng. 55 1327–35
Shou G, Xia L, Jiang M, Wei Q, Liu F and Crozier S 2009 Solving the ECG forward problem by means of standard h- and h-hierarchical adaptive linear boundary element method: comparison with two refinement schemes IEEE Trans. Biomed. Eng. 56 1454–64
Strong D and Chan T 2003 Edge-preserving and scale-dependent properties of total variation regularization Inverse Problems 19 S165–87
Throne R D and Olson L G 2000 A comparison of spatial regularization with zero and first order Tikhonov regularization for the inverse problem of electrocardiography Comput. Cardiol. 27 493–6
Tikhonov A N and Arsenin V Y 1977 Solutions of Ill-Posed Problems (New York: Wiley)
van Dam P M, Oostendorp T F, Linnenbank A C and van Oosterom A 2009 Non-invasive imaging of cardiac activation and recovery Ann. Biomed. Eng. 37 1739–56
Wang L W, Zhang H Y, Wong K C L, Liu H F and Shi P C 2010 Physiological-model-constrained noninvasive reconstruction of volumetric myocardial transmembrane potentials IEEE Trans. Biomed. Eng. 57 296–315
Wang S and Arthur R M 2009 A new method for estimating cardiac transmembrane potentials from the body surface Int. J. Bioelectromagn. 11 59–63
Wohlberg B and Rodriguez P 2007 An iteratively reweighted norm algorithm for minimization of total variation functionals IEEE Signal Process. Lett. 14 948–51