
Implementation of Dynamic Logic Algorithm for Detection of EM Fields Scattered by Langmuir Soliton

V.I. Sotnikov, R.W. Deming and L. Perlovsky
Air Force Research Laboratory, Sensors Directorate, Hanscom AFB, MA, USA

1. Introduction

In plasma diagnostics, scattering of electromagnetic waves is widely used to identify density and wave-field perturbations. Difficulties in implementing this type of diagnostic often arise because the scattered signals have small amplitudes, comparable to the noise level. Under such conditions it is very important to use an algorithm that can identify and separate the scattered wave spectrum from the noise pattern. In the present work we use a powerful mathematical approach, dynamic logic (DL) [1], to identify the spectra of electromagnetic (EM) waves scattered by a Langmuir soliton, a one-dimensional localized wave structure. Such wave structures are understood to be basic elements of strong plasma turbulence.

We are interested in implementing the Dynamic Logic algorithm for diagnostics of electromagnetic waves scattered by a Langmuir soliton in the presence of random noise with amplitudes comparable to those of the scattered waves. Scattering of EM waves by a Langmuir soliton has been analyzed by many authors (see, for example, Sinitzin, 1978, 1979; Mendonca, 1983).

For our purposes we use the results for electromagnetic (EM) wave scattering by a Langmuir soliton obtained in Mendonca, 1983, where scattering by the low-frequency part of the density perturbation (the density well) as well as by its high-frequency part was examined, and corresponding expressions for the amplitudes of the scattered waves were obtained.

2. EM scattering by Langmuir soliton

The Langmuir soliton is a solution of the well-known Zakharov equations (Zakharov, 1972):

$$i\frac{\partial E}{\partial t} + \frac{3}{2}\,\omega_{pe} r_{De}^2\,\frac{\partial^2 E}{\partial x^2} = \frac{\omega_{pe}}{2}\,\frac{\delta n}{n_0}\,E \tag{1}$$

$$\frac{\partial^2 \delta n}{\partial t^2} - c_s^2\,\frac{\partial^2 \delta n}{\partial x^2} = \frac{1}{32\pi M}\,\frac{\partial^2 |E|^2}{\partial x^2} \tag{2}$$


Here $n_0$ is the plasma density, $\delta n$ the density perturbation, $c_s = \sqrt{T_e/M}$ the ion-sound velocity ($T_e$ is the electron temperature and $M$ the ion mass), and $E$ the complex amplitude of the Langmuir waves.

The fundamental object of the theory of strong Langmuir turbulence, the Langmuir soliton (Rudakov, 1972), can be obtained from equations (1)-(2) by looking for traveling-wave solutions of the form:

$$\mathbf{E}_{LS}(\mathbf{r},t) = \mathbf{e}_x\, E_{LS}(x-ut)\,\exp[i(k_{LS}x - \omega_{LS}t)] + \mathrm{c.c.} \tag{3}$$

with the imposed boundary conditions $E(x \to \pm\infty) \to 0$ and with $\omega_{LS} \approx \omega_{pe}$. In (3), $\mathbf{e}_x$ is the unit vector along the x-axis. Below we use the well-known solution of (1)-(2) given by:

$$\mathbf{E}_{LS}(\mathbf{r},t) = \mathbf{e}_x\, E_{LS}\,\mathrm{sech}\!\left(\frac{x-ut}{\Delta}\right)\exp(ik_{LS}x - i\omega_{LS}t) + \mathrm{c.c.} \tag{4}$$

where $\Delta$ is the soliton half-width, which can be expressed through the electrostatic energy density normalized to the plasma thermal energy density, $W_{LS} = |E_{LS}|^2/(4\pi n_0 T_e)$:

$$\Delta = r_{De}\,\sqrt{\frac{3}{W_{LS}}} \tag{5}$$

The frequency $\omega_{LS}$ and wave number $k_{LS}$ are defined as:

$$\omega_{LS} = \omega_{pe}\left[1 + \frac{1}{2}\left(\frac{u}{\sqrt{3}\,v_{Te}}\right)^2 - \frac{W_{LS}}{2}\right] \tag{6}$$

$$k_{LS} = \frac{u}{3\,v_{Te}\, r_{De}} \tag{7}$$

where $v_{Te}$ and $r_{De}$ are the electron thermal speed and the Debye radius.
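For concreteness, relations (4)-(7) are easy to evaluate numerically. The following Python sketch (not part of the original analysis) computes the soliton envelope and parameters in normalized units; all numerical values are illustrative assumptions.

```python
# Illustrative sketch of Eqs. (4)-(7) in normalized units.
# All numerical values are assumptions chosen for demonstration only.
import numpy as np

omega_pe = 1.0                 # plasma frequency (normalizes time)
r_De     = 1.0                 # Debye radius (normalizes length)
v_Te     = omega_pe * r_De     # electron thermal speed
W_LS     = 0.01                # normalized energy density |E_LS|^2/(4 pi n0 Te)
u        = 0.1 * v_Te          # soliton velocity

Delta    = r_De * np.sqrt(3.0 / W_LS)      # half-width, Eq. (5)
k_LS     = u / (3.0 * v_Te * r_De)         # wave number, Eq. (7)
omega_LS = omega_pe * (1.0 + 0.5 * (u / (np.sqrt(3.0) * v_Te))**2
                       - W_LS / 2.0)       # frequency, Eq. (6)

x = np.linspace(-10 * Delta, 10 * Delta, 2001)
E_envelope = 1.0 / np.cosh(x / Delta)      # sech profile of Eq. (4) at t = 0
```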

The electric field of an EM wave interacting with a Langmuir soliton can be written in the following form:

$$\mathbf{E}_0(\mathbf{r},t) = \mathbf{e}_p\, E_0\, e^{i(\mathbf{k}_0\cdot\mathbf{r} - \omega_0 t)} + \mathrm{c.c.} \tag{8}$$

where $\mathbf{e}_p$ is the polarization unit vector, and $\omega_0$ and $\mathbf{k}_0$ are the frequency and the wave vector of the EM wave. These are connected through the dispersion relation:

$$\omega_0^2 = k_0^2 c^2 + \omega_{pe}^2 \tag{9}$$

In the process of scattering, the EM wave interacts with the electron density perturbations associated with the Langmuir soliton. These density oscillations consist of high- and low-frequency parts:

$$\delta n = \delta n_1 + \delta n_2 \tag{10}$$

The high-frequency oscillations can be represented as:

$$\delta n_1 = i\,W_{LS}^{1/2}\, n_0\,\frac{2}{3}\,\frac{u}{v_{Te}}\,\mathrm{sech}\!\left(\frac{x-ut}{\Delta}\right) e^{i(k_{LS}x - \omega_{LS}t)} + \mathrm{c.c.} \tag{11}$$

For the low-frequency part (the density well) we can use:

$$\delta n_2 = -\,W_{LS}\, n_0\,\mathrm{sech}^2\!\left(\frac{x-ut}{\Delta}\right) \tag{12}$$

It is well known that the ratio $\delta n_1/\delta n_2$ is of the order of unity, so we need to take into account contributions to the scattered EM fields from both the high- and low-frequency parts of the density perturbations. As a result, far away from the soliton the scattered wave field can be obtained in the following asymptotic form (Mendonca, 1983):

$$E_f(x) = A(\omega_1)\,2\Delta\, E_{LS}\, E_0\, e^{iq_1 x} + B(\omega_2)\,2\Delta\, W_{LS}\, E_0\, e^{iq_2 x} \tag{13}$$

In (13), $A(\omega)$ and $B(\omega)$ are given by:

$$A(\omega) = \frac{\pi}{2}\,\frac{\omega_{pe}^2}{c^2}\,(\mathbf{e}_p\cdot\mathbf{e}_s)\,\frac{u}{\sqrt{3}\,v_{Te}}\,\frac{1}{(4\pi n_0 T_e)^{1/2}}\,\mathrm{sech}\!\left[\frac{\pi\Delta}{2u}\,(\omega-\omega_1)\right] \tag{14}$$

$$B(\omega) = \frac{\pi}{2}\,\frac{\omega_{pe}^2}{c^2}\,(\mathbf{e}_p\cdot\mathbf{e}_s)\,\frac{\Delta\,(\omega-\omega_2)}{u}\,\mathrm{cosech}\!\left[\frac{\pi\Delta}{2u}\,(\omega-\omega_2)\right] \tag{15}$$

where $\omega_1$ and $\omega_2$ are defined as:

$$\omega_1 = \omega_0 \pm \omega_{LS}, \qquad \omega_2 = \omega_0 \tag{16}$$

As follows from (13)-(16), at large distances from the soliton it is possible to detect four scattered waves. Below, for illustrative purposes, we present these four scattered waves, which appear due to the interaction of the incident EM wave, propagating at a 45° angle in the x-y plane, with the soliton moving along the x-axis (Fig. 1). The pattern of the total normalized electric field amplitudes of the scattered EM waves given by (13) is presented in Fig. 2. The Fourier spectrum of the scattered EM waves is presented in Figure 3; below we use this spectrum in the implementation of the Dynamic Logic algorithm.
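As a rough illustration of (13) and (16), the sketch below tabulates the four scattered-wave frequencies for assumed values of the incident and soliton frequencies; the amplitude factors stand in as placeholder constants rather than the full expressions (14)-(15).

```python
# Illustrative sketch of Eqs. (13) and (16); all numbers are assumptions.
# Two sidebands arise from the high-frequency perturbation dn_1 and two
# (forward/backward) components from the density well dn_2.
import numpy as np

omega_0  = 5.0      # assumed incident EM frequency (units of omega_pe)
omega_LS = 1.0      # assumed soliton frequency

omega_1 = (omega_0 + omega_LS, omega_0 - omega_LS)   # dn_1 sidebands, Eq. (16)
omega_2 = (omega_0, omega_0)                         # scattering off dn_2

# Placeholder amplitudes standing in for Eqs. (14)-(15):
A, B, Delta, E_LS, W_LS, E_0, q = 0.1, 0.1, 5.0, 1.0, 0.01, 1.0, 1.2
x = np.linspace(50.0, 150.0, 500)                    # far from the soliton
E_f = (A * 2 * Delta * E_LS * E_0 * np.exp(1j * q * x)
       + B * 2 * Delta * W_LS * E_0 * np.exp(1j * q * x))  # structure of Eq. (13)
```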


Fig. 1. Scattered EM waves produced due to the interaction of the incident EM wave with the Langmuir soliton.


Fig. 2. The pattern of the total normalized electric field amplitudes of scattered EM waves.


Fig. 3. The Fourier spectrum of scattered EM waves from (13).


We are interested in the implementation of the Dynamic Logic algorithm for diagnostics of electromagnetic waves scattered by a Langmuir soliton in the presence of random noise with amplitudes comparable to those of the scattered waves. Imposing a random noise pattern, with amplitudes comparable to those of the scattered EM waves, on top of the Fourier spectrum in Figure 3, we obtain the 2D pattern in Fourier space shown in Fig. 4.

Fig. 4. Combined Fourier spectrum, consisting of the spectrum of scattered EM waves and a random noise pattern with comparable amplitudes.

As is clearly seen in Figure 4, the scattered EM wave spectrum is hidden in the noise pattern. Our aim is to apply the Dynamic Logic algorithm in order to extract information on the scattered EM waves from the pattern in Figure 4.
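A noisy observation of the kind shown in Fig. 4 can be emulated in a few lines; in the sketch below the peak positions and noise statistics are arbitrary placeholders, not the actual spectrum of Fig. 3.

```python
# Sketch: imposing a noise pattern of comparable amplitude on a 2D Fourier
# spectrum, cf. Fig. 4. `spectrum` stands in for the pattern of Fig. 3.
import numpy as np

rng = np.random.default_rng(0)
spectrum = np.zeros((128, 128))
spectrum[40, 60] = spectrum[80, 60] = 1.0      # stand-ins for scattered-wave peaks

noise = rng.uniform(0.0, 1.0, spectrum.shape)  # noise with comparable amplitude
observed = spectrum + noise                    # combined pattern, cf. Fig. 4
```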

3. Implementation of the dynamic logic algorithm

In order to demonstrate the basics of the DL algorithm, we first present one illustrative example. Suppose we have two classes, $k=1$ and $k=2$, together with samples $\mathbf{X}_n$, each of which belongs to one of the two classes. Every sample $n$ has two features, $\mathbf{X}_n = (x_{n1}, x_{n2})$. In Figure 5 two groups of samples are presented: red dots denote samples from class $k=1$ and blue dots samples from class $k=2$.

Fig. 5. Two groups of samples. Samples shown as red dots belong to class k=1 and samples shown as blue dots to class k=2.

Typically, due to random variations, there is good reason to expect the features in every class to be distributed normally (that is, according to the well-known Gaussian function, or "bell curve"). A normal distribution is completely specified by its set of parameters $\{\mathbf{M}, \Sigma\}$, where $\mathbf{M}$ is the mean vector and $\Sigma$ is the covariance matrix (related to the width, or "spread", of the distribution). The equation for a normally distributed probability density function (pdf) for the class $k=1$ is:

$$p(\mathbf{X}_n|1) = \frac{1}{(2\pi)^{d/2}\,|\Sigma_1|^{1/2}}\exp\left[-\frac{1}{2}(\mathbf{X}_n-\mathbf{M}_1)^T\,\Sigma_1^{-1}\,(\mathbf{X}_n-\mathbf{M}_1)\right] \tag{17}$$

In (17), $p(\mathbf{X}_n|1)$ is the probability density that sample $n$ with feature vector $\mathbf{X}_n = (X_{n,1}, X_{n,2})$ belongs to class 1 with parameters $\Lambda_1 = (\mathbf{M}_1, \Sigma_1)$. Here $\mathbf{M}_1$ is the mean vector for class 1, with components defined as:

$$M_{1,k} = \int X_k\; p(\mathbf{X}|1)\,d\mathbf{X} \tag{18}$$

The components of the covariance matrix $\Sigma_1$ for class 1 are defined as:

$$\Sigma_{1,ij} = \int (X_i - M_{1,i})(X_j - M_{1,j})\; p(\mathbf{X}|1)\,d\mathbf{X} \tag{19}$$

When we have $N$ training samples $\mathbf{X}_{n,1}$ that belong only to class 1, the procedure for finding the parameters of the Gaussian distribution $\Lambda_1 = \{\mathbf{M}_1, \Sigma_1\}$ is as follows [6]. We introduce the likelihood function $L(\Lambda_1)$:

$$L(\Lambda_1) = \prod_{n=1}^{N} p(\mathbf{X}_{n,1}|\Lambda_1)$$

The logarithm of the likelihood function,

$$LL(\Lambda_1) = \ln L(\Lambda_1) = \sum_{n=1}^{N} \ln p(\mathbf{X}_{n,1}|\Lambda_1) \tag{20}$$

can be maximized by setting to zero its partial derivatives with respect to the parameters $\Lambda_1 = \{\mathbf{M}_1, \Sigma_1\}$. The resulting maximum-likelihood values for the mean and covariance in class 1 are:

$$\mathbf{M}_1 = \frac{1}{N}\sum_{n=1}^{N} \mathbf{X}_{n,1} \tag{21}$$

and the covariance matrix for class 1 is given by:

$$\Sigma_1 = \frac{1}{N}\sum_{n=1}^{N} (\mathbf{X}_{n,1}-\mathbf{M}_1)(\mathbf{X}_{n,1}-\mathbf{M}_1)^T \tag{22}$$


Now we return to the problem formulated at the beginning of this section. Suppose we have two processes (classes), 1 and 2. We have identified $N$ samples $\{\mathbf{X}_{n,1}\}$ that belong to class 1 and another $N$ samples $\{\mathbf{X}_{n,2}\}$ that belong to class 2. Our aim is to design a classifier for sorting the remaining unlabelled samples, which belong either to class $k=1$ or to class $k=2$, assuming that we know the features $\mathbf{X}_n = (x_{n1}, x_{n2})$ of the remaining samples but do not know to which class they belong. The procedure is as follows. First we gather a set of $N$ training samples for each of the two classes $k=1$ and $k=2$. Next we assign feature values $\mathbf{X}_n = (x_{n1}, x_{n2})$ to each sample. Then for each class, separately, we estimate the mean and covariance using (21) and (22), giving a set of parameters $\Lambda_k = \{\mathbf{M}_k, \Sigma_k\}$ with $k = 1,2$. Knowing these parameters we can compute the pdf $p(\mathbf{X}|k)$ of the data according to equation (17). Then we can compute, for each unlabelled sample, the probability

$$P(k|n) = \frac{p(\mathbf{X}_n|k)\,P(k)}{p(\mathbf{X}_n)} \tag{23}$$

where

$$p(\mathbf{X}_n) = \sum_{k=1}^{2} p(\mathbf{X}_n|k)\,P(k)$$

Equation (23) gives the probability $P(k|n)$ that sample $n$ belongs to class $k$. To classify the unlabelled sample $n$, we simply choose the class $k'$ for which this probability is the highest. Equation (23) is just an expression of Bayes' rule. Note that in equation (23) it is assumed that we know the a priori probabilities $P(k)$ for each class. Obviously, for each sample the probabilities must add up to one, i.e.:

$$\sum_{k=1}^{2} P(k|n) = 1 \tag{24}$$

Having described how to estimate the mean and covariance of a single normal (Gaussian) distribution from a collection of data samples, we now describe how to estimate parameters when the pdf is a weighted mixture, or summation, of Gaussian functions. Gaussian mixtures can be used to model complicated data sets. In this case, each source corresponds to a class $k$ parameterized by its own mean and covariance, which we need to estimate. In addition to the mean and covariance, we also need to estimate the a priori probabilities for each class, i.e. what fraction of the samples belongs to each class. The estimation of parameters for mixture models is complicated by the fact that the data samples are unlabelled, i.e. we do not know which data samples correspond to which class. Because of this inconvenience, it is impossible to compute direct analytical expressions for the maximum-likelihood parameter estimates. However, we can derive a convergent, iterative process to estimate the parameters. To do so we perform the following basic procedure. First, a generic model for the data is defined in which the parameters for each class $k$ are assumed to be the only unknowns. Next, a metric is selected which will be used to evaluate the "goodness of fit" between the model and the data for a given set of parameter values. Finally, an algorithm is developed to maximize the "goodness of fit" with respect to the parameters. Below we describe maximum-likelihood (ML) parameter estimation, in


which we seek to maximize the log-likelihood of the parameters with respect to a collection of data samples. This approach is appropriate for standard data in which all samples are weighted equally in determining the model parameters. Next, we introduce an alternative known as maximum entropy (ME) parameter estimation. Here the parameters are chosen to maximize the Einsteinian log-likelihood. This metric is appropriate for processing digitized image data, in which samples are weighted according to their pixel values.

4. Maximum likelihood (ML) estimation

We now describe maximum-likelihood (ML) parameter estimation for a Gaussian mixture [1, 6]. Mathematically, the mixture model for two classes $k=1$ and $k=2$ is represented by the equation:

$$p(\mathbf{X}|\Lambda) = \sum_{k=1}^{2} r_k\; p(\mathbf{X}|\Lambda_k) \tag{25}$$

where $r_k$ is the mixture weight for class $k$ and $\Lambda = \{r_1, \Lambda_1, r_2, \Lambda_2\}$ is the total set of parameters for the mixture. Also, $\Lambda_k = (\mathbf{M}_k, \Sigma_k)$ is the subset of mean and covariance parameters for class $k$. Each class $k$ has the normal (Gaussian) distribution:

$$p(\mathbf{X}|\Lambda_k) = \frac{1}{(2\pi)^{d/2}\,|\Sigma_k|^{1/2}}\exp\left[-\frac{1}{2}(\mathbf{X}-\mathbf{M}_k)^T\,\Sigma_k^{-1}\,(\mathbf{X}-\mathbf{M}_k)\right]$$

Since $r_k$ denotes the fraction of the total samples that belong to class $k$, it is, by definition, equivalent to the a priori probability $P(k)$ for class $k$. These probabilities must sum to one, i.e.:

$$\sum_{k=1}^{K} P(k) = \sum_{k=1}^{K} r_k = 1 \tag{26}$$

Our goal is to compute the maximum-likelihood (ML) parameter estimates $\Lambda = \{r_1, \Lambda_1, r_2, \Lambda_2\}$ subject to the constraint in equation (26). As in the previous section, this is accomplished by a process that incorporates the derivatives of the log-likelihood with respect to each parameter:

$$LL(\Lambda) = \sum_{n=1}^{N} \ln p(\mathbf{X}_n|\Lambda) = \sum_{n=1}^{N} \ln \sum_{k=1}^{K} r_k\; p(\mathbf{X}_n|\Lambda_k) \tag{27}$$

The maximum-likelihood parameter estimates exactly satisfy the equations:

$$\mathbf{M}_k = \frac{\sum_{n=1}^{N} P(k|n)\,\mathbf{X}_n}{\sum_{n=1}^{N} P(k|n)} \tag{28}$$


$$\Sigma_k = \frac{\sum_{n=1}^{N} P(k|n)\,(\mathbf{X}_n-\mathbf{M}_k)(\mathbf{X}_n-\mathbf{M}_k)^T}{\sum_{n=1}^{N} P(k|n)} \tag{29}$$

$$r_k = \frac{1}{N}\sum_{n=1}^{N} P(k|n) \tag{30}$$

$$P(k|n) = \frac{r_k\; p(\mathbf{X}_n|\Lambda_k)}{\sum_{j=1}^{K} r_j\; p(\mathbf{X}_n|\Lambda_j)} \tag{31}$$

Equation (31) is actually a form of Bayes' rule and can be interpreted as the association probability, i.e. the probability that sample $n$ belongs to class $k$. Equations (28) and (29) are intuitively pleasing in the sense that they are identical to their counterparts, Eqs. (21) and (22), for the normal distribution, except that here each sample is weighted by the association probability $P(k|n)$.

Although Eqs. (28)-(31) look simple enough, they are actually a set of coupled nonlinear equations, because the quantity $P(k|n)$ on the right-hand sides is a function of all the parameters $\Lambda$, as can be seen from equations (31) and (25). If one could solve these equations directly, they would yield the exact values of the maximum-likelihood parameters; unfortunately, a direct solution is intractable. However, we can compute the parameters indirectly, by iterating back and forth between the parameter estimation equations (28)-(30) and the probability estimation equation (31). In other words, the parameter estimates $[\mathbf{M}_k^{(I)}, \Sigma_k^{(I)}, r_k^{(I)}]$ in iteration $I$ are computed using the association probabilities $P^{(I-1)}(k|n)$ computed in the previous iteration $(I-1)$. Next, the association probabilities $P^{(I)}(k|n)$ in iteration $I$ are computed using the parameter estimates $[\mathbf{M}_k^{(I)}, \Sigma_k^{(I)}, r_k^{(I)}]$ from iteration $I$. Mathematically, this procedure corresponds to an approximate set of equations, analogous to Eqs. (28)-(31), which constitute the iteration procedure for computing the parameter estimates:

$$\mathbf{M}_k^{(I)} = \frac{\sum_{n=1}^{N} P^{(I-1)}(k|n)\,\mathbf{X}_n}{\sum_{n=1}^{N} P^{(I-1)}(k|n)} \tag{32}$$

$$\Sigma_k^{(I)} = \frac{\sum_{n=1}^{N} P^{(I-1)}(k|n)\,(\mathbf{X}_n-\mathbf{M}_k^{(I)})(\mathbf{X}_n-\mathbf{M}_k^{(I)})^T}{\sum_{n=1}^{N} P^{(I-1)}(k|n)} \tag{33}$$

$$r_k^{(I)} = \frac{1}{N}\sum_{n=1}^{N} P^{(I-1)}(k|n) \tag{34}$$


and the iteration procedure for computing the probabilities:

$$P^{(I)}(k|n) = \frac{r_k^{(I)}\; p(\mathbf{X}_n|\Lambda_k^{(I)})}{\sum_{j=1}^{K} r_j^{(I)}\; p(\mathbf{X}_n|\Lambda_j^{(I)})} \tag{35}$$

To initialize the procedure, parameter values are assigned at random, or by other means. Then the above set of equations is iterated until convergence, typically on the order of 100 iterations; a compact sketch of this iteration is given below. Figures 6 and 7 present the results of applying the DL algorithm to the problem of identifying samples belonging to the two different classes described above and illustrated in Figure 5.
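One possible Python realization of the iteration (32)-(35), the standard EM recursion for a Gaussian mixture, is sketched here; the initialization (random means, data-spanning covariances) follows the description above, but the specific choices are assumptions:

```python
# Sketch of the iteration (32)-(35) for a K-component Gaussian mixture.
# X has shape (N, d); returns the converged means, covariances and weights.
import numpy as np
from scipy.stats import multivariate_normal

def dl_em(X, K, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    N, d = X.shape
    M = X[rng.choice(N, K, replace=False)]          # random initial means
    S = np.array([np.cov(X.T) for _ in range(K)])   # covariances spanning the data
    r = np.full(K, 1.0 / K)                         # equal initial weights
    for _ in range(iters):
        # association probabilities, Eq. (35)
        p = np.stack([r[k] * multivariate_normal.pdf(X, M[k], S[k])
                      for k in range(K)], axis=1)
        P = p / p.sum(axis=1, keepdims=True)        # shape (N, K)
        # parameter updates, Eqs. (32)-(34)
        Nk = P.sum(axis=0)
        M = (P.T @ X) / Nk[:, None]
        for k in range(K):
            D = X - M[k]
            S[k] = (P[:, k, None] * D).T @ D / Nk[k]
        r = Nk / N
    return M, S, r
```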

Fig. 6. With increasing iterations, the Dynamic Logic (DL) model adapts to segment data samples drawn from multiple random processes. In this example, the data samples are drawn from two Gaussian random processes, and the DL model consists of two components. The dots and circles represent the data samples, while the black ellipses represent the 2-sigma extent of the DL covariance, centered at the mean for each DL component. (a) Initially, the mean parameters for the components are assigned random initial values, and the covariance parameters are initialized to roughly span the sample data. (b) After several iterations the DL components begin to partition the data in a fuzzy manner as their evolving mean and covariance parameters cause them to "shrink" to fit each sample cluster. (c) Upon convergence, the DL model correctly segments the samples, and the mean and covariance parameters of the DL components provide good estimates of the mean and covariance of the underlying random processes.

In practice, one can decide whether convergence has been reached by plotting the log-likelihood vs. iteration number and looking for a leveling off of the curve. The procedure is guaranteed to converge, in the sense that LL will never decrease between successive iterations. However, convergence to an extraneous local maximum is sometimes possible. If this error occurs, and can be detected, the user must repeat the procedure after choosing a new set of initial conditions. The iterative procedure described above was pioneered by Perlovsky [1] and, independently, by others. In fact, the procedure has been shown to be an application of the well-known expectation-maximization (EM) algorithm, which is described, for example, in [7] and the references therein.
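In code, this convergence test amounts to evaluating the log-likelihood (27) after each iteration, for example:

```python
# Sketch: monitor the log-likelihood (27) across iterations; EM guarantees
# it never decreases, so stop when the change becomes negligible.
import numpy as np
from scipy.stats import multivariate_normal

def log_likelihood(X, M, S, r):
    p = np.stack([r[k] * multivariate_normal.pdf(X, M[k], S[k])
                  for k in range(len(r))], axis=1)
    return float(np.log(p.sum(axis=1)).sum())   # Eq. (27)

# e.g. stop when abs(LL_new - LL_old) < 1e-6 * abs(LL_old)
```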


5. Maximum entropy (ME) estimation

In ML estimation, discussed above, we utilized the log-likelihood to determine the "goodness of fit" between the model and the data. This well-known metric is appropriate for standard data in which all samples are weighted equally in determining the model parameters. However, for some data sets, for example those involving digitized images, we require a different metric that incorporates the natural weighting of samples corresponding to their pixel values [1, 6]. For these problems we now introduce the metric known as the Einsteinian log-likelihood:

$$LL_E(\Lambda) = \sum_{n=1}^{N} p_0(\mathbf{X}_n)\,\ln p(\mathbf{X}_n|\Lambda) \tag{36}$$

Here the function $p_0(\mathbf{X}_n)$ denotes the distribution of pixel values as a function of pixel position within the image. Note that $\mathbf{X}_n$ is no longer a vector of classification features, as has been the case up until now, but rather a vector giving the position $\mathbf{X}_n = [x_{n1}, x_{n2}]^T$ of the $n$th pixel in the image.

The Einsteinian log-likelihood $LL_E$, which was introduced by Perlovsky [1], has some interesting physical interpretations relating to photon ensembles, as discussed in [1]. It is also related to some standard measures of similarity used in information theory. In fact, maximizing $LL_E$ with respect to the parameters $\Lambda$ is equivalent to minimizing the relative entropy, or Kullback-Leibler distance [5]:

$$D(p_0\,\|\,p) = \sum_{x} p_0(x)\,\ln\!\left[\frac{p_0(x)}{p(x)}\right] \tag{37}$$

In any case, it is easy to see that if $p_0(\mathbf{X}_n) = 1$, the Einsteinian log-likelihood $LL_E$ in equation (36) reduces to the standard log-likelihood $LL$ given by equation (27).
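The equivalence between maximizing $LL_E$ and minimizing the Kullback-Leibler distance can be checked numerically; the sketch below uses arbitrary discrete distributions:

```python
# Since D(p0||p) = sum_x p0 ln p0 - LL_E, and the first term does not depend
# on the model p, maximizing LL_E (36) is the same as minimizing D (37).
import numpy as np

def ll_einstein(p0, p):
    return float(np.sum(p0 * np.log(p)))       # Eq. (36), discrete form

def kl_distance(p0, p):
    return float(np.sum(p0 * np.log(p0 / p)))  # Eq. (37)

p0 = np.array([0.5, 0.3, 0.2])                 # arbitrary data distribution
p  = np.array([0.4, 0.4, 0.2])                 # arbitrary model distribution
assert np.isclose(kl_distance(p0, p),
                  np.sum(p0 * np.log(p0)) - ll_einstein(p0, p))
```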

The equation describing our mixture distribution is the same as equation (25):

$$p(\mathbf{X}|\Lambda) = \sum_{k=1}^{K} r_k\; p(\mathbf{X}|\Lambda_k) \tag{38}$$

where

$$p(\mathbf{X}|\Lambda_k) = \frac{1}{(2\pi)^{d/2}\,|\Sigma_k|^{1/2}}\exp\left[-\frac{1}{2}(\mathbf{X}-\mathbf{M}_k)^T\,\Sigma_k^{-1}\,(\mathbf{X}-\mathbf{M}_k)\right]$$

Also, as before, $\Lambda = \{r_k, \Lambda_k\}$, $k = 1, 2, \ldots, K$, is the total set of parameters for the mixture, while $\Lambda_k = \{\mathbf{M}_k, \Sigma_k\}$ is the subset of mean and covariance parameters for class $k$, and $r_k$ is the mixture weight for class $k$. Substituting equation (38) into equation (36), we obtain the following expression for the Einsteinian log-likelihood:

$$LL_E(\Lambda) = \sum_{n=1}^{N} p_0(\mathbf{X}_n)\,\ln \sum_{k=1}^{K} r_k\; p(\mathbf{X}_n|\Lambda_k) \tag{39}$$


For convenience we assume that the data $p_0(\mathbf{X}_n)$ are normalized, i.e.:

$$\sum_{n=1}^{N} p_0(\mathbf{X}_n) = 1 \tag{40}$$

Therefore, to ensure that also

$$\sum_{n=1}^{N} p(\mathbf{X}_n|\Lambda) = 1 \tag{41}$$

so that the data and the model have the same total energy, the $r_k$ must sum to one, i.e.:

$$\sum_{k=1}^{K} r_k = 1 \tag{42}$$

Our goal is to compute the maximum entropy (ME) parameters $\Lambda$ which maximize $LL_E$ given by equation (39), subject to the constraint in equation (42). The mathematical derivation of the ME parameter estimates is almost identical to the mathematics governing the ML estimates. The result is that the ME parameters exactly satisfy the equations:

$$\mathbf{M}_k = \frac{\sum_{n=1}^{N} P(k|n)\,p_0(\mathbf{X}_n)\,\mathbf{X}_n}{\sum_{n=1}^{N} P(k|n)\,p_0(\mathbf{X}_n)} \tag{43}$$

$$\Sigma_k = \frac{\sum_{n=1}^{N} P(k|n)\,p_0(\mathbf{X}_n)\,(\mathbf{X}_n-\mathbf{M}_k)(\mathbf{X}_n-\mathbf{M}_k)^T}{\sum_{n=1}^{N} P(k|n)\,p_0(\mathbf{X}_n)} \tag{44}$$

$$r_k = \sum_{n=1}^{N} P(k|n)\,p_0(\mathbf{X}_n) \tag{45}$$

$$P(k|n) = \frac{r_k\; p(\mathbf{X}_n|\Lambda_k)}{\sum_{j=1}^{K} r_j\; p(\mathbf{X}_n|\Lambda_j)} \tag{46}$$

As described in Section 4, the parameter estimation equations above are actually a set of coupled nonlinear equations, owing to the implicit dependence of $P(k|n)$ on the unknown parameters. However, we can compute the ME parameter estimates using a convergent, iterative process identical to the process described in Section 4 for the ML parameter estimates. In this process we alternate, in each iteration, between updating the probabilities $P(k|n)$ and the parameters $\{\mathbf{M}_k, \Sigma_k, r_k\}$, as in the sketch below.
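A sketch of one ME iteration, i.e. Eqs. (43)-(46) with pixel weights $p_0$, is given here; it differs from the ML update only in the weighting (array shapes and names are assumptions):

```python
# One ME iteration, Eqs. (43)-(46): identical to the ML updates except that
# each pixel n is weighted by its normalized value p0[n]. X has shape (N, d)
# (pixel coordinates); p0 has shape (N,) and sums to one, per Eq. (40).
import numpy as np
from scipy.stats import multivariate_normal

def me_step(X, p0, M, S, r):
    K = len(r)
    p = np.stack([r[k] * multivariate_normal.pdf(X, M[k], S[k])
                  for k in range(K)], axis=1)
    P = p / p.sum(axis=1, keepdims=True)         # Eq. (46)
    w = P * p0[:, None]                          # pixel-weighted associations
    Wk = w.sum(axis=0)
    M_new = (w.T @ X) / Wk[:, None]              # Eq. (43)
    S_new = np.empty_like(S)
    for k in range(K):
        D = X - M_new[k]
        S_new[k] = (w[:, k, None] * D).T @ D / Wk[k]   # Eq. (44)
    r_new = Wk                                   # Eq. (45); sums to one by (40)
    return M_new, S_new, r_new
```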


Applying the maximum entropy estimation method to the problem of detecting the EM waves scattered by a Langmuir soliton hidden in a noise pattern (see Fig. 4), we obtain the result presented in Fig. 7.

Fig. 7. (upper left) The combined measured spectra of several soliton waves are completely obscured by measurement noise. (upper right) The DL model is initialized with random model parameters. (lower left) After a few iterations the DL model begins to learn the structure of the data, and model components begin to adapt to fit the soliton spectra. (lower right) Upon convergence, the DL model is able to adaptively segment the data into background noise vs. multiple soliton spectra, despite the low signal-to-clutter ratio.

6. References

[1] Perlovsky, L.I. (2001). Neural Networks and Intellect: Using Model-Based Concepts. Oxford University Press, New York, NY (3rd printing).
[2] Rudakov, L.I., Langmuir soliton, DAN Sov. Acad. Sci., vol. 207, pp. 821-823, 1972.
[3] Mendonca, J.T., Scattering of waves by Langmuir soliton, J. Plasma Physics, vol. 30, part 1, pp. 65-73, 1983.
[4] Sotnikov, V., Leboeuf, J.N., and Mudaliar, S., Scattering of electromagnetic waves in the presence of wave turbulence excited by a flow with velocity shear, IEEE Trans. Plasma Sci., August 2010.
[5] MacKay, D.J.C., Information Theory, Inference and Learning Algorithms, Cambridge University Press, Cambridge, U.K., 2003.
[6] Deming, R.W., Parameter estimation for mixtures, August 2005.
[7] Moon, T.K., The Expectation-Maximization Algorithm, IEEE Signal Processing Magazine, November 1996, pp. 47-60.
[8] Duda, R.O. and Hart, P.E., Pattern Classification and Scene Analysis, section 6.4, New York: Wiley, 1973.
[9] Dempster, A.P., Laird, N.M., and Rubin, D.B., "Maximum likelihood from incomplete data via the EM algorithm", Journal of the Royal Statistical Society, Series B, vol. 39, pp. 1-38, 1977.
[10] McLachlan, G.J., and Krishnan, T., The EM Algorithm and Extensions, New York: Wiley, 1997.
[11] Redner, R.A., and Walker, H.F., "Mixture densities, maximum likelihood and the EM algorithm", SIAM Review, vol. 26, pp. 195-239, 1984.