ADVANCED TECHNIQUES FOR DIGITAL IMAGE PROCESSING

by

JAW-HORNG TARNG

A THESIS

IN

ELECTRICAL ENGINEERING

Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for

the Degree of

MASTER OF SCIENCE

IN

ELECTRICAL ENGINEERING

Approved

May, 1986


ACKNOWLEDGEMENTS

I wish to express my deepest gratitude to Dr. Thomas F. Krile and

Dr. John F. Walkup for their assistance, guidance, encouragement, and

friendship throughout my graduate career at Texas Tech University. I

also wish to thank Dr. Ren-Jeng Su for his support and encouragement

during my first year in the Department of Electrical Engineering, and

Dr. Ichiro Suzuki for being on my thesis committee. Finally, a special

thanks to my wife Huey-Wen for her patience and unending support.


ABSTRACT

A new algorithm for enhancing a degraded grey scale image is proposed here. The enhancement algorithm is a locally adaptive Fourier filter which locates and analyzes the Fourier spectral information and then enhances the identifying features. Thus, it can achieve a better enhancement result than conventional homomorphic FFT techniques. By using a short space basis implementation, a large amount of memory space can be saved; consequently, the computation speed is greatly improved.

The primary objective of this algorithm is to extract linear features from a noisy image. However, the algorithm can also be modified to enhance other kinds of features.

The main advantages of this algorithm are:

1. It requires a small amount of computer memory; this makes it

easy to implement in small computers.

2. It has fast processing speed.

3. It is powerful in extracting local linear features.


CONTENTS

ACKNOWLEDGEMENTS ii

ABSTRACT iii

LIST OF FIGURES vi

CHAPTER

I. INTRODUCTION 1

Overview of the Problem 2

The Image Processing System 4

Outline of the Thesis 6

II. TECHNICAL REVIEW 8

Introduction 8

The Model of Film Grain Noise Degradation 8

Restoration Techniques 12

A Posteriori-Determined Degradation Parameters 15

Measurement of the Optical Transfer Function 16

Measurement of Noise 16

Experimental Results 18

III. FOURIER SHORT SPACE ADAPTIVE FILTERING ALGORITHM 21

Introduction 21

Theory of the Algorithm 22

Selection of Subimages 22

Short Space Window Function 24

Adaptive Filtering Algorithm 25

Reconstructing the Image from Subimages 30


IV. APPLICATIONS AND RESULTS 31

Glass Crack Image Enhancement 31

Fingerprint Enhancement 35

V. CONCLUSIONS 43

REFERENCES 45

APPENDIX 46

LIST OF FIGURES

1.1 The Experimental System 3

1.2 The Image Processing System 5

2.1 A Model for an Image Recording System 10

2.2 Simplified Model for an Image Recording System 11

2.3 An Assumed Model for an Imaging System 13

2.4 Line Spread Function and Optical Transfer Function 17

2.5 The Enhancement by Constrained Least-Squares Filter 19

3.1 Flow-Chart of Fourier Short Space Adaptive Filtering Algorithm 23

3.2 Hamming Function 26

3.3 Wedge Filter 28

3.4 Flow-Chart of Adaptive Wedge Filtering Algorithm 29

4.1 Glass Crack Image 32

4.2 Glass Crack Detailed Image 36

4.3 Slightly Degraded Fingerprint 37

4.4 Severely Smudged Fingerprint 39

4.5 Degraded Fingerprint 40

4.6 Artifacts Using Wedge Filters 42


CHAPTER I

INTRODUCTION

During recent years, digital image processing techniques have

become popular in a wide variety of fields such as engineering, computer

science, information science, and medicine. The results of research have

established the value of image processing techniques in a variety of

problems ranging from restoration and enhancement of space-probe pic­

tures to processing of fingerprints for commercial purposes.

Digital image processing can be divided into three major categories: coding, image analysis, and restoration-enhancement. In the

coding category, the computer manipulates the image into a form which

eliminates redundancy, thus minimizing storage and communication re­

quirements. Consequently efficient representation of the image is the

objective of image coding. The objective of image analysis is to pro­

vide a quantitative measurement of certain aspects of an image, such as

texture analysis, object detection, crop outlining and recognition,

photo-interpretation, or X-ray medical diagnosis. Digital image resto­

ration-enhancement has as its objective the removal of undesired effects

caused by such imperfect imaging situations as defocus, motion blur,

limited dynamic range and the presence of noise, and therefore the re­

sult is more suitable than the original image for a specific applica­

tion. This thesis concentrates on the last category.

Enhancement techniques can be divided into two broad categories:

frequency-domain methods and spatial-domain methods. Processing tech­

niques in the first category are based on modifying the Fourier spec­

trum of an image, and those in the second are based on direct manipula­

tion of the pixels in an image. In most cases, we combine these two methods to achieve the best results.

Overview of the Problem

The objective of this image enhancement research is to find an

algorithm which enhances linear features from an image that has been

severely degraded by film grain noise. The first images used were taken

from an experiment conducted by the Department of Civil Engineering,

Texas Tech University, researching the behavior of window glass under

the pressure produced by a tornado. The experimental system is shown in

Figure 1.1. A window glass specimen was secured on an airtight chamber; compressed air was then blown in to break the glass.

A 16 mm high speed motion picture camera was used to record the

progress of the glass breaking. The pictures taken were poorly suited for observing the pattern of the glass cracks, for the following reasons:

1. The combined resolving power of both the film and the camera

lens was not high enough to give a clear and noise-free

picture.

2. The size of the film used was small (16 mm X 10 mm), and it contained the image of the whole glass plate. This caused a very severe degradation of the image when an enlargement with a large magnification was made [1].

Figure 1.1. The Experimental System.

Although the quality of the picture can be improved by improving

those conditions mentioned above, this is not the issue here. The main

objective of our research is to recover the image of the cracks of the

breaking glass from the noisy background. Several standard restoration

algorithms were tried, for example, edge detection operators, Wiener

filter and homomorphic filter, but all of these methods resulted in a

little or no improvement in our degraded pictures.

The algorithm described in this thesis turned out to be very useful in this case. It greatly enhanced the severely degraded images such that the glass cracks were much more visible than before the enhancement.

Finally, we applied the same algorithm to another area, namely enhance­

ment of degraded fingerprints. It greatly improved the quality of this

class of images as well.

The Image Processing System

The Texas Tech University image processing facility consists of the

following: A DEC VAX 11/780 mini-computer, a COMTAL VISION ONE/20 image

processing computer, a video camera, an RGB color monitor, an X-Y plot­

ter, and a color hard-copy recorder. The system configuration is shown

in Figure 1.2. The functions of this equipment are described below:

The VAX 11/780 was employed as the host computer; it saves the images displayed on the COMTAL, stores them on either disk or tape, and transfers them back when needed. It also performs the numerical computations in the enhancement algorithm. The B/W video camera (either vidicon type or CCD type) is used as a grey level area scanner.

Figure 1.2. The Image Processing System.

The B/W video signal is then digitized into 256 grey levels by COMTAL's digitizer. The plotter and the color hard-copy unit are the devices used

to obtain hardcopies of the image.

There are many functions that the COMTAL can perform; the functions most often used in the processing are:

1. Digitize a B/W image into 8-bit resolution (i.e., 256 grey

levels).

2. Display the image stored in the VAX.

3. Compute the histogram of the image, and/or manipulate the

histogram interactively.

4. Pseudocolor the monochromatic image.

5. Trace the image grey level pixel by pixel or line by line.

The main advantage of using the COMTAL to do the processing is

speed. It works on most of the jobs at video rates (i.e., it processes

30 pictures per second). It can also do most of the jobs interactively. That means the users can adjust the parameters in real time, and in this way it is very easy to achieve the best processing outcome.

Outline of The Thesis

This thesis is divided into five chapters. Chapter I briefly introduces some basic concepts of digital image processing; it also reviews some of the problems we encountered. In chapter II, some image enhancement techniques will be reviewed and applied to our problem. The

theory of the new image processing algorithm will be described in chap­

ter III. Then in chapter IV, the algorithm is applied to the processing


of several images and the results are discussed. Finally, chapter V

concludes with a discussion of this algorithm and suggestions for some

further extensions of it.

CHAPTER II

TECHNICAL REVIEW

Introduction

In this chapter several models for the image degradation system are

presented. Also some image restoration techniques will be reviewed for

possible application to our problem, i.e., enhancing the crack pattern

in a degraded image of fractured glass.

Numerous studies have shown several possible ways to enhance lines

or edges in images, either in the spatial domain (i.e., template opera­

tions [2,3,4,5,6]), or in the domain of various transformations (i.e.,

Hough transformation [7], Fourier transformation [8], Walsh-Hadamard

transformation [9]). Wiener filters, constrained and unconstrained

[10], are other ways to restore images from signal-independent noise.

In the signal-dependent noise case, there are also some approaches

available [11,12]. Despite the large number of algorithms, it was difficult to adapt those methods to solve our problem directly. In most cases, the

results were not satisfactory, as we will show in this chapter.

The Model of Film Grain Noise Degradation

As briefly reviewed in the first chapter, the problem we deal with

is the degradation caused by film grain noise and optical system blur

functions. The formation of an image on photographic film is a highly

complex optical and chemical process. Modeling this process with a high

degree of accuracy, if at all possible, often results in models that are

too complex and unsuitable for use in any subsequent mathematical

processing. On the other hand, oversimplification of the model makes

the subsequent restoration technique very sub-optimal.

A very complete degradation model from Naderi [1] is shown in

Figure 2.1. In this figure, the intensity of the light reflected from

the object is incident on the image forming device such as a camera.

The first of four blur functions in this model represents atmospheric

degradations, such as turbulence, as well as limiting effects of the

imaging system such as diffraction and aberrations.

The second linear blur function models optical diffusion effects,

such as scattering and halation, during the formation of the latent

image. The middle segment of this model represents the nonlinear char­

acteristic function, or H-D curve of the film, as well as the nonlinear

signal dependent film grain noise.

The third linear blur function accounts for degradations such as

adjacency effects which arise during the development of the film. Fi­

nally, in the digitization process, the aperture of the scanner deter­

mines the fourth linear blur function. The output of this model is the

observed digitized image. This model is very good for describing the

imaging system but is too complicated for us to use.

After combining the linear blur functions, and assuming operation

on a linear segment of the H-D curve, the model of Figure 2.1 can be

further simplified into the form shown in Figure 2.2. This is a simpli­

fied and satisfactory model to use when doing image restoration.

Figure 2.1. A Model for an Image Recording System.

Figure 2.2. Simplified Model for an Image Recording System.

For the classical image restoration techniques that were attempted

first, namely the class of Wiener filters, an even simpler degradation

process is assumed as shown in Figure 2.3. Here, the noise is assumed

to be additive and signal independent. Also, the blur functions are

assumed to be linear and space-invariant and they are merged to become a

single, overall blur function. The general equation obtained from this

model is g = Hf + n where g is the degraded image, H is the blur func­

tion, f is the ideal image and n is the noise. As one might expect,

results based on the model of Figure 2.3 are not as good as those based

on more realistic imaging models.
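As an illustration of this simplified model, the degradation g = Hf + n can be sketched in a few lines of modern NumPy. This sketch is not part of the thesis, whose software is the VAX FORTRAN listing in the Appendix; the circular convolution and the Gaussian noise statistics are assumptions made for illustration only:

import numpy as np

def degrade(f, psf, noise_var, seed=0):
    # Simulate g = Hf + n: blur the ideal image f with the point spread
    # function (stored with its peak at index (0, 0)) and add zero-mean,
    # signal-independent Gaussian noise of the given variance.
    H = np.fft.fft2(psf, s=f.shape)                  # overall blur function H(u,v)
    blurred = np.real(np.fft.ifft2(H * np.fft.fft2(f)))
    noise = np.random.default_rng(seed).normal(0.0, np.sqrt(noise_var), f.shape)
    return blurred + noise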

Restoration Techniques

Not all the image enhancement techniques discussed at the beginning

of this chapter can be used to enhance or restore the degraded image.

In general, the edge extracting algorithms provided no improvement in

the severely degraded image. The Least-Squares Filter (Wiener filter)

is a better choice for restoring a noisy image. The disadvantage of

this filter is that the statistical parameters of the noise and signal

have to be known prior to the process along with the system blur func­

tion. Usually, they are unknown, and difficult to estimate.

Another alternative technique is to use a variation of the Wiener

filter, i.e., a Constrained Least-Squares Filter. This filter does not

require explicit knowledge of statistical parameters other than an

estimate of the noise mean and variance. The filter function can be

expressed as the following equation [10]:

Figure 2.3. An Assumed Model for an Imaging System.

F(u,v) = [ H*(u,v) / ( |H(u,v)|^2 + r |Q(u,v)|^2 ) ] G(u,v)                (2-1)

for u,v = 0, 1, 2, ..., N-1

Here F(u,v) represents the Fourier transform of the restored image;

G(u,v) is the Fourier transform of the degraded image; H(u,v) is the

optical transfer function (OTF) of the blur system; r is a constant;

and Q(u,v) is the constraint operator.

The generality of the linear operator [Q] acting on the object f

allows the development of a variety of constraints. For instance,

1. [Q] = [I]. In this case a minimum norm solution for f, subject to the noise norm equality constraint ||g - Hf||^2 = ||n||^2, is sought. This leads to the pseudo-inverse filter.

2. [Q] = [ Finite Difference Matrix ]. Here the constraint opera­

tor may be chosen to minimize either second difference or fourth

difference energy of the estimated object. Such operator con­

straints guarantee that the object estimate f does not oscillate

wildly in the constrained solution since higher-order differ­

ences are minimized.

3. [Q] = [ Eye model ]. One may desire that the restoration be

appealing to a human from a perceptual viewpoint. In this case,

[Q] is a block circulant matrix whose properties in the Fourier

domain match the spatial frequency response of the psychophysics

of the human visual system.
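As a concrete illustration of the second-difference choice of [Q], the constraint operator can be represented by a small Laplacian kernel and carried into the Fourier domain. The following NumPy fragment is only a sketch of that idea; the 3 X 3 kernel and the zero-padding are assumptions, not the thesis code:

import numpy as np

# Second-difference (discrete Laplacian) constraint kernel.
laplacian = np.array([[0.0,  1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0,  1.0, 0.0]])

def constraint_spectrum(shape, kernel=laplacian):
    # Q(u,v): the DFT of the constraint kernel, zero-padded to the image size.
    return np.fft.fft2(kernel, s=shape)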

The multiplier constant r must be adjusted such that the constraint ||g - Hf||^2 = ||n||^2 is satisfied. This is often done in an iterative fashion. The norm of the noise ||n||^2 may be known or measurable a posteriori from the image. We will discuss this in the following section.

The constrained least-squares restoration procedure can be summarized as follows:

Step 1. Choose an initial value of r, and obtain an estimate of ||n||^2 and H(u,v).

Step 2. Compute F(u,v) using equation (2-1). Obtain f by taking the inverse Fourier transform of F(u,v).

Step 3. Compute the constraint ||g - Hf||^2.

Step 4. Determine the accuracy e to which the constraint is to be satisfied. Stop the estimation procedure, with f for the present value of r being the restored image, if ||g - Hf||^2 - ||n||^2 is within e of zero. Otherwise, go to Step 5.

Step 5. Increment or decrement r according to the value of the constraint.

Step 6. Return to Step 2 and continue until the test of Step 4 is satisfied.
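The whole procedure can be sketched compactly in NumPy. This is an illustrative reading of Steps 1-6 rather than the thesis implementation, and the starting value of r, its update factor, and the stopping tolerance are assumed values:

import numpy as np

def cls_restore(g, psf, q_kernel, noise_norm_sq, r=1e-3, accuracy=0.01, max_iter=50):
    # Constrained least-squares restoration following equation (2-1).
    # g: degraded image; psf: blur PSF (peak at index (0, 0));
    # q_kernel: constraint operator kernel; noise_norm_sq: estimate of ||n||^2.
    G = np.fft.fft2(g)
    H = np.fft.fft2(psf, s=g.shape)
    Q = np.fft.fft2(q_kernel, s=g.shape)
    for _ in range(max_iter):
        F = np.conj(H) * G / (np.abs(H) ** 2 + r * np.abs(Q) ** 2 + 1e-12)  # Eq. (2-1)
        residual = np.sum(np.abs(G - H * F) ** 2) / g.size    # ||g - Hf||^2 by Parseval
        if abs(residual - noise_norm_sq) <= accuracy * noise_norm_sq:
            break                                              # constraint satisfied (Step 4)
        r = r / 1.5 if residual > noise_norm_sq else r * 1.5   # adjust r (Step 5)
    return np.real(np.fft.ifft2(F))

The residual grows with r, so in this sketch r is decreased when the residual exceeds ||n||^2 and increased otherwise.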

A Posteriori-Determined Degradation Parameters

The filters discussed in the previous section need some prior

knowledge about the ideal image and the imaging system, but this is not

always possible to obtain. A posteriori knowledge is important in

determining the parameters required by such filters. For example, the

point spread function of the imaging system can be determined from edges


or points in the degraded image that are known to exist in the ideal

image. Another example is the estimation of the noise variance and

noise power spectrum obtained from relatively smooth regions in the

degraded image.

Measurement of the Optical Transfer Function

The optical transfer function (OTF) is the Fourier transform of the

point spread function (PSF). In a space invariant system, an appropri­

ate edge (known to be an edge in the ideal image, a priori) is found in

the degraded image and a one-dimensional LSF (Line Spread Function) is

calculated. Then by assumption of circular symmetry the rotation of the

one-dimensional LSF becomes an estimate of the two-dimensional PSF.

Figure 2.4 shows an example of determining the OTF. In order to

minimize the significant amount of noise present in the image, ten or

twenty line scans across an individual edge in the image were averaged

and the resultant averaged edge was smoothed (i.e., low-pass filtered).

Next, the line spread function was obtained by numerically differentiat­

ing the edge. Finally, the estimate of the system transfer function was

taken as the modulus of the Fourier transform of the line spread

function.
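For illustration, this edge-based measurement can be sketched in NumPy as follows. This is not the thesis code; the length of the smoothing kernel and the unity normalization at zero frequency are assumptions:

import numpy as np

def estimate_otf(edge_scans, smooth_len=5):
    # edge_scans: 2-D array with one line scan across the same edge per row.
    edge = edge_scans.mean(axis=0)                     # average scans to reduce noise
    kernel = np.ones(smooth_len) / smooth_len
    edge = np.convolve(edge, kernel, mode="same")      # low-pass smooth the averaged edge
    lsf = np.gradient(edge)                            # numerically differentiate: LSF
    otf = np.abs(np.fft.fft(lsf))                      # modulus of the Fourier transform
    return otf / otf[0]                                # normalized OTF estimate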

Measurement of Noise

It would be desirable to be able to evaluate an imaging system such

that the noise parameters associated with the data gathering process

were well-defined. However, there are a variety of models and mechan­

isms by which errors can enter the image acquisition process and the

Figure 2.4. Line Spread Function and Optical Transfer Function.

model assumed for such noise sources may dramatically affect the res­

toration process. Consequently, it would be desirable to develop the

noise model and even parameterize that model by a posteriori techniques

associated with the image at hand.

In noise estimation, the noise parameters can be calculated from

regions which have relatively unchanging object content in the degraded

image. For example, in measuring the noise variance of the picture

shown in Figure 2.5(a), first choose some small regions in the lower

right corner or in the lower left corner, where no significant cracks

appear. Then the variances of those regions are computed and the aver­

aged values are taken as the estimated noise variance. The noise is

assumed to have zero mean. The values of the measure are shown in Table

2.1.

Table 2.1. The Noise Parameters of Figure 2.5(a).

No.   Location              Size      Mean     Variance
1.    lower left corner     32 X 32   106.4    68.2
2.    lower left corner     32 X 32   104.1    75.7
3.    lower right corner    32 X 32   140.2    63.5
4.    lower right corner    32 X 32   143.1    59.3

Averaged Variance: 66.7
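The measurement of Table 2.1 amounts to a few lines of code. A minimal NumPy sketch (the block coordinates in the comment are placeholders, not the actual locations used for Figure 2.5(a)) is:

import numpy as np

def estimate_noise_variance(image, corners, block=32):
    # Average the grey-level variance over featureless blocks of the degraded image.
    # corners: list of (row, col) upper-left coordinates of the chosen regions,
    # e.g., four 32 X 32 blocks in the lower corners of a 256 X 256 image.
    variances = [np.var(image[r:r + block, c:c + block].astype(float))
                 for (r, c) in corners]
    return float(np.mean(variances))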

Experimental Results

We applied the Constrained Least-Squares Filter in an attempt to

enhance the degraded image of fractured glass. The original image is

Figure 2.5. The Enhancement by Constrained Least-Squares Filter. (a) The Original Image; (b) The Enhanced Image.

shown in Figure 2.5(a). After analyzing the image, we decided to use

the second difference matrix as the constraint operator since it can

smooth the noise. The OTF and the noise variance were measured by using

the techniques described above.

Figure 2.5(b) is the restored image. The noise is smoothed compared to the original image. Also, the cracks in the image are slightly sharper than before. Though the picture was improved, we were not satisfied since the pattern of the cracks is still not easily recognized.

Another approach to this problem is presented in the following chapters.

CHAPTER III

FOURIER SHORT SPACE ADAPTIVE FILTERING ALGORITHM

Introduction

It is easier to enhance an image if we know what kind of features

are present. A line which is not very curved can be treated as a

straight line in a block of the image if the size of the block is small

enough. This approach would help us in designing a special filter for

extracting those lines from a noisy background.

The algorithm described here has been designed to provide flexi­

bility and adaptive modification in filtering of the Fourier spectrum of

local image sections. The technique is that small sections of the image are Fourier transformed; each transform is then modified via a filter

operation and then inverse transformed to reconstruct the spatial domain

enhanced image. The section size used is a tradeoff between the need to

adapt to locally varying conditions and, on the other hand, to ensure

that the regions are big enough to represent features of interest in the

spectral domain. The basic problem to overcome in the use of block

spectral filters is to accommodate edge effects at the borders of

adjacent blocks which are filtered. Consequently, a windowing and

overlapping strategy must be employed.


Theory of The Algorithm

Figure 3.1 shows the algorithm operation. A small image section

has its average intensity value subtracted. Then, a Hamming window is

applied in order to reduce the edge leakage effect, and the Fourier

transform is computed. Several preprocessing filters like high-pass,

low-pass, or band-pass can be added at this time. This would suppress

regional contrast variations in the scene and ultimately lead to a

locally uniform contrast reconstruction, at the same time reducing noise

and enhancing the frequencies we want.

A special adaptive filtering algorithm is introduced. It allows

only certain kinds of signals to pass through and rejects most of the

unwanted noise. Then, the D.C. term of the spectrum is replaced by a

fixed value. This sets the restored image to a proper grey level dis­

tribution in the spatial domain. The image is reconstructed from the

subimage right after the inverse transformation takes place. These

procedures are repeated until all blocks are processed. Finally, con­

trast enhancement, either contrast stretch or histogram equalization, is

used to provide the resulting processed image.

Selection of Subimages

The subimage size is chosen based on two considerations. If the size is too large, one encounters the problem of nonstationarity. In addition, it is improper to select too large a window because we want to

use a piecewise linear approach to curved lines in the enhancement

algorithm. The approach only holds if the line in the block is not very


Input Image

Select Sub-image

Remove Average Level

Hamming Window

Fast Fourier Transform

Preprocessing Filters (High Pass, Band Pass, Low Pass)

Adaptive Wedge Filter

Replace Fixed DC

Inverse Fast Fourier Transform

Weighting Window

Store the Results

Contrast Enhancement

Output Image

Figure 3.1. Flow-Chart of Fourier Short Space Adaptive Filtering Algorithm.


curved. That means the curvature of the features determines the size of

the subimage. On the other hand, if the size of the subimage is too

small, the ability of the system to reduce noise may be diminished due

to the lack of available data. Usually, for a 256 X 256 image, the size

of 32 X 32 pixels for the subimage is a good selection.

Besides the selection of the size of the block, another factor

affecting the performance is the overlap in sampling. As discussed above, overlapping can reduce the discontinuity at the borders

of the subimages. However too much overlapping would slow down the

processing speed, because more subimages need to be processed. We used

50% overlap sampling for each subimage as it was easy to program.

Short Space Window Function

To successfully implement the image restoration algorithm on a short space basis, the window function W(n1,n2) must be carefully chosen. For example, to construct an image from its subimages, W(n1,n2) has to satisfy the following equation:

Σ_{l1} Σ_{l2} W(n1 - l1·S, n2 - l2·S) = 1    for all n1, n2 of interest,        (3-1)

where the sum runs over all subimage positions (l1, l2) and S is the shift between adjacent subimages.

In addition, W(n1,n2) should be a smooth function to avoid some

possible discontinuities or degradation that may appear at the subimage

boundaries in the processed image. There are several 2-D window func­

tions, like Triangular, Hamming, or Hanning functions, that can be used

in the algorithm. Which one to choose is a matter of preference. Among

these, the most popular one is the Hamming function as shown in the

following equations:

In a 1-D representation:

W(n) = 0.54 - 0.46 cos( 2πn / (N-1) ),    n = 0, 1, ..., N-1        (3-2)

where N = number of points in the window.

In a 2-D representation:

W(n1,n2) = W(n1) · W(n2)        (3-3)

It is straightforward to show that the 2-D Hamming window function

satisfies equation (3-1). Figure 3.2 shows the plots of 1-D and 2-D

Hamming functions in a continuous X,Y space.
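The window actually used in the Appendix program can be reproduced directly; the NumPy sketch below mirrors the HAM and WINDOW computations there (the closing overlap remark is our own illustration, not a statement from the thesis):

import numpy as np

def hamming_2d(n=32):
    # 32-point symmetric Hamming window and its separable 2-D version,
    # mirroring HAM(I) and WINDOW(I,J) in the Appendix listing.
    i = np.arange(1, n // 2 + 1)                            # I = 1, ..., 16
    half = 0.54 - 0.46 * np.cos(np.pi * (i - 1) / (n // 2 - 1))
    ham = np.concatenate([half, half[::-1]])                # HAM(33-I) = HAM(I)
    return np.outer(ham, ham)                               # WINDOW(I,J) = HAM(I)*HAM(J)

# With a 16-pixel shift (50% overlap) the shifted 1-D windows sum to a nearly
# constant value, which is what equation (3-1) requires up to a constant scale.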

Adaptive Filtering Algorithm

We now discuss the function of this algorithm and how to select a

proper filter. The main function of this algorithm is to enhance the

energies of the spectrum of the feature we are interested in, and at the

same time to reduce other spectral components. In our research we need

the feature of straight lines to be enhanced, therefore, we use wedge

filters. The properties of wedge filters will be discussed below.

The wedge filter is a type of filter not often used in most en­

hancement techniques. The shape of the pass band of the filter is a

wedge, hence the filter's name. The wedge filter allows only the spec­

trum inside the wedge shaped area to pass through and it rejects or

attenuates other spectral components. The effect of the wedge in the

spatial domain is that the filtered image would consist only of linear

Figure 3.2. Hamming Function. (a) 1-D Plot; (b) 2-D Plot.

features in the direction perpendicular to the orientation of the wedge in

the frequency domain. The lines in the image are not necessarily per­

fectly parallel to each other, they can have some degree of variation

depending on the bandwidth (i.e., angular extent) of the wedge filter.

The wider the bandwidth of the filter the more variation in directions

the lines can have. Figure 3.3 shows a typical wedge filter. It has an

orientation φ and a bandwidth θ. The shaded area is the band in which

the spectrum can be passed.

The Adaptive Filtering Algorithm is shown in Figure 3.4. First, it

computes the wedge filters to be used such that there is a 50% overlap

between each wedge filter and its neighbor. The number of wedge filters

used depends on the directional resolution required. Then, the filters

chosen for use are applied to the spectrum of the subimage separately,

and the wedge filter order is sorted according to the total amount of

spectral energy contained in each filter. This would determine which

spatial direction contains the most significant linear feature. A

threshold can be added at this point to delete a wedge spectral band

from the sorted list if such a band contains little spectral energy.

The wedge filters are selected following the modified wedge filter list

in decreasing spectral energy order and according to how many directions

we want to enhance. The selected filters are combined together to

become the adaptive wedge filter. Then the filtering coefficients of

the adaptive wedge filter are modified (i.e., attenuating the unwanted

spectral components to a certain value instead of totally filtering them

out) to reduce the artifacts produced by the wedge filter.

Figure 3.3. Wedge Filter.

Input Spectrum of Sub-image

Apply Wedge Filters

Sort Wedge Filters

Combine the Filters Wanted

Threshold Check

Modify the Filter

Filter the Spectrum

Output Spectrum of Sub-image

Figure 3.4. Flow-Chart of Adaptive Wedge Filtering Algorithm.
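A simplified NumPy sketch of the wedge masks and of the energy-based selection is given below. It omits the 50% overlap between neighboring wedges, the rejection of neighbor bands, the D.C. neighborhood handling, and the threshold check of the full VAX FORTRAN version in the Appendix, and the attenuation factor is an illustrative value:

import numpy as np

def wedge_masks(n=32, ndr=18):
    # Binary wedge masks over an n x n centered spectrum. Each wedge spans
    # 180/ndr degrees of orientation; opposite half-planes are folded together,
    # since the spectrum of a real line is symmetric about the origin.
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    angle = np.degrees(np.arctan2(x, y)) % 180.0
    width = 180.0 / ndr
    return [(angle >= k * width) & (angle < (k + 1) * width) for k in range(ndr)]

def adaptive_wedge_filter(spectrum, masks, nsd=2, fact=0.1):
    # Keep the nsd wedge bands carrying the most spectral energy at full weight
    # and attenuate all other components by the factor fact.
    power = np.abs(spectrum) ** 2
    energy = np.array([power[m].sum() for m in masks])
    keep = np.argsort(energy)[::-1][:nsd]
    weight = np.full(spectrum.shape, fact)
    for k in keep:
        weight[masks[k]] = 1.0
    return spectrum * weight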


Reconstructing The Image From Subimages

Once each subimage has been processed, it is placed back into its

location after being operated on by a proper weighting function (this

can reduce the discontinuity effect at the border of each subimage).

Again, the weighting function must be chosen carefully as has already

been discussed. Sometimes the reconstructed image may have low con­

trast. This can be improved by manipulating the histogram of the image

(i.e., histogram equalization). Finally, a contrast enhanced picture is

obtained. In some applications, we may want a binary image consisting

of only black and white grey levels, then a thresholding technique can

be applied.
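Putting the pieces of Figure 3.1 together, one pass of the short space loop can be sketched as follows. This reuses the hamming_2d and adaptive_wedge_filter helpers sketched earlier and is illustrative only: the preprocessing filters, threshold check, and final contrast enhancement are omitted, and the intensity scaling is simplified; the D.C. value follows the Appendix listing:

import numpy as np

def short_space_enhance(image, window, masks, block=32, step=16,
                        nsd=2, fact=0.1, dc_value=131072.0):
    # One pass of the Fourier short space adaptive filtering loop of Figure 3.1.
    out = np.zeros_like(image, dtype=float)
    for r in range(0, image.shape[0] - block + 1, step):
        for c in range(0, image.shape[1] - block + 1, step):
            sub = image[r:r + block, c:c + block].astype(float)
            sub = (sub - sub.mean()) * window                 # remove average, Hamming window
            spec = np.fft.fftshift(np.fft.fft2(sub))          # centered spectrum
            spec = adaptive_wedge_filter(spec, masks, nsd, fact)
            spec[block // 2, block // 2] = dc_value           # replace the D.C. term
            rest = np.abs(np.fft.ifft2(np.fft.ifftshift(spec)))
            out[r:r + block, c:c + block] += rest * window    # weight and overlap-add
    return out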

CHAPTER IV

APPLICATIONS AND RESULTS

In the applications described here all pictures were sampled with

8-bit grey levels and the sample size was 256 X 256. The CPU time

needed for the process is about 3 minutes on a VAX 11/780.

Glass Crack Image Enhancement

As mentioned in chapter I, the primary application of this

algorithm is to enhance the glass crack image taken from the experiment

discussed in chapter I.

A typical picture from the glass crack experiment is shown in

Figure 4.1(a). It is a picture that has low contrast and is very noisy.

Tracing all cracks, which appear as dark lines, directly from the pic­

ture is very difficult. Another defect of the picture is nonuniform illumination; the left portion of the picture is darker than the right

portion. This causes problems when doing contrast enhancement. Figure

4.1(b) shows the Fourier spectra of the subimages of the picture. In

each subimage, the spectral energy is located primarily in two wedge

bands. Therefore we select two wedge filters in the adaptive wedge

filtering algorithm. The spectral energies of two selected bands have

been enhanced as shown in Figure 4.1(c). The image after reconstruction

is shown in Figure 4.1(d). The picture is low contrast and appears

Figure 4.1. Glass Crack Image. (a) Original Picture; (b) Fourier Transform of (a); (c) Spectrum after Adaptive Filter; (d) Reconstructed Image; (e) Histogram of (d); (f) Final Enhanced Picture.

blurred. The histogram of this picture is shown in Figure 4.1(e).

Histogram equalization is then applied to enhance the contrast. The

final enhanced picture is shown in Figure 4.1(f). Figure 4.2 shows the

enhancement result of another glass crack picture. Clearly the cracks

have been enhanced compared to the original picture in both cases.
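The contrast enhancement step is an ordinary histogram equalization; a generic NumPy sketch (not the interactive COMTAL function used in the experiments) is:

import numpy as np

def equalize(image, levels=256):
    # Histogram equalization of an 8-bit (uint8) grey scale image.
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])                 # normalize the CDF to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)       # grey level mapping
    return lut[image]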

Fingerprint Enhancement

In this section we use the same algorithm to enhance degraded

fingerprints.

Figure 4.3(a) shows the original picture of a fingerprint. This

fingerprint is slightly smudged. The Fourier spectra of the subimages

of this image are shown in Figure 4.3(b). Most of the spectral energies

are located within one wedge band as shown in the picture of the spec­

tra. Thus we use only one wedge filter per subimage in the filtering

algorithm. Figure 4.3(c) shows the Fourier spectra of the subimages

after the adaptive wedge filtering. The final enhanced picture of the

fingerprint is shown in Figure 4.3(d). The picture shows better quality

than the original one. Note that the gaps between the ridges have been

filled; this may present a problem in actual fingerprint enhancement.

Figure 4.4 shows the enhancement result of a severely smudged

fingerprint. Figure 4.5 is another example of enhancing a smudged

fingerprint taken from cloth. These examples show that the algorithm

improves the quality of the degraded image significantly.

As discussed in chapter III, improperly selecting the parameters of

the adaptive wedge filter may cause some undesired artifacts in the

Figure 4.2. Glass Crack Detailed Picture. (a) Original Picture; (b) Enhanced Picture.

Figure 4.3. Slightly Degraded Fingerprint. (a) Original Picture; (b) Fourier Transform of (a); (c) Spectrum of (b) after Adaptive Wedge Filter; (d) Final Enhanced Image.

Figure 4.4. Severely Smudged Fingerprint. (a) Original Image; (b) Final Enhanced Image.

Figure 4.5. Degraded Fingerprint. (a) Original Image; (b) Final Enhanced Image.

enhanced image. For instance, some lines which were not in the original

image may be produced as shown in Figure 4.6. These artifacts can be

reduced by selecting a proper threshold in the algorithm. Usually, the

threshold value was found by trial and error.

Figure 4.6. Artifacts Using Wedge Filters. (a) The Enhanced Image of Figure 4.4(a) without Threshold; (b) Same Enhanced Image with Threshold.

CHAPTER V

CONCLUSIONS

The Fourier Short Space Adaptive Filtering Algorithm has been shown

to be an effective tool for detecting the local linear features of an

image, either in a degraded or non-degraded picture. In addition, the

segmentation plus adaptive filtering enable us to handle space-variant

and even signal-dependent noise to some extent, as opposed to the Wiener

filter techniques. Also, the algorithm needs much less memory and is

faster in processing speed than the conventional frequency domain

manipulation techniques. Therefore it can be implemented in any size

computer, especially in microcomputers, where speed and memory space are the most important factors in the implementation of an

image processing algorithm.

Besides the applications in the enhancement of degraded images,

there are more possible applications in other areas. For instance,

after a low-noise, high-contrast image has been obtained by using this

algorithm, numerous techniques of segmentation and thresholding can be

applied, consequently producing a machine recognizable picture, which is

essential in machine vision and pattern recognition applications.

Much work can be done in developing different feature extraction

techniques using adaptive filtering algorithms. For the application of

extracting linear features, the adaptive wedge filtering algorithm

described here is a good choice.

REFERENCES

1. Naderi, F., "Estimation and Detection of Images Degraded by Film Grain Noise," University of Southern California Image Processing Institute Report 690, 1976.

2. Shaw, B. G., "Local and Regional Edge Detectors: Some Comparisons," Computer Graphics and Image Processing, Vol. 9, pp. 135-149, 1979.

3. Suk, M., Hong, S., "An Edge Extraction Technique for Noisy Images," Computer Vision, Graphics, and Image Processing, Vol. 25, pp. 24-45, 1984.

4. Nevatia, R., Babu, K. R., "Linear Feature Extraction and Description," Computer Graphics and Image Processing, Vol. 13, pp. 257-269, 1980.

5. Strickland, R. N., Aly, M. Y., "Image Sharpness Enhancement Using Adaptive 3 X 3 Convolution Masks," Optical Engineering, Vol. 24, No. 4, pp. 683-686, July/August 1985.

6. Peli, T., Malah, D., "A Study of Edge Detection Algorithms," Computer Graphics and Image Processing, Vol. 20, pp. 1-21, 1982.

7. Duda, R. O., Hart, P. E., "Use of the Hough Transformation to Detect Lines and Curves in Pictures," Communications of the ACM, Vol. 15, No. 1, pp. 11-15, Jan. 1972.

8. Brigham, E. O., The Fast Fourier Transform, Prentice-Hall, 1974.

9. Gonzalez, R. C., Wintz, P., Digital Image Processing, Addison-Wesley, 1977.

10. Andrews, H. C., Hunt, B. R., Digital Image Restoration, Prentice-Hall, 1977.

11. Froehlich, G. K., "Estimation in Signal-Dependent Noise," Ph.D. dissertation, Department of Electrical Engineering, Texas Tech University, Lubbock, Texas, 1980.

12. Kasturi, R., "Adaptive Image Restoration in Signal-Dependent Noise," Ph.D. dissertation, Department of Electrical Engineering, Texas Tech University, Lubbock, Texas, 1982.


APPENDIX


^•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••*

C C FOURIER SHORT SPACE ADAPTIVE FILTERING ALGORITHM C C BY JAWHORNG TARNG C ^•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••*

C C * This program enhances a 256 X 256 image by dividing it into C 32 X 32 subimages with 50% overlap, and restoring them C individually. Finally, the image is reconstructed from these C restored subimages. C C * This program was written in VAX FORTRAN C ^•••••••••••••••••••••••••••••••••••••••••••••••^

C C * Variables: C ADPT — Adaptive filter flag. C BUFA..BUFD -- The dummy variables used for storing the images. C DATA! -- The imaginary part of the spectrum of the subimage. C DATAIN -- The spectrum intensity of the subimage. C DATAR -- The real part of the spectrum of the subimage. C ETH -- Energy threshold flag. C FACT — Wedge filter coefficient modification factor. C FILTER " Preprocessing filter. C HAM -- One-dimensional Hamming function. C IDX -- Index used to compute the wedge filters. C IMAGEA — The original image. C IMAGEB -- The enhanced image. C IMA6EC — The spectrum intensity of the original image. C IMAGED — The spectrum intensity of the enhanced image. C IX,lY -- Subimage location. C NDR -- Directional resolution. C NSD -- No. of selected direction. C OVLP — Overlap flag. C RJB -- The list of wedge bands to be rejected. C RJTNB -- Reject neighborhood flag. C THSCALE -- Threshold scale. C WF — Wedge filters. C WINDOW -- Two-dimensional Hamming function. C r••••*•••*••••••*•**•*•••*••********************************************

CHARACTER *1 R,YES,ADPT,ETH,RJTNB,OVLP CHARACTER *256 BUFA(256),BUFB(256),BUFC(256),BUFD(256) REAL DATAIN(32,32),HAM(32) REAL DATAR(32,32),DATAI(32,32) BYTE IMAGEA(256,256),IMAGEB(256,256),WF(32,32,18) BYTE IMAGED(256,256),IMAGEC(256,256) REAL WIND0W(32,32),FILTER(32,32)

48

EQUIVALENCE (IMAGEA,BUFA),(IMAGEB,BUFB),(IMAGEC,BUFC) EQUIVALENCE (IMAGED,BUFD) INTEGER RJB(18),IDX(0:19) COMMON /A/ WINDOW,FILTER,WF,DC,ADPT,NDR,NSD,IMAGEC,IMAGED,IX,lY COMMON /B/ ETH,RJNB,NRJB,RJB,IDX,THSCALE,OVLP,FACT PARAMETER (YES='Y',PI=3.141593)

C-

1 DC=131072. FORMAT (A) TYPE *,'WANT OVERLAP?' READ 1,0VLP IF (OVLP .NE. YES) THEN NW=8 LW=32

ELSE NW=15 LW=16

ENDIF

DO 1=1,256 DO J=l,256

IMAGEB(I,J)=0 ENDDO

ENDDO

; Set up proper DC value

; set up subimage sampling ; sequence

; Clear image buffer

SET UP THE PARAMETERS c c c

TYPE *,'WANT ADD ADAPTIVE FILTER?' READ 1,ADPT IF (ADPT .EQ. YES) THEN TYPE *,'DIRECTION RESOLUTION: (MAX: 18) READ *,NDR TYPE *,'HOW MANY SELECTED DIRECTIONS:' READ * NSD TYPE *,''ATTENUATION FACTOR:' READ * FACT TYPE *!'WANT ADD THRESHOLD?' READ 1,ETH IF (ETH .EQ. YES) THEN TYPE *,'ENTER THRESHOLD SCALE:' READ *,THSCALE

ENDIF c - -- -C COMPUTE THE WEDGE FILTER FUNCTION c - - -

NR=180/NDR IDX(0)=NDR IDX(NDR+1)=1 DO K=1,NDR

IDX(K)=K

49

UPLT1=NR*(K-1)+NR DWLT1=UPLT1-2*NR UPLT2=UPLT1-180 DWLT2=DWLT1+180 IF (DWLT2 .GE. 180.) DWLT2=DWLT2-360.

C DO 1=1,32

DO J=l,32 IF (I .NE. 17 .OR. J .NE. 17) THEN X=I-17 Y=J-17 DEG=ATAN2D(X,Y) IF (K .EQ. 1) THEN

IF (((DEG .LE. UPLTl) .AND. (DEG .GE. DWLTl)) .OR. & ((DEG .LE. UPLT2) .OR. (DEG .GE. DWLT2))) THEN

WF(I,J,K)=1 ELSE

WF(I,J,K)=0 ENDIF

ELSE IF (((DEG .LE. UPLTl) .AND. (DEG .GE. DWLTl)) .OR.

& ((DEG .LE. UPLT2) .AND. (DEG .GE. DWLT2))) THEN WF(I,J,K)=1

ELSE WF(I,J,K)=0

ENDIF ENDIF

ENDIF ENDDO

ENDDO C

DO 1=16,18 DO J=16,18

WF(I,J,K)=0 ENDDO

ENDDO ENDDO

ENDIF c- — - - " C COMPUTE THE HAMMING WINDOW FUNCTION c - — — -

CK=PI/15. DO 1=1,16

HAM(I)=0.54-0.46*C0S(CK*(I-1)) HAM(33-I)=HAM(I)

ENDDO c -- - - - - -C COMPUTE THE FILTER FUNCTION C-

DO 1=1,32

50

DO J=l,32 FILTER(I,J)=1.0 ; Initialize filter value WINDOW(I,J)=HAM(I)*HAM(J) ; Compute 2-D Hamming Function

ENDDO ENDDO

C CALL FILTERS (32,FILTER,DATAR) ; Compute filter function

C - --C READ IMAGE C

CALL RE256(BUFA) C - -C DO THE MAIN PROCESS C -

DO IC=0,NW-1 ; Sample subimage DO JC=0,NW-1

IX=IC*LW IY=JC*LW

C DO 1=1,32

DO J=l,32 DATAR(I,J)=ZEXT(IMAGEA(I+IX,J+IY)) DATAI(I,J)=0.

ENDDO ENDDO

C DO 1=1,NDR ; I n i t i a l i z e RJB l i s t

RJB(I)=0 ENDDO

C -C CALL FOURIER SHORT SPACE ADAPTIVE FILTER SUBROUTINE C — -- — --

CALL FSSAF(DATAR,DATAI,DATAIN) C -C RECONSTRUCT THE IMAGE FROM SUBIMAGE C -

IF (OVLP .EQ. YES) THEN DO 1=1,32 ; Overlap

DO J=l,32 IM=DATAIN(I,J)*WINDOW(I,J)+ZEXT(IMAGEB(I+IX,J+IY)) IMAGEB(I+IX,J+IY)=LEVEL(IM)

ENDDO ENDDO ELSE DO 1=1,32 ; No overlap

DO J=l,32 IM=DATAIN(I,J) IMAGEB(I+IX,J+IY)=LEVEL(IM) ; Convert to Byte

ENDDO ENDDO

51

ENDIF

ENDDO ENDDO

C TYPE *,'WANT SEND TO COMTAL?' READ 1,R IF ( R .EQ. 'Y') THEN TYPE *,'IMAGE AFTER PROCESS' CALL SEND256(BUFB) ; Send images to COMTAL TYPE *,'SPECTRUM BEFORE PROCESS' CALL SEND256(BUFC) TYPE *,'SPECTRUM AFTER PROCESS' CALL SEND256(BUFD)

ENDIF TYPE *,'D0 YOU WANT TO SAVE THIS IMAGE?' READ 1,R IF (R .EQ. YES) THEN TYPE *,'IMAGE AFTER PROCESS' ; Save images on disk CALL WR256(BUFB) TYPE *,'SPECTRUM BEFORE PROCESS' CALL WR256(BUFC) TYPE *,'SPECTRUM AFTER PROCESS' CALL WR256(BUFD)

ENDIF STOP END

C ( ••••••••••••••••••••••••••••••••••••••••••••••••••••••*****

C C FOURIER SHORT SPACE ADAPTIVE FILTER SUBROUTINE C C * Variables: C DATAF -- Power spectrum of subimage. C IR -- Order of wedge filters C NN -- Dimension of Fourier transform. C THRS — Threshold value. C TOLPW -- Total power. C TOTAL -- Total energy of each wedge band. C TPWPS -- Total energy of combined wedge filter. C WFT -- Combined wedge filter. C

SUBROUTINE FSSBF(DATAR,DATAI,DATAIN) CHARACTER *1 R,YES,ADPT,ETH,RJTNB,OVLP BYTE WF(32,32,18),IMAGEC(256,256),IMAGED(256,256) REAL DATAR(32,32),DATAI(32,32),DATAIN(32,32),T0TAL(18) REAL DATAF(32,32),WIND0W(32,32),FILTER(32,32),WFT(32,32) INTEGER RJB(18),IDX(0:19) COMMON /A/ WINDOW.FILTER,WF,DC,ADPT,NDR,NSD,IMAGEC,IMAGED,IX,IY

52

COMMON /B/ ETH,RJNB,NRJB,RJB,IDX,THSCALE,OVLP,FACT INTEGER NN(2),IR(18) DATA NN /32,32/ PARAMETER (SN=1024.,YES='Y')

C — C SUBTRUCT AVERAGE OF THE SUBIMAGE C - - -

TOL=0. DO 1=1,32

DO J=l,32 TOL=DATAR(I,J)+TOL

ENDDO ENDDO

C AVE=TOL/SN ; Average of subimage

C DO 1=1,32 ; Add Hamming window

DO J=l,32 DATAR(I,J)=(DATAR(I,J)-AVE)*WINDOW(I,J)

ENDDO ENDDO

C - -C FAST FOURIER TRANSFORM C — - --

CALL F0URT(DATAR,DATAI,NN,2,1,0,0,0) C - - - — — C FLIPPING THE COORDINATES TO MATCH FILTER C

CALL FLIP32 (DATAR,DATAI) C — -- -C SPECTRUM INTENSITY BEFORE PROCESSING C- — — -

IF (OVLP .EQ. YES) THEN DO 1=8,24 ; Overlap

DO J=8,24 IM=L0G(1+DATAR(I,J)*DATAR(I,J)+DATAI(I,J)*DATAI(I,J))*10. IMAGEC(IX+I,IY+J)=LEVEL(IM)

ENDDO ENDDO

ELSE DO 1=1,32 ; NO overlap

DO J=l,32 IM=LOG(1+DATAR(I,J)*DATAR(I,J)+DATAI(I,J)*DATAI(I,J))*10. IMAGEC(IX+I,IY+J)=LEVEL(IM)

ENDDO ENDDO

ENDIF c'"' APPLY PREPROCESSING FILTERS c

53

TOLPW=0. DO 1=1,32

DO J=l,32 DATAR(I,J)=DATAR(I,J)*FILTER(I,J) DATAI(I,J)=DATAI(I,J)*FILTER(I,J) WFT(I,J)=0. DATAF(I,J)=(DATAR(I,J)*DATAR(I,J)

& +DATAI(I,J)*DATAI(I,J)) TOLPW=TOLPW+DATAF(I,J)

ENDDO ENDDO THRS=TOLPW*THSCALE

IF (ADPT .EQ. YES) THEN

Initialize WFT

; Compute the threshold

SELECT ADAPTIVE FILTER

DO K=1,NDR TOTAL(K)=0. DO 1=1,32

DO J=l,32 IF (WF(I,J,K) .EQ. 1) THEN TOTAL(K)=TOTAL(K)+DATAF(I,J)

ENDIF ENDDO

ENDDO ENDDO

; Compute total power ; pass for individual ; wedge filter

SORTING THE ORDER

DO 1=1,NDR IR(I)=I

ENDDO

; Initialize the list

DO J=1,NDR-1 DO I=J+1,NDR

IF (TOTAL(IR(J)) IT=IR(I) IR(I)=IR(J) IR(J)=IT

ENDIF ENDDO

ENDDO

; Sorting

LT. TOTALdRd))) THEN

K=l IF (NSD .GE. 2 ) THEN KC=1

DO WHILE (K .LE. NSD .AND. KC .LE. NDR) KI=IR(KC)

54

C

C

IF (RJB(KI) .EQ. 0) THEN DO 1=1,32 ; Combining filters

DO J=l,32 WFT(I,J)=WFT(I,J)+WF(I,J,KI)

ENDDO ENDDO

RJB(IDX(KI+1))=1 ; Update the reject band list RJB(IDX(KI-1))=1 K=K+1

ENDIF

KC=KC+1 ENDDO

ELSE DO 1=1,32 ; Only one band case

DO J=l,32 WFT(I,J)=WF(I,J,IR(1))

ENDDO ENDDO

ENDIF

TPWPS=0. DO 1=1,32

DO J=l,32

MODIFY THE FILTER

IF (WFT(I,J) .GE. 1.) WFT(I,J)=1. TPWPS=TPWPS+DATAF(I

ELSE WFT(I,J)=FACT

ENDIF ENDDO

ENDDO

THEN

,J) ; Compute total power passed

CHECK FOR THRESHOLD

IF ((TPWPS .GE. THRS) .OR. (ETH .NE. YES)) THEN DO 1=1,32 i No threshold check

DO J=l,32 ; or threshold is OK DATAR(I,J)=DATAR(I,J)*WFT(I,J) DATAI(I,J)=DATAI(I,J)*WFT(I,J)

ENDDO ENDDO

ENDIF

55

IF (OVLP .EQ. YES) THEN DO 1=8,24 ; Overlap

DO J=8,24 IM=L0G(1+DATAR(I,J)*DATAR(I,J)+DATAI(I,J)*DATAI(I,J))

& *10. IMAGED(IX+I,IY+J)=LEVEL(IM)

ENDDO ENDDO

ELSE DO 1=1,32 ; No overlap

DO J=l,32 IM=L0G(1+DATAR(I,J)*DATAR(I,J)+DATAI(I,J)*DATAI(I,J))

& *10. IMAGED(IX+I,IY+J)=LEVEL(IM)

ENDDO ENDDO

ENDIF ENDIF

C - -C REPLACE D.C. TERM C - -- -

DATAR(17,17)=DC DATAI(17,17)=0.

C - - - --C FLIPPING COORDINATES BACK C -

CALL FLIP32(DATAR,DATAI) C -C INVERSE FOURIER TRANSFORM C- -

CALL F0URT(DATAR,DATAI,NN,2,0,1,0,0) C — -C CONVERT TO INTENSITY C -

DO 1=1,32 DO J=l,32

DATAIN(I,J)=DATAR(I,J)*DATAR(I,J)+DATAI(I,J)*DATAI(I,J) DATAIN(I,J)=SQRT(DATAIN(I,J))/SN

ENDDO ENDDO

C RETURN END

^^^^^^•••••••••••••••••••••••••••••••••••••••••••••••••••••••*

C C FLIPPING COORDINATES C C

SUBROUTINE FLIP32(DATAR,DATAI) REAL DATAR(32,32),DATAI(32,32)

56

DO J=l,16 DO 1=1,16

DR=DATAR(I+16,J+16) DATAR(I+16,J+16)=DATAR(I,J) DATAR(I,J)=DR DR=DATAR(I+16,J) DATAR(I+16,J)=DATAR(I,J+16) DATAR(I,J+16)=DR DI=DATAI(I+16,J+16) DATAI(I+16,J+16)=DATAI(I,J) DATAI(I,J)=DI DI=DATAI(I+16,J) DATAI(I+16,J)=DATAI(I,J+16) DATAI(I,J+16)=DI

ENDDO ENDDO RETURN END

^•••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••*

C C CLEAN THE ir^GE BUFFER ( 512 X 512) C

SUBROUTINE CLEAN (BUF) CHARACTER * 512 NULL,BUF(512) BYTE BNULL(512) EQUIVALENCE (BNULL,NULL) DATA BNULL /512*0/ DO 1=1,512

BUF(I)=NULL ENDDO RETURN END

C C CONVERT A 256 X 256 IMAGE TO A 512 X 512 IMAGE OR VICE-VERSA C C ID=0 : 256 X 256 - 512 X 512 C ID=1 : 512 X 512 - 256 X 256 C

SUBROUTINE CONVERT (BUFA,BUF,ID) CHARACTER * 128 NULLl,IBUF(4,256) CHARACTER * 512 BUF(512),NULL2,BUFB(256) CHARACTER * 256 BUFA(256) BYTE BNULL1(128),BNULL2(512) EQUIVALENCE (BNULLl,NULLl),(BNULL2,NULL2) EQUIVALENCE (IBUF,BUFB) DATA BNULLl,BNULL2 /128*0, 512*0/ IF (ID .EQ. 0) THEN

DO 1=1,128 BUF(I)=NULL2

57

BUF(I384)=NULL2 ENDDO

C DO 1=129,384

BUF(I)=NULL1//BUFA(I-128)//NULL1 ENDDO

ELSE DO J=l,256

BUFB(J)=BUF(J+128) ENDDO

C DO 1=1,256

BUFA(I)=IBUF(2,I)//IBUF(3,I) ENDDO

ENDIF RETURN END

^••••••••••••••••••••••••••••••••••••••••••••••••••••••••••••*

C C CONVERT IMAGE GRAY LEVEL TO PROPER VALUE C

FUNCTION LEVEL(IMG) IF (IMG .GE. 255) IMG=255 IF (IMG .LE. 0) IMG=0 LEVEL=IMG IF (LEVEL .GE. 128) LEVEL=LEVEL-256 RETURN END

^••••••••••••••••••••••••••••••••••••••••••••••••**

C C SEND A 256 X 256 IMAGE TO COMTAL C

SUBROUTINE SEND256 (BUFA) CHARACTER *256 BUFA(256) CHARACTER *512 BUF(512) BYTE IMAGE(512,512) EQUIVALENCE (BUF,IMAGE) CALL CONVERT (BUFA,BUF,0) CALL SEND (IMAGE) RETURN END

C C FILTERS SUBROUTINE C

SUBROUTINE FILTERS(ND,DATAR,DATAI) DIMENSION DATAR(ND,ND),DATAI(ND,ND) MPT=ND/2+l

10 TYPE *,'SELECT ONE OF THE FOLLOWING FILTERS' TYPE *,'#0 - EXIT'

58

TYPE *,'#1 - IDEAL LOW PASS FILTER' TYPE *,'#2 — IDEAL HIGH PASS FILTER' TYPE *,'#3 — BUTTERWORTH LOW PASS FILTER' TYPE *,'#4 - BUTTERWORTH HIGH PASS FILTER' TYPE *,'#5 — EXPONENTIAL LOW PASS FILTER' TYPE *,'#6 — EXPONENTIAL HIGH PASS FILTER' TYPE *,'#7 — MANIPULATE THE DC TERM' TYPE *,'ENTER FILTER NUMBER:' READ *,FUNC IF (FUNC .EQ. 0) RETURN GO TO (100,200,300,400,500,600,700) FUNC

c C IDEAL LOW PASS FILTER C 100 TYPE *,'THIS FUNCTION WILL PASS THE FFT OF AN IMAGE'

TYPE *,'THROUGH A LOW PASS FILTER' TYPE *,'ENTER CUTOFF RADIUS' READ *,D0 DO J=1,ND

DO 1=1,ND X=(I-MPT)**2 Y=(J-MPT)**2 D=SQRT(X+Y) IF(D.GT.DO) THEN DATAR(I,J)=0. DATAI(I,J)=0.

ENDIF ENDDO

ENDDO GO TO 10

p . ^ . ^ . ^ ^ ^ . j t * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *

C C IDEAL HIGH PASS FILTER C 200 TYPE *,'THIS FUNCTION WILL PASS THE FFT OF AN IMAGE'

TYPE *,'THROUGH A HIGH PASS FILTER' TYPE *,'ENTER CUTOFF RADIUS' READ *,D0 DO J=1,ND

DO 1=1,ND X=(I-MPT)**2 Y=(J-MPT)**2 D=SQRT(X+Y) IF ((D .NE. 0.) .AND. (D .LE. DO)) THEN DATAR(I,J)=0 DATAI(I,J)=0

ENDIF ENDDO

ENDDO

59

GO TO 10 ^••••••••••••••••••••••••••••••••••••••••••••••••••••••••••*

C C BUTTERWORTH LOW PASS FILTER C 300 TYPE *,'THIS FUNCTION PASSES THE FFT OF AN IMAGE'

TYPE *,'THROUGH A NTH ORDER BUTTERWORTH LOW PASS F.' TYPE *,'ENTER CUTOFF RADIUS:' READ *,D0 TYPE *,'ENTER THE ORDER OF THE BUTTERWORTH FILTER:' READ *,N DO J=1,ND

DO 1=1,ND X=(I-MPT)**2 Y=(J-MPT)**2 D=SQRT(X+Y) H=1.0/(1.0+0.414*((D/D0)**(2*N))) DATAR(I,J)=DATAR(I,J)*H DATAI(I,J)=DATAI(I,J)*H

ENDDO ENDDO GO TO 10

C C BUTTERWORTH HIGH PASS FILTER C 400 TYPE *,'THIS FUNCTION PASSES THE FFT OF AN IMAGE'

TYPE *,'THROUGH A NTH ORDER BUTTERWORTH HIGH PASS F.' TYPE *,'ENTER CUTOFF RADIUS:' READ *,D0 TYPE *,'ENTER ORDER OF BUTTERWORTH FILTER:' READ *,N DO J=1,ND

DO 1=1,ND X=(I-MPT)**2 Y=(J-MPT)**2 D=SQRT(X+Y) IF (D .EQ. 0) THEN

H=l. ELSE

H=l.0/(1.0+0.414*((D0/D)**(2*N))) ENDIF DATAI(I,J)=DATAI(I,J)*H DATAR(I,J)=DATAR(I,J)*H

ENDDO ENDDO GO TO 10

c C EXPONENTIAL LOW PASS FILTER

60

C 500 TYPE *,'THIS FUNCTION PASSES THE FFT OF AN IMAGE'

TYPE *,'THROUGH A NTH ORDER EXPONENTIAL LOW PASS FILTER' TYPE *,'ENTER CUTOFF RADIUS:' READ *,D0 TYPE *,'ENTER THE ORDER OF THE FILTER:' READ *,N TYPE *,'ENTER THE OFFSET;' READ *,0S DO J=1,ND

DO 1=1,ND X=(I-MPT)**2 Y=(J-MPT)**2 D=SQRT(X+Y) IF (D .LE. OS) THEN H=l.

ELSE H=EXP(-(((D-OS)/DO)**N))

ENDIF DATAR(I,J)=DATAR(I,J)*H DATAI(I,J)=DATAI(I,J)*H

ENDDO ENDDO GO TO 10

C C EXPONENTIAL HIGH PASS FILTER C 600 TYPE *,'THIS FUNCTION PASSES THE FFT OF AN IMAGE'

TYPE *,'THROUGH A NTH ORDER EXPONENTIAL HIGH PASS FILTER.' TYPE *,'ENTER CUTOFF RADIUS:' READ *,D0 TYPE *,'ENTER ORDER OF THE FILTER:' READ * N TYPE *!'ENTER THE OFFSET:' READ *,0S DO J=1,ND

DO 1=1,ND X=(I-MPT)**2 Y=(J-MPT)**2 D=SQRT(X+Y) IF (D .LE. OS) THEN H=l.

ELSE H=EXP(((D-OS)/DO)**N)

ENDIF DATAI(I,J)=DATAI(I,J)*H DATAR(I,J)=DATAR(I,J)*H

ENDDO ENDDO

61

GO TO 10

c C MANIPULATE THE DC TERM C 700 TYPE *,'ENTER THE MULTIPLY CONSTANT:'

READ * C DATAI(MPT,MPT)=DATAI(MPT,MPT)*C DATAR(MPT,MPT)=DATAR(MPT,MPT)*C GO TO 10 END

^•••••••••••••••••••••••••••••••••••••••••••••••••••*

C C SAVE AN IMAGE FROM COMTAL (VI/20) C C CMGET, TRANCOM, CMREL ARE THE UTILITY SUBROUTINES OF COMTAL C

SUBROUTINE SAVEIMG (IMAGE) TYPE *,'IMAGE NUMBER ON COMTAL: (1/2/3)' READ *,IMAGNUM CALL CMGET CALL TRANCOM (.FALSE.,IMAGNUM,IMAGE,0) CALL CMREL RETURN END

C C SEND AN IMAGE TO COMTAL (Vl/20) C

SUBROUTINE SEND (IMAGE) TYPE *,'IMAGE NUMBER ON COMTAL: (1/2/3)' READ *,IMAGNUM CALL CMGET CALL TRANCOM ( .TRUE. , IMAGNUM, IMAGE,0) CALL CMREL RETURN END

PERMISSION TO COPY

In presenting this thesis in partial fulfillment of the

requirements for a master's degree at Texas Tech University, I agree

that the Library and my major department shall make it freely avail­

able for research purposes. Permission to copy this thesis for

scholarly purposes may be granted by the Director of the Library or

my major professor. It is understood that any copying or publication

of this thesis for financial gain shall not be allowed without my

further written permission and that any user may be liable for copy­

right infringement.

Disagree (Permission not granted) Agree (Permission granted)
