PhD. Thesis: Enhancement of Color Images Captured at Different Lightening Conditions


Ministry of Higher Education and Scientific Research Al-Mustanseriyah University College of Education

Enhancement of Color Images Captured at Different Lightening

Conditions

A Thesis Submitted to the Council of the Physics Department, College of

Education, Al-Mustansiriyah University in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy in Physics

By

Hazim Gati' Dway Al-Khuzai

(B.Sc.2001) (M.Sc.2004)

Supervised by

2011 A.D. 1432 A.H.

Assist. Prof. Dr. Radhi Sh. Hamoudi Al-Taweel

Assist. Prof. Dr. Ali A. Al-Zuky


DEDICATION

This thesis is dedicated to the soul of my mother, with mercy. It is also dedicated to my father, my wife, and my brothers and sisters, who believe in the richness of learning.

Hazim

Acknowledgments

First, praise be to Allah, Full of Majesty, for giving me the health and support to complete this research.

I would like to express my deep thanks and gratitude to my supervisors, Dr. Ali A. Al-Zuky and Dr. Radhi Sh. Hamoudi Al-Taweel, for suggesting the topic of this research and for their assistance throughout the course of this work.

I am grateful to the staff members of the Physics Department in the College of Education, Al-Mustansiriyah University.

I would like to express my deep gratitude to my family.

My thanks go to my postgraduate colleagues and all friends in the Physics Department.

Hazim


Abstract

Images with good lightness and contrast are a strong requirement in several application areas, such as machine vision, remote sensing, and medical and aerial image enhancement, and analyzing such images by measuring their quality is very important for these applications. In this study we first suggest a new criterion, called the Quality Factor (QF), to determine the quality of a color image based on changes in lightness and contrast, where the change in lightness and contrast was produced by controlling the illuminance levels under which the images were captured. We used an LED lighting system to generate illuminance graded from low to moderate levels and an MH lighting system to generate illuminance graded from moderate to high levels, and we also study the distribution of illuminance in the two systems. Six groups of color images (150 images) captured under different illuminance levels are analyzed using several image quality assessment methods, namely the mean of the locally (µ, σ) model, CM, EFD and SSIM; these methods are then compared with the suggested QF method.

Second, we introduce two methods to enhance color images based on changes in lightness and contrast, named Modified Retinex (MR) and Adaptive Histogram Equalization (AHE). These algorithms have been compared with other algorithms (MSRCR, HE and AINDANE). From the test results, it is noted that the QF assessment is a robust method for determining the quality of images with lightness levels ranging from low to high. MR and MSRCR are good algorithms for enhancing color images with low and moderate lightness levels compared with the other algorithms, whereas for images with high lightness levels the MR and AHE algorithms are the best enhancement methods.

Abbreviations

Symbol      Definition
AC          Alternating Current
AHE         Adaptive Histogram Equalization
AINDANE     Adaptive and Integrated Neighborhood Dependent Approach for Nonlinear Enhancement
ANSI        American National Standards Institute
BRDF        Bi-Directional Reflectance Distribution Function
CCT         Correlated Color Temperature
CIE         Commission Internationale de l'Éclairage
CIELAB      Color Space
CIELUV      Color Space
CIE XYZ     Color Space
CM          Colorfulness Metrics
CRI         Color Rendering Index
CRT         Cathode Ray Tube
CSF         Contrast Sensitivity Function
EFD         Entropy of the First Derivative Image
HE          Histogram Equalization
HSV         Color Space
JPEG        Joint Photographic Experts Group
LED         Light Emitting Diode
LEDs        Light Emitting Diodes
MH          Metal Halide
MOS         Mean Opinion Score
MR          Modified Retinex algorithm
MSE         Mean Squared Error
MSR         Multi-Scale Retinex algorithm
MSRCR       Multi-Scale Retinex algorithm with Color Restoration
NASA        National Aeronautics and Space Administration
NTSC        National Television Standards Committee
PDF         Probability Density Function
PSNR        Peak Signal-to-Noise Ratio
QF          Quality Factor
R,G,B       Red, Green, Blue
RGB         Color Space
SPD         Spectral Power Distribution
SSIM        Structural Similarity Index
SSR         Single Scale Retinex
YIQ         Color Space

List of Contents

Subject                                                        Pages
Acknowledgments ……………………………………………………  i
Abstract ………………………………………………………………  ii
List of Abbreviations ………………………………………………  iii
List of Contents ……………………………………………………  iv

Chapter One …… General Introduction                            1-7
1.1 Introduction ……………………………………………………  1
1.2 Challenges ………………………………………………………  2
1.3 Literature Review ………………………………………………  3
1.4 Research Aim ……………………………………………………  6
1.5 Structure of Thesis ……………………………………………  6

Chapter Two …… Theoretical Principles                          8-28
2.1 Introduction ……………………………………………………  8
2.2 Light ……………………………………………………………  8
2.2.1 Light and Materials …………………………………………  8
2.2.2 Artificial Light Sources ……………………………………  11
2.2.3 Lighting Quality ……………………………………………  13
2.3 Human Visual System …………………………………………  15
2.3.1 The Human Eye ………………………………………………  16
2.3.2 Visual Sensitivity ……………………………………………  17
2.3.3 Contrast ………………………………………………………  17
2.3.5 The Contrast Sensitivity Function …………………………  18
2.3.6 Adaptation ……………………………………………………  19
2.3.7 Brightness Perception ………………………………………  20
2.3.8 Lightness and Color Constancy ……………………………  20
2.4 Color Space ……………………………………………………  21
2.4.1 CIE Chromaticity System ……………………………………  21
2.4.2 RGB Color Space ……………………………………………  23
2.4.3 YIQ Color Model ……………………………………………  24
2.4.4 HSV Color Model ……………………………………………  25

2.4.5 CIELAB Color Space …………………………………………  27

Chapter Three …… Color Image Analysis and Enhancement Based on Changing in Lightness and Contrast  29-43
3.1 Introduction ……………………………………………………  29
3.2 Image Quality Measurement ……………………………………  29
3.2.1 Subjective Quality Measurement ……………………………  29
3.2.2 Objective Quality Measurement ……………………………  30
3.3 Image Quality Assessment ……………………………………  31
3.3.1 The Structural Similarity Index ……………………………  32
3.3.2 The Entropy of the First Derivative Image …………………  34
3.3.3 The Mean of Locally (µ, σ) Model …………………………  35
3.3.4 Colorfulness Metrics ………………………………………  36
3.4 Contrast and Lightness Enhancement Algorithms ……………  36
3.4.1 Histogram Equalization ……………………………………  36
3.4.2 Multi-Scale Retinex Algorithm ……………………………  38
3.4.3 AINDANE Algorithm …………………………………………  39

Chapter Four …… Lighting Systems and Suggested Algorithms of Image Quality Assessment and Enhancement  44-53
4.1 Introduction ……………………………………………………  44
4.2 Lighting Systems and Image Features …………………………  44
4.2.1 LEDs Lighting System ………………………………………  44
4.2.2 MH Lighting System …………………………………………  46
4.3 Adaptive Methods in Objective Quality Based on Lightness Changing …  46
4.4 Contrast and Lightness Enhancement Algorithms ……………  49
4.4.1 Adaptive Histogram Equalization (AHE) Algorithm ………  49
4.4.2 Modified Retinex (MR) Algorithm …………………………  51

Chapter Five …… The Results and Discussion                     54-110
5.1 Introduction ……………………………………………………  54
5.2 Determining the Distribution of Illuminance …………………  54
5.2.1 Illuminance Distribution in the LEDs Lighting System ……  54
5.2.2 Illuminance Distribution in the MH Lighting System ………  66
5.3 Results of the Image Quality Assessments ……………………  76
5.3.1 The Entropy of the First Derivative Image (EFD) …………  76
5.3.2 The Mean of Locally (µ, σ) Model …………………………  76
5.3.3 Colorfulness Metrics (CM) …………………………………  81
5.3.4 Quality Factor (QF) Assessment ……………………………  81
5.3.5 Structural Similarity Index …………………………………  81
5.4 The Results of Image Enhancement ……………………………  89
5.4.1 Color Image Enhancement with Low and Moderate Lightness Levels …  89
5.4.2 Color Image Enhancement with Moderate and High Lightness Levels …  96

Chapter Six …… The Conclusion and Suggested Future Work        111-112
Conclusions …………………………………………………………  112
Suggestions for Future Work ………………………………………  113
References …………………………………………………………  114

Chapter One:

General Introduction


1.1 Introduction

Digital image processing is the technology of applying computer algorithms to process digital images. The outcome of this process can be either images or a set of characteristics representing properties of the original images. Applications of digital image processing are commonly found in robotics and intelligent systems, medical imaging, remote sensing, photography and forensics. The development of the computer has driven the development of image processing as a science. At the beginning of the 20th century it was common to use small colored images; with the development of computer systems and their great storage capacity, it became possible to use colored images of high clarity and large size, and to process them. Digital image processing can generally be divided into three main categories [1]:

a. Image Enhancement and Restoration.

b. Image Coding and Compression.

c. Image Segmentation and Description.

Statistical analysis plays an important role in various image processing applications in the above groups in order to determine image quality; for example, the processed images resulting from enhancement, compression and coding need their quality assessed, and this is done using mathematical analysis. Image enhancement by reducing noise to a minimal level is one of the most fundamental research topics in image processing. Different types of noise, additive or multiplicative, are introduced between the acquisition and the digitization of an image, causing degradation in quality; but there is another important element that can affect the image, namely the illumination incident on the scene in the real world. The level, type, distribution and uniformity of the illumination can determine the image quality; hence, various image processing techniques have been developed to recover meaningful information under changing lighting conditions. Among these are the algorithms based on integrated neighborhood dependency of pixel characteristics and those based on the illumination-reflectance model, which perform well for enhancing the visual quality of digital images captured under nonuniform (extremely low and high) lighting conditions.

1.2 Challenges

A common and often serious discrepancy exists between recorded color images and the direct observation of scenes. Human perception excels at constructing a visual representation with vivid color and detail across a wide range of photometric levels due to lighting variations; in addition, human vision computes color so as to be relatively independent of spectral variations in illumination [2]. The images taken by a camera or displayed on monitors and display devices therefore suffer from certain limitations: bright regions of the image may appear overexposed and dark regions may appear underexposed. Contrast and lightness enhancement are necessary to improve the visual quality of such images. But color image enhancement (in the presence of noise or insufficient lighting) can cause:

a. High smoothing in the edges region (edges distortion).

b. Color shift or false colors.

c. Halo effect.

Many researchers [2-5] have enhanced color images degraded by insufficient lighting and determined image quality using their own subjective quality measurements, without any objective measurement. This does not represent a good criterion for determining the efficiency of the enhancement; hence it is better to use objective measurements, or subjective measurements with a large sample of persons. Both approaches have drawbacks, especially the subjective measurements, which are affected by the display tools, the illumination of the environment and other personal factors. The crucial difficulty in image quality assessment based on objective measurements is that there is no reference or optimal image with which to determine the quality of the enhanced image. No-reference image quality assessment is useful in many still-image applications, such as assessing the quality of high resolution images or of JPEG compressed images [6]. Moreover, image quality here must be measured as a function of varying lightness, or of the image enhancement applied to compensate for insufficient lightness.

1.3 Literature Review

The previous work on color image enhancement techniques based on lightness change, and on image analysis to determine image quality, is reviewed below with a brief description of each study:

Eli Peli (1990): proposed a method for contrast enhancement based on the physical contrast of simple images, such as sinusoidal gratings or a single patch of light on a uniform background, which is well defined and agrees with the perceived contrast [7].

D. J. Jobson et al. (1996): introduced a new algorithm to improve the brightness, contrast and sharpness of an image. It performs a non-linear spatial/spectral transform that provides simultaneous dynamic range compression [8]. In 1997 they compared this method with other enhancement techniques such as histogram equalization and homomorphic filtering [9].

B. V. Funt et al. (1997): introduced investigations into the Multi-Scale Retinex approach to image enhancement, to explain the effect of the processing from a theoretical standpoint [10]. In the same year they modified the multi-scale retinex approach so that the processing is more justified theoretically, suggesting a new algorithm with fewer arbitrary parameters that is more flexible [11].


D. J. Jobson et al. (2002): suggested a no-reference image quality assessment, proposing the idea that good visual representations seem to be based upon some combination of high regional visual lightness and contrast, computed from the locally evaluated mean and standard deviation [12].

D. Hasler and S. Susstrunk (2003): measured colorfulness metrics in natural images in the CIELAB color space, based on the trigonometric length of the standard deviation and mean of the a and b components [13].

Ayten N. Al-Biaty (2005): evaluated image quality by computing the image contrast in edge regions, introduced quantitative measures to determine image quality, and then estimated the efficiency of various techniques in image processing applications. In her work she suggested new techniques to calculate image contrast, studying it as a function of the number of smoothing iterations with a mean filter and as a function of gray-level resolution [14].

Li Tao and Vijayan K. Asari (2005): introduced AINDANE (Adaptive and Integrated Neighborhood Dependent Approach for Nonlinear Enhancement), an algorithm to improve the visual quality of digital color images captured under extremely low or nonuniform lighting conditions. It consists of two main parts: luminance enhancement and contrast enhancement [4].

Li Tao et al. (2006): proposed an image enhancement technique developed to improve the visual quality of digital images that exhibit dark shadows due to the limited dynamic ranges of imaging and display devices, which are incapable of handling high dynamic range scenes. The proposed technique processes images in two separate steps: dynamic range compression and local contrast enhancement. Dynamic range compression is a neighborhood-dependent intensity transformation able to enhance the luminance in dark shadows while keeping the overall tonality consistent with that of the input image [15].

Osman Nuri and Capt Ender (2007): proposed a new algorithm, based on a non-linear transform, to enhance night scenes and scenes under nonuniform lighting conditions, in which either the low intensity areas or the high intensity areas cannot be clearly seen [16].

Ali J. Al Dalawy (2008): studied the TV-satellite images of the "Al-Hurra" channel broadcast on Arabsat, Hotbird and Nilesat; these images were of the same type on the three satellites. The images were analyzed statistically by finding the statistical distribution and studying the relations between the mean and the standard deviation of the color components (RGB) and the light component, for the image as a whole and for extracted homogeneous regions. He also studied the contrast of image edges, using the Sobel operator to delimit the edges, and studied the contrast as a function of the edge-finding threshold. He found that Hotbird gave the best results [17].

Thuy Tuong et al. (2008): proposed a method to determine no-reference image quality by calculating the entropy of the first derivative of the lightness component and evaluating the probability of the edge regions [18].

Salema S. Salman (2009): studied the effect of different lighting operations, in type and intensity, on test images using different light sources (tungsten and fluorescent lamps); studied the homogeneity of the light-intensity distribution along lines taken through the middle of a white test image's width and height; and focused on the contrast ratio as a function of light intensity, using a test image with one half white and the other half black [19].

W. S. Malpica and A. C. Bovik (2009): suggested full-reference image quality assessment using the structural similarity index; this method requires two images (an optimal and an original image), after which three different measures are evaluated: the luminance, contrast and structure comparisons [20].

Qieshi Zhang et al. (2010): presented an adaptive histogram separation and mapping method for backlit image enhancement: the proposed unit adaptively separates the histogram and then maps the low dynamic range histogram partition into a high dynamic range. By doing this, excessive or insufficient enhancement can be avoided. The results show that the proposed method gives better enhancement when compared with some histogram-analysis based methods [21].

Diana C. Gil et al. (2011): introduced contrast enhancement methods in which image quality is evaluated by means of objective metrics, such as intensity contrast and brightness error, and by subjective assessment; execution time is also measured. They found that the technique based on histogram modification presents the better trade-off considering both aspects [22].

1.4 Research Aim

There are two main purposes of the present study. The first is to analyze, using many algorithms, color images captured under different illuminance (lightness and contrast) levels produced by two types of lighting systems (LEDs and MH), and to study the distribution of the illuminance in the two systems. The quality of these images has been determined using the mean of the locally (µ, σ) model, the Structural Similarity Index (SSIM), the entropy of the first derivative image (EFD) and the colorfulness metrics (CM); moreover, we suggest a new method called the Quality Factor (QF). Having established the effect of the illuminance level on the image, the second goal is to enhance these images, relying on many algorithms such as histogram equalization and the multi-scale retinex algorithm with color restoration (MSRCR).

1.5 Structure of thesis

The organization of the rest of the thesis is as follows: chapter two details the theoretical principles of light, the human visual system and color. Color image analysis and enhancement based on lightness and contrast change are introduced in chapter three. Chapter four includes the lighting systems and the suggested algorithms of image quality assessment and enhancement. The fifth chapter gives the results and discussion; conclusions and future work are given in chapter six.

Chapter Two:

Theoretical Principles


2.1 Introduction

Color in images is usually represented by a tri-band signal, for instance Red, Green and Blue (R, G, B). This signal is sensitive to changes in illumination: in the real world an image is formed when light energy is scattered by the surfaces in the scene towards the viewpoint, and it is the product of the shape, reflectance and illumination in the scene. Many theoretical concepts play an important role in image formation, analysis and enhancement; in this chapter we focus on several of them, such as light, color spaces and the human visual system.

2.2 Light

Light is just one portion of the various electromagnetic waves travelling through space. These waves have both a frequency and a wavelength, the values of which distinguish light from other forms of energy on the electromagnetic spectrum. Light is emitted from a body due to incandescence, electric discharge, electroluminescence or photoluminescence [23]. Images cannot exist without light: to produce an image, the scene must be illuminated with one or more light sources. In this section we focus on the interaction of light with surfaces and on some artificial light sources; moreover, we identify general factors that affect the assessment of lighting quality.

2.2.1 Light and Materials

Radiometry is the science of measuring light from any part of the electromagnetic spectrum. In general the term applies to the measurement, using optical instruments, of light in the visible, infrared and ultraviolet wavelength regions; the terms and units have been standardized in the American National Standards Institute (ANSI) publications [24]. Photometry, in contrast, is the science of measuring light within the visible portion of the electromagnetic spectrum, in units weighted in accordance with the sensitivity of the human visual system [25]. Light is a form of radiometric energy, and radiometry is used in graphics to provide the basis for illumination calculations, whose quantities are radiant power, radiant intensity, irradiance and radiance; the corresponding photometric units are luminous flux, luminous intensity, illuminance and luminance [26]. Simulating the distribution of light involves characterizing the reflections of light from surfaces. Various materials reflect light in very different ways; for example, matt house paint reflects light very differently from the often highly specular paint used on a sports car. Reflection is the process whereby light of a specific wavelength is (at least partially) propagated outward by a material without change in wavelength, or, more precisely, "reflection is the process by which electromagnetic flux (power), incident on a stationary surface or medium, leaves that surface or medium from the incident side without change in frequency; reflectance is the fraction of the incident flux that is reflected" [27]. The effect of reflection depends on the directional properties of the surface involved. The reflective behavior of a surface is described by its Bi-Directional Reflectance Distribution Function (BRDF).

The BRDF expresses the probability that the light coming from a given

direction will be reflected in another direction [28] as shown in figure (2.1).

Generally, we define the BRDF as the ratio of the reflected intensity Iv (radiance) to the radiation flux Eir (irradiance) in the incident beam [28]:

R(θi, ϕi, θe, ϕe) = Iv(θi, ϕi, θe, ϕe) / Eir(θi, ϕi)        (2.1)

Figure 2.1: Geometry of the BRDF.

This relates the incoming light in the direction (θi, ϕi) to the outgoing light in the direction (θe, ϕe), where

Eir(θi, ϕi) = Iv(θi, ϕi) cos θi dω        (2.2)

and dω is the solid angle.
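As a concrete illustration of how equations (2.1) and (2.2) fit together, the following sketch (not part of the thesis; the function names and numbers are purely illustrative) evaluates the constant BRDF of an ideal diffuse (Lambertian) surface and the radiance it reflects under a small light source:

```python
import math

def lambertian_brdf(albedo: float) -> float:
    # An ideal diffuse (Lambertian) surface reflects equally in all
    # directions: its BRDF is the constant albedo / pi, independent of
    # the incident (theta_i, phi_i) and exitant (theta_e, phi_e) angles.
    return albedo / math.pi

def irradiance(radiance: float, theta_i: float, solid_angle: float) -> float:
    # Eq. (2.2): E_ir = I_v * cos(theta_i) * d_omega, for a small source
    # of radiance I_v subtending solid angle d_omega at incidence theta_i.
    return radiance * math.cos(theta_i) * solid_angle

def reflected_radiance(brdf: float, radiance: float,
                       theta_i: float, solid_angle: float) -> float:
    # Rearranging eq. (2.1): outgoing radiance I_v(out) = R * E_ir.
    return brdf * irradiance(radiance, theta_i, solid_angle)

# A surface of 50% albedo, lit head-on (theta_i = 0) by a small source:
r = lambertian_brdf(0.5)
out = reflected_radiance(r, radiance=100.0, theta_i=0.0, solid_angle=0.01)
print(r, out)
```

Note how the outgoing radiance of the diffuse surface does not depend on the viewing direction, which is exactly the Lambertian property described in the list of reflection types below equation (2.2).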

Figure (2.2) shows different types of material behavior, which are defined as

follows [29]:

• Specular (mirror): Specular materials reflect light in one direction only,

the mirror direction. The outgoing direction is in the incident plane and

the angle of reflection is equal to the angle of incidence.

• Diffuse: Diffuse, or Lambertian materials reflect light equally in all

directions. Reflection of light from a diffuse surface is independent of

incoming direction. The reflected light is the same in all directions and

does not change with viewing angle.

• Mixed: Reflection is a combination of specular and diffuse reflection.

Overall reflectance is given by a weighted combination of diffuse and

specular components.

• Retro-Reflection: Retro-Reflection occurs when the light is reflected

back on itself that is the outgoing direction is equal, or close to the

incident direction. Retro reflective devices are widely used in the areas of

night time transportation and safety.

• Gloss: Glossy materials exhibit a property that involves mixed reflection and is responsible for a mirror-like appearance of a rough surface.

Figure 2.2: Types of Reflection in the surface of Materials.


Most materials do not fall exactly into one of the idealized material

categories described above, but instead exhibit a combination of specular and

diffuse characteristics and real materials generally have a more complex

behavior, with a directional character resulting from surface finish and sub-surface scattering [29].
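The specular case above has a compact vector form worth making explicit: for an incoming direction d and unit surface normal n, the mirror direction is d − 2(d·n)n, which keeps the outgoing ray in the incidence plane with the angle of reflection equal to the angle of incidence. A minimal sketch (illustrative only, not from the thesis):

```python
def reflect(d, n):
    # Mirror (specular) reflection: r = d - 2 (d . n) n, where d is the
    # incoming direction and n is the unit surface normal. The outgoing
    # ray stays in the incidence plane and makes the same angle with n
    # as the incoming ray.
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray travelling down-and-right onto a horizontal surface (normal +z)
# bounces up-and-right: the z component flips, x and y are unchanged.
print(reflect((1.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # → (1.0, 0.0, 1.0)
```

A diffuse or mixed material would instead spread the outgoing light over many directions, weighted by its BRDF, rather than concentrating it all in this single mirror direction.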

2.2.2 Artificial Light Sources

There are many different types of lamps for everyday lighting and for color imaging lighting. The major categories for everyday lighting are incandescent, tungsten halogen, fluorescent, mercury, metal halide, sodium and Light Emitting Diodes (LEDs). For color imaging (photography), the major category is the electronic flash lamp.

Two general characteristics of lamps that are important for color imaging are

their spectral power distribution as a function of their life time and operating

conditions. The light output of a lamp decreases during its life. Also, the spectral

power distribution of a tungsten lamp depends on the voltage at which it is

operated. Therefore, for critical color calibration or measurement, we cannot

always assume that the spectral power distribution of a lamp will remain the

same after hours of use or at various operating temperatures [30]. In this study

we used two types of artificial light sources, white LEDs and metal halide lamps, which are described as follows:

a. Light Emitting Diode (LED):

The basic operating principle behind light emitting diodes is the recombination of charge carriers in a semiconductor junction: conduction occurs partly by negatively charged carriers (n-type) and partly by positively charged carriers (p-type), and when charge carriers of different types recombine, the released energy may be emitted as light [31]. LED lamps are the newest

addition to the list of energy efficient light sources. While LED lamps emit

visible light in a very narrow spectral band, they can produce "white light". This

is accomplished with either a red-blue-green array or a phosphor-coated blue

LED lamp. LED lamps have made their way into numerous lighting applications


including exit signs, traffic signals, under-cabinet lights, and various decorative

applications. Though still in their infancy, LED lamp technologies are rapidly

progressing and show promise for the future [32].

b. Metal Halide (MH)

These are discharge lamps: high-pressure mercury lamps with a clear bulb, in whose discharge tube different halide compounds of rare-earth metals are added in addition to the mercury. The spectra emitted by the added rare-earth metal vapors improve the color, the color rendering and the efficacy of the original high-pressure mercury lamps. Metal-halide discharge lamps are used for lighting large sports stadiums, squares, etc., where a whitish color and good color rendering are necessary [32]. Table (2.1) shows the specifications of the white LEDs and MH lamps; in this table we can see some of their optical and electrical properties, whereas figure (2.3) demonstrates their relative spectral distributions compared with daylight at D65 (white light at 6500 K). In this figure the distribution of the white LEDs lamp is fairly near the distribution of daylight.

Table 2.1: Some properties of the white LEDs and MH lamps.

Property           White LEDs lamp     MH lamp
Model              3W4CH/340/China     BTE40/Italy
Color temperature  3500 K              4100 K
CRI                75                  70
Efficacy           30 lm/W             100 lm/W
Average voltage    3 V                 220 V
Average power      0.7 W (one chip)    1000 W


2.2.3 Lighting Quality

What does lighting quality mean? There is no complete answer to this question: lighting quality depends on several factors. It depends largely on people's expectations and past experiences of electric lighting, and it cannot be expressed simply in terms of photometric measures, nor can there be a single universally applicable recipe for good quality lighting [36]. Several quality issues are addressed in this section, as follows:

Visual performance [36]: One of the major aims of lighting practice and recommendations is to provide adequate lighting for people to carry out their visual tasks. Visibility is defined by our ability to detect objects or signs of given dimensions, at given distances and with given contrasts against the background. Visual performance is defined by the speed and accuracy of performing a visual task, and visual performance models are used to evaluate the interrelationships between visual task performance, visual target size and contrast, observer age and luminance level. Light levels that are optimized in terms of visual performance should guarantee that the visual task can be carried out well above the visibility threshold limits.

Figure 2.3: Relative spectral power distributions of the MH lamp [33], the white LEDs lamp [34], and daylight [35].

Visual performance improves with increasing luminance. Yet there is a plateau above which further increases in luminance do not lead to improvements in visual performance. Thus increasing luminance levels above

the optimum for visual performance may not be justified and can on the

contrary lead to excessive use of energy. The visual performance aspect and

consumption of electricity for lighting should be in balance in order to increase

energy efficiency as shown in figure (2.4).

Color characteristics: The color characteristics of light in space are determined

by the spectral power distribution (SPD) of the light source and the

reflectance properties of the surfaces in the room. The color of light

sources is usually described by two properties, namely the correlated color

temperature (CCT) and general color rendering index (CRI) [36]. CRI is the

evaluation of how the color looks in a given light source as compared to a

reference source. Because the spectral composition of different lamps differs, a

sample may reflect different wavelengths and look different under each lamp.

The special CRI defined by the Commission Internationale de l'Éclairage (CIE) is given by [30]:

R_i = 100 − 4.6 ΔE_i (2.3)

where ΔE_i is the distance in CIELUV color space between the coordinates of test-color sample i under the test source and under the reference source.

Figure 2.4: Relative visual performance as a function of background

luminance and target contrast [36].


The CIE general color-rendering index R_a is defined as the arithmetic mean of

the eight CIE special color-rendering indices R_i for the eight standard test-color

samples [30], i.e.,

R_a = (1/8) Σ_{i=1}^{8} R_i (2.4)
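Equations (2.3) and (2.4) can be sketched in a few lines of Python (an illustrative sketch, not code from the thesis; the function names are ours):

```python
def special_cri(dE_uv):
    """CIE special colour-rendering index, Eq. (2.3): Ri = 100 - 4.6*dE,
    where dE_uv is the CIELUV colour difference for one test sample."""
    return 100.0 - 4.6 * dE_uv

def general_cri(special_indices):
    """CIE general colour-rendering index Ra, Eq. (2.4): the arithmetic
    mean of the eight special indices Ri."""
    assert len(special_indices) == 8, "Ra uses the eight standard samples"
    return sum(special_indices) / 8.0
```

A source that renders every sample perfectly (ΔE = 0 for all eight) gets R_a = 100.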

Uniformity of lighting [36]: Uniformity of lighting in space can be desirable or

less desirable depending on the function of the space and the type of activities. A

completely uniform space is usually undesirable, whereas too non-uniform

lighting may cause distraction and discomfort. Lighting standards and codes

usually provide recommended illuminance ratios between the task area and

its surroundings. Most indoor lighting design is based on providing levels

of illuminance, while the visual system deals with light reflected from

surfaces, i.e. luminance. For office lighting there are recommended

luminance ratios between the task and its immediate surroundings.

Glare [36]: Is caused by high luminances or excessive luminance differences in

the visual field. Disability glare and discomfort glare are two types of glare, but

in indoor lighting the main concern is about discomfort glare. This is visual

discomfort in the presence of bright light sources, luminaires, windows or

other bright surfaces.

Flicker [36]: Is produced by the fluctuation of light emitted by a light

source. Light sources that are operated with AC supply, produce regular

fluctuations in light output. The visibility of these fluctuations depends on the

frequency and modulation of the fluctuation.

2.3 Human Visual System (HVS)

Perception is the process that enables humans to make sense of the stimuli

that surround them. The HVS can be divided into two main parts: the eyes, which

capture images and convert them to signals that can be interpreted by


the brain, and the visual pathways, which process and transmit this

information to the brain [37].

In recent years visual perception has increased in importance in computer

graphics, predominantly due to the demand for realistic computer generated

images [38, 39]. The goal of perceptually-based rendering is to produce imagery

that evokes the same responses as an observer would have when viewing a real-

world equivalent. To this end, work has been carried out on exploiting the

behavior of the HVS. Psychophysical experiments can be used to determine

responses such as sensitivity to a stimulus. In the field of computer graphics, this

information can then be used to design systems that are finely attuned to the

perceptual attributes of the visual system. To make an assessment of the effects

of reflected ambient light on the perception of electronically displayed images, it

is necessary to understand several perceptual phenomena that may play a part in

the process. The relevant attributes of the HVS are detailed below.

2.3.1 The human eye

The HVS receives and processes electromagnetic energy in the form of light

waves. This starts with the path of light through the pupil (figure 2.5), which

changes in size to control the amount of light reaching the back of the eye. Light

then passes through the lens, which provides focusing adjustments, before

reaching the photoreceptors in the retina at the back of the eye. These receptors

in the retina consist of about 120 million rods and 8 million cones [40].

Figure 2.5: A schematic section through the human eye [41].


Rods are highly sensitive to light and provide low intensity vision in low light

levels, but they cannot detect color. They are located primarily in the periphery

of the visual field. In contrast to this, high-acuity color vision is provided

through three types of cones: L, which are sensitive to long wavelengths; M,

which are sensitive to medium wavelengths; and S, which are short wavelength

sensitive. Finally, the photo pigments in the rods and cones transform this light

into electrical impulses that are passed to neuronal cells and transmitted to the

brain via the optic nerve [41].

2.3.2 Visual sensitivity

The way in which we perceive images depends on the amount of light

available. In dark scenes our visual acuity (the ability to resolve spatial detail) is

low and colors cannot be distinguished. Daylight (cone) vision is called

‘photopic vision’ and night (rod) vision ‘scotopic vision’; between them there is a

range of ‘mesopic vision’ where both cones and rods are active, as shown in

figure (2.6). A luminance of white light above about 3 cd/m² is regarded as

photopic, and a luminance below 0.001 cd/m² is scotopic, while the mesopic

range is from 0.001 to 3 cd/m² [40, 41].

Figure 2.6: The range of luminance in the natural environment

and associated visual parameters [41].


2.3.3 Contrast

The term contrast generally refers to the intensity difference between given

light and dark values. If the difference is great then the contrast is said to be

high; if small, then the contrast is low. Contrast can be computed in several

ways, but one of the most common is the Michelson formula, which is used to

compute the contrast of a periodic pattern and is defined as [7]:

C_w = (L_max − L_min) / (L_max + L_min) (2.5)

where L_max and L_min refer respectively to the maximum and minimum

luminance values in the pattern.
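Equation (2.5) is a one-liner in code; a small Python sketch (function name ours):

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast of a periodic pattern, Eq. (2.5):
    1.0 for full-range patterns, 0.0 for a uniform field."""
    return (l_max - l_min) / (l_max + l_min)
```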

2.3.4 Thresholds

It is easily demonstrated that in a brightly-lit room the addition of a single

candle is not obvious, but when the room is dark, lighting a candle makes an

immediate impression. Similarly, a whisper is sufficient to be heard in a quiet

environment, whereas a shout is necessary in noisy conditions. In 1834 the

German physiologist Weber observed this principle, defining Weber's Law

[40]: the ratio of the increment threshold to the background intensity is a

constant, denoted the Weber fraction, which is given by [40]:

k_w = ΔI_i / I_i (2.6)

Here I_i is the stimulus intensity (for example, a given luminance value), ΔI_i is

the increment or decrement in intensity needed for an observer to notice a

difference in the initial intensity, and k_w is the value of the constant ratio.
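A minimal Python sketch of equation (2.6); the value k_w = 0.02 in the second helper is purely illustrative and not taken from the thesis:

```python
def weber_fraction(delta_i, i):
    """Weber fraction k_w = dI / I, Eq. (2.6)."""
    return delta_i / i

def increment_threshold(background, k_w=0.02):
    """Smallest noticeable change dI predicted by Weber's law for a given
    background intensity (k_w = 0.02 is an illustrative value only)."""
    return k_w * background
```

Doubling the background doubles the predicted threshold, which is exactly the "candle in a bright room" observation above.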

2.3.5 The Contrast Sensitivity Function

The ability to perceive a just noticeable difference is known as contrast

sensitivity. In 1968 Campbell and Robson presented a theory of perception

showing that contrast sensitivity varies according to spatial frequency [41].

Spatial frequency indicates the number of gratings (pairs of bars, one black and


one white, also known as a cycle) which form a retinal image at a given distance

[40]. They measured this variation through the use of a compound sinusoidal

grating stimulus, as shown in figure (2.7). The use of gratings of different spatial

frequencies (i.e. with different numbers of cycles per degree of angle of vision)

means that contrast sensitivity can be measured at each spatial frequency. This

provides a curve that describes the threshold contrast needed to detect a given

spatial frequency, and this curve is known as the contrast sensitivity function

(CSF), which is shown in figure (2.7).

2.3.6 Adaptation

The HVS adjusts to the stimuli that are presented to it, resulting in changes

in sensitivity known as adaptation. This process enables the visual system to

respond to large variations in luminance, allowing it to adjust to the prevailing

light level. The rods in the eye are around ten times as sensitive as cones, and so

provide maximum sensitivity at low light levels [43]. Visual adaptation from light to dark is known as dark adaptation, and can last for tens of minutes; for example, the length of time it takes the eye to adapt at night when the light is switched off.

Figure 2.7: The Campbell-Robson sensitivity chart (left, from [42]). The spatial frequency increases logarithmically from left to right; the contrast varies logarithmically from bottom to top. The resulting threshold curve determines an individual's contrast sensitivity function (right, from [40]).

Conversely, light adaptation, from dark to light, can take only

seconds, such as leaving a dimly lit room and stepping into bright sunlight. This

change in sensitivity is brought about through physiological processes. In high

luminance levels the photopigment in the eye is bleached, causing a loss of

sensitivity in the photoreceptors. The photoreceptors regain their sensitivity

gradually, accounting for the temporal aspects of adaptation. Additionally,

though less significantly, the amount of light entering the pupil changes [44].

2.3.7 Brightness perception

While luminance intensity can be measured on a physical scale

(photometric and radiometric), the term brightness actually denotes a perceptual

variable, which refers to a perceived level of illumination, such as the amount of

light an area appears to emit [40]. In addition, the term lightness usually refers to

the perceived reflectance of a surface. Brightness can be estimated for unrelated

stimuli (visual stimuli presented in isolation) and related stimuli (visual stimuli

presented alongside other visual stimuli) [45]. The relationship between

luminance intensity and perceived brightness is non-linear and can be described

by a power law function as:

S = k I_i^a (2.7)

This is known as Stevens' power law [46], where S is the magnitude of the

sensation, k is a scale constant, and I_i is the intensity of the physical stimulus

raised to a power a. The exponent for brightness is experimentally determined to

be 0.33.
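Equation (2.7) with the brightness exponent a = 0.33 can be sketched as follows (illustrative; function name ours):

```python
def stevens_brightness(intensity, k=1.0, a=0.33):
    """Perceived magnitude S = k * I**a, Eq. (2.7); a = 0.33 is the
    experimentally determined exponent for brightness."""
    return k * intensity ** a
```

Because a < 1 the function is compressive: doubling the physical intensity multiplies the perceived brightness by only 2^0.33 ≈ 1.26.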

2.3.8 Lightness and color constancy

The ability to judge a surface’s reflectance properties despite any changes

in illumination is known as color constancy. Lightness constancy is the term

used to describe the phenomena whereby a surface appears to look the same

regardless of any differences in the illumination [47]. For example, white paper

with black text maintains its appearance when viewed indoors in a dark


environment or outdoors in bright sunlight, even if the black ink on a page

viewed outdoors actually reflects more light than the white paper viewed

indoors. Chromatic color constancy extends this to color: a plant seems as green

when it’s outside in the sun as it does if it’s taken indoors under artificial light.

2.4 Color Space

Color is an important feature of many applications and processes, including

our everyday life. Color can be defined in several ways. When we look at an

object, we can usually straightaway tell which hue it has, i.e. what color “class”

it belongs to, whether it is red, green, yellow, blue and so on. We can also

distinguish between the color’s brightness, i.e. if the color is light or dark. With

these different features in mind, we need a space to describe them exactly and

therefore be able to differentiate between single colors. In 1931 the CIE

proposed the XYZ space which contains all colors human beings can see and is

based on three imaginary primary colors. The HSV, CIELAB and CIELUV or

the YIQ spaces are subsets of the CIE XYZ space and therefore represent only a

fraction of all possible colors. Moreover, most color spaces in this section are

not visually uniform. That is, distances in the color space do not reflect

perceived distances between two colors.

2.4.1 CIE chromaticity system

In 1931, the CIE defined three standard primary colors to be combined to

produce all possible perceivable colors. The three standard primaries of the 1931

CIE, called X, Y, and Z, are imaginary colors defined mathematically with

positive color-matching functions [48]; figure (2.8) shows these functions, whose

values specify the amount of each primary needed to describe any spectral color at 5

nm intervals. The CIE primaries provide an international standard definition for

all colors, and it eliminates negative value color matching and other problems

associated with selecting a set of real primaries [49]. Tristimulus values X, Y,


and Z are computed for a primary light source with power spectrum L(λ) from

the color-matching functions x̄(λ), ȳ(λ), and z̄(λ) as follows [50]:

X = k_m ∫ L(λ) x̄(λ) dλ (2.8)

Y = k_m ∫ L(λ) ȳ(λ) dλ (2.9)

Z = k_m ∫ L(λ) z̄(λ) dλ (2.10)

where k_m = 683.002 lm/W. If the tristimulus values X, Y, and Z are to be calculated

for a reflecting object, then the following equations are used [50]:

X = k_m ∫ R(λ) L(λ) x̄(λ) dλ (2.11)

Y = k_m ∫ R(λ) L(λ) ȳ(λ) dλ (2.12)

Z = k_m ∫ R(λ) L(λ) z̄(λ) dλ (2.13)

Here R(λ) denotes the reflectance of the object. In this case, the constant k_m is

set to [33]:

k_m = 100 / ∫ L(λ) ȳ(λ) dλ (2.14)

Figure 2.8: CIE 1931 standard observer and chromaticity diagram [50].


If we normalize these tristimulus values, we get the so-called chromaticity

coordinates [33]:

x = X / (X + Y + Z),  y = Y / (X + Y + Z),  z = Z / (X + Y + Z) (2.15)

By plotting (x) and (y) for all visible colors, a horseshoe-shaped diagram can

be drawn which is called the CIE chromaticity diagram. The interior and

boundary of the diagram represent all visible chromaticity values. In figure (2.8)

the boundary of the diagram represents the 100 percent pure colors of the

spectrum. The line joining the red and violet spectral points, called the purple

line, is not part of the spectrum. The center point of the diagram represents a

standard white light, which approximates sunlight.
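The normalization of equation (2.15) can be sketched as follows (illustrative; function name ours):

```python
def chromaticity(X, Y, Z):
    """Chromaticity coordinates x, y, z from tristimulus values,
    Eq. (2.15). By construction x + y + z = 1, so only (x, y) are
    needed to plot a color on the chromaticity diagram."""
    s = X + Y + Z
    return X / s, Y / s, Z / s
```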

The forward and inverse matrix conversion from RGB to XYZ is expressed

mathematically as follows [33]:

M_RGB→XYZ = | 0.412  0.358  0.180 |
            | 0.213  0.715  0.072 |   (2.16)
            | 0.019  0.119  0.950 |

M_XYZ→RGB = |  3.240  −1.537  −0.498 |
            | −0.969   1.876   0.042 |   (2.17)
            |  0.056  −0.204   1.057 |
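A sketch of applying equations (2.16) and (2.17) in Python (illustrative, names ours). Note the negative signs in the inverse matrix follow the standard sRGB-to-XYZ inverse, an assumption made here because the scanned equation dropped them:

```python
# Matrix values from Eqs. (2.16)-(2.17); signs of the inverse restored
# from the standard sRGB/D65 matrices (assumption, see lead-in).
RGB_TO_XYZ = [[0.412, 0.358, 0.180],
              [0.213, 0.715, 0.072],
              [0.019, 0.119, 0.950]]
XYZ_TO_RGB = [[ 3.240, -1.537, -0.498],
              [-0.969,  1.876,  0.042],
              [ 0.056, -0.204,  1.057]]

def mat_vec(m, v):
    """Apply a 3x3 matrix to a 3-component color vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]
```

A quick sanity check: converting white (1, 1, 1) to XYZ and back should return approximately (1, 1, 1), confirming the two matrices are inverses up to rounding.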

Although CIE XYZ defines all humanly perceivable colors, it is not

perceptually uniform: the distance between any two points in the space does not

determine the relative closeness of those colors. Nevertheless, CIE XYZ is

fundamental to colorimetry.

2.4.2 RGB color space

The RGB space is the most frequently used color space for image

processing. Since color cameras, scanners and displays are most often provided

with direct RGB signal input or output, this color space is the basic one, which

is, if necessary, transformed into other color spaces [51]. The RGB color model

is made of three additive primaries Red, Green, and Blue. It is the system used

Chapter Two: Theoretical Principles

24

in almost all color Cathode Ray Tube (CRT) monitors, and is device-

dependent (e.g. the actual color displayed depends on what monitor you have,

and what its settings are). It is called additive, because the three different

primaries are added together to produce the desired color. The color model is

shown as a cartesian cube, with usually Red being the x- axis, Green being the

y-axis, and Blue being the z-axis, as shown in figure (2.9) [52]. Each color,

which is described by its RGB components, is represented by a point and can be

found either on the surface or inside the cube.

All grey colors are placed on the main diagonal of this cube, from black

(R=G=B=0) to white (R=G=B=max). In figure (2.9) the colors marked with a P are the primary

colors, and the dashed line indicates where to find the grays, going from (0, 0, 0) to

(max, max, max) [33], where max equals 255 (or 1 for normalized values).

2.4.3 YIQ color model

In the development of the NTSC television system used in the United

States, a color coordinate system with the coordinates Y, I, and Q is defined for

transmission purposes. To transmit a color signal efficiently, the R, G, and B

signals are more conveniently coded through a linear transformation. The luminance

signal is coded in the Y-component. The additional portions I and Q contain the

entire chromaticity information, which is also denoted as the chrominance signal in television technology [33].

Figure 2.9: RGB color space [52].

The I component contains orange-cyan hue information, and the Q component contains

green-magenta hue information. The transformation from the basic RGB color

space to the YIQ color space is performed by [51]:

M_RGB→YIQ = | 0.299   0.587   0.114 |
            | 0.596  −0.270  −0.322 |   (2.18)
            | 0.211  −0.523   0.312 |

While the inverse transformation is given by:

M_YIQ→RGB = | 1   0.956   0.621 |
            | 1  −0.272  −0.647 |   (2.19)
            | 1  −1.106   1.703 |

There are two peculiarities of the YIQ color model: the first is that the visual

system is more sensitive to changes in luminance than to changes in

chromaticity; the second is that the color gamut is quite small, so it can be specified

adequately with one rather than two color dimensions. These properties are very

convenient for the transfer of TV signals [53].
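Equations (2.18) and (2.19) as a Python sketch (illustrative, names ours; the coefficients follow the standard NTSC matrices, which the scanned values appear to garble slightly):

```python
def rgb_to_yiq(r, g, b):
    """Forward transform of Eq. (2.18)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.270 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def yiq_to_rgb(y, i, q):
    """Inverse transform of Eq. (2.19)."""
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b
```

For any gray (R = G = B) the chrominance components I and Q are approximately zero, which is exactly the separation of luminance from chromaticity described above.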

2.4.4 HSV Color Model

The three components in the HSV color model are hue (H), saturation (S) and

value (V). Hue is an attribute associated with the dominant wavelength in a

mixture of light waves [54]. For example, a light wave with central tendency of

565 to 590 nm will be perceived as “yellow” by a human observer. In the HSV color

model, hue represents the dominant color as observed by the human eye and

measured in degrees from 0° to 360°. Saturation measures how vivid or pure a

color is, where purity refers to the amount of “white” mixed with a hue [54].

A highly saturated color implies a pure color while no saturation makes the hue

appear grey. The degree of saturation is inversely proportional to the amount of

white light added and white color has zero saturation. Value represents

brightness of a color. While hue and saturation define chromaticity, value

represents the achromatic notion of its intensity. Pure achromatic colors range

Chapter Two: Theoretical Principles

26

from black to white with all the possible gray colors in between. HSV color

space can be represented in various ways [50,51,54]. One typical representation

uses a hexagonal disk model: the saturation S and hue H components specify

a point inside the hexagonal disk. Saturation for a given level of V is defined as

the ratio of the length of the vector that points to the given color to the length of the

vector that points to the corresponding color on the border of the hexagonal disk

[50]. This results in a set of loci of constant S, as shown in figure (2.10).

The transformation from RGB color space to HSV color space is given by [50]:

V = max{R, G, B} (2.20)

S = (max − min) / max (2.21)

H = (1/6) · (G − B) / (max − min)        if max = R
    (1/6) · (2 + (B − R) / (max − min))  if max = G   (2.22)
    (1/6) · (4 + (R − G) / (max − min))  if max = B

where max = max{R, G, B} and min = min{R, G, B}. All three components V,

S, and H are in the range [0, 1]. The transformation from HSV back to RGB is

given by [50]:

[R, G, B] = [V, V, V]   if S = 0 (2.23)

Figure 2.10: HSV color space.


If the saturation is not zero, then the RGB components are given by:

[R, G, B] = [V, K, M]   if 0 ≤ H < 1/6
            [N, V, M]   if 1/6 ≤ H < 2/6
            [M, V, K]   if 2/6 ≤ H < 3/6   (2.24)
            [M, N, V]   if 3/6 ≤ H < 4/6
            [K, M, V]   if 4/6 ≤ H < 5/6
            [V, M, N]   if 5/6 ≤ H < 1

where M, N, and K are defined as:

M = V (1 − S) (2.25)

N = V (1 − S F) (2.26)

K = V (1 − S (1 − F)) (2.27)

and F = 6H − floor(6H) (2.28)
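The whole round trip can be sketched as a pair of Python functions (illustrative, not the thesis implementation; R, G, B are assumed normalized to [0, 1], and the hue fraction f = 6H − floor(6H) follows the standard reading of Eq. (2.28)):

```python
def rgb_to_hsv(r, g, b):
    """Eqs. (2.20)-(2.22): V, S in [0, 1], H in [0, 1)."""
    mx, mn = max(r, g, b), min(r, g, b)
    v = mx
    s = 0.0 if mx == 0 else (mx - mn) / mx
    if mx == mn:
        h = 0.0                                   # achromatic: hue undefined
    elif mx == r:
        h = ((g - b) / (mx - mn) / 6.0) % 1.0
    elif mx == g:
        h = (2.0 + (b - r) / (mx - mn)) / 6.0
    else:
        h = (4.0 + (r - g) / (mx - mn)) / 6.0
    return h, s, v

def hsv_to_rgb(h, s, v):
    """Eqs. (2.23)-(2.28)."""
    if s == 0:
        return v, v, v                            # Eq. (2.23)
    sector = int(h * 6.0) % 6                     # which 1/6 slice of the hue circle
    f = h * 6.0 - int(h * 6.0)                    # F = 6H - floor(6H), Eq. (2.28)
    m = v * (1.0 - s)                             # Eq. (2.25)
    n = v * (1.0 - s * f)                         # Eq. (2.26)
    k = v * (1.0 - s * (1.0 - f))                 # Eq. (2.27)
    return [(v, k, m), (n, v, m), (m, v, k),      # Eq. (2.24), sector by sector
            (m, n, v), (k, m, v), (v, m, n)][sector]
```

Pure red (1, 0, 0) maps to H = 0, S = 1, V = 1, and a forward-then-inverse pass recovers the original color.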

2.4.5 CIELAB Color Space

The CIELAB color coordinate system was developed to give a simple

measure of color in agreement with the Munsell color system. The CIELAB

space has been designed to be a perceptually uniform space. A system is

perceptually uniform if a small perturbation to a component value is

approximately equally perceptible across the range of that value [33].

In CIELAB, the L-axis is known as the lightness and extends from 0 (black) to

100 (white). The other two coordinates A and B represent redness-greenness and

yellowness-blueness, respectively and samples for which A=B=0 are

achromatic. Therefore, the L-axis represents the achromatic scale of grays from

black to white. The three coordinates L, A and B are computed from the

tristimulus values X, Y, and Z as follows [50]:

L = 116 (Y/Yn)^(1/3) − 16   if Y/Yn > 0.008856
    903.3 (Y/Yn)            if Y/Yn ≤ 0.008856   (2.29)


A = 500 ( f(X/Xn) − f(Y/Yn) ) (2.30)

B = 200 ( f(Y/Yn) − f(Z/Zn) ) (2.31)

where Xn, Yn, and Zn describe a specified white object color stimulus and the

function f is defined as[50]:

f(x) = x^(1/3)            if x > 0.008856
       7.787 x + 16/116   if x ≤ 0.008856   (2.32)

The coordinates A and B have a range of approximately [−100, 100] [33].

The chroma and hue angle are given by [49]:

C_AB = √(A² + B²) (2.33)

H_AB = tan⁻¹(B / A) (2.34)
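Equations (2.29)-(2.34) sketched in Python (illustrative, names ours); the D65 white point used as a default is an assumption for the example, since the text only requires a specified reference white:

```python
import math

def f(t):
    """Piecewise cube-root function of Eq. (2.32)."""
    return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

def xyz_to_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    """Eqs. (2.29)-(2.31); Xn, Yn, Zn default to a D65 white point
    (an illustrative assumption)."""
    yr = Y / Yn
    L = 116.0 * yr ** (1.0 / 3.0) - 16.0 if yr > 0.008856 else 903.3 * yr
    A = 500.0 * (f(X / Xn) - f(yr))
    B = 200.0 * (f(yr) - f(Z / Zn))
    return L, A, B

def chroma_hue(A, B):
    """Chroma and hue angle in degrees, Eqs. (2.33)-(2.34)."""
    return math.hypot(A, B), math.degrees(math.atan2(B, A))
```

As expected, the reference white itself maps to L = 100 with A = B = 0, i.e. the top of the achromatic gray axis.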

Chapter Three:

Color Image Analysis and Enhancement Based on

Changing in Lightness and Contrast


3.1 Introduction

This chapter covers image analysis using image quality assessment based on changes in lightness and contrast. Many methods are used to predict the quality of a color image: some depend on an optimal reference and are called full-reference image quality methods, while others do not depend on an optimal reference and are called no-reference image quality methods. In this study we focus on the second type, because under changing lightness and contrast no optimal reference image exists. This chapter also introduces the color image enhancement algorithms; these algorithms enhance the lightness and contrast of color images.

3.2 Image Quality Measurement

Measurement of image quality is very important for many image-processing algorithms, such as acquisition, compression, restoration,

enhancement and other applications. Image quality assessment is a very

important activity for many image applications. The image quality metrics can

be broadly classified into two categories, subjective and objective. A large

number of objective image quality metrics have been developed during the last

decade. Objective metrics can be divided in three categories: Full Reference,

Reduced Reference and No Reference. The best way to assess the quality of an

image is to ask observers to look at it as the HVS is the end-receiver in most

processing environments. However, this approach is tiresome and time-consuming;

moreover, it needs a normalized environment ensuring the best

conditions for the targeted application. Thus, we used objective measurement.

3.2.1 Subjective Quality Measurement

Essentially, image quality is always an outcome of human sensation. Human

observers make the final decisions about quality based on their own visual preferences, which,

naturally, are affected not only by the psychophysical aspects of the observer,

but also by the fidelity of the image and the observation situation. For evaluating


image quality, testing with human observers, i.e. subjective evaluation, is often

considered the most reliable way to estimate the quality of images[55,56].

The mean opinion score (MOS) is often regarded as the most reliable image

quality measure, but since it requires numerous human observations and a

specific test arrangement, MOS is also a slow and expensive method in real-world

situations [56].

3.2.2 Objective Quality Measurement

Objective image quality measures are numerical measures that play important roles in various

applications in image processing. A good

objective measure reflects the distortion of an image due to blurring, noise,

compression, sensor inadequacy, or any other source that can distort the image.

Objective image quality assessment models sometimes require

access to a reference image that is assumed to have perfect quality. Objective

assessment relies on computational models that can predict the image quality

observations of humans. An accurate objective image quality model predicts

the image quality sensation of an average human observer; in other words, strong

correlations to subjective observations are essential when defining a good

objective quality model [55]. Because image quality is strongly based on

subjective observations, traditional objective models such as the mean-squared-error (MSE) rarely work accurately in a quality context [57].

Depending on how much prior information is available and on what a perfect

candidate image should look like, objective image quality algorithms can be

classified as follows:

a. Full Reference image quality

Full reference image quality assessment mostly interprets image quality as

accuracy or similarity with a “reference” or “perfect” image in some perceptual

space. The image quality assessment algorithms attempt to achieve consistency

in quality prediction by modeling salient physiological and psychovisual

features of the HVS, or by signal fidelity measures. Full Reference image


quality methods approach the image quality assessment problem as an

information fidelity problem [55, 58]. The reference signal is typically processed

to yield distorted visual data, which can be compared to the reference using

full reference methods. Typically, this comparison involves measuring the

distance between the two signals in a perceptually meaningful way. This can be

achieved by studying, characterizing and deriving the perceptual impact of the

distorted signal on human viewers by means of subjective experiments. The full

reference metric is convenient for image coding scheme comparison [59].

b. Reduced Reference image quality

Reduced-reference measures fall between full-reference and no-reference

measures. Reduced-reference image quality measures aim to predict the

visual quality of distorted images with only partial information about the reference

images. Reduced-reference approaches are mainly introduced in video

applications [60].

c. No reference image quality

No reference image quality refers to the problem of predicting the visual

quality of an image without any reference to an original optimal-quality image. This

assessment is the most difficult problem in the field of image objective analysis

[56], since many unquantifiable factors play a role in human perceptions of

quality, such as aesthetics, cognitive relevance, learning, context etc [60]. No

reference image quality is useful in many still-image applications, such as assessing

the quality of high-resolution images and JPEG-compressed images [6]; moreover, this

objective method can measure image quality under varying lightness

and contrast.

3.3 Image Quality Assessment

Image quality assessment is an important process for determining the level of enhancement. In this section we present several algorithms to assess the quality of a color image based on lightness change, such as the Structural Similarity Index, the entropy of the first-derivative image, the mean of locally (µ, σ) model and colorfulness metrics; moreover, we suggest a new algorithm called quality factor assessment.

3.3.1 The Structural Similarity Index (SSIM)

Mean squared error (MSE) and Peak Signal to Noise Ratio (PSNR) are the

most common measures of full-reference image quality, but their formulas are based on

error differences, and large errors do not always result in large structural distortions.

This can be interpreted as follows [62]:

Consider the Minkowski error metric as an example. In the spatial

domain, the Minkowski metric between a reference image X (assumed to

have perfect quality) and a distorted image Y is defined as:

E_p = ( Σ_{i=1}^{N} |X_i − Y_i|^p )^(1/p) (3.1)

where X_i and Y_i are the i-th samples in images X and Y,

respectively, N is the number of image samples, and p refers to the degree

of power. Figure (3.1) shows two distorted images generated from the same

original image. The first distorted image is obtained by adding a constant

number to all signal samples, and the second is generated using the same

method except that the signs of the constant are randomly chosen to be positive

or negative. It can be easily shown that the Minkowski metrics between the

original image and both of the distorted images are exactly the same, no

matter what power 𝑝𝑝 is used. However, the visual quality of the two distorted

images is drastically different. To overcome this problem with measurements

that depend on error differences, Zhou Wang and Alan Bovik suggested the

Structural Similarity Index (SSIM) for full-reference image quality assessment [63].

The basic SSIM algorithm requires that the two images being compared are

properly aligned and scaled so they can be compared point by point. The

computations are performed in a sliding N×N (typically 11×11) Gaussian-weighted window.


The SSIM metric is based on the evaluation of three different measures; the

luminance, contrast, and structure comparison measures are computed as [20]:

l(x, y) = ( 2 µ_X(x, y) µ_Y(x, y) + C1 ) / ( µ_X(x, y)² + µ_Y(x, y)² + C1 ) (3.2)

c(x, y) = ( 2 σ_X(x, y) σ_Y(x, y) + C2 ) / ( σ_X(x, y)² + σ_Y(x, y)² + C2 ) (3.3)

s(x, y) = ( σ_XY(x, y) + C3 ) / ( σ_X(x, y) σ_Y(x, y) + C3 ) (3.4)

where X and Y correspond to two different images that we would like to

match, i.e. two different blocks in two separate images, and µ_X, σ_X², and σ_XY are the mean

of X, the variance of X, and the covariance of X and Y respectively, where [20]:

µ_X(x, y) = Σ_{p=−P}^{P} Σ_{q=−Q}^{Q} w(p, q) X(x + p, y + q) (3.5)

σ_X²(x, y) = Σ_{p=−P}^{P} Σ_{q=−Q}^{Q} w(p, q) [X(x + p, y + q) − µ_X(x, y)]² (3.6)

Figure 3.1: Failure of the Minkowski metric for image quality prediction. A: original

image; B: distorted image obtained by adding a positive constant; C: distorted image obtained by adding the

same constant, but with random sign. Images B and C have the same Minkowski metric

with respect to image A, but drastically different visual quality [9].


σ_XY(x, y) = Σ_{p=−P}^{P} Σ_{q=−Q}^{Q} w(p, q) [X(x + p, y + q) − µ_X(x, y)] [Y(x + p, y + q) − µ_Y(x, y)] (3.7)

where w(p, q) is a Gaussian weighting function such that:

Σ_{p=−P}^{P} Σ_{q=−Q}^{Q} w(p, q) = 1 (3.8)

C1, C2, and C3 are constants given by C1 = (K1 L)², C2 = (K2 L)², and C3 = C2 / 2,

where L is the dynamic range of the sample data (L = 255 for 8-bit content) and

K1 << 1 and K2 << 1 are two scalar constants; in this study we used K1 = 0.01

and K2 = 0.03 [20]. Given the above measures, the structural similarity can be

computed as [20]:

SSIM(x, y) = l(x, y) · c(x, y) · s(x, y) (3.9)
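A hedged single-window sketch of equations (3.2)-(3.9): for brevity it uses uniform rather than Gaussian weights over one block, so it is not the full sliding-window SSIM described above (names ours):

```python
def ssim_block(x, y, L=255.0, K1=0.01, K2=0.03):
    """SSIM between two equally sized blocks (flat lists of gray values),
    Eqs. (3.2)-(3.9), with uniform weights w = 1/N instead of a Gaussian."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                           # Eq. (3.5)
    vx = sum((a - mx) ** 2 for a in x) / n                    # Eq. (3.6)
    vy = sum((b - my) ** 2 for b in y) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n  # Eq. (3.7)
    C1, C2 = (K1 * L) ** 2, (K2 * L) ** 2
    C3 = C2 / 2.0
    l = (2 * mx * my + C1) / (mx ** 2 + my ** 2 + C1)         # Eq. (3.2)
    c = (2 * vx ** 0.5 * vy ** 0.5 + C2) / (vx + vy + C2)     # Eq. (3.3)
    s = (cxy + C3) / (vx ** 0.5 * vy ** 0.5 + C3)             # Eq. (3.4)
    return l * c * s                                          # Eq. (3.9)
```

Identical blocks score 1, while adding a constant shift lowers only the luminance term, which is how SSIM separates structural change from a simple intensity offset.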

3.3.2 The Entropy of the First Derivative (EFD) Image

This method depends on the first derivative of an image, which can be expressed by

the following formula:

I_d(x, y) = ∂²I(x, y) / ∂x∂y (3.10)

Figure (3.2) show the first derivative of high lightness and low lightness

images. The entropy of the first derivative is defined as follows [18]:

𝐻𝐻(𝜒𝜒) = ∑ 𝑃𝑃(𝑁𝑁𝑘𝑘) log2( 1𝑃𝑃(𝑁𝑁𝑘𝑘)

𝑛𝑛𝑘𝑘=1 ) (3.11)

Where χ is a discrete random variable with possible outcomes x_1, x_2, ..., x_n; P(x_k) is the probability of the outcome x_k. The outcome is understood as a gray level in the lightness image, and its probability is calculated by:

P(x_k) = n_k / N_t   (3.12)

Where k = 1, 2, ..., n; n is the total number of possible lightness levels in the image, N_t is the total number of pixels, and n_k is the number of pixels that have lightness level x_k. A higher entropy value denotes better contrast in the image.
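As an illustrative sketch (in Python rather than the Matlab used in this work), the EFD measure of eqs. (3.10)–(3.12) can be approximated with finite differences standing in for the continuous mixed derivative; the image is assumed to be a plain list of rows of integer gray levels:

```python
import math

def entropy(values, n_levels=256):
    """Shannon entropy (eq. 3.11) of a list of integer gray levels."""
    counts = [0] * n_levels
    for v in values:
        counts[v] += 1
    total = len(values)
    h = 0.0
    for nk in counts:
        if nk:
            p = nk / total              # eq. 3.12: P(x_k) = n_k / N_t
            h -= p * math.log2(p)
    return h

def efd(img):
    """Entropy of the mixed first-derivative image (eq. 3.10),
    approximated by a finite difference along x and then y."""
    rows, cols = len(img), len(img[0])
    deriv = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            # discrete d2I/(dx dy): difference of row-differences
            d = (img[i+1][j+1] - img[i+1][j]) - (img[i][j+1] - img[i][j])
            deriv.append(abs(d) % 256)
    return entropy(deriv)
```

A perfectly flat image has a zero derivative everywhere, hence zero entropy; richer texture spreads the derivative histogram and raises the EFD value.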


3.3.3 The Mean of Locally (µ, σ) Model

At the NASA Langley Research Center [12], heuristics were concluded for measuring image quality, based on the idea that good visual representations seem to rest on some combination of high regional visual lightness and contrast. To compute the regional parameters, we divide the image into non-overlapping blocks of 50×50 pixels. For each block, the mean (I) and the standard deviation (σ) are computed, and then their means, Ī and σ̄, are taken, as shown in figure (3.3). If the points tend toward the visually optimal region, the image has high lightness and contrast quality; if σ̄ increases without Ī, the image has insufficient lightness, whereas if Ī increases without σ̄, the image has insufficient contrast.

Figure 3.2: Images and their first derivatives at different lightness levels. (a) High lightness. (b) First derivative of (a). (c) Low lightness. (d) First derivative of (c).

Figure 3.3: Image quality description in the mean of locally mean and standard deviation of image model [12].
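The block statistics of this model are simple to state in code. The following Python sketch (an illustration, not the thesis's Matlab implementation) tiles the image into non-overlapping bs×bs blocks and returns the mean of the block means and the mean of the block standard deviations:

```python
def block_mean_std(img, bs=50):
    """Mean of local block means and standard deviations (section 3.3.3):
    the image is tiled into non-overlapping bs x bs blocks; each block's
    mean and std are computed and then averaged over all blocks."""
    rows, cols = len(img), len(img[0])
    means, stds = [], []
    for r0 in range(0, rows - bs + 1, bs):
        for c0 in range(0, cols - bs + 1, bs):
            block = [img[r][c] for r in range(r0, r0 + bs)
                               for c in range(c0, c0 + bs)]
            n = len(block)
            m = sum(block) / n
            v = sum((p - m) ** 2 for p in block) / n
            means.append(m)
            stds.append(v ** 0.5)
    return sum(means) / len(means), sum(stds) / len(stds)
```

Plotting each (Ī, σ̄) pair, as in figure (3.3), then shows whether an image tends toward the visually optimal region.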


3.3.4 Colorfulness Metrics (CM)

Colorfulness, also called chromaticness, is the attribute of a visual sensation according to which the perceived color of an area appears to be more or less chromatic [13]. One approach to evaluating the colorfulness of a digital image is to examine the distribution of pixels in CIELAB color space. The image colorfulness can then be summarized as a linear combination of different characteristic quantities of that distribution. Based on the testing of numerous colorfulness methods with diverse weighting coefficients, two metrics for images in CIELAB color space were introduced for objective colorfulness calculation; one of them is [13]:

M = σ_AB + 0.94 μ_C   (3.13)

And

σ_AB = √(σ_A² + σ_B²)   (3.14)

Where σ_AB is the trigonometric length of the standard deviations of the A and B components and μ_C is the mean of the chroma component.
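A direct Python sketch of eqs. (3.13)–(3.14) follows (illustrative only; the A, B and chroma components are assumed to be supplied as flat lists obtained from a CIELAB conversion done elsewhere):

```python
def colorfulness(A, B, C):
    """Colorfulness metric M = sigma_AB + 0.94 * mu_C (eqs. 3.13-3.14),
    given flat lists of the CIELAB A, B and chroma components."""
    n = len(A)
    ma = sum(A) / n
    mb = sum(B) / n
    sa = (sum((a - ma) ** 2 for a in A) / n) ** 0.5   # std of A
    sb = (sum((b - mb) ** 2 for b in B) / n) ** 0.5   # std of B
    sigma_ab = (sa ** 2 + sb ** 2) ** 0.5             # eq. 3.14
    mu_c = sum(C) / n                                 # mean chroma
    return sigma_ab + 0.94 * mu_c                     # eq. 3.13
```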

3.4 Contrast and Lightness Enhancement Algorithms

Many algorithms have been suggested to enhance color images affected by various types of distortion (such as noise, color shift, the inverse transform of some operation, and lightness change). In this section we introduce several enhancement methods, namely histogram equalization, the multiscale retinex algorithm, and the AINDANE algorithm; these algorithms are used to enhance lightness and contrast in the color image.

3.4.1 Histogram Equalization (HE)

Histogram equalization and its variations have traditionally been used to correct for uniform lighting and exposure problems. This technique is based on the idea of remapping the histogram of the scene to a histogram that has a near-uniform probability density function. This results in reassigning dark regions to brighter values and bright regions to darker values. Histogram equalization works well for scenes that have unimodal or weakly bimodal histograms (i.e. very dark, or very bright), but not so well for images with strongly bimodal histograms (i.e. scenes that contain both very dark and very bright regions) [9].

HE is a global technique that works well for a wide variety of images. If the lightness levels are continuous quantities normalized to the range (0, 1), let p_r(r) denote the probability density function (PDF) of the lightness levels in a given image, where the subscript differentiates between the PDFs of the input and output images. Suppose that we perform the following transformation on the input levels to obtain the output (processed) intensity levels [1]:

s = T(r) = ∫_0^r p_r(w) dw   (3.15)

Where w is a dummy variable of integration. The probability density function of the output levels is then uniform, that is [1]:

p_s(s) = 1 for 0 ≤ s ≤ 1, and 0 elsewhere   (3.16)

When dealing with discrete quantities we work with histograms and call the preceding technique histogram equalization, where [4]:

s_k = Σ_{j=0..k} p_r(r_j) = Σ_{j=0..k} n_j / N ,  k = 0, 1, 2, ..., L   (3.17)

L = 255 for a lightness band with 8 bits/pixel; s_k is the corresponding normalized intensity level of the output image, n_j is the number of pixels with intensity level j, and N is the total number of pixels. Eq. (3.17) represents the cumulative probability density function (CPDF). r_j is the normalized intensity level of the input image corresponding to the (un-normalized) intensity level j. This algorithm is summarized by the following steps:

1. Input color image C(x,y,i), i = 1, 2, 3 (red, green, blue components).
2. Normalize each component, r_j(i) = C(x,y,i)/255, and calculate the frequency of occurrence of each level, n_j(i), where j = 0, 1, ..., 255.
3. Compute the histogram from P(r_j(i)) = n_j(i)/N, where N is the size of the image.
4. Calculate the cumulative histogram by s_k(i) = Σ_{j=0..k} n_j(i)/N, where k = 0, 1, ..., 255.
5. Replace each normalized component r_j(i) by the value of s_k(i) to get the output image.
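The five steps above can be sketched in Python (an illustration of per-channel HE, not the thesis's Matlab code); pixels are 8-bit values and the image is stored as rows of (r, g, b) tuples:

```python
def histogram_equalize_channel(ch, levels=256):
    """Classic histogram equalization (eq. 3.17) on one flat channel:
    each level is remapped to its cumulative probability, rescaled to 0..255."""
    n = len(ch)
    hist = [0] * levels
    for v in ch:
        hist[v] += 1
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(acc / n)          # cumulative probability in [0, 1]
    return [round(cdf[v] * (levels - 1)) for v in ch]

def he_color(img):
    """Per-channel HE for an RGB image stored as [[(r, g, b), ...], ...]."""
    flat = [px for row in img for px in row]
    chans = list(zip(*flat))                                   # r, g, b planes
    eq = [histogram_equalize_channel(list(c)) for c in chans]
    out_flat = list(zip(*eq))                                  # back to pixels
    w = len(img[0])
    return [out_flat[i * w:(i + 1) * w] for i in range(len(img))]
```

Applying HE independently per channel, as here, is what produces the color shift that the AHE algorithm of chapter four is designed to reduce.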

3.4.2 Multiscale Retinex Algorithm

The multiscale retinex (MSR) is explained starting from the single-scale retinex (SSR), where [2,8]:

R_i(x,y,c) = log[I_i(x,y)] − log[F(x,y,c) ⊗ I_i(x,y)]   (3.18)

Where R_i(x,y,c) is the output of channel i (i ∈ R,G,B) at position (x,y), c is the Gaussian surround space constant, I_i(x,y) is the image value for channel i, and the symbol ⊗ denotes convolution. F(x,y,c) is the Gaussian surround function, calculated by [2]:

F(x,y,c) = k exp(−(x² + y²)/c²)   (3.19)

k is determined by [8]:

∫∫ F(x,y,c) dx dy = 1   (3.20)

The MSR output is then simply a weighted sum of several different SSR outputs [2,8]:

R_MSR_i(x,y,W,c) = Σ_{n=1..N} W_n R_i(x,y,c_n)   (3.21)

Where N is the number of scales, R_i(x,y,c_n) is the i'th component of the n'th scale, R_MSR_i(x,y,W,c) is the i'th spectral component of the MSR output, and W_n is the weight associated with the n'th scale; we insist that Σ W_n = 1. The result of the above processing will have both negative and positive RGB values,


and the histogram will typically have large tails. Thus a final gain-offset is applied, as mentioned in [8] and discussed in more detail below. This processing can cause image colors to go towards gray, and thus an additional processing step is proposed in [2]:

R'_i = R_MSR_i · I'_i(x,y,a)   (3.22)

Where I'_i is given by:

I'_i(x,y,a) = log[1 + a I_i(x,y) / Σ_{i=1..3} I_i(x,y)]   (3.23)

Where we have taken the liberty of using log(1+x) in place of log(x) to ensure a positive result. In [2] a value of a = 125 is suggested. The final step is a gain-offset of 0.35 and 0.56, respectively [11]. In this work we used (w_1 = w_2 = w_3 = 1/3) and (c_1 = 250, c_2 = 120, c_3 = 80) [2].

This algorithm is done by using the following steps:

1. Input color image I_i(x,y), i = r, g, b.
2. Calculate the Gaussian surround functions F(x,y,c_n) = k exp(−(x² + y²)/c_n²), where k is a normalization constant and c_n, n = 1, 2, 3, are the scales {c_1 = 250, c_2 = 120, c_3 = 80}.
3. Compute the SSR from R_i(x,y,c_n) = log[I_i(x,y)] − log[F(x,y,c_n) ⊗ I_i(x,y)].
4. Compute the MSR from R_MSR(x,y,W,c) = Σ_{n=1..N} W_n R_i(x,y,c_n), with N = 3 and {w_1 = w_2 = w_3 = 1/3}.
5. Calculate the MSR with color restoration by I'_i(x,y,a,b) = b log[1 + a I_i(x,y)/Σ_{i=1..3} I_i(x,y)], with b = 100, a = 125.
6. The output image is obtained from the gain-offset I_pi(x,y) = 0.35(I'_i(x,y,a,b) + 0.56).

3.4.3 AINDANE Algorithm

The Adaptive and Integrated Neighborhood-Dependent Approach for Nonlinear Enhancement of color images (AINDANE) [4] is an algorithm to improve the visual quality of digital images captured under extremely low or non-uniform lighting conditions. It is composed of three main parts: adaptive luminance enhancement, contrast enhancement, and color restoration. This algorithm can be described as follows [4]:

a. Adaptive luminance enhancement: consists of two steps. The first step is luminance estimation, obtained by extracting the luminance information using the National Television Standards Committee (NTSC) color space. The intensity values of an RGB image can be obtained using eq. (2.18), and the normalized intensity is:

I_n(x,y) = I(x,y)/255   (3.24)

The image information, according to human vision behavior, can be simplified and formulated as [1]:

I(x,y) = L(x,y) R(x,y)   (3.25)

Where R(x,y) is the reflectance and L(x,y) is the illumination at each position (x,y). The luminance L is assumed to be contained in the low-frequency component of the image, while the reflectance R mainly represents the high-frequency components. To estimate the illumination, the result of a Gaussian low-pass filter applied to the intensity image is used. In the spatial domain, this process is a 2D discrete convolution with a Gaussian kernel, which can be expressed as:

I_c(x,y) = L(x,y) = I(x,y) ⊗ F(x,y,c)   (3.26)

Where I_c is the convolved image.

The second step, called adaptive dynamic-range compression of the luminance, is applied using the transfer function T_n [4]:

T_n = (I_n^0.24 + (1 − I_n)·0.5 + I_n²) / 2   (3.27)

This transformation can largely increase the luminance of the dark pixels; figure (3.4) illustrates the relationship graphically.


b. Contrast enhancement: is done by center-surround contrast enhancement using [4]:

R(x,y) = 255 T_n(x,y)^E(x,y)   (3.28)

E(x,y) = [I_c(x,y) / I(x,y)]^p   (3.29)

Where the constant p can be manually adjusted by users to tune the contrast enhancement process; its value generally lies in the range (0.5 ≤ p ≤ 2) and depends on the global standard deviation of the input intensity image.

According to this method, if the center pixel's intensity is higher than the average intensity of the surrounding pixels, the corresponding pixel in the enhanced intensity image will be pulled up; otherwise it will be pulled down. In fact, this process is an intensity transformation in which the enhanced intensity pixels lie in the range [0, 1] before being raised to the power E.

c. Color restoration: a linear color restoration process based on the chromatic information of the original image is applied to convert the enhanced intensity image to an RGB color image. The (r', g', b') of the restored color image are obtained by [9]:

r' = r (R/I)^h ,  g' = g (R/I)^h ,  b' = b (R/I)^h   (3.30)

R(x,y) = Σ_i w_i R_i(x,y)   (3.31)

Where i = 1, 2, 3, ... represents the different scales (c_i), w_i is the weight factor for each contrast enhancement, h is a tone adjustment parameter, and (R_1, R_2, R_3) are calculated from equation (3.28). The scales used in this work are [4]: c_1 = 5, c_2 = 20, c_3 = 240, and w_1 = w_2 = w_3 = 1/3.

Figure 3.4: Relationship between input lightness and output lightness in the AINDANE algorithm.

Figure (3.5) shows the steps of the AINDANE algorithm, which are as follows:

1. Input color image C(x,y).
2. Transform the color image C(x,y) from RGB space to YIQ space and estimate the lightness component Y(x,y).

Figure 3.5: The diagram of the AINDANE algorithm: the input color image is converted to an intensity image; illumination is estimated and separated from reflectance; adaptive dynamic-range compression enhances the illumination; contrast enhancement and color restoration then produce the output color image.


3. Normalize the lightness component: I(x,y) = Y(x,y)/255.
4. Calculate the Gaussian surround functions F(x,y,c_n) = k exp(−(x² + y²)/c_n²), where k is a normalization constant and c_n, n = 1, 2, 3, are the scales {c_1 = 5, c_2 = 20, c_3 = 240}.
5. Compute the convolved image from I_c(x,y) = Σ_{m=0..M−1} Σ_{n=0..N−1} I(m,n) F(x + m, y + n, c).
6. Calculate T_n = (I_n^0.24 + (1 − I_n)·0.5 + I_n²)/2.
7. Compute the reflectance image from R(x,y) = 255 T_n(x,y)^E(x,y), with E(x,y) = [I_c(x,y)/I(x,y)]^p and p = 1.
8. Output the image result from the components r' = r R/I, g' = g R/I, b' = b R/I.
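The two core transforms of steps 6 and 7 can be sketched in Python (an illustration under the assumption that the blurred intensity I_c is computed separately by Gaussian convolution, as in the MSR section; images are lists of rows of normalized values):

```python
def aindane_luminance(In):
    """Adaptive dynamic-range compression (eq. 3.27):
    T_n = (I_n^0.24 + 0.5*(1 - I_n) + I_n^2) / 2, on normalized lightness."""
    return [[(v ** 0.24 + 0.5 * (1.0 - v) + v * v) / 2.0 for v in row]
            for row in In]

def aindane_contrast(In, Ic, Tn, p=1.0):
    """Center-surround contrast enhancement (eqs. 3.28-3.29):
    R = 255 * T_n ** E with E = (I_c / I) ** p; I_c is the blurred
    intensity supplied by the caller."""
    out = []
    for ri, rc, rt in zip(In, Ic, Tn):
        row = []
        for v, vc, vt in zip(ri, rc, rt):
            e = (vc / max(v, 1e-6)) ** p       # guard against division by 0
            row.append(255.0 * (vt ** e))
        out.append(row)
    return out
```

Note how eq. (3.27) lifts dark pixels strongly: a pure black input (I_n = 0) already maps to 0.25 before contrast enhancement, while white (I_n = 1) stays at 1.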

Chapter four: Lighting Systems and Suggested Algorithms of Image Quality Assessment and Enhancement


4.1 Introduction

This chapter focuses on two parts. The first is the experimental part concerning the lighting systems, the distribution of lightness, and the features of the images. These images are captured under the LEDs and MH lighting systems: the LEDs lighting system is used to light scenes in many cases from low to moderate lightness, whereas the MH lighting system is used to light scenes from moderate to high lightness levels. The second part includes image analysis and enhancement. The analysis of images has been done using a suggested method of image quality assessment based on changes in lightness and contrast; this method depends on the edge regions in the lightness component. We also suggest two methods to enhance the color image, the first based on the retinex algorithm and the other on histogram equalization. All algorithms and calculations have been carried out using Matlab version 8, except the fitting calculations, which were done with TableCurve version 5.

4.2 Lighting Systems and Image Features

All images in this study were taken by a digital Sony camera, model DCR-HC62E, with a remote control device used so that the image will not be disturbed. The captured images are in JPEG format with size 640×480 pixels, and they cover an area of about 52×40 cm² in the real world. The control of lightness is done by using the LEDs lighting system and the MH system, as follows:

4.2.1 LEDs lighting system

Figure (4.1) shows the LEDs lighting system, which consists of a circuit board with 98 LED chips arranged in circular form (four concentric circles with radii 5, 7, 9 and 12 cm). In the center there is a circular aperture with radius 3 cm that is used to fix the camera, so that the image plane is captured perpendicular to the center of the camera lens. Next to the board there is a power supply with switches (89 small switches, one per LED chip); inside this power supply lies a low-voltage electrical transformer. The illuminance is controlled by switching on the LED chips gradually (one by one or more at a time), and it is measured by using the luxmeter shown in figure (4.2).

4.2.2 MH lighting system

Figure (4.3.a) demonstrates the MH lighting system, which consists of a projector that contains the MH lamp. The center of the camera lens (CCL) is located beside the projector to capture the image (image plane IP) at a maximum angle of about 30° between the lens and the center of the projector (CP), at a distance of 40 cm between the center of the camera lens and the image plane; this is demonstrated in figure (4.3.b). The illuminance is controlled by moving the projector away from the scene, which also decreases the angle between CP and CCL.

Figure 4.1: LEDs lighting system in two views: (a) forward view, showing the white LEDs board with the central camera aperture; (b) backward view, showing the back of the LEDs board and the power supply with its 89 switches.

4.3 Adaptive Method for Objective Quality Based on Lightness Change

In this method the image quality is determined depending on the edge regions of the lightness component in YIQ color space, which is given by [8]:

Figure 4.2: Digital luxmeter, model LX1330B (China).

Figure 4.3: MH lighting system in (a) and its geometric view in (b): the image plane (IP) is at a distance of 40 cm from the center of the camera lens (CCL), with an angle of about 30° to the center of the projector (CP).


y = 0.299r + 0.587g + 0.114b   (4.1)

And the edge image is obtained by using Sobel edge detection with the two kernels given by [1]:

G_x = [ −1 0 +1 ; −2 0 +2 ; −1 0 +1 ]   (4.2)

G_y = [ +1 +2 +1 ; 0 0 0 ; −1 −2 −1 ]   (4.3)

So the edge image can be determined using:

I_x = y ⊗ G_x   (4.4)

I_y = y ⊗ G_y   (4.5)

IE = max(I_x, I_y)   (4.6)

Where G_x and G_y are the horizontal and vertical Sobel kernels and IE is the edge image. The edges in the image are determined depending on a threshold value t from (0 to 255), or (0 to 1) if the image is normalized. Let n_b be the number of black pixels and n_w the number of white pixels; the contrast factor is given by:

CF(t) = min(n_w, n_b) / max(n_w, n_b)  if n_w ≠ n_b ;  CF(t) = 1  if n_w = n_b   (4.7)

This means that the maximum value of the contrast factor is CF = 1 (high contrast), and the minimum value is near zero (low contrast). The contrast factor is a function of the threshold value t; as the threshold varies from 0 to 255, the relationship between CF and t follows the logarithmic normal distribution, which is used when the data have a lower limit and has the general form:

CF(t) = a + b·exp(−0.5 (ln(t/c)/d)²)   (4.8)

Where a, b, c and d are constants that depend on the (local and global) distribution of contrast and lightness in the image. To determine CF we need a large number of images captured at different lightness levels and, moreover, having different standard deviations. Figure (4.4) shows the different thresholds and CF values for a color image, where the dependence of CF on the edge-detection threshold is apparent.

The quality factor (QF) is determined by the area under the curve, given by integrating the function CF(t) over all threshold values t:

QF = ∫_0^255 CF(t) dt   (4.9)

This is obtained using the trapezoidal numerical integration rule. Figure (4.5) shows different QF values and their curves. We can see that QF increases due to the increase of the area under the curve for images having different lightness levels.
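The CF(t) and QF measures of eqs. (4.7) and (4.9) reduce to a few lines of code. The sketch below (Python, illustrative of the method rather than the thesis's Matlab implementation) takes a precomputed Sobel edge-magnitude map and integrates CF over the threshold range with the trapezoidal rule:

```python
def contrast_factor(edge_mag, t):
    """CF(t) of eq. (4.7): ratio of the smaller to the larger of the
    white (edge) and black (non-edge) pixel counts at threshold t."""
    nw = sum(1 for row in edge_mag for v in row if v >= t)
    nb = sum(1 for row in edge_mag for v in row if v < t)
    if nw == nb:
        return 1.0
    return min(nw, nb) / max(nw, nb)

def quality_factor(edge_mag, t_max=255):
    """QF of eq. (4.9): area under CF(t) for t = 0..t_max,
    by the trapezoidal rule."""
    cf = [contrast_factor(edge_mag, t) for t in range(t_max + 1)]
    return sum((cf[i] + cf[i + 1]) / 2.0 for i in range(t_max))
```

A map with balanced edge and non-edge populations over a wide threshold range keeps CF near 1 for many values of t, which inflates the area and hence the QF.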

Figure 4.4: Original image and its edge detection at different thresholds, with the corresponding contrast factor (CF) values (t = 12, CF = 0.994; t = 50, CF = 0.200; t = 100, CF = 0.065; t = 250, CF = 0.009).

Figure 4.5: Different curves of the QF assessment.


We can summarize the adaptive method of image quality assessment depending on QF by the general steps in figure (4.6).

4.4 Contrast and Lightness Enhancement Algorithms

In this study, we suggest two algorithms to enhance color images based on lightness and contrast changes: the adaptive histogram equalization (AHE) and the modified retinex (MR).

4.4.1 Adaptive histogram equalization (AHE) algorithm

The first step in this algorithm is transforming the color image from the basic RGB color space to the YIQ color space; the forward transform is given by equation (2.18). The second step is transforming the normalized lightness values using the sigmoid function given by:

S_n = 1 / (1 + e^((1 − I_n)/I_n))   (4.10)

Figure 4.6: Steps of the QF assessment: input color image → estimate the lightness component → edge detection at different threshold values → measure the contrast factor for each edge image → determine the quality factor.


Figure (4.7) shows the relationship between the input lightness I_n and the output lightness S_n. The third step is applying HE to the modified lightness component; the processed lightness component Y_p is obtained from this step. Finally, the inverse transformation from YIQ to RGB color space is calculated on Y_p IQ, and is given by [51]:

r = y + 0.956 i + 0.621 q
g = y − 0.272 i − 0.647 q   (4.11)
b = y − 1.106 i + 1.703 q

This algorithm reduces color shift compared with traditional HE, because only the lightness component is processed, while at the same time the lightness component is enhanced through the sigmoid transform.

This can be achieved by the following steps:

1. Input color image C(x,y,i), i = 1, 2, 3 (red, green, blue components).
2. Transform the color image from RGB color space to YIQ and estimate the Y component.
3. Normalize the Y component by I_n = Y/255.
4. Transform the lightness by using S_n = 1/(1 + e^((1 − I_n)/I_n)).
5. Apply HE to the transformed lightness component S_n to get the processed lightness component Y_p.
6. Transform the color image from Y_p IQ to RGB to get the output image.

Figure 4.7: Relationship between input lightness and output lightness in AHE.
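The color-space plumbing of these steps can be sketched in Python (illustrative only; the forward and inverse coefficients are the standard NTSC YIQ pairs, and the sigmoid is written in the form reconstructed in eq. (4.10), which should be treated as an assumption):

```python
import math

def rgb_to_yiq(r, g, b):
    # standard NTSC forward transform (eq. 2.18)
    return (0.299 * r + 0.587 * g + 0.114 * b,
            0.596 * r - 0.274 * g - 0.322 * b,
            0.211 * r - 0.523 * g + 0.312 * b)

def yiq_to_rgb(y, i, q):
    # inverse transform of eq. (4.11)
    return (y + 0.956 * i + 0.621 * q,
            y - 0.272 * i - 0.647 * q,
            y - 1.106 * i + 1.703 * q)

def sigmoid_remap(In):
    """Sigmoid lightness transform, eq. (4.10) as reconstructed here:
    S_n = 1 / (1 + exp((1 - I_n) / I_n))."""
    out = []
    for v in In:
        if v < 0.01:
            out.append(0.0)          # limit as I_n -> 0 (avoids overflow)
        else:
            out.append(1.0 / (1.0 + math.exp((1.0 - v) / v)))
    return out
```

Because only Y is remapped and equalized while I and Q pass through untouched, the hue of each pixel is largely preserved, which is the source of the reduced color shift claimed above.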

4.4.2 Modified retinex (MR) algorithm

In MSRCR the color value of a pixel is computed by taking the ratio of the pixel to the weighted average of the neighboring pixels. One disadvantage of this technique is that abnormal color shifts can occur because the three color channels are treated independently. An inherent problem in most retinex implementations is the strong 'halo' effect in regions of strong contrast; the 'halo' effects are shown in figure (4.8). The 'halo' effects and color shifts are reduced in the MR algorithm by processing the Y component. Its steps are:

a. Lightness enhancement

Transform the color image from the basic RGB color space to the YIQ color space using equation (2.18), and then transform the normalized lightness values using the sigmoid function given in eq. (4.10).

Figure 4.8: Halo effect caused during retinex enhancement, observed around the edge of the body and the background.

b. Chromatic enhancement


The chromatic channels i, q are enhanced by applying MSRCR to the original RGB components and then using the forward transformation; this method of processing decreases the color shift, but it needs redefined gain-offset values, for which this work used 0.1 and 0.5, respectively.

c. Combining enhanced channels

The final step is combining the lightness-enhanced component Y_p and the chromatically enhanced channels I_p, Q_p by using the inverse transform (equation 4.11) to get the enhanced components r_p, g_p, b_p. Figure (4.9) summarizes the steps of the proposed algorithm.

We do this by the following steps:

1. Input color image C(x,y,i), i = 1, 2, 3 (red, green, blue components).

Figure 4.9: Flowchart of the modified retinex (MR) algorithm: the input RGB image is transformed to YIQ and the Y component is extracted and tone-mapped; in parallel, MSRCR is applied to the RGB image and its result is transformed to YIQ to extract the enhanced I and Q components; the combined channels are then transformed back to RGB to give the output enhanced image.


2. Transform the color image from RGB color space to YIQ and estimate the Y component.
3. Normalize the Y component by I_n = Y/255.
4. Transform the lightness component by using S_n = 1/(1 + e^((1 − I_n)/I_n)), getting the processed lightness component Y_p.
5. Apply MSRCR to the original image of step 1, getting the components R_r, G_r, B_r.
6. Transform the color image from R_r G_r B_r to YIQ, getting Y_r, I_r, Q_r.
7. Combine the component Y_p with I_r, Q_r, then apply the inverse transformation to the basic RGB color space to get the output image.
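The final combination of step 7 can be sketched in Python (illustrative; Y_p is assumed to come from the sigmoid/HE stage and the I, Q inputs from the MSRCR stage, all flattened to equal-length lists):

```python
def mr_combine(yp, i_chan, q_chan):
    """Final step of the modified-retinex (MR) algorithm: combine the
    enhanced lightness Y_p with the chromatically enhanced I, Q channels
    and invert to RGB via eq. (4.11), clipping to the displayable range."""
    out = []
    for y, i, q in zip(yp, i_chan, q_chan):
        r = y + 0.956 * i + 0.621 * q
        g = y - 0.272 * i - 0.647 * q
        b = y - 1.106 * i + 1.703 * q
        out.append(tuple(min(255.0, max(0.0, v)) for v in (r, g, b)))
    return out
```

A pixel with zero chroma (I = Q = 0) comes back as a neutral gray at the enhanced lightness, which is exactly the behavior that suppresses the color shifts of plain MSRCR.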

Chapter five:

The Results and Discussion


5.1 Introduction

This chapter includes the most important results obtained. Section two covers the distribution of the illuminance for the LEDs and MH systems; many images of the gray test image were captured to study the distribution of illuminance for these systems. Section three deals with color image analysis using the quality assessments at different lightness and contrast levels. These images were captured under the LEDs lighting system in many cases from low to moderate lightness, and also under the MH lighting system in many cases from moderate to high lightness. Section four contains color image enhancement at different lightness levels: twelve images captured under the LEDs and MH systems are considered. All these images have been enhanced using HE, AINDANE, MSRCR, MR and AHE. In order to determine the best method of enhancement, quality assessments are taken into account using CF, EFD, QF, the mean of locally (µ, σ) model, and the histogram distribution.

5.2 Determining the Distribution of Illuminance

The illuminance distribution has been obtained by capturing different images of the gray test image with varied illuminance in the two lighting systems, as follows:

5.2.1 Illuminance distribution in the LEDs lighting system

To recognize the illuminance distribution for one chip of white LED, the distance between the gray test image and the camera lens is changed step by step (from 5 cm to 50 cm), which leads to a decrease in illuminance. Figure (5.1) shows the relationship between the distance and the maximum illuminance; we can note that the illuminance decreases as the distance increases, according to the inverse square law of intensity. To study the distribution of the illuminance for one white LED chip, we light the gray test image by this LED while increasing the distance between the lighting source and the gray image, which decreases the illuminance.
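The inverse-square behavior referred to here can be expressed compactly (an illustrative Python sketch with hypothetical reference values, not the measured data of this chapter):

```python
def illuminance_at(e_ref, d_ref, d):
    """Inverse-square law: illuminance at distance d, given a reference
    illuminance e_ref measured at distance d_ref from a point source."""
    return e_ref * (d_ref / d) ** 2

# doubling the distance quarters the illuminance
```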

When we capture these images and then apply the CIELAB color transform, we can obtain the illuminance components of these images, shown in figure (5.2), which presents the 2D and 3D representations of the illuminance for the gray test image lighted by one white LED chip. The distribution of illuminance is Gaussian, and its peak decreases directly with decreasing illuminance. Figure (5.4) shows the 3D representation of the Gaussian fit of the illuminance for the gray test image lighted by the LEDs system at maximum illuminance (42.5 lux), together with the original surface; the fitted illuminance function is z(x,y) = a + b·exp(−0.5[((x − c)/d)² + ((y − e)/f)²]), where (a = 29.8, b = 78.8, c = 27.8, d = 12.8, e = 31.7, f = 12.8). In this work, 28 images were captured for each of three groups (a, b and c) under illuminance varied from low to moderate. Table (5.1) illustrates the number of LEDs and the corresponding illuminance values for the 28 levels. The images of these groups appear in figures (5.5, 5.6 and 5.7), and figure (5.3) shows the corresponding distributions of illuminance; the distribution remains Gaussian, and both the diffusion of the light and the peak increase directly with the increase of illuminance.

Figure 5.1: The relationship between the distance and the maximum illuminance for one white LED chip.


Figure 5.2: 2D and 3D representation of the illuminance (illuminance values obtained from the CIELAB color space for the gray image), with the gray image lighted by one white LED chip at different maximum illuminance levels (704, 314, 119, 66, 44, 31, 23, 15.3, 11.2 and 8.4 lux). The gradient in the illuminance is produced by increasing the distance between the LEDs board and the gray test image.


Figure 5.3: 2D and 3D representation of the illuminance (illuminance values obtained from the CIELAB color space for the gray test image), with the gray test image lighted by multiple white LEDs. The gradient in the illuminance is produced by switching on the LED chips gradually (1, 2, 3, ..., 10, 15, 20, ..., 90 and 98 LEDs); the maximum illuminance of the panels ranges from 3.9 lux up to 244 lux.

Figure 5.4: (a) 3D representation of the illuminance for the gray test image lighted by the LEDs system at maximum illuminance 42.5 lux; (b) 3D Gaussian fitting of (a).


Figure 5.5: The first group of images (a1–a28) lighted by white LEDs at different illuminance levels.


Figure 5.6: The second group of images (b1–b28) lighted by white LEDs at different illuminance levels.


Figure 5.7: The third group of images (c1–c28) lighted by white LEDs at different illuminance levels.


5.2.2 Illuminance distribution in the MH lighting system

To determine the illuminance distribution for the MH lighting system, the distance between the gray image and the MH lamp (projector) is changed step by step (from 40 cm to 490 cm in steps of 20 cm), which leads to a decrease in illuminance. Figure (5.8) shows the relationship between the distance and the maximum illuminance; the illuminance decreases with increasing distance according to the inverse square law. As in the last section, the gray test image has been captured at illuminance levels from high (12480 lux) to moderate (220 lux); figure (5.9) illustrates the 2D and 3D representations of the illuminance and the gray test images for these levels of illuminance.

Table 5.1: The number of switched-on LEDs and the corresponding illuminance values for the 28 levels, graded from the minimum to the moderate level.

No. of LEDs:             1     2     3     4     5     6     7     8     9    10    15    20    25    30
Max. illuminance (lux):  3.9   7.5   11.2  13.7  15.2  18.2  20.7  24.7  29.4  31.3  42.5  55.1  57.5  79.4

No. of LEDs:             35    40    45    50    55    60    65    70    75    80    85    90    95    98
Max. illuminance (lux):  93.9  103.2 112.2 124.6 133.5 142.8 154.8 164.1 172.2 182.7 196.5 202   214   224

Figure 5.8: The relationship between the distance and the maximum illuminance for the MH lamp.


(panels at 220.4, 400.1, 290.5, 584.8, 312.5, 657.3, 240.7, 450.6, 357.6, 788.3, 236.8 and 511.2 lux)

Figure 5.9: 2D and 3D representations of the illuminance (illuminance values obtained from the CIELAB color space for the gray test image) for the gray image lighted by the MH lamp. The gradient in the illuminance occurs by increasing the distance between the projector and the gray scene.


Figure 5.9: continued. (panels at 3870.4, 1117.3, 931.7, 2843.0, 1362.9, 5495.3, 2164.1, 12480.1, 8159.6 and 1696.6 lux)


The distribution of illuminance is approximately a plane over the projection area (52×40 cm²) in the real world. Figure (5.10) shows the 3D illuminance for the gray test image lighted by the MH lighting system at maximum illuminance (220.4 lux), together with the plane fit of this distribution, where (a=81.25, b=0.01, c=0.01). This distribution is a special case of a Gaussian distribution over a large area. With this system, three groups of images were captured, each containing 22 images, with illuminance varying from moderate to high, as shown in figures (5.11, 5.12 and 5.13).
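The plane fit of figure (5.10) can be reproduced with an ordinary least-squares fit of z = a + b·x + c·y. The thesis does not state its fitting procedure in this excerpt, so this is a generic sketch; the sample grid below is synthetic, built from the reported coefficients purely for illustration.

```python
import numpy as np

def fit_plane(x, y, z):
    """Least-squares fit of z = a + b*x + c*y, returning (a, b, c).
    A generic sketch of the plane fit used for figure (5.10)."""
    A = np.column_stack([np.ones_like(x), x, y])   # design matrix [1, x, y]
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# illustrative near-planar illuminance samples over a 52 x 40 cm area,
# generated from the reported coefficients (a=81.25, b=0.01, c=0.01)
xx, yy = np.meshgrid(np.linspace(0, 52, 27), np.linspace(0, 40, 21))
zz = 81.25 + 0.01 * xx + 0.01 * yy
a, b, c = fit_plane(xx.ravel(), yy.ravel(), zz.ravel())
```

On exact planar data the fit recovers the generating coefficients; on the measured illuminance it gives the best planar approximation in the least-squares sense.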

Figure 5.10: (a) 3D representation of the illuminance for the gray test image lighted by the MH lighting system at maximum illuminance = 220.4 lux, and (b) the 3D plane fit of (a).


Figure 5.11: The first group of the images lighted by the MH lamp with different illuminance levels.

(panels d1–d12)


Figure 5.11: continued. (panels d13–d22)


Figure 5.12: The second group of the images lighted by MH lighting lamp with different illuminance

levels.

(panels e1–e12)


Figure 5.12: continued. (panels e13–e22)


Figure 5.13: The third group of the images lighted by MH lighting lamp with different illuminance

levels.

(panels f1–f12)


Figure 5.13: continued. (panels f13–f22)


5.3 Results of the Image Quality Assessments

Image quality assessment based on lighting change has been carried out at two ranges of lightness level. The first grades from low to moderate lightness using the LEDs lighting system, where the different images (groups a, b and c) are captured at these illuminance levels; the second grades from moderate to high lightness using the MH lighting system (groups d, e and f). Image quality has been determined using the EFD, the mean of locally (µ, σ) model, the CM, the QF and the SSIM (for the LED lighting system only), as follows:

5.3.1 The entropy of the first derivative image (EFD)

Figures (5.14) and (5.15) show the relationship between maximum illuminance and EFD for the three groups of images captured by the LED and MH lighting systems respectively. Generally, we can note:

1. This relation is not linear.

2. In the LED lighting system (from low to moderate lightness levels), the fluctuation increases at low illuminance levels (smaller than 50 lux); above this value of illuminance, the rate of increase becomes small.

3. For the MH lighting system (from moderate to high lightness levels), the EFD increases with increasing illuminance until it reaches a maximum value (threshold between 1000 and 1500 lux), and then decreases with further increase of illuminance.
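The EFD metric can be sketched as follows. The excerpt does not state which derivative operator is used, so the gradient magnitude is an assumption here; the entropy is the standard Shannon entropy of the normalized histogram of the derivative image.

```python
import numpy as np

def efd(gray, bins=256):
    """Entropy of the first-derivative image (EFD) -- a sketch.

    The first derivative is approximated by the gradient magnitude
    (operator choice is an assumption; the excerpt does not specify it),
    and the Shannon entropy of its normalized histogram is returned."""
    gray = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(gray)             # first derivatives (rows, cols)
    mag = np.hypot(gx, gy)                 # gradient magnitude
    hist, _ = np.histogram(mag, bins=bins)
    p = hist / hist.sum()                  # bin probabilities
    p = p[p > 0]                           # drop empty bins (0*log 0 -> 0)
    return float(-np.sum(p * np.log2(p)))  # Shannon entropy in bits
```

A perfectly flat image has zero EFD (all derivative mass in one bin), while a detailed, well-exposed image has a spread derivative histogram and hence a higher EFD.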

5.3.2 The Mean of Locally (µ, σ) Model

Figure (5.16) illustrates the 2D representation of the mean of locally (µ, σ) model for the three groups of images captured by the LEDs lighting system, and figure (5.17) shows the 3D representation of this model in terms of maximum illuminance. In figure (5.16) the relationship is semi-linear: the mean of the image increases with increasing mean of the local standard deviation. The values beside the dotted points are the maximum illuminance (the third dimension in the 3D representation); increasing the illuminance causes the points to tend toward the optimal region (according to figure 3.3, page 35).
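The block-wise statistics behind this model can be sketched as below. The block size and the exact averaging are assumptions (the excerpt does not state them); the idea is that the average local mean tracks lightness while the average local standard deviation tracks contrast.

```python
import numpy as np

def mean_local_mu_sigma(gray, block=8):
    """Sketch of the 'mean of locally (mu, sigma)' model: split the image
    into non-overlapping blocks, compute each block's mean and standard
    deviation, and return the averages of both over all blocks.
    The block size of 8 is an assumption for illustration."""
    g = np.asarray(gray, dtype=float)
    h, w = g.shape
    # crop so the image tiles exactly into block x block windows
    g = g[: h - h % block, : w - w % block]
    tiles = g.reshape(g.shape[0] // block, block, g.shape[1] // block, block)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, block * block)
    mu = tiles.mean(axis=1)       # local means (lightness per block)
    sigma = tiles.std(axis=1)     # local standard deviations (contrast)
    return float(mu.mean()), float(sigma.mean())
```

Plotting the returned pair for every image of a group, with illuminance as the third dimension, reproduces the kind of scatter shown in figures (5.16) and (5.17).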


Figure 5.14: Relationship between maximum illuminance and entropy of the first derivative for all

groups (a,b and c) images that are captured with the LEDs lighting system.

Group (a)

Group (b)

Group (c)


Group (d)

Group (e)

Group (f)

Figure 5.15: Relationship between maximum illuminance and entropy of the first derivative for all

groups (d,e and f ) images that are captured with MH lighting system.


Group (a)

Group (b)

Group (c)

Figure 5.16: The 2D representation of the Mean of locally (µ ,σ ) model for all groups (a,b and c ) images

that are captured with the LEDs lighting system, the values beside the dotted points are the maximum

illuminance .


Group (a)

Group (b)

Group (c)

Figure 5.17: The 3D representation of the Mean of locally (µ ,σ ) model in the terms of maximum

illuminance for all groups (a,b and c ) images that are captured with the LEDs lighting system.


Figures (5.18) and (5.19) show the 2D and 3D representations of the mean of locally (µ, σ) model for the three groups of images captured with the MH lighting system. In figure (5.18) the relationship between the mean of the local standard deviation and the mean of the image is not linear, and in both the 2D and 3D representations the illuminance levels from 1000 lux to 1500 lux tend toward the optimal region.

5.3.3 Colorfulness Metric (CM)

Figures (5.20) and (5.21) show the relationship between maximum illuminance and CM for the three groups of images captured under the LED and MH lighting systems respectively. Generally this relation is not linear and fluctuates at low illuminance levels (in the LEDs lighting system). The CM increases with increasing illuminance at moderate lightness levels (in both the LEDs and MH lighting systems), while at high lightness levels in the MH lighting system there is a threshold illuminance, lying between 1500 and 2000 lux, at which the CM becomes maximum.
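The excerpt does not reproduce the CM formula, but the thesis bibliography cites Hasler and Süsstrunk's colourfulness measure; the sketch below assumes that formulation (opponent channels rg and yb, combined spread plus mean chroma).

```python
import numpy as np

def colorfulness(rgb):
    """Colorfulness metric (CM) sketch, assuming the Hasler-Suesstrunk
    formulation cited in the thesis bibliography; the excerpt itself does
    not give the formula, so treat this as an approximation."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                     # red-green opponent channel
    yb = 0.5 * (r + g) - b         # yellow-blue opponent channel
    sigma = np.hypot(rg.std(), yb.std())     # spread of the chroma cloud
    mu = np.hypot(rg.mean(), yb.mean())      # distance from the neutral axis
    return float(sigma + 0.3 * mu)
```

A pure gray image scores 0, and the score grows as colors become more saturated and more varied, matching the qualitative behavior described above.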

5.3.4 Quality Factor (QF) assessment

Figures (5.22) and (5.23) show the relationship between maximum illuminance and QF for the three groups of images captured under the LEDs and MH lighting systems respectively. Generally, in figure (5.22) the QF increases with increasing illuminance; this relation is not linear and shows low fluctuation. In figure (5.23), as with the EFD and CM, there is a threshold illuminance, lying between 1500 and 2000 lux, at which the QF becomes maximum.

5.3.5 Structure Similarity Index (SSIM)

Figure (5.24) shows the relationship between maximum illuminance and SSIM for the three groups of images captured by the LEDs lighting system. Generally, the SSIM increases with increasing illuminance; this relation is not linear and resembles that of the QF.
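The SSIM comparison can be sketched with a single-window (global) version of the index. The standard metric averages this quantity over local windows; the constants are the usual C1 and C2 with an assumed 8-bit data range.

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window (global) SSIM between two grayscale images.
    A simplified sketch: the full metric averages this over local
    windows, which is omitted here for brevity."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    c1 = (0.01 * data_range) ** 2    # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx**2 + my**2 + c1) * (vx + vy + c2)))
```

Identical images score 1; the score drops as luminance, contrast, or structure diverge, which is why SSIM against a well-lit reference rises with illuminance in figure (5.24).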


Group (d)

Group (e)

Group (f)

Figure 5.18: The 2D representation of The Mean of locally (µ ,σ ) model for all groups (d,e and f ) images

that are captured with the MH lighting system, the values beside the dotted points are the maximum

illuminance .


Group (d)

Group (e)

Group (f)

Figure 5.19: The 3D representation of the mean of locally (µ, σ) model in terms of maximum illuminance for all group (d, e and f) images that are captured with the MH lighting system.


Group (a)

Group (b)

Group (c)

Figure 5.20: Relationship between maximum illuminance and color fullness index for all groups (a,b

and c ) images that are captured with the LEDs lighting system.


Group (d)

Group (e)

Group (f)

Figure 5.21: Relationship between maximum illuminance and color fullness index for all groups (d,e

and f ) images that are captured with the MH lighting system.


Group (a)

Group (b)

Group (c)

Figure 5.22: Relationship between maximum illuminance and Quality factor for all groups (a,b and c)

images that are captured with the LED lighting system.


Group (d)

Group (e)

Group (f)

Figure 5.23: Relationship between maximum illuminance and Quality factor for all groups (d,e and f)

images that are captured with the MH lighting system.


Group (a)

Group (b)

Group (c)

Figure 5.24: Relationship between maximum illuminance and structure similarity index (SSIM) for all group (a, b and c) images that are captured with the LED lighting system.


5.4 The Results of Image Enhancement

In this work, five algorithms (HE, MSRCR, AINDANE, AHE and MR) have been used to enhance color images with different lightness and contrast levels; these images are captured under different illuminance levels using the LEDs and MH lighting systems. In this section, twelve images (six with low and moderate lightness levels and six with moderate and high lightness levels) have been enhanced, as follows:

5.4.1 Color image enhancement at low and moderate lightness levels

These images are captured under the LEDs lighting system; six images with low and moderate lightness levels have been used for enhancement. Figure (5.25) illustrates these images and their histograms, together with the illuminance values at which they were captured. In this figure, most occurrences in the histograms of the original images with low lightness belong to low intensity values, whereas in the images with moderate lightness levels the distributions cover most intensity levels. Figure (5.26) shows the images and histograms after enhancement by the AINDANE algorithm; the histograms of these images are reduced at the middle intensity values. The images enhanced by the HE algorithm and their histograms are illustrated in figure (5.27); we can note that this distribution is semi-linear with low fluctuation. If we compare this distribution with that of the images enhanced by AHE in figure (5.28), there is a similarity between them, except at the high intensity levels of the images enhanced by AHE, where the distribution increases and becomes more fluctuated.
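The HE algorithm discussed here is classical histogram equalization. A minimal single-channel sketch follows; the thesis applies it to color images, and since the excerpt does not state the channel handling, per-channel application is the assumption.

```python
import numpy as np

def hist_equalize(gray):
    """Histogram equalization (HE) sketch for one 8-bit channel: map each
    level through the normalized cumulative histogram so the output
    intensities spread over the full [0, 255] range."""
    g = np.asarray(gray, dtype=np.uint8)
    hist = np.bincount(g.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first non-zero CDF value
    # classic HE mapping: (cdf - cdf_min) / (N - cdf_min), scaled to 255
    lut = np.round((cdf - cdf_min) / (g.size - cdf_min) * 255.0)
    return lut.clip(0, 255).astype(np.uint8)[g]
```

AHE differs in that the mapping is computed per local region rather than once globally, which boosts local contrast but also amplifies the high-intensity fluctuation noted above.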

Figure (5.29) illustrates the images enhanced by the MSRCR algorithm and their histograms; the distributions of these histograms increase at the low and moderate intensity levels. The distributions of the images enhanced by the MR algorithm in figure (5.30) increase at the moderate


(1) 20.7 lux (2) 224 lux (3) 31.3 lux

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) 20.7 lux (5) 224 lux (6) 31.3 lux

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.25: First and third rows: the images with low and moderate lightness levels used for enhancement; their histograms in the second and fourth rows. These images are captured under the LEDs lighting system.


Figure 5.26: The first and third rows, images with low and moderate lightness that are

enhanced by AINDANE algorithm and their histogram in the second and fourth rows.

(1) (2) (3)

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) (5) (6)

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)


(1) (2) (3)

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) (5) (6)

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.27: The first and third rows, images with low and moderate lightness that are

enhanced by HE algorithm and their histogram in the second and fourth rows.


(1) (2) (3)

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) (5) (6)

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.28: The first and third rows, images with low and moderate lightness that are

enhanced by AHE algorithm and their histogram in the second and fourth rows.


(1) (2) (3)

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) (5) (6)

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.29: The first and third rows, images with low and moderate lightness that are enhanced by

MSRCR algorithm and their histogram in the second and fourth rows.


(1) (2) (3)

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) (5) (6)

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.30: The first and third rows, images with low and moderate lightness that are

enhanced by MR algorithm and their histogram in the second and fourth rows.


and high lightness levels, which means the lightness enhancement is the best. In the objective assessments of the images enhanced by the different algorithms, we can note that all methods have succeeded in enhancement, but in the AINDANE algorithm the color shift is high, and in the HE algorithm the images with low lightness tend to gray. The mean of locally (µ, σ) model is illustrated in figure (5.31) for the images with low and moderate lightness enhanced by the different algorithms. Most points in the optimal region (increased contrast and lighting) belong to the MR algorithm, followed by MSRCR, AHE and HE, whereas the AINDANE algorithm enhances the lightness without the contrast. This behavior is reflected in table (5.2), which gives the values of EFD, CM and QF for the original and enhanced images. From these values we can see that the best algorithm for enhancing color images with low and moderate lightness levels is MR, due to its higher values of EFD, CM and QF, followed by MSRCR. Comparing HE, AHE and AINDANE, the performance of the AHE algorithm is the best.

5.4.2 Color image enhancement at moderate and high lightness levels

These images are captured by the MH lighting system; six images with moderate and high lightness levels are used for enhancement. Figure (5.32) shows these images, their histograms and the illuminance values at which they were captured. In this figure, the most frequent values in the histograms of the original images with high lightness belong to high intensity values, whereas in the images with moderate lightness levels the distributions cover most intensity levels. Figure (5.33) shows the images and histograms after enhancement by the AINDANE algorithm; the histograms of these images are reduced at the middle intensity values, and some values have large peaks.


Image 1

Image 2

Image 3

Figure 5.31: The mean of locally (µ, σ) model for the images enhanced by HE, MSRCR, AINDANE, MR and AHE; these images are captured by the LEDs lighting system.


Image 4

Image 5

Image 6

Figure 5.31: continued.


Image 1   Original     HE      MSRCR   AINDANE     MR      AHE
CM          12.556   14.983   30.211    25.703   30.367  16.515
EFD          4.817    5.188    5.936     5.207    5.915   5.369
QF          14.068   34.614   61.658    26.247   61.669  39.427

Image 2   Original     HE      MSRCR   AINDANE     MR      AHE
CM          33.539   36.431   47.737    39.393   46.101  40.826
EFD          5.439    5.731    6.331     5.520    6.355   5.929
QF          34.676   48.235   72.503    32.794   75.83   56.483

Image 3   Original     HE      MSRCR   AINDANE     MR      AHE
CM          12.871   21.437   30.607    18.965   31.185  24.047
EFD          4.272    5.369    5.875     5.042    5.890   5.507
QF          16.913   38.030   54.490    21.158   53.575  43.679

Table 5.2: The values of CM, EFD and QF of the original images captured by the LEDs lighting system and of the images enhanced by the HE, MSRCR, AINDANE, MR and AHE algorithms.


Image 4   Original     HE      MSRCR   AINDANE     MR      AHE
CM          27.001   33.331   38.947    28.551   38.524  37.990
EFD          4.985    5.321    5.673     4.916    5.828   5.491
QF          29.227   38.539   49.821    21.842   54.187  43.587

Image 5   Original     HE      MSRCR   AINDANE     MR      AHE
CM          12.282   13.807   21.901    16.395   22.077  14.493
EFD          4.556    4.849    5.377     4.861    5.488   4.972
QF          13.881   26.212   39.568    19.218   42.309  29.965

Image 6   Original     HE      MSRCR   AINDANE     MR      AHE
CM          33.551   35.612   48.009    35.151   49.072  39.277
EFD          4.596    4.902    5.307     4.857    5.504   5.410
QF          22.077   27.794   36.790    20.231   42.884  30.733

Table 5.2: continued.

The images enhanced by the HE algorithm and their histograms are presented in figure (5.34). We can note that these distributions are homogeneous over all intensity levels, with low fluctuation. If we compare this distribution with


the distribution of the images enhanced by AHE, shown in figure (5.35), there is a similarity between them except at the high intensity levels of the images enhanced by AHE, where the distribution increases at some points and becomes more fluctuated. In both methods we can see a decrease of the frequency at high intensity values compared with the histograms of the original images. Figure (5.36) illustrates the images enhanced by the MSRCR algorithm and their histograms; the distributions of these histograms decrease at the high intensity levels, and the distributions of the images enhanced by the MR algorithm in figure (5.37) are generally similar to those of the MSRCR algorithm. If we use the objective assessments as the criterion of image enhancement, we can say that the color shift increases with the AINDANE algorithm and a halo effect appears spatially in MSRCR and HE, although the AHE, MR, MSRCR and HE algorithms succeed in enhancing these images. The mean of locally (µ, σ) model is illustrated in figure (5.38) for the images with moderate and high lightness enhanced by the different algorithms. The points of the original images captured at high illuminance levels have high lightness values but low contrast; after enhancement they have lower lightness but higher contrast. Generally, we can note that most points in the optimal region show decreased lightness but increased contrast for the enhanced images with high lightness, and increased contrast and lighting for the enhanced images with moderate lightness. This region belongs to the MR and AHE algorithms, followed by the MSRCR, HE and AINDANE algorithms. Table (4.3) gives the values of EFD, CM and QF for the original and enhanced images. From these values we can see that the best algorithms for enhancing color images with high and moderate lightness levels are MR and AHE, due to their higher values of EFD, CM and QF, followed by MSRCR, HE and AINDANE.


(1) 220.4 lux (2) 12480.1 lux (3) 220.7 lux

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) 8159.6 lux (5) 236.8 lux (6) 511.2 lux

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.32: First and third rows: the images with moderate and high lightness levels used for enhancement; their histograms in the second and fourth rows. These images are captured by the MH lighting system.


(1) (2) (3)

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) (5) (6)

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.33: The first and third rows, images with moderate and high lightness that are enhanced by the AINDANE algorithm and their histograms in the second and fourth rows.


(1) (2) (3)

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) (5) (6)

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.34: The first and third rows, images with moderate and high lightness that are

enhanced by HE algorithm and their histogram in the second and fourth rows.


(1) (2) (3)

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) (5) (6)

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.35: The first and third rows, images with moderate and high lightness that are

enhanced by AHE algorithm and their histogram in the second and fourth rows.


(1) (2) (3)

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) (5) (6)

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.36: The first and third rows, images with moderate and high lightness that are

enhanced by MSRCR algorithm and their histogram in the second and fourth rows.


(1) (2) (3)

(a) hist. of (1) (b) hist. of (2) (c) hist. of (3)

(4) (5) (6)

(d) hist. of (4) (e) hist. of (5) (f) hist. of (6)

Figure 5.37: The first and third rows, images with moderate and high lightness that are

enhanced by MR algorithm and its histogram in the second and fourth rows.


Image 1

Image 2

Image 3

Figure 5.38: The mean of locally (µ, σ) model for the images enhanced by HE, MSRCR, AINDANE, MR and AHE; these images are captured by the MH lighting system.


Image 4

Image 5

Image 6

Figure 5.38 : continued.


Image 1   Original     HE      MSRCR   AINDANE     MR       AHE
CM          33.050   29.717   37.627    34.847   34.3548  36.640
EFD          5.530    5.817    5.933     5.525    6.044    5.8748
QF          44.634   53.878   62.963    37.633   66.3471  53.911

Image 2   Original     HE      MSRCR   AINDANE     MR       AHE
CM          34.521   30.289   38.047    35.727   34.9482  44.875
EFD          5.832    6.339    6.064     5.595    6.303    6.355
QF          57.058   72.296   71.979    43.478   79.1476  72.323

Image 3   Original     HE      MSRCR   AINDANE     MR       AHE
CM          38.672   38.032   48.637    39.326   46.5773  55.556
EFD          4.473    4.895    4.789     4.313    4.911    4.852
QF          19.723   27.116   26.065    16.139   29.3224  27.644

Table 4.3: The values of CM, EFD and QF of the original images captured by the MH lighting system and of the images enhanced by the HE, MSRCR, AINDANE, MR and AHE algorithms.


Image 4   Original     HE      MSRCR   AINDANE     MR       AHE
CM          37.129   38.364   47.742    38.564   45.6017  53.913
EFD          4.589    5.337    4.895     4.317    5.013    5.207
QF          21.297   38.007   28.519    16.941   31.8003  39.034

Image 5   Original     HE      MSRCR   AINDANE     MR       AHE
CM          33.314   32.885   44.156    34.800   38.6453  37.631
EFD          5.820    5.906    6.130     5.756    6.313    5.994
QF          51.003   54.344   68.182    41.273   74.6554  56.188

Image 6   Original     HE      MSRCR   AINDANE     MR       AHE
CM          30.887   27.581   39.891    32.770   34.6014  30.725
EFD          5.484    5.546    5.857     5.545    6.024    5.598
QF          40.564   42.291   57.020    34.228   61.8811  43.632

Table 4.3: continued.

Chapter Six:

Conclusions and Suggestions for Future Work


6.1 Conclusions

In this research we studied the distribution of lighting for the LEDs and MH lighting sources and captured six groups of images with different illuminance (contrast and lightness), which were used to determine the best method for image quality assessment based on lightness and contrast change. Then different images with different lighting and contrast were enhanced using several traditional and suggested algorithms.

From the results of the present study the following points were concluded:

1. The lightness distribution of the LEDs lamp is Gaussian, whereas the distribution of the MH lamp is a plane at near distances (about 2.5 m or less from the object).

2. An increase in the illuminance (lightness) at which images are captured does not necessarily mean an increase in the quality of the image; there is a range of illuminance levels, lying between 1500 and 2000 lux, that gives the image high quality.

3. Regarding image quality assessment, the CM and EFD methods do not measure images with low and high lightness levels accurately.

4. The mean of locally (µ, σ) model is a good method to determine the quality of images with different lightness levels (low, moderate and high), but its drawback is that it does not give a numerical value.

5. The suggested method, QF, is a robust method for assessing the quality of images with different lightness levels.

6. The MR and MSRCR are good algorithms for enhancing color images with low and moderate lightness levels.

7. For images with high lightness levels, the MR and AHE algorithms are the best methods of enhancement.


8. The suggested algorithm MR has succeeded in enhancing color images with different lightness levels (low, moderate and high); it also gives the best results when compared with other algorithms such as MSRCR, HE and AINDANE.

6.2 Suggestions for Future Work

Some proposals for future work derived from this project, concerned with image quality assessment and color image enhancement based on lightness change, are:

1. Aerial image enhancement based on the MR and AHE algorithms.

2. Developing the MSRCR algorithm by using color space transformation.

3. Studying image quality assessment at different illuminance levels by using the Macbeth chart test.

4. Determining the coefficients of QF by using the Macbeth chart test.

5. Medical image enhancement based on the MR algorithm with segmentation.

6. Foggy and dusty image enhancement based on the MR algorithm.

References

1. Rafael C. Gonzalez, Richard E. Woods, "Digital Image Processing", second edition, Prentice Hall, 2002.

2. D. Jobson, Z. Rahman, and G. A. Woodell, "A multi-scale retinex for bridging the gap between color images and the human observation of scenes," IEEE Trans. Image Process. 6, pp. 965-976, July 1997.

3. Barnard, K. and Funt, B., "Analysis and Improvement of Multi-Scale Retinex," Proc. Fifth IS&T Color Imaging Conference, Scottsdale, 1997.

4.Li. Tao and V. Asari, "An integrated neighborhood dependent approach for nonlinear enhancement of color images", International Conference on Information Technology: Coding Computing; ITCC, p 138-139, 2004.

5. Z. Rahman, D. Jobson, and G.Woodell, "Retinex processing for automatic image enhancement, "in Journalof Electronic Imaging, 13, No. 1, pp. 100–110, January 2004.

6. Xin Li, "Blind image quality assessment" in Image Processing Proceedings, International Conference on, vol. 1, pp. I–449, 2002.

7. Eli Peli," Contrast in complex images", J. Opt. Soc. Am. A/Vol. 7, No. 10, 1990.

8. D. J. Jobson, Z. Rahman, and G. A. Woodell, "Properties and performance of a center/surround retinex", IEEE Trans. on Image Processing 6, pp. 451–462, March 1996.

9. D. J. Jobson, Z. Rahman, and G. A. Woodell, "A Comparison of the Multiscale Retinex With Other Image Enhancement Techniques", Proceedings of the IS&T's 50th Annual Conference, Cambridge, MA, 1997.

10. B. V. Funt, K. Barnard, M. Brockington, and V.Cardei, "Luminance-based multi-scale Retinex, "Proceedings AIC Color 97, Kyoto, Japan, 1997 .

11. K. Barnard and B. Funt, "Analysis and Improvement of Multi-ScaleRetinex" In Proceedings of the IS&T/SID Fifth Color Imaging Conference: Color Science, Systems and Applications, Scottsdale, Arizona, pp. 221-226, 1997.

12. D. Jobson, Z. Rahman, G. A. Woodell, "Statistics of visual representation," Proc. SPIE 4736, pp. 25-35, 2002.

13. D. Hasler and S. Susstrunk, “Measuring colourfulness in natural images,” in Proc. SPIE/IS&T HumanVision and Electronic Imaging, 5007, pp. 87–95,2003.

14.Ayten Al-Bayaty, "Adaptive Techniques for Image Contrast Estimation Based on Edge Detection", Master Degree Thesis, Physics Dep., Al-Mustansiriya Univ., 2005.

15.Li Tao, Ming-Jung Seow and Vijayan K. Asari," Nonlinear Image Enhancement to Improve Face ", International Journal of Computational Intelligence Research.ISSN 0973-1873 Vol.2, No.4, pp. 327-336 ,2006.

16. Osman Nuri and Capt. Ender, "A non-linear technique for the enhancement of extremely non-uniform lighting images", Journal of Aeronautics and Space Technologies, Vol. 3, No. 2, June 2007.


17. Ali J. Al-Dalawy, "A Study of TV Images Quality for Channels Broadcast Television Satellite", Master of Science in Physics, Physics Department, Al-Mustansiriya University, 2008.

18. Thuy Tuong Nguyen, Xuan Dai Pham, Dongkyun Kim and Jae Wook Jeon,"Automatic Exposure Compensation for Line Detection Applications",IEEE International Conference on Multisensor Fusion and Integration for Intelligent SystemsSeoul, Korea, 2008.

19. Salema S. Salman," A Study Lighting Effect in Determining of Test Image Resolution", Ms.c thesis College of Science , Al- Mustansiriya University , 2009.

20. W.S. Malpica, A.C. Bovik," SSIM based range image quality assessment" Fourth International workshop on Video Processing and Quality Metrics for Consumer Electronics Scottsdale Arizon, 2009 .

21. Q. Zhang, H. Inaba, and S. Kamata, "Adaptive histogram analysis for image enhancement," Pacific-Rim Symp. on Image and Video Technology (PSIVT 2010), pp. 408-413, Nov. 2010.

22. Diana Gil, Rana Farah, J.M. Pierre Langlois, Guillaume-Alexandre Bi, Yvon Savaria, "Comparative analysis of contrast enhancement algorithms in surveillance imaging", IEEE ISCAS, pp. 849-852, 2011.

23. Anil Walia, "Designing with Light- A lighting Handbook " , United Nations Environment Programme ,2006, download from: www.retscreen.net/fichier.php/903/Chapter-Lighting.pdf.

24. ANSI Institute. "ANSI standard nomenclature and definitions for illuminating engineering". ANSI/IES RP-16-1986, Illuminating Engineering Society, New York, NY 10017, 1986.

25. Wyszecki and Stiles. "Color Science: concepts and methods, quantitative data and formulae" (2nd edition). New York: Wiley, 1986.

26. LED Metrology Handbook of LED Metrology Instrument Systems GmbH ,2000,download from: www.instrumentsystems.com/fileadmin/editors/.../LED_Handbook_e.pdf

27. F. E. Nicodemus, J. C. Richmond, J. J. Hsia, I.W. Ginsberg, and T. Limperis. "Geometric considerations and nomenclature for reflectance. Monograph", National Bureau of Standards US, 1977.

28. M. F. Cohen and J. R.Wallace. "Radiosity and Realistic Image Synthesis". Academic Press Professional, Boston, 1993.

29. A. S. Glassner. " Principles of Digital Image Synthesis". Morgan Kaufmann Publishers, San Francisco, California, 1995.

30. Hsin-Che Lee," Introduction to Color Imaging Science", Cambridge University Press 2005.

31. Duco Schreuder. "Outdoor Lighting: Physics, Vision, and Perception". Springer-Verlag, 2008 .

32. "SLL Lighting Handbook” (published by the Society of Light and Lighting, CIBSE, 2009 download from http: //ezzatbaroudi .files. wordpress. Com /2011 /02/handbook.pdf.


33. Andreas Koschan Mongi Abidi,"Digital color image processing",John Wiley & Sons ,2007.

34. Peter Flesch," Light and light sources: high-intensity discharge lamps", Springer, 2006.

35. Zhonghui Li, Zhijian Yang, Xiaomin Ding, Guoyi Zhang, Yuchun Feng, Baoping Guo and Hanben Niu, "Fabrication and Properties for White LED with InGaN SQW", SPIE, 2005.

36. Eino Tetri, Liisa Halonen ," Guidebook on energy efficient electric lighting for buildings", Aalto University School of Science and Technology,2010.

37 . Winkler, Stefan ,"Digital Video Quality: vision models and metrics", John Wiley. 2005

38. H. Rushmeier, G. Ward, C. Piatko, P. Sanders, and B. Rust. "Comparing real and synthetic images: Some ideas about metrics". In Proceedings of the EurographicsRendering Workshop 1995.

39. J.A. Ferwerda. "Three varieties of realism in computer graphics". In Proceedings SPIE Human Vision and Electronic Imaging , 2003.

40.R. Sekuler and R. Blake. "Perception". McGraw-Hill, third edition, 1994.

41. Arne Valberg, "Light Vision Color", John Wiley & Sons, 2005.

42. F. Campbell and J. Robson, "Application of Fourier Analysis to the Visibility of Gratings", Journal of Physiology, 1968.

43. A. S. Glassner, "Principles of Digital Image Synthesis", Morgan Kaufmann, San Francisco, CA, 1995.

44. C. Ware, "Information Visualization: Perception for Design", Morgan Kaufmann, 2000.

45. G. Wyszecki and W. S. Stiles, "Color Science", John Wiley and Sons, Inc., second edition, 2000.

46. S. S. Stevens, "To Honor Fechner and Repeal His Law", Science, 133, pp. 80–86, 1961.

47. S.E. Palmer. "Vision Science". The MIT Press, 1999.

48. J. Foley, A. van Dam, S. Feiner, and J. Hughes, "Computer Graphics: Principles and Practice", second edition, Addison-Wesley, 1996.

49. G. Wyszecki and W. Stiles, "Color Science", John Wiley & Sons, 1967.

50. Marc Ebner, "Color Constancy", John Wiley & Sons, 2007.

51. S. J. Sangwine and R. E. N. Horne, "The Colour Image Processing Handbook", International Thomson, 1998.

52. Ziad M. Abood Al-Bayati, "Digital Image Processing Techniques for Breast Cancer Cells Detection", Ph.D. Thesis, Al-Mustansiriya University, College of Education, 2005.

53. K. Gurevicius, "CRT Display Calibration of Psychophysical Color Measurements", M.Sc. Thesis, University of Kuopio, 1998.


54. Haim Levkowitz, "Color Theory and Modeling for Computer Graphics, Visualization, and Multimedia Applications", Kluwer Academic Publishers, 1997.

55. Z. Wang and A. C. Bovik, "Modern Image Quality Assessment", Vol. 2, Morgan & Claypool Publishers, San Rafael, California, USA, 2006.

56. Z. Wang and A. C. Bovik, "Why Is Image Quality Assessment So Difficult?", IEEE ICASSP'02 International Conference on Acoustics, Speech and Signal Processing, Orlando, Florida, USA, pp. 3313-3316, 2002.

57. K. Seshadrinathan, H. Sheikh, Z. Wang, and A. Bovik, "Structural and Information Theoretic Approaches to Image Quality Assessment", in: R. Blum and L. Zheng (eds.), Multi-sensor Image Fusion and Its Applications, CRC Press, Florida, USA, pp. 473-501, 2005.

58. H. R. Sheikh, M. F. Sabir, and A. C. Bovik, "A Statistical Evaluation of Recent Full Reference Image Quality Assessment Algorithms", IEEE Transactions on Image Processing, vol. 15, pp. 3440-3451, 2006.

59. M. Carnec, P. Le Callet, and D. Barba, "Full Reference and Reduced Reference Metrics for Image Quality Assessment", presented at the Seventh International Symposium on Signal Processing and Its Applications, 2003.

60. Y. Horita, T. Miyata, P. I. Gunawan, T. Murai, and M. Ghanbari, "Evaluation Model Considering Static-Temporal Quality Degradation and Human Memory for SSCQE Video Quality", in Proc. SPIE, Lugano, Switzerland, pp. 1601-1611, 2003.

61. Z. Wang, H. R. Sheikh, and A. C. Bovik, "No-Reference Perceptual Quality Assessment of JPEG Compressed Images", IEEE Int'l Conf. Image Proc., vol. 1, pp. 477–480, 2002.

62. Alan C. Bovik (ed.), "Handbook of Image and Video Processing", 2nd edition, Academic Press, 2005.

63. Z. Wang and A. C. Bovik, "A Universal Image Quality Index", IEEE Signal Processing Letters, Vol. 9, No. 3, pp. 81–84, 2002.

[Arabic-language abstract: the text was mis-encoded during extraction and cannot be recovered. The surviving Latin fragments indicate that it summarizes the thesis work on lighting systems, the retinex algorithm, a local mean (µ, σ) enhancement model, the SSIM quality measure, and the MSRCR algorithm.]
