On the use of stereovision to develop a novel instrumentation system to extract geometric fire fronts characteristics
L. Rossi a,*, M. Akhloufi b, Y. Tison a
a UMR CNRS SPE 6134, University of Corsica, 20250 Corte, France
b CRVI, 205, rue Mgr-Bourget, Levis, Quebec, Canada G6V 6Z9
Article info

Article history:
Received 14 October 2009
Received in revised form 14 January 2010
Accepted 22 March 2010
Available online 22 April 2010
Keywords:
Forest fire
Stereovision
Rate of spread
Geometric characteristics
Abstract
This paper presents a new instrumentation system, based on stereovision, for the visualization and
quantitative characterization of fire fronts in outdoor conditions. The system consists of a visible
pre-calibrated stereo camera and a computer with dedicated software. In the proposed approach,
images are captured simultaneously and processed using specialized algorithms. These algorithms make it possible to model 3D fire fronts and extract geometric characteristics such as volume, surface area, heading direction and length. Experiments were carried out in outdoor scenarios and the obtained results show
the efficiency of the proposed system. This system successfully measures 3D geometric parameters of
fire fronts over a range of combustible and environmental conditions.
© 2010 Elsevier Ltd. All rights reserved.
1. Introduction
Given the scale of combustion flames, compartment fires and forest fires, monitoring and characterizing flame front geometry are important tasks for an improved understanding of fire behavior and fire fighting. For more than two decades, visual and infrared cameras have been used as complementary metrological instruments in fire and flame experiments [1–5]. Vision systems are now capable of reconstructing a 3D turbulent flame and its front structure when the flame is the only density field in the images [6,7]. Image processing methods are also applied to monitor forest fire properties like the rate of spread of flame fronts, fire base contour, flame orientation and maximum flame height [8–11]. These techniques extract information from a scene using multiple viewpoints (in general, frontal and lateral views) and synthesize the data in subsequent steps. This type of processing cannot be used in all experimental situations. For example, it is difficult to obtain the height and angle of flames belonging to a curved fire front.
In computer vision, much interesting research has been done in the area of 3D object reconstruction. However, the majority of these works deal with rigid objects [12–14]. Little work has been done in the area of modeling 3D non-rigid objects, and for the latter, many hypotheses are made in order to achieve an acceptable 3D reconstruction [15–18]. The lack of a framework for 3D reconstruction of complex fire fronts is due to the dynamic and random nature of fire, which makes it difficult for vision systems to efficiently locate and match salient points.
In this paper, we present a new instrumentation system, based on stereovision, for the visualization and quantitative characterization of fire fronts. The framework is robust to outdoor unstructured conditions. It permits the extraction of important 3D data and reconstructs the fire front by means of surface rendering. This shape permits the computation of important flame characteristics, such as its dimensions, angles and volume.
2. Stereoscopic methodology
2.1. Camera geometric model
The following sections present the mathematical relationship between scene points in 3D space and their corresponding image points. A more detailed explanation of 3D geometry can be found in [14,20,21].
The geometric camera model considered in this paper is a pinhole model. This model assumes a perfect perspective transformation with center Oc.
Let (u, v) be the image coordinates of a pixel p corresponding to a point P with coordinates (Xc, Yc, Zc) in the camera reference frame (Fig. 1).
A projection is a transformation between the image coordinates and the coordinates in the 3D reference frame of the camera, up to a scale factor. The relation is
Fire Safety Journal
doi:10.1016/j.firesaf.2010.03.001
* Corresponding author. Tel./fax: +33 4 95 46 05 11.
E-mail address: lrossi@univ-corse.fr (L. Rossi).
Fire Safety Journal 46 (2011) 9–20
given by

$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{bmatrix} \alpha_u & 0 & u_0 & 0 \\ 0 & \alpha_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{pmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{pmatrix} \quad (1)$$
with

$$\alpha_u = f K_u \quad \text{and} \quad \alpha_v = f K_v$$

where $f$ is the focal length of the camera, $K_u$ and $K_v$ are, respectively, the scale factors of the image in the $u$ and $v$ directions, and $(u_0, v_0)$ are the coordinates in pixels of the image center.

Setting $x_c = X_c / Z_c$, $y_c = Y_c / Z_c$, $z_c = 1$,
relation (1) becomes

$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{bmatrix} \alpha_u & 0 & u_0 \\ 0 & \alpha_v & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{pmatrix} x_c \\ y_c \\ 1 \end{pmatrix} \quad (2)$$
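As an illustrative sketch (not the authors' code; the intrinsic values below are hypothetical), the projection of Eq. (2) can be written in a few lines of Python:

```python
def project(point_c, au, av, u0, v0):
    """Pixel coordinates of a camera-frame point via Eq. (2)."""
    Xc, Yc, Zc = point_c
    xc, yc = Xc / Zc, Yc / Zc            # normalized camera coordinates
    return (au * xc + u0, av * yc + v0)

# hypothetical intrinsics: alpha_u = alpha_v = 800 px, principal point (640, 480)
print(project((0.5, -0.25, 2.0), 800, 800, 640, 480))  # (840.0, 380.0)
```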
The transformation between the camera coordinate frame $(\vec{x}_c, \vec{y}_c, \vec{z}_c)$ and the scene frame $(\vec{X}, \vec{Y}, \vec{Z})$ is a rigid transformation consisting of a rotation and a translation. Considering that the same point P with space coordinates (Xc, Yc, Zc) in the camera coordinate frame has the coordinates (X, Y, Z) in the scene coordinate frame, the following relation is obtained:

$$\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \\ t_z \end{pmatrix} \quad (3)$$
Relation (3) can be written in homogeneous form as

$$\begin{pmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{pmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & t_x \\ r_{21} & r_{22} & r_{23} & t_y \\ r_{31} & r_{32} & r_{33} & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \quad (4)$$
R is the rotation matrix and T is the translation vector. Combining relations (1) and (4), the relation that links the coordinates of P in the scene frame with those of its image

Fig. 1. Geometric model of the camera.
Nomenclature

a, b, c: coefficients of the epipolar line
Al, Ar: matrices that link the coordinates of a point in the left (resp. right) camera frame with those of the scene in a stereoscopic system
B: distance between the two optical centers of a stereoscopic system
ci: centroid of all the points xj ∈ Si
DLT: direct linear transformation
d: disparity
E: matrix that describes the left–right epipolar transform
F: fundamental matrix
F0: fundamental matrix for a pair of rectified images
f: focal length of the camera
H1, H2: pair of rectification matrices
H01, H02: first pair of rectification matrices
Ku, Kv: scaling factors of the image in the direction of u (resp. v)
k: number of clusters
mi: mean value for RGB color channel i
Oc: optical center of a camera
Ol, Or: optical center of the left (resp. right) camera of a stereoscopic system
(pl, pr): couple of corresponding points in two images
(u, v): image coordinates of a pixel
(u0, v0): coordinates in pixels of the center of the image
(Xc, Yc, Zc): coordinates of a point P in the camera reference frame
$(\vec{x}_c, \vec{y}_c, \vec{z}_c)$: camera coordinate frame
$(\vec{X}, \vec{Y}, \vec{Z})$: scene coordinate frame
(xl, yl, zl): coordinates of a 3D point P in the left camera frame
(xr, yr, zr): coordinates of a 3D point P in the right camera frame
m: mean color
M: matrix containing the intrinsic parameters
pR, pG, pB: color components of a pixel p
R: rotation matrix
Rs: rotation matrix in a stereoscopic system
rij: element of the R matrix
Rsij: element of the Rs matrix
Si: clusters, i ∈ {1, ..., k}
Tsx, Tsy, Tsz: components of the vector Ts
tx, ty, tz: components of the vector T
V: total intra-cluster variance
Zp: distance of a 3D point
ΔZp: error in Zp
αu, αv: focal length of the camera in terms of pixel dimension in the u (resp. v) direction
σi: standard deviation for RGB color channel i
( ): denotes a vector
[ ]: denotes a matrix
representation is

$$\begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{bmatrix} \alpha_u & 0 & u_0 & 0 \\ 0 & \alpha_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} = M \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \quad (5)$$
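To make the composition in Eq. (5) concrete, the sketch below (pure Python, not the authors' code; the intrinsics, the identity rotation and the translation are hypothetical) builds M from the intrinsic matrix and the rigid transform, then projects a scene point:

```python
def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def projection_matrix(au, av, u0, v0, R, T):
    """M of Eq. (5): 3x4 intrinsic matrix times the 4x4 rigid transform [R T; 0 1]."""
    K = [[au, 0, u0, 0], [0, av, v0, 0], [0, 0, 1, 0]]
    RT = [[R[i][0], R[i][1], R[i][2], T[i]] for i in range(3)] + [[0, 0, 0, 1]]
    return [[sum(K[i][k] * RT[k][j] for k in range(4)) for j in range(4)] for i in range(3)]

# hypothetical setup: identity rotation, camera 2 m behind the scene origin
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M = projection_matrix(800, 800, 640, 480, R, [0.0, 0.0, 2.0])
u, v, w = mat_vec(M, [0.5, -0.25, 0.0, 1.0])   # homogeneous image point
print(u / w, v / w)                            # 840.0 380.0
```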
The elements of the matrix M are called "intrinsic parameters". The issue of estimating the values of these parameters is called "camera calibration". For camera calibration we use a grid of points with known positions. Knowing the 3D points corresponding to the projected points in the image, the parameters of the associated transformation are calculated by direct linear transformation (DLT) and used as initial parameters for a Levenberg–Marquardt optimization [14]. Bouguet [22] developed a Matlab camera calibration toolbox using this approach. It computes the intrinsic parameters of a camera using a planar checkerboard.
2.2. Stereo geometric model
A stereovision system uses two cameras (Fig. 2). Each camera is considered as a pinhole and defined by its optical center Ol (left camera), its optical axis (Ol, zl) perpendicular to the image plane, and its focal length.

Let P be a point in 3D space with coordinates (X, Y, Z) in the scene coordinate frame. This point has coordinates (xl, yl, zl) in the left camera coordinate frame ((xr, yr, zr) in the right camera coordinate frame).

The relative position of the two cameras is determined from the relations between the coordinates of point P in each camera coordinate frame and its coordinates in the scene coordinate frame:
$$\begin{pmatrix} x_r \\ y_r \\ z_r \\ 1 \end{pmatrix} = A_r \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} x_l \\ y_l \\ z_l \\ 1 \end{pmatrix} = A_l \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$
Eliminating $(X, Y, Z, 1)^T$ between the two previous equations, the following relation is obtained:

$$\begin{pmatrix} x_r \\ y_r \\ z_r \\ 1 \end{pmatrix} = A_r A_l^{-1} \begin{pmatrix} x_l \\ y_l \\ z_l \\ 1 \end{pmatrix} = \begin{bmatrix} R_s & T_s \\ 0 & 1 \end{bmatrix} \begin{pmatrix} x_l \\ y_l \\ z_l \\ 1 \end{pmatrix} = \begin{bmatrix} Rs_{11} & Rs_{12} & Rs_{13} & Ts_x \\ Rs_{21} & Rs_{22} & Rs_{23} & Ts_y \\ Rs_{31} & Rs_{32} & Rs_{33} & Ts_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{pmatrix} x_l \\ y_l \\ z_l \\ 1 \end{pmatrix} \quad (6)$$
Setting $x^c_l = x_l / z_l$, $y^c_l = y_l / z_l$, $x^c_r = x_r / z_r$ and $y^c_r = y_r / z_r$, we obtain

$$x^c_r = \frac{x_r}{z_r} = \frac{Rs_{11}\, x^c_l z_l + Rs_{12}\, y^c_l z_l + Rs_{13}\, z_l + Ts_x}{Rs_{31}\, x^c_l z_l + Rs_{32}\, y^c_l z_l + Rs_{33}\, z_l + Ts_z} \quad (7)$$

$$y^c_r = \frac{y_r}{z_r} = \frac{Rs_{21}\, x^c_l z_l + Rs_{22}\, y^c_l z_l + Rs_{23}\, z_l + Ts_y}{Rs_{31}\, x^c_l z_l + Rs_{32}\, y^c_l z_l + Rs_{33}\, z_l + Ts_z} \quad (8)$$
Thus, the coordinates of a point in the right image are expressed in terms of the coordinates of its corresponding point in the left image, the sensor parameters and the depth of the point P.
Eliminating $z_l$ in Eqs. (7) and (8), the equation of a line is obtained:

$$a\, x^c_r + b\, y^c_r + c = 0 \quad (9)$$

This line is called the "epipolar line"; it describes the set of points of the right image that can be matched with a given point of the left image. Expression (9) can be written in matrix form:
$$\begin{pmatrix} a \\ b \\ c \end{pmatrix} = E \begin{pmatrix} x^c_l \\ y^c_l \\ 1 \end{pmatrix} \quad (10)$$
The matrix E describes the left–right epipolar transform; it contains the parameters of the equation of the epipolar line in the right image associated with a left image point. Another equation can be written as
$$\begin{pmatrix} x^c_r & y^c_r & 1 \end{pmatrix} E \begin{pmatrix} x^c_l \\ y^c_l \\ 1 \end{pmatrix} = 0 \quad (11)$$
With relation (2) it is possible to obtain the image coordinates of a point P from its coordinates in the camera coordinate frame. From Eq. (11), considering the matrices Cl and Cr, which contain the intrinsic parameters of the two cameras, and applying Eq. (2), we obtain
$$\begin{pmatrix} u_r & v_r & 1 \end{pmatrix} (C_r^{-1})^T E\, C_l^{-1} \begin{pmatrix} u_l \\ v_l \\ 1 \end{pmatrix} = 0 \quad (12)$$

The matrix $F = (C_r^{-1})^T E\, C_l^{-1}$ is called the "fundamental matrix". It can be computed from a set of corresponding points between the left and right images. Applying Eq. (12) to each pair of corresponding pixels, a system of nine equations with nine unknowns is obtained; its resolution gives F. The Matlab Calibration Toolbox permits the computation of the fundamental matrix using the above equations [22].
When F is obtained, the epipolar lines can be determined. All the epipolar lines in the left image (resp. right) pass through the epipolar point el (resp. er). Fig. 3 shows the epipolar geometry of a stereo system.
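As a minimal sketch (not the authors' code), the epipolar line associated with a left-image point is obtained by multiplying a fundamental matrix by that point; using the rectified-geometry matrix F0 of Eq. (13), the lines are horizontal rasters. The pixel values below are hypothetical:

```python
def epipolar_line(F, p_left):
    """Coefficients (a, b, c) of the right-image epipolar line for a left point."""
    u, v = p_left
    return tuple(F[i][0] * u + F[i][1] * v + F[i][2] for i in range(3))

def on_line(line, p_right, tol=1e-9):
    """Check whether a right-image point satisfies a*u + b*v + c = 0."""
    a, b, c = line
    u, v = p_right
    return abs(a * u + b * v + c) <= tol

# rectified-geometry fundamental matrix F0 of Eq. (13)
F0 = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
line = epipolar_line(F0, (100.0, 250.0))
print(line)                           # (0.0, -1.0, 250.0), i.e. the raster v = 250
print(on_line(line, (400.0, 250.0)))  # True
```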
2.3. Epipolar rectification
The 3D position is encoded by the stereo disparity, defined as the difference between the projected points in the two stereo images. A 2D search is thus needed in order to find corresponding points in both images along the epipolar lines.
The rectification procedure transforms each image plane such that pairs of conjugate epipolar lines become collinear and parallel to one of the image axes. The rectified images can be thought of as acquired by a new stereo rig obtained by rotating the original cameras. Thus, computing stereo correspondences is reduced to a 1D search problem along the horizontal raster lines of the rectified images.
The rectification procedure needs the computation of two matrices H1 and H2, called "rectification matrices", that minimize the distortion between the corresponding images.

Fig. 2. Stereovision system.
Considering a couple of corresponding points (pl, pr) in two images, the fundamental matrix F gives $p_r^T F p_l = 0$. The fundamental matrix for a pair of rectified images has the following form:
$$F_0 = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{bmatrix} \quad (13)$$
In order to obtain the first pair of rectification matrices H01 and H02, it is necessary to compute a singular value decomposition (SVD) of F. The fundamental matrix can be written as

$$F = H_{01}^T F_0 H_{02} \quad (14)$$

The pair of rectification matrices (H1, H2) is computed from H01 and H02 with the relations

$$H_1 = T_1 H_{01}, \qquad H_2 = T_2 H_{02}$$
with T1 and T2 matrices of the form

$$T_1 = \begin{bmatrix} a & b & c \\ 0 & e & f \\ 0 & h & 1 \end{bmatrix}, \qquad T_2 = \begin{bmatrix} a' & b' & c' \\ 0 & e' & f' \\ 0 & h' & 1 \end{bmatrix}$$
The parameters of T1 and T2 are fixed as described in [23].
2.4. Computation of 3D coordinates
Once the corresponding features are extracted, a triangulation technique is used to compute their 3D coordinates. A line l is passed through the point in the left image and the left optical center Cl, and a line r is passed through its corresponding point in the right image and the right optical center Cr. A mid-point method finds the point which lies exactly at the middle of the shortest line segment joining the two projection lines (Fig. 4). This point represents the 3D coordinates of the corresponding pixels.
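The mid-point method can be sketched as follows (not the authors' code): solve the small linear system that minimizes the distance between the two projection rays, then average the two closest points. The ray origins and directions below are hypothetical:

```python
def midpoint_triangulate(c1, d1, c2, d2):
    """Mid-point of the shortest segment between rays c1 + s*d1 and c2 + t*d2."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    r = [x - y for x, y in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b              # non-zero for non-parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [c1[i] + s * d1[i] for i in range(3)]   # closest point on ray 1
    p2 = [c2[i] + t * d2[i] for i in range(3)]   # closest point on ray 2
    return [(p1[i] + p2[i]) / 2 for i in range(3)]

# two hypothetical projection rays that cross near (0, 0, 6)
print(midpoint_triangulate([-0.06, 0, 0], [0.01, 0, 1],
                           [0.06, 0, 0], [-0.01, 0, 1]))  # ~[0.0, 0.0, 6.0]
```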
There is a simple relation between the disparity d = xl − xr and the distance of the 3D point P in the case of parallel image planes. With the distance B between the two optical centers, called the baseline, and the focal length f, we have (Fig. 5)

$$Z_p = \frac{Bf}{d} \quad (15)$$
The theoretical error along the Z-axis is given by

$$\Delta Z_p = \frac{Z_p^2}{fB}\, \Delta d \quad (16)$$
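Eqs. (15) and (16) are direct to evaluate; a small sketch with hypothetical values (a 12 cm baseline, a 1300 px focal length and a 26 px disparity, none taken from the paper) illustrates the depth and its sensitivity to a one-pixel disparity error:

```python
def depth_from_disparity(B, f, d):
    """Eq. (15): Z = B*f/d for parallel image planes (f and d in pixels, B in metres)."""
    return B * f / d

def depth_error(Zp, B, f, delta_d):
    """Eq. (16): first-order depth error for a disparity error delta_d."""
    return Zp ** 2 / (f * B) * delta_d

# hypothetical values: 12 cm baseline, 1300 px focal length, 26 px disparity
Z = depth_from_disparity(0.12, 1300, 26)
print(Z)                              # ~6.0 m
print(depth_error(Z, 0.12, 1300, 1))  # ~0.23 m per pixel of disparity error
```

Note how the error grows quadratically with depth, which is why the stereo rig must stay reasonably close to the fire front.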
2.5. Detection and matching of feature points
Feature points are salient points of the image. The Harris detector is the most commonly used operator for corner point extraction. It is based on the local auto-correlation of the signal [21,23]. This operator measures local changes of grey levels.
Also, region-of-interest detection is performed in a given color system prior to salient point extraction. This segmentation permits the extraction of connected parts of an image based on color criteria [24]. Then a contour detection follows, using an edge detector (Sobel, Prewitt or Canny) [13,25]. Feature points are then searched along the obtained contours.
When feature points are located in the right and left images, a matching algorithm is used in order to find corresponding points in the two images. Matching algorithms can be classified into correlation-based and feature-based methods [14]. In the correlation-based methods used in this work, image windows of fixed size are matched, and the similarity criterion is a measure of the correlation between these windows.
3. Proposed approach
The proposed framework uses stereovision for 3D modeling of parts extracted from fire regions. The steps involved in this approach are (see Fig. 6):
1. segmentation of fire images in order to extract fire regions;
2. feature detection: an algorithm extracts salient points from the segmented regions;
3. best feature selection using a correlation-based matching strategy; this step permits the refinement and selection of salient points and the construction of a set of corresponding points;
Fig. 3. Epipolar geometry.
Fig. 4. Triangulation method.
Fig. 5. Relation between depth and disparity for parallel image planes.
L. Rossi et al. / Fire Safety Journal 46 (2011) 9–2012
Author's personal copy
4. computation of 3D fire points using stereo correspondence;
5. 3D surface rendering for volume reconstruction and fire characteristics computation.
Since our objective is to reconstruct a 3D shape of fire fronts in an operational scenario, we need a system that can be deployed quickly and efficiently in outdoor unstructured scenes. A pair of stereo cameras is the best choice in this case. We selected a pre-calibrated stereo camera. With this type of camera, there is no need for calibration because all the intrinsic and extrinsic parameters are fixed and perfectly known. A Point Grey XB3 Trinocular stereo system [26] was used in our experiments.
In order to derive the 3D data from the obtained images, we need to find corresponding points in the left and right images. Since fire is of a dynamic and random nature, we have developed a new approach to extract the salient corresponding points present in the two stereo images. The following sections give more details about the new technique.
3.1. Two-level segmentation
The fire image is characterized by dominant colors in the yellow, orange and red intervals. Also, color variations inside the fire flame give rise to homogeneous color regions (Fig. 7). These characteristics are used in the proposed two-level segmentation technique. The proposed approach is robust and can handle the large variations present in unstructured scenes.
3.1.1. First-level segmentation: a combination of YUV and RGB information

Previous work conducted using different color spaces for a better segmentation of fire regions in complex unstructured scenes showed that the "V" channel of the YUV color system [27,28] is well suited to finding fire regions. However, in outdoor scenes, other areas not corresponding to fire can appear close to the fire regions (Fig. 8). The K-means clustering technique is applied to the "V" channel in order to extract the most interesting areas corresponding to fire.
K-means clustering is used to find k partitions based on a set of attributes. It iteratively finds the centers of natural clusters in the data, assuming that the object attributes form a vector space. The objective of the algorithm is to minimize the total intra-cluster variance
$$V = \sum_{i=1}^{k} \sum_{x_j \in S_i} \| x_j - c_i \|^2 \quad (20)$$
where k is the number of clusters, Si are the clusters, i ∈ {1, ..., k}, and ci is the centroid of all the points xj ∈ Si.
We used the K-means clustering technique with k = 2 (one cluster for the fire regions and one cluster for the background). Fig. 4 shows the results of this processing.
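As an illustration of this clustering step (not the authors' code), a minimal 1D K-means in plain Python, with a deterministic initialization and hypothetical intensity values standing in for "V"-channel samples, separates bright fire pixels from a dark background:

```python
def kmeans_1d(values, k=2, iters=100):
    """Plain K-means (k >= 2) on scalar data, minimizing the variance of Eq. (20)."""
    vs = sorted(values)
    centers = [vs[(len(vs) - 1) * i // (k - 1)] for i in range(k)]  # spread initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: (v - centers[j]) ** 2)
            clusters[nearest].append(v)
        new = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
        if new == centers:          # converged: assignments no longer change
            break
        centers = new
    return centers, clusters

# hypothetical 'V'-channel samples: dark background vs bright fire pixels
centers, clusters = kmeans_1d([12, 15, 10, 14, 200, 210, 190, 205], k=2)
print(sorted(centers))  # [12.75, 201.25]
```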
Fig. 7. Original fire front image.
Fig. 8. ‘‘V’’ channel of YUV fire image.
Fig. 6. Framework for 3D modeling of fire flames.
Assuming that the fire region corresponds to the largest extracted zone in the clustered "V" channel, we used a learning strategy in the RGB color space to compute a reference model for the color classification of fire pixels.
A 3D Gaussian model is used to represent the pixels present within the fire zone. The 3D Gaussian model is defined by the following mean and standard deviation:
$$m = (m_R, m_G, m_B), \qquad \sigma = \max(\sigma_R, \sigma_G, \sigma_B) \quad (21)$$

where $m_i$ is the mean and $\sigma_i$ the standard deviation for channel $i$, $i \in \{R, G, B\}$.
The pixels present in the white clustered "V" channel are then verified in the RGB image in order to see if their colors are close to the reference fire color.

A pixel is classified based on the model learned from the reference area. A pixel is represented by a 3D vector defined by its color components, p = (pR, pG, pB). It is classified using the following formulation:
$$\|p - m\| \le k\,\sigma \;\Rightarrow\; p \in \mathrm{Fire}, \qquad \text{otherwise } p \notin \mathrm{Fire} \quad (22)$$

where $\|p - m\| = [(p_R - m_R)^2 + (p_G - m_G)^2 + (p_B - m_B)^2]^{1/2}$ and $k$ is a constant.

The result of this first segmentation is a fire region, as shown in Fig. 9.
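The rule of Eqs. (21) and (22) can be sketched as follows (plain Python, not the authors' code; the reference patch of orange pixels and the constant k = 3 are hypothetical):

```python
import math

def fire_model(pixels):
    """Reference color model of Eq. (21): channel means and the largest channel deviation."""
    n = len(pixels)
    m = tuple(sum(p[i] for p in pixels) / n for i in range(3))
    sigma = max(math.sqrt(sum((p[i] - m[i]) ** 2 for p in pixels) / n) for i in range(3))
    return m, sigma

def is_fire(p, m, sigma, k=3.0):
    """Classification rule of Eq. (22): ||p - m|| <= k * sigma."""
    return math.dist(p, m) <= k * sigma

# hypothetical reference patch of orange fire pixels (R, G, B)
m, sigma = fire_model([(250, 120, 30), (240, 110, 25), (255, 130, 40), (245, 115, 35)])
print(is_fire((248, 118, 32), m, sigma))  # True: close to the reference color
print(is_fire((30, 90, 200), m, sigma))   # False: a blue background pixel
```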
3.1.2. Second-level clustering and segmentation
After the first-level segmentation we obtain a fire region with different homogeneous color zones. A second-level segmentation is then performed on the resulting image. The K-means clustering technique is used to extract the interior fire regions. Fig. 10 shows the result of clustering the fire image into 4 clusters (3 clusters for fire regions and 1 for the background).
3.2. Detection of feature points
3.2.1. Contour extraction
The image obtained in the first-level segmentation is used to extract the global contour of the fire region. The extracted region is binarized. The obtained binary image highlights abrupt discontinuities present in the image. A postprocessing step based on mathematical morphology is then conducted in order to eliminate spurious pixels, such as residual burning embers, and to correct imperfect segmentation results, such as holes that appear in the fire area due to the presence of smoke. The final contour is then obtained using the Canny edge detection algorithm. The result is a list of points representing the bordering pixels along the global fire region (Fig. 11a).

The two-level segmentation permits the labeling of interior homogeneous color regions. These regions are then processed separately in order to extract their contours. A labeled region is extracted into a separate image and binarized. The Canny edge detection algorithm is then applied in order to extract the labeled region contour (Fig. 11b).
3.2.2. Feature detection

Features like edges and textures are not easily found in fire images. In our approach, we use the peaks and valleys of a fire contour as feature points. A peak detector is used in order to find local positive and negative inflections along the extracted contour, denoted f(x):
$$\begin{cases} f(x_i) < f(x_j) \;\Rightarrow\; \text{minimum} \\ f(x_i) > f(x_j) \;\Rightarrow\; \text{maximum} \end{cases} \qquad j \in \{i-1, i+1\} \quad (23)$$
Fig. 12 shows the result obtained with the peak detector: 943 points were extracted from the left image and 745 points from the right image.
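The extremum criterion of Eq. (23) amounts to a three-point comparison along the contour; a minimal sketch (not the authors' code, with hypothetical contour values):

```python
def peaks_and_valleys(f):
    """Indices of local maxima and minima of a contour signal f, per Eq. (23)."""
    peaks, valleys = [], []
    for i in range(1, len(f) - 1):
        if f[i] > f[i - 1] and f[i] > f[i + 1]:
            peaks.append(i)       # local maximum
        elif f[i] < f[i - 1] and f[i] < f[i + 1]:
            valleys.append(i)     # local minimum
    return peaks, valleys

# hypothetical contour heights along the fire front
print(peaks_and_valleys([0, 2, 1, 3, 0, 1, 1]))  # ([1, 3], [2, 4])
```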
3.3. Matching and refinement of feature selection
The previous step permits the extraction of all feature points satisfying our extremum detection criteria. Not all of these points can be matched, due to occlusion and local color variations.
The matching procedure selects the best features and finds corresponding points in the left and right images. The algorithm uses the following constraints during the matching process: the epipolar, order and uniqueness constraints. We have also added a disparity constraint that restricts the search to a small interval along the epipolar search line.
For each detected feature in the rectified left image, we start a 1D search for its corresponding feature point in a small area along the horizontal line in the rectified right image (epipolar and disparity constraints). The search algorithm uses a normalized cross-correlation pattern matching algorithm in a 33×33 window around each potential corresponding point. The similarity criterion is the correlation score between the windows in the two images [21]. When two possible correspondences to the left point are present in the right image and have close matching scores, priority is given to the point that is located at the extreme left and that is unmatched (order and uniqueness constraints).

Fig. 9. Extracted fire region.

Fig. 10. Clustering of interior fire regions.
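The similarity score used in the window search is a normalized cross-correlation; a minimal sketch on flattened windows (not the authors' code, with hypothetical grey values) shows its invariance to a constant brightness offset:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-size (flattened) windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)) * math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / den

# the second window is the first plus a constant brightness offset: NCC stays maximal
print(ncc([10, 20, 30, 40], [110, 120, 130, 140]))  # ~1.0
```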
Fig. 13 shows the remaining points after the matching procedure: 374 corresponding points have been detected.
3.4. 3D positions
Once the corresponding features are extracted, a triangulation method is used to compute their 3D coordinates. Fig. 14 shows the sparse 3D representation of the obtained points.
Fig. 12. Features detection.
Fig. 13. Refinement of selected features and matching.
Fig. 11. Contour extraction: (a) global contour of the fire region and (b) interior fire regions contour.
3.5. 3D surface rendering
In order to generate the 3D surface of a fire front, we apply the Crust algorithm to the set of obtained 3D points [29]. This algorithm works with unorganized points and is based on the 3D Voronoi diagram and the Delaunay triangulation. It produces a set of triangles called the crust of the sample points. All vertices of the crust triangles are sample points. Fig. 15 shows the results of the
Fig. 14. 3D position of corresponding points.
Fig. 15. 3D surface reconstruction.
Crust algorithm. In this figure, we obtain a global form of the flame front that matches the one appearing in the two stereo images.
3.6. Reconstructed shape characteristics
3.6.1. Volume
From the 3D points, a convex hull algorithm [30] is used to compute the volume of the reconstructed shape.
3.6.2. Perimeter, width and depth
The flame front perimeter is obtained by applying the convex hull algorithm to the XZ coordinates of the 3D points only. The dotted line in Fig. 15 shows the perimeter of the reconstructed front. From these data, we can compute the width and depth.
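As a sketch of the perimeter computation (not the authors' code), a standard 2D convex hull via the monotone-chain method, rather than the Quickhull of [30], applied to hypothetical XZ footprint points:

```python
import math

def convex_hull(points):
    """2D convex hull (monotone chain) of a set of (x, z) points."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for seq in (pts, list(reversed(pts))):   # lower chain, then upper chain
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull.extend(chain[:-1])
    return hull

def perimeter(hull):
    """Total edge length of a closed polygon given as an ordered vertex list."""
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)]) for i in range(len(hull)))

# hypothetical XZ footprint of a fire front; interior points drop out of the hull
pts = [(0, 0), (4, 0), (4, 2), (0, 2), (2, 1), (1, 0.5)]
print(perimeter(convex_hull(pts)))  # 12.0
```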
3.6.3. Tilt angle and height
Many researchers model the flame front by approximating it by its tangent plane [31–33]. As can be seen in the images above, a flame front has a complex form and it is impossible to represent
Fig. 16. Highest 3D point and lowest 3D point used for the computation of the tilt angle of the front.
Fig. 17. Spectral sensitivity of the ICX445AQ CCD sensor.
it with only one height or one tilt angle. In order to provide experimental information to compare with numerical results, the 3D points are considered by sections along the X-axis. The height and the tilt angle are computed for each portion of the flame front. For each section, the 3D points considered are the ones present within a certain depth from the most advanced 3D point along the Z-axis.
The height of each section is obtained by computing the distance between the highest 3D point located ahead of the front (Pf) and the lowest 3D point located at the rear of the front (Pb).
The tilt angle between the fire and the ground in the Z-axis direction is obtained by computing the angle between the line (Pf Pb) and the Z-axis (Fig. 16).
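The per-section height and tilt computation can be sketched as follows (not the authors' code; the coordinates are hypothetical, and taking Y as the vertical axis is an assumption of the sketch):

```python
import math

def height_and_tilt(pf, pb):
    """Section height = |Pf Pb|; tilt = angle (degrees) between that segment and the Z-axis."""
    d = [pf[i] - pb[i] for i in range(3)]
    height = math.sqrt(sum(c * c for c in d))
    tilt = math.degrees(math.acos(abs(d[2]) / height))   # angle to the Z-axis
    return height, tilt

# hypothetical points: front point 2 m up (Y) and 1 m ahead (Z) of the rear point
h, t = height_and_tilt((0.0, 2.0, 1.0), (0.0, 0.0, 0.0))
print(round(h, 2), round(t, 1))  # 2.24 63.4
```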
4. Experimental results
Experiments were conducted in an operational scenario: an outdoor unstructured environment. Tests were conducted in daytime in the region of Corte (Corsica Island, France). Since fire has a dynamic and random nature, the acquisition of the two images must be synchronized. We chose to use a pre-calibrated stereo camera from Point Grey: a color XB3 Trinocular stereo system. This system has a focal length of 3.8 mm with a 66° HFOV, and a 1/3″ Sony ICX445AQ CCD sensor with the spectral sensitivity represented in Fig. 17.

The XB3 system is pre-calibrated against distortion and misalignment; the image resolution is 1280×960 and the frame rate is 16 FPS. The system has 3 cameras. The baseline between the nearest cameras is 12 cm and the baseline between the two most distant cameras is 24 cm.
Captured images were stored in RAW format and later processed for segmentation and 3D reconstruction. Image acquisition code was developed in C++ and optimized for real-time performance. Image processing was done offline using Matlab. Fig. 18 shows the experimental setup used for capturing fire fronts in outdoor unstructured scenes. The arrow points to the XB3 stereo system. The figure also shows the near-infrared cameras used in our experiments. These cameras are part of ongoing research in multi-spectral processing of fire images.

This camera gives perfectly registered images (the synchronization time is around 125 ns). The images were captured at full resolution, 1280×960. Brightness and white balance were adjusted before acquisition, and the integration time was set to 0.1 ms to avoid successive image averaging.
The stereo system was positioned at approximately 6 m from the fire front. The height of the fire was approximately 2.5 m. Fig. 15 shows that the reconstruction results are in concordance with these data. The values estimated by computation are 4.1 m for the maximum width, 2.5 m for the depth and 15 m³ for the volume. The tilt angle was estimated using 3 sections along the X-axis: we obtained 71° for the first section, 80° for the second section and 79° for the third section.

Fig. 18. Experimental setup.

Fig. 19. Modeling of the flame front.
Fig. 19 shows a modeling of the flame front where the front is divided into 3 sections of 1 m in width and depth, and where each plane has a tilt angle and a length corresponding to the ones computed from the 3D points. For each section, the dotted line in the (XZ) plane corresponds to the perimeter of the reconstructed shape obtained from the 3D points.

Fig. 20 gives the 3D surface reconstruction of the fire front at a different time during fire propagation.

Figs. 15 and 20 show that the reconstructed fire front shapes are close to the volume of the real ones. Parameters extracted from the 3D surface reconstructions are close to the parameters obtained by human observation on the ground during fire propagation. Tests were also carried out successfully in indoor scenarios.
5. Conclusion
In this work we present a new instrumentation system, based on stereovision, for the visualization and quantitative characterization of fire fronts. The proposed approach deals with various problems arising from the dynamic nature of fires, such as occlusions and color variation.

A new two-level robust fire segmentation technique is also introduced. The first level starts with a global segmentation of fire regions from the unstructured scene, and the second level segments the fire image into multiple interior regions of homogeneous colors. Both levels use the K-means clustering technique for a robust segmentation. A new matching strategy is also presented; it efficiently handles mismatches due to occlusions and missing data.

From the 3D fire points obtained by stereovision, a 3D surface rendering is computed in order to obtain a fire shape. From this model, information such as the position, orientation, dimensions and volume of the fire is computed.

Future work includes the extension of this work to very large fire fronts, the fusion of different feature detection strategies for an improved extraction of feature points, and the use of multi-spectral images in order to provide additional information to the processing steps.

Work is being conducted to optimize the image processing for real time. This step will permit the deployment of the proposed setup in operational scenarios during fire fighting, in conjunction with a fire spread model.
Acknowledgement
The present work was supported in part by the Corsican region under Grant ARPR 2007-2008. The authors would like to thank Dr. Thierry Marcelli for his help with the preparation of the figures of this paper.
References
[1] H.B. Clements, Measuring fire behavior with photography, Photogramm. Eng. Remote Sensing 49 (10) (1983) 1563–1575.
[2] G. Lu, Y. Yan, Y. Huang, A. Reed, An intelligent monitoring and control system of combustion flames, Meas. Control 32 (7) (1999) 164–168.
Fig. 20. 3D surface reconstruction of a fire front.
[3] H.C. Bheemul, G. Lu, Y. Yan, Digital imaging based three-dimensional characterization of flame front structures in a turbulent flame, IEEE Trans. Instrum. Meas. 54 (2005) 1073–1078.
[4] J. Huseynov, S. Baliga, A. Widmer, Z. Boger, 2008 special issue: an adaptive method for industrial hydrocarbon flame detection, Neural Networks 21 (2–3) (2008) 398–405.
[5] C. Abecassis-Empis, et al., Characterisation of Dalmarnock fire test one, Exp. Therm. Fluid Sci. 32 (7) (2008) 1334–1343.
[6] S.W. Hasinoff, K.N. Kutulakos, Photo-consistent reconstruction of semitransparent scenes by density-sheet decomposition, IEEE Trans. Pattern Anal. Mach. Intell. 29 (5) (2007) 870–885.
[7] G. Lu, G. Gilabert, Y. Yan, Three dimensional visualisation and reconstruction of the luminosity distribution of a flame using digital imaging techniques, J. Phys.: Conf. Ser. 15 (2005) 194–200.
[8] J.R. Martinez-de Dios, J.C. Andre, J.C. Gonçalves, B.Ch. Arrue, A. Ollero, D.X. Viegas, Laboratory fire spread analysis using visual and infrared cameras, Int. J. Wildland Fire 15 (2006) 175–186.
[9] E. Pastor, A. Àgueda, J. Andrade-Cetto, M. Muñoz, Y. Pérez, E. Planas, Computing the rate of spread of linear flame fronts by thermal image processing, Fire Saf. J. 41 (8) (2006) 569–579.
[10] L. Merino, F. Caballero, J.R. Martínez-de Dios, J. Ferruz, A. Ollero, A cooperative perception system for multiple UAVs: application to automatic detection of forest fires, J. Field Robotics 23 (3) (2006) 165–184.
[11] J.R. Martinez-de Dios, B.C. Arrue, A. Ollero, L. Merino, F. Gomez-Rodriguez, Computer vision techniques for forest fire perception, Image Vision Comput. 26 (4) (2008) 550–562.
[12] O. Faugeras, Three-Dimensional Computer Vision, MIT Press, 1993.
[13] R. Horaud, O. Monga, Vision par Ordinateur, Outils Fondamentaux, 2nd ed., Hermes Science Publication, 1995.
[14] R. Hartley, A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, 2003.
[15] A. Del Bue, L. Agapito, Non-rigid stereo factorization, Int. J. Comput. Vision 66 (2) (2006) 193–207.
[16] A. Del Bue, X. Llado, L. Agapito, Non-rigid face modelling using shape priors, in: Analysis and Modelling of Faces and Gestures (AMFG), Springer LNCS 3723, 2005.
[17] A. Del Bue, L. Agapito, Non-rigid 3D shape recovery using stereo factorization, in: Asian Conf. Comput. Vision (ACCV2004) 1 (2004) 25–30.
[18] W.B. Ng, Y. Zhang, Stereoscopic imaging and reconstruction of the 3D geometry of flame surfaces, Exp. Fluids 34 (2003) 484–493.
[20] Z. Zhang, Determining the epipolar geometry and its uncertainty: a review, Int. J. Comput. Vision 27 (2) (1998) 161–198.
[21] E. Trucco, A. Verri, Introductory Techniques for 3-D Computer Vision, Prentice Hall, 1998.
[22] Y. Bouguet, Camera Calibration Toolbox for Matlab, http://www.vision.caltech.edu/bouguetj/calib_doc/index.html.
[23] F. Devernay, Vision Stéréoscopique et Propriétés Différentielles des Surfaces, PhD Thesis, École Polytechnique, 1997.
[24] R.C. Gonzales, R.E. Woods, S.L. Eddins, Digital Image Processing using Matlab, Pearson Prentice Hall, 2004, pp. 237–240.
[25] C. Harris, M. Stephens, A combined corner and edge detector, in: Proceedings of the 4th Alvey Vision Conference, 1988, pp. 189–192.
[26] Point Grey Research, http://www.ptgrey.com.
[27] S. Westland, A. Ripamonti, Computational Colour Science, John Wiley, 2004.
[28] A. Trémeau, C. Fernandez-Maloigne, P. Bonton, Image Numérique Couleur, Dunod, 2004.
[29] N. Amenta, M. Bern, M. Kamvysselis, A new Voronoi-based surface reconstruction algorithm, in: Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, 1998, pp. 415–421.
[30] C.B. Barber, D.P. Dobkin, H.T. Huhdanpaa, The Quickhull algorithm for convex hulls, ACM Trans. Math. Software 22 (4) (1996) 469–483.
[31] R.C. Rothermel, A Mathematical Model for Predicting Fire Spread in Wildland Fuels, USDA Forest Service Research Paper INT-115, 1972.
[32] D.X. Viegas, Slope and wind effects on fire propagation, Int. J. Wildland Fire 13 (2004) 143–156.
[33] R.O. Weber, Modelling fire spread through fuel beds, Prog. Energy Combust. Sci. 17 (1990) 67–82.