
Toward low-cost 3D automatic pavement distress surveying: the close range photogrammetry approach

Mahmoud Ahmed, C.T. Haas, and Ralph Haas

Abstract: The management of road networks requires accurate information on surface condition. Automated methods have been developed to collect road surface data, based on digital imaging systems with or without laser profilers. While these represent state-of-the-art technology, the equipment is expensive and there are issues of accuracy and robustness. In a broad sense, the issues extend to whether user needs can be met with alternative, less expensive technology, and whether this can be accomplished with modifications and (or) integrating a new technology with the existing equipment. This paper addresses the foregoing issues as a fundamental research and development question. It suggests that photogrammetric techniques have the potential to provide a unique and practical answer to the question. An overview and detailed formulation of the photogrammetric technology is described, with examples from experiments on road surfaces. The technical and economic advantages of the new technology are pointed out, including its potential to use consumer-grade cameras.

Key words: transportation and urban planning, highways, airports.

Résumé : La gestion des réseaux routiers requiert de l'information précise sur l'état des surfaces. Des méthodes automatisées ont été développées afin de colliger des données sur les surfaces de roulement; elles sont basées sur les systèmes d'imagerie numérique avec ou sans profilage laser. Bien que ces méthodes représentent la technologie de pointe, l'équipement est dispendieux et leur précision et leur robustesse sont remises en question. Au sens large, les questions portent sur les besoins des utilisateurs, s'ils peuvent être rencontrés en utilisant une autre technologie moins dispendieuse, et si cela peut être réalisé par des modifications et (ou) en intégrant une nouvelle technologie avec les équipements existants. Cet article traite de toutes ces questions sous forme d'une recherche fondamentale et d'une question à développement. Les techniques photogrammétriques pourraient potentiellement fournir une réponse unique et pratique à la question. Un aperçu et une formulation détaillée de la technologie photogrammétrique sont accompagnés d'exemples d'expériences sur des surfaces de roulement. Les avantages techniques et économiques de la nouvelle technologie sont soulignés, dont le potentiel d'utiliser des caméras de qualité grand public.

Mots‐clés : planification des transports et urbaine, autoroutes, aéroports.

[Traduit par la Rédaction]

Introduction

A study conducted in the USA on pavement surface distress data collection indicated that data on cracks are collected automatically in 15 states, and 5 states had no automatic collection. Only 2 states evaluate all road lanes. Others may survey one outer lane or two lanes in each direction of multi-lane roadways. Among the common problems that states face is finding a system that provides full automation for the crack detection and classification stages. The accuracy of current automated systems is also considered an issue: "the analysis of ratings showed that those of the rating crews were closer to the ground truth than the automated equipment ratings were" (Mullis et al. 2005); "Rutting should be evaluated with laser or ultrasonic technology"; "It was observed that ravelling, flushing and potholes were not detected by digital technology under semi-automated and automated analysis" (Chamorro and Tighe 2010). The common techniques of analysing automatic 2D image-based outputs may produce overestimation (Timm and Mcqueen 2004), and may estimate lower severity levels in some cases, as in Di Mascio et al. (2007). Automated techniques do not necessarily reproduce the same PCI values estimated using manual techniques. These differences can lead to substantially different pavement treatment recommendations, fund needs, and related information (Albitres et al. 2007). "Most automated systems are more accurately considered semi-automated systems" (Albitres et al. 2007). The effects of shadows and low or extra illumination, among other factors, also need more research. In essence, the image processing techniques use approaches based on color or gray-shading statistical data such as the mean, standard deviation, variance,

Received 12 April 2011. Revision accepted 11 August 2011. Published at www.nrcresearchpress.com/cjce on 23 November 2011.

M. Ahmed, C.T. Haas, and P.R. Haas. University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada.

Corresponding author: Mahmoud Ahmed (e-mail: [email protected]).

Written discussion of this article is welcomed and will be received by the Editor until 30 April 2012.


Can. J. Civ. Eng. 38: 1301–1313 (2011) doi:10.1139/L11-088 Published by NRC Research Press


correlation or cross-correlation, e.g., for feature detection and image classification, or the local relative relationship between neighbouring pixels, e.g., for region growing. The robust detection and modelling needed cannot be achieved directly through 2D image space analysis alone.

An array of laser profilers for 3D data acquisition is an alternative, but it has limitations for enhancing or replacing image-based output and it is expensive.

The basic purpose of this paper is to describe the research involved with a new and innovative alternative for pavement surface condition surveying using close-range photogrammetry to generate 3D surfaces. While photogrammetry is well established in applications such as satellite, aerial, and terrestrial imagery, it has only been demonstrated very recently that 3D surface reconstruction of roads can be accomplished based on a combined geometric and mathematically robust solution (Ahmed and Haas 2010).

Motivation for the research

There is a need for an accurate and cheaper alternative to current working systems based on either image analysis or laser technologies. Despite the presence of photogrammetry as a robust and accurate image-based approach for some time, most image-based techniques in the literature related to pavement distress data collection are not stereo-vision approaches. There is no commercial system that uses the photogrammetric approach as a tool for pavement distress data collection. Dramatic advances in computing speed, parallel processing, and camera resolution have made close-range photogrammetry much more feasible for exploitation in many new applications (Mikhail et al. 2001; Jiang et al. 2008).

In this research, the photogrammetric approach is proposed as accurate, reliable, and economical. Camera positions are neither restricted to vertical nor located in one plane parallel to the pavement surface. The proposed approach not only overcomes the disadvantages of traditional image-based approaches, but also offers a low-cost alternative in the long term. It can also work as an integral module of a laser-based technology in the short term. It can work using one, two, or more moving cameras. The output preserves the texture and pictorial data.

In this research the pavement distress data are collected, modeled, and processed in 3D space. One of the main advantages over traditional 2D image processing techniques is that geometrical relationships and geometrical constraints are robustly incorporated in one and the same mathematical model, which overcomes much of the ambiguity inherent in traditional image processing, e.g., perspective projection distortions. The final product is thus not generated from an erroneous image space but from the 3D stereo model, using at least two or more images taken from different points in space. This allows 3D reconstruction of real-world objects, in this case the pavement surface with all distresses such as cracks, rutting, etc.

The details of designing a system based on one, two, or more moving cameras are beyond the scope of this paper. Instead, a theoretical solution for any two or more overlapped images taken by one or more cameras will be presented, together with experimental work using images taken with one moving camera.

Related work

Manual data collection

Until the early 1980s, the collection of pavement distress information was conducted visually (Hicks and Mahoney 1981) by a walking rating technician following certain guidelines. It is reported that such "manual survey of pavements still represents the most widely used means of inspecting and evaluating pavements" (Mraz et al. 2006). Manual assessment is error prone, expensive, time consuming, tedious, sometimes dangerous, and subjective. It has a high degree of variability and inconsistency between raters, as found in Timm and Mcqueen (2004).

Automated image-based data collection

The first study known to date in North America in this area is described by Haas et al. (1984), in which a camera was mounted on a vehicle and images were recorded in analogue form. The results encouraged further developments and refinement of approaches. Since video image processing to date has only provided 2D data about cracks (depth information has not been estimated), laser scanning technologies began to be investigated in the late 1980s, among other technologies, to measure 3D data (Haas and Hendrickson 1991). A human-assisted machine vision algorithm was developed for highly accurate crack mapping and representation in an automated road maintenance machine (Kim and Haas 2002).

Automated imaging can be categorized as analog and digital; the use of the analog technique is decreasing with time as digital technology advances dramatically (Mcghee 2004; Di Mascio et al. 2007). Digital imaging has enabled many new automated techniques for distress classification. However, these techniques are mostly based on digital image analysis. Digital images are captured and formed using either area-scan or line-scan technologies. Area-scan uses 2D arrays of sensor pixels to capture each image. Line-scanning uses only one line of sensor pixels (1D) at each snapshot; the image is formed line by line due to relative movement of the sensor with respect to the pavement surface. Line-scanning requires a frame grabber and a control unit to control and synchronize the image formation with the speed of the vehicle, and illumination is used to reduce shadow effects. Generally, special precautions must be taken with line scanning (Kertész et al. 2008). For example, a severe systematic error arises when any shadow from the vehicle falls onto the pavement surface: it may appear as a continuous shadow in the whole image and make it useless (Di Mascio et al. 2007). Most of the current image-based crack detection systems rely mainly on 2D image space analysis techniques to extract and classify cracks. The 2D image space analysis uses several classification and feature extraction approaches, e.g., edge detection, line detection, invariant moments (IM), neural networks, genetic algorithms (GA), multi-layer perceptron (MLP), self-organization map (SOM), rule-based pattern recognition, etc.; a review can be found, for example, in Rababaah et al. (2005). Such systems are semi-automatic and expensive; they are based on interactive multi-stage approaches, so the operator may need to define one or more threshold values during noise suppression, seed selection, edge point detection, edge element connection and line following, region growing, and image segmentation. However, autonomous image segmentation based on fuzzy logic and context-driven rule techniques, as in Ahmed (2000, 2002), has not been tested with close-range or pavement images. Further details about related technologies, such as laser, acoustic, and IR sensors used for roughness measurements, can be found in Mcghee (2004). It is likely that the success and economy of many of these approaches would be enhanced by applying photogrammetric techniques to generate the raw 2D and 3D image data.

Structured lighting is a broadly recognized technique

(Slama 1980) to overcome line-scan imaging problems, as explained in the next section. A laser line projector is used to generate structured light as lines. The distortion of the line geometry is used to detect the variations of the pavement surface at successive sampled profiles, hence allowing the generation of a good approximation of the 3D pavement surface (Kertész et al. 2008).

An early prototype four-camera system with two computers supported by parallel processing technology for pavement distress surveying is described in Wang (2004), in which each of two cameras was used to cover half a lane width, approximately 2 m. Stereo-vision was in its early stages in this work; with time, the same system was upgraded to two line-scan cameras. The four-camera setup was necessary to cover a complete lane surface plus a portion of the shoulder area. The system detected and classified cracks mainly by analyzing the 2D images taken by the two cameras. This system has several limitations, very intensive computations, and assumptions that can be questioned.
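The triangulation idea behind such structured-lighting profilers can be reduced to a minimal sketch. The flat-plane geometry, the function name, and every numeric value below (pixel size, image-to-ground scale, laser angle) are illustrative assumptions, not parameters of any system cited above.

```python
import math

def depth_from_line_shift(shift_px, px_size_mm, ground_scale, laser_angle_deg):
    """Recover a surface height change from the lateral shift of a projected
    laser line, assuming a single laser plane at a known angle and a fixed
    image-to-ground scale (a deliberately simplified model)."""
    shift_mm = shift_px * px_size_mm * ground_scale   # image shift -> ground shift
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# A 3 px line shift, 0.1 mm pixels, 10x image-to-ground scale, 45 degree plane:
depth = depth_from_line_shift(3, 0.1, 10, 45)
print(round(depth, 3))   # 3.0 (mm)
```

A real profiler would also model lens distortion and the exact projector-camera geometry; the point here is only that line displacement maps directly to surface height.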

Commercial image-based pavement survey systems

It is not possible to list all the systems, both commercial and experimental, found in the literature and on web sites. While these systems are extensive, used worldwide, and very significant to the advancement of pavement engineering technology and measurement, it is only feasible to mention a few illustrative samples; see Table 1 for a very short list. However, most image-based systems on the market can be considered variations of the samples mentioned in Table 2.

Limitations of existing automated image-based crack detection systems and methods

Most, if not all, commercial automatic systems available are based on image analysis. None of these systems claim to use photogrammetric techniques. Two-dimensional image-based systems lack robust sensor modelling and underutilize the actual geometrical relationship between the 2D image space, camera space, and real 3D road surface space. Instead, such systems rely heavily on global or in-context 2D pixel data analysis. These have several drawbacks as follows: (a) false crack detection can occur in 2D image analysis due to lane marks, shadows, different textures, different brightness of road lanes (lane joints), noise, and tree leaves and branches; (b) inaccurate 2D locations can occur due to known errors such as perspective projection, lens distortions, differences in relative elevations, and noise; (c) depth of cracks cannot be measured; (d) an accurate 3D surface of the road is not generated; and (e) special devices (special lights, parallel computers, control units, frame grabber, etc.) are required.
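Drawback (a), false crack detection, is easy to reproduce with a toy version of the global intensity threshold that underlies much 2D analysis; the gray values and threshold below are invented purely for illustration.

```python
import numpy as np

# One row of gray values: a narrow dark crack (index 2) and a wide
# shadow band (indices 5-8) both fall below a global threshold.
row = np.array([200, 198, 60, 199, 201, 90, 88, 91, 89, 202], dtype=float)
threshold = 120.0

detected = row < threshold          # "crack" pixels according to 2D analysis
print(np.flatnonzero(detected))     # [2 5 6 7 8] -- shadow flagged as crack
```

Intensity alone cannot separate the two cases; 3D surface data can, because a shadow has no depth.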

Development and applications of close-range photogrammetry

The art and science of photogrammetry, or metrophotography as it was originally termed by its inventor Laussedat in 1851, has even deeper roots in history; see Ahmed and Haas (2010) for further details. Photogrammetry was developed to find correct metrical representations of an object from ordinary photographs. With such a rich history, it may seem curious that photogrammetry has been ignored in favour of 2D image analysis and direct 3D imaging techniques in the last three decades; however, recent technical advances toward inexpensive computing power and high-resolution digital cameras have converged with traditional photogrammetric

Table 1. Data collection technology in commercial image-based pavement survey systems.

- Applied Research Associates, Inc. (ARA): camera and laser profiler (http://www.ara.com/Capabilities/c_asset_management.htm)
- Dynatest: two video cameras and laser profiler (http://www.dynatest.com/software-sub-survey-data-collection.php)
- Greenwood engineering: line-scan camera with special light system and laser profiler (www.greenwood.dk)
- Digital Imaging System, International Cybernetics Corporation: progressive scan interline video camera, infrared laser, and lighting system (http://www.internationalcybernetics.com/imagingvehicle.htm)
- Fugro PMS: video cameras and laser profiler (Fugro 2009)
- Road assessment vehicle (RAV): video cameras (http://www.wdm.co.uk/surveying)
- Texas Department of Transportation (TxDOT) Vcrack system: line-scan CCD (Xu 2007; Huang and Xu 2006)
- Pure data: video cameras (http://www.puredata.com.au/index.php?videosystem)
- Pathrunner, Pathway Services, Inc.: still and video cameras and laser (http://www.pathwayservices.com/)
- Fugro Roadware Group, Inc., Automatic Road Analyzer (ARAN): dual video cameras, laser, panoramic camera (Fugro Roadware 2010)
- CSIRO's RoadCrack: video camera, custom designed and manufactured hardware (http://www.csiro.au/solutions/psaa.html)
- Florida Department of Transportation (FDOT), Multipurpose Survey Vehicle (MPSV): line scan camera and special light (Mraz et al. 2006)


theory to make photogrammetry a feasible approach for many applications. Currently, close-range photogrammetry is already applied in many diverse applications in industry, archaeology, architecture, automotive, aerospace, and forensic and car accident reconstruction. It also has applications in biomechanics, chemistry, and biology. Applications are also being explored by researchers in automated construction progress tracking (El-Omari and Moselhi 2008).

Metric vs. non-metric camera

Metric as well as non-metric cameras are used in close-range photogrammetry. Metric cameras include single metric cameras and stereo-metric cameras. A major characteristic of metric cameras is that they are calibrated for lens distortions, and the most probable values of the inner orientation parameters are also estimated from calibration. Moreover, a number of fiducial marks are permanent in the camera design and accordingly appear in the images; these help in the automation and accuracy of estimating the inner orientation parameters and maintain a fixed image coordinate system for all images. In contrast to the metric camera, the amateur (non-metric) camera has un-calibrated inner orientation parameters, which are necessary data for most photogrammetric mathematical treatments, as well as possible instability of the inner orientation parameters and a lack of fiducial marks. Low-cost consumer cameras with changeable lenses add challenges, and zoom lenses add more challenges for robust engineering work because the inner orientation parameters change along the whole zoom range.
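The role of the inner orientation parameters can be sketched as the mapping from measured pixel positions to calibrated photo coordinates. The sensor size, principal-point offset, and radial term below are hypothetical values, not those of any real camera.

```python
def to_image_coords(col, row, width, height, px_mm, xo_mm, yo_mm, k1):
    """Map a pixel (col, row) to photo coordinates in mm: shift to the sensor
    centre, subtract the calibrated principal-point offset (xo, yo), then
    remove a single radial lens-distortion term k1 (illustrative model)."""
    x = (col - width / 2) * px_mm - xo_mm
    y = (height / 2 - row) * px_mm - yo_mm
    r2 = x * x + y * y
    return x * (1 + k1 * r2), y * (1 + k1 * r2)

# Hypothetical 4000 x 3000 sensor with 5 micron pixels and a small offset:
x, y = to_image_coords(2100, 1400, 4000, 3000, 0.005, 0.02, -0.01, 0.0)
```

For a metric camera these parameters come from calibration certificates; for a consumer camera they must be estimated, which is exactly the calibration burden discussed above.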

Photogrammetric data processing

Two main mathematical conditions are used in most stereo close-range applications: the collinearity condition equation and the coplanarity condition equation. The first one models

Table 2. Characteristics of sample commercial image-based pavement survey systems.

CSIRO's RoadCrack, produced by CSIRO Manufacturing Science & Technology (CSIRO 2008): (1) uses CCD cameras to capture two images of a 500 mm wide wheel path with a one millimetre resolution; (2) uses a parallel computer and special hardware; (3) performs all data analysis on the fly in the field, and no image data is kept after surveying; (4) only a quality report is generated by the system at the end; (5) the analysis algorithm is proprietary intellectual property of CSIRO; (6) to re-check any part of the report a further manual inspection will be required; (7) the system relies mainly on special illumination to overcome shadow problems; and (8) RoadCrack guarantees only a minimum of 10% distance coverage (Huang and Xu 2006).

Automatic Road Analyzer (ARAN), Fugro Roadware (previously Roadware Group, Inc.): (1) data analysis is conducted with WiseCrax, a crack detection system; (2) uses two analog video cameras which are synchronized with strobe illumination, and the data is recorded as a continuous series of non-overlapping images; (3) each image covers about half the width of a pavement lane (1.5 m by 4 m); and (4) images are processed off-line at the office with operator assistance (Fugro 2009).

Fugro PMS (previously Pavement Management Services from Australia): (1) uses no special illumination system and (2) the image produced may have shadows cast across the pavement surface, which presents challenges (Fugro 2009).

Vcrack, used by the Texas Department of Transportation (TxDOT): (1) the system has been used by TxDOT for pavement inspections since 2003 (Xu 2007); (2) it uses a line-scan CCD camera and a frame grabber to cover the full width of the pavement (3.05 m) and 100% of the pavement distance while traveling at speeds of 5 to 112 km/h; (3) the line-scan camera registers one-dimensional lines of data repeatedly; and (4) the camera movement is controlled by a frame grabber.

Fig. 1. Coplanarity condition at one point (left) and at all control and check points (right).


the relationship between any point in image space, the camera perspective centre, and the same point in the object space. The second condition equation models the relationship between any point in one image space, its conjugate in another overlapping image space, the actual point in object space, and the two perspective centres of the two images in space (Fig. 1). Data processing can be considered as the mathematical transformation between an image point in one rectangular coordinate system (image space) and an object point in another rectangular coordinate system (object space). This basic mathematical concept is valid for all applications of photogrammetry (e.g., terrestrial, aerial, non-topographic photogrammetry) using any sensing device (frame camera, panoramic camera, etc.) to record directional information to objects in any medium (air, water, etc.).

Collinearity condition

This mathematical condition assumes that the light ray is a straight line at the moment of exposure; i.e., the exposure station and the image point, together with the object point, must all lie on one and the same straight line (ray). The mathematical form of this condition can be represented as a system of three equations in matrix form as follows:

[1]
\[
\begin{bmatrix} x - x_o \\ y - y_o \\ 0 - f \end{bmatrix}
= \lambda
\begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix}
\begin{bmatrix} X - X_c \\ Y - Y_c \\ Z - Z_c \end{bmatrix}
\]

The above equation is a seven-parameter transformation model that scales, translates, and rotates one vector from real-world space to image space. Dividing the first two rows (equations) by the third and re-arranging terms cancels the scale factor and provides the following two equations for each point:

[2] \( x = x_o - f\,\dfrac{m_{11}(X - X_c) + m_{12}(Y - Y_c) + m_{13}(Z - Z_c)}{m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)} \)

[3] \( y = y_o - f\,\dfrac{m_{21}(X - X_c) + m_{22}(Y - Y_c) + m_{23}(Z - Z_c)}{m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)} \)

where x, y, 0 = image coordinates of any point; (x_o, y_o, f) = interior orientation parameters (coordinates of the principal point and principal distance); m_11, …, m_33 = elements of the rotation matrix; X_c, Y_c, Z_c = coordinates of the exposure station; X, Y, Z = ground coordinates of the imaged point; and λ = scale factor.

The nonlinearity of the equations requires use of a Taylor series approximation. In this case, we can reorganize into the following form:

[4] \( F_1 = x - x_o + f\,\dfrac{m_{11}(X - X_c) + m_{12}(Y - Y_c) + m_{13}(Z - Z_c)}{m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)} = 0 \)

[5] \( F_2 = y - y_o + f\,\dfrac{m_{21}(X - X_c) + m_{22}(Y - Y_c) + m_{23}(Z - Z_c)}{m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)} = 0 \)

Accordingly, the condition equation can be written in linearized form as

[6] \( AV + BD = F_o \)

where

\[
A = \begin{pmatrix}
\dfrac{\partial F_1}{\partial x} & \dfrac{\partial F_1}{\partial y} \\
\dfrac{\partial F_2}{\partial x} & \dfrac{\partial F_2}{\partial y}
\end{pmatrix}
\]

\( V^t = (V_x \; V_y) \) = residuals, \( B = (B_1 \; B_2 \; B_3 \; B_4) \),

\[
B_1 = \begin{pmatrix}
\dfrac{\partial F_1}{\partial x_o} & \dfrac{\partial F_1}{\partial y_o} & \dfrac{\partial F_1}{\partial f} \\
\dfrac{\partial F_2}{\partial x_o} & \dfrac{\partial F_2}{\partial y_o} & \dfrac{\partial F_2}{\partial f}
\end{pmatrix}
\qquad
B_2 = \begin{pmatrix}
\dfrac{\partial F_1}{\partial X_c} & \dfrac{\partial F_1}{\partial Y_c} & \dfrac{\partial F_1}{\partial Z_c} \\
\dfrac{\partial F_2}{\partial X_c} & \dfrac{\partial F_2}{\partial Y_c} & \dfrac{\partial F_2}{\partial Z_c}
\end{pmatrix}
\]

\[
B_3 = \begin{pmatrix}
\dfrac{\partial F_1}{\partial \omega} & \dfrac{\partial F_1}{\partial \varphi} & \dfrac{\partial F_1}{\partial \kappa} \\
\dfrac{\partial F_2}{\partial \omega} & \dfrac{\partial F_2}{\partial \varphi} & \dfrac{\partial F_2}{\partial \kappa}
\end{pmatrix}
\qquad
B_4 = \begin{pmatrix}
\dfrac{\partial F_1}{\partial X} & \dfrac{\partial F_1}{\partial Y} & \dfrac{\partial F_1}{\partial Z} \\
\dfrac{\partial F_2}{\partial X} & \dfrac{\partial F_2}{\partial Y} & \dfrac{\partial F_2}{\partial Z}
\end{pmatrix}
\]

\( D^t = (dx_o \; dy_o \; df \; dX_c \; dY_c \; dZ_c \; d\omega \; d\varphi \; d\kappa \; dX \; dY \; dZ) \) = corrections; and \( F_o \) = constant term.

For any number of points \( i \) on the same image, the equation \( AV + BD = F_o \) becomes

[7]
\[
\begin{pmatrix}
A_1 & 0 & \cdots & 0 \\
0 & A_2 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & A_i
\end{pmatrix}
\begin{pmatrix} V_1 \\ V_2 \\ \vdots \\ V_i \end{pmatrix}
+
\begin{pmatrix} B_1 \\ B_2 \\ \vdots \\ B_i \end{pmatrix}
\begin{pmatrix} D_1 \\ D_2 \\ \vdots \\ D_i \end{pmatrix}
=
\begin{pmatrix} F_1 \\ F_2 \\ \vdots \\ F_i \end{pmatrix}_o
\]
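The block structure of eq. [7] can be assembled numerically as a shape check; the random numbers below merely stand in for the actual partial derivatives.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                                            # i = 3 points on one image

A_blocks = [rng.normal(size=(2, 2)) for _ in range(n)]    # per-point A_i
B_blocks = [rng.normal(size=(2, 12)) for _ in range(n)]   # per-point B_i

# Block-diagonal A acting on the stacked residuals V, plus the per-point
# products B_i D_i, gives the stacked constant terms F_o of eq. [7].
A = np.zeros((2 * n, 2 * n))
for k, Ak in enumerate(A_blocks):
    A[2 * k:2 * k + 2, 2 * k:2 * k + 2] = Ak

V = rng.normal(size=2 * n)
D = [rng.normal(size=12) for _ in range(n)]
Fo = A @ V + np.concatenate([Bk @ Dk for Bk, Dk in zip(B_blocks, D)])
print(Fo.shape)    # (6,) -- two condition equations per point
```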


The model can be expanded further to cover any number of images.
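As a numerical sketch of the collinearity condition, eqs. [2] and [3] can be evaluated directly for a camera looking straight down at the pavement. The angle convention, focal length, and camera height below are illustrative assumptions, not values from the experiments described later.

```python
import numpy as np

def rotation(omega, phi, kappa):
    """Rotation matrix M built from the three attitude angles; the sequential
    order R(kappa) R(phi) R(omega) is one common photogrammetric convention."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Ro = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Rp = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rk = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rk @ Rp @ Ro

def collinearity_xy(M, f, xo, yo, Xc, Yc, Zc, X, Y, Z):
    """Eqs. [2]-[3]: project the ground point (X, Y, Z) into image space."""
    dX, dY, dZ = X - Xc, Y - Yc, Z - Zc
    den = M[2, 0] * dX + M[2, 1] * dY + M[2, 2] * dZ
    x = xo - f * (M[0, 0] * dX + M[0, 1] * dY + M[0, 2] * dZ) / den
    y = yo - f * (M[1, 0] * dX + M[1, 1] * dY + M[1, 2] * dZ) / den
    return x, y

# 35 mm camera, 2 m above the pavement, looking straight down (all in mm):
M = rotation(0.0, 0.0, 0.0)
x, y = collinearity_xy(M, f=35.0, xo=0.0, yo=0.0,
                       Xc=0.0, Yc=0.0, Zc=2000.0, X=100.0, Y=50.0, Z=0.0)
print(x, y)    # 1.75 0.875 -- image coordinates in mm
```

The same two functions, linearized as in eqs. [4] to [6], are what a least-squares solution iterates over.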

Coplanarity condition

This condition forces any two conjugate rays on two overlapped images, coming from the same object point, and the base line between these two images to be coplanar in the analytical solution. The condition can be expressed by equating to zero the scalar triple product of three vectors, where two of the vectors are the conjugate rays (left vector p_L and right vector p_R) and the third is the base line B:

[8]
\[
F_i = \mathbf{B} \cdot (\mathbf{p}_L \times \mathbf{p}_R)
= \begin{vmatrix}
B_X & B_Y & B_Z \\
x_L & y_L & z_L \\
x_R & y_R & z_R
\end{vmatrix}_i = 0
\]

where the base line \( \mathbf{B} = [B_X \; B_Y \; B_Z]^t \), and \( B_X, B_Y, B_Z \) are the components of the base line in the directions X, Y, and Z; these three components represent the differences of coordinates between the two exposure stations.

The left vector:
\[
\mathbf{p}_L =
\begin{bmatrix}
r_{11} & r_{12} & r_{13} \\
r_{21} & r_{22} & r_{23} \\
r_{31} & r_{32} & r_{33}
\end{bmatrix}^t_L
\begin{bmatrix} x - x_o \\ y - y_o \\ 0 - f \end{bmatrix}_L
=
\begin{bmatrix} x \\ y \\ z \end{bmatrix}_L
\]

The right vector:
\[
\mathbf{p}_R =
\begin{bmatrix}
r_{11} & r_{12} & r_{13} \\
r_{21} & r_{22} & r_{23} \\
r_{31} & r_{32} & r_{33}
\end{bmatrix}^t_R
\begin{bmatrix} x - x_o \\ y - y_o \\ 0 - f \end{bmatrix}_R
=
\begin{bmatrix} x \\ y \\ z \end{bmatrix}_R
\]

where the two \( 3 \times 3 \) matrices (subscripts L and R) are the left and right rotation matrices

and where (x_oL, y_oL, f_L) = interior orientation parameters for the left image; (x_oR, y_oR, f_R) = interior orientation parameters for the right image; and (x_L, y_L, x_R, y_R) = left and right image coordinates of the same object point.

Similar to eq. [6], the linearized form for one point would be \( AV + BD = F_o \), where

\[
A = \begin{pmatrix}
\dfrac{\partial F_i}{\partial x_L} & \dfrac{\partial F_i}{\partial y_L} & \dfrac{\partial F_i}{\partial x_R} & \dfrac{\partial F_i}{\partial y_R}
\end{pmatrix}, \qquad
V^t = (V_{x_L} \; V_{y_L} \; V_{x_R} \; V_{y_R}), \qquad
B = (B_1 \; B_2 \; B_3 \; B_4 \; B_5 \; B_6)
\]

with

\[
B_1 = \begin{pmatrix} \dfrac{\partial F_i}{\partial x_{oL}} & \dfrac{\partial F_i}{\partial y_{oL}} & \dfrac{\partial F_i}{\partial f_L} \end{pmatrix}
\quad \text{and} \quad
B_4 = \begin{pmatrix} \dfrac{\partial F_i}{\partial x_{oR}} & \dfrac{\partial F_i}{\partial y_{oR}} & \dfrac{\partial F_i}{\partial f_R} \end{pmatrix}
\]

\[
B_2 = \begin{pmatrix} \dfrac{\partial F_i}{\partial X_{cL}} & \dfrac{\partial F_i}{\partial Y_{cL}} & \dfrac{\partial F_i}{\partial Z_{cL}} \end{pmatrix}
\quad \text{and} \quad
B_5 = \begin{pmatrix} \dfrac{\partial F_i}{\partial X_{cR}} & \dfrac{\partial F_i}{\partial Y_{cR}} & \dfrac{\partial F_i}{\partial Z_{cR}} \end{pmatrix}
\]

\[
B_3 = \begin{pmatrix} \dfrac{\partial F_i}{\partial \omega_L} & \dfrac{\partial F_i}{\partial \varphi_L} & \dfrac{\partial F_i}{\partial \kappa_L} \end{pmatrix}
\quad \text{and} \quad
B_6 = \begin{pmatrix} \dfrac{\partial F_i}{\partial \omega_R} & \dfrac{\partial F_i}{\partial \varphi_R} & \dfrac{\partial F_i}{\partial \kappa_R} \end{pmatrix}
\]

One may notice that B_1 and B_4 are partial derivatives with respect to the inner orientation elements (left and right); B_2 and B_5 are partial derivatives with respect to the outer translation elements (left and right); and B_3 and B_6 are partial derivatives with respect to the outer rotation elements (left and right).

[9]

$$D^t = (D_L^t \;\; D_R^t)$$

where

$$D_L^t = (dx_{oL} \; dy_{oL} \; df_L \; dX_{cL} \; dY_{cL} \; dZ_{cL} \; d\omega_L \; d\varphi_L \; d\kappa_L)$$

$$D_R^t = (dx_{oR} \; dy_{oR} \; df_R \; dX_{cR} \; dY_{cR} \; dZ_{cR} \; d\omega_R \; d\varphi_R \; d\kappa_R)$$

and F_o = constant term.
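As an illustration of eq. [8], the coplanarity residual can be evaluated directly. The sketch below (hypothetical helper names, assuming NumPy) builds each image ray by rotating the vector (x − x_o, y − y_o, −f) into object space and measures how far the two rays and the base line are from coplanarity:

```python
# Sketch of the coplanarity residual of eq. [8] (assumption: NumPy;
# helper names are hypothetical, not from the paper).
import numpy as np

def image_ray(M, x, y, xo, yo, f):
    """Rotate the image-space vector (x - xo, y - yo, -f) into object
    space with the transposed rotation matrix, as in the p_L / p_R
    definitions above."""
    return M.T @ np.array([x - xo, y - yo, -f])

def coplanarity_residual(B, pL, pR):
    """Scalar triple product B . (pL x pR); zero when the base line and
    the two conjugate rays are coplanar."""
    return float(np.dot(B, np.cross(pL, pR)))
```

For a correctly matched point pair the residual vanishes (up to noise); in the least squares adjustment it is this residual that is driven toward zero.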

Idealized and normalized image generation using epipolar resampling

A simple calibration process is applied using a grid of points printed on a sheet of paper, and a group of images are

1306 Can. J. Civ. Eng. Vol. 38, 2011


taken from the four sides of the grid sheet so that at each side the camera orientation is changed by rotating the camera to the right and to the left. The calibration can be conducted either before or after taking the images. Using the camera calibration data, the PM-Scanner™ software (Photomodeler 2009) can transform the real images into ideal images that are free of lens distortion, have zero eccentricity of the principal point, and have square pixels. This step is useful for many applications and is generally applied before generating normalized images; however, it is not mandatory, as the transformation to normalized images can be done directly, as will be shown in the next section. The normalized images are generated so that corresponding points in overlapping images fall along one and the same column (or row). The two new images are generated so that they fall in one and the same plane parallel to the base line connecting the two perspective centres (Kraus 1993). Consequently, stereo matching becomes simpler and faster, and the search space becomes one dimensional. Further details are explained in the next section. Similar results can be achieved using other software packages, e.g., LPS™ (ERDAS 2009), Shape Scan_SM™ (Shapecapture 2009), iWitnessPRO™ (Iwitness 2009), and many others. In some software, e.g., PM-Scanner™, the normalization transformation can be done on the fly during generation of point clouds.
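The idealization step (removing lens distortion) can be sketched with the standard Brown-Conrady model, which uses radial terms k1, k2 and decentring terms p1, p2 like those reported in Table 3; the exact sign and scaling conventions used by the commercial software are an assumption here:

```python
# Sketch of distortion removal, assuming the standard Brown-Conrady
# model; the software's exact conventions for k1, k2, p1, p2 may
# differ in sign or scaling.
def idealize_point(xd, yd, xo, yo, k1, k2, p1, p2):
    """Correct a distorted image point (xd, yd) given the principal
    point (xo, yo), radial terms k1, k2, and decentring terms p1, p2."""
    x, y = xd - xo, yd - yo            # centre on the principal point
    r2 = x * x + y * y                 # squared radial distance
    radial = k1 * r2 + k2 * r2 * r2    # radial distortion polynomial
    dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dy = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return xd + dx, yd + dy
```

Applying this mapping to every pixel (with interpolation of the gray values) yields the idealized image.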

Normalized image generation (epipolar resampling)

Normalized images can be generated with different approaches (Schenk 1990; Cho et al. 1992; Mikhail et al. 2001; Kraus 2007); however, the core concept is the same. Normalized image generation is an important transformation procedure that enables automation of stereo image matching, so that detecting conjugate points or features in two overlapping images becomes a simpler, systematic, one-dimensional search along a known line. The procedure also overcomes matching ambiguities in image space. Image normalization can be conducted after the step of idealizing the images, i.e., after removing the eccentricity of the camera axis so that the principal point coordinates become (0, 0), removing the lens distortion effect, and making the pixels square in shape. However, image normalization can also be conducted directly, without image idealization, using the same interior orientation parameters as the original image. Resampling the tilted idealized images requires generating new images whose planes are oriented parallel to the base line connecting the two perspective centres of the overlapped images. This makes the epipolar lines in both images two groups of parallel lines. The two new images are in fact generated so that they fall in one and the same plane parallel to the base line. The two new images must have the same arbitrarily chosen principal distance c_n, a virtual focal length of a theoretical camera. For simplicity, the focal length used in the original images is preferred for generating both the idealized images and the normalized images. Accordingly, the old and new images will have the same scale at all times.

Also, it is necessary to constrain the orientation of the generated new plane: theoretically there are infinitely many planes satisfying the above condition, namely any plane tangent to the cylinder whose geometrical axis coincides with the base line. The orientation is fixed by calculating the average of the rotation angles of the two images around the X axis of either the real-world object or the chosen scaled stereo-model.

Further, we need to control the three rotation angles around the X, Y, and Z axes. The rotation matrix that transforms the image to the new orientation is computed as

[10]

$$M_n = M_{\varphi n} \, M_{\kappa n} \, M_{\omega n}$$

where M_φn, M_κn, and M_ωn are the 3 × 3 rotation matrices around Y, Z, and X, in sequence. The main three rotations around X, Y, and Z are computed in the following order, in three steps:

1. The new rotation angle φ_n is computed as follows to orient the new images in a plane parallel to the base line:

[11]

$$\varphi_n = -\arctan\!\left(\frac{B_Z}{B_X}\right)$$

where B_X and B_Z are the base line components.

2. To compute the new orientation angle κ_n, both images must be oriented so that the rotation angles κ_L and κ_R around the Z axis equal κ_n. This is important to guarantee that conjugate points and epipolar lines lie along the same row (or column); accordingly, the stereo matching process is simplified to a one-dimensional search, e.g., along only one conjugate epipolar line in the right image for each point in the left image.

[12]

$$\kappa_n = \arctan\!\left(\frac{B_Y}{\sqrt{B_X^2 + B_Z^2}}\right)$$

One may notice that φ_n and κ_n can be selected as described above, as in Cho et al. (1992), using the positive-image model, or differently, as in Kraus (2007), based on the negative-image model. The order of the image rotations about the X, Y, and Z axes will then differ, and the formulas for computing φ_n and κ_n change as well; further details can be found in the references mentioned above.

3. To minimize the scale distortion in the corresponding direction, the rotation angle is selected as

[13]

$$\omega_n = \frac{\omega_L + \omega_R}{2}$$

Then the new plane containing the new images is fixed to this orientation angle ω_n.

Based on the collinearity condition equation explained in the next section, for each object point in real-world space with coordinates X, Y, Z, the following equation models the transformation relationship between the point in image space and real-world space:

[14]

$$
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} =
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} +
\frac{1}{\lambda}\, [M(\omega\varphi\kappa)]
\begin{pmatrix} x - x_o \\ y - y_o \\ -f \end{pmatrix}
$$

Similarly, the following equation models the transformation relationship between the point in normalized image space and real-world space:


[15]

$$
\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} =
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} +
\frac{1}{\lambda_n}\, [M(\omega\varphi\kappa)]_n
\begin{pmatrix} x - x_o \\ y - y_o \\ -f \end{pmatrix}_{n}
$$

Equating eqs. [14] and [15], we get the transformation relationship between the point in normalized image space and the idealized or original image space, depending on whether we have idealized the images or not:

[16]

$$
\begin{pmatrix} x - x_o \\ y - y_o \\ -f \end{pmatrix}_{n} =
\lambda_n\, [M^t(\omega\varphi\kappa)]_n
\left[
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix} +
\frac{1}{\lambda}\, [M(\omega\varphi\kappa)]
\begin{pmatrix} x - x_o \\ y - y_o \\ -f \end{pmatrix}
-
\begin{pmatrix} X_c \\ Y_c \\ Z_c \end{pmatrix}
\right]
$$

Accordingly,

[17]

$$
\begin{pmatrix} x - x_o \\ y - y_o \\ -f \end{pmatrix}_{n} =
\frac{\lambda_n}{\lambda}\, [M^t(\omega\varphi\kappa)]_n\, [M(\omega\varphi\kappa)]
\begin{pmatrix} x - x_o \\ y - y_o \\ -f \end{pmatrix}
$$

Equations [16] and [17] model the direct relationship between the normalized image and the image used as input to generate it, whether the original images or the idealized images. In either case, the only three unknowns in the above equations are (x_n, y_n, λ_n/λ). Moreover, if the perspective centres of the old and new images are maintained to coincide, the ratio λ_n/λ can be eliminated by dividing the first two rows of eq. [17] by the third row. Hence the final form is as follows:

[18a]

$$
x = x_o - f\, \frac{m_{11}(x_n - x_{on}) + m_{21}(y_n - y_{on}) + m_{31}(-f_n)}{m_{13}(x_n - x_{on}) + m_{23}(y_n - y_{on}) + m_{33}(-f_n)}
$$

[18b]

$$
y = y_o - f\, \frac{m_{12}(x_n - x_{on}) + m_{22}(y_n - y_{on}) + m_{32}(-f_n)}{m_{13}(x_n - x_{on}) + m_{23}(y_n - y_{on}) + m_{33}(-f_n)}
$$

where ω, φ, κ, ω_n, φ_n, and κ_n are the rotation angles of the original and normalized images around the X, Y, and Z axes, and m_11, m_12, …, m_33 are the elements of the 3D rotation matrix between the old and new images

[19]

$$
M = M_n^t M =
\begin{bmatrix}
m_{11} & m_{12} & m_{13} \\
m_{21} & m_{22} & m_{23} \\
m_{31} & m_{32} & m_{33}
\end{bmatrix}
$$

Based on the above equations, the resampled image can be generated by starting from any pixel location (x_n, y_n) in the normalized image and computing the conjugate point coordinates (x, y) in the original image. The gray value g(x_n, y_n) for each pixel in the new image is estimated using an appropriate interpolation technique in the input image, and the resulting gray value is assigned to the pixel (x_n, y_n) in the normalized image so that g(x_n, y_n) = g(x, y). The most common interpolation techniques are nearest neighbour, bilinear interpolation, and cubic convolution.
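The normalization geometry of eqs. [10]-[13] and the pixel transfer of eqs. [18a]-[18b] can be sketched as follows (NumPy; function names are illustrative, and the sign conventions of the elementary rotations are an assumption): compute the normalization angles from the base line, form the new orientation, and map each normalized pixel back into the input image:

```python
# Sketch of the normalization geometry (assumption: NumPy; the sign
# conventions of the elementary rotation matrices are an assumption).
import numpy as np

def rot_x(a):   # omega: rotation about X
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):   # phi: rotation about Y
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):   # kappa: rotation about Z
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def normalization_angles(B, omega_L, omega_R):
    """Eqs [11]-[13]: angles orienting the normalized image plane."""
    Bx, By, Bz = B
    phi_n = -np.arctan2(Bz, Bx)                     # eq. [11]
    kappa_n = np.arctan2(By, np.hypot(Bx, Bz))      # eq. [12]
    omega_n = 0.5 * (omega_L + omega_R)             # eq. [13]
    return omega_n, phi_n, kappa_n

def normalization_matrix(omega_n, phi_n, kappa_n):
    """Eq. [10]: Mn = M_phi_n * M_kappa_n * M_omega_n."""
    return rot_y(phi_n) @ rot_z(kappa_n) @ rot_x(omega_n)

def transfer_point(Mrel, xn, yn, xo, yo, f, xon, yon, fn):
    """Eqs [18a]-[18b]: map a normalized-image pixel (xn, yn) back to
    the input image, with Mrel the relative rotation of eq. [19]."""
    d = np.array([xn - xon, yn - yon, -fn])
    den = Mrel[0, 2] * d[0] + Mrel[1, 2] * d[1] + Mrel[2, 2] * d[2]
    x = xo - f * (Mrel[0, 0] * d[0] + Mrel[1, 0] * d[1] + Mrel[2, 0] * d[2]) / den
    y = yo - f * (Mrel[0, 1] * d[0] + Mrel[1, 1] * d[1] + Mrel[2, 1] * d[2]) / den
    return x, y
```

The resampling loop then visits every (x_n, y_n), calls the transfer function, and interpolates the gray value at the resulting (x, y) in the input image.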

Experiments with tilted images

This research has exploited close range photogrammetry, tested a non-metric consumer-grade CMOS camera, and investigated freely tilted, non-standard spatial camera positions. The study used the camera's built-in flash in natural daylight. Free camera station positions were chosen arbitrarily so that the images maintain a common overlapping area. The camera was handheld above the pavement surface at an average height of 150 cm to 160 cm; however, the distance between each point on the pavement surface and the camera varies because of camera tilt. The distance at each point depends on the angle of the ray passing through that point and the lens of the camera. In many cases the closer points were at about 2 m and the distant points at about 7 to 10 m.

Reconstruction of 3D models requires that each point satisfies the coplanarity condition (epipolar constraint). For automatic epipolar matching and image resampling, the tilted images are transformed to normalized images. An accurate 3D point cloud is generated and then processed to produce a 3D meshed surface. The surface is rendered using either of the images, so that a virtual-reality-style surface is reconstructed with its original texture, or by using colour shades. The camera used was a Canon EOS Digital Rebel XSi (450D). Coded target technology enabled a fully automatic relative orientation procedure; no measurements were required on the image surface. The advantage of this technology is that each coded target (point) can be recognized by the software and matched in both images automatically. For calibrated cameras, at least five matched points must be recognized and matched to enable relative orientation and hence stereo-model generation; any other point in the common overlap area will then automatically satisfy the coplanarity condition equation. For uncalibrated cameras, at least eight common points are required (Ahmed 2007). Before imaging the selected asphalt study area, a number of coded targets were fixed. To check the output accuracy, their relative positions were measured carefully with repeated measurements; the accuracy of the raw XYZ observations ranged between 0.1 and 2 mm, the two millimetres being the worst case. To further enhance the overall accuracy, the computations were divided into two steps. In the first step, the coded targets were exploited to compute the relative orientation; very accurate identification of targets in both images was achieved autonomously, hence at least the three orientation angles were determined very accurately (see Fig. 2). This step does not use the ground coordinates of the control


points; only the image coordinates are required, and the use of coded targets automates this measurement of image coordinates to very high accuracy. However, the scale and the exterior orientation need refinement. In the second step, the ground coordinates are used mainly to estimate the exterior orientation of the whole model generated in the first step; i.e., the role of the ground control is to rotate, translate, and scale the whole model. Using the accurate relative orientation values computed from sub-pixel measurements of the coded targets, the iterative least squares technique was able to best fit the XYZ coordinates of the control points in the model, and hence estimate the most probable values of the check points, which are assumed to carry the randomly propagated errors of the above approach.

After camera calibration, the bundle adjustment solution converged after three iterations; Table 3 shows a sample of the estimated orientation elements and their accuracy.

The following figures present a few samples of input and output data. The two overlapping images with common control points (Fig. 3) show a sample of two tilted images of a pavement surface, the constrained 3D relationships between image spaces and the real world, and zoomed-in samples of the generated point cloud and meshed surface. The case presented in Fig. 4 shows two overlapping images after transformation to their normalized version, 3D relationships between image spaces and real-world surface space from different perspectives, and a zoom-in to the generated surface. More samples are demonstrated in Fig. 5.

The advantages of the photogrammetric point cloud generation approach include, among others: (a) lower cost than other technologies; (b) an accurate 3D model can be generated automatically; (c) cameras can work from an unstable platform using affordable lenses with anti-shake image stabilization technology; (d) the texture of the objects can be reconstructed, and the level of detail increases as the camera resolution increases; (e) the camera distance above the pavement surface can be changed by changing the lens or, with a zoom lens, the focal length; and (f) consumer-grade cameras are much cheaper than laser scanners, metric cameras, or professional-grade cameras, and over time consumer-grade cameras will acquire more capabilities owing to continuous advances in technology. Existing image processing approaches can then be applied to 3D images with far fewer of the artifacts that create errors in 2D image processing. At the same time, colour and other 2D image characteristics can still be exploited for image processing based on fusion techniques. Ten points out of 30 are used as check points. The positions and accuracy of the check points are

Fig. 2. Sample accuracy reports generated by the system.

Table 3. Sample interior and exterior orientation parameters (data of Fig. 1).

Parameter      Image 1   St.D.      Parameter      Image 2      St.D.      Calibration data
Omega1 (deg)   –0.002    0.008      Omega2 (deg)   4.412998     0.012      Focal length: 18.0131
Phi1 (deg)     –0.006    0.013      Phi2 (deg)     7.493210     0.014      Format size: 22.2192 × 14.8336
Kappa1 (deg)   0.000     0.002      Kappa2 (deg)   –1.616054    0.002      Resolution: 4272 × 2848
Xc1 (m)        0.0       2.30E-04   Xc2 (m)        0.446846     2.30E-04   Principal point: xo = 11.2369, yo = 7.4849
Yc1 (m)        0.0       1.40E-04   Yc2 (m)        –0.093069    1.90E-04   p1, p2: –2.628E-005, 9.668E-005
Zc1 (m)        0.0       7.80E-05   Zc2 (m)        0.046465     7.90E-05   k1, k2: 5.915E-004, –1.186E-006

Note: St.D., standard deviation. Omega, Phi, and Kappa are calculated rotations around the X-axis, Y-axis, and Z-axis at different spatial positions. Xc, Yc, and Zc are the spatial positions of the cameras; p1, p2, k1, and k2 are calculated lens distortion parameters.


explained in Fig. 6. One may conclude that the X and Y coordinates are relatively better in accuracy; however, the accuracy of the Z coordinates ranges from 0.115 mm to 0.183 mm, which is still very high, especially when the cost of the system used is taken into account. The differences in accuracy between the X, Y, and Z coordinates are all within the sub-millimetre range.
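The second step described above, rotating, translating, and scaling the whole relatively oriented model onto the ground control, is a 3D similarity (Helmert) transformation. A minimal sketch using an SVD-based least squares solution (Horn's method) follows; it illustrates the principle only and is not the actual algorithm of the software used:

```python
# Sketch of fitting a 3D similarity transform (scale, rotation,
# translation) from model coordinates to ground control by least
# squares (Horn's SVD method); an illustration, not the software's
# actual algorithm. Assumes NumPy.
import numpy as np

def similarity_fit(model_pts, ground_pts):
    """Return (s, R, t) such that ground ~= s * R @ model + t."""
    P = np.asarray(model_pts, float)
    Q = np.asarray(ground_pts, float)
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    Pc, Qc = P - mp, Q - mq                      # centre both point sets
    U, S, Vt = np.linalg.svd(Pc.T @ Qc)          # cross-covariance SVD
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # best-fit rotation
    s = np.trace(np.diag(S) @ D) / (Pc ** 2).sum()  # best-fit scale
    t = mq - s * R @ mp                          # best-fit translation
    return s, R, t
```

Residuals at withheld check points (ground minus s·R·model + t) then provide the kind of accuracy assessment reported in Fig. 6.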

Discussion

The investigated close range photogrammetry approach showed not only a high potential for 3D distress detection, measurement, and modelling but also numerous advantages over existing techniques: it is less expensive and more robust, and it provides high accuracy and spatial precision. Moreover, the 3D output can be visualized along with its real-world texture. The system can be constructed from off-the-shelf hardware and software products available in the market, an advantage that enables seamless upgrading of the system's capability over time as technologies advance and as budgets allow. The CAD output of automatic photogrammetric systems can be further exploited directly in updating the content of any GIS and PMS systems, with better analysis and superior visualization.

The use of non-metric photogrammetry provides the freedom to use available cost-effective cameras and software, versus traditional metric photogrammetry, which uses very expensive calibrated cameras and expensive software. Moreover, metric photogrammetry traditionally uses a certain distribution of control points and even restricts camera tilt to under three degrees to maintain the verticality of the camera axis during image acquisition. Non-metric photogrammetry provides more flexible opportunities for cheaper and practical solutions.

Even this off-the-shelf software offers capabilities for producing ortho-rectified maps, which are images corrected for lens

Fig. 3. Two overlapping images (top), constrained 3D relationships between image spaces and generated surface in real-world space (middle), and zoom-ins of the rendered point cloud, including crack meshing (bottom).


and perspective projection distortions, then re-projected using parallel projection and scaled like any other map. Grid or contour lines can also be overlaid on the ortho-image map. In summary, however, the most important function is the generation of accurate 3D models. In a quick comparison with 2D image techniques, one may add that the presented technique overcomes all the previously mentioned limitations; also, 3D measurement and quantification of distresses such as rutting, and, for example, computing the slopes for water flow, become easier as a direct application. Generating profiles and computing volumes become easier as well.

Two important aspects must be taken into consideration.

The first is the need for real-time processing. The second is the possibility of image blurring due to fast camera motion. However, fast camera motion is a common problem among all image-based data collection techniques, and the speed of data collection in general is a common problem among all data collection technologies. Hardware and software solutions are continuously evolving to address these issues. Real-time processing technology is already available, investigated, and under continuous development, as in Huang and Xu (2006) and Wang (2007). Such technology should become more and more affordable in the future. Fortunately, recent generations of consumer-grade digital cameras are gaining more capabilities, especially in processing speed, shake and motion compensation, and high resolution.

Conclusions and future work

In summary, the experimental results have demonstrated that a low-cost photogrammetric system has the capability to effectively and accurately reconstruct a detailed 3D model of pavement surface distress. Consequently, it can be used independently or integrated with existing pavement distress data collection systems and pavement management systems,

Fig. 4. Two overlapping and idealized images (top), generated point cloud from different perspectives (middle), and zoom-in to point cloud (bottom).


and the numerous outputs can feed other information systems, such as the GIS and CAD systems currently in use, with many basic layers of spatial information, e.g., crack location, length, width, and orientation; rutting depth, width, and location; and similarly for all other pavement distress types. The classification and attributes can be stored in database records connected with the extracted distress type; profiles of the road surface can be generated automatically at each predetermined distance; the texture can also be estimated; and, moreover, the directions, slope, and quantities of water flow into internal pavement layers can be estimated. Any volume computation for required sealing or treatment can be estimated as well. This work may also open new directions of research, not only aiming to build a unified low-cost data collection system for cracks, roughness, rutting, faulting, segregation, and texture, but also to exploit the 3D modelling of the road surface in other operations such as maintenance, sealing, etc.

Further research is underway to study other related aspects, such as a quality-control mechanism to assess the reconstructed 3D models of road pavement, on-line sequential processing, off-line batch processing, 2D and 3D image data fusion, and approaches to address image blur due to fast camera motion. Also, there is high potential for recent low-cost digital video cameras to be investigated using a 4D photogrammetric approach in the future.

Fig. 5. Large and small potholes with very rough surface (top left); part of a complex and rough surface (top right); two mixed road surfaces including pavement, soil, and grass (bottom).

Fig. 6. Sample accuracy of check points in millimetres (based on data of Fig. 3).

References

Ahmed, M. 2000. Adaptive expert system for fully automatic object-based photogrammetry. Ph.D. thesis, Faculty of Engineering, Cairo University, Egypt.

Ahmed, M. 2002. Intelligent object-based airborne mapping system: the stage of autonomous imagery segmentation. In Proceedings of the 32nd Annual Cairo Demographic Center Conference, Egypt.

Ahmed, M. 2007. Reconstruction of partially damaged human faces


using un-calibrated images. In Proceedings of the 11th World Multi-Conference on Systemics, Cybernetics and Informatics, WMSCI, Florida, USA.

Ahmed, M., and Haas, C. 2010. The potential of low cost close range photogrammetry towards unified automatic pavement distress surveying. In Proceedings of the 89th Annual Meeting of the Transportation Research Board (DVD), Washington, D.C., 11–15 January.

Albitres, C., Smith, R.E., and Pendleton, O.J. 2007. Comparison of automated pavement distress data collection procedures for local agencies in San Francisco Bay Area, California. In Transportation Research Record: Journal of the Transportation Research Board, No. 1990, Transportation Research Board of the National Academies, Washington, D.C., pp. 119–126.

Chamorro, A., and Tighe, S. 2010. Development of guidelines for evaluation of the pavements on highway ramps and interchanges. Highway Infrastructure Innovation Funding Program 2008, Final Report ESB-001, for Ministry of Transportation, Engineering Standards Branch.

Cho, W., Schenk, T., and Madani, M. 1992. Resampling digital imagery to epipolar geometry. IAPRS International Archives of Photogrammetry and Remote Sensing, 29(B3): 404–408.

CSIRO. 2008. Automated detection of road cracks. Australia's Commonwealth Scientific and Industrial Research Organisation. Available from www.csiro.au/Solutions/Psaa.html. [Accessed 15 June 2009.]

Di Mascio, P., Piccolo, I., and Cera, L. 2007. Automated distress evaluation. In Proceedings of the 4th International SIIV Congress, Palermo, Italy, 12–14 September 2007.

El-Omari, S., and Moselhi, O. 2008. Integrating 3D laser scanning and photogrammetry for progress measurement of construction work. Automation in Construction, 18(1): 1–9. doi:10.1016/j.autcon.2008.05.006.

ERDAS. 2009. Available from http://www.erdas.com/products/erdasproductinformation/tabid/84/currentid/1106/default.aspx. [Accessed 7 November 2009.]

Fugro, P.M.S. 2009. Products and services. Available from www.pavement.com.au. [Accessed 15 June 2009.]

Fugro Roadware. 2009. Specifications and white papers. Available from http://www.fugroroadware.com/related/pavement-video-pdf. [Accessed 5 April 2010.]

Haas, C., and Hendrickson, C. 1991. Integration of diverse technologies for pavement sensing. In Transportation Research Record: Journal of the Transportation Research Board, No. 1311, Transportation Research Board of the National Academies, Washington, D.C., pp. 92–102.

Haas, C., Shen, H., Phang, W.A., and Haas, R. 1984. Application of image analysis technology to automation of pavement condition surveys. In Proceedings, International Transport Congress, Montreal, Vol. 5, pp. C55–C74.

Hicks, R.G., and Mahoney, J.P. 1981. NCHRP Synthesis of Highway Practice 76: collection and use of pavement condition data. Transportation Research Board of the National Academies, Washington, D.C.

Huang, Y., and Xu, B. 2006. Automatic inspection of pavement cracking distress. Publication FHWA-TX-06-5-4975-0-1, FHWA, U.S. Department of Transportation.

Iwitness. 2009. Available from http://www.iwitnessphoto.com/. [Accessed 7 November 2009.]

Jiang, R., Jauregui, D.V., and White, K.R. 2008. Close-range photogrammetry applications in bridge measurement: literature review. Journal of the International Measurement Confederation, 41(8): 823–834. doi:10.1016/j.measurement.2007.12.005.

Kertész, I., Lovas, T., and Barsi, A. 2008. Photogrammetric pavement detection system. Available from http://www.isprs.org/congresses/beijing2008/proceedings/5_pdf/156.pdf. [Accessed 15 June 2009.]

Kim, Y.S., and Haas, C. 2002. A man-machine balanced rapid object model for automation of pavement crack sealing and maintenance. Canadian Journal of Civil Engineering, 29(3): 459–474. doi:10.1139/l02-018.

Kraus, K. 1993. Photogrammetry (Volume 1). Dümmler Verlag, Bonn.

Kraus, K. 2007. Photogrammetry: geometry from images and laser scans, 2nd ed. Translated by Ian Harley and Stephen Kyle. Walter de Gruyter, Berlin, New York.

Mcghee, K. 2004. NCHRP Synthesis 334: automated pavement distress collection techniques. Transportation Research Board of the National Academies, Washington, D.C.

Mikhail, E.M., Bethel, J.S., and McGlone, J.C. 2001. Introduction to modern photogrammetry. John Wiley & Sons, Inc., New York.

Mraz, A., Gunaratne, M., Nazef, A., and Choubane, B. 2006. Experimental evaluation of a pavement imaging system: Florida Department of Transportation's multipurpose survey vehicle. In Transportation Research Record: Journal of the Transportation Research Board, No. 1974, Transportation Research Board of the National Academies, Washington, D.C., pp. 97–106.

Mullis, C., Reid, J., Brooks, E., and Shippen, N. 2005. Automated data collection equipment for monitoring highway condition. Final Report SPR 332, for Oregon Department of Transportation Research Unit and Federal Highway Administration.

Photomodeler. 2009. Available from www.photomodeler.com. [Accessed 7 November 2009.]

Rababaah, H., Vrajitoru, D., and Wolfer, J. 2005. Asphalt pavement crack classification: a comparison of GA, MLP, and SOM. In Proceedings of the Genetic and Evolutionary Computation Conference, late-breaking paper, June 2005, USA.

Schenk, T. 1990. Digital photogrammetry, Volume I. Terrascience, Laurelville, Ohio, USA. 428 pp.

Shapecapture. 2009. Available from www.shapecapture.com. [Accessed 7 November 2009.]

Slama, C. 1980. Manual of photogrammetry, 4th ed. American Society for Photogrammetry and Remote Sensing, Virginia, USA.

Timm, D.H., and McQueen, J.M. 2004. A study of manual vs. automated pavement condition surveys. Tech. Report IR-04-01, Highway Research Center, Auburn University, Auburn, Alabama, for Alabama Department of Transportation. Available from http://www.Eng.Auburn.Edu/Users/Timmdav/Reports/Au-Hrc-Ir-04-01.Pdf. [Accessed 15 June 2009.]

Wang, K.C.P. 2004. Automated pavement distress survey through stereovision. NCHRP-IDEA, TRB, Final Report for Highway IDEA Project 88, August 2004.

Wang, K.C.P. 2007. Automated survey and visual database development for airport and local highway pavement. Final Report for Mack-Blackwell Transportation Center.

Xu, B. 2007. Summary of implementation of an artificial lighting system for automated visual distress rating system. Publication FHWA-TX-08-5-4958-01-1, FHWA, U.S. Department of Transportation.
