Automatic Lens Distortion Correction Using One-Parameter Division Models

Published in Image Processing On Line on YYYY–MM–DD. Submitted on YYYY–MM–DD, accepted on YYYY–MM–DD.
ISSN 2105–1232 © YYYY IPOL & the authors CC–BY–NC–SA
This article is available online with supplementary materials, software, datasets and online demo at http://dx.doi.org/10.5201/ipol

Miguel Alemán-Flores¹, Luis Alvarez¹, Luis Gomez², Daniel Santana-Cedrés¹

¹ CTIM (Centro de Tecnologías de la Imagen), Departamento de Informática y Sistemas, Universidad de Las Palmas de Gran Canaria, Spain ({maleman,lalvarez,lgomez,dsantana}@ctim.es)
² CTIM (Centro de Tecnologías de la Imagen), Departamento de Ingeniería Electrónica y Automática, Universidad de Las Palmas de Gran Canaria, Spain ([email protected])

PREPRINT October 31, 2013

Abstract

We present a method to automatically correct the radial distortion caused by wide-angle lenses using the distorted lines generated by the projection of 3D straight lines onto the image. Lens distortion is estimated with a one-parameter division model, which allows us to embed the problem in a Hough transform scheme by adding a distortion parameter, so that straight lines are better extracted from the image. This paper describes an algorithm which applies this technique, providing all the details of the design of an improved Hough transform. We perform experiments on calibration patterns and real scenes showing strong distortion to illustrate the performance of the proposed method.

Source Code

The source code, the code documentation, and the online demo are accessible at the IPOL web page of this article¹. On this page, an implementation is available for download. Compilation and usage instructions are included in the README.txt file of the archive.

Keywords: lens distortion, wide-angle lens, Hough transform, line detection

¹ *******



1 Introduction

Due to radial and decentering distortion, the image of a photographed straight line is not straight once projected onto the image plane. Correcting such distortions (especially the radial distortion, which is considered the most relevant one) is a major issue in camera calibration. Several methods have been proposed to correct radial distortion for lenses showing moderate distortion, such as the radial model by Brown [4], the rational model by Claus et al. [6] or the division model (see Hartley et al. [10]).

Distortion correction is also closely related to professional photography fields such as close-range photography or wide-angle photography using wide-angle lenses. The use of a wide-angle lens makes it possible to acquire a large area of the surrounding space (up to 180 degrees) in a single photo. Other computer vision tasks also use wide-angle lenses for purposes such as surveillance, real-time tracking, or simply for aesthetic reasons.

Radial distortion is usually modeled by the radial model by Brown [4],

(x' − x_c, y' − y_c)^T = L(r) (x − x_c, y − y_c)^T,    (1)

where (x, y) is the original (distorted) point, (x', y') is the corrected (undistorted) point, (x_c, y_c) is the center of the camera distortion model, L(r) is the function which defines the shape of the distortion model, and r = sqrt((x − x_c)^2 + (y − y_c)^2). According to the choice of the function L(r), there exist two widely accepted types of lens distortion models: the polynomial model and the division model, the latter being more suited for large distortions such as those caused by wide-angle lenses.

The polynomial used in the simple radial distortion model is

L(r) = 1 + k_1 r^2 + k_2 r^4 + ...,    (2)

where the set k = (k_1, ..., k_{N_k})^T contains the distortion parameters estimated from image measurements, usually by means of non-linear optimization techniques (see Alvarez et al. [2]).

The division model was initially proposed by Lenz [13], but it has received special attention after the more recent research by Fitzgibbon [9]. It is formulated as

L(r) = 1 / (1 + k_1 r^2 + k_2 r^4 + ...).    (3)

Note that this model is obtained by inverting the radial model above. The main advantage of the division model is that it requires fewer terms than the polynomial model in the case of severe distortion. Therefore, the division model seems more adequate for wide-angle lenses (see a recent review on distortion models for wide-angle lenses in Hughes et al. [11]). Additionally, when using only one distortion parameter, its inversion is simpler, since it requires finding the roots of a second-degree polynomial instead of a third-degree polynomial. In fact, a single-parameter version of the division model is normally used.

Most algorithms estimate the radial distortion parameters by considering that 3D straight lines must be projected onto 2D straight lines in the image, and by minimizing the distortion error. This residue is given by the sum of the squares of the distances from the points to the lines (Devernay et al. [8]).

The design of a correction method is completed by deciding how to apply it. There are basically two alternatives: methods which need human supervision and methods which are automatic. Human-supervised methods rely on manually identifying some known straight lines within the scene (Alvarez et al. [3], Brown [4], Wang et al. [14]). These user-dependent methods are very robust, independent of the camera parameters, and do not require a calibration pattern. However, due to the human supervision, they are difficult to apply when dealing with large images.


In Bukhari et al. [5], an automatic radial distortion estimation method working on a single image without user intervention is discussed. This method does not require any special calibration pattern. The method applies the one-parameter division model to estimate the distortion from a set of automatically detected non-overlapping circular arcs within the image.

In Cucchiara et al. [7], the authors propose an automatic method for radial lens distortion correction using an adapted Hough transform which includes the radial distortion parameter, in order to automatically detect straight lines within the image. The radial model with one parameter is used, and an exhaustive search is performed by testing each distortion parameter in a discretized set (k_min, k_max) and then selecting the one that provides the best fitting (i.e., maximizes the straightness of the candidate lines). The method can work automatically, but better results are obtained with a semi-automatic version, which includes the manual selection of a ROI (region of interest) containing some straight lines.

Aleman-Flores et al. [1] adapt the Hough transform to automatically correct radial lens distortion for wide-angle lenses using the one-parameter division model. This method uses the information provided by all edge points, which allows a better identification of straight lines when applying the extended Hough transform. The distortion parameter which is obtained is finally optimized with a simple gradient-based method. The method is applied to the whole image, hence there is no need to select a ROI as recommended in Cucchiara et al. [7].

A fully automatic distortion correction method, which also embeds the radial distortion parameter into the Hough transform to better detect straight lines, is presented in Lee et al. [12]. It is applied to endoscopic images captured with a wide-angle zoomable lens. In this case, the division model with one parameter is used to deal with the strong distortion caused by the wide-angle lens, and the method is adapted to include the effects related to the variation of the focal length due to zooming operations. The method is intended for real-time applications once mapped to a GPU computing platform. The distortion parameter is estimated by optimizing the Hough entropy in the image gradient space.

In this work, we propose an unsupervised method which makes use of the one-parameter division model to correct the radial distortion caused by a wide-angle lens using just a single image. First, we automatically detect the distorted lines within the image by adapting the Hough transform to our problem. By embedding the radial distortion parameter into the Hough parametric space, the longest arcs (distorted lines) within the image are extracted. From the improved Hough transform, we obtain a collection of distorted lines and an initial approximation of the distortion parameter k_1. Then, a numerical optimization scheme is applied to refine this value by minimizing the Euclidean distance from the corrected line points to the set of straight lines.

The organization of this article is as follows: Section 2 presents the proposed technique to estimate a one-parameter division model using the improved Hough transform. In Section 3, the proposed algorithm is explained, and in Section 4 we present some experimental results for several images showing significant radial distortion. Finally, some conclusions are discussed.

2 Estimation of a one-parameter division model using the improved Hough transform

In what follows, we will focus on one-parameter lens distortion division models. Using equations (1) and (3), we obtain that the variation of the distance from the distortion center to a point is given by

r' = r / (1 + k_1 r^2).    (4)


To simplify the discussion, we will consider that the distortion center (x_c, y_c) is the image center, which in general is a good approximation.

To correct the radial distortion, the distortion parameter k_1 must be estimated from the available image information, that is, from the line primitives. Line primitives are searched for in the edge image, which is computed using the Canny edge detector. Then, straight lines can be extracted from the set of edges by applying the Hough transform, which searches for the most reliable candidates within a certain two-dimensional space corresponding to the orientation and the distance to the origin of the candidate lines. Each edge point votes for those lines which are more likely to contain this point. The lines which receive the highest scores are considered the most reliable ones and, consequently, are taken into account in order to estimate the distortion.

Note that the classical Hough transform does not consider the influence of the distortion on the alignment of the edge points and, for that reason, straight lines are split into different segments. This translates directly into a loss of information, because some lines may not be properly recognized as lines and, therefore, the distortion correction must be performed without using the maximum number of candidate straight lines. In this paper, we extend the usual two-dimensional Hough space by adding a new dimension, namely the radial distortion parameter.

For practical reasons, instead of considering the distortion parameter value itself in the new Hough space, we use a normalized distortion parameter p given by

p = (r'_max − r_max) / r_max,    (5)

where r_max is the distance from the center of distortion to the furthest point in the original image, and r'_max is the same distance after applying the distortion model. This simple normalization gives a geometrical meaning to the parameter p. Another advantage of using p as a new parameter in the Hough space is that it is independent of the image resolution. The relation between the parameters p and k_1 is straightforward and is given by the following expression:

k_1 = −p / ((1 + p) r_max^2).    (6)
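The relation between p and k_1 can be checked numerically. Below is a minimal sketch (helper names are ours) which verifies the geometric meaning of p: under equations (4) and (6), a correction of p moves the corner radius r_max to (1 + p) r_max.

```python
def p_to_k1(p, r_max):
    # Equation (6): k1 = -p / ((1 + p) * r_max^2)
    return -p / ((1.0 + p) * r_max ** 2)

def corrected_radius(r, k1):
    # Division model, eq. (4): r' = r / (1 + k1 r^2)
    return r / (1.0 + k1 * r * r)

r_max = 1000.0
p = 0.25  # a 25% correction at the image corner
k1 = p_to_k1(p, r_max)
# Geometric meaning of p: the corner radius grows by a factor (1 + p).
assert abs(corrected_radius(r_max, k1) - (1.0 + p) * r_max) < 1e-9
```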

In the extended Hough voting score matrix, for each value of p, the edge points are transformed using the associated lens distortion model, and they vote for a line only if the line orientation is similar to the edge point orientation. Moreover, the vote of a point for a line depends on how close they are, and is given by v = 1/(1 + d), where d is the distance from the point to the line.

Figure 1 illustrates how the maximum of the voting score varies within the Hough space according to the percentage of correction determined by the normalized distortion parameter p.

Once we have found the best value of p within the three-dimensional Hough space, we refine it to obtain a more accurate approximation. To this aim, using standard optimization techniques, we minimize the following average error function:

E(p) = [ Σ_{j=1}^{N_l} Σ_{i=1}^{N_p(j)} dist(x_{ji}, line_j)^2 ] / [ Σ_{j=1}^{N_l} N_p(j) ],    (7)

where N_l is the number of lines, N_p(j) is the number of points of the j-th line and x_{ji} are the points associated to line_j. This error measures how distant the points are from their respective lines, so that the lower this value, the better the matching.
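Equation (7) can be sketched directly. In the minimal Python version below (helper names are ours), a line is given in the normal form cos(β) x + sin(β) y + d = 0, the same parametrization used later in the voting step; in the actual method the points would be the corrected edge points for the candidate p.

```python
import math

def point_line_distance(x, y, beta, d):
    # Distance to the line cos(beta) x + sin(beta) y + d = 0 (unit normal).
    return abs(math.cos(beta) * x + math.sin(beta) * y + d)

def average_error(lines):
    # lines: list of (beta, d, points) with points = [(x, y), ...].
    # Implements equation (7): mean squared point-to-line distance.
    num = sum(point_line_distance(x, y, beta, d) ** 2
              for beta, d, pts in lines for x, y in pts)
    den = sum(len(pts) for _, _, pts in lines)
    return num / den

# Points lying exactly on the line x = 2 (beta = 0, d = -2): zero error.
lines = [(0.0, -2.0, [(2.0, 0.0), (2.0, 5.0), (2.0, -3.0)])]
assert average_error(lines) < 1e-12
```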

3 Algorithm

Table 1 summarizes the main stages of the proposed method.


Figure 1: Values of the maximum in the voting space with respect to the percentage of correction, using the modified Hough transform and a division lens distortion model, for (a) the test image in Figure 2 and (b) the real image in Figure 3.

Stage 1. Edge detection using the Canny method.
    in: input image.
    out: edge point coordinates and orientations.

Stage 2. Initial estimation of the lens distortion parameter using the improved Hough transform.
    in: edge point coordinates and orientations.
    out: an estimation of the distortion parameter and the most voted lines.

Stage 3. Distortion parameter optimization.
    in: an estimation of the distortion parameter and the most voted lines.
    out: an optimized lens distortion parameter.

Stage 4. Image distortion correction.
    in: the input image and the optimized lens distortion model.
    out: a distortion-free output image obtained using the estimated model.

Table 1: Summary of the lens distortion correction algorithm.

In the first stage, we compute the image edges using the Canny edge detector. Moreover, for each edge point we estimate the edge orientation using the image gradient. The Canny method thresholds are provided in terms of percentages of the gradient norm of the image.

The second stage is described in Algorithm 1. To reduce the computational effort of the algorithm, we define a priori the maximum number of edge lines (denoted by N) we deal with. Provided N is large enough to include the most significant edge lines, this parameter is not very relevant. In the experiments presented in this paper, we have selected N = 30.

To define the range of the normalized distortion parameter p in the Hough space, we first have to take into account that the transformation (4) should be bijective in the interval [0, r_max], where r_max is the maximum distance from the distortion center to the image boundary corners. In particular, the derivative of the transformation (4) with respect to r should be positive, so that

((1 + k_1 r^2) − 2 k_1 r^2) / (1 + k_1 r^2)^2 > 0  ⇒  1 − k_1 r^2 > 0  ⇒  k_1 < 1 / r_max^2.    (8)


Now, replacing k_1 by p (using (6)), a straightforward computation yields the condition

p > −0.5.    (9)

In practice, to include the distortion parameter in the Hough space, we choose an interval [p_min, p_max] (with p_min > −0.5) and take

p_k = p_min + k · δp,  k = 0, 1, ..., (p_max − p_min)/δp.    (10)
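The discretization in (10) can be sketched in a few lines (the interval endpoints below are illustrative, not values taken from the article):

```python
def p_grid(p_min, p_max, delta_p):
    # Equation (10): p_k = p_min + k * delta_p, k = 0 .. (p_max - p_min)/delta_p
    n = int(round((p_max - p_min) / delta_p))
    return [p_min + k * delta_p for k in range(n + 1)]

grid = p_grid(-0.4, 1.0, 0.1)  # respects p_min > -0.5 from eq. (9)
assert abs(grid[0] + 0.4) < 1e-12 and abs(grid[-1] - 1.0) < 1e-12
assert len(grid) == 15
```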

In the Hough line voting step, we impose that an edge point votes for a line only if the edge point orientation is coherent with the line orientation. To do that, we first correct the edge orientation according to the lens distortion model we are using, and then we impose that the angular difference between the edge orientation and the line orientation be smaller than a certain value δα. This value is a parameter of the method and, in the numerical experiments we have performed, we have chosen δα = 2 degrees.

Once the voting step is over, the best distortion parameter is computed by maximizing the following criterion:

max_{p_k ∈ [p_min, p_max]} { Σ_{j=1}^{N} votes(dist(j), β(j), p_k) },

where {(dist(j), β(j), p_k)}_{j=1}^{N} represents the N most voted lines for the normalized lens distortion parameter p_k.

In the third stage of the method, the normalized distortion parameter p is optimized by minimizing the error function (7). In Algorithm 2, we describe the minimization algorithm we use.

Finally, in the fourth stage of the method, we create a distortion-free image using the estimated lens distortion model. To do that, we need to compute the inverse of the lens distortion transformation. Using equation (4), we obtain that

r' + k_1 · r' · r^2 − r = 0,    (11)

which is a second-degree polynomial in r. We compute the two polynomial roots and choose the one which is always positive (independently of the value of k_1) and smaller in magnitude, that is,

r = (1 − sqrt(1 − 4 k_1 (r')^2)) / (2 k_1 r').    (12)

Notice that if r' = 0 then r = 0. To speed up the algorithm, instead of evaluating the above inverse transformation for each pixel, we first build a vector InverseVector where we store the result of formula (12) in the range [0, r'_max]. In Algorithm 3, we describe the complete algorithm to compute the distortion-free image.
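The root choice in (12) can be verified numerically with a round trip through the forward model (4); the values and helper name below are illustrative:

```python
import math

def invert_radius(r_prime, k1):
    # Equation (12): smaller-magnitude root of r' + k1 r' r^2 - r = 0.
    if r_prime == 0.0 or k1 == 0.0:
        return r_prime
    return (1.0 - math.sqrt(1.0 - 4.0 * k1 * r_prime ** 2)) / (2.0 * k1 * r_prime)

k1 = -2e-7                          # barrel distortion (k1 < 0)
r = 800.0
r_prime = r / (1.0 + k1 * r * r)    # forward model, eq. (4)
# Round trip: eq. (12) recovers the original distorted radius.
assert abs(invert_radius(r_prime, k1) - r) < 1e-6
```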


Data: image edge point coordinates {(x, y)}, image edge point orientations {α(x, y)} and maximum number of expected lines N.
Result: an estimation of the normalized distortion parameter p_0 and the most voted lines.

begin
    maxHough = −1;
    for p = p_min to p_max do
        k_1 = −p / (r_max^2 (1 + p));
        for (x, y) ∈ image edges do
            r = sqrt((x − x_c)^2 + (y − y_c)^2);
            x' = x_c + (1 / (1 + k_1 r^2)) (x − x_c);
            y' = y_c + (1 / (1 + k_1 r^2)) (y − y_c);
            // Update the edge point orientation according to the lens distortion correction.
            α_new = Update_orientation(x, y, α(x, y), k_1);
            for β ∈ [α_new − δα, α_new + δα] do
                d = Floor(−(cos(β) x' + sin(β) y'));
                for dist ∈ [d − 2, d + 2] do
                    dl = |cos(β) x' + sin(β) y' + dist|;
                    votes(dist, β, p) = votes(dist, β, p) + 1/(1 + dl);
                end
            end
        end
        // For the N most voted lines given by {(dist(j), β(j), p)}_{j=1}^{N} we compute
        sum = Σ_{j=1}^{N} votes(dist(j), β(j), p);
        if (sum > maxHough) then
            p_0 = p;
            maxHough = sum;
            // We update the N most voted lines to {(dist(j), β(j), p_0)}_{j=1}^{N}
        end
    end
end

Algorithm 1: Line detection using the improved Hough transform.
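As a rough, self-contained illustration of the idea behind Algorithm 1 (not the article's implementation: the discretized Hough accumulator and the orientation test are replaced by a total-least-squares line fit, and all names are ours), the sketch below recovers a known normalized parameter p from a single synthetically distorted line:

```python
import math

def undistort(pt, k1, c=(0.0, 0.0)):
    # Division model (eq. (4)): move a distorted point to its corrected position.
    dx, dy = pt[0] - c[0], pt[1] - c[1]
    s = 1.0 / (1.0 + k1 * (dx * dx + dy * dy))
    return (c[0] + s * dx, c[1] + s * dy)

def distort(pt, k1, c=(0.0, 0.0)):
    # Inverse of eq. (4) via the smaller root of eq. (11), as in eq. (12).
    dx, dy = pt[0] - c[0], pt[1] - c[1]
    rp = math.hypot(dx, dy)
    if rp == 0.0 or k1 == 0.0:
        return pt
    r = (1.0 - math.sqrt(1.0 - 4.0 * k1 * rp * rp)) / (2.0 * k1 * rp)
    return (c[0] + (r / rp) * dx, c[1] + (r / rp) * dy)

def line_residual(points):
    # Sum of squared distances to the total-least-squares line:
    # the smallest eigenvalue of the 2x2 scatter matrix of the points.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    syy = sum((y - my) ** 2 for _, y in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    return tr / 2.0 - math.sqrt(max(tr * tr / 4.0 - det, 0.0))

# Synthetic data: the straight segment x = 300 distorted with the k1
# corresponding to p = 0.3 through eq. (6).
r_max = 1000.0
k1_true = -0.3 / (1.3 * r_max ** 2)
pts = [distort((300.0, t), k1_true) for t in range(-400, 401, 50)]

# Scan p and keep the value whose correction best straightens the points
# (a least-squares stand-in for maximizing the Hough votes).
best = min((i * 0.01 for i in range(101)),
           key=lambda p: line_residual(
               [undistort(q, -p / ((1.0 + p) * r_max ** 2)) for q in pts]))
assert abs(best - 0.3) < 0.02
```

At the true p, the corrected points become exactly collinear and the residual drops to numerical noise, which is the same signal the Hough accumulator maximizes through its votes.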


Function Error_Minimization(p_0, h, TOL)
    p_opt = p_0;
    γ = 1.0;
    p = p_0 + TOL + 1;
    while (|p − p_opt| > TOL) do
        // E is defined in equation (7)
        E'(p_opt) = (E(p_opt + h) − E(p_opt − h)) / (2h);
        E''(p_opt) = (E(p_opt + h) − 2 E(p_opt) + E(p_opt − h)) / h^2;
        p_new = p_opt − E'(p_opt) / (E''(p_opt) + γ);
        while (E(p_new) > E(p_opt)) do
            γ = γ × 10;
            p_new = p_opt − E'(p_opt) / (E''(p_opt) + γ);
        end
        γ = γ / 10;
        p = p_opt;
        p_opt = p_new;
    end
    return p_opt;

Algorithm 2: Algorithm to optimize p_0 by minimizing E(p).
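Algorithm 2 translates almost line by line into Python. The sketch below uses a toy error function in place of the real E of equation (7); the function name and default step sizes are ours:

```python
def error_minimization(E, p0, h=1e-4, tol=1e-8):
    # Damped Newton iteration of Algorithm 2, with derivatives of E
    # approximated by centered finite differences of step h.
    p_opt, gamma, p = p0, 1.0, p0 + tol + 1.0
    while abs(p - p_opt) > tol:
        d1 = (E(p_opt + h) - E(p_opt - h)) / (2.0 * h)                  # E'
        d2 = (E(p_opt + h) - 2.0 * E(p_opt) + E(p_opt - h)) / (h * h)   # E''
        p_new = p_opt - d1 / (d2 + gamma)
        while E(p_new) > E(p_opt):
            gamma *= 10.0          # increase damping until E decreases
            p_new = p_opt - d1 / (d2 + gamma)
        gamma /= 10.0
        p, p_opt = p_opt, p_new
    return p_opt

# Toy error function with minimum at p = 0.4.
assert abs(error_minimization(lambda p: (p - 0.4) ** 2, p0=0.0) - 0.4) < 1e-6
```

The damping term γ makes the step robust when E'' is small or negative near the initial Hough estimate, at the cost of slightly slower convergence.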

Data: the input image and the estimated lens distortion model.
Result: a distortion-free output image.

begin
    // We compute InverseVector in [0, r'_max] (as explained in the text).
    for (x', y') ∈ output image pixels do
        r' = sqrt((x' − x_c)^2 + (y' − y_c)^2);
        index = Floor(r');
        weight = r' − index;
        r = (1 − weight) · InverseVector[index] + weight · InverseVector[index + 1];
        x = x_c + (r / r') (x' − x_c);
        y = y_c + (r / r') (y' − y_c);
        OutputImage(x', y') = InputImage(x, y);
        // In the above expression we use bilinear interpolation to estimate InputImage(x, y).
    end
end

Algorithm 3: Distortion-free image computation.
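The InverseVector lookup of Algorithm 3 can be sketched as follows (names are ours; only the radius lookup is shown, not the full image resampling):

```python
import math

def build_inverse_vector(k1, r_prime_max):
    # Tabulate formula (12) at integer radii 0 .. r_prime_max (+1 guard entry).
    table = [0.0]  # formula (12) gives r = 0 for r' = 0
    for rp in range(1, int(r_prime_max) + 2):
        if k1 == 0.0:
            table.append(float(rp))
        else:
            table.append((1.0 - math.sqrt(1.0 - 4.0 * k1 * rp * rp))
                         / (2.0 * k1 * rp))
    return table

def inverse_radius(r_prime, table):
    # Linear interpolation between the two nearest table entries,
    # exactly the index/weight step of Algorithm 3.
    index = int(r_prime)
    weight = r_prime - index
    return (1.0 - weight) * table[index] + weight * table[index + 1]

k1, r_prime_max = -2e-7, 1500.0
table = build_inverse_vector(k1, r_prime_max)
rp = 917.43
exact = (1.0 - math.sqrt(1.0 - 4.0 * k1 * rp * rp)) / (2.0 * k1 * rp)
# The tabulated, interpolated inverse stays close to the exact formula.
assert abs(inverse_radius(rp, table) - exact) < 1e-2
```

Since formula (12) is smooth, the linear interpolation between integer radii is accurate to well below a pixel, which is why the table is a safe shortcut over per-pixel evaluation.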


Figure 2: Lens distortion correction for an image of a calibration pattern: (a) original image, (b) detected edges using the Canny method, (c) lines detected using the proposed method, and (d) undistorted image using the proposed method.

4 Experimental Results

We have tested our method on several images showing wide-angle lens distortion. The images have been acquired with a Nikon D90 camera and a Tokina 11-16mm ultra-wide-angle zoom lens. Figure 2 shows the results for a calibration pattern and Figure 3 presents the results for an image of a building. We have represented each line using a different color in order to identify them. In both cases, from the detected arcs (distorted lines), the distortion is estimated and the images are corrected. Numerical results are summarized in Table 2.

Image      p_0   E(p_0)    p_opt      E(p_opt)   N_lines   N_points
pattern    0.9   0.86598   0.957329   0.62965    29        9101
building   0.4   0.88286   0.383621   0.82366    27        6096

Table 2: Numerical results.


Figure 3: Lens distortion correction for an image of a building: (a) original image, (b) detected edges using the Canny method, (c) lines detected using the proposed method, and (d) undistorted image using the proposed method.

5 Conclusions

In this paper, we have presented a method to automatically correct wide-angle lens distortion. We use a one-parameter division model and an improved Hough space, where the distortion parameter is added in order to find distorted lines in the image. We optimize the resulting distortion parameter by minimizing the distance from the lines to their associated points. We have presented the details of the algorithms of the different stages, as well as some experiments which show the ability of the method to correct wide-angle lens distortion.

Acknowledgment: This work has been partially supported by the MICINN project reference MTM2010-17615 (Ministry of Science and Innovation, Spain).

References

[1] Aleman-Flores, M., Alvarez, L., Gomez, L., Santana-Cedres, D.: Wide-angle lens distortion correction using division models. In: LNCS Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications. Havana, Cuba (to appear in 2013)


[2] Alvarez, L., Gomez, L., Sendra, R.: Algebraic lens distortion model estimation. Image Processing On Line. http://www.ipol.im (2010), http://dx.doi.org/10.5201/ipol.2010.ags-alde

[3] Alvarez, L., Gomez, L., Sendra, R.: Accurate depth dependent lens distortion models: an application to planar view scenarios. Journal of Mathematical Imaging and Vision 39(1), 75–85 (2011), http://dx.doi.org/10.1007/s10851-010-0226-2

[4] Brown, D.C.: Close-range camera calibration. Photogrammetric Engineering 37(8), 855–866 (1971)

[5] Bukhari, F., Dailey, M.: Automatic radial distortion estimation from a single image. Journal of Mathematical Imaging and Vision 45(1), 31–45 (2012), http://dx.doi.org/10.1007/s10851-012-0342-2

[6] Claus, D., Fitzgibbon, A.W.: A rational function lens distortion model for general cameras. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 213–219 (Jun 2005), http://dx.doi.org/10.1109/CVPR.2005.43

[7] Cucchiara, R., Grana, C., Prati, A., Vezzani, R.: A Hough transform-based method for radial lens distortion correction. In: ICIAP, pp. 182–187. IEEE Computer Society (2003), http://dx.doi.org/10.1109/ICIAP.2003.1234047

[8] Devernay, F., Faugeras, O.: Straight lines have to be straight. Machine Vision and Applications 13(1), 14–24 (2001), http://dx.doi.org/10.1007/PL00013269

[9] Fitzgibbon, A.W.: Simultaneous linear estimation of multiple view geometry and lens distortion. In: Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, pp. 125–132 (2001), http://dx.doi.org/10.1109/CVPR.2001.990465

[10] Hartley, R.I., Zisserman, A.: Multiple view geometry in computer vision. Cambridge University Press (2004), http://dx.doi.org/10.2277/0511188951

[11] Hughes, C., Glavin, M., Jones, E., Denny, P.: Review of geometric distortion compensation in fish-eye cameras. In: Signals and Systems Conference (ISSC 2008), IET Irish, pp. 162–167. Galway, Ireland (2008)

[12] Lee, T.Y., Chang, T.S., Wei, C.H., Lai, S.H., Liu, K.C., Wu, H.S.: Automatic distortion correction of endoscopic images captured with wide-angle zoom lens. IEEE Transactions on Biomedical Engineering 60(9), 2603–2613 (2013)

[13] Lenz, R.: Linsenfehlerkorrigierte Eichung von Halbleiterkameras mit Standardobjektiven für hochgenaue 3D-Messungen in Echtzeit. In: Paulus, E. (ed.) Mustererkennung 1987, Informatik-Fachberichte, vol. 149, pp. 212–216. Springer Berlin Heidelberg (1987)

[14] Wang, A., Qiu, T., Shao, L.: A simple method to radial distortion correction with centre of distortion estimation. Journal of Mathematical Imaging and Vision 35(3), 165–172 (2009), http://dx.doi.org/10.1007/s10851-009-0162-1
