
EUROGRAPHICS 2002 / G. Drettakis and H.-P. Seidel (Guest Editors)

Volume 21 (2002), Number 3

Dynamic Textures for Image-based Rendering of Fine-Scale 3D Structure and Animation of Non-rigid Motion

Dana Cobzas, Keith Yerex and Martin Jägersand

Department of Computing Science, University of Alberta, Canada. {dana,keith,jag}@cs.ualberta.ca

Abstract
The problem of capturing real world scenes and then accurately rendering them is particularly difficult for fine-scale 3D structure. Similarly, it is difficult to capture, model and animate non-rigid motion. We present a method where small image changes are captured as a time varying (dynamic) texture. In particular, a coarse geometry is obtained from a sample set of images using structure from motion. This geometry is then used to subdivide the scene and to extract approximately stabilized texture patches. The residual statistical variability in the texture patches is captured using a PCA basis of spatial filters. The filter coefficients are parameterized in camera pose and object motion. To render new poses and motions, new texture patches are synthesized by modulating the texture basis. The texture is then warped back onto the coarse geometry. We demonstrate how the texture modulation and projective homography-based warps can be achieved in real-time using hardware accelerated OpenGL. Experiments comparing dynamic texture modulation to standard texturing are presented for objects with complex geometry (a flower) and non-rigid motion (human arm motion capturing the non-rigidities in the joints, and creasing of the shirt).

Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Image Based Rendering

1. Introduction

Many graphics rendering and animation techniques generate images by texture mapping a 3D scene model. This approach works well if the geometric model is sufficiently detailed, but photorealism is still hard to achieve. Computer vision structure-from-motion techniques can be used to reconstruct the model from a set of sample images. The model can be reconstructed based on points, planar patches 16 or lines 3, but the process is in general tedious and requires a lot of human intervention. For texture mapping, the model is decomposed into small planar patches that are then textured from the sample view that is closest to the desired position. An advantage of metric reconstruction is that the model can be correctly reprojected under perspective projection. Non-Euclidean models are easier to construct from a set of input images 4, 19, but without additional metric information it is difficult to specify physically correct novel views.

Recent research in image-based rendering (IBR) offers an alternative to these techniques by creating a non-geometric model of the scene from a collection of images. The idea behind the IBR techniques is to sample the plenoptic function 12 for the desired scene. There are two main approaches to this problem. One is to sample the plenoptic function under some viewing constraints that limit the camera scene to a bounding box 5, 10 or to only rotations 18. Another approach is presented in 9, where a photoconsistent volumetric model is reconstructed from a set of input images by organizing the rays. These methods will theoretically generate correct renderings but are practically hard to apply for real scenes and require calibrated cameras.

Some work towards combining model-based and image-based rendering has been performed. Debevec et al. 3 render novel views with view dependent textures, chosen either from the closest real sample image, or as a linear combination of the nearest images. We extend this work by functionally representing and parameterizing the view dependent texture by combining image-plane based synthesis 8, 7, 11 with tracking and scene geometry estimation. Buehler 2 proposed a generalization of the lumigraph by reconstructing an approximate geometric proxy for a scene from a collection of images. A novel view is obtained by combining rays from input images using a "camera blending field". Instead of focusing on the ray set, here we represent the variability in the texture, which also allows us to extend applications to non-rigid and time-varying scenes.

© The Eurographics Association and Blackwell Publishers 2002. Published by Blackwell Publishers, 108 Cowley Road, Oxford OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.


Figure 1: A sequence of training images I_1 ... I_t is decomposed into geometric shape information and dynamic texture for a set of quadrilateral patches. The scene structure S and motion parameters x_1 ... x_t are determined from the projection of the structure. The dynamic texture for each quadrilateral is decomposed into its projection y_1 ... y_t on an estimated basis B. For a given desired pose x, a novel image is generated by warping new texture synthesized from the basis B onto the projected structure.

There are several practical challenges with current techniques: (1) For the model-based approaches, generating a detailed model from images requires dense point correspondences. In practice, usually only a few points can be tracked reliably and accurately, and object structure estimated at best coarsely. (2) In conventional texture-based rendering, the placement of triangles so that real edges on the object are aligned with edges in the model is critical. However, with a sparse model obtained from images of an otherwise unknown object this is difficult to ensure. (3) Without imposing a geometric structure on the scene, rendering arbitrary views requires a dense sampling of the plenoptic function, which is practically hard to accomplish for real scenes. (4) IBR techniques are difficult to apply in dynamic scenes.

In this paper we present a method in between image- and model-based rendering. Using a coarse geometric model, we can approximately stabilize image motions from large viewpoint changes. The residual image variability is accounted for using a varying texture image, the dynamic texture, synthesized from a spatial texture basis, which is described in Section 2.1. Section 2.3 describes the geometric models, and how they are captured using structure-from-motion 19. Both the structure and dynamic texture variation are parameterized in terms of object-camera pose and object articulation, hence allowing rendering of new positions and animation of object motions. We show how to implement the dynamic texture rendering using hardware accelerated OpenGL in Section 3, and in Section 4 we verify practical performance in qualitative and quantitative experiments capturing and animating a house, a flower, and a human arm.

2. Theory

From a geometric view, IBR techniques relate the pixel-wise correspondence between sample images I_t and a desired view I_w. This can be formulated using a warp function w relating I_t(w(·)) → I_w. If I_t ≈ I_w then w is close to the identity function. However, to relate arbitrary viewpoints, w can be quite complex, and current IBR methods generally require carefully calibrated cameras in order to have a precise geometric knowledge of the ray set 5, 10.

In our method the approximate texture image stabilization achieved using a coarse model reduces the difficulty of applying IBR techniques. The residual image (texture) variability can then be coded as a linear combination of a set of spatial filters (Figure 1). More precisely, given a training sequence of images I_t and tracked points (u_t, v_t), a simplified geometric structure of the scene S and a set of motion parameters x_t that uniquely characterize each frame are estimated from the tracked points (Section 2.3). The reprojection of the structure given a set of motion parameters x is obtained by

    [u, v] = Π(S, x)    (1)

where Π is a projection function for a particular geometry of the scene. Using the re-projected structure (u_t, v_t), each sample image I_t is divided into Q quadrilateral regions I_qt (I_qt having zero support outside the quad). Each quad is warped to a standard shape (rectangle) I_wqt (Section 2.1):

    I_t = Σ_{q=1..Q} I_qt    (2)
    I_wqt = W(I_qt, (u_t, v_t))    (3)

Using the algorithm described in Section 2.1 we then compute for each quadrilateral a set of basis images B_q that capture the image variability caused by geometric approximations and illumination changes, and the set of corresponding blending coefficients y_qt:

    I_wqt = B_q y_qt + Ī_q    (4)

where Ī_q represents the mean image. To generate a new view we first estimate the projection of the structure S in the desired view (specified by motion parameters x in Equation 1). Then, texture modulation coefficients y are computed and a texture corresponding to the new view is blended (Equation 4). Finally the texture is warped to the projected structure (Equations 3, 2). The following sections describe this process in detail.
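Equation 4 is just a patch-wise linear blend. A minimal NumPy sketch (the function name and array shapes are illustrative, not from the paper):

    import numpy as np

    def synthesize_quad_texture(B_q, y_q, mean_q, shape):
        """Eq. (4) for one quad: blend a new texture from the basis.

        B_q    : (n_pixels, k) basis images, one column per basis texture
        y_q    : (k,) blending coefficients for the desired pose
        mean_q : (n_pixels,) mean (reference) texture of the patch
        shape  : (h, w) of the rectified patch, with h*w == n_pixels
        """
        return (B_q @ y_q + mean_q).reshape(shape)

In the paper this blend runs on the GPU (Section 3.2); a CPU version like this is mainly useful for checking an estimated basis and coefficients.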

2.1. Dynamic textures

The purpose of the dynamic texture is to allow for a non-geometry based modeling and synthesis of intensity variation during animation of a captured model. To illustrate how motion can be generated using a spatial basis, consider two simple examples: (1) A drifting grating I = sin(u + at) can be synthesized by modulating only a sin and cos basis: I = sin(u + at) = sin(u)cos(at) + cos(u)sin(at) = sin(u)y_1 + cos(u)y_2, where y_1 and y_2 are mixing coefficients. (2) Small image translations can be generated by I = I_0 + (∂I/∂u)Δu + (∂I/∂v)Δv. Here the image derivatives ∂I/∂u and ∂I/∂v form a spatial basis. Below we first extend this to 6 and 8 parameter warps representing texture transforms, with depth, non-rigidity and lighting compensation. Having shown generatively, based on one image, that these kinds of intensity variation can indeed be represented using a linear basis, we then describe how to estimate this basis from actual image variability in a more stable way from the statistics over long sequences of images.
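As a quick numeric check of the first example (a sketch, not code from the paper), the drifting grating is exactly reproduced by modulating a fixed two-image basis {sin(u), cos(u)}:

    import numpy as np

    # Drifting grating I(u,t) = sin(u + a*t) from a fixed spatial basis
    # with time-varying mixing coefficients (angle-addition identity).
    u = np.linspace(0.0, 2.0 * np.pi, 256)
    a, t = 0.7, 3.0

    direct = np.sin(u + a * t)
    basis = np.stack([np.sin(u), np.cos(u)], axis=1)  # spatial basis B
    y = np.array([np.cos(a * t), np.sin(a * t)])      # coefficients y1, y2
    assert np.allclose(direct, basis @ y)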

2.1.1. Parameterizing image variability

Formally, consider an image stabilization problem. Under an image constancy assumption, the light intensity from an object point p is independent of the viewing angle 6. Let I(t) be an image (patch) at time t, and I_w a stabilized (canonical) representation for the same image. In general, there then exists some coordinate remapping w s.t.

    I_w(p) = I(w(p), t)    (5)

Hence, I_w represents the image from some hypothetical viewing direction, and w is a function describing the rearrangement of the ray set from the current image I(t) to I_w. In principle w could be found if accurate models are available for the scene, camera and their relative geometry.

In practice, at best an approximate function ŵ can be found, which may be parameterized in time (e.g. in movie compression) or pose (e.g. in structure from motion and pose tracking). Below we develop mathematically the effects of this approximation. In particular we study the residual image variability introduced by the imperfect stabilization achieved by ŵ, ΔI = I(ŵ, t) − I_w. Let w = ŵ + Δw and rewrite as an approximate image variability to the first order (dropping t):

    ΔI = I(ŵ + Δw) − I_w ≈ I(ŵ) + (∂I/∂w)(ŵ) Δw − I_w = (∂I/∂w)(ŵ) Δw    (6)

The above equation expresses an optic flow type constraint in an abstract formulation, without committing to a particular form or parameterization of w. In practice, the function w is usually discretized using e.g. triangular or quadrilateral mesh elements. Next we give examples of how to concretely express image variability from these discrete representations under different camera geometries.

2.1.2. Structural image variability

Affine: Under a weak perspective (or orthographic) camera geometry, plane-to-plane transforms are expressed using an affine transform of the form:

    [u_w, v_w]^T = a(p),  a(p) = [a3 a4; a5 a6] p + [a1, a2]^T    (7)

This is also the standard image-to-image warp supported in OpenGL. Now we can rewrite the image variability of Eq. 6 resulting from variations in the six affine warp parameters as:

    ΔI_a = Σ_{i=1..6} (∂I_w/∂a_i) Δa_i = [∂I/∂u, ∂I/∂v] [∂u/∂a1 ... ∂u/∂a6; ∂v/∂a1 ... ∂v/∂a6] [Δa1, ..., Δa6]^T    (8)

Let [I] = discr(I) be a discretized image flattened along the column into a vector, and let '·u' and '·v' indicate point-wise multiplication with column-flattened camera coordinate u and v index vectors. Rewriting the inner derivatives gives an explicit expression of the 6 parameter variability in terms of spatial image derivatives:

    ΔI_a = [∂I/∂u, ∂I/∂v] [1 0 ·u 0 ·v 0; 0 1 0 ·u 0 ·v] [y1, ..., y6]^T = [B1 ... B6] [y1, ..., y6]^T = B_a y_a    (9)

where [B1 ... B6] can be interpreted as a variability basis for the affine transform.
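A small sketch of Eq. 9 (assuming pixel-unit derivatives; the function name is ours, not the paper's): the six basis images are the two image derivatives paired with the constant, u and v inner derivatives of the affine warp.

    import numpy as np

    def affine_variability_basis(I):
        """Six-column variability basis [B1 ... B6] of Eq. (9) for one patch."""
        h, w = I.shape
        dIdv, dIdu = np.gradient(I)        # axis 0 varies with v, axis 1 with u
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        Iu, Iv = dIdu.ravel(), dIdv.ravel()
        u, v = u.ravel(), v.ravel()
        # column order follows a1..a6 of Eq. (7): translation, then linear part
        return np.stack([Iu, Iv, Iu * u, Iv * u, Iu * v, Iv * v], axis=1)

    # Delta I (flattened) is approximately affine_variability_basis(I) @ delta_a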

Projective: For a perspective camera the physically correct plane-to-plane transform is represented by a projective homography 4, 18. Using homogeneous coordinates p_h = [u, v, 1]^T, points in two planar scene views I1 and I2 are related by:

    [u, v, λ]^T = h(p_h),  h(p_h) = [h1 h2 h3; h4 h5 h6; h7 h8 1] p_h = H p_h    (10)

This transformation is up to a scale, so in general H has eight independent parameters, and it can be computed from four corresponding points in the two views. Analogously to the affine case we can rewrite the resulting first order image variability due to the homography parameters h1 ... h8 in terms of spatial derivatives:

    ΔI_h = [Σ_{i=1..8} (∂I/∂h_i) Δh_i]_discr = [B1 ... B8] [y1, ..., y8]^T = B_h y_h    (11)
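The text notes that H can be computed from four point correspondences. One standard way to do this (a sketch assuming exact, non-degenerate correspondences; not code from the paper) is to fix h9 = 1 as in Eq. 10 and solve the resulting 8 x 8 linear system:

    import numpy as np

    def homography_from_points(src, dst):
        """Solve for H in Eq. (10), with h9 = 1, from four point pairs.

        src, dst : (4, 2) arrays of corresponding (u, v) points
        """
        A, b = [], []
        for (u, v), (x, y) in zip(src, dst):
            # x = (h1 u + h2 v + h3)/(h7 u + h8 v + 1), similarly for y
            A.append([u, v, 1, 0, 0, 0, -x * u, -x * v]); b.append(x)
            A.append([0, 0, 0, u, v, 1, -y * u, -y * v]); b.append(y)
        h = np.linalg.solve(np.array(A, float), np.array(b, float))
        return np.append(h, 1.0).reshape(3, 3)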

Depth compensation: While common mesh discretizations model objects as piece-wise planar, real world objects are seldom perfectly planar. This non-planarity will introduce a parallax error in the image, dependent on the relative depth difference between the true scene point and the approximating plane. For small depth variations this results in small image plane translations. Hence, similar to the small coordinate translations caused by tracking and mesh geometry, we can write the resulting intensity variation due to the depth parallax by modulating a linear basis:

    ΔI_d = [B_d1, B_d2] [y_d1, y_d2]^T    (12)

Non-rigidity: We consider only non-rigidities where the shape change is a function of some measurable quantity x ∈ ℝ^n. In this paper we choose x from pose and articulation parameters. Let g(x) (= [u, v]^T) represent the image plane projection of the non-rigid warp. We can then write the resulting first order image variation as:

    ΔI_n = [Σ_{i=1..n} (∂I/∂x_i) Δx_i]_discr = [∂I/∂u, ∂I/∂v] [(∂g/∂x1)Δx1, ..., (∂g/∂xn)Δxn]_discr = [B1 ... Bn] [y1, ..., yn]^T = B_n y_n    (13)

Illumination variation: It has been shown that for a convex Lambertian object, the image variability due to different illumination can be expressed as a three dimensional linear basis 15, 6. For a general object, the illumination component can be approximated with a low dimensional basis:

    ΔI_l = [B1 ... B3] [y1, ..., y3]^T = B_l y_l    (14)

2.1.3. Statistical image variability

In a real, imperfectly stabilized image sequence we can expect all of the above types of image variation, as well as unmodeled effects and noise ΔI_e. Hence, the total residual image variability can be written as:

    ΔI = ΔI_s + ΔI_d + ΔI_n + ΔI_l + ΔI_e = B_s y_s + B_d y_d + B_n y_n + B_l y_l + ΔI_e = B y + ΔI_e    (15)

where ΔI_s is either ΔI_a or ΔI_h depending on the geometry used. Approximations to B_a and B_h can be computed from image derivatives (this is used to bootstrap tracking), while B_d y_d, B_n y_n and B_l y_l all require detailed geometric knowledge of the scene, camera and lighting. Note the linear form of Eq. 15, where a set of basis textures, B = [B1 ... Bm], are mixed by a corresponding set of blending coefficients, y = [y1, ..., ym]^T. To avoid explicit modeling we instead observe actual image variability from an image sequence I(t), which has been stabilized using an approximate ŵ, so we now have a sequence of small variations ΔI. We compute a reference image as the pixel-wise mean image Ī = (1/M) Σ_{t=1..M} I_w(t), and form a zero mean distribution of the residual variability as I_z(t) = I_w(t) − Ī. From these we can use standard PCA (principal component analysis) to compute a basis B̂ which captures (up to a linear coordinate transform) the actually occurring image variation in the true B (Eq. 15).

Briefly, we perform the PCA as follows. Form a measurement matrix A = [I_z(1), ..., I_z(M)]. The principal components are the eigenvectors of the covariance matrix C = A A^T. A dimensionality reduction is achieved by keeping only the first k of the eigenvectors. For practical reasons, usually k < M ≪ l, where l is the number of pixels in the texture patch, and the covariance matrix C will be rank deficient. We can then save computational effort by instead computing L = A^T A and the eigenvector factorization L = V D V^T, where V is an orthonormal and D a diagonal matrix. From the k first eigenvectors V̂ = [v1 ... vk] of L we form a k-dimensional eigenspace B̂ of C by B̂ = A V̂. Using the estimated B̂ we can now write a least squares optimal estimate of any intensity variation in the patch as

    ΔI = B̂ ŷ    (16)

the same format as Eq. 15, but without using any a-priori information to model B. While ŷ captures the same variation as y, it is not parameterized in the same coordinates, but related by an (unknown) linear transform. We are not explicitly interested in ŷ for the rendering, but instead need to know how to relate the estimated global pose x to ŷ. This is in general a non-linear function f (e.g. due to the angular parameterization of camera pose in the view synthesis, Fig. 8, or the non-linear kinematics in the arm animation, Fig. 11). To relate these we note that tracking and pose factorization (Eq. 23) give us poses [x1, ..., xM] for all training images in the sample sequence, and from the orthogonality of V̂ we have that the corresponding texture mixing coefficients are the columns of [ŷ1, ..., ŷM] = V̂^T. Hence, many standard function approximation techniques are applicable. We use multi-dimensional spline interpolation to estimate f. Comparing cubic and linear splines we found that linear splines suffice. During animation these can be computed quickly and efficiently from only the nearest neighbor points.
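The Gram-matrix trick above is easy to misread in prose, so here is a compact sketch (names and shapes are ours; it assumes the top k eigenvalues are nonzero):

    import numpy as np

    def texture_basis_pca(Iw_stack, k):
        """PCA texture basis of Sec. 2.1.3 via the small M x M matrix L = A^T A.

        Iw_stack : (M, n_pixels) stabilized patches, one row per frame
        Returns mean, basis B (n_pixels, k) and coefficients Y (M, k)
        so that Iw_stack[t] is approximately B @ Y[t] + mean.
        """
        mean = Iw_stack.mean(axis=0)
        A = (Iw_stack - mean).T              # columns are zero-mean frames
        L = A.T @ A                          # M x M instead of n_pixels^2
        evals, V = np.linalg.eigh(L)         # ascending eigenvalues
        V = V[:, np.argsort(evals)[::-1][:k]]
        B = A @ V                            # k-dim eigenspace of C = A A^T
        B /= np.linalg.norm(B, axis=0)       # orthonormal columns
        Y = (B.T @ A).T                      # least-squares coefficients
        return mean, B, Y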

2.2. Interpretation of the variability basis

In our application, the geometric model captures gross image variation caused by large movements. The remaining variation in the rectified patches is mainly due to:

1. Tracking errors, as well as errors due to geometric approximations (e.g. the weak perspective camera), cause the texture to be sourced from slightly inconsistent locations in the training images. These errors can be modeled as a small deviation Δ[h1, ..., h8]^T in the homography parameters from the true homography, and cause image differences according to Eq. 11. The weak perspective approximations, as well as many tracking errors, are persistent and a function of object pose. Hence they will be captured by B̂ and indexed in pose x by f.

2. Depth variation is captured by Eq. 12. Note that projected depth variation along the camera optic axis changes as a function of object pose.


3. Assuming fixed light sources and a moving camera or object, the light variation is a function of relative camera-object pose as well.

From the form of Equations 11 and 12 we expect that pose variations in the image sequence will result in a texture variability described by combinations of spatial image derivatives. In Fig. 2 we compare numerically calculated spatial image derivatives to the estimated variability basis B̂.

Figure 2: Comparison between the spatial derivatives ∂I_w/∂x and ∂I_w/∂y (left two texture patches) and two vectors of the estimated variability basis [B1, B2] (right) for house pictures.

In synthesizing texture to render a sequence of novel images, the function f modulates the filter bank B̂ so that the new texture dynamically changes with pose x according to I_w = B̂ f(x) + Ī.
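For the pose-to-coefficient map f, one possible stand-in for the paper's multi-dimensional linear splines is SciPy's piecewise-linear scattered-data interpolator (a sketch; it assumes pose dimension d >= 2 for the underlying triangulation, and queries inside the convex hull of the training poses):

    import numpy as np
    from scipy.interpolate import LinearNDInterpolator

    def fit_coefficient_map(X_train, Y_train):
        """Approximate f: pose -> texture coefficients.

        X_train : (M, d) training poses, e.g. from Eq. (23)
        Y_train : (M, k) PCA blending coefficients per training frame
        """
        return LinearNDInterpolator(X_train, Y_train)

    # usage: f = fit_coefficient_map(X, Y); y_new = f(x_new)  # NaN outside hull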

2.3. Geometric model

There are several techniques for estimating geometric structure from a set of images. Most of the methods assume a static scene and estimate the structure from a set of corresponding image points. Depending on the camera model and calibration data the estimated model can vary from a projective or affine to a metric or 3D Euclidean model. For a few images, under a perspective projection assumption, the structure can be determined using epipolar geometry (two images) or the trilinear tensor (three images) 4, 14. In the case of video, i.e. long motion sequences, methods which utilize all image data in a uniform way are preferred. Such methods recover affine 19, 20, 13 or projective 17 structure using factorization approaches. Another approach to this problem is to start with an initial estimate of the model and refine it using sample images 16, 3.

For a dynamic scene or a nonrigid object the problem becomes much more complicated. A solution is to decompose the scene into rigid, static components and estimate each structure separately. Bregler 1 proposed a generalization of the Tomasi-Kanade factorization using linear combinations of a set of estimated basis shapes.

We propose here two structure-from-motion algorithms. The first one assumes rigid objects and estimates a metric geometry of the scene using a modified version of the Tomasi-Kanade factorization algorithm 19. The second algorithm estimates the 2D geometry of an articulated arm assuming image plane motion.

2.3.1. Metric structure under weak perspective assumption

A general structure-from-motion algorithm starts with a set of corresponding features (points, lines, line segments) in a sequence of images of a scene or object and recovers the world coordinates of these features and the positions of the cameras (rotation and translation) relative to this representation under some viewing constraints (see Figure 3). The corresponding features can be established using tracking, correlation algorithms or by manual selection. In the most general case a 3D Euclidean structure is estimated assuming a projective model of the camera. These algorithms require precise calibration of the camera and numerous corresponding features, and are in general highly nonlinear and sensitive to tracking errors. Assuming a more simplified model of the camera (orthographic, weak perspective or paraperspective projection 4, 14, 13), the problem is linearized and an affine structure of the scene is estimated using factorization. Using multiple images allows for stable solutions despite relatively few tracked points and typical tracking errors.

Figure 3: A general structure from motion algorithm extracts the structure and camera poses from a set of tracked points.

Here we have used an extension of the Tomasi-Kanade factorization algorithm 19 to the weak perspective camera projection model, similar to Weinshall and Tomasi 20. First, the algorithm recovers affine structure from a sequence of uncalibrated images. Then, a relation between the affine structure and camera coordinates is established. This is used to transform the estimated scene structure to an orthogonal coordinate frame. Finally, using similarity transforms expressed in metric rotations and translations, the structure can be reprojected into new, physically correct poses. Since we use only image information, our metric unit of measure is pixel coordinates. We next describe a more detailed mathematical formulation of the problem.

Affine structure from motion: Under weak perspective projection, a point P_i = [X_i, Y_i, Z_i]^T is related to the corresponding point p_ti = [u_ti, v_ti]^T in image frame I(t) by the following affine transformation:

    u_ti = s_t i_t^T P_i + a_t
    v_ti = s_t j_t^T P_i + b_t    (17)

where i_t and j_t are the components along the camera rows and columns of the rotation R_t, s_t is a scale factor and (a_t, b_t) are the first two components t1_t of the translation t_t (R_t and t_t align the camera coordinate system with the world reference system and represent the camera pose).

Rewriting Eq. 17 for N points tracked in M frames:

    W = R P + t1    (18)

where W is a 2M × N matrix containing the image measurements, R represents both scaling and rotation, P is the shape and t1 is the translation in the image plane 20.

If the image points are registered with respect to their centroid in the image plane and the center of the world coordinate frame is the centroid of the shape points, the projection equation becomes:

    W̃ = R P  where  W̃ = W − t1    (19)

Following 19, in the absence of noise we have rank(W̃) = 3. Under most viewing conditions with a real camera the effective rank is 3. Considering the singular value decomposition W̃ = O1 Σ O2 we form

    R̂ = O1',  P̂ = Σ' O2'    (20)

where O1', Σ' and O2' are respectively defined by the first three columns of O1, the first 3 × 3 submatrix of Σ and the first three rows of O2 (assuming the singular values are ordered in decreasing order).

Metric constraints: The structure and camera pose recovered using the factorization algorithm are in an affine reference frame. To control the new positions of the rendered images in a metric reference system, they need to be mapped to an orthogonal coordinate frame, which in our case is aligned with the pixel coordinate system of the camera row and column.

The matrices R̂ and P̂ are a linear transformation of the metric scaled rotation matrix R and the metric shape matrix P. More specifically, there exists a 3 × 3 matrix Q such that:

    R = R̂ Q
    P = Q^{-1} P̂    (21)

Q can be determined by imposing constraints on the components of the scaled rotation matrix R:

    i_t^T Q Q^T i_t = j_t^T Q Q^T j_t (= s_t^2)
    i_t^T Q Q^T j_t = 0,  t = 1 ... M    (22)

where R̂ = [i1 ... iM j1 ... jM]^T. The first constraint assures that the corresponding rows s_t i_t^T, s_t j_t^T of the scaled rotation R in Eq. 18 are unit vectors scaled by the factor s_t, and the second equation constrains them to be orthogonal vectors. This generalizes 19 from the orthographic to the weak perspective case. The resulting transformation is up to a scale and a rotation of the world coordinate system. To eliminate the ambiguity we align the axes of the reference coordinate system with the first frame and estimate only eight parameters in Q (fixing a scale).

To extract pose information for each frame we first estimate the scale factor s_t and the rotation components i_t and j_t by computing the norms of the rows in R, which give the scale factors, and then normalizing the rows. Considering that i_t and j_t can be interpreted as the orientation of the vertical and horizontal camera image axes in the object space, we compute the direction of the camera projection axis as k_t = i_t × j_t. We now have a complete representation for the metric rotation, which we parametrize with Euler angles r_t = [ψ_t, θ_t, φ_t]. Each camera pose is represented by the motion parameter vector

    x_at = [r_t, s_t, a_t, b_t]    (23)

The geometric structure is represented by

    S_a = P    (24)

and its reprojection given a new pose x_a = [r, s, a, b] is estimated by

    [u, v] = Π_a(S_a, x_a) = s R(r) S_a + [a, b]^T    (25)

where R(r) represents the rotation matrix given the Euler angles r.
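The affine part of this pipeline (Eqs. 18-20) is a few lines of linear algebra. A sketch (our naming; the metric upgrade by Q from Eqs. 21-22 is omitted here):

    import numpy as np

    def affine_factorization(W):
        """Rank-3 factorization of the measurement matrix (Eqs. 18-20).

        W : (2M, N) stacked image coordinates of N points in M frames
            (any consistent row stacking of u's and v's works).
        Returns R_aff (2M, 3), P_aff (3, N) and t1 (2M,) such that
        W is approximately R_aff @ P_aff + t1[:, None].
        """
        t1 = W.mean(axis=1)                  # centroid = translation (Eq. 19)
        W0 = W - t1[:, None]                 # registered measurements
        O1, s, O2 = np.linalg.svd(W0, full_matrices=False)
        R_aff = O1[:, :3]                    # Eq. (20)
        P_aff = np.diag(s[:3]) @ O2[:3, :]
        return R_aff, P_aff, t1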

2.3.2. Planar motion of a kinematic arm

One way to estimate the shape of a nonrigid object is to decompose it into rigid parts and use standard structure from motion algorithms to estimate the geometry of each component. These then have to be unified into a common reference frame using additional constraints (for example kinematic constraints). An interesting solution, proposed by Bregler 1, approximates a non-rigid object by a linear combination of a set of estimated basis shapes.

As an illustration of this idea we estimated the geometry of an arm assuming the motion is parallel to the image plane, so the problem becomes planar. Knowing the position of the palm (h_u, h_v), the shoulder (s_u, s_v) and the lengths of the two arm segments a1, a2, the position of the elbow (e_u, e_v) can be estimated using the inverse kinematics equations for a two link planar manipulator (see Figure 5):

Figure 5: Arm kinematics.

    e_u = a1 cos θ1 + s_u
    e_v = a1 sin θ1 + s_v    (26)

where

    θ1 = φ − ψ
    φ = atan2(h_v − s_v, h_u − s_u)
    ψ = atan2(a2 sin θ2, a1 + a2 cos θ2)
    θ2 = ±2 tan^{-1} √( ((a1 + a2)^2 − r^2) / (r^2 − (a1 − a2)^2) )    (27)

with r^2 = (h_u − s_u)^2 + (h_v − s_v)^2 the squared shoulder-palm distance.

In practice we tracked the shoulder and the palm and computed the position of the elbow using the inverse kinematics formulas 26, 27. The lengths of the two links are estimated from a subset of training images using manual selection of the elbow joint. Assuming the shoulder is not moving, we estimate its position (s_u, s_v) by averaging the values of the tracked shoulder points. There are in general two solutions for the elbow joint, but we kept only the one with positive θ2, which produces a natural motion of the arm.

Figure 4: Approximation to homographic warping in OpenGL with subdivision at different resolutions.
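The inverse kinematics of Eqs. 26-27 can be written directly (a sketch with our function name; it assumes the palm is within reach, |a1 − a2| ≤ r ≤ a1 + a2, and takes the positive-θ2 branch as in the paper):

    import numpy as np

    def elbow_position(shoulder, hand, a1, a2):
        """Two-link planar inverse kinematics (Eqs. 26-27)."""
        su, sv = shoulder
        hu, hv = hand
        du, dv = hu - su, hv - sv
        r2 = du * du + dv * dv               # squared shoulder-palm distance
        # elbow angle (law of cosines in half-angle form, Eq. 27)
        theta2 = 2.0 * np.arctan(
            np.sqrt(((a1 + a2) ** 2 - r2) / (r2 - (a1 - a2) ** 2)))
        phi = np.arctan2(dv, du)
        psi = np.arctan2(a2 * np.sin(theta2), a1 + a2 * np.cos(theta2))
        theta1 = phi - psi
        return su + a1 * np.cos(theta1), sv + a1 * np.sin(theta1)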

Figure 6: Quadrilaterals defined on the wire frame arm to enclose the arm texture.

For texture mapping the arm we defined three quadrilaterals that follow the wire frame (s_u, s_v), (e_u, e_v), (h_u, h_v) (see Figure 6). They have a unique reprojection function Π_k given the geometry of the two link wire frame. We can express the geometry of the arm as

    S_k = [a1, a2, s_u, s_v, γ]    (28)

where γ represents the parameters of the heuristic shape we impose on the wire frame arm. The motion of the arm is controlled by the position of the palm

    x_k = (h_u, h_v)    (29)

3. Implementation

Our system runs entirely on consumer grade PCs using Linux and other free software packages. For the tracking and model capture we use a Pyro ieee1394 webcam, interfaced to Linux on a patched 2.4.4 kernel and using the public ieee1394 digital video libraries from www.sourceforge.net. Video capture, tracking and geometry analysis are implemented as separate processes using PVM (Parallel Virtual Machine). The systems used for both capturing and rendering are 1.4 GHz machines, with the hardware rendering running in OpenGL on an NVidia GeForce2 chipset.

3.1. Algorithm

Training data: We use XVision 6 to set up a video pipeline and implement real time tracking. To initiate tracking, the user selects a number (about a dozen or more) of high-contrast regions in the image with the mouse. These are grabbed, and tracked by estimating the image variability y in Eq. 9 from the input video images. Most scenes are difficult to tile fully with trackable regions, while it is possible to successfully track small subparts. From tracking we extract the first two parameters from y, representing image plane position, and use these points to form large quadrilaterals over the whole scene. From these quadrilaterals textures are sampled at rates from 2 to 5 Hz, while the trackers run in the background at about 30 Hz (for up to 20 trackers).

Geometric Model: We estimate the geometry reprojection function Π based on the samples from the training data, using one of the techniques described in Section 2.3.

Dynamic Texture:

1. Warp each patch I_q(t) to a standard shape I_wq(t) using the warping function (Equations 7 or 10). The standard shape is chosen to be square with sides of length 2^n to accommodate hardware accelerated rendering (see Section 3.2).

2. Form a zero mean sample, and perform the PCA as described in Section 2.1. Keep the first k basis vectors B_q, and the corresponding coefficients y_q(t) for each frame in the training sequence.

New View Animation:

1. For each frame in the animation compute the reprojection [u, v] from the desired pose x as in Equation 1.

2. Calculate texture coefficients y_q for each texture by interpolating the coefficients of the nearest neighbors from the training data.

3. Compute the new textures in the standard shape using Equation 4 and rewarp the textures onto the calculated geometry. (These two operations are performed simultaneously in hardware as described in the next section.)

3.2. Hardware Rendering

Figure 7: House sequence animated at different viewpoints (deviation of about 10° from the original sample images).

Texture Blending: To render the dynamic texture we use the texture blending features available on most consumer 3D graphics cards. The rendering hardware used is designed for textures containing positive values only, while the spatial basis of Equation 16 is a signed quantity. We rewrite it as a combination of two textures with only positive components:

    I_wq(t) = B⁺_q y_q(t) − B⁻_q y_q(t) + Ī_q

where B⁺_q contains only the positive elements from B_q (and 0 in the place of negative elements) and B⁻_q contains the absolute values of all negative elements from B_q. Then, before drawing each basis texture, the blending mode can be set to either scale by a coefficient and add to the frame buffer, or scale by a coefficient and subtract from the frame buffer (depending on the sign of the coefficient). A new view is rendered as in the following pseudocode:

    for(each q){
        // draw the mean texture Ī_q
        BindTexture(Iq);
        DrawQuad(q);
        // add the basis textures
        for(each i){
            SetBlendCoefficient(|yqi(t)|);
            BindTexture(Bplus_qi);          // positive part of B_qi
            if(yqi(t) > 0) SetBlendEquation(ADD);
            else SetBlendEquation(SUBTRACT);
            DrawQuad(q);
            BindTexture(Bminus_qi);         // magnitudes of the negative part
            if(yqi(t) > 0) SetBlendEquation(SUBTRACT);
            else SetBlendEquation(ADD);
            DrawQuad(q);
        }
    }
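The correctness of this split rests on the identity B y = B⁺ y − B⁻ y, with B⁺ = max(B, 0) and B⁻ = max(−B, 0). A quick NumPy check (illustrative only, with random stand-in data):

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((64 * 64, 5))    # signed basis textures
    y = rng.standard_normal(5)               # blending coefficients

    B_plus, B_minus = np.maximum(B, 0.0), np.maximum(-B, 0.0)
    assert np.allclose(B @ y, B_plus @ y - B_minus @ y)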

Implementing texture blending in this way, the arm example (with three quadrilaterals using 100 basis textures each) can be rendered at 18 Hz, whereas another implementation, which blends textures in software but still uses OpenGL to perform texture mapping, can render the same animation at only 2.8 Hz on the same machine.

Warping: The warping used in hardware texture mapping is designed for 3D models and (for correct perspective warping) requires a depth component for each vertex. The way we currently form our normalized quads, there is no consistent 3D interpretation of the texture patches, so we use 2D-2D planar warps, and for these the hardware performs only affine warping, which is sufficient in the case of the arm example. However, in the structure from motion case, since we do not have an exact model, our method uses planar homographies (Eq. 10) to preserve perspective texture projection. These are approximated by applying the affine warp to subdivided regions. Each quadrilateral is first subdivided in a uniform grid in texture space. These vertices are then projected to image space using the homography warping function, and each of the smaller quadrilaterals is rendered in hardware without depth variation. By increasing the resolution of the subdividing mesh, the accuracy of the approximation can be increased (see Figure 4).
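A sketch of the subdivision (our naming; H maps texture coordinates in [0,1]² to the image as in Eq. 10): project the grid vertices through the homography, then let the hardware warp each small quad affinely.

    import numpy as np

    def subdivide_and_project(H, n):
        """Vertex grid for approximating a homography with n x n affine quads.

        Returns image-space vertex positions, shape (n+1, n+1, 2) (Fig. 4).
        """
        s = np.linspace(0.0, 1.0, n + 1)
        u, v = np.meshgrid(s, s)
        ph = np.stack([u, v, np.ones_like(u)], axis=-1)  # homogeneous coords
        p = ph @ H.T                                     # apply Eq. (10)
        return p[..., :2] / p[..., 2:3]                  # perspective divide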

4. Experimental results

Figure 9: Geometric errors on the house sequence. Top: Rendered images using static (left) and dynamic (right) textures respectively. Bottom: Detail showing the geometric errors.

We have tested our method both qualitatively and quantitatively by capturing various scenes and objects and then reanimating new scenes and motions using dynamic texture rendering. Here we present the renderings of a toy house, a flower and the animation of non-rigid articulated motion from a human arm.

Figure 8: Flower sequence: (leftmost) one of the original images and the outline of the quadrilateral patches.

Many man-made environments are almost piece-wise planar. However, instead of making a detailed model of every surface, it is more convenient to model only the large geometric structure, e.g. the walls and roofs of houses, and avoid the complexity of the details, e.g. windows, doors, entry ways, eaves and other trim. Figure 7 shows the rendering of a toy house in different poses. To capture the model, a real-time video feed was displayed and the user initialized SSD trackers by clicking on several house features. The house was turned to obtain several sample views. The motion range in one captured sequence is limited in our implementation, since each marked tracking target has to remain visible. To obtain a large motion range, two separate sequences were pieced together to generate Fig. 7. A total of 270 example frames were used to compute a texture basis of size 50.

The geometry used in the house sequence only coarsely captures the scene geometry. In conventional rendering these errors are visually most evident as a shear at the junction of mesh elements, see Fig. 9 left. Compare to the one rendered using the dynamic texture (right), where the dynamic texture compensates for the depth inaccuracy of the mesh and aligns the texture across the seam.

Unlike man-made scenes, most natural environments cannot easily be decomposed into planar regions. To put our method to the test, we captured a flower using a very simple geometry of only four quadrilaterals. This causes a significant residual variability in the texture images. A training sequence of 512 sample images was captured from motions x with angular variation of about ±40, ±40 and ±10 degrees around the camera u-, v- and z-axis respectively. A texture basis of size 100 was estimated, and used to render the example sequences seen in Fig. 8. We also captured a movie (flower.mpg) that shows the reanimation of the flower.

As an example of a non-rigid object animation we captured the geometry of an arm from the tracked positions of the shoulder and hand using image-plane motions (see Section 2.3.2). We recorded 512 sample images and estimated a texture basis of size 100. The arm motion was coded using the image position of the hand. Figure 11 shows some intermediate positions reanimated using the dynamic texture algorithm. To test the performance of our algorithm we generated parallel renderings using static texture. To be consistent with the dynamic texturing algorithm, we sampled texture from an equally spaced 100 image subset of the original sequence. Figure 10 illustrates the performance of the two algorithms for re-generating one of the initial positions and for a new position. We also created a movie (arm.mpg) that reanimates the arm in the two cases. Notice the geometric errors in the case of the static texture algorithm that are corrected by the dynamic texture algorithm.

Figure 10: Geometric errors on the arm sequence. (top) Renderings of a new position using static and dynamic textures respectively. (bottom) Re-renderings of one frame from the training sequence using static and dynamic textures.

Figure 12: Intensity pixel error in the rendered images (compared to the original image): mean error over time for static and dynamic texture.

To quantify the geometric errors of our dynamic texturing algorithm compared to a classic texturing algorithm, we took images of a pattern and then regenerated some of the original positions. We recorded differences in pixel intensities between the rendered images and the original ones (Figure 12). Notice that the error was almost constant in the case of dynamic texture and very uneven in the case of static texture. For the static texture case we used frames 0, 5, 9, 13 for sourcing the texture (consistent with using three texture basis vectors in the dynamic case), so it is expected that the error drops to zero when reproducing these frames. The mean relative intensity error was 1.17% in the case of static texture and 0.56% in the case of dynamic texture. There are still some artifacts present in the reanimation, mainly caused by the imprecisely estimated geometric model. An improved solution would be to automatically estimate a non-rigid model from tracked points (similar to Bregler 1).


Figure 11: Arm animation.

For an animation there are global errors throughout the whole movie that are not visible in one frame but only in the motion impression from the succession of frames. One important dynamic measurement is motion smoothness. When using static texture we source the texture from a subset of the original images (k + 1 images if k is the number of texture basis vectors), so there is significant jumping when changing the texture source image. We tracked a point through a generated sequence for the pattern in the two cases and measured the smoothness of motion. Table 1 shows the average pixel jitter.

                      Vertical jitter   Horizontal jitter
    Static texture         1.15               0.98
    Dynamic texture        0.52               0.71

Table 1: Average pixel jitter (in pixels).

5. Discussion

We defined a dynamic texture, and showed how to use it to compensate for inaccuracies in a coarse geometric model. With a relaxed accuracy requirement, we showed how to obtain a coarse scene model using a structure-from-motion algorithm on real-time video. Using the model, rectified texture patches are extracted, and the residual intensity variation in these is analyzed statistically to compute a spatial texture basis. In rendering and animation, new images are generated by modulating the texture basis and warping the resulting, time varying texture onto the image projection of the geometric model under the new desired pose.

An advantage of our method is that all steps in modeling and re-rendering can be done with just an uncalibrated camera (webcam) and a standard PC, instead of needing expensive 3D scanners. This also avoids the cumbersome registration needed to align a 3D model with camera images.

Current graphics cards have texture memory sizes designed for conventional static texturing. In our application we use a texture basis of 10-100 elements, which allows only one object of the type we have shown to be rendered at a time. We plan to investigate ways of partitioning the texture basis so that only a subset is needed for each view, and adding a swapping strategy to move textures between graphics and main memory as predicted by the current motion being rendered. This we hope will extend our capability to capture and render, for instance, not only an arm but a whole upper body of a human.

References

1. C. Bregler, A. Hertzmann, and H. Biermann. Recovering non-rigid 3D shape from image streams. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR'00), 2000.

2. C. Buehler, M. Bosse, S. Gortler, M. Cohen, and L. McMillan. Unstructured lumigraph rendering. In Computer Graphics (SIGGRAPH'01), 2001.

3. P. E. Debevec, C. J. Taylor, and J. Malik. Modeling and rendering architecture from photographs. In Computer Graphics (SIGGRAPH'96), 1996.

4. O. D. Faugeras. Three-Dimensional Computer Vision: A Geometric Viewpoint. MIT Press, Boston, 1993.

5. S. J. Gortler, R. Grzeszczuk, and R. Szeliski. The lumigraph. In Computer Graphics (SIGGRAPH'96), pages 43–54, 1996.

6. G. D. Hager and P. N. Belhumeur. Efficient region tracking with parametric models of geometry and illumination. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(10):1025–1039, 1998.

7. Z. Hakura, J. Lengyel, and J. Snyder. Parameterized animation compression. In Eurographics Workshop on Rendering, 2000.

8. M. Jagersand. Image based view synthesis of articulated agents. In Computer Vision and Pattern Recognition, 1997.

9. K. Kutulakos and S. Seitz. A theory of shape by space carving. International Journal of Computer Vision, 38:197–216, 2000.

10. M. Levoy and P. Hanrahan. Light field rendering. In Computer Graphics (SIGGRAPH'96), pages 31–42, 1996.

11. T. Malzbender, D. Gelb, and H. Wolters. Polynomial texture maps. In Computer Graphics (SIGGRAPH'01), 2001.

12. L. McMillan and G. Bishop. Plenoptic modeling: An image-based rendering system. In Computer Graphics (SIGGRAPH'95), pages 39–46, 1995.

13. C. J. Poelman and T. Kanade. A paraperspective factorization method for shape and motion recovery. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(3):206–218, 1997.

14. M. Pollefeys. Tutorial on 3D Modeling from Images. Lecture Notes, Dublin, Ireland (in conjunction with ECCV 2000), 2000.

15. A. Shashua. Geometry and Photometry in 3D Visual Recognition. PhD thesis, MIT, 1993.

16. I. Stamos and P. K. Allen. Integration of range and image sensing for photorealistic 3D modeling. In ICRA, 2000.

17. P. Sturm and B. Triggs. A factorization based algorithm for multi-image projective structure and motion. In ECCV (2), pages 709–720, 1996.

18. R. Szeliski. Video mosaics for virtual environments. IEEE Computer Graphics and Applications, pages 22–30, March 1996.

19. C. Tomasi and T. Kanade. Shape and motion from image streams under orthography: A factorization method. International Journal of Computer Vision, 9:137–154, 1992.

20. D. Weinshall and C. Tomasi. Linear and incremental acquisition of invariant shape models from image sequences. In Proc. of 4th Int. Conf. on Computer Vision, pages 675–682, 1993.
