Display of 3-D Digital Images: Computational Foundations and Medical Applications


These new approaches to 3-D medical imaging promise better and more cost-effective object identification, representation, and manipulation in space and time.

Display of 3-D Digital Images: Computational Foundations and Medical Applications

Gabor T. Herman and Jayaram K. Udupa

Hospital of the University of Pennsylvania

Three-dimensional digital scenes arise in a number of scientific, industrial, and medical applications. Effective display of the 3-D information in these scenes is important for various reasons; in medical applications, these are obvious.

Complete perception of the 3-D structure of internal organs and their surroundings is critical to both diagnosis and treatment. Currently, a number of 3-D imaging technologies produce 3-D digital scenes of internal structures within a specified region. Scanners, the devices that produce such scenes, estimate certain physical properties of the material of the objects in the region within a rectangular parallelepiped-shaped volume element. Thus, a 3-D digital scene is essentially an array of such volume elements, or voxels. Each voxel has an assigned value, called its density.

The authors' approach

A number of approaches [1] are available for the display of 3-D information in 3-D digital scenes.

Our interest lies in techniques of computer-generated displays of surfaces or objects present in 3-D digital scenes. Our approach is inherently three-dimensional, in the sense that the set of slice images produced by scanners is treated as a 3-D digital scene. Further, a display of the surfaces of objects in this scene is derived from a sequence of transformations of the 3-D digital scene.

The transformations perform

* object identification,
* representation of the object,
* detection/formation of its surfaces, and
* display of the surfaces.

In addition, a capability to manipulate the object is incorporated at both the object identification and representation levels. In the present article, we discuss the computational aspects of all of these processing steps. We use the basic capabilities of surface display, combined with the ability to distinguish surfaces (e.g., through the transparency feature), to manipulate the objects in a number of applications.

Object identification, representation, and manipulation

We describe voxel v by the coordinates x, y, z of its central point. A 3-D digital scene V is a two-tuple (S, F) where

S = {v | 1 ≤ x ≤ X ∧ 1 ≤ y ≤ Y ∧ 1 ≤ z ≤ Z},

∧ stands for logical AND, and F is a mapping from S into set I of densities.

Usually, the densities estimated by scanners are integer-valued; therefore, we consider I to be a set of integers. If I consists of only 0 and 1, V is called a 3-D binary scene. In particular, V_k = (S_k, F), where

0272-1716/83/0800-0039$01.00 © 1983 IEEE

An earlier version of this article, "Display of 3-D Information in 3-D Digital Images: Computational Foundations and Medical Applications," appeared in Proc. Medcomp '82 (Philadelphia, Sept. 23-25, 1982), IEEE Computer Society Press, pp. 308-314.

August 1983 39

S_k = {v | 1 ≤ x ≤ X ∧ 1 ≤ y ≤ Y ∧ z = k}

is called the kth slice of V. A region of V is any subset of voxels in S.
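As an illustration (not taken from the original system; all names and sizes here are our own, purely for exposition), these definitions map directly onto a small Python sketch that stores F as a mapping from voxel coordinates to densities and extracts slices:

```python
# A 3-D digital scene V = (S, F): S is the set of voxel coordinates
# (x, y, z) with 1 <= x <= X, 1 <= y <= Y, 1 <= z <= Z, and F maps
# each voxel to an integer density.
X, Y, Z = 3, 3, 2

S = {(x, y, z) for x in range(1, X + 1)
               for y in range(1, Y + 1)
               for z in range(1, Z + 1)}

# A binary scene: density 1 inside a tiny "object," 0 elsewhere.
F = {v: (1 if v[0] == 2 and v[1] == 2 else 0) for v in S}

def slice_k(k):
    """The kth slice S_k: all voxels of S with z = k."""
    return {v for v in S if v[2] == k}

print(len(S))           # 18 voxels in the whole scene
print(len(slice_k(1)))  # 9 voxels per slice
```

A real scene would of course be read from scanner data; the point is only that a scene is nothing more than this pair of a coordinate set and a density mapping.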

In our applications, only certain regions of V constitute a medically meaningful object. The subset O of voxels in S, which constitutes such regions, is called a digital object, or simply an object in V.

Identification. Identification of the object of interest in V is a complex pattern recognition problem. A straightforward approach to it has an analogy in computer vision.

This approach consists of first segmenting V into regions of interest and then, from a knowledge of the structure of the object, identifying only those regions that, together, form the object.

In general, segmentation itself is a pattern classification problem: A voxel v is assigned to the region of interest only if a certain predicate P(g(t)) is true, where t is a feature vector associated with v and g(t) is an appropriate discriminant function used in classifying v. Two examples of such a feature vector are (1) the vector consisting of F(v), and (2) the vector consisting of F(v) and the gradient of F at v. The result of segmentation is, thus, set Q of voxels given by

Q = {v | v ∈ S ∧ P(g(t))}.

Thresholding. For this simple way of segmentation, P(g(t)) is t ∈ T, where t is the single-element feature vector, viz. density F(v), and T is a range of density values. If V is relatively noise free, and if structures having similar density values are not in close proximity, thresholding can effectively produce set Q such that the object of interest O is a connected component, or a union of such components, of Q. This is particularly true of the bony structures in X-ray CT images [2].

Under such circumstances, the object of interest can be identified by single voxels in the components constituting O. These voxels can be specified interactively by displaying a slice of V and selecting, with a graphics input device, a voxel within the component on this slice. We employ such an interactive technique for identifying the object of interest.

Interactive segmentation. In general, one cannot find a feature vector t and a decision rule g(t) such that O is a union of connected components of Q. However, it is always possible to specify a subset S' of S such that, when the segmentation operation is restricted to S', the resulting

Q' = {v | v ∈ S' ∧ P(g(t))}     (1)

includes a set of connected components that constitute O [3].

In this method, called interactive segmentation, S' is specified interactively slice by slice. Trivially, if S' = O, then Q' = O. This implies that O is determined entirely by interaction. Generally, a precise specification of S' is critical only in regions where segmentation based on the criterion P(g(t)) is ambiguous. In other regions, the discriminant g(t) successfully classifies voxels.

Representation. Having identified the set O of voxels that constitutes an object to be displayed, the next step is to represent O so as to facilitate further processing.

Computational simplicity and ease of interactive manipulation are the key considerations in determining the form of representation. We have developed cuberille and directed-contour representations.

Cuberille representation. In the cuberille representation [4], an object is represented as a set of cubes. The voxels in the 3-D digital image V are not generally cube-shaped, because the inter-slice interval (slice thickness) usually exceeds the size of a pixel on the slices. In fact, we make sure that at the time of segmentation, all voxels in the 3-D digital image are cube-shaped. We achieve this through linear interpolation of new slices such that the inter-slice interval for the new set of slices equals the pixel size. Thus, the set Q of voxels resulting from segmentation always conforms to the cuberille representation.
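The slice interpolation step can be sketched as follows, assuming slices are given as 2-D arrays of densities and that the inter-slice spacing is an integer multiple of the pixel size (a simplification of ours; real spacings need not divide evenly):

```python
def interpolate_slices(slices, spacing, pixel):
    """Linearly interpolate new slices so that the inter-slice
    interval equals the pixel size, giving cube-shaped voxels.
    `slices` is a list of 2-D arrays (lists of rows of densities)."""
    out = []
    n_between = round(spacing / pixel)     # new slices per original gap
    for a, b in zip(slices, slices[1:]):
        for i in range(n_between):
            t = i / n_between              # interpolation weight
            out.append([[(1 - t) * pa + t * pb
                         for pa, pb in zip(ra, rb)]
                        for ra, rb in zip(a, b)])
    out.append(slices[-1])
    return out

s0 = [[0.0, 0.0], [0.0, 4.0]]
s1 = [[0.0, 0.0], [0.0, 8.0]]
new = interpolate_slices([s0, s1], spacing=2.0, pixel=1.0)
print(len(new))      # 3 slices: the 2-unit gap filled at 1-unit steps
print(new[1][1][1])  # 6.0 — midway between 4.0 and 8.0
```

Interpolating densities before thresholding, rather than interpolating the binary result, is what lets the segmented set Q come out directly in cuberille form.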

Directed-contour representation. In the directed-contour representation [3,5], an object is represented by a set of directed contours called 8-contours. For a precise definition of an 8-contour, see Tuy and Udupa [5]. Intuitively, an 8-contour is a discrete analog of a non-self-intersecting closed curve in the plane.

The 8-contours are specified in each slice of a 3-D binary image such that the region of interest lies to the left of the 8-contours. In treating a given object O as a 3-D binary image V (such that F(v) = 1 for all v ∈ O, and F(v) = 0 if v ∈ S and v ∉ O), the O represented by a set C of 8-contours is the union of the regions in V represented by the contours in each of the slices of V.

We have developed completely automatic algorithms for deriving an 8-contour representation of an object from a given 3-D digital image [5]. An important property of the 8-contour representation is that it stores and permits quick access to information about both the borders and the internal structures of an object. As noted below, this is very useful for quick manipulation and display of objects.

Manipulation. The ability to manipulate objects and subsequently visualize them has great clinical significance. The part of the object that corresponds to, say, an abnormal growth, must be removed graphically just as a surgeon would remove it in an operation. Of course, the computer procedure involves no risk or additional scanning of the patient. In our terminology, manipulation means the act of subtracting a specified subset O_r of O from O. Here, we discuss two ways of specifying O_r.

Method 1. The first, and more general, method is based on the slice-by-slice technique of interactive segmentation described previously in Equation 1. On each slice of a given 3-D digital image V, the segmentation operation is restricted to a user-specified S' (which excludes O_r) such that the resulting set Q' includes a set of connected components constituting the manipulated object O' = O - O_r. This technique allows any part of the object to be removed. However, it lacks three-dimensional flexibility.


Method 2. The second method provides better three-dimensional interaction, as demonstrated in the following case.

First, a display of the surfaces of the object is produced on a display monitor. The form of interaction consists of intersecting the object by a plane perpendicular to the monitor screen. The part of the object in one of the half spaces constitutes O_r, and that in the other half constitutes O'. The directed-contour representation described previously is most appropriate for this form of manipulation, since the 3-D manipulation problem here is equivalent to a number of independent 2-D manipulation problems. The intersection of the plane with each slice is a digital line dividing the slice into two regions. Knowing this line and the set of 8-contours representing the object O to be manipulated, we can quickly determine the modified set of 8-contours on the current slice for the manipulated object [5]. This is made possible by a property of the directed-contour representation: it stores information about internal structures of the object in a readily available form.

The general idea outlined here can be extended to include more complex forms of manipulation.
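The core of this plane-cut operation, reduced to a voxel-set sketch (the article works at the 8-contour level; classifying voxel centres against the plane, as below, is our simplified stand-in), is just a half-space test:

```python
def cut_by_plane(O, normal, d):
    """Split object O (a set of voxel centre points) by the plane
    n . p = d: voxels strictly on the positive side form the removed
    part O_r; the rest form the manipulated object O'."""
    nx, ny, nz = normal
    O_r = {(x, y, z) for (x, y, z) in O if nx*x + ny*y + nz*z > d}
    return O - O_r, O_r

O = {(x, 1, 1) for x in range(1, 6)}        # a 5-voxel rod along x
O_new, O_r = cut_by_plane(O, (1, 0, 0), 3)  # cutting plane at x = 3
print(sorted(v[0] for v in O_new))  # [1, 2, 3]
print(sorted(v[0] for v in O_r))    # [4, 5]
```

Because the plane is perpendicular to the screen, its trace on each slice is a single digital line, so each slice can be processed independently, exactly the property the directed-contour representation exploits.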

Surface detection/formation and display

For the case of representing an object O as a set of voxels, a boundary surface B of the object is a connected set of the faces separating a voxel in O from a voxel not in O. It is possible to define adjacency of the faces in B in a consistent way, so that for each face f in B there are exactly two faces adjacent to f. Further, any given face in B is adjacent to two other faces in B [2,6].

In graph-theoretic terms, a boundary surface can be represented as a directed graph in which each node, representing a face f, has two outgoing arcs representing adjacency of two faces to f and two incoming arcs representing adjacency of f to two other faces. An important property of such a graph, which represents a boundary surface, is that it is strongly connected. Consequently, the graph has a spanning binary tree rooted at any given node in the graph. The boundary surface detection problem is thus translated into a binary tree traversal problem.

To commence surface detection, an initial face (representing the root of the binary tree) on the surface to be detected is specified. Even for surfaces made up of several hundred thousand faces, the auxiliary storage required by the surface detection algorithm hardly exceeds a couple of thousand memory locations.

Another approach. An entirely different approach to what can appropriately be called surface formation is based on the directed-contour representation of an object [3]. An important difference between this and the surface detection approach is that the former produces all the surfaces of the object, with no tracking of the faces.

Since the directed-contour representation has readily available border information of the object on each slice, the faces on the border of each slice lying perpendicular to the plane of the slice can be easily determined if the 8-contours on the slice are known. The faces lying parallel to the plane of the slice are determined by knowing the 8-contours on the previous and next slices. The algorithm is inherently slice-by-slice, and the computations done on one slice are independent of those done on others. This is a major computational difference between this method and the surface detection algorithm, which is basically sequential. Even for a sequential implementation, the algorithm is about six to eight times faster than the surface detection algorithm. An important drawback, however, is that in its present form, it cannot isolate connected surfaces.

Display of surfaces. For the display of 3-D information based on surfaces, we rely heavily on four computer-generated depth cues:

* hidden surface removal,
* shading,
* transparency effect, and
* motion parallax by rotation.

These are fairly established techniques in the computer graphics literature, but there is a major departure in the algorithms that generate these effects. The surfaces we deal with have very desirable properties [4,7,8], and this results in significant computational savings. Time and cost-effectiveness are important in determining the usefulness of display procedures for medical applications. In fact, these factors are critical even for an effective clinical evaluation of such procedures.

Hidden surface removal. A unit normal to a face in boundary surface B of an object represented either by a cuberille or by a set of directed contours has one of six possible values. The normal points in one of six directions: -x, +x, -y, +y, -z, or +z; the direction determines the value.

Based on the value of the unit normal, the faces in B can be classified into six groups. For a fixed viewing direction and for any given orientation of the surface, faces in at most three of the six groups are potentially visible [7]. Thus, at least three groups of faces can be discarded globally.

Our second result is characteristic of surfaces made up of faces of cubic voxels; it makes possible a fast z-buffer algorithm [4]. If two faces f1 and f2 lying on the same side of an observer located at P have P1 and P2 as their respective central points, and if PP1 < PP2, then for any points Q1 and Q2 in the faces f1 and f2, respectively, PQ1 < PQ2. This results in a great simplification: Each potentially visible face in B requires only one distance-from-observer calculation. Compare this with a surface composed of general polygonal elements where, in principle, every point must be examined for its distance from the observer.


An important drawback of z-buffer-type algorithms is that they are heavily I/O bound. This difficulty can be significantly reduced by partitioning the object space into independent blocks, such that all the auxiliary storage required to apply the z-buffer algorithm to each of these blocks separately can be accommodated in main memory [1]. If the partitioning is done cleverly, combining the results of applying the algorithm to the various blocks should require minimum additional work.

For the simple case of rotation about, say, the x-axis, assuming the viewing direction to be parallel to the z-axis, the object space can be partitioned by planes placed one voxel apart perpendicular to the x-axis. Clearly, faces in one layer (formed by a pair of adjacent planes) do not obscure faces in other layers. Within each layer, hidden surface removal can be done by using a one-line z-buffer [1].

Shading and transparency. In order to create an image that accurately depicts 3-D surfaces of the displayed object, each point on the display screen onto which points on visible faces project must be assigned a brightness or a shading value. The shading values assigned to all points of a face are assumed to be identical. The shading value assigned to face f is determined by

(1) the normal n to f,
(2) the distance d_f of the central point of f from the observer, and
(3) the normals n1, n2, n3, n4 of the four faces adjacent to f on the surface.

Here again are several important observations. A general expression of the shading value s(f) assigned to f can be written as

s(f) = g1(n, n1, n2, n3, n4) · g2(d_f).

We process the faces in the potentially visible groups in a group-by-group fashion. While dealing with faces in one group, n is fixed for all the faces in the group. Further, at each of the four edges of f, the face adjacent to f at that edge can have one of three possible orientations. Thus, in all, there are 3 × 3 × 3 × 3 = 81 possible configurations of the four neighboring faces for the faces in a particular group.

The shading computation is based on a face and its four neighbors. Through three-dimensional smoothing, it alleviates the staircase effect resulting from considering only the normal of the face [8]. Note that for any choice of the smoothing function g1, during processing of a particular group, g1(n, n1, n2, n3, n4) can assume one of only 81 possible values. These values of g1 can be precomputed and stored in a table before processing the faces in the group. Thus, essentially, the shading computation involves one multiplication by the value of g2(d_f) for each face f.

Transparency. To impart the transparency effect, we divide the surfaces into two groups: one consists of surfaces to be displayed as opaque and another consists of surfaces to be displayed as transparent. The images produced by hidden surface removal and shading are computed for the two groups separately. To bring about transparency, the two images are combined appropriately [9].

In the final image, the intensities of corresponding points in the two images are combined linearly only if the face on the transparent surface is nearer to the observer than the opaque surface. Otherwise, the intensity of the point in the image produced by the opaque surface is retained as its final intensity.

Surfaces of several hundred thousand faces require one to two minutes on a 32K-core minicomputer for hidden surface removal and shading to create one image. Our long-term objective is to fully exploit the inherent parallelism associated with the process of first partitioning the object space into independent blocks and then dealing with each block separately. We intend to combine this with directed-contour representation for surface formation and object manipulation and achieve real-time display and manipulation of objects in 3-D digital scenes.

Six projects

Six current projects at the Hospital of the University of Pennsylvania illustrate the concepts described above.

Bone graft application. The aim of the first project is noninvasive quantitation of bone graft volumes and determination of the amount of graft resorption after craniofacial surgery. This project is being pursued in cooperation with Dr. J. E. Zins of the department of surgery.


Figure 1 shows a CT slice from a rabbit's skull. (A rabbit model has been used in this project.) Using our interactive segmentation program, Zins identified the region in which the graft lies; this is set S' of Equation 1. Within range T of density values (for bone), we have specified the Q' of Equation 1, using property F(v) ∈ T. The location of elements of Q' in two consecutive slices is indicated at the bottom of Figure 2.

The same rabbit was rescanned a few months later. Due to bone resorption, the Q' determined by the same S' and T is much smaller, as indicated in Figure 3. The volume enclosed by Q' can, of course, be easily calculated.

These techniques will eventually provide useful tools in preoperative planning and postoperative follow-up in craniofacial surgery.

Joint disease application. The second project, directed toward the study of joint disease, is being done in conjunction with D. Roberts, B. Hirsch, and C. Ram of the hospital's clinical research center. Currently, we are using a horse model and studying the joint commonly known as the fetlock, which corresponds to the first knuckle joint in the human hand.

The two three-dimensional displays in Figure 4 are of different surfaces. The one on the left was created by selecting the set Q purely by thresholding. Thus, all the bone-containing voxels in S are included in Q. This causes other bones to hide the interesting internal surface of the joint itself. With our interactive segmentation techniques, these other bones can be removed so that the surface of interest is displayed, as shown on the right of Figure 4. The same method can, of course, be used in surgical planning to indicate the effect of a planned procedure.

NMR imaging application. In the third project we are applying our procedures to data obtained not from X-ray CT (as in all other projects discussed here), but from nuclear magnetic resonance (NMR) imaging [10]. This project is being pursued with L. Axel of the radiology department, using the NMR machine built by P. A. Bottomley and W. A. Edelstein and their co-workers at the General Electric Company.

NMR imaging shows a strong dependence on blood flow in large vessels, so that the lumens of the vessels and cardiac chambers appear relatively free of signal. Figure 5 illustrates how 3-D display techniques can demonstrate the shapes and spatial relationships of cardiac structures and great vessels.

Figure 1. CT slice of a rabbit's skull.

Figure 2. The location of elements of Q' in two consecutive slices.

Figure 3. The effects of bone resorption.

Figure 4. Two views of a fetlock joint.

Figure 5. NMR scan images.


These images were produced from an NMR scan of a live dog. We have separately identified the voxels containing the myocardium (surface displayed top left) and voxels containing blood (surface displayed top right). The two surfaces are displayed simultaneously on the bottom. On the left, they are displayed as if they were both opaque; on the right, opaque lumens are displayed in conjunction with a transparent myocardium. The fact that such images have been obtained without ionizing radiation and without contrast-medium injection suggests that NMR might become an important method of cardiovascular imaging for diagnosis and treatment planning.

Radiation therapy application. In the fourth project, we are investigating the use of 3-D display for truly three-dimensional radiation therapy treatment planning. Figure 6 is an example, based on data provided by P. A. Findlay of the radiation therapy department.

A patient had a radioactive implant embedded in the brain. Figure 6 shows two views of the composite structure, which consists of the opaque implant and transparent skull. Such displays can include tumor surfaces and radiation isodose surfaces for both completed and planned radiation therapy procedures.

Diagnostic radiology applications. Our fifth example comes from diagnostic radiology.

A child, only a few weeks old, had a type of dwarfism and paralysis. The examining radiologist, H. Goldberg, wished to evaluate spinal disalignment and spinal canal bony compression as possible causes of the paralysis. He examined many three-dimensional views of different vertebra segments.

Figure 7 shows two such views near the top of the cervical spine. These are views of the right and left sides of C1, the first cervical vertebra, and C2, the second cervical vertebra, as seen from the direction of the spinal canal. The vertebra segments from the other side, which would hide the canal, are removed. (This figure was reproduced from Polaroid shots; its quality is not quite as good as that of the others, which were photographed directly from our display screen.)

Figure 6. Two views of transparent skull and opaque implant.

Figure 7. First (left) and second (right) cervical vertebrae.

Figure 8. Two views of the skull of a patient with Hurler's syndrome.

Evaluation application. The sixth project is concerned with evaluation of the usefulness of 3-D display for surgical planning. It was conducted in cooperation with our department of surgery, especially L. A. Whitaker and J. Bevivino.

Figure 8 shows two views of the skull of one of the patients studied. This patient has Hurler's syndrome, one of the craniofacial dysostoses. This includes supraorbital recession, telecanthus, bilateral exorbitism, and midface hypoplasia. We intend to develop our 3-D display system to the point that the computer can accurately simulate surgery, so that the most promising procedure can be confidently identified prior to actual surgery.

Acknowledgments

Our research is supported by grants HL28438 and HL4664 from the National Heart, Lung and Blood Institute. We would also like to thank Robert Lehman for preparing the manuscript.

References

1. G. T. Herman, R. A. Reynolds, and J. K. Udupa, "Computer Techniques for the Representation of Three-Dimensional Data on a Two-Dimensional Display," Proc. SPIE, Vol. 367, 1982, pp. 3-14.

2. E. Artzy, G. Frieder, and G. T. Herman, "The Theory, Design, Implementation and Evaluation of a Surface Detection Algorithm," Computer Graphics and Image Processing, Vol. 15, Jan. 1981, pp. 1-24.

3. J. K. Udupa, "Interactive Segmentation and Boundary Surface Formation for Three-Dimensional Digital Images," Computer Graphics and Image Processing, Vol. 18, Mar. 1982, pp. 213-235.

4. G. T. Herman and H. K. Liu, "Three-Dimensional Display of Human Organs from Computed Tomograms," Computer Graphics and Image Processing, Vol. 9, Jan. 1979, pp. 1-21.

5. H. K. Tuy and J. K. Udupa, "Representation, Manipulation and Display of 3-D Discrete Scenes," Technical Report MIPG66, Medical Image Processing Group, Dept. of Radiology, University of Pennsylvania, Philadelphia, July 1982.

6. G. T. Herman and D. Webster, "Surfaces of Organs in Discrete Three-Dimensional Space," in Mathematical Aspects of Computerized Tomography, G. T. Herman and F. Natterer, eds., Springer-Verlag, Berlin, 1980, pp. 204-224.

7. E. Artzy, "Display of Three-Dimensional Information in Computed Tomography," Computer Graphics and Image Processing, Vol. 9, Feb. 1979, pp. 196-198.

8. G. T. Herman and J. K. Udupa, "Display of 3-D Discrete Surfaces," Proc. SPIE, Vol. 283, 1981, pp. 90-97.

9. G. T. Herman, J. K. Udupa, D. M. Kramer, P. C. Lauterbur, A. M. Rudin, and J. S. Schneider, "The Three-Dimensional Display of NMR Imaging," Proc. SPIE, Vol. 273, 1981, pp. 35-40.

10. I. L. Pykett, "NMR Imaging in Medicine," Sci. Am., Vol. 246, May 1982, pp. 78-88.


Gabor T. Herman is a professor in and chief of the medical imaging section of the Department of Radiology at the Hospital of the University of Pennsylvania. From 1967-69, he was an instructor with IBM (UK) Ltd. From 1969-81, he taught in the Department of Computer Science, State University of New York at Buffalo; in 1976, he became director of the medical imaging processing group there. During 1975-76, he was a visiting professor in the biophysical sciences unit at the Mayo Clinic in Rochester, Minnesota. He has authored over 150 scientific publications, mostly on biomedical computer science. He has written and edited several books.

Herman received an MS degree in engineering science from the University of California in 1966 and a PhD degree in mathematics from the University of London in 1968. He is a member of the American Association of Physicists in Medicine, ACM, the British Computer Society, the Mathematical Programming Society, the Radiological Society of North America, the American Society of Neuro-Imaging, SIAM, the Society for Photo-Optical Instrumentation Engineers, and the IEEE.


Jayaram K. Udupa is an adjunct assistant professor and director of the medical image processing group with the Department of Radiology at the University of Pennsylvania, Philadelphia. From 1976-78, he was a scientific officer in the Electrical Engineering Department of the Indian Institute of Science, Bangalore. From 1978-81, he was a research assistant professor with the Computer Science Department at the State University of New York at Buffalo. His research interests include computer graphics, image and signal processing for biomedical applications, and pattern recognition.

Udupa received his PhD in computer science from the Indian Institute of Science, Bangalore, in 1976, with a medal for best research. He is a member of the IEEE and the National Computer Graphics Association.
