Publications

2004
Finding Optimal Views for 3D Face Shape Modeling
Lee J, Moghaddam B, Pfister H, Machiraju R. Finding Optimal Views for 3D Face Shape Modeling. In: FGR '04: IEEE International Conference on Automatic Face and Gesture Recognition. IEEE; 2004. p. 31-36. Abstract:
A fundamental problem in multi-view 3D face modeling is the determination of the set of optimal views required for accurate 3D shape estimation for a generic face. There is no analytical solution to this problem; instead, (partial) solutions require (near) exhaustive combinatorial search, hence the inherent computational difficulty. We build on our previous modeling framework, which uses an efficient contour-based silhouette method, and extend it by aggressive pruning of the view-sphere with view clustering and various imaging constraints. A multi-view optimization search is performed using both model-based (eigenheads) and data-driven (visual hull) methods, yielding comparable best views. These constitute the first reported set of optimal views for silhouette-based 3D face shape capture and provide useful empirical guidelines for the design of 3D face recognition systems.
Paper
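For intuition, the combinatorial search described above reduces to scoring small view subsets drawn from the pruned view sphere. A minimal sketch, where `reconstruction_error` is a hypothetical stand-in for the paper's silhouette-based fitting error (model-based or data-driven):

```python
# Minimal sketch of the combinatorial view-selection search; the error
# callback is a placeholder for the silhouette-based fitting error.
from itertools import combinations

def best_view_set(candidate_views, k, reconstruction_error):
    """Return the k-view subset of the pruned view sphere that
    minimizes the reconstruction error."""
    best_subset, best_err = None, float("inf")
    for subset in combinations(candidate_views, k):
        err = reconstruction_error(subset)  # fit a face from these views
        if err < best_err:
            best_subset, best_err = subset, err
    return best_subset, best_err
```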
Hardware-Accelerated Adaptive EWA Volume Splatting
Chen W, Ren L, Zwicker M, Pfister H. Hardware-Accelerated Adaptive EWA Volume Splatting. In: Proceedings of IEEE Visualization. 2004. p. 67-74. Abstract:
We present a hardware-accelerated adaptive EWA (elliptical weighted average) volume splatting algorithm. EWA splatting combines a Gaussian reconstruction kernel with a low-pass image filter for high image quality without aliasing artifacts or excessive blurring. We introduce a novel adaptive filtering scheme to reduce the computational cost of EWA splatting. We show how this algorithm can be efficiently implemented on modern graphics processing units (GPUs). Our implementation includes interactive classification and fast lighting. To accelerate the rendering we store splat geometry and 3D volume data locally in GPU memory. We present results for several rectilinear volume datasets that demonstrate the high image quality and interactive rendering speed of our method.
Paper
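The adaptive scheme can be pictured as choosing which factor of the resampling filter to keep. A rough sketch under assumed thresholds (`eps` is illustrative, not the paper's exact criterion):

```python
import numpy as np

def adaptive_resampling_covariance(J, V, sigma_lp=1.0, eps=0.1):
    """J: 2x3 local affine Jacobian of the projection; V: 3x3 covariance
    of the Gaussian reconstruction kernel. Returns the 2x2 covariance of
    the screen-space filter to rasterize."""
    V2 = J @ V @ J.T                      # projected reconstruction kernel
    lowpass = sigma_lp**2 * np.eye(2)
    eig = np.linalg.eigvalsh(V2)
    if eig.max() < eps * sigma_lp**2:     # minification: low-pass dominates
        return lowpass
    if eig.min() > sigma_lp**2 / eps:     # magnification: kernel dominates
        return V2
    return V2 + lowpass                   # general case: full EWA filter
```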
Hardware-Accelerated Volume Rendering
Pfister H. Hardware-Accelerated Volume Rendering. In: Hansen CD, Johnson CR, editors. The Visualization Handbook. Elsevier; 2004. Abstract:
In this chapter, we discuss methods for hardware-accelerated rendering of rectilinear volume datasets: ray casting, texture slicing, shear-warp and shear-image rendering, and splatting. We give sufficient background to enable a novice to understand the details and tradeoffs of each algorithm. As much as possible, we refer to the literature for more information.
Paper
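As a concrete reference for the ray-casting discussion, the standard front-to-back compositing loop with early ray termination looks like this (a generic sketch, not code from the chapter):

```python
import numpy as np

def composite_ray(samples, termination=0.99):
    """samples: iterable of (rgb, alpha) pairs ordered front to back
    along one viewing ray; returns composited color and opacity."""
    color, alpha = np.zeros(3), 0.0
    for rgb, a in samples:
        color += (1.0 - alpha) * a * np.asarray(rgb, dtype=float)
        alpha += (1.0 - alpha) * a
        if alpha >= termination:          # early ray termination
            break
    return color, alpha
```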
Moderne Volumenvisualisierung (Modern Volume Visualization)
Pfister H. Moderne Volumenvisualisierung (Modern Volume Visualization). Information Technology 2004;(3):117-123. Abstract:
Over the last decade, volume rendering has become an invaluable visualization technique for a wide variety of applications. In this paper, we discuss modern, hardware-accelerated methods for the visualization of volume datasets: ray casting, texture slicing, shear-warp and shear-image rendering, and splatting.
Paper
Multiview User Interfaces with an Automultiscopic Display
Matusik W, Forlines C, Pfister H. Multiview User Interfaces with an Automultiscopic Display. In: AVI '08: Proceedings of the Working Conference on Advanced Visual Interfaces. 2008. p. 363-366. Abstract:
Automultiscopic displays show 3D stereoscopic images that can be viewed from any viewpoint without special glasses. These displays are becoming widely available and affordable. In this paper, we describe how an automultiscopic display, built for viewing 3D images, can be repurposed to display 2D interfaces that appear differently from different points-of-view. For single-user applications, point-of-view becomes a means of input and a user is able to reveal different views of an application by simply moving their head left and right. For multi-user applications, a single-display application can show each member of the group a different variation of the interface. We outline three types of multi-view interfaces and illustrate each with example applications.
Paper
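Mechanically, the repurposing amounts to rendering a different 2D interface into each of the display's discrete view zones and interleaving the results into one frame. A minimal sketch assuming a simple column-interleaved, lenticular-style layout (a real display's interleaving pattern will differ):

```python
import numpy as np

def interleave_views(view_images):
    """view_images: list of (H, W, 3) arrays, one UI variant per view
    zone; returns the column-interleaved frame sent to the display."""
    n = len(view_images)
    frame = np.empty_like(view_images[0])
    for x in range(frame.shape[1]):
        frame[:, x] = view_images[x % n][:, x]  # column x -> view x mod n
    return frame
```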
Unconstrained Free-Viewpoint Video Encoding
Lamboray E, Wurmlin S, Waschbusch M, Gross M, Pfister H. Unconstrained Free-Viewpoint Video Encoding. IEEE; 2004. p. 3261-3264. Abstract:
In this paper, we present a coding framework addressing image-space compression for free-viewpoint video. Our framework is based on time-varying 3D point samples which represent real-world objects. The 3D point samples are obtained after a geometrical reconstruction from multiple pre-recorded video sequences and thus allow for arbitrary viewpoints during playback. The encoding of the data is performed as an off-line process and is not time-critical. The decoding, however, must allow for real-time rendering of the dynamic 3D data. We introduce a compression framework which encodes multiple point attributes like depth and color into progressive streams. The reference data structure is aligned on the original camera input images and thus allows for easy view-dependent decoding. A novel differential coding approach permits random access in constant time throughout the entire data set and thus enables arbitrary viewpoint trajectories in both time and space.
Paper
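Constant-time random access of this kind requires that every encoded block be locatable without scanning the stream. One way to get that property is a plain byte-offset table, sketched below; this is an assumed layout for illustration, not the paper's bitstream format:

```python
import struct

def pack_stream(chunks):
    """Concatenate encoded chunks behind a table of (n+1) byte offsets."""
    offsets, payload = [0], b""
    for chunk in chunks:
        payload += chunk
        offsets.append(len(payload))
    header = struct.pack("<Q", len(chunks))
    table = struct.pack(f"<{len(offsets)}Q", *offsets)
    return header + table + payload

def read_chunk(stream, i):
    """O(1) random access: two table lookups, one slice."""
    (n,) = struct.unpack_from("<Q", stream, 0)
    start, end = struct.unpack_from("<2Q", stream, 8 + 8 * i)
    base = 8 + 8 * (n + 1)
    return stream[base + start : base + end]
```

For example, `read_chunk(pack_stream([b"aa", b"bbb"]), 1)` returns `b"bbb"` without touching the first chunk.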
Progressively-Refined Reflectance Functions from Natural Illumination
Matusik W, Loper M, Pfister H. Progressively-Refined Reflectance Functions from Natural Illumination. In: EGSR '04: Proceedings of the Fifteenth Eurographics Conference on Rendering Techniques. 2004. p. 299-308. Abstract:
In this paper we present a simple, robust, and efficient algorithm for estimating reflectance fields (i.e., a description of the transport of light through a scene) for a fixed viewpoint using images of the scene under known natural illumination. Our algorithm treats the scene as a black-box linear system that transforms an input signal (the incident light) into an output signal (the reflected light). The algorithm is hierarchical – it progressively refines the approximation of the reflectance field with an increasing number of training samples until the required precision is reached. Our method relies on a new representation for reflectance fields. This representation is compact, can be progressively refined, and quickly computes the relighting of scenes with complex illumination. Our representation and the corresponding algorithm allow us to efficiently estimate the reflectance fields of scenes with specular, glossy, refractive, and diffuse elements. The method also handles soft and hard shadows, inter-reflections, caustics, and subsurface scattering. We verify our algorithm and representation using two measurement setups and several scenes, including an outdoor view of the city of Cambridge.
Paper
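The black-box model is literal: stacking the incident-light vectors and the observed images row-wise gives a linear system B = L T^T, so the transport matrix T can be recovered per output pixel by least squares. A minimal dense sketch (the ridge term and all names are assumptions; the paper instead refines a hierarchical representation progressively):

```python
import numpy as np

def estimate_transport(L, B, reg=1e-3):
    """L: (n_samples, n_lights) incident illumination per measurement;
    B: (n_samples, n_pixels) observed images. Returns T of shape
    (n_pixels, n_lights) such that B ~= L @ T.T."""
    A = L.T @ L + reg * np.eye(L.shape[1])  # regularized normal equations
    return np.linalg.solve(A, L.T @ B).T

def relight(T, light):
    """Predict the fixed-viewpoint image under new illumination."""
    return T @ light
```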
2003
3D Reconstruction Using Labeled Image Regions
Ziegler R, Matusik W, Pfister H, McMillan L. 3D Reconstruction Using Labeled Image Regions. 2003. p. 248-259. Paper | Images
A Data-Driven Reflectance Model
Matusik W, Pfister H, Brand M, McMillan L. A Data-Driven Reflectance Model. In: ACM Transactions on Graphics (Proc. ACM SIGGRAPH). 2003. p. 759-770. Paper
Model-Based 3D Face Capture with Shape-from-Silhouettes
Moghaddam B, Lee J, Pfister H, Machiraju R. Model-Based 3D Face Capture with Shape-from-Silhouettes. IEEE; 2003. p. 20-27. Paper
Silhouette-Based 3D Face Shape Recovery
Lee J, Moghaddam B, Pfister H, Machiraju R. Silhouette-Based 3D Face Shape Recovery. 2003. p. 21-30. Paper | Images
Efficient Isotropic BRDF Measurement
Matusik W, Pfister H, Brand M, McMillan L. Efficient Isotropic BRDF Measurement. 2003. p. 241-248. Paper
Opacity Light Fields: Interactive Rendering of Surface Light Fields with View-Dependent Opacity
Vlasic D, Pfister H, Molinov S, Grzeszczuk R, Matusik W. Opacity Light Fields: Interactive Rendering of Surface Light Fields with View-Dependent Opacity. 2003. p. 65-74. Images
2002
Acquisition and Rendering of Transparent and Refractive Objects
Matusik W, Pfister H, Ngan A, Ziegler R, McMillan L. Acquisition and Rendering of Transparent and Refractive Objects. In: Eurographics Workshop on Rendering. 2002. p. 277-288. Abstract:
This paper introduces a new image-based approach to capturing and modeling highly specular, transparent, or translucent objects. We have built a system for automatically acquiring high quality graphical models of objects that are extremely difficult to scan with traditional 3D scanners. The system consists of turntables, a set of cameras and lights, and monitors to project colored backdrops. We use multi-background matting techniques to acquire alpha and environment mattes of the object from multiple viewpoints. Using the alpha mattes we reconstruct an approximate 3D shape of the object. We use the environment mattes to compute a high-resolution surface reflectance field. We also acquire a low-resolution surface reflectance field using the overhead array of lights. Both surface reflectance fields are used to relight the objects and to place them into arbitrary environments. Our system is the first to acquire and render transparent and translucent 3D objects, such as a glass of beer, from arbitrary viewpoints under novel illumination.
Paper
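The multi-background matting step has a compact closed form: a pixel photographed over two known backdrops B1 and B2 satisfies C_i = F + (1 - alpha) * B_i, so alpha follows from the difference of the two photographs. A sketch of this standard triangulation-matting computation (names assumed):

```python
import numpy as np

def two_background_alpha(C1, C2, B1, B2, eps=1e-6):
    """C1, C2: float images of the object over known backdrops B1, B2.
    Solves C1 - C2 = (1 - alpha) * (B1 - B2) per pixel, least-squares
    across the color channels."""
    num = ((C1 - C2) * (B1 - B2)).sum(axis=-1)
    den = ((B1 - B2) ** 2).sum(axis=-1)
    alpha = 1.0 - num / np.maximum(den, eps)
    return np.clip(alpha, 0.0, 1.0)
```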
EWA Splatting
Zwicker M, Pfister H, van Baar J, Gross M. EWA Splatting. IEEE Transactions on Visualization and Computer Graphics 2002;8(3):223-238. Abstract:
In this paper, we present a framework for high quality splatting based on elliptical Gaussian kernels. To avoid aliasing artifacts, we introduce the concept of a resampling filter, combining a reconstruction kernel with a low-pass filter. Because of the similarity to Heckbert’s EWA (elliptical weighted average) filter for texture mapping, we call our technique EWA splatting. Our framework allows us to derive EWA splat primitives for volume data and for point-sampled surface data. It provides high image quality without aliasing artifacts or excessive blurring for volume data and, additionally, features anisotropic texture filtering for point-sampled surfaces. It also handles nonspherical volume kernels efficiently; hence, it is suitable for regular, rectilinear, and irregular volume datasets. Moreover, our framework introduces a novel approach to compute the footprint function, facilitating efficient perspective projection of arbitrary elliptical kernels at very little additional cost. Finally, we show that EWA volume reconstruction kernels can be reduced to surface reconstruction kernels. This makes our splat primitive universal in rendering surface and volume data.
Paper
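The heart of the resampling filter is a single covariance sum: project the reconstruction kernel through the local affine approximation of the viewing transformation, then add the low-pass filter. A minimal numeric sketch of rasterizing the resulting elliptical footprint (all names illustrative):

```python
import numpy as np

def ewa_footprint(center, J, V, sigma_lp=1.0, cutoff=3.0):
    """center: 2D splat position; J: 2x3 affine Jacobian; V: 3x3 kernel
    covariance. Returns {(x, y): weight} for the elliptical footprint."""
    cov = J @ V @ J.T + sigma_lp**2 * np.eye(2)  # resampling covariance
    cov_inv = np.linalg.inv(cov)
    r = int(np.ceil(cutoff * np.sqrt(np.linalg.eigvalsh(cov).max())))
    cx, cy = int(round(center[0])), int(round(center[1]))
    weights = {}
    for y in range(cy - r, cy + r + 1):
        for x in range(cx - r, cx + r + 1):
            d = np.array([x, y], float) - center
            q = d @ cov_inv @ d                  # squared Mahalanobis dist
            if q < cutoff**2:                    # elliptical cutoff
                weights[(x, y)] = np.exp(-0.5 * q)
    return weights
```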
Image-Based 3D Photography using Opacity Hulls
Matusik W, Pfister H, Beardsley P, Ngan A, Ziegler R, McMillan L. Image-Based 3D Photography using Opacity Hulls. In: ACM Transactions on Graphics (Proc. ACM SIGGRAPH). 2002. p. 427-437. Abstract:
We have built a system for acquiring and displaying high quality graphical models of objects that are impossible to scan with traditional scanners. Our system can acquire highly specular and fuzzy materials, such as fur and feathers. The hardware set-up consists of a turntable, two plasma displays, an array of cameras, and a rotating array of directional lights. We use multi-background matting techniques to acquire alpha mattes of the object from multiple viewpoints. The alpha mattes are used to construct an opacity hull. The opacity hull is a new shape representation, defined as the visual hull of the object with view-dependent opacity. It enables visualization of complex object silhouettes and seamless blending of objects into new environments. Our system also supports relighting of objects with arbitrary appearance using surface reflectance fields, a purely image-based appearance representation. Our system is the first to acquire and render surface reflectance fields under varying illumination from arbitrary viewpoints. We have built three generations of digitizers with increasing sophistication. In this paper, we present our results from digitizing hundreds of models.
Paper
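The geometric backbone remains silhouette carving: a point belongs to the hull only if no alpha matte sees it as empty, after which the view-dependent opacities are attached to the resulting surface. A carving sketch, where `project` is an assumed camera-projection callback:

```python
import numpy as np

def carve_visual_hull(voxel_centers, mattes, project, thresh=0.05):
    """voxel_centers: (N, 3) array; mattes: list of 2D alpha arrays;
    project(points, i) -> (N, 2) integer-roundable (x, y) pixel coords
    in matte i. Returns the voxels inside the hull."""
    inside = np.ones(len(voxel_centers), dtype=bool)
    for i, matte in enumerate(mattes):
        px = np.round(project(voxel_centers, i)).astype(int)
        h, w = matte.shape
        valid = ((px[:, 0] >= 0) & (px[:, 0] < w) &
                 (px[:, 1] >= 0) & (px[:, 1] < h))
        alpha = np.zeros(len(voxel_centers))
        alpha[valid] = matte[px[valid, 1], px[valid, 0]]
        inside &= alpha > thresh      # carve away voxels seen as empty
    return voxel_centers[inside]
```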
Object-Space EWA Splatting: A Hardware-Accelerated Approach to High-Quality Point Rendering
Ren L, Pfister H, Zwicker M. Object-Space EWA Splatting: A Hardware-Accelerated Approach to High-Quality Point Rendering. Computer Graphics Forum 2002;21:461-470. Abstract:
Elliptical weighted average (EWA) surface splatting is a technique for high quality rendering of point-sampled 3D objects. EWA surface splatting renders water-tight surfaces of complex point models with high quality, anisotropic texture filtering. In this paper we introduce a new multi-pass approach to perform EWA surface splatting on modern PC graphics hardware, called object-space EWA splatting. We derive an object-space formulation of the EWA filter, which is amenable to acceleration by conventional triangle-based graphics hardware. We describe how to implement the object-space EWA filter using a two-pass rendering algorithm. In the first rendering pass, visibility splatting is performed by shifting opaque surfel polygons backward along the viewing rays, while in the second rendering pass view-dependent EWA prefiltering is performed by deforming texture-mapped surfel polygons.
Paper
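A software analogue of the two passes, under an assumed data layout: pass one writes a depth buffer pushed back by a small epsilon, so pass two can additively accumulate exactly those splats lying on the visible surface and then normalize:

```python
import numpy as np

def two_pass_splat(splats, w, h, eps=0.01):
    """splats: list of (x, y, depth, rgb, weight) screen-space samples
    with integer pixel coordinates."""
    depth = np.full((h, w), np.inf)
    for x, y, z, _, _ in splats:          # pass 1: visibility splatting
        depth[y, x] = min(depth[y, x], z + eps)
    color = np.zeros((h, w, 3))
    wsum = np.zeros((h, w))
    for x, y, z, rgb, wgt in splats:      # pass 2: blend visible splats
        if z <= depth[y, x]:
            color[y, x] += wgt * np.asarray(rgb, dtype=float)
            wsum[y, x] += wgt
    mask = wsum > 0                       # final normalization
    color[mask] /= wsum[mask][:, None]
    return color
```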
2001
EWA Volume Splatting
Zwicker M, Pfister H, van Baar J, Gross M. EWA Volume Splatting. In: Proceedings of IEEE Visualization. 2001. p. 29-36. Abstract:
We present a novel framework for direct volume rendering using a splatting approach based on elliptical Gaussian kernels. To avoid aliasing artifacts, we introduce the concept of a resampling filter combining a reconstruction with a low-pass kernel. Because of the similarity to Heckbert's EWA (elliptical weighted average) filter for texture mapping we call our technique EWA volume splatting. It provides high image quality without aliasing artifacts or excessive blurring even with non-spherical kernels. Hence it is suitable for regular, rectilinear, and irregular volume data sets. Moreover, our framework introduces a novel approach to compute the footprint function. It facilitates efficient perspective projection of arbitrary elliptical kernels at very little additional cost. Finally, we show that EWA volume reconstruction kernels can be reduced to surface reconstruction kernels. This makes our splat primitive universal in reconstructing surface and volume data.
Paper
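The reduction in the last sentence is Gaussian marginalization: integrating a 3D Gaussian kernel along the viewing direction leaves a 2D Gaussian whose covariance is the x-y block of the camera-space covariance. A sketch assuming orthographic viewing (perspective adds a local affine Jacobian):

```python
import numpy as np

def volume_kernel_footprint(V_obj, R, sigma_lp=1.0):
    """V_obj: 3x3 kernel covariance in object space; R: 3x3 rotation
    from object to camera space (viewing ray along +z). Returns the 2x2
    covariance of the screen-space footprint."""
    V_cam = R @ V_obj @ R.T
    V_2d = V_cam[:2, :2]            # marginal over z keeps the x-y block
    return V_2d + sigma_lp**2 * np.eye(2)   # add the low-pass filter
```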
Surface Splatting
Zwicker M, Pfister H, van Baar J, Gross M. Surface Splatting. In: ACM Transactions on Graphics (Proc. ACM SIGGRAPH). 2001. p. 371-378. Abstract:
Modern laser range and optical scanners need rendering techniques that can handle millions of points with high resolution textures. In this project we developed a point rendering and texture filtering technique called surface splatting which directly renders opaque and transparent surfaces from point clouds without connectivity. It is based on a novel screen space formulation of the Elliptical Weighted Average (EWA) filter. We extend the texture resampling framework of Heckbert to irregularly spaced point samples. To render the points, we develop a surface splat primitive that implements the screen space EWA filter. Moreover, we show how to optimally sample image and procedural textures to irregular point data during pre-processing. We also compare the optimal algorithm with a more efficient view-independent EWA pre-filter. Surface splatting makes the benefits of EWA texture filtering available to point-based rendering. It provides high quality anisotropic texture filtering, hidden surface removal, edge anti-aliasing, and order-independent transparency.
Paper
2000
Fast Re-Rendering of Volume and Surface Graphics by Depth, Color, and Opacity Buffering
Bhalerao A, Pfister H, Halle M, Kikinis R. Fast Re-Rendering of Volume and Surface Graphics by Depth, Color, and Opacity Buffering. Medical Image Analysis 2000;4:235-251. Abstract:
A method for quickly re-rendering volume data consisting of several distinct materials and intermixed with moving geometry is presented. The technique works by storing depth, color, and opacity information, to a given approximation, which facilitates accelerated rendering of fixed views at moderate storage overhead without re-scanning the entire volume. Storing information in the ray direction (what we have called super-z depth buffering) allows rapid transparency and color changes of materials, position changes of sub-objects, explicit handling of regions of overlap, and the intermixing of separately rendered geometry. The rendering quality can be traded off against the relative storage cost, and we present an empirical analysis of output error together with typical figures for its storage complexity. The method has been applied to visualization of medical image data for surgical planning and guidance, and presented results include typical clinical data. We discuss the implications of our method for haptic (or tactile) rendering systems, such as for surgical simulation, and present preliminary results of rendering polygonal objects in the volume-rendered scene.
Paper
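The speedup comes from what the buffer retains: if each pixel keeps its depth-sorted fragments tagged by material, changing a material's color or opacity only re-runs a per-pixel composite instead of a full volume render. A minimal sketch with assumed field names:

```python
def recomposite(fragments, opacity, color):
    """fragments: one pixel's list of (depth, material_id) entries;
    opacity, color: current per-material settings. Performs front-to-back
    over-compositing of the stored fragment list."""
    r = g = b = alpha = 0.0
    for depth, mat in sorted(fragments):  # front to back by depth
        a = opacity[mat] * (1.0 - alpha)
        cr, cg, cb = color[mat]
        r, g, b = r + a * cr, g + a * cg, b + a * cb
        alpha += a
        if alpha > 0.99:                  # nothing behind is visible
            break
    return (r, g, b), alpha
```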
