Past projects

GPU Computing

The GPU has evolved into a highly parallel multiprocessor over recent decades. Current GPUs support high floating-point precision and are fully programmable through high-level languages such as NVIDIA CUDA and OpenCL. Thanks to this growing computational power and flexibility, GPUs have become a powerful platform for general-purpose high-performance computing. We have developed several GPU-friendly parallel algorithms, such as distance transforms and PDE-based image processing, and applied these methods to neuroscience applications.

In this project, we focus on developing GPU algorithms to tackle computationally expensive scientific computing problems.
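To give a feel for why such problems map well onto GPUs, here is a minimal NumPy sketch of one Jacobi step of PDE-based image smoothing (isotropic diffusion). Every pixel update depends only on its four neighbors, so on a GPU each pixel can be handled by an independent thread. The function name, parameters, and periodic boundary handling are illustrative assumptions, not our CUDA implementation.

import numpy as np

def heat_step(img, dt=0.2):
    # One Jacobi step of heat-equation smoothing. np.roll wraps around
    # (periodic boundaries), chosen here for brevity only.
    up    = np.roll(img, -1, axis=0)
    down  = np.roll(img,  1, axis=0)
    left  = np.roll(img, -1, axis=1)
    right = np.roll(img,  1, axis=1)
    laplacian = up + down + left + right - 4.0 * img
    return img + dt * laplacian

# Example: smooth a noisy image by iterating the per-pixel update.
img = np.random.rand(256, 256)
for _ in range(50):
    img = heat_step(img)

The same data-parallel structure, one thread per pixel with neighbor reads, is what makes PDE solvers of this kind GPU-friendly.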

Sponsored by NVIDIA, Microsoft, NSF, NIH

Team members: Won-Ki Jeong; Amanda Peters; Hanspeter Pfister

Visualizing Biological Data

With the advent of high-throughput technologies, biology is experiencing a data revolution. What started as the challenge to sequence the human genome has grown into the need to compare thousands of genomes and to understand DNA sequences in the context of many other types of measured biological data. Biologists use visualizations in their everyday workflow to help them explore and understand this massive amount of data. In this research, we work closely with biologists to design visualization tools that support efficient scientific inquiry and provide new biological insights.

We work closely with biologists working in the field of genomics and systems biology to design visualization tools that enable complex, exploratory data analysis.

Sponsored by NSF/CRA CIFellows

Team members: Miriah Meyer; Hanspeter Pfister; Tamara Munzner

Image and Video Compositing

We are building tools that analyze the visual appearance of images and videos and allow users to easily create photo-realistic composites. Merging images and videos into high-quality composites is a difficult problem; even professional artists with sophisticated tools can spend many hours producing photo-realistic results. The goal of this project is to make image and video compositing easier by automating most of the process. To this end, we are developing algorithms that analyze and match the visual appearance of objects in images (i.e., their color, contrast, noise, texture, and blur), making it easy to create composites from diverse images. We have also extended this work to video compositing, where, by tracking and blending faces, we can replace them in video performances.
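As a rough illustration of one ingredient of appearance matching, the following NumPy sketch transfers global color statistics, shifting and scaling each channel of a source image so that its mean and standard deviation match a target. This is a hypothetical example, not the tool's implementation; our algorithms also match contrast, noise, texture, and blur, and operate per region rather than globally.

import numpy as np

def match_color_stats(source, target):
    # source, target: float arrays of shape (H, W, 3) with values in [0, 1].
    out = np.empty_like(source)
    for c in range(3):
        s, t = source[..., c], target[..., c]
        scale = t.std() / (s.std() + 1e-8)              # match std. deviation
        out[..., c] = (s - s.mean()) * scale + t.mean()  # match mean
    return np.clip(out, 0.0, 1.0)

Matching these low-order statistics is a common first step; getting noise, texture, and blur to agree is what makes composites hold up under scrutiny.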

Team members: Kalyan Sunkavalli, Kevin Dale, Micah K. Johnson, Wojciech Matusik, Hanspeter Pfister

Analyzing Time-Lapse Data

Time-lapse photography, in which frames are captured at a lower rate than that at which they will ultimately be played back, can create an overwhelming amount of data. For example, a single camera that takes an image every 5 seconds will produce 17,280 images per day, or over six million images per year. Image or video compression reduces the storage requirements, but the resulting data has compression artifacts and is not very useful for further analysis. In addition, it is currently difficult to edit the images in a time-lapse sequence, and advanced image-based rendering operations such as relighting are impossible. We have developed a new representation for time-lapse video that efficiently reduces storage requirements while allowing useful scene analysis and advanced image editing.

We use physics-based models to analyze the variations observed in time-lapse data and to reconstruct the geometry and appearance of outdoor scenes.
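Our actual representation factors the video into physically meaningful components, but as a loose analogy for how factoring reduces storage, the NumPy sketch below compresses a time-lapse stack with a plain low-rank SVD: T*H*W pixel values become roughly rank * (T + H*W) numbers. The function names and the choice of SVD are assumptions for illustration only.

import numpy as np

def factor_timelapse(frames, rank=8):
    # frames: array of shape (T, H, W); requires rank <= min(T, H * W).
    T, H, W = frames.shape
    X = frames.reshape(T, H * W)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    coeffs = U[:, :rank] * S[:rank]         # (T, rank) per-frame weights
    basis = Vt[:rank].reshape(rank, H, W)   # (rank, H, W) basis images
    return coeffs, basis

def reconstruct(coeffs, basis):
    # Rebuild an approximation of the original frame stack.
    rank, H, W = basis.shape
    return (coeffs @ basis.reshape(rank, -1)).reshape(-1, H, W)

Because the basis images and per-frame weights remain interpretable, such a factorization supports the kind of analysis and editing that compressed video streams do not.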

Team members: Kalyan Sunkavalli, Fabiano Romeiro, Wojciech Matusik, Todd Zickler, Hanspeter Pfister, Szymon Rusinkiewicz

Content-Specific Image Enhancement Using Large Image Collections


Due in large part to decreasing costs of image sensors and related technologies, point-and-shoot digital photography has become increasingly accessible over the past several years. Large digital photo collections are now common, making for a vast amount of digital image data stored in the home and across the web. For example, more than 28 million images are uploaded to Facebook daily.

The proliferation of digital images available online through these photo-sharing sites has allowed researchers to collect large databases of natural images and to develop data-driven methods for improving photographs. These methods typically use a nearest-neighbor search over compact, distinctive image descriptors, such as SIFT [Lowe 1999] and Gist [Oliva and Torralba 2001], and interpret distance in this feature space as a proxy for semantic similarity of image content. Recent results indicate that, despite the huge space of all images, such searches can robustly find semantically related results for large but attainable database sizes [Torralba et al. 2007].
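Conceptually, the retrieval step is a nearest-neighbor search in descriptor space. The sketch below is a hypothetical brute-force NumPy version over precomputed Gist-style feature vectors; at realistic database sizes, approximate nearest-neighbor indexes are used instead of an exhaustive scan.

import numpy as np

def knn_search(query, database, k=5):
    # query: (D,) descriptor; database: (N, D) descriptors.
    # Returns indices of the k closest database entries under L2 distance.
    d2 = np.sum((database - query) ** 2, axis=1)
    return np.argsort(d2)[:k]

# Example: find the 5 most similar images in a collection.
db = np.random.rand(10000, 512)   # stand-in for precomputed descriptors
idx = knn_search(db[0], db, k=5)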

To improve a given image, the general idea is that its nearest neighbors inform us of what the query image should look like. We apply this approach to two applications: restoring natural appearance to photographs that suffer from common artifacts, and enhancing the realism of computer-generated (CG) images. Moreover, we extend existing techniques for global appearance transfer of image characteristics, such as color and tone, to region-based local transfers.

For image restoration, we use the recovered image matches as an image-specific prior and demonstrate its value in a variety of automatic restoration operations, including white balance correction, exposure correction, and contrast enhancement. Our evaluation shows that such priors consistently outperform generic and even domain-specific priors for these operations. For CG image enhancement, we use the recovered correspondences to make CG images look more realistic. Given region correspondences between a CG input and real matches from the database, the user can automatically transfer color, tone, and texture from the matching regions to the CG image. A user study shows that the improved CG images appear more realistic than the originals.

In this work, we make use of a large collection of natural images gathered from online repositories for the task of image enhancement.
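As a simplified, hypothetical example of how matches can act as an image-specific prior, the sketch below white-balances a query by scaling its channel means toward the average channel means of its nearest neighbors; the restoration operators in this work are considerably more sophisticated than this.

import numpy as np

def white_balance_from_matches(query, matches):
    # query: (H, W, 3) float image; matches: list of (H, W, 3) images
    # retrieved as the query's nearest neighbors.
    prior = np.mean([m.reshape(-1, 3).mean(axis=0) for m in matches], axis=0)
    observed = query.reshape(-1, 3).mean(axis=0)
    gains = prior / (observed + 1e-8)   # per-channel correction factors
    return np.clip(query * gains, 0.0, 1.0)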

Sponsored by Adobe Systems, Inc.

Team members: Kevin Dale, Micah K. Johnson, Kalyan Sunkavalli, Wojciech Matusik, Shai Avidan, Hanspeter Pfister, and William T. Freeman