Publications

2017
Data-Driven Guides: Supporting Expressive Design for Information Graphics
Kim NW, Schweickart E, Liu Z, Dontcheva M, Li W, Popovic J, Pfister H. Data-Driven Guides: Supporting Expressive Design for Information Graphics [Internet]. IEEE Transactions on Visualization and Computer Graphics (InfoVis’16) 2017;PP(99):1-1. Publisher's Version

In recent years, there has been a growing need to communicate complex data in an accessible graphical form. Existing visualization creation tools support automatic visual encoding but lack flexibility for creating custom designs; on the other hand, freeform illustration tools require manual visual encoding, making the design process time-consuming and error-prone. In this paper, we present Data-Driven Guides (DDG), a technique for designing expressive information graphics in a graphic design environment. Instead of being confined by predefined templates or marks, designers can generate guides from data and use the guides to draw, place, and measure custom shapes. We provide guides that encode data using three fundamental visual encoding channels: length, area, and position. Users can combine more than one guide to construct complex visual structures and map these structures to data. When the underlying data changes, we use a deformation technique to transform custom shapes using the guides as the backbone of the shapes. Our evaluation shows that data-driven guides allow users to create expressive and more accurate custom data-driven graphics.

booc.io: An Education System with Hierarchical Concept Maps
Schwab M, Strobelt H, Tompkin J, Fredericks C, Huff C, Higgins D, Strezhnev A, Komisarchik M, King G, Pfister H. booc.io: An Education System with Hierarchical Concept Maps. IEEE Transactions on Visualization and Computer Graphics (InfoVis’16) 2017;PP(99):1-1.

Information hierarchies are difficult to express when real-world space or time constraints force traversing the hierarchy in linear presentations, such as in educational books and classroom courses. We present booc.io, which allows linear and non-linear presentation and navigation of educational concepts and material. To support a breadth of material for each concept, booc.io is Web based, which allows adding material such as lecture slides, book chapters, videos, and LTIs. A visual interface assists the creation of the needed hierarchical structures. The goals of our system were formed in expert interviews, and we explain how our design meets these goals. We adapt a real-world course into booc.io, and perform introductory qualitative evaluation with students.

Screenit: Visual Analysis of Cellular Screens
Dinkla K, Strobelt H, Genest B, Reiling S, Borowsky M, Pfister H. Screenit: Visual Analysis of Cellular Screens. IEEE Transactions on Visualization and Computer Graphics (InfoVis’16) 2017;PP(99):1-1.

High-throughput and high-content screening enables large scale, cost-effective experiments in which cell cultures are exposed to a wide spectrum of drugs. The resulting multivariate data sets have a large but shallow hierarchical structure. The deepest level of this structure describes cells in terms of numeric features that are derived from image data. The subsequent level describes enveloping cell cultures in terms of imposed experiment conditions (exposure to drugs). We present Screenit, a visual analysis approach designed in close collaboration with screening experts. Screenit enables the navigation and analysis of multivariate data at multiple hierarchy levels and at multiple levels of detail. Screenit integrates the interactive modeling of cell physical states (phenotypes) and the effects of drugs on cell cultures. In addition, quality control is enabled via the detection of anomalies that indicate low-quality data, while providing an interface that is designed to match workflows of screening experts. We demonstrate analyses for a real-world data set, CellMorph, with 6 million cells across 20,000 cell cultures.

2016
Guidelines for Effective Usage of Text Highlighting Techniques
Strobelt H, Oelke D, Kwon BC, Schreck T, Pfister H. Guidelines for Effective Usage of Text Highlighting Techniques. IEEE Transactions on Visualization and Computer Graphics 2016;22(1):489-498.

Semi-automatic text analysis involves manual inspection of text. Often, different text annotations (like part-of-speech or named entities) are indicated by using distinctive text highlighting techniques. In typesetting there exist well-known formatting conventions, such as bold typeface, italics, or background coloring, that are useful for highlighting certain parts of a given text. Also, many advanced techniques for visualization and highlighting of text exist; yet, standard typesetting is common, and the effects of standard typesetting on the perception of text are not fully understood. As such, we surveyed and tested the effectiveness of common text highlighting techniques, both individually and in combination, to discover how to maximize pop-out effects while minimizing visual interference between techniques. To validate our findings, we conducted a series of crowd-sourced experiments to determine: i) a ranking of nine commonly-used text highlighting techniques; ii) the degree of visual interference between pairs of text highlighting techniques; iii) the effectiveness of techniques for visual conjunctive search. Our results show that increasing font size works best as a single highlighting technique, and that there are significant visual interferences between some pairs of highlighting techniques. We discuss the pros and cons of different combinations as a design guideline to choose text highlighting techniques for text viewers.

Vials: Visualizing Alternative Splicing of Genes
Strobelt H, Alsallakh B, Botros J, Peterson B, Borowsky M, Pfister H, Lex A. Vials: Visualizing Alternative Splicing of Genes. IEEE Transactions on Visualization and Computer Graphics 2016;22(1):399-408.

Alternative splicing is a process by which the same DNA sequence is used to assemble different proteins, called protein isoforms. Alternative splicing works by selectively omitting some of the coding regions (exons) typically associated with a gene. Detection of alternative splicing is difficult and uses a combination of advanced data acquisition methods and statistical inference. Knowledge about the abundance of isoforms is important for understanding both normal processes and diseases and to eventually improve treatment through targeted therapies. The data, however, is complex and current visualizations for isoforms are neither perceptually efficient nor scalable. To remedy this, we developed Vials, a novel visual analysis tool that enables analysts to explore the various datasets that scientists use to make judgments about isoforms: the abundance of reads associated with the coding regions of the gene, evidence for junctions, i.e., edges connecting the coding regions, and predictions of isoform frequencies. Vials is scalable as it allows for the simultaneous analysis of many samples in multiple groups. Our tool thus enables experts to (a) identify patterns of isoform abundance in groups of samples and (b) evaluate the quality of the data. We demonstrate the value of our tool in case studies using publicly available datasets.

Automatic Neural Reconstruction from Petavoxel of Electron Microscopy Data
Suissa-Peleg A, Haehn D, Knowles-Barley S, Kaynig V, Jones TR, Wilson A, Schalek R, Lichtman JW, Pfister H. Automatic Neural Reconstruction from Petavoxel of Electron Microscopy Data [Internet]. In: Microscopy and Microanalysis. 2016 p. 536-537. Publisher's Version

Connectomics is the study of the dense structure of the neurons in the brain and their synapses, providing new insights into the relation between the brain’s structure and its function. Recent advances in electron microscopy enable high-resolution imaging (4 nm per pixel) of neural tissue at a rate of roughly 10 terapixels in a single day, allowing neuroscientists to capture large blocks of neural tissue in a reasonable amount of time. These large amounts of data require novel computer-vision-based algorithms and scalable software frameworks for processing. We describe RhoANA, our dense Automatic Neural Annotation framework, which we have developed to automatically align, segment, and reconstruct a 1 mm^3 volume of brain tissue (~2 petapixels).

Imaging a 1 mm3 Volume of Rat Cortex Using a MultiBeam SEM
Schalek R, Lee D, Kasthuri N, Suissa-Peleg A, Jones TR, Kaynig V, Haehn D, Pfister H, Cox D, Lichtman JW. Imaging a 1 mm3 Volume of Rat Cortex Using a MultiBeam SEM [Internet]. In: Microscopy and Microanalysis. 2016 p. 582-583. Publisher's Version

The rodent brain is organized with length scales spanning centimeters to nanometers, 6 orders of magnitude [1]. At the centimeter scale, the brain consists of lobes of cortex, the cerebellum, the brainstem, and the spinal cord. At the millimeter scale, neurons are arranged in columns, layers, or other clusters. Recent technological imaging advances allow the generation of neuronal datasets spanning the spatial range from nanometers to 100s of microns [2,3]. Collecting a 1 mm^3 volume dataset of brain tissue at 4 nm x-y resolution using the fastest single-beam SEM would require ~6 years. Moving to the next length and volume scale of neuronal circuits requires several technological advances. The multibeam scanning electron microscope (mSEM) represents a transformative imaging technology that enables neuroscientists to tackle millimeter-scale cortical circuit problems. In this work we describe a workflow, from tissue harvest to imaging, that will generate a 2-petabyte dataset (>300,000,000 images) of rat visual cortex imaged at 4 nm x 4 nm x-y resolution (Nyquist sampling of membranes) and 30 nm section thickness in less than 6 months.

Icon: An Interactive Approach to Train Deep Neural Networks for Segmentation of Neuronal Structures
Gonda F, Kaynig V, Thouis R, Haehn D, Lichtman J, Parag T, Pfister H. Icon: An Interactive Approach to Train Deep Neural Networks for Segmentation of Neuronal Structures [Internet]. arXiv 2016; arXiv

We present an interactive approach to train a deep neural network pixel classifier for the segmentation of neuronal structures. An interactive training scheme reduces the extremely tedious manual annotation task that is typically required for deep networks to perform well on image segmentation problems. Our proposed method employs a feedback loop that captures sparse annotations using a graphical user interface, trains a deep neural network based on recent and past annotations, and displays the prediction output to users in almost real-time. Our implementation of the algorithm also allows multiple users to provide annotations in parallel and receive feedback from the same classifier. Quick feedback on classifier performance in an interactive setting enables users to identify and label examples that are more important than others for segmentation purposes. Our experiments show that an interactively-trained pixel classifier produces better region segmentation results on Electron Microscopy (EM) images than those generated by a network of the same architecture trained offline on exhaustive ground-truth labels.

Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks
Strobelt H, Gehrmann S, Huber B, Pfister H, Rush AM. Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks [Internet]. CoRR 2016;abs/1606.07461 arXiv
Blind Image Deblurring Using Dark Channel Prior
Pan J, Sun D, Yang M-H, Pfister H. Blind Image Deblurring Using Dark Channel Prior. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. 2016.

We present a simple and effective blind image deblurring method based on the dark channel prior. Our work is inspired by the observation that the dark channel of blurred images is less sparse: while most image patches in the clean image contain some dark pixels, these pixels are not dark when averaged with neighboring high-intensity pixels during the blur process. This change in the sparsity of the dark channel is an inherent property of the blur process, which we both prove mathematically and validate using training data. Therefore, enforcing the sparsity of the dark channel helps blind deblurring in various scenarios, including natural, face, text, and low-illumination images. However, sparsity of the dark channel introduces a non-convex, non-linear optimization problem. We introduce a linear approximation of the min operator to compute the dark channel. Our look-up-table-based method converges fast in practice and can be directly extended to non-uniform deblurring. Extensive experiments show that our method achieves state-of-the-art results on deblurring natural images and compares favorably with methods that are well-engineered for specific scenarios.
An Interaction-Aware, Perceptual Model For Non-Linear Elastic Objects
Piovarči M, Levin DIW, Rebello J, Chen D, Ďurikovič R, Pfister H, Matusik W, Didyk P. An Interaction-Aware, Perceptual Model For Non-Linear Elastic Objects. ACM Transactions on Graphics 35(4) (Proc. SIGGRAPH 2016, Anaheim, California, USA) 2016.

Everyone, from a shopper buying shoes to a doctor palpating a growth, uses their sense of touch to learn about the world. 3D printing is a powerful technology because it gives us the ability to control the haptic impression an object creates. This is critical for both replicating existing, real-world constructs and designing novel ones. However, each 3D printer has different capabilities and supports different materials, leaving us to ask: How can we best replicate a given haptic result on a particular output device? In this work, we address the problem of mapping a real-world material to its nearest 3D printable counterpart by constructing a perceptual model for the compliance of nonlinearly elastic objects. We begin by building a perceptual space from experimentally obtained user comparisons of twelve 3D-printed metamaterials. By comparing this space to a number of hypothetical computational models, we identify those that can be used to accurately and efficiently evaluate human-perceived differences in nonlinear stiffness. Furthermore, we demonstrate how such models can be applied to complex geometries in an interaction-aware way where the compliance is influenced not only by the material properties from which the object is made but also its geometry. We demonstrate several applications of our method in the context of fabrication and evaluate them in a series of user experiments.

2015
VESICLE: Volumetric Evaluation of Synaptic Interfaces using Computer Vision at Large Scale
Roncal WG, Pekala M, Kaynig-Fittkau V, Kleissas DM, Vogelstein JT, Pfister H, Burns R, Vogelstein JR, Chevillet MA, Hager GD. VESICLE: Volumetric Evaluation of Synaptic Interfaces using Computer Vision at Large Scale [Internet]. In: Xie X, Jones MW, Tam GKL, editors. Proceedings of the British Machine Vision Conference (BMVC). BMVA Press; 2015. p. 81.1-81.13. Publisher's Version


An open challenge at the forefront of modern neuroscience is to obtain a comprehensive mapping of the neural pathways that underlie human brain function; an enhanced understanding of the wiring diagram of the brain promises to lead to new breakthroughs in diagnosing and treating neurological disorders. Inferring brain structure from image data, such as that obtained via electron microscopy (EM), entails solving the problem of identifying biological structures in large data volumes. Synapses, which are a key communication structure in the brain, are particularly difficult to detect due to their small size and limited contrast. Prior work in automated synapse detection has relied upon time-intensive, error-prone biological preparations (isotropic slicing, post-staining) in order to simplify the problem. This paper presents VESICLE, the first known approach designed for mammalian synapse detection in anisotropic, non-poststained data. Our methods explicitly leverage biological context, and the results exceed existing synapse detection methods in terms of accuracy and scalability. We provide two approaches, a deep learning classifier (VESICLE-CNN) and a lightweight Random Forest approach (VESICLE-RF), to offer alternatives in the performance-scalability space. Addressing this synapse detection challenge enables the analysis of high-throughput imaging that is soon expected to produce petabytes of data, and provides tools for more rapid estimation of brain-graphs. Finally, to facilitate community efforts, we developed tools for large-scale object detection, and demonstrated this framework to find ~50,000 synapses in 60,000 um^3 (220 GB on disk) of electron microscopy data.

A Crowdsourced Alternative to Eye-tracking for Visualization Understanding
Kim NW, Bylinskii Z, Borkin MA, Oliva A, Gajos KZ, Pfister H. A Crowdsourced Alternative to Eye-tracking for Visualization Understanding [Internet]. In: CHI’15 Extended Abstracts. Seoul, Korea: ACM; 2015. p. 1349-1354. Publisher's Version

In this study we investigate the utility of using mouse clicks as an alternative for eye fixations in the context of understanding data visualizations. We developed a crowdsourced study online in which participants were presented with a series of images containing graphs and diagrams and asked to describe them. Each image was blurred so that the participant needed to click to reveal bubbles - small, circular areas of the image at normal resolution. This is similar to having a confined area of focus like the human eye fovea. We compared the bubble click data with the fixation data from a complementary eye-tracking experiment by calculating the similarity between the resulting heatmaps. A high similarity score suggests that our methodology may be a viable crowdsourced alternative to eye-tracking experiments, especially when little to no eye-tracking data is available. This methodology can also be used to complement eye-tracking studies with an additional behavioral measurement, since it is specifically designed to measure which information people consciously choose to examine for understanding visualizations.
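
The heatmap comparison described above can be sketched with a simple similarity measure; Pearson correlation is used here as an illustrative choice and is not necessarily the metric used in the study:

```python
import numpy as np

def heatmap_similarity(a, b):
    """Pearson correlation between two flattened heatmaps:
    1.0 for identical attention patterns, -1.0 for opposite ones."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Compare, e.g., a bubble-click heatmap against an eye-fixation heatmap;
# a high score suggests clicks and fixations attend to the same regions.
clicks = np.array([[0.0, 1.0], [2.0, 3.0]])
```
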

State-of-the-Art in GPU-Based Large-Scale Volume Visualization
Beyer J, Hadwiger M, Pfister H. State-of-the-Art in GPU-Based Large-Scale Volume Visualization [Internet]. Computer Graphics Forum 2015; Publisher's Version

This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera-, and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e., “output-sensitive” algorithms and system designs. This leads to recent output-sensitive approaches that are “ray-guided,” “visualization-driven,” or “display-aware.” In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context—the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we discuss in this survey.

Large-Scale Automatic Reconstruction of Neuronal Processes from Electron Microscopy Images
Kaynig V, Vazquez-Reina A, Knowles-Barley S, Roberts M, Jones TR, Kasthuri N, Miller E, Lichtman J, Pfister H. Large-Scale Automatic Reconstruction of Neuronal Processes from Electron Microscopy Images. Medical Image Analysis 2015;22(1):77-88.

Automated sample preparation and electron microscopy enable acquisition of very large image data sets. These technical advances are of special importance to the field of neuroanatomy, as 3D reconstructions of neuronal processes at the nm scale can provide new insight into the fine-grained structure of the brain. Segmentation of large-scale electron microscopy data is the main bottleneck in the analysis of these data sets. In this paper we present a pipeline that provides state-of-the-art reconstruction performance while scaling to data sets in the GB-TB range. First, we train a random forest classifier on interactive sparse user annotations. The classifier output is combined with an anisotropic smoothing prior in a Conditional Random Field framework to generate multiple segmentation hypotheses per image. These segmentations are then combined into geometrically consistent 3D objects by segmentation fusion. We provide qualitative and quantitative evaluation of the automatic segmentation and demonstrate large-scale 3D reconstructions of neuronal processes from a 27,000 µm^3 volume of brain tissue, a cube of 30 µm in each dimension, corresponding to 1,000 consecutive image sections. We also introduce Mojo, a proofreading tool including semi-automated correction of merge errors based on sparse user scribbles.

Computational Design of Walking Automata
Bharaj G, Coros S, Thomaszewski B, Tompkin J, Bickel B, Pfister H. Computational Design of Walking Automata [Internet]. In: ACM SIGGRAPH / Eurographics Symposium on Computer Animation. 2015. Publisher's Version

Creating mechanical automata that can walk in stable and pleasing manners is a challenging task that requires both skill and expertise. We propose to use computational design to offset the technical difficulties of this process. A simple drag-and-drop interface allows casual users to create personalized walking toys from a library of pre-defined template mechanisms. Provided with this input, our method leverages physical simulation and evolutionary optimization to refine the mechanical designs such that the resulting toys are able to walk. The optimization process is guided by an intuitive set of objectives that measure the quality of the walking motions. We demonstrate our approach on a set of simulated mechanical toys with different numbers of legs and various distinct gaits. Two fabricated prototypes showcase the feasibility of our designs.

Blind Video Temporal Consistency
Bonneel N, Tompkin J, Sunkavalli K, Sun D, Paris S, Pfister H. Blind Video Temporal Consistency [Internet]. ACM Transactions on Graphics (SIGGRAPH Asia) 2015; Webpage
Computational Design of Metallophone Contact Sounds
Bharaj G, Levin DIW, Tompkin J, Fei Y, Pfister H, Matusik W, Zheng C. Computational Design of Metallophone Contact Sounds [Internet]. ACM Transactions on Graphics (SIGGRAPH Asia) 2015; Webpage
Context-guided Diffusion for Label Propagation on Graphs
Kim KI, Tompkin J, Pfister H, Theobalt C. Context-guided Diffusion for Label Propagation on Graphs [Internet]. International Conference on Computer Vision (ICCV) 2015; Webpage
Generalizing Wave Gestures from Sparse Examples for Real-time Character Control
Rhodin H, Tompkin J, Kim KI, de Aguiar E, Pfister H, Seidel H-P, Theobalt C. Generalizing Wave Gestures from Sparse Examples for Real-time Character Control [Internet]. ACM Transactions on Graphics (SIGGRAPH Asia) 2015; Webpage
