Publications

2017
Data-Driven Guides: Supporting Expressive Design for Information Graphics
Kim NW, Schweickart E, Liu Z, Dontcheva M, Li W, Popovic J, Pfister H. Data-Driven Guides: Supporting Expressive Design for Information Graphics. IEEE Transactions on Visualization and Computer Graphics (InfoVis’16) 2017;PP(99):1-1.

In recent years, there has been a growing need for communicating complex data in an accessible graphical form. Existing visualization creation tools support automatic visual encoding but lack flexibility for creating custom designs; on the other hand, freeform illustration tools require manual visual encoding, making the design process time-consuming and error-prone. In this paper, we present Data-Driven Guides (DDG), a technique for designing expressive information graphics in a graphic design environment. Instead of being confined by predefined templates or marks, designers can generate guides from data and use the guides to draw, place, and measure custom shapes. We provide guides to encode data using three fundamental visual encoding channels: length, area, and position. Users can combine more than one guide to construct complex visual structures and map these structures to data. When the underlying data changes, we use a deformation technique to transform custom shapes, using the guides as the backbone of the shapes. Our evaluation shows that data-driven guides allow users to create expressive and more accurate custom data-driven graphics.
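
To make the guide idea concrete, here is a minimal, hypothetical sketch (not the DDG implementation) of how a length-encoding guide could be derived from data: each value is mapped through a linear scale to the length of a guide segment that a designer can then draw custom shapes along.

```python
# Hypothetical sketch of generating length-encoding guides from data;
# the actual DDG system is an interactive graphic design tool.

def length_guides(values, max_height=200.0, spacing=40.0):
    """Map data values to (x, y0, y1) guide segments via a linear scale."""
    top = max(values)
    return [(i * spacing, 0.0, max_height * v / top)  # segment length encodes v
            for i, v in enumerate(values)]

print(length_guides([4, 8, 15, 16]))
```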

booc.io: An Education System with Hierarchical Concept Maps
Schwab M, Strobelt H, Tompkin J, Fredericks C, Huff C, Higgins D, Strezhnev A, Komisarchik M, King G, Pfister H. booc.io: An Education System with Hierarchical Concept Maps. IEEE Transactions on Visualization and Computer Graphics (InfoVis’16) 2017;PP(99):1-1.

Information hierarchies are difficult to express when real-world space or time constraints force traversing the hierarchy in linear presentations, such as in educational books and classroom courses. We present booc.io, which allows linear and non-linear presentation and navigation of educational concepts and material. To support a breadth of material for each concept, booc.io is Web based, which allows adding material such as lecture slides, book chapters, videos, and LTIs. A visual interface assists the creation of the needed hierarchical structures. The goals of our system were formed in expert interviews, and we explain how our design meets these goals. We adapt a real-world course into booc.io and perform an introductory qualitative evaluation with students.

Screenit: Visual Analysis of Cellular Screens
Dinkla K, Strobelt H, Genest B, Reiling S, Borowsky M, Pfister H. Screenit: Visual Analysis of Cellular Screens. IEEE Transactions on Visualization and Computer Graphics (InfoVis’16) 2017;PP(99):1-1.

High-throughput and high-content screening enables large scale, cost-effective experiments in which cell cultures are exposed to a wide spectrum of drugs. The resulting multivariate data sets have a large but shallow hierarchical structure. The deepest level of this structure describes cells in terms of numeric features that are derived from image data. The subsequent level describes enveloping cell cultures in terms of imposed experiment conditions (exposure to drugs). We present Screenit, a visual analysis approach designed in close collaboration with screening experts. Screenit enables the navigation and analysis of multivariate data at multiple hierarchy levels and at multiple levels of detail. Screenit integrates the interactive modeling of cell physical states (phenotypes) and the effects of drugs on cell cultures. In addition, quality control is enabled via the detection of anomalies that indicate low-quality data, while providing an interface that is designed to match workflows of screening experts. We demonstrate analyses for a real-world data set, CellMorph, with 6 million cells across 20,000 cell cultures.

2016
Icon: An Interactive Approach to Train Deep Neural Networks for Segmentation of Neuronal Structures
Gonda F, Kaynig V, Thouis R, Haehn D, Lichtman J, Parag T, Pfister H. Icon: An Interactive Approach to Train Deep Neural Networks for Segmentation of Neuronal Structures. arXiv preprint, 2016.

We present an interactive approach to train a deep neural network pixel classifier for the segmentation of neuronal structures. An interactive training scheme reduces the extremely tedious manual annotation task that is typically required for deep networks to perform well on image segmentation problems. Our proposed method employs a feedback loop that captures sparse annotations using a graphical user interface, trains a deep neural network based on recent and past annotations, and displays the prediction output to users in almost real-time. Our implementation of the algorithm also allows multiple users to provide annotations in parallel and receive feedback from the same classifier. Quick feedback on classifier performance in an interactive setting enables users to identify and label examples that are more important than others for segmentation purposes. Our experiments show that an interactively-trained pixel classifier produces better region segmentation results on Electron Microscopy (EM) images than those generated by a network of the same architecture trained offline on exhaustive ground-truth labels.
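
The feedback loop described above can be summarized in a short sketch. Everything here is a hypothetical outline of the general pattern (sparse labels in, incremental training, near-real-time predictions out), not the Icon codebase:

```python
# Hypothetical outline of an interactive training loop: gather sparse
# annotations, update the classifier, and show fresh predictions so the
# user can target the regions where the model is weakest.

def interactive_training(model, image, get_annotations, display):
    labeled = []                              # (pixel_features, label) pairs
    while True:
        new_labels = get_annotations()        # sparse clicks from the GUI
        if not new_labels:                    # user is satisfied; stop
            break
        labeled.extend(new_labels)
        features, targets = zip(*labeled)
        model.partial_fit(features, targets)  # train on past + recent labels
        display(model.predict(image))         # near-real-time feedback
    return model
```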

Automatic Neural Reconstruction from Petavoxel of Electron Microscopy Data
Suissa-Peleg A, Haehn D, Knowles-Barley S, Kaynig V, Jones TR, Wilson A, Schalek R, Lichtman JW, Pfister H. Automatic Neural Reconstruction from Petavoxel of Electron Microscopy Data. In: Microscopy and Microanalysis. 2016. p. 536-537.

Connectomics is the study of the dense structure of the neurons in the brain and their synapses, providing new insights into the relation between the brain’s structure and its function. Recent advances in Electron Microscopy enable high-resolution imaging (4 nm per pixel) of neural tissue at a rate of roughly 10 terapixels in a single day, allowing neuroscientists to capture large blocks of neural tissue in a reasonable amount of time. These large amounts of data require novel computer vision based algorithms and scalable software frameworks to process. We describe RhoANA, our dense Automatic Neural Annotation framework, which we have developed to automatically align, segment, and reconstruct a 1 mm3 volume of brain tissue (~2 peta-pixels).

Imaging a 1 mm3 Volume of Rat Cortex Using a MultiBeam SEM
Schalek R, Lee D, Kasthuri N, Suissa-Peleg A, Jones TR, Kaynig V, Haehn D, Pfister H, Cox D, Lichtman JW. Imaging a 1 mm3 Volume of Rat Cortex Using a MultiBeam SEM. In: Microscopy and Microanalysis. 2016. p. 582-583.

The rodent brain is organized at length scales spanning centimeters to nanometers, six orders of magnitude [1]. At the centimeter scale, the brain consists of lobes of cortex, the cerebellum, the brainstem, and the spinal cord. At the millimeter scale, neurons are arranged in columns, layers, or other clusters. Recent technological imaging advances allow the generation of neuronal datasets spanning the spatial range from nanometers to hundreds of microns [2,3]. Collecting a 1 mm3 volume dataset of brain tissue at 4 nm x-y resolution using the fastest single-beam SEM would require ~6 years. Moving to the next length and volume scale of neuronal circuits requires several technological advances. The multibeam scanning electron microscope (mSEM) represents a transformative imaging technology that enables neuroscientists to tackle millimeter-scale cortical circuit problems. In this work we describe a workflow, from tissue harvest to imaging, that will generate a 2 petabyte dataset (> 300,000,000 images) of rat visual cortex imaged at 4 nm x 4 nm x-y resolution (Nyquist sampling of membranes) and 30 nm section thickness in less than 6 months.
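
As a back-of-the-envelope check on these figures (my arithmetic, not the paper's), a 1 mm3 volume at 4 nm x 4 nm pixels and 30 nm sections comes out to roughly 2 petapixels, consistent with the ~2 petabyte dataset quoted above:

```python
# Rough volume arithmetic for 1 mm^3 at the stated imaging resolution.
mm_in_nm = 1_000_000
pixels_per_section = (mm_in_nm / 4) ** 2       # 4 nm x 4 nm pixels -> 6.25e10
sections = mm_in_nm / 30                       # 30 nm sections -> ~33,333
total_pixels = pixels_per_section * sections   # ~2.1e15 pixels
print(f"{total_pixels:.2e}")                   # ~2 petapixels, ~2 PB at 1 B/px
```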

Vials: Visualizing Alternative Splicing of Genes
Strobelt H, Alsallakh B, Botros J, Peterson B, Borowsky M, Pfister H, Lex A. Vials: Visualizing Alternative Splicing of Genes. IEEE Transactions on Visualization and Computer Graphics 2016;22(1):399-408.

Alternative splicing is a process by which the same DNA sequence is used to assemble different proteins, called protein isoforms. Alternative splicing works by selectively omitting some of the coding regions (exons) typically associated with a gene. Detection of alternative splicing is difficult and uses a combination of advanced data acquisition methods and statistical inference. Knowledge about the abundance of isoforms is important for understanding both normal processes and diseases and to eventually improve treatment through targeted therapies. The data, however, is complex and current visualizations for isoforms are neither perceptually efficient nor scalable. To remedy this, we developed Vials, a novel visual analysis tool that enables analysts to explore the various datasets that scientists use to make judgments about isoforms: the abundance of reads associated with the coding regions of the gene, evidence for junctions, i.e., edges connecting the coding regions, and predictions of isoform frequencies. Vials is scalable as it allows for the simultaneous analysis of many samples in multiple groups. Our tool thus enables experts to (a) identify patterns of isoform abundance in groups of samples and (b) evaluate the quality of the data. We demonstrate the value of our tool in case studies using publicly available datasets.

Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks
Strobelt H, Gehrmann S, Huber B, Pfister H, Rush AM. Visual Analysis of Hidden State Dynamics in Recurrent Neural Networks. CoRR 2016;abs/1606.07461.

Blind Image Deblurring Using Dark Channel Prior
Pan J, Sun D, Yang M-H, Pfister H. Blind Image Deblurring Using Dark Channel Prior. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016.

We present a simple and effective blind image deblurring method based on the dark channel prior. Our work is inspired by the interesting observation that the dark channel of blurred images is less sparse: while most image patches in a clean image contain some dark pixels, these pixels are not dark when averaged with neighboring high-intensity pixels during the blur process. This change in the sparsity of the dark channel is an inherent property of the blur process, which we both prove mathematically and validate using training data. Therefore, enforcing the sparsity of the dark channel helps blind deblurring in various scenarios, including natural, face, text, and low-illumination images. However, sparsity of the dark channel introduces a non-convex non-linear optimization problem. We introduce a linear approximation of the min operator to compute the dark channel. Our look-up-table-based method converges fast in practice and can be directly extended to non-uniform deblurring. Extensive experiments show that our method achieves state-of-the-art results on deblurring natural images and compares favorably against methods that are well-engineered for specific scenarios.
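
For readers unfamiliar with the prior, the dark channel of an image is the per-pixel minimum over the color channels followed by a minimum over a local patch, i.e., D(x) = min_{y in N(x)} min_{c in {r,g,b}} I^c(y). The snippet below is a minimal illustration of that standard definition (not the paper's deblurring code), assuming a SciPy environment:

```python
# Minimal sketch of the standard dark channel computation; illustrative
# only, not the paper's implementation.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """image: HxWx3 float array in [0, 1]; returns the HxW dark channel."""
    per_pixel_min = image.min(axis=2)                 # min over color channels
    return minimum_filter(per_pixel_min, size=patch)  # min over a local patch

img = np.random.rand(64, 64, 3)                       # stand-in for a real image
print(dark_channel(img).shape)                        # (64, 64)
```
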
An Interaction-Aware, Perceptual Model For Non-Linear Elastic Objects
Piovarči M, Levin DIW, Rebello J, Chen D, Ďurikovič R, Pfister H, Matusik W, Didyk P. An Interaction-Aware, Perceptual Model For Non-Linear Elastic Objects. ACM Transactions on Graphics (Proc. SIGGRAPH 2016) 2016;35(4).

Everyone, from a shopper buying shoes to a doctor palpating a growth, uses their sense of touch to learn about the world. 3D printing is a powerful technology because it gives us the ability to control the haptic impression an object creates. This is critical for both replicating existing, real-world constructs and designing novel ones. However, each 3D printer has different capabilities and supports different materials, leaving us to ask: How can we best replicate a given haptic result on a particular output device? In this work, we address the problem of mapping a real-world material to its nearest 3D printable counterpart by constructing a perceptual model for the compliance of nonlinearly elastic objects. We begin by building a perceptual space from experimentally obtained user comparisons of twelve 3D-printed metamaterials. By comparing this space to a number of hypothetical computational models, we identify those that can be used to accurately and efficiently evaluate human-perceived differences in nonlinear stiffness. Furthermore, we demonstrate how such models can be applied to complex geometries in an interaction-aware way where the compliance is influenced not only by the material properties from which the object is made but also its geometry. We demonstrate several applications of our method in the context of fabrication and evaluate them in a series of user experiments.
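
As background for how a perceptual space can be built from pairwise comparisons, the sketch below embeds a dissimilarity matrix with multidimensional scaling. This is one common technique shown on synthetic data, not necessarily the authors' construction:

```python
# Illustrative only: embed 12 materials into a 2D "perceptual space" from
# pairwise dissimilarities using MDS; the data here is synthetic, and the
# paper's actual model construction may differ.
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
d = rng.random((12, 12))
d = (d + d.T) / 2.0            # symmetrize pairwise dissimilarities
np.fill_diagonal(d, 0.0)       # zero self-dissimilarity

space = MDS(n_components=2, dissimilarity="precomputed",
            random_state=0).fit_transform(d)
print(space.shape)             # (12, 2)
```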

Guidelines for Effective Usage of Text Highlighting Techniques
Strobelt H, Oelke D, Kwon BC, Schreck T, Pfister H. Guidelines for Effective Usage of Text Highlighting Techniques. IEEE Transactions on Visualization and Computer Graphics 2016;22(1):489-498.

Semi-automatic text analysis involves manual inspection of text. Often, different text annotations (like part-of-speech or named entities) are indicated by using distinctive text highlighting techniques. In typesetting there exist well-known formatting conventions, such as bold typeface, italics, or background coloring, that are useful for highlighting certain parts of a given text. Also, many advanced techniques for visualization and highlighting of text exist; yet, standard typesetting is common, and the effects of standard typesetting on the perception of text are not fully understood. As such, we surveyed and tested the effectiveness of common text highlighting techniques, both individually and in combination, to discover how to maximize pop-out effects while minimizing visual interference between techniques. To validate our findings, we conducted a series of crowd-sourced experiments to determine: i) a ranking of nine commonly-used text highlighting techniques; ii) the degree of visual interference between pairs of text highlighting techniques; iii) the effectiveness of techniques for visual conjunctive search. Our results show that increasing font size works best as a single highlighting technique, and that there are significant visual interferences between some pairs of highlighting techniques. We discuss the pros and cons of different combinations as a design guideline to choose text highlighting techniques for text viewers.

2015
VESICLE: Volumetric Evaluation of Synaptic Interfaces using Computer Vision at Large Scale
Roncal WG, Pekala M, Kaynig-Fittkau V, Kleissas DM, Vogelstein JT, Pfister H, Burns R, Vogelstein JR, Chevillet MA, Hager GD. VESICLE: Volumetric Evaluation of Synaptic Interfaces using Computer Vision at Large Scale. In: Xie X, Jones MW, Tam GKL, editors. Proceedings of the British Machine Vision Conference (BMVC). BMVA Press; 2015. p. 81.1-81.13.

An open challenge at the forefront of modern neuroscience is to obtain a comprehensive mapping of the neural pathways that underlie human brain function; an enhanced understanding of the wiring diagram of the brain promises to lead to new breakthroughs in diagnosing and treating neurological disorders. Inferring brain structure from image data, such as that obtained via electron microscopy (EM), entails solving the problem of identifying biological structures in large data volumes. Synapses, which are a key communication structure in the brain, are particularly difficult to detect due to their small size and limited contrast. Prior work in automated synapse detection has relied upon time-intensive, error-prone biological preparations (isotropic slicing, post-staining) in order to simplify the problem. This paper presents VESICLE, the first known approach designed for mammalian synapse detection in anisotropic, non-post-stained data. Our methods explicitly leverage biological context, and the results exceed existing synapse detection methods in terms of accuracy and scalability. We provide two different approaches, a deep learning classifier (VESICLE-CNN) and a lightweight Random Forest approach (VESICLE-RF), to offer alternatives in the performance-scalability space. Addressing this synapse detection challenge enables the analysis of high-throughput imaging that is soon expected to produce petabytes of data, and provides tools for more rapid estimation of brain-graphs. Finally, to facilitate community efforts, we developed tools for large-scale object detection, and demonstrated this framework by finding ~50,000 synapses in 60,000 um^3 (220 GB on disk) of electron microscopy data.

Blind Video Temporal Consistency
Bonneel N, Tompkin J, Sunkavalli K, Sun D, Paris S, Pfister H. Blind Video Temporal Consistency. ACM Transactions on Graphics (SIGGRAPH Asia) 2015.

Computational Design of Metallophone Contact Sounds
Bharaj G, Levin DIW, Tompkin J, Fei Y, Pfister H, Matusik W, Zheng C. Computational Design of Metallophone Contact Sounds. ACM Transactions on Graphics (SIGGRAPH Asia) 2015.

Context-guided Diffusion for Label Propagation on Graphs
Kim KI, Tompkin J, Pfister H, Theobalt C. Context-guided Diffusion for Label Propagation on Graphs. International Conference on Computer Vision (ICCV) 2015.

Generalizing Wave Gestures from Sparse Examples for Real-time Character Control
Rhodin H, Tompkin J, Kim KI, de Aguiar E, Pfister H, Seidel H-P, Theobalt C. Generalizing Wave Gestures from Sparse Examples for Real-time Character Control. ACM Transactions on Graphics (SIGGRAPH Asia) 2015.

Joint 5D Pen Input for Light Field Displays
Tompkin J, Muff S, McCann J, Pfister H, Kautz J, Alexa M, Matusik W. Joint 5D Pen Input for Light Field Displays. ACM User Interface Software and Technology (UIST) 2015.

Local High-order Regularization on Data Manifolds
Kim KI, Tompkin J, Pfister H, Theobalt C. Local High-order Regularization on Data Manifolds. IEEE Computer Vision and Pattern Recognition (CVPR) 2015.

Semi-supervised Learning with Explicit Relationship Regularization
Kim KI, Tompkin J, Pfister H, Theobalt C. Semi-supervised Learning with Explicit Relationship Regularization. IEEE Computer Vision and Pattern Recognition (CVPR) 2015.

Beyond Memorability: Visualization Recognition and Recall
Borkin M, Bylinskii Z, Kim N, Bainbridge C, Yeh C, Borkin D, Pfister H, Oliva A. Beyond Memorability: Visualization Recognition and Recall. IEEE Transactions on Visualization and Computer Graphics 2015;PP:1-1.

In this paper we move beyond memorability and investigate how visualizations are recognized and recalled. For this study we labeled a dataset of 393 visualizations and analyzed the eye movements of 33 participants as well as thousands of participant-generated text descriptions of the visualizations. This allowed us to determine what components of a visualization attract people’s attention, and what information is encoded into memory. Our findings quantitatively support many conventional qualitative design guidelines, including that (1) titles and supporting text should convey the message of a visualization, (2) if used appropriately, pictograms do not interfere with understanding and can improve recognition, and (3) redundancy helps effectively communicate the message. Importantly, we show that visualizations that are memorable “at-a-glance” are also capable of effectively conveying the message of the visualization. Thus, a memorable visualization is often also an effective one.

