NeuroBlocks - Visual Tracking of Segmentation and Proofreading for Large Connectomics Projects
Al-Awami AK, Beyer J, Haehn D, Kasthuri N, Lichtman JW, Pfister H, Hadwiger M. NeuroBlocks - Visual Tracking of Segmentation and Proofreading for Large Connectomics Projects. IEEE Transactions on Visualization and Computer Graphics 2015; to appear.

In the field of connectomics, neuroscientists acquire electron microscopy volumes at nanometer resolution in order to reconstruct a detailed wiring diagram of the neurons in the brain. The resulting image volumes, which are often hundreds of terabytes in size, need to be segmented to identify cell boundaries, synapses, and important cell organelles. However, segmenting a single volume is very complex and time-intensive, and is usually performed by many users with a diverse set of tools. To tackle the associated challenges, this paper presents NeuroBlocks, a novel visualization system for tracking the state, progress, and evolution of very large volumetric segmentation data in neuroscience. NeuroBlocks is a multi-user web-based application that seamlessly integrates the diverse set of tools that neuroscientists currently use for manual and semi-automatic segmentation, proofreading, visualization, and analysis. NeuroBlocks is the first system that integrates this heterogeneous tool set, providing crucial support for the management, provenance, accountability, and auditing of large-scale segmentations. We describe the design of NeuroBlocks, starting with an analysis of the domain-specific tasks, their inherent challenges, and our subsequent task abstraction and visual representation. We demonstrate the utility of our design based on two case studies that focus on different user roles and their respective requirements for performing and tracking the progress of segmentation and proofreading in a large real-world connectomics project.

Computational Design of Walking Automata
Bharaj G, Coros S, Thomaszewski B, Tompkin J, Bickel B, Pfister H. Computational Design of Walking Automata. In: ACM SIGGRAPH / Eurographics Symposium on Computer Animation; 2015.

Creating mechanical automata that can walk in stable and pleasing manners is a challenging task that requires both skill and expertise. We propose to use computational design to offset the technical difficulties of this process. A simple drag-and-drop interface allows casual users to create personalized walking toys from a library of pre-defined template mechanisms. Provided with this input, our method leverages physical simulation and evolutionary optimization to refine the mechanical designs such that the resulting toys are able to walk. The optimization process is guided by an intuitive set of objectives that measure the quality of the walking motions. We demonstrate our approach on a set of simulated mechanical toys with different numbers of legs and various distinct gaits. Two fabricated prototypes showcase the feasibility of our designs.

State-of-the-Art in GPU-Based Large-Scale Volume Visualization
Beyer J, Hadwiger M, Pfister H. State-of-the-Art in GPU-Based Large-Scale Volume Visualization. Computer Graphics Forum 2015.

This survey gives an overview of the current state of the art in GPU techniques for interactive large-scale volume visualization. Modern techniques in this field have brought about a sea change in how interactive visualization and analysis of giga-, tera-, and petabytes of volume data can be enabled on GPUs. In addition to combining the parallel processing power of GPUs with out-of-core methods and data streaming, a major enabler for interactivity is making both the computational and the visualization effort proportional to the amount and resolution of data that is actually visible on screen, i.e., “output-sensitive” algorithms and system designs. This leads to recent output-sensitive approaches that are “ray-guided,” “visualization-driven,” or “display-aware.” In this survey, we focus on these characteristics and propose a new categorization of GPU-based large-scale volume visualization techniques based on the notions of actual output-resolution visibility and the current working set of volume bricks—the current subset of data that is minimally required to produce an output image of the desired display resolution. Furthermore, we discuss the differences and similarities of different rendering and data traversal strategies in volume rendering by putting them into a common context—the notion of address translation. For our purposes here, we view parallel (distributed) visualization using clusters as an orthogonal set of techniques that we do not discuss in detail but that can be used in conjunction with what we discuss in this survey.
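The working-set idea above is essentially virtual memory for volume bricks. Below is a minimal, hypothetical Python sketch (all names are illustrative, not taken from any system in the survey) of the address-translation step: a page table maps virtual brick coordinates to slots in a fixed-size brick cache, paging bricks in on demand and evicting when the cache is full.

```python
class BrickCache:
    """Toy page-table address translation for a fixed-size brick cache."""

    def __init__(self, num_slots):
        self.free_slots = list(range(num_slots))
        self.page_table = {}  # (bx, by, bz) -> cache slot

    def resolve(self, brick):
        """Translate a virtual brick address to a cache slot, paging in on a miss."""
        if brick not in self.page_table:  # cache miss
            if not self.free_slots:
                # evict an arbitrary resident brick to reclaim its slot
                _, slot = self.page_table.popitem()
                self.free_slots.append(slot)
            self.page_table[brick] = self.free_slots.pop()
        return self.page_table[brick]
```

Real systems perform this lookup per ray sample on the GPU with multi-level page tables and visibility-driven eviction; the dictionary here only illustrates the mapping.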

Large-Scale Automatic Reconstruction of Neuronal Processes from Electron Microscopy Images
Kaynig V, Vazquez-Reina A, Knowles-Barley S, Roberts M, Jones TR, Kasthuri N, Miller E, Lichtman J, Pfister H. Large-Scale Automatic Reconstruction of Neuronal Processes from Electron Microscopy Images. Medical Image Analysis 2015;22(1):77-88.

Automated sample preparation and electron microscopy enable the acquisition of very large image data sets. These technical advances are of special importance to the field of neuroanatomy, as 3D reconstructions of neuronal processes at the nanometer scale can provide new insight into the fine-grained structure of the brain. Segmentation of large-scale electron microscopy data is the main bottleneck in the analysis of these data sets. In this paper we present a pipeline that provides state-of-the-art reconstruction performance while scaling to data sets in the GB-TB range. First, we train a random forest classifier on interactive sparse user annotations. The classifier output is combined with an anisotropic smoothing prior in a Conditional Random Field framework to generate multiple segmentation hypotheses per image. These segmentations are then combined into geometrically consistent 3D objects by segmentation fusion. We provide qualitative and quantitative evaluation of the automatic segmentation and demonstrate large-scale 3D reconstructions of neuronal processes from a 27,000 µm³ volume of brain tissue, a cube of 30 µm in each dimension, corresponding to 1,000 consecutive image sections. We also introduce Mojo, a proofreading tool that includes semi-automated correction of merge errors based on sparse user scribbles.
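As a toy illustration of the middle stage of this pipeline, the sketch below combines a randomly generated, stand-in per-pixel boundary-probability map with a simple isotropic smoothing prior before thresholding. The paper's actual prior is anisotropic and embedded in a CRF, and its probabilities come from a trained random forest; neither is reproduced here.

```python
import numpy as np

# Stand-in for the classifier's per-pixel boundary probabilities.
rng = np.random.default_rng(0)
prob = rng.random((64, 64))

# Simple isotropic smoothing prior: average each interior pixel with its
# 4-neighborhood (a crude stand-in for the paper's anisotropic CRF prior).
smooth = prob.copy()
smooth[1:-1, 1:-1] = (prob[1:-1, 1:-1] + prob[:-2, 1:-1] + prob[2:, 1:-1]
                      + prob[1:-1, :-2] + prob[1:-1, 2:]) / 5.0

# Thresholding the smoothed map yields one segmentation hypothesis;
# varying the threshold would yield the multiple hypotheses fed to fusion.
segmentation = smooth > 0.5
```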

A Crowdsourced Alternative to Eye-tracking for Visualization Understanding
Kim NW, Bylinskii Z, Borkin MA, Oliva A, Gajos KZ, Pfister H. A Crowdsourced Alternative to Eye-tracking for Visualization Understanding. In: CHI '15 Extended Abstracts. Seoul, Korea: ACM; 2015. p. 1349-1354.

In this study we investigate the utility of using mouse clicks as an alternative to eye fixations in the context of understanding data visualizations. We developed a crowdsourced online study in which participants were presented with a series of images containing graphs and diagrams and were asked to describe them. Each image was blurred so that participants needed to click to reveal bubbles: small, circular areas of the image at normal resolution, similar to the confined area of focus of the human fovea. We compared the bubble click data with the fixation data from a complementary eye-tracking experiment by calculating the similarity between the resulting heatmaps. A high similarity score suggests that our methodology may be a viable crowdsourced alternative to eye-tracking experiments, especially when little to no eye-tracking data is available. This methodology can also complement eye-tracking studies with an additional behavioral measurement, since it is specifically designed to measure which information people consciously choose to examine when understanding visualizations.
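The heatmap comparison at the core of this evaluation can be sketched as follows; the points, grid size, and box-style blur are made-up stand-ins for the study's real click/fixation data and kernels.

```python
import numpy as np

def heatmap(points, shape=(32, 32)):
    """Accumulate (y, x) points into a grid and apply a crude blur."""
    h = np.zeros(shape)
    for y, x in points:
        h[y, x] += 1.0
    # Crude blur: add half-weighted shifted copies (stand-in for a Gaussian).
    return h + 0.5 * (np.roll(h, 1, 0) + np.roll(h, -1, 0)
                      + np.roll(h, 1, 1) + np.roll(h, -1, 1))

def similarity(a, b):
    """Pearson correlation between two flattened heatmaps."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

clicks = [(5, 5), (5, 6), (20, 20)]       # hypothetical bubble-click centers
fixations = [(5, 5), (6, 6), (20, 21)]    # hypothetical eye fixations
score = similarity(heatmap(clicks), heatmap(fixations))
```

A score near 1 indicates that clicks and fixations concentrate on the same regions.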

The Big Data Challenges of Connectomics
Lichtman JW, Pfister H, Shavit N. The Big Data Challenges of Connectomics. Nature Neuroscience 2014;17:1448–1454.

The structure of the nervous system is extraordinarily complicated because individual neurons are interconnected to hundreds or even thousands of other cells in networks that can extend over large volumes. Mapping such networks at the level of synaptic connections, a field called connectomics, began in the 1970s with the study of the small nervous system of a worm and has recently garnered general interest thanks to technical and computational advances that automate the collection of electron-microscopy data and offer the possibility of mapping even large mammalian brains. However, modern connectomics produces 'big data', unprecedented quantities of digital information at unprecedented rates, and will require, as genomics did before it, breakthrough algorithmic and computational solutions. Here we describe some of the key difficulties that may arise and provide suggestions for managing them.

Device Effect on Panoramic Video+Context Tasks
Pece F, Tompkin J, Pfister H, Kautz J, Theobalt C. Device Effect on Panoramic Video+Context Tasks. In: European Conference on Visual Media Production (CVMP 2014). London, UK; 2014.

Panoramic imagery is viewed daily by thousands of people, and panoramic video imagery is becoming more common. This imagery is viewed on many different devices with different properties, and the effect of these differences on spatio-temporal task performance has yet to be tested. We adapt a novel panoramic video interface and conduct a user study to discover whether display type affects spatio-temporal reasoning task performance across desktop monitor, tablet, and head-mounted displays. We discover that, in our complex reasoning task, HMDs are as effective as desktop displays even though participants felt less capable, but tablets were less effective than desktop displays even though participants felt just as capable. Our results impact virtual tourism, telepresence, and surveillance applications, and so we state the design implications of our results for panoramic imagery systems.

Efficient Learning of Image Super-resolution and Compression Artifact Removal with Semi-local Gaussian Processes
Kwon Y, Kim KI, Tompkin J, Kim JH, Theobalt C. Efficient Learning of Image Super-resolution and Compression Artifact Removal with Semi-local Gaussian Processes. IEEE Transactions on Pattern Analysis and Machine Intelligence 2014.

Improving the quality of degraded images is a key problem in image processing, but the breadth of the problem leads to domain-specific approaches for tasks such as super-resolution and compression artifact removal. Recent approaches have shown that a general approach is possible by learning application-specific models from examples; however, learning models sophisticated enough to generate high-quality images is computationally expensive, and so specific per-application or per-dataset models are impractical. To solve this problem, we present an efficient semi-local approximation scheme to large-scale Gaussian processes. This allows efficient learning of task-specific image enhancements from example images without reducing quality. As such, our algorithm can be easily customized to specific applications and datasets, and we show the efficiency and effectiveness of our approach across five domains: single-image super-resolution for scene, human face, and text images, and artifact removal in JPEG- and JPEG 2000-encoded images.

Interactive Intrinsic Video Editing
Bonneel N, Sunkavalli K, Tompkin J, Sun D, Paris S, Pfister H. Interactive Intrinsic Video Editing. ACM Transactions on Graphics (Proc. SIGGRAPH) 2014;33(6).
Separating a photograph into its reflectance and illumination intrinsic images is a fundamentally ambiguous problem, and state-of-the-art algorithms combine sophisticated reflectance and illumination priors with user annotations to create plausible results. However, these algorithms cannot be easily extended to videos for two reasons: first, naïvely applying algorithms designed for single images to videos produces results that are temporally incoherent; second, effectively specifying user annotations for a video requires interactive feedback, and current approaches are orders of magnitude too slow to support this. We introduce a fast and temporally consistent algorithm to decompose video sequences into their reflectance and illumination components. Our algorithm uses a hybrid formulation that separates image gradients into smooth illumination and sparse reflectance gradients using look-up tables. We use a multi-scale parallelized solver to reconstruct the reflectance and illumination from these gradients while enforcing spatial and temporal reflectance constraints and user annotations. We demonstrate that our algorithm automatically produces reasonable results that users can interactively refine, at rates two orders of magnitude faster than existing tools, yielding high-quality decompositions for challenging real-world video sequences. We also show how these decompositions can be used for a number of video editing applications, including recoloring, retexturing, illumination editing, and lighting-aware compositing.
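A 1-D toy version of the gradient separation conveys the idea: large gradients are treated as sparse reflectance edges, the remainder as smooth illumination, and each layer is recovered by integration. The magnitude threshold stands in for the paper's look-up tables, and the example signal is invented.

```python
import numpy as np

# One row of a log-image: a sharp reflectance edge up, then a sharp edge down,
# riding on a slowly increasing illumination ramp.
signal = np.array([0., 0., 0., 2., 2.1, 2.2, 2.3, 0.3, 0.4, 0.5])
grad = np.diff(signal)

# Classify gradients: large magnitudes are sparse reflectance edges,
# everything else is smooth illumination.
refl_grad = np.where(np.abs(grad) > 0.5, grad, 0.0)
illum_grad = grad - refl_grad

# Rebuild each layer by integrating its gradients (1-D Poisson reconstruction).
reflectance = np.concatenate([[0.0], np.cumsum(refl_grad)])
illumination = np.concatenate([[0.0], np.cumsum(illum_grad)])
```

By construction the two layers sum back to the input signal, which is what makes the decomposition usable for editing one layer independently of the other.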
Characterizing Cancer Subtypes using Dual Analysis in Caleydo
Turkay C, Lex A, Streit M, Pfister H, Hauser H. Characterizing Cancer Subtypes using Dual Analysis in Caleydo. IEEE Computer Graphics and Applications 2014;34(2):38-47.

The comprehensive analysis and characterization of cancer subtypes is an important problem to which significant resources have been devoted in recent years. In this paper we integrate the dual analysis method, which uses statistics to describe both the dimensions and the rows of a high-dimensional dataset, into StratomeX, a Caleydo view tailored to cancer subtype analysis. We introduce significant difference plots for showing the elements of a candidate cancer subtype that differ significantly from other subtypes, thus enabling analysts to characterize cancer subtypes. We also enable analysts to investigate how samples relate to the subtype they are assigned to and to the other groups. Our approach gives analysts the ability to create well-defined candidate subtypes based on statistical properties. We demonstrate the utility of our approach in three case studies, where we show that we are able to reproduce findings from a published cancer subtype characterization.

Guided visual exploration of genomic stratifications in cancer
Streit M, Lex A, Gratzl S, Partl C, Schmalstieg D, Pfister H, Park PJ, Gehlenborg N. Guided visual exploration of genomic stratifications in cancer. Nature Methods 2014;11(9):884-885.

Cancer is a heterogeneous disease, and molecular profiling of tumors from large cohorts has enabled characterization of new tumor subtypes. This is a prerequisite for improving personalized treatment and ultimately achieving better patient outcomes. Potential tumor subtypes can be identified with methods such as unsupervised clustering or network-based stratification, which assign patients to sets based on high-dimensional molecular profiles. Detailed characterization of identified sets and their interpretation, however, remain a time-consuming exploratory process.

To address these challenges, we combined 'StratomeX', a freely available interactive visualization tool, with exploration tools to efficiently compare multiple patient stratifications, to correlate patient sets with clinical information or genomic alterations, and to view the differences between molecular profiles across patient sets. Although we focus on cancer genomics here, StratomeX can also be applied in other disease cohorts.

ConTour: Data-Driven Exploration of Multi-Relational Datasets for Drug Discovery
Partl C, Lex A, Streit M, Strobelt H, Wasserman A-M, Pfister H, Schmalstieg D. ConTour: Data-Driven Exploration of Multi-Relational Datasets for Drug Discovery. IEEE Transactions on Visualization and Computer Graphics (VAST '14) 2014.

Large-scale data analysis is nowadays a crucial part of drug discovery. Biologists and chemists need to quickly explore and evaluate potentially effective yet safe compounds based on many datasets that are related to each other. However, there is a lack of tools that support them in these processes. To remedy this, we developed ConTour, an interactive visual analytics technique that enables the exploration of these complex, multi-relational datasets. At its core ConTour lists all items of each dataset in a column. Relationships between the columns are revealed through interaction: selecting one or multiple items in one column highlights and re-sorts the items in other columns. Filters based on relationships enable drilling down into the large data space. To identify interesting items in the first place, ConTour employs advanced sorting strategies, including strategies based on connectivity strength and uniqueness, as well as sorting based on item attributes. ConTour also introduces interactive nesting of columns, a powerful method to show the related items of a child column for each item in the parent column. Within the columns, ConTour shows rich attribute data about the items as well as information about the connection strengths to other datasets. Finally, ConTour provides a number of detail views, which can show items from multiple datasets and their associated data at the same time. We demonstrate the utility of our system in case studies conducted with a team of chemical biologists, who investigate the effects of chemical compounds on cells and need to understand the underlying mechanisms.

Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets
Gratzl S, Gehlenborg N, Lex A, Pfister H, Streit M. Domino: Extracting, Comparing, and Manipulating Subsets across Multiple Tabular Datasets. IEEE Transactions on Visualization and Computer Graphics (InfoVis '14) 2014.

Answering questions about complex issues often requires analysts to take into account information contained in multiple interconnected datasets. A common strategy in analyzing and visualizing large and heterogeneous data is dividing it into meaningful subsets. Interesting subsets can then be selected and the associated data and the relationships between the subsets visualized. However, neither the extraction and manipulation nor the comparison of subsets is well supported by state-of-the-art techniques.

In this paper we present Domino, a novel multiform visualization technique for effectively representing subsets and the relationships between them. By providing comprehensive tools to arrange, combine, and extract subsets, Domino allows users to create both common visualization techniques and advanced visualizations tailored to specific use cases. In addition to the novel technique, we present an implementation that enables analysts to manage the wide range of options that our approach offers. Innovative interactive features such as placeholders and live previews support rapid creation of complex analysis setups. We introduce the technique and the implementation using a simple example and demonstrate scalability and effectiveness in a use case from the field of cancer genomics.

Mu-8: Visualizing Differences between Proteins and their Families
Mercer JD, Pandian B, Lex A, Bonneel N, Pfister H. Mu-8: Visualizing Differences between Proteins and their Families. BMC Proceedings 2014;8(Suppl 2):S5.

A complete understanding of the relationship between the amino acid sequence and the resulting protein function remains an open problem in the biophysical sciences. Current approaches often rely on diagnosing functionally relevant mutations by determining whether an amino acid frequently occurs at a specific position within the protein family. However, these methods do not account for the biophysical properties and the 3D structure of the protein. We have developed an interactive visualization technique, Mu-8, that provides researchers with a holistic view of the differences of a selected protein with respect to a family of homologous proteins. Mu-8 helps to identify areas of the protein that exhibit: (1) significantly different biochemical characteristics, (2) relative conservation in the family, and (3) proximity to other regions that have suspect behavior in the folded protein.

Our approach quantifies and communicates the difference between a reference protein and its family based on amino acid indices or principal components of amino acid index classes, while accounting for conservation, proximity amongst residues, and overall 3D structure.

We demonstrate Mu-8 in a case study with data provided by the 2013 BioVis contest. When comparing the sequence of a dysfunctional protein to its functional family, Mu-8 reveals several candidate regions that may cause function to break down.

UpSet: Visualization of Intersecting Sets
Lex A, Gehlenborg N, Strobelt H, Vuillemot R, Pfister H. UpSet: Visualization of Intersecting Sets. IEEE Transactions on Visualization and Computer Graphics (InfoVis '14) 2014.

Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and on the duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections or aggregates, or driven by attribute filters, are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains.
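The quantity encoded by each row of UpSet's matrix, the size of an exclusive set intersection (elements belonging to exactly that combination of sets), can be computed directly; the three sets below are made-up example data.

```python
from itertools import combinations

sets = {"A": {1, 2, 3, 4}, "B": {3, 4, 5}, "C": {4, 5, 6}}

def exclusive_intersections(sets):
    """Size of each exclusive intersection: elements in exactly that combination of sets."""
    names = list(sets)
    sizes = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            inside = set.intersection(*(sets[n] for n in combo))
            rest = [sets[n] for n in names if n not in combo]
            outside = set().union(*rest) if rest else set()
            sizes[combo] = len(inside - outside)
    return sizes

sizes = exclusive_intersections(sets)
# e.g. sizes[("A",)] counts elements only in A; sizes[("A", "B", "C")]
# counts elements shared by all three sets.
```

This exhaustive enumeration also shows why the matrix layout matters: the number of rows grows as 2^n in the number of sets, which is exactly the combinatorial explosion the abstract describes.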

Time-Lapse Photometric Stereo and Applications
Shen F, Sunkavalli K, Bonneel N, Rusinkiewicz S, Pfister H, Tong X. Time-Lapse Photometric Stereo and Applications. In: Pacific Graphics. Seoul, Korea: Wiley & Sons Ltd.; 2014.
This paper presents a technique to recover geometry from time-lapse sequences of outdoor scenes. We build upon photometric stereo techniques to recover approximate shadowing, shading, and normal components, allowing us to alter the material and normals of the scene. Previous work in analyzing such images has faced two fundamental difficulties: (1) the illumination in outdoor images consists of time-varying sunlight and skylight, and (2) the motion of the sun is restricted to a near-planar arc through the sky, making surface normal recovery unstable. We develop methods to estimate the reflection component due to skylight illumination. We also show that sunlight directions are usually non-planar, thus making surface normal recovery possible. This allows us to estimate approximate surface normals for outdoor scenes using a single day of data. We demonstrate the use of these surface normals for a number of image editing applications, including reflectance, lighting, and normal editing.
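The classical Lambertian model underlying this kind of surface normal recovery reduces, per pixel, to a small least-squares problem. The sketch below uses synthetic light directions and intensities (not the paper's outdoor sun/sky model) to show why non-coplanar directions make the system well posed.

```python
import numpy as np

# Ground-truth unit normal we want to recover (synthetic example).
true_n = np.array([0.0, 0.6, 0.8])

# Three non-coplanar light directions; with coplanar directions this
# matrix would be rank-deficient and the normal unrecoverable.
L = np.array([[0.0, 0.0, 1.0],
              [0.8, 0.0, 0.6],
              [0.0, 0.8, 0.6]])

# Lambertian shading with unit albedo: observed intensity I = L @ n.
I = L @ true_n

# Least-squares recovery of the (scaled) normal, then normalization.
n, *_ = np.linalg.lstsq(L, I, rcond=None)
normal = n / np.linalg.norm(n)
```

With noisy real data the same system is solved in a least-squares sense over many observations, which is why near-coplanar sun directions make the estimate unstable.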
NeuroLines: A Subway Map Metaphor for Visualizing Nanoscale Neuronal Connectivity
Al-Awami AK, Beyer J, Strobelt H, Kasthuri N, Lichtman JW, Pfister H, Hadwiger M. NeuroLines: A Subway Map Metaphor for Visualizing Nanoscale Neuronal Connectivity. IEEE Transactions on Visualization and Computer Graphics 2014;20(12):2369-2378.

We present NeuroLines, a novel visualization technique designed for scalable detailed analysis of neuronal connectivity at the nanoscale level. The topology of 3D brain tissue data is abstracted into a multi-scale, relative distance-preserving subway map visualization that allows domain scientists to conduct an interactive analysis of neurons and their connectivity. Nanoscale connectomics aims at reverse-engineering the wiring of the brain. Reconstructing and analyzing the detailed connectivity of neurons and neurites (axons, dendrites) will be crucial for understanding the brain and its development and diseases. However, the enormous scale and complexity of nanoscale neuronal connectivity pose big challenges to existing visualization techniques in terms of scalability. NeuroLines offers a scalable visualization framework that can interactively render thousands of neurites, and that supports the detailed analysis of neuronal structures and their connectivity. We describe and analyze the design of NeuroLines based on two real-world use-cases of our collaborators in developmental neuroscience, and investigate its scalability to large-scale neuronal connectivity data.

Design and Evaluation of Interactive Proofreading Tools for Connectomics
Haehn D, Knowles-Barley S, Roberts M, Beyer J, Kasthuri N, Lichtman JW, Pfister H. Design and Evaluation of Interactive Proofreading Tools for Connectomics. IEEE Transactions on Visualization and Computer Graphics 2014;20(12):2466-2475.

Proofreading refers to the manual correction of automatic segmentations of image data. In connectomics, electron microscopy data is acquired at nanometer-scale resolution and results in very large image volumes of brain tissue that require fully automatic segmentation algorithms to identify cell boundaries. However, these algorithms require hundreds of corrections per cubic micron of tissue. Even though this task is time-consuming, it is fairly easy for humans to perform corrections through splitting, merging, and adjusting segments during proofreading. In this paper we present the design and implementation of Mojo, a fully-featured single-user desktop application for proofreading, and Dojo, a multi-user web-based application for collaborative proofreading. We evaluate the accuracy and speed of Mojo, Dojo, and Raveler, a proofreading tool from Janelia Farm, through a quantitative user study. We designed a between-subjects experiment and asked non-experts to proofread neurons in a publicly available connectomics dataset. Our results show a significant improvement of corrections using web-based Dojo even in comparison to fully manual expert segmentation, when given the same amount of time. In addition, all participants using Dojo reported better usability. We discuss our findings and provide an analysis of requirements for designing visual proofreading software.
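The basic proofreading edits these tools expose can be pictured on a tiny label volume. The sketch below, with made-up labels, shows a merge, which corrects a false split by reassigning one segment ID to another; a split is the inverse, giving part of a segment a fresh ID.

```python
import numpy as np

# A tiny 2-D slice of a segmentation label volume (made-up IDs).
labels = np.array([[1, 1, 2],
                   [1, 2, 2],
                   [3, 3, 2]])

def merge(labels, a, b):
    """Merge segment b into segment a, correcting a false split."""
    out = labels.copy()
    out[out == b] = a
    return out

# Correct a false split between segments 1 and 2.
merged = merge(labels, 1, 2)
```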

NeuroLines - A Subway Map Metaphor for Visualizing Nanoscale Neuronal Connectivity (best poster award)
Al-Awami AK, Beyer J, Strobelt H, Kasthuri N, Lichtman JW, Pfister H, Hadwiger M. NeuroLines - A Subway Map Metaphor for Visualizing Nanoscale Neuronal Connectivity. Poster at the 4th Symposium on Biological Data Visualization; 2014.
We introduce NeuroLines, a novel tool designed for visualizing neuronal morphology and connectivity at the nanoscale level. NeuroLines uses a subway map metaphor to abstract the topology of 3D brain tissue data into a multi-scale, relative distance-preserving 2D visualization. This allows domain scientists to conduct an interactive analysis of neurons and their connectivity. Nanoscale connectomics attempts to reverse-engineer the wiring diagram of the brain. This, coupled with analyzing the detailed connectivity of neurites (axons, dendrites), is crucial to understanding the brain, its development, and its pathologies. However, the main challenge with such tasks is the enormous scale, complexity, and visual clutter of nanoscale connectivity, which makes it difficult for existing visualization techniques to render such data in a meaningful way. NeuroLines offers a scalable visualization platform that can interactively render thousands of neurites in an uncluttered fashion, paired with interactive features to support the detailed analysis of neuronal connectivity.
Local Layering for Joint Motion Estimation and Occlusion Detection
Sun D, Liu C, Pfister H. Local Layering for Joint Motion Estimation and Occlusion Detection. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2014.
Most motion estimation algorithms (optical flow, layered models) cannot handle large amounts of occlusion in textureless regions, as motion is often initialized with a no-occlusion assumption even though occlusion may be included in the final objective. To handle such situations, we propose a local layering model in which motion and occlusion relationships are inferred jointly. In particular, the uncertainties of occlusion relationships are retained so that motion is inferred by considering all the possibilities of local occlusion relationships. In addition, the local layering model handles articulated objects with self-occlusion. We demonstrate that the local layering model handles motion and occlusion well for both challenging synthetic and real sequences.