Visual Computing

Our research in visual computing lies at the intersection of visualization, computer graphics, and computer vision. It spans a wide range of topics, including biomedical visualization, image and video analysis, 3D fabrication, and data science.

Our Research

Our goal is to combine interactive computer systems with the perceptual and cognitive power of human observers to solve practical problems in science and engineering. We provide visual analysis tools and methods that help scientists and researchers process and understand large, multi-dimensional data sets in domains such as neuroscience, genomics, systems biology, astronomy, and medicine. We also develop data-driven approaches for the acquisition, modeling, visualization, and fabrication of complex objects.

Contact

Michaela Kapp
Administrative Manager of Research

33 Oxford Street
Maxwell Dworkin 143
Cambridge, MA 02138
Email: michaela@seas.harvard.edu
Office Phone: (617) 496-0964

Our Lab

Our group belongs to Harvard's School of Engineering and Applied Sciences and the Center for Brain Science. We are located in the Maxwell Dworkin Building (33 Oxford St.) as well as the Northwest Laboratory (52 Oxford St.) on Harvard's main campus in Cambridge, Massachusetts.

Recent Publications

Guidelines for Effective Usage of Text Highlighting Techniques
Strobelt H, Oelke D, Kwon BC, Schreck T, Pfister H. Guidelines for Effective Usage of Text Highlighting Techniques. IEEE Transactions on Visualization and Computer Graphics 2016;22(1):489-498.

Semi-automatic text analysis involves manual inspection of text. Often, different text annotations (like part-of-speech or named entities) are indicated by using distinctive text highlighting techniques. In typesetting there exist well-known formatting conventions, such as bold typeface, italics, or background coloring, that are useful for highlighting certain parts of a given text. Also, many advanced techniques for visualization and highlighting of text exist; yet, standard typesetting is common, and the effects of standard typesetting on the perception of text are not fully understood. As such, we surveyed and tested the effectiveness of common text highlighting techniques, both individually and in combination, to discover how to maximize pop-out effects while minimizing visual interference between techniques. To validate our findings, we conducted a series of crowd-sourced experiments to determine: i) a ranking of nine commonly-used text highlighting techniques; ii) the degree of visual interference between pairs of text highlighting techniques; iii) the effectiveness of techniques for visual conjunctive search. Our results show that increasing font size works best as a single highlighting technique, and that there are significant visual interferences between some pairs of highlighting techniques. We discuss the pros and cons of different combinations as a design guideline to choose text highlighting techniques for text viewers.

Vials: Visualizing Alternative Splicing of Genes
Strobelt H, Alsallakh B, Botros J, Peterson B, Borowsky M, Pfister H, Lex A. Vials: Visualizing Alternative Splicing of Genes. IEEE Transactions on Visualization and Computer Graphics 2016;22(1):399-408.

Alternative splicing is a process by which the same DNA sequence is used to assemble different proteins, called protein isoforms. Alternative splicing works by selectively omitting some of the coding regions (exons) typically associated with a gene. Detection of alternative splicing is difficult and uses a combination of advanced data acquisition methods and statistical inference. Knowledge about the abundance of isoforms is important for understanding both normal processes and diseases and to eventually improve treatment through targeted therapies. The data, however, is complex and current visualizations for isoforms are neither perceptually efficient nor scalable. To remedy this, we developed Vials, a novel visual analysis tool that enables analysts to explore the various datasets that scientists use to make judgments about isoforms: the abundance of reads associated with the coding regions of the gene, evidence for junctions, i.e., edges connecting the coding regions, and predictions of isoform frequencies. Vials is scalable as it allows for the simultaneous analysis of many samples in multiple groups. Our tool thus enables experts to (a) identify patterns of isoform abundance in groups of samples and (b) evaluate the quality of the data. We demonstrate the value of our tool in case studies using publicly available datasets.

Automatic Neural Reconstruction from Petavoxel of Electron Microscopy Data
Suissa-Peleg A, Haehn D, Knowles-Barley S, Kaynig V, Jones TR, Wilson A, Schalek R, Lichtman JW, Pfister H. Automatic Neural Reconstruction from Petavoxel of Electron Microscopy Data. In: Microscopy and Microanalysis. 2016. p. 536-537.

Connectomics is the study of the dense structure of the neurons in the brain and their synapses, providing new insights into the relation between the brain's structure and its function. Recent advances in electron microscopy enable high-resolution imaging (4 nm per pixel) of neural tissue at a rate of roughly 10 terapixels in a single day, allowing neuroscientists to capture large blocks of neural tissue in a reasonable amount of time. These large amounts of data require novel computer-vision algorithms and scalable software frameworks for processing. We describe RhoANA, our dense Automatic Neural Annotation framework, which we have developed to automatically align, segment, and reconstruct a 1 mm3 volume of brain tissue (~2 petapixels).

Imaging a 1 mm3 Volume of Rat Cortex Using a MultiBeam SEM
Schalek R, Lee D, Kasthuri N, Suissa-Peleg A, Jones TR, Kaynig V, Haehn D, Pfister H, Cox D, Lichtman JW. Imaging a 1 mm3 Volume of Rat Cortex Using a MultiBeam SEM. In: Microscopy and Microanalysis. 2016. p. 582-583.

The rodent brain is organized at length scales spanning centimeters to nanometers, six orders of magnitude [1]. At the centimeter scale, the brain consists of lobes of cortex, the cerebellum, the brainstem, and the spinal cord. At the millimeter scale, neurons are arranged in columns, layers, or other clusters. Recent technological imaging advances allow the generation of neuronal datasets spanning the spatial range from nanometers to hundreds of microns [2,3]. Collecting a 1 mm3 volume dataset of brain tissue at 4 nm x-y resolution using the fastest single-beam SEM would require ~6 years. Moving to the next length and volume scale of neuronal circuits requires several technological advances. The multibeam scanning electron microscope (mSEM) is a transformative imaging technology that enables neuroscientists to tackle millimeter-scale cortical circuit problems. In this work we describe a workflow, from tissue harvest to imaging, that will generate a 2 petabyte dataset (> 300,000,000 images) of rat visual cortex imaged at 4 nm x 4 nm x-y resolution (Nyquist sampling of membranes) and 30 nm section thickness in less than 6 months.
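
A quick back-of-the-envelope calculation makes the quoted data volume concrete. The sketch below is our own illustration, not part of the published workflow: it estimates the number of sections, pixels, and bytes for a 1 mm3 volume at the stated 4 nm x 4 nm x-y sampling and 30 nm section thickness, assuming roughly one byte of raw image data per pixel.

```python
# Rough estimate of the 1 mm^3 dataset size described above.
# Assumes ~1 byte per pixel of raw image data; real storage overhead varies.

mm = 1e-3                      # 1 mm in meters
xy_pixel = 4e-9                # 4 nm x-y sampling
section_thickness = 30e-9      # 30 nm serial sections

pixels_per_side = mm / xy_pixel            # 250,000 pixels along each axis
pixels_per_section = pixels_per_side ** 2  # ~6.25e10 pixels per section
num_sections = mm / section_thickness      # ~33,333 sections

total_pixels = pixels_per_section * num_sections  # ~2.1e15, about 2 petapixels
total_bytes = total_pixels * 1                    # ~2 PB at 1 byte per pixel

print(f"{num_sections:,.0f} sections, {total_pixels:.2e} pixels, "
      f"{total_bytes / 1e15:.1f} PB")
```

The result, roughly 2 petapixels and about 2 PB of raw data, is consistent with the ~2 petapixels cited for the RhoANA reconstruction above and the 2 petabyte dataset described here.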

Screenit: Visual Analysis of Cellular Screens
Dinkla K, Strobelt H, Genest B, Reiling S, Borowsky M, Pfister H. Screenit: Visual Analysis of Cellular Screens. IEEE Transactions on Visualization and Computer Graphics 2016;PP(99):1-1.

High-throughput and high-content screening enables large scale, cost-effective experiments in which cell cultures are exposed to a wide spectrum of drugs. The resulting multivariate data sets have a large but shallow hierarchical structure. The deepest level of this structure describes cells in terms of numeric features that are derived from image data. The subsequent level describes enveloping cell cultures in terms of imposed experiment conditions (exposure to drugs). We present Screenit, a visual analysis approach designed in close collaboration with screening experts. Screenit enables the navigation and analysis of multivariate data at multiple hierarchy levels and at multiple levels of detail. Screenit integrates the interactive modeling of cell physical states (phenotypes) and the effects of drugs on cell cultures. In addition, quality control is enabled via the detection of anomalies that indicate low-quality data, while providing an interface that is designed to match workflows of screening experts. We demonstrate analyses for a real-world data set, CellMorph, with 6 million cells across 20,000 cell cultures.
