We are interested in the segmentation, visualization, and analysis of multi-terabyte brain scans acquired with electron and optical microscopy.
The wiring of brains is staggeringly complex. Our own brains have tens of billions of neurons connected through perhaps one hundred trillion synapses. This circuitry is the result of our development and experience; the neural activity that courses through and alters it somehow accounts for our thoughts, our behavior, and our memories. To echo Wheeler's synopsis of Einstein's theory of gravity: neural circuits tell activity how to propagate, and neural activity tells circuits how to change. One hundred years from now, this brain circuitry will be known; today, for the first time, we can contemplate mapping it in detail. New forms of laser-scanning light microscopy and semi-automated electron microscopy allow high-resolution imaging of “connectomes”—that is, complete neural wiring diagrams.
Our group collaborates with the Center for Brain Science on the segmentation, visualization, and analysis of such images. Among the challenges we address is the processing of multi-terabyte, multi-spectral image datasets.
Software tools for connectome research, including RhoANA (our segmentation pipeline) and Mojo (our connectome annotation tool), are available at http://www.rhoana.org/
Funding: Sponsored by NSF, NIH, and private foundations.
Information, Biomedical and Scientific Visualization
We investigate the use of data visualizations for presentation and communication, reexamining the role of visualization beyond data analysis and exploration. Most visualization systems to date focus on supporting analytical tasks for domain experts, such as exploring data, confirming hypotheses, and discovering insights. However, these systems typically have complex designs that are unintuitive and cumbersome for non-expert users. At the same time, visualizations are increasingly used to communicate data and messages to a general audience, yet visualization tools for communication are still in their infancy and do not address the cognitive aspects of storytelling. To this end, we conduct cognitive studies to understand what makes a visualization effective for communication, develop techniques for creating expressive, engaging, and communication-minded visualizations, and build tools for telling compelling data-driven stories with visualizations.
We are working to apply novel visualization tools and techniques to leading-edge biomedical research. Our current focus is the 4D Nucleome program, a collaboration of doctors, biologists, and computational scientists working to understand the principles underlying nuclear organization in space and time, the role nuclear organization plays in gene expression and cellular function, and how changes in nuclear organization affect normal development as well as various diseases. We are developing visualization methods and tools for exploring and analyzing the large datasets generated throughout the program.
Visualization for Connectomics
We are developing scalable systems for interactive visual exploration and analysis of large (petavoxel) volumes resulting from high-throughput electron microscopy data streams. This project focuses on the interactive analysis of biomedical 3D volumes and their corresponding segmentation volumes. We research efficient techniques to combine methods from volume visualization with information visualization to allow neuroscientists to interactively explore and analyze their data.
We are researching GPU-based volume rendering techniques for large-scale neuroscience and medical data, with emphasis on the visualization of large and multimodal volumes. The tremendous resolution and high complexity of these volumes pose major challenges for storage, processing, visualization, and visual analysis at interactive rates. We are currently researching multi-resolution and visualization-driven designs that restrict most computations to a small subset of the data.
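To make the visualization-driven idea concrete, the sketch below (hypothetical function names, not our actual renderer) culls volume bricks against the view frustum and, for each survivor, picks the coarsest mip level whose voxels still project to at least one screen pixel, so that only a small subset of the full-resolution data is ever fetched:

```python
import math

def select_mip_level(voxel_size, distance, screen_height_px,
                     fov_y_rad, max_level):
    """Pick the coarsest mip level whose voxels still cover >= 1 pixel.

    All parameters are illustrative; a real renderer would derive them
    from the camera and the volume's metadata.
    """
    # Height in world units covered by one screen pixel at this distance.
    pixel_world = 2.0 * distance * math.tan(fov_y_rad / 2.0) / screen_height_px
    # Each mip level doubles the voxel size; climb levels while the
    # coarser voxel still fits inside one pixel's footprint.
    level = 0
    while level < max_level and voxel_size * 2 ** (level + 1) <= pixel_world:
        level += 1
    return level

def visible_bricks(bricks, frustum_test):
    """Cull bricks outside the view frustum; only survivors are loaded."""
    return [b for b in bricks if frustum_test(b)]
```

Because far-away or off-screen bricks are either skipped or sampled at a coarse level, the working set stays small even for petavoxel volumes.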
Funding: This work was partially supported by NIH Common Fund U01 CA200059, NSF grant OIA-1125087, and King Abdullah University of Science and Technology.
Modern 3D printer technology is becoming accessible to consumers. These printers are capable of printing with multiple materials of varying softness and color. However, taking advantage of them requires not only tools to accurately create the geometry of printed objects but also methods to design their appearance and interaction behavior. One of our main focuses is to develop tools and techniques to intuitively and accurately design “fabricatable” interaction behavior and appearance, thereby bridging the gap between the virtual and the real. This involves building acquisition systems to identify the material parameters of base materials, developing novel models for intuitively designing objects, and accurately reproducing the designed behavior using the measured base materials.
Funding: This work has been partially supported by NSF.
We are interested in computationally restoring corrupted images to be visually more pleasing, as well as in feature-based exploration of large-scale images.
More and more photos are taken with hand-held cameras, especially smartphones. Degradation is often inevitable in the capture process and greatly affects the visual experience. Because most captured moments are ephemeral, they are effectively lost when severe degradation occurs. We develop algorithms and tools to reduce degradations such as image blur and noise. Our algorithms handle images of many categories, including natural, text, face, and low-illumination images.
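As a point of reference for non-blind deblurring (a standard textbook baseline, not our published method; the function name is illustrative), classic Wiener deconvolution inverts a known blur kernel in the frequency domain while damping frequencies where noise dominates:

```python
import numpy as np

def wiener_deblur(blurred, kernel, noise_to_signal=1e-2):
    """Restore an image blurred by a known kernel (circular convolution).

    noise_to_signal trades sharpness against noise amplification: larger
    values suppress frequencies where the blur destroyed information.
    """
    H = np.fft.fft2(kernel, s=blurred.shape)   # blur-kernel spectrum
    G = np.fft.fft2(blurred)                   # degraded-image spectrum
    # Wiener filter: conj(H) / (|H|^2 + NSR) is a regularized inverse of H.
    F = np.conj(H) / (np.abs(H) ** 2 + noise_to_signal) * G
    return np.real(np.fft.ifft2(F))
```

Real-world blur kernels are unknown and spatially varying, which is part of what makes practical deblurring hard; this sketch only covers the idealized uniform, known-kernel case.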
Enabled by technologies such as robot-assisted gigapixel photography, the resolution of acquired images is increasing while the resolution of display devices lags behind. Part of our work is in bridging this gap by providing users with tools that 1) compute fast yet accurate image operations at any coarse target resolution, and 2) guide users in feature-based exploration of high-resolution images.
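As a minimal example of a fast, exact coarse-resolution operation (illustrative code, not our tool's API), block averaging computes the mean over non-overlapping k-by-k tiles in one vectorized pass, the basic building block of mipmap-style previews of gigapixel images:

```python
import numpy as np

def box_downsample(img, factor):
    """Mean over non-overlapping factor x factor blocks.

    Assumes a 2D array; edges not divisible by `factor` are cropped.
    """
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    blocks = img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return blocks.mean(axis=(1, 3))
```

The reshape exposes each tile as its own pair of axes, so a single `mean` call reduces every tile at once without Python-level loops.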
Digital video editing and computer graphics have allowed unbridled creativity for professional filmmakers. However, the software tools and techniques used to accomplish these tasks require expert knowledge and are often slow. We aim to improve existing video processing tools, which currently slow down experienced users and lock new users out of the creative process.
One particularly difficult task in advanced video processing is seamlessly changing the appearance of objects or the lighting in scenes. Typically this requires painstaking manual painting, which is laborious, or significant compute resources, taking many hours for just a few seconds of footage. We develop new techniques that provide fast, interactive tools for video editing. Our approach greatly accelerates the creative process, opening the door to more sophisticated editing of appearance and illumination in video content.
Funding: Sponsored by NSF and private foundations.