Below is a list of our main research projects in visual computing for connectomics, visualization, computer graphics, and computer vision.
Connectomics
We are interested in the segmentation, visualization, and analysis of brain scans acquired with electron and optical microscopy. Our group collaborates with the Center for Brain Science on the segmentation, visualization, and analysis of such images. Among the challenges we tackle is the processing of multi-terabyte, multi-spectral image datasets.
The wiring of brains is staggeringly complex. Our own brains have tens of billions of neurons connected through perhaps one hundred trillion synapses. This circuitry is the result of our development and experience; the neural activity that courses through and alters it somehow accounts for our thoughts, our behavior, and our memories. One hundred years from now, this brain circuitry will be known; today, for the first time, we can contemplate mapping it in detail. New forms of laser-scanning light microscopy and semi-automated electron microscopy allow high-resolution imaging of 'connectomes', that is, complete neural wiring diagrams.
Software tools for connectome research are available on our GitHub.
Funding: Sponsored by IARPA, NSF, NIH, and private foundations.
Information, Biomedical and Scientific Visualization
Information Visualization
We investigate the use of visualization for presenting and communicating data, reexamining its role beyond data analysis and exploration. Most visualization systems to date focus on supporting analytical tasks for domain experts, such as exploring data, confirming hypotheses, and discovering insights. However, these systems typically have complex designs that are unintuitive and cumbersome for non-expert users. At the same time, visualizations are increasingly used to communicate data and messages to a general audience, yet visualization tools for communication are still in their infancy and do not address the cognitive aspects of storytelling. To this end, we conduct cognitive studies to understand what makes a visualization effective for communication, develop techniques for creating expressive, engaging, and communication-minded visualizations, and build tools for telling compelling data-driven stories with visualizations.
Biomedical Visualization
We are working to apply novel visualization tools and techniques to leading-edge biomedical research. Our current focus is the 4D Nucleome program, a collaboration of doctors, biologists, and computational scientists working together to understand the principles underlying nuclear organization in space and time, the role nuclear organization plays in gene expression and cellular function, and how changes in nuclear organization affect normal development as well as various diseases. We are developing visualization methods and tools for exploring and analyzing the large datasets generated throughout the program.
Visualization for Connectomics
We are developing scalable systems for interactive visual exploration and analysis of large (petavoxel) volumes resulting from high-throughput electron microscopy data streams. This project focuses on the interactive analysis of biomedical 3D volumes and their corresponding segmentation volumes. We research efficient techniques that combine methods from volume visualization and information visualization to let neuroscientists interactively explore and analyze their data.
Scientific Visualization
We are researching GPU-based volume rendering techniques for large-scale neuroscience and medical data, with an emphasis on large and multimodal volumes. The tremendous resolution and high complexity of these volumes pose major challenges for storage, processing, visualization, and visual analysis at interactive rates. We are currently researching multi-resolution, visualization-driven designs that restrict most computations to a small subset of the data.
Funding: This work was partially supported by NIH Common Fund U01 CA200059, NSF grant OIA-1125087, and King Abdullah University of Science and Technology.
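To make the multi-resolution, visualization-driven idea concrete, here is a toy sketch (illustrative only, not code from our systems; the brick size and level cap are made-up parameters): bricks of the volume far from the camera are fetched at coarser resolution levels, so the renderer's working set stays a small fraction of the full dataset.

```python
# Toy sketch of visualization-driven level-of-detail selection for a bricked
# volume. Distant bricks get coarser mip levels, so most computation and I/O
# touch only a small subset of the data. Parameters are illustrative.
import math

def pick_level(brick_center, camera_pos, brick_size, max_level):
    """Choose a mip level from camera distance: each doubling of distance halves resolution."""
    d = math.dist(brick_center, camera_pos)
    level = int(math.log2(max(d / brick_size, 1.0)))
    return min(level, max_level)

def working_set(brick_centers, camera_pos, brick_size=32.0, max_level=4):
    """Map each potentially visible brick to the resolution level the view requires."""
    return {c: pick_level(c, camera_pos, brick_size, max_level) for c in brick_centers}

cam = (0.0, 0.0, 0.0)
bricks = [(16.0, 0.0, 0.0), (128.0, 0.0, 0.0), (1024.0, 0.0, 0.0)]
levels = working_set(bricks, cam)  # nearby brick at level 0, distant ones coarser
```

A production renderer would additionally cull bricks against the view frustum and cache fetched bricks on the GPU, but the distance-driven level choice above is the core of the data-reduction argument.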
Computer Vision
Multimodal Learning
Our research focuses on advancing multimodal learning, an area of growing importance in both academia and industry. Multimodal learning integrates information from diverse sources, such as images, text, and audio, to build more robust and context-aware AI systems. Learning from uncurated, "wild" data presents significant challenges: this data is often incomplete or noisy, with missing modalities being a common issue, especially in fields like healthcare, where key inputs such as medical imaging or sensor data may be absent or inconsistent.
Another challenge is modality correspondence: different data types may not always align with each other in meaningful ways, complicating integration. There is also the disparate modality gap: certain modalities are fundamentally different in nature, making them difficult to combine effectively. Our group develops adaptive methods that address missing modalities, misaligned data, and disparate modality gaps, enabling robust multimodal learning that remains practical in real-world applications and performs well across diverse tasks.
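As a minimal illustration of the missing-modality problem (a toy sketch, not our published method; the modality names and mean-fusion rule are assumptions for the example), a fusion step can mask out absent inputs instead of zero-filling them, so the fused representation averages only over the modalities actually observed:

```python
# Toy sketch: fusing per-modality feature vectors while tolerating missing
# modalities. Absent inputs are skipped rather than imputed with zeros, so
# the fused vector is a mean over the modalities that are present.
def fuse(features):
    """features: dict modality -> feature vector (list of floats), or None if missing."""
    present = [v for v in features.values() if v is not None]
    if not present:
        raise ValueError("at least one modality must be observed")
    dim = len(present[0])
    # Each dimension averages only the observed modality values.
    return [sum(v[i] for v in present) / len(present) for i in range(dim)]

sample = {"image": [1.0, 3.0], "text": [3.0, 5.0], "audio": None}  # audio missing
fused = fuse(sample)  # -> [2.0, 4.0]
```

Real systems replace the mean with learned, attention-style fusion, but the masking principle is the same: the model's output should degrade gracefully, not break, when a modality is absent.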
3D Rendering, Generation, and Reconstruction
We are interested in reconstructing and synthesizing the visual world with exceptional fidelity. Our work focuses on state-of-the-art rendering algorithms, including volume rendering, point splatting, and physically based rendering, ensuring that the generated visuals are both realistic and computationally efficient. By integrating advanced generative models into the rendering pipeline, we push the boundaries of visual quality while ensuring scalability and generalization across a wide range of complex and diverse scenes.
Further research extends 2D foundation models into the 3D domain, enabling spatial intelligence: we develop methods to generalize these foundation models for accurate understanding and manipulation of 3D environments, advancing 3D rendering while opening new possibilities for spatial reasoning and interaction in virtual worlds. Powered by differentiable and inverse rendering techniques, we focus on accurately representing and reconstructing the detailed geometry and appearance of 3D scenes from 2D images. These models apply to numerous domains, including visual content creation, VR & AR, autonomous systems, and robotics, providing robust solutions to real-world challenges in both immersive and interactive environments.
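The volume rendering and point splatting mentioned above both reduce, at their core, to compositing semi-transparent samples along a viewing ray. A minimal sketch of front-to-back alpha compositing (illustrative only; the sample values are made up, and real renderers work per color channel on the GPU):

```python
# Minimal sketch of front-to-back alpha compositing along one ray.
# Each sample contributes its color weighted by its opacity and by the
# transmittance (light not yet absorbed) of everything in front of it.
def composite(samples):
    """samples: list of (color, alpha) pairs along a ray, ordered front to back."""
    color, trans = 0.0, 1.0      # accumulated color, remaining transmittance
    for c, a in samples:
        color += trans * a * c   # contribution, attenuated by samples in front
        trans *= (1.0 - a)       # this sample absorbs part of the remaining light
        if trans < 1e-4:         # early ray termination once nearly opaque
            break
    return color, 1.0 - trans    # final color and accumulated opacity

# An opaque sample in front hides everything behind it:
c, a = composite([(1.0, 1.0), (0.5, 1.0)])  # -> (1.0, 1.0)
```

The same accumulation rule, made differentiable with respect to per-sample color and opacity, is what inverse rendering optimizes when reconstructing scenes from 2D images.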
Funding: Sponsored by NSF and private foundations.
SportsXR
Our research focuses on developing novel visualization techniques that use augmented and virtual reality for sports analytics. Immersive analytics offers promising opportunities to bring sports data analytics to non-expert users in sports domains through the benefits of large displays, natural interaction, and spatial immersion.
Our team is dedicated to exploring innovative ways to enhance the data experience and generate insights for real-world sports tasks. We collaborate with a diverse range of sports domain users, including athletes, coaches, and sports fans, to address user-driven problems and improve sports performance and user experiences using immersive analytics.
In the world of sports, every movement, every decision, and every play matters. From the way a player positions themselves on the field to the trajectory of a ball, every detail can make the difference between victory and defeat. With SportsXR, we believe the future of sports analytics lies in the ability to visualize and interact with data in real time, in ways that were previously impossible. Through our cutting-edge visualization tools and technologies, we aim to empower athletes, coaches, and fans alike to gain deeper insights into the game and, ultimately, elevate the way we experience and appreciate sports.
SportsBuddy
This initiative brings immersive visualizations to real-world game-viewing experiences. Our collaboration with the Harvard men’s basketball team aims to make advanced analytics and visual insights accessible to all levels of sports, from professional to recreational leagues.