Enhancer-Gene Visualization
An interactive web-based visualization for browsing enhancer-gene connections genome-wide
The goal of this web application is to provide a scalable visual interface for browsing enhancer-gene connections predicted by the Activity-By-Contact (ABC) model and for examining how they correlate with genetic variants.
Website: https://flekschas.github.io/enhancer-gene-vis/
Source code: https://github.com/flekschas/enhancer-gene-vis
Piling.js
A JavaScript Library for Building Visual Piling Interfaces
Piling.js is a general framework and library for creating visual piling interfaces to explore and compare large collections of small multiples. Piling.js is built around a data-agnostic WebGL-based rendering pipeline and a declarative view specification to avoid having to write low-level code. For more background information see https://piling.lekschas.de.
Website: https://piling.js.org
Source code: https://github.com/flekschas/piling.js
PyTorch Connectomics
A semantic and instance segmentation toolbox for EM connectomics.
The field of connectomics aims to reconstruct the wiring diagram of the brain by mapping the neural connections at the level of individual synapses. Recent advances in electron microscopy (EM) have enabled the collection of a large number of image stacks at nanometer resolution, but annotation requires expertise and is highly time-consuming. Here we provide a deep learning framework powered by PyTorch for automatic and semi-automatic image segmentation in connectomics; a minimal training sketch is shown below.
Website: https://connectomics.readthedocs.io/en/latest/
Source code: https://github.com/zudi-lin/pytorch_connectomics
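As a rough illustration of the kind of voxel-wise segmentation training the framework automates, the sketch below runs one training step of a toy 3D convolutional network on a random EM-like volume. It uses plain PyTorch rather than the actual PyTorch Connectomics API; the model, volume size, and loss are illustrative assumptions only.

import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """Toy stand-in for the 3D U-Net-style models used in connectomics."""
    def __init__(self, in_channels=1, out_channels=1):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv3d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Conv3d(16, out_channels, 1)

    def forward(self, x):
        return self.decode(self.encode(x))

model = TinyUNet3D()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()  # voxel-wise foreground/background loss

# One training step on a random volume shaped (batch, channel, z, y, x).
volume = torch.rand(1, 1, 8, 64, 64)
target = (torch.rand(1, 1, 8, 64, 64) > 0.5).float()
optimizer.zero_grad()
loss = criterion(model(volume), target)
loss.backward()
optimizer.step()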
MitoEM Dataset
MitoEM Dataset: Large-scale 3D Mitochondria Instance Segmentation
Electron microscopy (EM) allows identifying intracellular organelles such as mitochondria, providing insights for clinical and scientific studies. We introduce the MitoEM dataset, a 3D mitochondria instance segmentation dataset with two (30 µm)^3 volumes from human and rat cortices, respectively, 3,600× larger than previous benchmarks. With around 40K instances, the dataset exhibits a great diversity of mitochondria in shape and density. On MitoEM, we find that existing instance segmentation methods often fail to correctly segment mitochondria with complex shapes or close contacts with other instances.
Website: https://mitoem.grand-challenge.org/
Source code: https://connectomics.readthedocs.io/en/latest/tutorials/mito.html
AxonEM Dataset
AxonEM Dataset: 3D Axon Instance Segmentation of Brain Cortical Regions
Electron microscopy (EM) enables the reconstruction of neural circuits at the level of individual synapses, which has been transformative for scientific discoveries. We introduce the AxonEM dataset, which consists of two 30×30×30 µm^3 EM image volumes from the human and mouse cortex, respectively. We thoroughly proofread over 18,000 axon instances to provide dense 3D axon instance segmentation, enabling large-scale evaluation of axon reconstruction methods. In addition, we densely annotate nine ground truth subvolumes for training in each data volume.
Website: https://connectomics-bazaar.github.io/proj/AxonEM/index.html
Source code: https://github.com/donglaiw/AxonEM-challenge
NucMM Dataset
NucMM Dataset: Neuronal Nuclei Segmentation at Sub-Cubic Millimeter Scale
Segmenting 3D cell nuclei from microscopy image volumes is critical for biological and clinical analysis, enabling the study of cellular expression patterns and cell lineages. We pushed the task forward to the sub-cubic millimeter scale and curated the NucMM dataset with two fully annotated volumes: one 0.1 mm^3 electron microscopy (EM) volume containing nearly the entire zebrafish brain with around 170,000 nuclei; and one 0.25 mm^3 micro-CT (uCT) volume containing part of a mouse visual cortex with about 7,000 nuclei.
Website: https://nucmm.grand-challenge.org/
Source code: https://github.com/zudi-lin/pytorch_connectomics/tree/master/configs/NucMM
Peax
Interactive visual pattern search in sequential data using unsupervised deep representation learning
Peax is a novel feature-based technique for interactive visual pattern search in sequential data, built on a convolutional autoencoder for unsupervised representation learning of regions in the data. Peax enables interactive, feedback-driven adjustments of the pattern search to adapt to the users' perceived similarity; an active learning strategy focuses the labeling process on the regions most useful for training a classifier (a minimal sketch of such an autoencoder is shown below). Peax has been developed in collaboration with the Novartis Institutes for BioMedical Research.
Website: http://peax.lekschas.de
Source code: https://github.com/novartis/peax
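The sketch below outlines a 1D convolutional autoencoder over fixed-size windows of a sequential signal, the kind of representation learner Peax is built on. It is written in plain PyTorch and is not Peax's actual implementation; the window size, layer sizes, and latent dimension are illustrative assumptions.

import torch
import torch.nn as nn

WINDOW = 120  # number of bins per region (assumed)

class ConvAutoencoder1D(nn.Module):
    def __init__(self, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv1d(8, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * (WINDOW // 4), latent_dim),  # low-dimensional embedding
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 16 * (WINDOW // 4)), nn.ReLU(),
            nn.Unflatten(1, (16, WINDOW // 4)),
            nn.ConvTranspose1d(16, 8, 9, stride=2, padding=4, output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, 9, stride=2, padding=4, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)  # embedding used for similarity search
        return self.decoder(z), z

model = ConvAutoencoder1D()
signal = torch.rand(32, 1, WINDOW)  # 32 windows of a single track
recon, embedding = model(signal)
loss = nn.MSELoss()(recon, signal)  # reconstruction objective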
Scalable Insets
Pattern-Driven Navigation in 2D Multiscale Visualizations
Scalable Insets is a new technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visual spaces such as gigapixel images, matrices, or maps. Our technique visualizes annotated patterns too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the patterns.
Website: http://scalable-insets.lekschas.de/
Source code: https://github.com/flekschas/higlass-scalable-insets
UpSet
Visualizing Intersecting Sets
Understanding relationships between sets is an important analysis task. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. To address this, we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections.
Website: https://upset.app/
Source code: https://github.com/vcg/upset
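The sketch below computes the exclusive intersection sizes that an UpSet plot visualizes, i.e., for every combination of sets, the number of elements that belong to exactly that combination. The example sets are made up for illustration.

from itertools import combinations

sets = {
    "A": {1, 2, 3, 4, 5},
    "B": {3, 4, 5, 6},
    "C": {5, 6, 7},
}

names = list(sets)
for r in range(1, len(names) + 1):
    for combo in combinations(names, r):
        # Elements shared by every set in the combination...
        inside = set.intersection(*(sets[n] for n in combo))
        # ...minus elements that also appear in any set outside it.
        others = [sets[n] for n in names if n not in combo]
        exclusive = inside - set().union(*others)
        if exclusive:
            print(" & ".join(combo), len(exclusive))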
DXR
An Immersive Visualization Toolkit
DXR is a Unity package for rapid prototyping of immersive data visualizations in augmented, mixed, and virtual reality (AR, MR, VR) or XR for short.
Website: https://sites.google.com/view/dxr-vis
Source code: https://github.com/ronellsicat/DxR
Visually29K
A large-scale curated infographics dataset
Visually29K is a large-scale, curated dataset of infographics provided for research purposes, with annotations that support a range of computer vision and natural language tasks.
Website: http://visuallydata.scripts.mit.edu/
LSTMVis
Visual Analysis for Recurrent Neural Networks
LSTMVis is a visual analysis tool for recurrent neural networks with a focus on understanding hidden state dynamics. The tool allows a user to select a hypothesis input range to focus on local state changes, to match these state changes to similar patterns in a large dataset, and to align these results with structural annotations from their domain; a minimal sketch of extracting and matching hidden states is shown below.
Website: http://lstm.seas.harvard.edu
Source code: https://github.com/HendrikStrobelt/LSTMVis
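The sketch below illustrates the kind of data LSTMVis operates on: per-timestep LSTM hidden states, with every position scored by its similarity to a user-selected range. It uses plain PyTorch rather than LSTMVis's actual pipeline, and all sizes are illustrative assumptions.

import torch
import torch.nn as nn

vocab_size, embed_dim, hidden_dim = 100, 16, 32
embed = nn.Embedding(vocab_size, embed_dim)
lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

tokens = torch.randint(0, vocab_size, (1, 50))  # one sequence of 50 tokens
states, _ = lstm(embed(tokens))                 # hidden states, shape (1, 50, hidden_dim)

query = states[0, 10:15].mean(dim=0)            # "hypothesis" range covering positions 10-14
similarity = torch.cosine_similarity(states[0], query.unsqueeze(0), dim=1)
print(similarity.topk(5).indices)               # positions with the most similar state dynamics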
HiGlass
Web-based Visual Comparison And Exploration Of Genome Interaction Maps
HiGlass is a tool for exploring genomic contact matrices and tracks. Please take a look at the examples and documentation for a description of the ways that it can be configured to explore and compare contact matrices at varying scales.
Website: http://higlass.io
Source code: https://github.com/hms-dbmi/higlass
HiPiler
An interactive web application for exploring and visualizing regions-of-interest in large genome interaction matrices.
HiPiler is an interactive visualization interface for the exploration and visualization of regions-of-interest (ROI) in large genome interaction matrices. ROIs can be defined, e.g., by sets of adjacent rows and columns, or by specific visual patterns in the matrix. ROIs are first-class objects in HiPiler, which represents them as thumbnail-like “snippets”. Snippets can be laid out automatically based on their data and meta attributes. They are linked back to the matrix and can be explored interactively.
Website: http://hipiler.lekschas.de
Source code: https://github.com/flekschas/hipiler
ICON
An Interactive Approach to Train Deep Neural Networks for Segmentation of Neuronal Structures
ICON is a web-based interactive tool for training deep neural networks for segmentation tasks. It consists of a server backend that runs on a high-performance GPU compute node and a front-end user interface that runs in a web browser. Users configure a classifier, the classes of objects to be detected, and the set of images to use for training and validation.
Source code: https://github.com/Rhoana/icon
Compresso
Efficient Compression of Segmentation Data For Connectomics
Compresso is a new compression scheme for label data that outperforms existing approaches by using a sliding window to exploit redundancy across border regions in 2D and 3D.
Source code: https://github.com/VCG/compresso
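The toy sketch below illustrates the underlying idea: segmentation labels are highly redundant, so the boundary map between labels is derived and encoded in small sliding windows packed into integers, most of which are zero and therefore compress very well in later stages. This is a simplified illustration of windowed boundary encoding only, not the actual Compresso scheme.

import numpy as np

labels = np.array([
    [1, 1, 1, 2, 2],
    [1, 1, 2, 2, 2],
    [3, 3, 3, 2, 2],
], dtype=np.uint64)

# Mark voxels whose label differs from their right or lower neighbor (boundary voxels).
boundary = np.zeros(labels.shape, dtype=np.uint8)
boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]

# Pack each 1x4 window of boundary bits into a single integer; all-zero windows
# dominate, which downstream entropy coding exploits.
window = 4
padded = np.pad(boundary, ((0, 0), (0, -boundary.shape[1] % window)))
codes = padded.reshape(padded.shape[0], -1, window) @ (1 << np.arange(window))
print(codes)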
RhoANA
Dense Automatic Neural Annotation
RhoANA is software for dense automatic annotation of neurons in EM serial sections. It includes a processing pipeline, as well as Mojo, a proofreading and annotation tool.
Source code: https://github.com/Rhoana
MASSVIS
Massachusetts (Massive) Visualization Dataset
The MASSVIS Database was constructed to gain deeper insight into the elements of a visualization that affect its memorability, recognition, recall, and comprehension. This is one of the largest real-world visualization databases, scraped from various online publication venues including government reports, infographic blogs, news media websites, and scientific journals.
Website: http://massvis.mit.edu
ScreenIt
Visual Analysis of Cellular Screens
ScreenIt enables the navigation and analysis of multivariate data from high-throughput and high-content screening at multiple hierarchy levels and at multiple levels of detail. It integrates the interactive modeling of cell physical states (phenotypes) and the effects of drugs on cell cultures (hits). In addition, quality control is enabled via the detection of anomalies that indicate low-quality data, while providing an interface designed around the typical workflows of screening experts. ScreenIt has been developed in collaboration with the Novartis Institutes for BioMedical Research.
Website: https://vcglab.org/screenit/
Source code: https://github.com/kdinkla/Screenit
Vials
Visualizing Alternative Splicing in Genes
Vials is a novel visual analysis tool that enables analysts to explore the various datasets that scientists use to make judgments about isoforms: the abundance of reads associated with the coding regions of the gene, evidence for junctions, i.e., edges connecting the coding regions, and predictions of isoform frequencies. Vials is scalable as it allows for the simultaneous analysis of many samples in multiple groups.
Website: http://vials.io/info
Source code: https://github.com/Caleydo/vials
LineUp
Multi-Attribute Rankings
LineUp is an interactive technique designed to create, visualize and explore rankings of items based on a set of heterogeneous attributes.
Website: http://www.caleydo.org/tools/lineup
Source code: https://github.com/Caleydo/lineupjs
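The sketch below shows the scoring model behind a LineUp-style ranking: each attribute is normalized to [0, 1], attributes where smaller is better are inverted, and the normalized values are combined with user-chosen weights into a single sortable score. The items, attributes, and weights are made up for illustration.

items = {
    "A": {"accuracy": 0.92, "speed_ms": 120, "cost": 3.0},
    "B": {"accuracy": 0.88, "speed_ms": 40, "cost": 1.5},
    "C": {"accuracy": 0.95, "speed_ms": 300, "cost": 4.0},
}
weights = {"accuracy": 0.5, "speed_ms": 0.3, "cost": 0.2}
invert = {"speed_ms", "cost"}  # attributes where smaller values are better

def normalize(attr, value):
    values = [item[attr] for item in items.values()]
    lo, hi = min(values), max(values)
    score = (value - lo) / (hi - lo) if hi > lo else 1.0
    return 1.0 - score if attr in invert else score

def total_score(name):
    return sum(w * normalize(a, items[name][a]) for a, w in weights.items())

ranking = sorted(items, key=total_score, reverse=True)
print([(name, round(total_score(name), 3)) for name in ranking])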
Caleydo
Visualization for Molecular Biology
Caleydo is an open source visual analysis framework targeted at biomolecular data. The biggest strength of Caleydo is the visualization of interdependencies between multiple datasets. Caleydo can load tabular data and groupings/clusterings. You can explore relationships between multiple groupings, between different datasets and see how your data maps onto pathways. Caleydo has been successfully used to analyze mRNA, miRNA, methylation, copy number variation, mutation status and clinical data as well as other dataset types.
Website: http://www.caleydo.org
Source code: https://github.com/Caleydo
Face Detail Database
The MERL / Princeton Face Detail Database is a set of statistics describing the high-frequency facial geometry of over a hundred scanned subjects. These statistics describe facial geometry as a spatially-varying Heeger-Bergen texture.
Website: https://vcglab.org/facedetail/
Skin Reflectance Database
The MERL / ETH Skin Reflectance Database is a set of statistics for parameters of the Torrance-Sparrow and Blinn-Phong analytic BRDF models and face albedo. We derived these statistics from measuring skin reflectance of 149 subjects of varying age, gender, and race. We are making the collected statistics publicly available to the research community for applications in face synthesis and analysis.
Website: https://vcglab.org/facescanning/
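As a rough illustration of the analytic models whose parameters the database characterizes, the sketch below evaluates a Lambertian diffuse term plus a Blinn-Phong specular lobe for a single light and view direction. The parameter values are made up for illustration, not taken from the database.

import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def blinn_phong(normal, light_dir, view_dir, albedo, specular, shininess):
    n, l, v = unit(normal), unit(light_dir), unit(view_dir)
    h = unit(l + v)  # half vector between light and view directions
    diffuse = albedo * max(np.dot(n, l), 0.0)
    spec = specular * max(np.dot(n, h), 0.0) ** shininess
    return diffuse + spec

rgb = blinn_phong(
    normal=np.array([0.0, 0.0, 1.0]),
    light_dir=np.array([0.3, 0.2, 1.0]),
    view_dir=np.array([0.0, 0.0, 1.0]),
    albedo=np.array([0.85, 0.62, 0.52]),  # made-up skin-like diffuse color
    specular=0.05,
    shininess=40.0,
)
print(rgb)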
NPML
Visually Interactive Neural Probabilistic Models of Language
This four-year project will employ a collaborative design process between researchers in visualization and machine learning. We aim to create neural architectures designed from the ground up for visual interactivity, allowing examination and correction. We will do this by designing neural probabilistic models that expose explicit “hooks” in the form of discrete latent variables determining model choices. We will apply these models to core tasks in natural language processing (NLP), including machine translation, summarization, and data-to-text generation, and design hooks that target important domain sub-decisions, such as showing the current topic cluster during text generation or the selected document sub-section during summarization.
Website: https://npml.github.io/
Dojo
Distributed Proofreading of Automatic Segmentations
Dojo is a web-based software for proofreading and annotating automatic segmentations of neurons in EM serial sections. It supports collaborative editing of labeled image data in 2D and 3D.
Source code: https://github.com/Rhoana/dojo