An interactive web-based visualization for browsing enhancer-gene connections genome-wide
The goal of this web application is to provide a scalable visual interface for browsing enhancer-gene connections predicted by the Activity-By-Contact (ABC) model and for exploring how these connections relate to genetic variants.
Source code: https://github.com/flekschas/enhancer-gene-vis
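To make the underlying model concrete, here is a toy sketch of the ABC score: an element's score for a gene is its activity times its contact frequency with the gene's promoter, normalized over all candidate elements for that gene. The element names and activity/contact numbers below are fabricated for illustration; this is not code from the application.

```python
# Toy sketch of the Activity-By-Contact (ABC) score: for one gene, each
# candidate element's score is (activity * contact) divided by the sum of
# (activity * contact) over all candidate elements for that gene.

def abc_scores(elements):
    """elements: list of (name, activity, contact) tuples for one gene."""
    total = sum(activity * contact for _, activity, contact in elements)
    return {name: (activity * contact) / total
            for name, activity, contact in elements}

candidates = [
    ("enhancer_1", 5.0, 0.02),  # hypothetical activity / contact values
    ("enhancer_2", 2.0, 0.08),
    ("promoter",   8.0, 0.01),
]
scores = abc_scores(candidates)  # scores sum to 1.0 per gene
```

By construction the scores for a gene sum to one, so they can be read as each element's relative contribution to regulating that gene.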
Piling.js is a general framework and library for creating visual piling interfaces to explore and compare large collections of small multiples. Piling.js is built around a data-agnostic WebGL-based rendering pipeline and a declarative view specification to avoid having to write low-level code. For more background information see https://piling.lekschas.de.
Source code: https://github.com/flekschas/piling.js
Interactive visual pattern search in sequential data using unsupervised deep representation learning
Peax is a novel feature-based technique for interactive visual pattern search in sequential data. It builds on a convolutional autoencoder that learns unsupervised representations of regions in sequential data. Peax enables interactive, feedback-driven adjustments of the pattern search to adapt to the user's perceived similarity; an active learning strategy focuses the labeling process on the regions most useful for training a classifier. Peax has been developed in collaboration with Novartis Institute for Biomedical Research.
Source code: https://github.com/novartis/peax
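The search loop behind such feature-based techniques can be sketched in a few lines. Peax uses a learned convolutional autoencoder as the encoder; the crude stand-in below (mean-pooled bins) and all signal values are made up purely to illustrate ranking windows by distance in feature space.

```python
# Minimal sketch of feature-based pattern search over sequential data:
# embed every sliding window with an encoder, then rank windows by their
# distance to the query window's embedding.

def encode(window, bins=4):
    """Stand-in for a learned encoder: mean-pool the window into bins."""
    step = len(window) // bins
    return [sum(window[i * step:(i + 1) * step]) / step for i in range(bins)]

def search(signal, query, window=8, stride=4, top_k=3):
    """Return the top_k (distance, start) windows most similar to `query`."""
    q = encode(query)
    hits = []
    for start in range(0, len(signal) - window + 1, stride):
        w = signal[start:start + window]
        dist = sum((a - b) ** 2 for a, b in zip(encode(w), q)) ** 0.5
        hits.append((dist, start))
    return sorted(hits)[:top_k]

signal = [0, 0, 1, 3, 3, 1, 0, 0] * 4  # fabricated repeated peak pattern
query = [0, 0, 1, 3, 3, 1, 0, 0]       # the peak the user is looking for
matches = search(signal, query)        # exact repeats rank first (dist 0.0)
```

In the real system the user's relevance feedback retrains a classifier over these embeddings rather than using a fixed distance.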
Pattern-Driven Navigation in 2D Multiscale Visualizations
Scalable Insets is a new technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visual spaces such as gigapixel images, matrices, or maps. Our technique visualizes annotated patterns too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the patterns.
Source code: https://github.com/flekschas/higlass-scalable-insets
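The core trigger for showing an inset can be illustrated with a toy computation: at a given zoom level, any annotated pattern whose projected on-screen size falls below a pixel threshold is promoted to a magnified thumbnail. The annotation names, sizes, and threshold below are invented for illustration.

```python
# Toy sketch: decide which annotated patterns are too small to identify
# at the current zoom level and should therefore be shown as insets.

def needs_inset(annotations, zoom, min_pixels=16):
    """annotations: list of (name, size_in_data_units) tuples.
    A pattern is promoted to an inset when its projected size on screen
    is smaller than min_pixels."""
    scale = 2 ** zoom  # pixels per data unit at this zoom level
    return [name for name, size in annotations if size * scale < min_pixels]

annotations = [("snp_cluster", 1.0), ("large_domain", 64.0)]
insets_far = needs_inset(annotations, zoom=2)   # zoomed out: tiny patterns
insets_near = needs_inset(annotations, zoom=6)  # zoomed in: all readable
```

As the user zooms in, patterns grow past the threshold and their insets dissolve back into the main view.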
Visualizing Intersecting Sets
Understanding relationships between sets is an important analysis task. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. To address this, we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections.
Source code: https://github.com/vcg/upset
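The combinatorial explosion is easy to see in code: n sets admit 2^n - 1 possible intersections. The sketch below, with made-up example sets, computes the size of each exclusive intersection (items belonging to exactly that combination of sets), which is the quantity UpSet's intersection bars show.

```python
# Toy computation of exclusive set intersections: for every non-empty
# combination of sets, count the items that are in all sets of the
# combination and in none of the others.

from itertools import combinations

def exclusive_intersections(sets):
    names = list(sets)
    result = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            inside = set.intersection(*(sets[n] for n in combo))
            rest = [sets[n] for n in names if n not in combo]
            outside = set.union(*rest) if rest else set()
            result[combo] = len(inside - outside)
    return result

sets = {"A": {1, 2, 3}, "B": {2, 3, 4}, "C": {3, 5}}
table = exclusive_intersections(sets)  # 2^3 - 1 = 7 combinations
```

Because the combinations are exclusive, their sizes partition the union of all sets, which is what makes them safe to aggregate and sort.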
An Immersive Visualization Toolkit
DXR is a Unity package for rapid prototyping of immersive data visualizations in augmented, mixed, and virtual reality (AR, MR, VR), or XR for short.
Source code: https://github.com/ronellsicat/DxR
A large-scale curated infographics dataset
This dataset is provided for research purposes, with annotations that can be used for different computer vision and natural language tasks.
Visual Analysis for Recurrent Neural Networks
LSTMVis is a visual analysis tool for recurrent neural networks with a focus on understanding hidden state dynamics. The tool allows a user to select a hypothesis input range to focus on local state changes, match these state changes to similar patterns in a large dataset, and align the results with structural annotations from their domain.
Source code: https://github.com/HendrikStrobelt/LSTMVis
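The matching step can be sketched as follows: the hidden state dimensions that stay "on" (above a threshold) throughout the user's selected range define a pattern, and the sequence is scanned for other timesteps where those same dimensions are on. The 4-dimensional hidden states below are fabricated for illustration.

```python
# Toy sketch of hidden-state pattern matching: derive the set of active
# dimensions from a selected range, then find all timesteps where that
# set of dimensions is active again.

def on_dims(state, threshold=0.5):
    """Indices of hidden state dimensions above the activation threshold."""
    return frozenset(i for i, v in enumerate(state) if v > threshold)

def match_pattern(states, start, end, threshold=0.5):
    """Dims on throughout [start, end) must be on at a matching timestep."""
    pattern = frozenset.intersection(
        *(on_dims(s, threshold) for s in states[start:end]))
    return [t for t, s in enumerate(states) if pattern <= on_dims(s, threshold)]

states = [
    [0.9, 0.1, 0.8, 0.0],
    [0.8, 0.2, 0.9, 0.1],  # dims {0, 2} are on across the range 0..2
    [0.1, 0.9, 0.0, 0.7],  # different dims on -> no match
    [0.7, 0.0, 0.6, 0.2],  # dims {0, 2} on again -> a match
]
matches = match_pattern(states, 0, 2)  # timesteps where the pattern recurs
```

The matched timesteps can then be aligned against domain annotations (e.g., part-of-speech tags) to test the user's hypothesis.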
Web-based Visual Comparison And Exploration Of Genome Interaction Maps
HiGlass is a tool for exploring genomic contact matrices and tracks. Please take a look at the examples and documentation for a description of the ways that it can be configured to explore and compare contact matrices at varying scales.
Source code: https://github.com/hms-dbmi/higlass
An interactive web application for exploring and visualizing regions-of-interest in large genome interaction matrices
HiPiler is an interactive interface for the exploration and visualization of regions of interest (ROIs) in large genome interaction matrices. ROIs can be defined, e.g., by sets of adjacent rows and columns or by specific visual patterns in the matrix. ROIs are first-class objects in HiPiler, which represents them as thumbnail-like “snippets”. Snippets can be laid out automatically based on their data and meta attributes. They are linked back to the matrix and can be explored interactively.
Source code: https://github.com/flekschas/hipiler
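The snippet idea itself is simple to sketch: cut each ROI out of the large matrix as a small submatrix that can then be laid out, compared, and piled independently of its original position. The matrix values and ROI coordinates below are fabricated for illustration.

```python
# Toy sketch of snippet extraction: given ROI coordinates, slice a small
# thumbnail submatrix out of a large matrix.

def snippet(matrix, roi):
    """roi = (row_start, row_end, col_start, col_end), end-exclusive."""
    r0, r1, c0, c1 = roi
    return [row[c0:c1] for row in matrix[r0:r1]]

n = 6
matrix = [[i * n + j for j in range(n)] for i in range(n)]  # 6x6 demo matrix
rois = [(0, 2, 0, 2), (3, 6, 3, 6)]  # two hypothetical regions of interest
snippets = [snippet(matrix, roi) for roi in rois]
```

Keeping the ROI coordinates alongside each snippet is what allows the interface to link a snippet back to its location in the full matrix.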
Massachusetts (Massive) Visualization Dataset
The MASSVIS Database was constructed to gain deeper insight into the elements of a visualization that affect its memorability, recognition, recall, and comprehension. This is one of the largest real-world visualization databases, scraped from various online publication venues including government reports, infographic blogs, news media websites, and scientific journals.
Visual Analysis of Cellular Screens
ScreenIt enables the navigation and analysis of multivariate data from high-throughput and high-content screening at multiple hierarchy levels and at multiple levels of detail. It integrates the interactive modeling of cell physical states (phenotypes) and of the effects of drugs on cell cultures (hits). In addition, quality control is enabled via the detection of anomalies that indicate low-quality data, and the interface is designed around the typical workflows of screening experts. ScreenIt has been developed in collaboration with Novartis Institute for Biomedical Research.
Source code: https://github.com/kdinkla/Screenit
Visual Analysis of Multi-Attribute Rankings
LineUp is an interactive technique designed to create, visualize, and explore rankings of items based on a set of heterogeneous attributes.
Source code: https://github.com/Caleydo/lineupjs
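At its core, such a ranking is a weighted sum over normalized attribute columns, with weights the user can adjust interactively. The sketch below illustrates this with made-up items, attributes, and weights; it is not LineUp's actual scoring code.

```python
# Toy sketch of multi-attribute ranking: min-max normalize each attribute
# column, score each item by a weighted sum, and sort by score.

def normalize(values):
    """Min-max normalize a column of attribute values to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def rank(items, weights):
    """items: {name: [attribute values]}; weights: one weight per attribute."""
    names = list(items)
    cols = [normalize(col) for col in zip(*(items[n] for n in names))]
    scores = {n: sum(w * cols[a][i] for a, w in enumerate(weights))
              for i, n in enumerate(names)}
    return sorted(names, key=scores.get, reverse=True)

items = {"paper_a": [10, 0.2], "paper_b": [5, 0.9], "paper_c": [8, 0.5]}
ranking = rank(items, weights=[0.7, 0.3])  # re-weighting reorders the list
```

Normalizing first is what makes attributes on different scales (citation counts, ratios, scores) combinable in a single weighted sum.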
Visualization for Molecular Biology
Caleydo is an open source visual analysis framework targeted at biomolecular data. The biggest strength of Caleydo is the visualization of interdependencies between multiple datasets. Caleydo can load tabular data and groupings/clusterings. You can explore relationships between multiple groupings, between different datasets and see how your data maps onto pathways. Caleydo has been successfully used to analyze mRNA, miRNA, methylation, copy number variation, mutation status and clinical data as well as other dataset types.
Source code: https://github.com/Caleydo
Visually Interactive Neural Probabilistic Models of Language
This four-year project will employ a collaborative design process between researchers in visualization and machine learning. We aim to create neural architectures designed from the ground up for visual interactivity, allowing examination and correction. We will do this by designing neural probabilistic models that expose explicit “hooks” in the form of discrete latent variables determining model choices. We will apply these models to core tasks in natural language processing (NLP), including machine translation, summarization, and data-to-text generation, and design hooks that target important domain sub-decisions, such as showing the current topic cluster during text generation or the selected document sub-section during summarization.