Pattern-Driven Navigation in 2D Multiscale Visualizations
Scalable Insets is a new technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visual spaces such as gigapixel images, matrices, or maps. Our technique visualizes annotated patterns too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the patterns.
Source code: https://github.com/flekschas/higlass-scalable-insets
A large-scale curated infographics dataset
This dataset is provided for research purposes, with annotations that can be used for a variety of computer vision and natural language tasks.
Visual Analysis for Recurrent Neural Networks
LSTMVis is a visual analysis tool for recurrent neural networks with a focus on understanding hidden state dynamics. The tool allows a user to select a hypothesis input range to focus on local state changes, match these state changes to similar patterns in a large dataset, and align the results with structural annotations from their domain.
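To make the matching step concrete, the following is a minimal sketch of how hidden-state trajectories from a selected range could be matched against the rest of a dataset via windowed cosine similarity. The function name, the thresholding scheme, and the data are illustrative assumptions, not LSTMVis's actual implementation or API.

```python
import numpy as np

def match_state_patterns(states, query_range, threshold=0.99):
    """Find windows whose hidden-state trajectory resembles the query range.

    states: (T, H) array of hidden states over T timesteps (hypothetical data).
    query_range: (start, end) indices of the user-selected hypothesis range.
    Returns start indices of windows whose mean per-timestep cosine
    similarity to the query window exceeds `threshold`.
    """
    start, end = query_range
    query = states[start:end]                 # (w, H) query window
    w = end - start
    matches = []
    for t in range(states.shape[0] - w + 1):
        window = states[t:t + w]
        # Cosine similarity per timestep, averaged over the window.
        num = (query * window).sum(axis=1)
        den = np.linalg.norm(query, axis=1) * np.linalg.norm(window, axis=1)
        sim = (num / np.maximum(den, 1e-9)).mean()
        if sim >= threshold and t != start:   # skip the query itself
            matches.append(t)
    return matches
```

A real tool would use an indexed search rather than this linear scan, but the core idea of comparing state trajectories window by window is the same.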
Web-based Visual Comparison And Exploration Of Genome Interaction Maps
HiGlass is a tool for exploring genomic contact matrices and tracks. See the examples and documentation for the ways it can be configured to explore and compare contact matrices at varying scales.
An interactive web application for exploring and visualizing regions-of-interest in large genome interaction matrices.
HiPiler is an interactive interface for the exploration and visualization of regions-of-interest (ROIs) in large genome interaction matrices. ROIs can be defined, e.g., by sets of adjacent rows and columns, or by specific visual patterns in the matrix. ROIs are first-class objects in HiPiler, which represents them as thumbnail-like “snippets”. Snippets can be laid out automatically based on their data and meta attributes; they are linked back to the matrix and can be explored interactively.
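A snippet is essentially a fixed-size thumbnail cut from a much larger matrix. The sketch below shows one way such a thumbnail could be extracted, via mean-pooling an ROI down to a small grid. The function name, ROI encoding, and pooling choice are illustrative assumptions, not HiPiler's actual code.

```python
import numpy as np

def extract_snippet(matrix, roi, size=16):
    """Cut an ROI out of an interaction matrix and shrink it to a
    size x size thumbnail ("snippet") by averaging over index bins.

    roi: (r0, r1, c0, c1) half-open row/column ranges; the ROI is
    assumed to be at least size x size cells.
    """
    r0, r1, c0, c1 = roi
    patch = matrix[r0:r1, c0:c1]
    # Bin edges that partition the patch into a size x size grid.
    rows = np.linspace(0, patch.shape[0], size + 1).astype(int)
    cols = np.linspace(0, patch.shape[1], size + 1).astype(int)
    snippet = np.array([
        [patch[rows[i]:rows[i + 1], cols[j]:cols[j + 1]].mean()
         for j in range(size)]
        for i in range(size)
    ])
    return snippet
```

Because every snippet has the same shape regardless of its source ROI, snippets can be compared, clustered, and laid out side by side, which is what makes them useful as first-class objects.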
Massachusetts (Massive) Visualization Dataset
The MASSVIS Database was constructed to gain deeper insight into the elements of a visualization that affect its memorability, recognition, recall, and comprehension. This is one of the largest real-world visualization databases, scraped from various online publication venues including government reports, infographic blogs, news media websites, and scientific journals.
Visual Analysis of Cellular Screens
ScreenIt enables the navigation and analysis of multivariate data from high-throughput and high-content screening at multiple hierarchy levels and multiple levels of detail. It integrates the interactive modeling of cell physical states (phenotypes) and of the effects of drugs on cell cultures (hits). In addition, it supports quality control by detecting anomalies that indicate low-quality data, and its interface is designed around the typical workflows of screening experts. ScreenIt has been developed in collaboration with Novartis Institute for Biomedical Research.
Visualizing Intersecting Sets
Understanding relationships between sets is an important analysis task. The major challenge in this context is the combinatorial explosion of the number of set intersections: n sets admit 2^n possible intersections, which quickly becomes unmanageable beyond a handful of sets. To address this, we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections.
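The quantities UpSet plots can be illustrated with a few lines of code: for each combination of sets, count the elements that belong to exactly those sets and no others (the "exclusive" intersection sizes shown as UpSet's bars). The data below is a hypothetical toy example; the enumeration of all 2^n - 1 combinations is what makes naive set visualization explode as n grows.

```python
from itertools import combinations

# Three hypothetical example sets (illustrative data only).
sets = {
    "A": {1, 2, 3, 4},
    "B": {3, 4, 5},
    "C": {4, 5, 6, 7},
}

def exclusive_intersections(sets):
    """Size of each exclusive intersection, one per set combination."""
    result = {}
    for r in range(1, len(sets) + 1):
        for names in combinations(sets, r):
            # Elements in every selected set...
            inside = set.intersection(*(sets[n] for n in names))
            # ...minus elements in any non-selected set.
            others = [sets[n] for n in sets if n not in names]
            result[names] = len(inside.difference(*others))
    return result

for names, size in exclusive_intersections(sets).items():
    print(" & ".join(names), size)
```

The exclusive sizes partition the union of all sets, so they sum to |A ∪ B ∪ C|; with n sets the loop visits 2^n - 1 combinations, which is exactly the scalability problem UpSet's matrix layout addresses.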
Visualization for Molecular Biology
Caleydo is an open-source visual analysis framework targeted at biomolecular data. Its biggest strength is the visualization of interdependencies between multiple datasets. Caleydo can load tabular data and groupings/clusterings; you can explore relationships between multiple groupings and between different datasets, and see how your data maps onto pathways. Caleydo has been successfully used to analyze mRNA, miRNA, methylation, copy number variation, mutation status, and clinical data, as well as other dataset types.
Visually Interactive Neural Probabilistic Models of Language
This four-year project will employ a collaborative design process between researchers in visualization and machine learning. We aim to create neural architectures designed from the ground up for visual interactivity, allowing examination and correction. We will do this by designing neural probabilistic models that expose explicit “hooks” in the form of discrete latent variables determining model choices. We will apply these models to core tasks in natural language processing (NLP), including machine translation, summarization, and data-to-text generation, and design hooks that target important domain sub-decisions, such as showing the current topic cluster during text generation or the selected document sub-section during summarization.