Vimo: Visual Analysis of Neuronal Connectivity Motifs
Troidl J, Warchol S, Choi J, Matelsky J, Dhanyasi N, Wang X, Wester B, Wei D, Lichtman JW, Pfister H, and Beyer J.

bioRxiv (Submitted to EuroVis 2022), 2022.

Recent advances in high-resolution connectomics provide researchers with access to accurate reconstructions of vast neuronal circuits and brain networks for the first time. Neuroscientists anticipate analyzing these networks to gain a better understanding of information processing in the brain. In particular, scientists are interested in identifying specific network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to the detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings of the involved neurons and synapses. To reduce visual clutter and simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by continuous visual abstractions that allows the user to transition from the highly detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains, where a motif is used repeatedly to form a longer synaptic chain. We evaluate Vimo in a user study with seven domain experts and an in-depth case study on motifs in the central complex (CX) of the fruit fly brain.
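At its core, the motif querying described above is a subgraph-matching problem: given a small directed motif graph, find all mappings of its nodes onto neurons such that every motif edge corresponds to a synaptic connection. The sketch below is a minimal brute-force illustration of that idea in plain Python, not Vimo's actual query engine (which targets large connectomics datasets and supports additional structural constraints); the function name and graph representation are our own for illustration.

```python
from itertools import permutations

def find_motif_instances(edges, motif_edges, k):
    """Brute-force directed subgraph matching.

    edges:       iterable of (u, v) pairs, the brain network's synaptic edges
    motif_edges: iterable of (i, j) pairs over motif nodes 0..k-1
    k:           number of nodes in the motif

    Returns every ordered k-tuple of distinct network nodes whose induced
    connections contain all motif edges. Exponential in k, so this is only
    viable for tiny motifs on tiny graphs.
    """
    edge_set = set(edges)
    nodes = sorted({n for e in edges for n in e})  # sorted for determinism
    instances = []
    for candidate in permutations(nodes, k):
        # candidate[i] plays the role of motif node i
        if all((candidate[i], candidate[j]) in edge_set for i, j in motif_edges):
            instances.append(candidate)
    return instances

# Example: search for the feed-forward motif 0->1, 0->2, 1->2
network = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
feed_forward = [(0, 1), (0, 2), (1, 2)]
print(find_motif_instances(network, feed_forward, 3))  # [('A', 'B', 'C')]
```

Real motif search engines avoid the factorial blowup with backtracking algorithms such as VF2 and with precomputed connectivity indices, but the input/output contract is the same: a sketched motif in, a list of motif instances out.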


We thank the members of the Visual Computing Group and the Lichtman Lab at Harvard and HHMI Janelia for their insightful feedback. We gratefully acknowledge the support of the Harvard SEAS Fellowship, National Science Foundation (NSF) awards NCS-FO-2124179 and IIS-1901030, and NIH grant R24MH114785.