2024 IEEE International Conference on Bioinformatics and Biomedicine (BIBM)

Abstract

With an explosion in the uptake of volume electron microscopy (vEM) across neuroscience and fast-paced advances in imaging protocols, it is timely to introduce general-purpose automation for newly generated large-scale vEM datasets. Recent vision foundation models (e.g., SAM) set a new benchmark for the generalization of 2D segmentation. However, SAM has difficulty handling neurons that are densely packed into 3D volumes. To overcome this obstacle, we consider solutions from both the data and the model side. On the data side, we introduce a data engine to optimize manual labeling, comprising (i) human-in-the-loop data cleansing and (ii) model-in-the-loop data unification. On the model side, we present SAvEM3 with three strategies: (i) auxiliary learning, which predicts complementary representations for SAM masks and improves performance on dense instances; (ii) full-stage distillation, which integrates ViT embeddings into a 3D U-Net, achieving 2D-to-3D lifting and model slimming at the same time; and (iii) prompt-based graph partitioning, which reuses SAM prompts to assign node and edge weights in the oversegmentation graph. In evaluations on dense and large-scale sparse neurons, the out-of-distribution performance of our pretrained-distilled models is on par with state-of-the-art supervised and semi-supervised methods. The overall pipeline offers a possible general-purpose solution for 3D neuron reconstruction on any new vEM data. Our code is available at https://github.com/JackieZhai/SAvEM3.
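To make the prompt-based graph partitioning idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): supervoxels of an oversegmentation are graph nodes, prompt-derived confidences (the `node_weight` and `edge_weight` names are assumptions for illustration) weight nodes and edges, and a greedy union-find agglomeration merges adjacent supervoxels whose edge affinity and endpoint confidences both clear a threshold.

```python
from collections import defaultdict

def partition(nodes, edges, node_weight, edge_weight, tau=0.5):
    """Greedily merge supervoxels in an oversegmentation graph.

    An edge (u, v) is contracted only if its affinity and both
    endpoints' prompt-derived weights reach the threshold tau.
    Returns the resulting groups of node ids (one list per segment).
    """
    parent = {n: n for n in nodes}

    def find(x):
        # Union-find with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Contract the strongest edges first.
    for (u, v) in sorted(edges, key=lambda e: -edge_weight[e]):
        if edge_weight[(u, v)] >= tau and min(node_weight[u], node_weight[v]) >= tau:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv

    groups = defaultdict(list)
    for n in nodes:
        groups[find(n)].append(n)
    return list(groups.values())
```

For example, with four supervoxels in a chain, a low prompt confidence on the last node keeps it separate even when its boundary affinity is high, which is the gating behavior the prompt weights are meant to provide.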
