Self-Supervised Voxel-Level Representation Rediscovers Subcellular Structures in Volume Electron Microscopy
Han H., Dmitrieva M., Sauer A., Tam KH., Rittscher J.
Making sense of large volumes of biological imaging data without human annotation often relies on unsupervised representation learning. Although efforts have been made to represent cropped-out microscopy images of single cells and single molecules, a more robust and general model that effectively maps every voxel in a whole-cell volume onto a latent space is still lacking. Here, we use a variational auto-encoder and metric learning to obtain a voxel-level representation, and explore using it for unsupervised segmentation. To our knowledge, we are the first to present a self-supervised voxel-level representation and subsequent unsupervised segmentation results for a complete cell. We improve upon earlier work by proposing an innovative approach that separates the latent space into a semantic subspace and a transformational subspace, and uses only the semantic representation for segmentation. We show that in the learned semantic representation the major subcellular components are visually distinguishable, and that the semantic subspace is more transformation-invariant than another sampled latent subspace of equal dimension. For unsupervised segmentation, we find that our model automatically rediscovers and separates the major classes, with errors that exhibit spatial patterns, and further dissects the class not specified by the reference segmentation into areas with consistent textures. Our segmentation outperforms a baseline by a large margin.
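To make the latent-space split concrete, the sketch below illustrates the general idea described in the abstract: a 3D VAE-style encoder whose latent vector is partitioned into a semantic subspace and a transformational subspace, with an illustrative metric-learning term that encourages the semantic part to remain invariant under spatial transformations of the input patch. This is a minimal, hedged illustration, not the authors' implementation; all names, layer sizes, and the specific invariance loss (`VoxelVAE`, `SEM_DIM`, `TRA_DIM`, MSE-based pulling of semantic codes) are assumptions.

```python
# Illustrative sketch only (assumed architecture and loss, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

SEM_DIM, TRA_DIM = 16, 16  # assumed sizes of the semantic / transformational subspaces


class VoxelVAE(nn.Module):
    """3D VAE whose latent vector is split into semantic and transformational parts."""

    def __init__(self, in_ch=1, latent_dim=SEM_DIM + TRA_DIM):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64, latent_dim)
        self.fc_logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, 8 * 8 * 8), nn.Unflatten(1, (1, 8, 8, 8)),
        )

    def encode(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation
        return z, mu, logvar

    def forward(self, x):
        z, mu, logvar = self.encode(x)
        return self.dec(z), mu, logvar


def training_loss(model, patch, transformed_patch):
    """VAE reconstruction + KL terms, plus an assumed invariance term that pulls the
    semantic halves of a patch and its spatially transformed version together."""
    recon, mu, logvar = model(patch)
    target = F.interpolate(patch, size=(8, 8, 8))          # match decoder output size
    recon_loss = F.mse_loss(recon, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    z_a, _, _ = model.encode(patch)
    z_b, _, _ = model.encode(transformed_patch)
    sem_a, sem_b = z_a[:, :SEM_DIM], z_b[:, :SEM_DIM]      # semantic subspace only
    invariance = F.mse_loss(sem_a, sem_b)                  # transformational part is left free
    return recon_loss + kl + invariance
```

Under this kind of split, unsupervised segmentation would use only the semantic half of each voxel's code, for example by clustering the semantic vectors across the volume, while the transformational half absorbs pose and appearance variation.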