
Making sense of large volumes of biological imaging data without human annotation often relies on unsupervised representation learning. Although efforts have been made to represent cropped-out microscopy images of single cells and single molecules, a more robust and general model that effectively maps every voxel of a whole-cell volume onto a latent space is still lacking. Here, we use a variational auto-encoder and metric learning to obtain a voxel-level representation and explore its use for unsupervised segmentation. To our knowledge, we are the first to present a self-supervised voxel-level representation and subsequent unsupervised segmentation results for a complete cell. We improve upon earlier work by proposing a novel approach that separates the latent space into a semantic subspace and a transformational subspace, and uses only the semantic representation for segmentation. We show that in the learned semantic representation the major subcellular components are visually distinguishable, and that the semantic subspace is more transformation-invariant than another sampled latent subspace of equal dimension. For unsupervised segmentation, we find that our model automatically rediscovers and separates the major classes, with errors exhibiting spatial patterns, and further dissects the class not specified by the reference segmentation into areas of consistent texture. Our segmentation outperforms a baseline by a large margin.
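To make the idea concrete, below is a minimal sketch (not the authors' released code) of a patch-based 3D VAE whose latent vector is split into a semantic and a transformational subspace, with the per-voxel semantic codes clustered for unsupervised segmentation. The patch size, layer widths, subspace dimensions, and the simple invariance term standing in for the paper's metric-learning loss are all illustrative assumptions, not the published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.cluster import KMeans

PATCH = 16              # assumed cubic patch extracted around each voxel
Z_SEM, Z_TRF = 32, 32   # assumed sizes of the semantic / transformational subspaces

class PatchVAE(nn.Module):
    """3D VAE over voxel-centred patches; latent = [semantic | transformational]."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 16^3 -> 8^3
            nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 8^3 -> 4^3
            nn.Flatten(),
        )
        feat = 32 * 4 * 4 * 4
        self.mu = nn.Linear(feat, Z_SEM + Z_TRF)
        self.logvar = nn.Linear(feat, Z_SEM + Z_TRF)
        self.dec = nn.Sequential(
            nn.Linear(Z_SEM + Z_TRF, feat), nn.ReLU(),
            nn.Unflatten(1, (32, 4, 4, 4)),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),     # back to 16^3
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)    # reparameterisation
        return self.dec(z), mu, logvar

def losses(model, patch, patch_aug):
    """VAE objective plus an invariance term that pulls together the *semantic*
    halves of a patch and its transformed (e.g. rotated/flipped) copy, so that
    transformation information is pushed into the other subspace."""
    recon, mu, logvar = model(patch)
    _, mu_aug, _ = model(patch_aug)
    rec = F.mse_loss(recon, patch)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    metric = F.mse_loss(mu[:, :Z_SEM], mu_aug[:, :Z_SEM])  # semantic half only
    return rec + kld + metric

def segment(semantic_codes, n_classes=4):
    """Cluster per-voxel semantic codes of shape (N, Z_SEM) into an unsupervised labelling."""
    return KMeans(n_clusters=n_classes, n_init=10).fit_predict(semantic_codes)
```

In this sketch, only the first Z_SEM latent dimensions are used for clustering, mirroring the paper's choice to segment on the semantic subspace alone while letting the remaining dimensions absorb pose and orientation variation.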

Original publication

DOI

10.1109/CVPRW56347.2022.00204

Type

Conference paper

Publication Date

01/01/2022

Volume

2022-June

Pages

1873 - 1882