Code [GitHub]
NeurIPS 2022 [Paper]
Slides [Link]
Poster [Link]
Left: Unlike object-centric images that highlight one iconic object, scene-centric images often contain multiple objects in complex layouts. This adds to the data diversity and increases the potential of the learned representations, yet it challenges previous learning paradigms that simply treat an image as a whole or as individual pixels.
Middle: Contrastive-learning objectives built upon different levels of image representation, among which object-level contrastive learning is viewed as the most suitable for scene-centric data.
Right: We jointly learn a set of semantic prototypes to perform semantic grouping over the pixel-level representations and form object-centric slots.
In this paper, we tackle the problem of learning visual representations from unlabeled scene-centric data.
Existing works have demonstrated the potential of utilizing the underlying complex structure within scene-centric data; still, they commonly rely on hand-crafted objectness priors or specialized pretext tasks to build a learning framework, which may harm generalizability.
Instead, we propose contrastive learning from data-driven semantic slots, namely SlotCon, for joint semantic grouping and representation learning.
The semantic grouping is performed by assigning pixels to a set of learnable prototypes, which can adapt to each sample via attentive pooling over the feature map and form new slots. Based on the learned data-dependent slots, a contrastive objective is employed for representation learning, which enhances the discriminability of features and, conversely, facilitates grouping semantically coherent pixels together. Compared with previous efforts, by simultaneously optimizing the two coupled objectives of semantic grouping and contrastive learning, our approach bypasses the disadvantages of hand-crafted priors and is able to learn object/group-level representations from scene-centric images. Experiments show that our approach effectively decomposes complex scenes into semantic groups for feature learning and significantly benefits downstream tasks, including object detection, instance segmentation, and semantic segmentation. Code is available at https://github.com/CVMI-Lab/SlotCon.
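To make the grouping step concrete, below is a minimal PyTorch sketch of how pixels can be soft-assigned to a set of learnable prototypes and attentively pooled into slots. The module name, hyper-parameter values, and normalization details are illustrative assumptions, not the exact official implementation; please refer to the GitHub repository for the real code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGrouping(nn.Module):
    """Sketch of semantic grouping: soft pixel-to-prototype assignment + attentive pooling."""
    def __init__(self, num_prototypes=256, dim=256, temperature=0.07):  # values are assumptions
        super().__init__()
        # A set of learnable semantic prototypes shared across the dataset.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        self.temperature = temperature

    def forward(self, feats):
        # feats: pixel-level feature map of shape (B, C, H, W) from the backbone.
        B, C, H, W = feats.shape
        x = feats.flatten(2).transpose(1, 2)                      # (B, H*W, C)
        protos = F.normalize(self.prototypes, dim=-1)             # (K, C)
        logits = F.normalize(x, dim=-1) @ protos.t()              # (B, H*W, K) cosine similarities
        attn = F.softmax(logits / self.temperature, dim=-1)       # soft pixel-to-prototype assignment
        # Attentive pooling: each prototype aggregates its assigned pixels,
        # yielding one data-dependent slot per prototype and image.
        slots = attn.transpose(1, 2) @ x                          # (B, K, C)
        slots = slots / attn.sum(dim=1).clamp(min=1e-6).unsqueeze(-1)
        return slots, attn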
Based on a shared pixel embedding function, the model learns to classify pixels into groups according to their feature similarity in a pixel-level deep clustering fashion;
the model produces group-level feature vectors (slots) through attentive pooling over the feature maps, and further performs group-level contrastive learning.
In the figure, we omit the symmetrized loss computed by swapping the two views for simplicity.
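As a companion to the figure caption above, here is a hedged sketch of what a group-level contrastive objective over slots could look like, assuming that slots from the two augmented views sharing the same prototype form positive pairs while all other slots serve as negatives. The InfoNCE form, temperature, and function name are assumptions for illustration; the official loss includes further details not shown here (see the repository).

import torch
import torch.nn.functional as F

def slot_contrastive_loss(slots_q, slots_k, temperature=0.2):  # temperature is an assumption
    # slots_q, slots_k: (B, K, C) slots from the two augmented views.
    B, K, C = slots_q.shape
    q = F.normalize(slots_q.reshape(B * K, C), dim=-1)
    k = F.normalize(slots_k.reshape(B * K, C), dim=-1)
    logits = q @ k.t() / temperature                   # (B*K, B*K) similarity matrix
    labels = torch.arange(B * K, device=q.device)      # the matching slot is the positive
    return F.cross_entropy(logits, labels)

# The symmetrized loss mentioned above would simply average
# slot_contrastive_loss(slots_q, slots_k) and slot_contrastive_loss(slots_k, slots_q).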
1) We show that the decomposition of natural scenes (semantic grouping) can be done in a learnable fashion and jointly optimized with the representations from scratch.
2) We demonstrate that semantic grouping can bring object-centric representation learning to large-scale real-world scenarios.
3) By combining semantic grouping and representation learning, we unleash the potential of scene-centric pre-training, largely closing its gap with object-centric pre-training and achieving state-of-the-art results on various downstream tasks.
Main transfer results with COCO pre-training.
We report results on COCO object detection, COCO instance segmentation, and semantic segmentation on Cityscapes, PASCAL VOC, and ADE20K. Compared with other image-, pixel-, and object-level self-supervised learning methods, our method shows consistent improvements across different tasks without leveraging multi-crop augmentation or objectness priors. (†: re-implemented with official weights; ‡: full re-implementation)
Pushing the limit of scene-centric pre-training.
With the extended COCO+ data, our method sees further notable gains on all tasks, showing the great potential of scene-centric pre-training.
Main transfer results with ImageNet-1K pre-training.
Our method is also compatible with object-centric data and shows consistent improvements across different tasks without using an FPN or objectness priors. (†: re-implemented with official weights; ‡: full re-implementation)
Examples of visual concepts discovered by SlotCon from the COCO val2017 split.
Each column shows the top 5 segments retrieved with the same prototype, marked with reddish masks or arrows.
Our method can discover visual concepts across various scenarios and semantic granularities, and remains robust to small object sizes and occlusion.
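A hypothetical sketch of how such retrievals could be produced from the learned assignments: rank images by how strongly a chosen prototype fires, then threshold its assignment map into a mask. The function name, ranking criterion, and threshold below are illustrative assumptions, not the exact procedure used for the figure.

import torch

@torch.no_grad()
def retrieve_top_segments(attn_maps, prototype_id, topk=5, threshold=0.5):  # threshold is an assumption
    # attn_maps: list of per-image pixel-to-prototype assignment maps, each of shape (K, H, W).
    scores = torch.stack([a[prototype_id].mean() for a in attn_maps])   # activation strength per image
    top_idx = scores.topk(topk).indices.tolist()                        # most strongly activated images
    masks = [attn_maps[i][prototype_id] > threshold for i in top_idx]   # binary concept masks
    return top_idx, masks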
Xin Wen, Bingchen Zhao, Anlin Zheng, Xiangyu Zhang, Xiaojuan Qi. Self-Supervised Visual Representation Learning with Semantic Grouping. In NeurIPS, 2022.
@inproceedings{wen2022slotcon,
  title={Self-Supervised Visual Representation Learning with Semantic Grouping},
  author={Wen, Xin and Zhao, Bingchen and Zheng, Anlin and Zhang, Xiangyu and Qi, Xiaojuan},
  booktitle={Advances in Neural Information Processing Systems},
  year={2022}
}
Acknowledgements