
Predicting masked tokens in stochastic locations improves masked image modeling

July 31, 2023
作者: Amir Bar, Florian Bordes, Assaf Shocher, Mahmoud Assran, Pascal Vincent, Nicolas Ballas, Trevor Darrell, Amir Globerson, Yann LeCun
cs.AI

Abstract

Self-supervised learning is a promising paradigm in deep learning that enables learning from unlabeled data by constructing pretext tasks that require learning useful representations. In natural language processing, the dominant pretext task has been masked language modeling (MLM), while in computer vision there exists an equivalent called Masked Image Modeling (MIM). However, MIM is challenging because it requires predicting semantic content in accurate locations. E.g., given an incomplete picture of a dog, we can guess that there is a tail, but we cannot determine its exact location. In this work, we propose FlexPredict, a stochastic model that addresses this challenge by incorporating location uncertainty into the model. Specifically, we condition the model on stochastic masked token positions to guide the model toward learning features that are more robust to location uncertainties. Our approach improves downstream performance on a range of tasks, e.g., compared to MIM baselines, FlexPredict boosts ImageNet linear probing by 1.6% with ViT-B and by 2.5% for semi-supervised video segmentation using ViT-L.
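To make the core idea of "conditioning on stochastic masked token positions" concrete, here is a minimal PyTorch sketch that perturbs the positional embeddings of masked patches with additive Gaussian noise before combining them with a learned mask token. The class name `StochasticMaskTokens`, the fixed `noise_scale` hyperparameter, and the simple additive Gaussian noise are illustrative assumptions; the paper's actual parameterization of the location uncertainty may differ and is described in the full text.

```python
import torch
import torch.nn as nn


class StochasticMaskTokens(nn.Module):
    """Illustrative sketch (not the paper's implementation): build predictor
    inputs for masked patches by conditioning a shared learned mask token on
    *noisy* positional embeddings, so the target location is only known up to
    some uncertainty."""

    def __init__(self, dim: int, noise_scale: float = 0.25):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        # Hypothetical hyperparameter controlling how much location
        # uncertainty is injected during pretraining.
        self.noise_scale = noise_scale

    def forward(self, pos_embed_masked: torch.Tensor) -> torch.Tensor:
        # pos_embed_masked: (batch, num_masked, dim) positional embeddings
        # of the patches the predictor must reconstruct.
        if self.training:
            noise = torch.randn_like(pos_embed_masked) * self.noise_scale
            pos_embed_masked = pos_embed_masked + noise  # stochastic locations
        b, n, d = pos_embed_masked.shape
        # Each masked location gets the shared mask token plus its (noisy) position.
        return self.mask_token.expand(b, n, d) + pos_embed_masked


# Usage: produce one query per masked patch for a ViT-style predictor head.
tokens = StochasticMaskTokens(dim=768)
queries = tokens(torch.randn(2, 49, 768))  # shape: (2, 49, 768)
```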