Heeseong Shin, Seokju Cho, Sunghwan Hong, Soohyun Kim, Seungryong Kim. Proceedings of the IEIE (Institute of Electronics and Information Engineers) Conference, Vol. 2023, No. 6, 2023.
Existing works on open-vocabulary semantic segmentation have utilized large-scale vision-language models, such as CLIP, to leverage their exceptional open-vocabulary recognition capabilities. However, transferring these capabilities, learned from image-level supervision, to the pixel-level task of segmentation, while also addressing arbitrary unseen categories at inference, makes this task challenging. To address these issues, we aim to attentively relate objects within an image to given categories by leveraging relational information among class categories and visual semantics through aggregation, while also adapting the CLIP representations to the pixel-level task. However, we observe that directly optimizing the CLIP embeddings can harm their open-vocabulary capabilities. In this regard, we propose an alternative approach that optimizes the image-text similarity map, i.e. the cost map, using a novel cost aggregation-based method. Our framework, namely CAT-Seg, achieves state-of-the-art performance across all benchmarks. We provide extensive ablation studies to validate our choices.
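To illustrate the central object of the abstract, the image-text similarity ("cost") map, here is a minimal NumPy sketch. It is not the authors' implementation: the feature shapes, the cosine-similarity choice, and the function name `cost_map` are assumptions for illustration. Given dense per-pixel image embeddings and one text embedding per class name, the cost map is the per-pixel similarity to each class, yielding an H×W×C volume that a subsequent aggregation module could then refine.

```python
import numpy as np

def cost_map(image_feats, text_feats):
    """Compute a pixel-text similarity ("cost") volume.

    image_feats: (H, W, D) dense image embeddings (e.g. from a CLIP-like encoder)
    text_feats:  (C, D) one embedding per class name
    returns:     (H, W, C) cosine similarities between each pixel and each class
    """
    # L2-normalize so the dot product equals cosine similarity.
    img = image_feats / np.linalg.norm(image_feats, axis=-1, keepdims=True)
    txt = text_feats / np.linalg.norm(text_feats, axis=-1, keepdims=True)
    # Contract the shared embedding dimension D: (H,W,D) x (C,D) -> (H,W,C).
    return np.einsum("hwd,cd->hwc", img, txt)

# Toy example with random embeddings: 3x3 feature map, D=4, 2 classes.
rng = np.random.default_rng(0)
cost = cost_map(rng.standard_normal((3, 3, 4)), rng.standard_normal((2, 4)))
print(cost.shape)  # (3, 3, 2)
```

In this framing, segmentation reduces to refining and upsampling this cost volume rather than fine-tuning the embeddings themselves, which is consistent with the abstract's observation that directly optimizing CLIP embeddings can degrade open-vocabulary recognition.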