Stochastic Soft Prompts: Improving the Robustness of Low-Resource NLI via Prompting
Suhyun Lee, Eunju Lee, JuneHyoung Kwon, YoungBin Kim. The Institute of Electronics and Information Engineers (IEIE) 2023 Conference, Vol. 2023, No. 6
In environments with limited resources, conventional techniques such as fine-tuning Pretrained Language Models (PLMs) frequently suffer from overfitting, leading to suboptimal performance. This paper presents Stochastic Soft Prompt Tuning (SSPT), a method that leverages stochasticity in soft prompts to improve model robustness in low-resource scenarios. We evaluated SSPT's effectiveness on the Cross-Lingual Natural Language Inference (XNLI) dataset, using low-resource subsets of the English, German, and French entries. SSPT demonstrated remarkable performance, outpacing a fine-tuned Multilingual BERT by 8%p, 11%p, and 5%p in English, German, and French, respectively. These results underscore the robustness and adaptability of SSPT in low-resource environments, marking it as a promising strategy for enhancing performance in these challenging settings.
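The abstract does not spell out how the stochasticity is injected. As a hedged illustration only, one common reading of "stochastic soft prompts" is to prepend a small matrix of learnable prompt embeddings to the input token embeddings and perturb those prompt vectors with Gaussian noise during training (no noise at inference). The function name, the noise scale `sigma`, and the Gaussian form below are all assumptions for illustration, not the authors' confirmed implementation; a minimal NumPy sketch:

```python
import numpy as np

def stochastic_soft_prompt(input_embeds, prompt_embeds, sigma=0.1,
                           train=True, rng=None):
    """Prepend soft prompt vectors to token embeddings.

    During training, each prompt vector is perturbed with Gaussian
    noise (scale `sigma`), acting as a regularizer against overfitting
    in low-resource settings. At inference the clean prompts are used.
    All names and the noise form are illustrative assumptions.
    """
    if train:
        rng = rng or np.random.default_rng(0)
        noise = rng.normal(0.0, sigma, size=prompt_embeds.shape)
    else:
        noise = np.zeros_like(prompt_embeds)
    # Sequence grows by the number of prompt vectors.
    return np.concatenate([prompt_embeds + noise, input_embeds], axis=0)

# Toy example: 4 token embeddings of dim 8, 3 soft prompt vectors.
tokens = np.ones((4, 8))
prompts = np.zeros((3, 8))
out_train = stochastic_soft_prompt(tokens, prompts, train=True)
out_eval = stochastic_soft_prompt(tokens, prompts, train=False)
print(out_train.shape)  # (7, 8): 3 prompt rows + 4 token rows
```

In a real setup the prompt matrix would be the only trainable parameter while the PLM stays frozen, which is what makes prompt tuning attractive when labeled data is scarce.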