9th International Conference on Computer Science and Engineering, UBMK 2024, Antalya, Türkiye, 26-28 October 2024, pp. 169-174, (Full Text Paper)
This study investigates Unsupervised Domain Adaptation (UDA) for pre-trained language models (PrLMs) in downstream tasks. While PrLMs demonstrate impressive results due to their generic knowledge from different domains, fully fine-tuning all parameters on a smaller, domain-specific dataset can lead to a loss of this generalized understanding and is resource-intensive. To address this, we introduce a novel Parameter-Efficient approach for UDA that selectively tunes a small subset of parameters using PEFT methods, significantly reducing computational demands. Our method combines an invertible bottleneck adapter with Low-Rank Adaptation (LoRA). It employs a two-phase training process: adaptive pretraining with Masked Language Modeling (MLM) on unlabeled target domain data, followed by supervised fine-tuning on labeled source domain data. Evaluations on the MNLI benchmark show that our approach outperforms the current parameter-efficient UDA state-of-the-art in 13 out of 20 domains and exceeds its average performance, with an average macro F1 score of 75.44. Additionally, our method rivals fully tuned UDA approaches while utilizing only 6% of tunable parameters. Our experiments indicate that combining multiple PEFT methods enhances domain adaptation results and that invertible bottleneck adapters are particularly effective in MLM compared to other PEFT methods.
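To make the described setup concrete, the sketch below shows one plausible way to combine an invertible bottleneck adapter with LoRA and freeze the backbone, using the HuggingFace `adapters` library. This is not the authors' released code: the base checkpoint (`roberta-base`), the adapter name `"uda"`, and the LoRA hyperparameters are illustrative assumptions, and the two training phases are only outlined in comments.

```python
# Minimal sketch (assumed setup, not the paper's official implementation):
# an invertible bottleneck adapter stacked with LoRA via ConfigUnion, with
# the two-phase UDA schedule indicated in comments.
import adapters
from adapters import ConfigUnion, LoRAConfig, SeqBnInvConfig
from transformers import AutoModelForMaskedLM

# Phase 1: adaptive pretraining with MLM on unlabeled target-domain text.
mlm_model = AutoModelForMaskedLM.from_pretrained("roberta-base")  # assumed backbone
adapters.init(mlm_model)  # enable adapter support on the plain transformers model

# Invertible bottleneck adapter + LoRA combined into a single PEFT config.
peft_config = ConfigUnion(
    SeqBnInvConfig(),           # bottleneck adapters with invertible embedding adapters
    LoRAConfig(r=8, alpha=16),  # low-rank updates in attention (illustrative values)
)
mlm_model.add_adapter("uda", config=peft_config)
mlm_model.train_adapter("uda")  # freeze the backbone; only adapter/LoRA params are trainable

# ... run MLM training on unlabeled target-domain data here, then save the adapter:
# mlm_model.save_adapter("uda_adapter/", "uda")

# Phase 2: supervised fine-tuning on labeled source-domain data (e.g. MNLI),
# reloading the same adapter into a classification model:
# cls_model = adapters.AutoAdapterModel.from_pretrained("roberta-base")
# cls_model.load_adapter("uda_adapter/", load_as="uda")
# cls_model.add_classification_head("uda", num_labels=3)
# cls_model.train_adapter("uda")
```

With `train_adapter` freezing the pre-trained weights, only the bottleneck, invertible, and LoRA parameters are updated, which is consistent with the small tunable-parameter budget reported in the abstract.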