
Evaluation of Transfer Learning and Adaptability in Large Language Models with the GLUE Benchmark
  • Nuraini Sulaiman (Corresponding Author: [email protected])
  • Farizal Hamzah

Abstract

This article presents a comprehensive investigation into the adaptability of state-of-the-art language models (LMs) to diverse domains through transfer learning, evaluated on the General Language Understanding Evaluation (GLUE) benchmark. Our study systematically examines the effectiveness of transfer learning strategies, including fine-tuning and data augmentation, in improving the performance of selected LMs across the full spectrum of GLUE tasks. The findings reveal significant gains in domain adaptability, though the degree of improvement varies across models, highlighting the influence of model architecture and pre-training depth. The analysis illuminates the complexities of transfer learning, suggesting that its application must be tailored to the model and task to achieve optimal performance. The study contributes to the discourse on the potential and limitations of current LMs in generalizing learned knowledge to new domains, underscoring the need for more sophisticated transfer learning frameworks, more diverse and comprehensive evaluation benchmarks, and future research aimed at improving model adaptability and inclusivity in natural language processing.
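To make the fine-tuning strategy described in the abstract concrete, the following is a minimal sketch of supervised fine-tuning on a single GLUE task using the Hugging Face Transformers and Datasets libraries. The model (bert-base-uncased), the task (MRPC), and all hyperparameters are illustrative assumptions, not the configuration reported in the paper.

```python
import numpy as np
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# MRPC (paraphrase detection) stands in for any GLUE task here; the study
# evaluates the full suite. Single-sentence tasks (e.g. SST-2) would
# tokenize one text field instead of a sentence pair.
raw = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["sentence1"], batch["sentence2"],
                     truncation=True, padding="max_length", max_length=128)

encoded = raw.map(tokenize, batched=True)

# Load the pre-trained encoder with a fresh classification head.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="glue-mrpc-finetune",   # checkpoints and logs
    per_device_train_batch_size=32,
    learning_rate=2e-5,                # a common fine-tuning rate, assumed
    num_train_epochs=3,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["validation"],
                  compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())              # validation accuracy and loss
```

Repeating this loop per task (swapping the dataset, text fields, and label count) yields the per-task scores that GLUE aggregates; data augmentation, the other strategy the abstract names, would modify the training split before the `map` step.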
Submitted to TechRxiv: 12 Mar 2024
Published in TechRxiv: 18 Mar 2024