2024 15th International Conference on Information, Intelligence, Systems & Applications (IISA)

Abstract

Zero-shot text classification leverages pre-trained transformer models to categorize texts without the need for task-specific training. This paper analyzes a variety of transformer-based pre-trained methods on the task of zero-shot text classification. Specifically, we present a deep comparative analysis of several transformer models, such as BART, DeBERTa, DistilBART, RoBERTa, and their variants, and evaluate their performance across multiple zero-shot text classification schemes. Furthermore, we examine the models' generalization capabilities. The findings highlight key strengths and weaknesses of each model, providing insights into their suitability for different text categorization tasks. Our research contributes to the broader understanding of transformer-based approaches and offers guidance for selecting models for zero-shot text classification.
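To illustrate the zero-shot setup the abstract describes, here is a minimal sketch using the Hugging Face `transformers` pipeline with `facebook/bart-large-mnli`, one publicly available BART variant fine-tuned on NLI. The specific model checkpoint, input text, and candidate labels are our illustrative choices, not necessarily the exact configurations evaluated in the paper.

```python
from transformers import pipeline

# Zero-shot classification via NLI: the pipeline scores each candidate
# label by treating "This example is {label}." as a hypothesis against
# the input text as the premise. No task-specific training is required.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # assumed checkpoint for illustration
)

text = "The new GPU architecture doubles throughput for transformer inference."
candidate_labels = ["technology", "sports", "politics"]  # hypothetical label set

result = classifier(text, candidate_labels=candidate_labels)

# Labels are returned sorted by entailment probability, highest first.
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```

Swapping the `model` argument for other NLI-fine-tuned checkpoints (e.g., DeBERTa or DistilBART variants) is how a comparative evaluation like the one described above would typically be run.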