2023 8th International Conference on Intelligent Computing and Signal Processing (ICSP)

Abstract

Recently, pre-trained language models (PLMs) have been significantly improved on downstream tasks by infusing knowledge. In medical research, where data are continuously updated and growing, a PLM often needs to learn knowledge continually. Most existing methods store new knowledge and historical knowledge jointly, in an entangled way, by updating all of the PLM's parameters. As a result, when new knowledge is continuously infused into the PLM, historical knowledge is overwritten by the new knowledge. In this article, we propose CPK-Adapter, a method that combines prompt learning and continual learning to balance the PLM's ability to learn new knowledge against its ability to retain historical knowledge. We evaluated CPK-Adapter on a suite of BERT models, including BERT, BioBERT, ClinicalBERT, and BlueBERT. The experiments demonstrate that CPK-Adapter outperforms the comparative PLMs on medical tasks and is significantly better than K-Adapter. Furthermore, CPK-Adapter approaches or exceeds the performance of DiseaseBERT, which stores task knowledge by updating all of the PLM's parameters.
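The abstract does not detail CPK-Adapter's architecture, but the adapter idea it builds on (as in K-Adapter) can be illustrated with a minimal sketch: the pre-trained backbone is frozen so its historical knowledge is never overwritten, and only a small bottleneck adapter plus task head are trained when new knowledge is infused. The module names, dimensions, and classification head below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a frozen-PLM + adapter setup (assumed, not the paper's exact code).
import torch.nn as nn
from transformers import AutoModel

class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project, with a residual connection."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))

class FrozenPLMWithAdapter(nn.Module):
    def __init__(self, plm_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.plm = AutoModel.from_pretrained(plm_name)
        for p in self.plm.parameters():
            p.requires_grad = False              # backbone (historical knowledge) stays fixed
        self.adapter = BottleneckAdapter(self.plm.config.hidden_size)
        self.classifier = nn.Linear(self.plm.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.plm(input_ids=input_ids, attention_mask=attention_mask)
        h = self.adapter(out.last_hidden_state[:, 0])  # adapt the [CLS] representation
        return self.classifier(h)                      # only adapter + head receive gradients
```

Because only the adapter and head are updated, new knowledge is stored in a small set of added parameters rather than entangled with the backbone's weights, which is the disentanglement the abstract contrasts with full-parameter approaches such as DiseaseBERT.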
