Knowledge-centric Prompt Composition for Knowledge Base Construction from Pre-trained Language Models

Publication: Contribution to book/conference proceedings, contribution to conference proceedings

Abstract

Pretrained language models (PLMs), exemplified by the GPT family of models, have exhibited remarkable proficiency across a spectrum of natural language processing tasks and have displayed potential for extracting knowledge from within the model itself. While numerous endeavors have explored this capability through probing or prompting methodologies, the potential for constructing comprehensive knowledge bases from PLMs remains relatively uncharted. The Knowledge Base Construction from Pre-trained Language Models Challenge (LM-KBC) [1] aims to bridge this gap. This paper presents the system implementation from team thames for Track 2 of LM-KBC. Our methodology achieves a 67% F1 score on the test set provided by the organisers, outperforming the baseline by over 40 points and ranking 2nd in Track 2. It does so through the use of additional prompt context derived from both the training data and the constraints and descriptions of the relations. All code and results can be found on GitHub.
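To illustrate the kind of prompt composition the abstract describes, the sketch below shows one plausible way to assemble a prompt from a relation's description and constraints plus few-shot examples drawn from the training data. This is a minimal sketch under assumptions: the names RELATION_DESCRIPTIONS, build_prompt, and load_train_examples are hypothetical, and the JSONL field names follow the published LM-KBC 2023 data format; none of this is taken from the team's actual implementation.

```python
# Minimal sketch of knowledge-centric prompt composition (illustrative only;
# function and variable names are assumptions, not the authors' code).
import json
import random

# Hypothetical per-relation context: a natural-language description and a
# type/cardinality constraint, in the spirit of the LM-KBC relation metadata.
RELATION_DESCRIPTIONS = {
    "CountryHasOfficialLanguage": {
        "description": "The official language(s) of a country.",
        "constraint": "Answers must be languages; a country may have several.",
    },
}

def load_train_examples(path, relation, k=3, seed=0):
    """Sample k training rows for this relation to use as few-shot examples."""
    with open(path, encoding="utf-8") as f:
        rows = [json.loads(line) for line in f]
    rows = [r for r in rows if r["Relation"] == relation]
    random.Random(seed).shuffle(rows)
    return rows[:k]

def build_prompt(subject, relation, train_path):
    """Compose a prompt from the relation description, its constraint,
    and few-shot examples taken from the training data."""
    meta = RELATION_DESCRIPTIONS[relation]
    lines = [
        f"Relation: {relation}",
        f"Description: {meta['description']}",
        f"Constraint: {meta['constraint']}",
        "",
    ]
    for ex in load_train_examples(train_path, relation):
        objects = ", ".join(ex["ObjectEntities"]) or "None"
        lines.append(f"Q: {ex['SubjectEntity']}? A: {objects}")
    lines.append(f"Q: {subject}? A:")  # the query the PLM should complete
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_prompt("Kenya", "CountryHasOfficialLanguage", "train.jsonl"))
```

The design choice worth noting is that the description and constraint give the model the relation's semantics and expected answer type up front, while the sampled training rows anchor the expected output format, including the empty-answer case.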
Original language: English
Title of the collected volume: Joint proceedings of the KBC-LM workshop and the LM-KBC challenge @ ISWC 2023
Subtitle of the collected volume: Joint proceedings of the 1st workshop on Knowledge Base Construction from Pre-trained Language Models and the 2nd challenge on Language Models for Knowledge Base Construction (LM-KBC) (KBC-LM + LM-KBC 2023): Athens, Greece, November 6, 2023
Editors: Simon Razniewski, Jan-Christoph Kalo, Sneha Singhania, Jeff Z. Pan
Publisher: CEUR Workshop Proceedings
Number of pages: 13
Publication status: Published - 2023

Publication series

Series: CEUR Workshop Proceedings
Volume: 3577
ISSN: 1613-0073
