
Knowledge-centric Prompt Composition for Knowledge Base Construction from Pre-trained Language Models

Publication: Chapter in book/conference proceeding › Contribution to conference proceedings

Abstract

Pre-trained language models (PLMs), exemplified by the GPT family of models, have exhibited remarkable proficiency across a spectrum of natural language processing tasks and have displayed potential for extracting knowledge from within the model itself. While numerous endeavors have delved into this capability through probing or prompting methodologies, the potential for constructing comprehensive knowledge bases from PLMs remains relatively uncharted. The Knowledge Base Construction from Pre-trained Language Models Challenge (LM-KBC) [1] aims to bridge this gap. This paper presents the system implementation from team thames for Track 2 of LM-KBC. Our methodology achieves a 67% F1 score on the test set provided by the organisers, outperforming the baseline by over 40 points and ranking 2nd place for Track 2. It does so through the use of additional prompt context derived from both the training data and the constraints and descriptions of the relations. All code and results can be found on GitHub.
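To make the approach concrete, the following is a minimal sketch of how such knowledge-centric prompt composition might look: the prompt for a relation is assembled from the relation's description, its constraints, and few-shot examples drawn from training data. All relation names, descriptions, and helper names here are illustrative assumptions, not taken from the paper's actual code.

```python
# Hypothetical sketch of knowledge-centric prompt composition.
# Each prompt combines (1) the relation's description, (2) its
# constraints, and (3) few-shot examples from training data.
# Relation names and example data below are illustrative only.

RELATIONS = {
    "CountryBordersCountry": {
        "description": "Countries sharing a land border with the subject country.",
        "constraints": "Answer with a list of country names; an empty list is allowed.",
        "examples": [
            ("Portugal", ["Spain"]),
            ("Iceland", []),
        ],
    },
}

def compose_prompt(relation: str, subject: str) -> str:
    """Assemble a prompt from relation knowledge plus training examples."""
    info = RELATIONS[relation]
    lines = [
        f"Relation: {relation}",
        f"Description: {info['description']}",
        f"Constraint: {info['constraints']}",
    ]
    # Few-shot examples taken from the training split.
    for ex_subject, ex_objects in info["examples"]:
        lines.append(f"Q: {relation}({ex_subject}) -> A: {ex_objects}")
    # The actual query, left open for the PLM to complete.
    lines.append(f"Q: {relation}({subject}) -> A:")
    return "\n".join(lines)

prompt = compose_prompt("CountryBordersCountry", "France")
print(prompt)
```

In this sketch the constraint line tells the model the expected answer format (including that empty answers are valid), while the few-shot examples ground that format in real training instances.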
Original language: English
Title of host publication: Joint proceedings of the KBC-LM workshop and the LM-KBC challenge @ ISWC 2023
Subtitle of host publication: Joint proceedings of the 1st workshop on Knowledge Base Construction from Pre-trained Language Models and the 2nd challenge on Language Models for Knowledge Base Construction (LM-KBC) (KBC-LM + LM-KBC 2023), Athens, Greece, November 6, 2023
Editors: Simon Razniewski, Jan-Christoph Kalo, Sneha Singhania, Jeff Z. Pan
Publisher: CEUR Workshop Proceedings
Number of pages: 13
Publication status: Published - 2023

Publication series

Series: CEUR Workshop Proceedings
Volume: 3577
ISSN: 1613-0073
