LM-KBC 2025: 4th Challenge on Knowledge Base Construction from Pre-trained Language Models
Research output: Contribution to book/Conference proceedings/Anthology/Report › Conference contribution › Contributed › peer-review
Abstract
Pretrained language models (LMs) have significantly advanced a variety of semantic tasks and have shown promise as sources of knowledge elicitation. While prior work has studied this ability through probing or prompting, the potential of LMs for large-scale knowledge base construction remains underexplored. The fourth edition of the LM-KBC Challenge invited participants to build knowledge bases directly from LMs, given specific subjects and relations. Unlike existing probing benchmarks, the challenge imposed no simplifying assumptions on relation cardinality, allowing a subject entity to be linked to zero, one, or multiple object entities. To ensure accessibility, the challenge featured a single track in which all participants used the same LLM. Five submissions were received, exploring a variety of ideas including self-consistency, self-RAG, reasoning, and prompt optimization.
Details
| Original language | English |
|---|---|
| Title of host publication | KBC-LM Workshop and LM-KBC Challenge at ISWC 2025 |
| Editors | Simon Razniewski, Jan-Christoph Kalo, Duygu Islakoğlu, Tuan-Phong Nguyen, Bohui Zhang |
| Number of pages | 7 |
| Publication status | Published - 2025 |
| Peer-reviewed | Yes |
Publication series
| Series | CEUR Workshop Proceedings |
|---|---|
| Volume | 4041 |
| ISSN | 1613-0073 |
Other
| Title | 4th challenge on Knowledge Base Construction from Pre-trained Language Models |
|---|---|
| Abbreviated title | LM-KBC 2025 |
| Conference number | 4 |
| Description | co-located with the 24th International Semantic Web Conference (ISWC 2025) |
| Duration | 2 November 2025 |
| Website | |
| Location | Nara Prefectural Convention Center |
| City | Nara |
| Country | Japan |
External IDs
| ORCID | /0000-0002-5410-218X/work/194826583 |
|---|---|