Extracting Operator Trees from Model Embeddings

Publication: Conference paper · Contributed · Peer-reviewed

Abstract

Transformer-based language models capture several linguistic properties, including hierarchical structures such as dependency and constituency trees. Whether similar structures for mathematics can be extracted from language models has not yet been explored. This work probes current state-of-the-art models for the extractability of Operator Trees from their contextualized embeddings, using the structural probe designed by Hewitt and Manning (2019). We release our code and data set for future analyses.
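The structural probe referenced in the abstract learns a linear map B such that squared L2 distances between projected token embeddings approximate pairwise tree distances. A minimal sketch of the distance computation is shown below; the dimensions and the random placeholder values for B and the embeddings H are illustrative assumptions, not the paper's trained setup.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_dim = 768   # e.g. BERT-base embedding size (assumed for illustration)
probe_rank = 64    # rank of the probe's linear transform (assumed)

# In practice B is trained so that probed distances match gold tree
# distances; here it is random for demonstration only.
B = rng.standard_normal((probe_rank, hidden_dim)) * 0.01
H = rng.standard_normal((5, hidden_dim))  # embeddings for a 5-token input

def probed_distance(h_i, h_j, B):
    """Squared L2 distance between two embeddings under the probe map B."""
    diff = B @ (h_i - h_j)
    return float(diff @ diff)

# Pairwise predicted tree distances for all token pairs.
D = np.array([[probed_distance(H[i], H[j], B) for j in range(5)]
              for i in range(5)])

print(D.shape)   # (5, 5)
print(D[0, 0])   # 0.0 — a token's distance to itself
```

Training then minimizes the gap between these probed distances and the distances read off the gold tree (here, an operator tree rather than a syntax tree).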

Details

Original language: English
Title: MathNLP 2022 - 1st Workshop on Mathematical Natural Language Processing, Proceedings of the Workshop
Publisher: Association for Computational Linguistics (ACL)
Pages: 40-50
Number of pages: 11
ISBN (electronic): 9781959429142
Publication status: Published - 2022
Peer-reviewed: Yes

Workshop

Title: 1st Workshop on Mathematical Natural Language Processing
Short title: MathNLP 2022
Event number: 1
Date: 8 December 2022
Website:
Venue: Abu Dhabi National Exhibition Centre & Online
City: Abu Dhabi
Country: United Arab Emirates

External IDs

ORCID /0000-0001-8107-2775/work/197963764