Extracting Operator Trees from Model Embeddings

Research output: Contribution to conference proceedings › Conference contribution › peer-reviewed

Abstract

Transformer-based language models are able to capture several linguistic properties, including hierarchical structures such as dependency and constituency trees. Whether similar structures for mathematics can be extracted from language models has not yet been explored. This work probes current state-of-the-art models for the extractability of operator trees from their contextualized embeddings, using the structural probe designed by Hewitt and Manning (2019). We release our code and data set for future analyses.
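The structural probe referenced above learns a linear map B such that squared distances between projected token embeddings approximate pairwise node distances in a gold tree. The following is a minimal NumPy sketch of that distance computation only; the sizes, the random stand-in embeddings, and the random B are hypothetical placeholders (in the actual method, B is trained against gold tree distances, and here against operator-tree distances rather than syntax trees).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all sizes hypothetical): n tokens with d-dimensional
# contextual embeddings, projected into a k-dimensional probe space.
n, d, k = 5, 16, 4
H = rng.normal(size=(n, d))   # stand-in for contextualized embeddings
B = rng.normal(size=(k, d))   # probe matrix; random here, learned in practice

def probe_sq_distance(h_i, h_j, B):
    """Squared distance in probe space: ||B (h_i - h_j)||^2."""
    diff = B @ (h_i - h_j)
    return float(diff @ diff)

# Pairwise predicted distances. Training would fit B so that these
# approximate pairwise node distances in the gold operator tree.
D = np.array([[probe_sq_distance(H[i], H[j], B) for j in range(n)]
              for i in range(n)])
```

By construction D is symmetric, non-negative, and zero on the diagonal, matching the properties of tree distances the probe is fit against.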

Details

Original language: English
Title of host publication: MathNLP 2022 - 1st Workshop on Mathematical Natural Language Processing, Proceedings of the Workshop
Publisher: Association for Computational Linguistics (ACL)
Pages: 40-50
Number of pages: 11
ISBN (electronic): 9781959429142
Publication status: Published - 2022
Peer-reviewed: Yes

Workshop

Title: 1st Workshop on Mathematical Natural Language Processing
Abbreviated title: MathNLP 2022
Conference number: 1
Duration: 8 December 2022
Website
Location: Abu Dhabi National Exhibition Centre & Online
City: Abu Dhabi
Country: United Arab Emirates

External IDs

ORCID /0000-0001-8107-2775/work/197963764