Gradual Learning of Matrix-Space Models of Language for Sentiment Analysis

Research output: Conference contribution (in conference proceedings) - contributed, peer-reviewed

Abstract

Learning word representations that capture the semantics and compositionality of language has received much research interest in natural language processing. Beyond the popular vector space models, matrix representations of words have been proposed, since matrix multiplication can then serve as a natural composition operation. In this work, we investigate the problem of learning matrix representations of words. We present a learning approach for compositional matrix-space models for the task of sentiment analysis. We show that our approach, which learns the matrices gradually in two steps, outperforms other approaches and a gradient-descent baseline in terms of both quality and computational cost.
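
The abstract's central idea is that, with words represented as square matrices, a phrase is composed by multiplying its word matrices in order. As a minimal illustrative sketch only (the abstract does not specify the paper's parameterization; the matrix dimension m, the near-identity initialization, and the readout vectors u and v are assumptions introduced here), a forward pass of a compositional matrix-space model might look like this:

import numpy as np

# Minimal sketch of a compositional matrix-space model (CMSM) forward pass.
# All concrete choices below (m, initialization, readout) are illustrative
# assumptions, not the parameterization used in the paper.
rng = np.random.default_rng(0)
m = 3  # assumed matrix dimension

# Each word is an m x m matrix; initializing near the identity keeps early
# compositions close to a no-op (a common heuristic, assumed here).
vocab = {w: np.eye(m) + rng.normal(scale=0.1, size=(m, m))
         for w in ["not", "very", "good"]}

def compose(words):
    # Compose a phrase by multiplying its word matrices left to right.
    phrase = np.eye(m)
    for w in words:
        phrase = phrase @ vocab[w]
    return phrase

# Read out a scalar sentiment score via a bilinear form u^T (M_not M_very M_good) v.
u = np.ones(m)
v = np.ones(m)
score = u @ compose(["not", "very", "good"]) @ v
print(f"sentiment score: {score:.4f}")

Because matrix multiplication is order-sensitive, "not very good" and "very not good" yield different phrase matrices, which is what makes multiplication attractive as a composition operation.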

Details

Original language: English
Title of host publication: Proceedings of the 2nd Workshop on Representation Learning for NLP
Place of publication: Vancouver, Canada
Publisher: The Association for Computational Linguistics
Pages: 178-185
Number of pages: 8
Publication status: Published - 1 Aug 2017
Peer-reviewed: Yes

External IDs

Scopus: 85112347293