Contextualized multi-sense word embedding

Abstract

Distributed word representations are now employed in many natural language processing tasks. However, when a single representation is generated for each word, the meanings of a polysemous word cannot be differentiated, because all of its senses are merged into one vector. Several attempts have therefore been made to generate a separate representation per meaning, based on parts of speech or the topic of a sentence; these methods, however, are too coarse-grained to deal with polysemy. In this paper, we propose two methods to generate more fine-grained multiple word representations. The first method generates multiple representations for a word using the words it stands in a dependency relationship with as clues. The second method employs a bidirectional language model, which generates a word representation that takes all the words in the context into account. Extensive evaluation on the Lexical Substitution task and the Context-Aware Word Similarity task confirmed the effectiveness of both approaches.
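
To make the second idea concrete, here is a minimal sketch (not the authors' model) of how a bidirectional language-model encoder assigns the same word type different vectors in different contexts. The class name BiLMEncoder, the toy vocabulary, and the example sentences are all invented for illustration, and the network is untrained, so the printed similarity only demonstrates that the two "bank" vectors differ; in practice the encoder would be trained on a large corpus.

```python
# Sketch: a bidirectional LSTM encoder produces context-dependent
# token vectors, so a polysemous word gets a different representation
# in each sentence. Hypothetical names throughout; untrained weights.
import torch
import torch.nn as nn

vocab = {"<pad>": 0, "i": 1, "deposited": 2, "cash": 3, "at": 4,
         "the": 5, "bank": 6, "sat": 7, "on": 8, "river": 9}

class BiLMEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Bidirectional LSTM: each token's output vector mixes its
        # left context (forward pass) and right context (backward pass).
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))
        return out  # shape: (batch, seq_len, 2 * hidden_dim)

def encode(sentence, model):
    ids = torch.tensor([[vocab[w] for w in sentence.split()]])
    return model(ids)[0]  # (seq_len, 2 * hidden_dim)

model = BiLMEncoder(len(vocab))
s1 = encode("i deposited cash at the bank", model)
s2 = encode("i sat on the river bank", model)
# "bank" is the token at position 5 in both sentences, but its
# contextual vector differs because the surrounding words differ.
cos = torch.cosine_similarity(s1[5], s2[5], dim=0)
print(f"cosine similarity of the two 'bank' vectors: {cos.item():.3f}")
```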

Publication
Journal of Natural Language Processing
Tomoyuki Kajiwara