Within the toolkit of automated text analysis, word embedding models have become increasingly prominent tools for distilling meaning from text. This family of models maps high-dimensional text data into a lower-dimensional vector space in which each word is represented by a dense vector, and it has demonstrated strong performance on many traditional NLP tasks. However, a lack of interpretability and the unsupervised nature of word embeddings have so far limited their use in computational social science and the digital humanities (CSSDH).
In this paper we propose using informative priors to create interpretable, domain-informed dimensions for probabilistic word embeddings. The key idea is to place restrictions -- in the form of priors -- on a subset of words so that a pre-specified dimension captures a latent concept of interest. This methodological paper evaluates how such restrictions should be placed to capture dimensions of interest to researchers in CSSDH.
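To make the key idea concrete, here is a minimal sketch (not the paper's implementation) of what an informative prior on seed words might look like: each word gets a Gaussian prior over its embedding, zero-mean by default, but for hand-picked positive and negative seed words the mean is shifted along one designated axis so that axis is encouraged to encode, say, sentiment. The function names, seed sets, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

def prior_mean(word, dim=50, axis=0, pos_seeds=(), neg_seeds=(), mu=2.0):
    """Prior mean for a word's embedding: zeros everywhere, except that
    seed words are shifted to +mu / -mu along the interpretable axis.
    (Illustrative sketch; seed lists and mu are assumptions.)"""
    m = np.zeros(dim)
    if word in pos_seeds:
        m[axis] = mu
    elif word in neg_seeds:
        m[axis] = -mu
    return m

def log_prior(vec, word, sigma=1.0, **prior_kwargs):
    """Gaussian log-density (up to an additive constant) of an embedding
    vector under the word's informative prior. Added to the embedding
    model's likelihood, this term pulls seed words toward opposite ends
    of the chosen axis during training."""
    m = prior_mean(word, dim=vec.shape[0], **prior_kwargs)
    return -0.5 * np.sum((vec - m) ** 2) / sigma ** 2
```

In training, this log-prior would simply be added to the embedding objective; words outside the seed sets keep an uninformative zero-mean prior, so the data alone determine where they fall on the axis.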
In our application of the method, we consider two semantic dimensions, sentiment and gender, which have proven difficult to capture in standard word embedding models, and we show how the emotions and gender associations attached to specific words change over time in several large corpora. Our method yields easily interpretable numeric values representing semantic dimensions. The figure, based on the United States Senate corpus (1981-2016), illustrates the sentiment trajectories of two words with drastic changes over three consecutive years, September and Oklahoma, reflecting two terror attacks (the September 11 attack in 2001 and the Oklahoma City bombing in 1995). The changes appear to precede the events because of the smoothness of the priors. In the years that follow, the words gradually regain their previous sentiment, reflecting a decline in their association with terror.
The paper was accepted to EMNLP 2019 and can be accessed here.