pyLDAvis and MALLET
One useful library for viewing a topic model is LDAvis, an R package for creating interactive web visualizations of topic models, and its Python port, pyLDAvis. This library is focused on visualizing a topic model, using multidimensional scaling to chart the relationships between topics and between topics and words in the model. It is also agnostic about which library you use to create the topic model, so long as you extract the necessary data in the correct formats.
While the Python version of the library works very smoothly with Gensim, which I have discussed before, there is little documentation for how to move from a topic model created with MALLET to data that can be processed by the LDAvis library. For reasons that require their own blog post, I have shifted from using Gensim for my topic model to using MALLET (spoilers: better documentation of the output formats and more widespread use in the humanities, which means better documentation and code examples generally). But I still wanted to use this library to visualize the full model as a way of generating an overall view of the relationships between the 250 topics it contains.
The documentation for both LDAvis and pyLDAvis relies primarily on code examples to demonstrate how to use the libraries. My primary sources were a Python example and two R examples, one focused on manipulating the model data and one on the full model-to-visualization process. The "details" documentation for the R library also proved key for troubleshooting when the outputs did not match my expectations. (Pro tip: word order matters.)
Looking at the examples, the data required for the visualization library are:
- topic-term distributions (matrix, phi)
- document-topic distributions (matrix, theta)
- document lengths (number vector)
- vocab (character vector)
- term frequencies (number vector)
One challenge is that the order of the data needs to be managed, so that the term columns in phi, the topic-term matrix, are in the same order as the vocab vector, which is in the same order as the frequencies vector, and the document index of theta, the document-topic matrix, is in the same order as the document lengths vector.
I was able to isolate and generate all of the data necessary for creating the visualization from the statefile produced by MALLET. For the document lengths, vocabulary, and frequencies, I computed the data on the tokens remaining after stopword removal, so only those tokens considered in the creation of the model. I used the optimize-interval option when creating the model to allow some topics to be more prominent than others. As a result, the alpha value is a list of values (one weight per topic) rather than a single number.
The first step was to extract the data from the MALLET statefile and load it into a pandas dataframe.
import gzip
import os
import pandas as pd
dataDir = "/Users/jeriwieringa/Dissertation/data/model_outputs/"
def extract_params(statefile):
    """Extract the alpha and beta values from the statefile.

    Args:
        statefile (str): Path to statefile produced by MALLET.
    Returns:
        tuple: alpha (list), beta (float)
    """
    with gzip.open(statefile, 'r') as state:
        params = [x.decode('utf8').strip() for x in state.readlines()[1:3]]
    return (list(params[0].split(":")[1].split(" ")), float(params[1].split(":")[1]))
def state_to_df(statefile):
    """Transform the statefile into a pandas dataframe.

    The MALLET statefile is space-separated, and the two rows after the header contain the alpha and beta hyperparameters.

    Args:
        statefile (str): Path to statefile produced by MALLET.
    Returns:
        dataframe: topic assignment for each token in each document of the model
    """
    return pd.read_csv(statefile,
                       compression='gzip',
                       sep=' ',
                       skiprows=[1, 2]
                       )
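Both functions assume the layout MALLET uses for its gzipped statefile: a space-separated header row, followed by one line each for the alpha and beta hyperparameters. A quick way to confirm that layout against your own statefile (a small optional sketch, using the same file loaded below):
import gzip
import os

# Peek at the first three lines of the statefile: a header row, then the
# "#alpha : ..." and "#beta : ..." hyperparameter lines.
with gzip.open(os.path.join(dataDir, 'target_300_10.18497.state.gz'), 'r') as state:
    for _ in range(3):
        print(state.readline().decode('utf8').strip())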
params = extract_params(os.path.join(dataDir, 'target_300_10.18497.state.gz'))
alpha = [float(x) for x in params[0][1:]]
beta = params[1]
print("{}, {}".format(alpha, beta))
df = state_to_df(os.path.join(dataDir, 'target_300_10.18497.state.gz'))
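Because the model was trained with hyperparameter optimization, alpha should hold one weight per topic. A quick consistency check, offered as an optional sketch rather than part of the original workflow, is to compare the length of that list with the number of distinct topics in the statefile:
# alpha should have one weight per topic when MALLET's optimize interval is used;
# compare it against the number of distinct topic ids in the statefile.
print(len(alpha), df['topic'].nunique())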
One small complication I found with my data was that there is a token nan in the list of "types." Because nan is used in pandas to mark missing values, pandas read that token as a missing value rather than as a string. To address this misidentification problem, I explicitly set the entire "type" column to string. (Two meanings of type. Also confusing.)
df['type'] = df.type.astype(str)
df[:10]
Above is a preview of the data that I have from the statefile, and which I am going to use to generate the data needed for the LDAvis library. It contains the document id, the word position index, the word index, the word, and its topic assignment.
The first bit of data to gather is the length of the documents. To do this, I grouped the data by the document id and counted the tokens in each document. This data is sorted by the document id, so it is in the correct order for the visualization preprocessing.
# Get document lengths from statefile
docs = df.groupby('#doc')['type'].count().reset_index(name='doc_length')
docs[:10]
The second bit of data is the vocabulary and the term frequencies. Here I used pandas to generate a new dataframe with the counts for each word. I then sorted this dataframe so that it is alphabetical by type, a step I will repeat in creating the topic-term matrix.
# Get vocab and term frequencies from statefile
vocab = df['type'].value_counts().reset_index()
vocab.columns = ['type', 'term_freq']
vocab = vocab.sort_values(by='type', ascending=True)
vocab[:10]
Clearly some OCR problems have managed to make their way through all of my preprocessing efforts. With the low term frequencies, however, they should be only minimally present in the final representation of the topic model.
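One way to see how marginal those artifacts are, offered here as a small optional sketch, is to look at the rarest types directly:
# The lowest-frequency types are where most of the OCR noise collects.
print(vocab.sort_values(by='term_freq').head(20))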
The third step was to create the matrices. Here is where things get a bit tricky, as this involves adding the smoothing values and normalizing the data so that each row sums to 1. To do the normalizing, I used sklearn, because these are large matrices that require a more optimized function than dividing by the sum of the row in pandas.
# Topic-term matrix from state file
# https://ldavis.cpsievert.me/reviews/reviews.html
import sklearn.preprocessing
def pivot_and_smooth(df, smooth_value, rows_variable, cols_variable, values_variable):
    """
    Turns the pandas dataframe into a data matrix.

    Args:
        df (dataframe): aggregated dataframe
        smooth_value (float): value to add to the matrix to account for the priors
        rows_variable (str): name of dataframe column to use as the rows in the matrix
        cols_variable (str): name of dataframe column to use as the columns in the matrix
        values_variable (str): name of the dataframe column to use as the values in the matrix
    Returns:
        dataframe: pandas matrix that has been normalized on the rows.
    """
    matrix = df.pivot(index=rows_variable, columns=cols_variable, values=values_variable).fillna(value=0)
    matrix = matrix.values + smooth_value

    normed = sklearn.preprocessing.normalize(matrix, norm='l1', axis=1)

    return pd.DataFrame(normed)
First, we need to aggregate the data from the statefile dataframe to get the number of topic assignments for each word in the documents. For phi, the topic-term matrix, I aggregated by topic and word, counted the number of times each word was assigned to each topic, and then sorted the resulting dataframe alphabetically by word, so that it matches the order of the vocabulary frame. Here I used the beta hyperparameter as the smoothing value.
phi_df = df.groupby(['topic', 'type'])['type'].count().reset_index(name='token_count')
phi_df = phi_df.sort_values(by='type', ascending=True)
phi_df[:10]
phi = pivot_and_smooth(phi_df, beta, 'topic', 'type', 'token_count')
# phi[:10]
Then we do this again, but focused on the documents and topics, to generate theta, the document-topic matrix. Here I used alpha as the smoothing value.
theta_df = df.groupby(['#doc', 'topic'])['topic'].count().reset_index(name='topic_count')
theta_df[:10]
theta = pivot_and_smooth(theta_df, alpha, '#doc', 'topic', 'topic_count')
# theta[:10]
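Because pivot_and_smooth applies l1 normalization, every row of phi and theta should now sum to (approximately) 1. A quick check, included here as an optional sketch:
import numpy as np

# Each topic's term distribution and each document's topic distribution
# should be a proper probability distribution after row normalization.
print(np.allclose(phi.sum(axis=1), 1.0))
print(np.allclose(theta.sum(axis=1), 1.0))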
These data processing steps, as always, represent about 90% of the work needed to use the LDAvis library with a MALLET topic model. Now that I have all of the data in place, I can queue that data up and pass it to the visualization library.
import pyLDAvis
data = {'topic_term_dists': phi,
        'doc_topic_dists': theta,
        'doc_lengths': list(docs['doc_length']),
        'vocab': list(vocab['type']),
        'term_frequency': list(vocab['term_freq'])
        }
I used the code examples I mentioned earlier to set the keys of the data dictionary. Keeping the data in sorted dataframes helps ensure that the order is consistent and preserved as it is moved into lists for the visualization.
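Before handing the dictionary to pyLDAvis, it is also worth confirming that the shapes line up the way the library expects, with topics as the rows of phi and the columns of theta. These checks are my own addition rather than anything the library requires:
# Terms in phi should match the vocab and frequency lists, documents in theta
# should match the document lengths, and both matrices should agree on the
# number of topics.
assert phi.shape[1] == len(data['vocab']) == len(data['term_frequency'])
assert theta.shape[0] == len(data['doc_lengths'])
assert phi.shape[0] == theta.shape[1]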
This data is then passed to the visualization library, first for preparation and then for display. In preparing the data, the library computes the distances between topics and then projects those distances into a two-dimensional space using multidimensional scaling (principal coordinate analysis by default). In the resulting visualization, topics that share many of the same heavily weighted words sit close together in that space, and the greater the distance between topics, the greater the dissimilarity between the word distributions that make them up. The second element the library computes is the most relevant terms for each topic, with a sliding scale that ranks terms either by their frequency within the topic (the default, lambda = 1 on the resulting slider) or by their distinctiveness to that topic (lambda = 0). This provides a richer view of the topic assignments and is useful for labeling and distinguishing between topics.
vis_data = pyLDAvis.prepare(**data)
pyLDAvis.display(vis_data)
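Within a Jupyter notebook, display renders the visualization inline. To keep a standalone copy that can be shared or embedded, pyLDAvis can also write the prepared data out to an HTML file; the file name below is just an example:
# Save the interactive visualization as a self-contained HTML file.
pyLDAvis.save_html(vis_data, 'mallet_topic_model_vis.html')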