uCluster is a simple and fast way to cluster textual content. It is available both as a standalone Python package and as a VisiData plugin. This documentation focuses on using uCluster as a VisiData plugin; to see how to use it as a Python package, simply inspect `ucluster/text_cluster.py` — it's a very simple interface.
Suppose you have a dataset of millions of text posts. That's great! But there's no way you're going to be able to sift through each post one-by-one. Sure, you can do an emoji flag analysis to get a sense of national identities; similarly, you can look at n-grams to get a sense of popular phrases. These are both useful approaches, but they don't give you a great sense for communities of content. Are there a bunch of posts in the dataset that are essentially saying the same thing?
uCluster uses (relatively simple) NLP techniques to cluster datasets of text. It looks for similarities in the posts, and groups related posts — for some rough understanding of "related" — together. Not every post will be assigned to a cluster, and not every cluster will be a perfect grouping. As an exploratory tool, however, uCluster can be quite powerful. If a lot of posts are saying essentially the same thing — even with small variations in phrasing, punctuation, emojis, etc. — uCluster will notice.
When you run uCluster on your text dataset, each piece of text will either be assigned to a cluster (identified by a number >= 0) or to no cluster (identified by the number -1). You can think of each cluster as containing "similar" posts — whether that similarity arises from using one particularly rare word (or hashtag), from using a similar sentence structure, or from something else. You can probably get an intuition for "why" each cluster is as it is just by looking at its content.
There is no pre-set number of clusters, as clusters are determined "naturally" by looking at density. For more information about how this clustering works, see the "architecture" section below.
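To make the labeling convention concrete, here's a small sketch of how you might inspect uCluster's output. The posts and label values are invented for illustration: posts sharing a label belong to the same cluster, and -1 marks a post that wasn't assigned to any cluster.

```python
from collections import defaultdict

# Hypothetical posts and cluster labels, invented for illustration.
posts = [
    "great game last night!!",
    "Great game last night 🎉",
    "buy crypto now",
    "BUY CRYPTO NOW!!!",
    "just thinking out loud",
]
labels = [0, 0, 1, 1, -1]  # -1 = not assigned to any cluster

# Group posts by cluster ID to inspect each cluster's content.
clusters = defaultdict(list)
for post, label in zip(posts, labels):
    if label >= 0:
        clusters[label].append(post)

for cluster_id, members in sorted(clusters.items()):
    print(cluster_id, members)
```

Eyeballing each cluster's members this way is usually enough to see what the cluster "is about."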
Note: uCluster also supports exact clustering, in which it doesn't perform any NLP magic and simply creates clusters based on exact matches. For example, if there are multiple posts with the exact same content (case insensitive), they will be assigned to the same cluster. You can access this clusterer through the `exact-cluster` VisiData command, or through the `ExactClusterer` class in `ucluster/text_cluster.py`. Most of this document refers to the more advanced "fuzzy" clusterer.
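The core idea of exact clustering (grouping case-insensitive duplicates) can be sketched in a few lines. This is an independent illustration of the technique, not the `ExactClusterer` implementation itself; in particular, assigning -1 to posts with no duplicate is an assumption borrowed from the fuzzy clusterer's labeling convention.

```python
from collections import defaultdict

def exact_cluster(posts):
    """Assign posts with identical (case-insensitive) content to the same
    cluster ID. Posts with no duplicate get -1 here, mirroring the fuzzy
    clusterer's "no cluster" convention (an assumption, not uCluster's API)."""
    groups = defaultdict(list)
    for i, post in enumerate(posts):
        groups[post.lower()].append(i)

    labels = [-1] * len(posts)
    next_id = 0
    for indices in groups.values():
        if len(indices) > 1:  # only repeated content forms a cluster
            for i in indices:
                labels[i] = next_id
            next_id += 1
    return labels
```

For example, `exact_cluster(["Hello World", "hello world", "something else"])` puts the first two posts in the same cluster and leaves the third unclustered.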
uCluster is not a magic tool, and it's important to be mindful of its inherent limitations.
- Clusters aren't significant by themselves. The clusters exist only to point you towards potentially related content — there is nothing significant about the clusters themselves. For example, if two users' posts frequently cluster together, that is not a sign of coordinated inauthentic behavior; it's simply a sign that you might want to investigate those users (and those clusters) further.
- It handles English best (but it's still somewhat multilingual). uCluster (and the tools it uses under the hood) makes several assumptions about the input text, which tend to be most accurate for English text, usually accurate for Latin-script, space-separated languages (e.g., Portuguese), somewhat accurate for other space-separated languages (e.g., Russian), and least accurate for logograph-based languages (e.g., Chinese). You can use uCluster for multilingual datasets (this is one of its key design goals), but don't be surprised if you end up with a cluster that is characterized not by content, per se, but by language (e.g., a cluster that contains all the Arabic posts in your dataset).
- It uses a lot of memory. uCluster doesn't use pre-trained word vectors; this is what makes it possible to cluster multilingual content, and what allows it to perform so well on social media content. Instead, it trains its word vectors from scratch each time you run it, which uses quite a bit of memory. You might want to run uCluster on a powerful server somewhere — not on your laptop.
- It assumes posts are relatively short. uCluster works best on posts that are relatively short (think tweet-length). That makes it great for datasets from platforms like Twitter, Gab, Gettr, and Parler, where posts tend to be only a few sentences. It will be much less effective for clustering, say, Medium posts.
- It doesn't perform any meaningful pre-processing. uCluster doesn't strip HTML tags, links, or @mentions, for example. Be sure to clean your dataset to make it as close to "pure" text content as possible before running it through uCluster. (Don't strip emojis, though — anecdotally, they are very helpful for clustering, as they carry a lot of meaning!)
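A minimal pre-cleaning pass along these lines might look like the following. The specific regexes are illustrative, not part of uCluster; adapt them to whatever markup your dataset actually contains.

```python
import re

def clean_post(text):
    """Rough pre-cleaning sketch: strip HTML tags, links, and @mentions,
    but deliberately keep emojis (they carry useful clustering signal)."""
    text = re.sub(r"<[^>]+>", " ", text)       # HTML tags
    text = re.sub(r"https?://\S+", " ", text)  # links
    text = re.sub(r"@\w+", " ", text)          # @mentions
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace
```

For example, `clean_post("<b>Hi</b> @user check https://x.com 🎉")` yields `"Hi check 🎉"` — the markup, mention, and link are gone, but the emoji survives.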
When installed in VisiData, the uCluster plugin adds two commands that operate on columns: `fuzzy-cluster` and `exact-cluster`. Simply select the column that contains the text you would like to cluster, press Space, then type `fuzzy-cluster` or `exact-cluster`. uCluster will then create a new column containing the cluster IDs. This can take several minutes. (Don't worry — it's done asynchronously, though VisiData still seems to occasionally freeze on giant datasets.)
For most people, installing uCluster is as easy as running `pip3 install ucluster`, then adding `import ucluster.vd.plugin` to your `~/.visidatarc` file. If you want to install uCluster in a way that allows local development, follow the "Development Installation" steps below.
Note: Following these steps is only necessary if you want to be able to hack on uCluster's code itself.
The first step is to clone uCluster onto your local machine (`git clone git@github.com:stanfordio/ucluster.git`). Then follow the steps below.
Mac users will need to install OpenCV and OpenBLAS using Homebrew: `brew install openblas opencv`. (On Linux, see this guide.)
If you're on an x86 machine, everything should go relatively smoothly. With Poetry installed, simply run `poetry install`. If you want to use uCluster in your "main" VisiData environment (i.e., you don't want to have to activate the Poetry virtual environment every time), run `poetry config virtualenvs.create false` before running `poetry install`.
If you're on an M1 machine, things are a bit more complex. It doesn't seem possible to install SciPy (required by NLTK) using pip3 on M1 Macs, as it requires building SciPy from scratch. As a result, you'll need to use Conda. In the main project folder, run `conda env create -f env.yml`, then `conda activate ucluster`. Next, run `poetry env use $(which python3)` followed by `poetry install`.
If you run into strange compilation issues on Mac while running `poetry install`, you may need to explicitly point Poetry at your OpenBLAS and OpenCV installations. You can do this by running `SYSTEM_VERSION_COMPAT=1 LDFLAGS="-L/usr/local/opt/openblas/lib" CPPFLAGS="-I/usr/local/opt/openblas/include" OPENBLAS="$(brew --prefix openblas)" poetry install` instead of just `poetry install`.
uCluster will now be available in your Python environment.
First, ensure that the local `ucluster` package is installed in the same Python environment as VisiData. Then simply add `import ucluster.vd.plugin` to your `~/.visidatarc` file, as explained above.
This section will eventually contain a more detailed explanation for how uCluster works. For now, here's a brief overview.
- First, we throw all the text into a giant file and use it to train word vectors using FastText.
- Next, we use those word vectors to turn each post into a vector. By default, these post vectors are 100-dimensional.
- Finally, we run HDBSCAN on those post vectors to create the final clusters. HDBSCAN is a high-performance, density-based, general-purpose clustering algorithm.
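As a rough sketch of the middle step, one common way to turn word vectors into a post vector is to average them. Whether uCluster averages or uses some other pooling scheme is an assumption here; this is only meant to show the shape of the pipeline.

```python
import numpy as np

def post_vector(post, word_vectors, dim=100):
    """Embed a post as the mean of its words' vectors.

    word_vectors: dict mapping token -> np.ndarray of shape (dim,).
    In the real pipeline these would come from FastText, and the resulting
    post vectors would be fed to HDBSCAN; mean-pooling is an assumed
    simplification, not necessarily uCluster's exact approach."""
    vecs = [word_vectors[w] for w in post.lower().split() if w in word_vectors]
    if not vecs:  # no known words: fall back to the zero vector
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```

With toy 4-dimensional vectors, `post_vector("hello world", {"hello": np.ones(4), "world": np.zeros(4)}, dim=4)` yields a vector of 0.5s, the midpoint of its two words.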