Lecture 18: Introduction to natural language processing#

UBC 2024-25

Imports#

import os
import re
import string
import sys
import time

sys.path.append(os.path.join(os.path.abspath(".."), "code"))

from plotting_functions_unsup import *

import IPython
import numpy as np
import numpy.random as npr
import pandas as pd
from comat import CooccurrenceMatrix
from nltk.corpus import stopwords
from nltk.tokenize import sent_tokenize, word_tokenize
from preprocessing import MyPreprocessor
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

DATA_DIR = os.path.join(os.path.abspath(".."), "data/")
import nltk
nltk.download('stopwords')
[nltk_data] Downloading package stopwords to
[nltk_data]     /Users/kvarada/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
True



Learning objectives#

  • Broadly explain what natural language processing (NLP) is.

  • Name some common NLP applications.

  • Explain the general idea of a vector space model.

  • Explain the difference between different word representations: term-term co-occurrence matrix representation and Word2Vec representation.

  • Describe the reasons and benefits of using pre-trained embeddings.

  • Load and use pre-trained word embeddings to find word similarities and analogies.

  • Demonstrate biases in embeddings and learn to watch out for such biases in pre-trained embeddings.

  • Use word embeddings in text classification and document clustering using spaCy.

  • Explain the general idea of topic modeling.

  • Describe the input and output of topic modeling.

  • Carry out basic text preprocessing using spaCy.



What is Natural Language Processing (NLP)?#

  • What should a search engine return when asked the following question?

How often do you search every day?

Natural Language Processing (NLP) is a branch of machine learning focused on enabling computers to understand, interpret, and generate human language.

Everyday NLP applications

NLP in news

Often you’ll see NLP in news. Some examples:

Why is NLP hard?

  • Language is complex and subtle.

  • Language is ambiguous at different levels.

  • Language understanding involves common-sense knowledge and real-world reasoning.

  • All the problems related to representation and reasoning in artificial intelligence arise in this domain.

Lexical ambiguity



Referential ambiguity

Ambiguous news headlines

PROSTITUTES APPEAL TO POPE
  • Does appeal to mean make a serious or urgent request, or be attractive or interesting?

KICKING BABY CONSIDERED TO BE HEALTHY
  • Is kicking used as an adjective or a verb?

MILK DRINKERS ARE TURNING TO POWDER
  • Does turning mean becoming or taking up?

Overall goal

  • Give you a quick introduction to this important field of artificial intelligence, which extensively uses machine learning.

This is a huge field. We’ll focus on the following three topics, which are likely to be useful for you.

  • Word embeddings

  • Topic modeling

  • Basic text preprocessing



Motivation and context#

  • Do large language models, such as ChatGPT, “understand” your questions to some extent and provide useful responses?

  • What is required for a machine to “understand” language?

  • So far we have been talking about sentence or document representations.

  • This week, we’ll go one step back and talk about word representations.

  • Why? Because a word is a basic semantic unit of text, and to capture the meaning of text it is useful to capture word meaning (e.g., in terms of relationships between words).

Activity: Context and word meaning#

  • Pair up with the person next to you and try to guess the meanings of two made-up words: flibbertigibbet and groak.

  1. The plot twist was totally unexpected, making it a flibbertigibbet experience.

  2. Despite its groak special effects, the storyline captivated my attention till the end.

  3. I found the character development rather groak, failing to evoke empathy.

  4. The cinematography is flibbertigibbet, showcasing breathtaking landscapes.

  5. A groak narrative that could have been saved with better direction.

  6. This movie offers a flibbertigibbet blend of humour and action, a must-watch.

  7. Sadly, the movie’s potential was overshadowed by its groak pacing.

  8. The soundtrack complemented the film’s theme perfectly, adding to its flibbertigibbet charm.

  9. It’s rare to see such a flibbertigibbet performance by the lead actor.

  10. Despite high expectations, the film turned out to be quite groak.

  11. Flibbertigibbet dialogues and a gripping plot make this movie stand out.

  12. The film’s groak screenplay left much to be desired.

Attributions: Thanks to ChatGPT!

  • How did you infer the meaning of the words flibbertigibbet and groak?

  • Which specific words or phrases in the context helped you infer the meaning of these imaginary words?





What you did in the above activity relies on the distributional hypothesis.

You shall know a word by the company it keeps.

Firth, 1957
If A and B have almost identical environments we say that they are synonyms.
Harris, 1954

Example:

  • The plot twist was totally unexpected, making it a flibbertigibbet experience.

  • The plot twist was totally unexpected, making it a delightful experience.

Word representations: intro

  • A standard way to represent meanings of words is by placing them into a vector space.

  • Distances between words in the vector space indicate relationships between them.

(Attribution: Jurafsky and Martin 3rd edition)

Canada (A.G.) v. Igloo Vikski Inc. was a tariff code case that made its way to the SCC (Supreme Court of Canada). The case disputed the definition of hockey gloves as either "gloves, mittens, or mitts" or as "other articles of plastic."

In ML and Natural Language Processing (NLP) we are interested in

  • Modeling word meaning that allows us to

    • draw useful inferences to solve meaning-related problems

    • find relationship between words, e.g., which words are similar, which ones have positive or negative connotations

Example: Word similarity

  • Suppose you are carrying out sentiment analysis.

  • Consider the sentences below.

S1: This movie offers a flibbertigibbet blend of humour and action, a must-watch.

S2: This movie offers a delightful blend of humour and action, a must-watch.

  • Here we would like to capture the similarity between flibbertigibbet and delightful in the context of the sentiment analysis task.

How are word embeddings related to unsupervised learning?

  • They are closely related to extracting meaningful representations from raw data.

  • The word2vec algorithm is an unsupervised (or semi-supervised) method; we do not need any labeled data, but we use running text as the supervision signal.

Overview of dot product and cosine similarity

  • To create a vector space where similar words are close together, we need some metric to measure distances between representations.

  • We have used the Euclidean distance before for numeric features.

  • For sparse features, the most commonly used measures are the dot product and cosine similarity.

  • Let’s look at an example.

vec1 = np.array([2.0, 4.0, 3.0])
vec2 = np.array([5.0, 1.0, 0.0])

Euclidean distance

\[distance(vec1, vec2) = \sqrt{\sum_{i =1}^{n} (vec1_i - vec2_i)^2}\]
# Euclidean Distance
euclidean_distance = np.linalg.norm(vec1 - vec2)
print(f"Euclidean Distance: {euclidean_distance:.4f}")
Euclidean Distance: 5.1962

Dot product similarity:

\[similarity_{dot\ product}(vec1, vec2) = vec1 \cdot vec2\]

# Dot Product
dot_product = np.dot(vec1, vec2)
print(f"Dot Product: {dot_product:.4f}")
Dot Product: 14.0000

Cosine similarity: normalized version of the dot product.

\[similarity_{cosine}(vec1, vec2) = \frac{vec1 \cdot vec2}{\left\lVert vec1\right\rVert_2 \left\lVert vec2\right\rVert_2}\]

Where,

  • The L2 norm of \(vec1\) is the magnitude of \(vec1\): \(\left\lVert vec1\right\rVert_2 = \sqrt{\sum_i vec1_i^2}\)

  • The L2 norm of \(vec2\) is the magnitude of \(vec2\): \(\left\lVert vec2\right\rVert_2 = \sqrt{\sum_i vec2_i^2}\)

# Cosine Similarity
cosine_similarity = np.dot(vec1, vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2))
print(f"Cosine Similarity: {cosine_similarity:.4f}")
Cosine Similarity: 0.5098

Discussion question#

Suppose you are recommending items based on similarity between items. Given the query vector “Query” in the picture below and the three item vectors, determine how the items would be ranked under each of the three similarity measures below (a small numeric sketch follows the list):

  • Similarity based on Euclidean distance

  • Similarity based on dot product

  • Cosine similarity

  • Adapted from here.
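To see how these measures can disagree, here is a small numeric sketch. The 2-D query and item vectors below are made up for illustration; they are not the values from the picture.

import numpy as np

# Hypothetical query and item vectors (made-up values, for illustration only)
query = np.array([2.0, 2.0])
items = {"item 1": np.array([1.0, 1.0]),
         "item 2": np.array([4.0, 4.0]),
         "item 3": np.array([3.0, 0.5])}

for name, vec in items.items():
    euc = np.linalg.norm(query - vec)                          # Euclidean distance
    dot = np.dot(query, vec)                                   # dot product
    cos = dot / (np.linalg.norm(query) * np.linalg.norm(vec))  # cosine similarity
    print(f"{name}: Euclidean={euc:.2f}, dot product={dot:.2f}, cosine={cos:.2f}")

With these made-up numbers, item 1 and item 2 both point in the same direction as the query (cosine similarity of 1.0), item 2 wins on the dot product because of its larger magnitude, and item 1 is closest in Euclidean distance, so the three measures rank the items differently.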



Word embeddings#

  • Word embeddings are dense vector representations of words that capture semantic relationships by positioning similar words close to each other in vector space.

  • By converting words into continuous numerical vectors, word embeddings allow machine learning models to work with text data effectively.

  • Some commonly used algorithms to create embeddings are Word2Vec or GloVe. They learn rich and meaningful representations of words from large corpora.

Pre-trained embeddings

Creating these representations on your own is resource intensive, so people typically use “pretrained” embeddings. A number of pre-trained word embeddings are available; popular ones include word2vec (e.g., trained on Google News), GloVe, and fastText.

How to use pretrained embeddings

Let’s try Google News pre-trained embeddings.

  • You can download pre-trained embeddings from their original source.

  • Gensim provides an API to conveniently load them. You need to install the gensim package in the course environment.

conda install conda-forge::gensim

If you get errors when you import gensim, try to install the following in the course environment.
pip install --upgrade gensim scipy

import gensim
import gensim.downloader as api

print(list(api.info()["models"].keys()))
['fasttext-wiki-news-subwords-300', 'conceptnet-numberbatch-17-06-300', 'word2vec-ruscorpora-300', 'word2vec-google-news-300', 'glove-wiki-gigaword-50', 'glove-wiki-gigaword-100', 'glove-wiki-gigaword-200', 'glove-wiki-gigaword-300', 'glove-twitter-25', 'glove-twitter-50', 'glove-twitter-100', 'glove-twitter-200', '__testing_word2vec-matrix-synopsis']
# It'll take a while to run this when you try it out for the first time.

google_news_vectors = api.load('word2vec-google-news-300')
print("Size of vocabulary: ", len(google_news_vectors))
Size of vocabulary:  3000000
  • google_news_vectors above has 300 dimensional word vectors for 3,000,000 unique words/phrases from Google news.

What can we do with these word vectors?

  • Let’s examine word vector for the word UBC.

google_news_vectors["UBC"][:20]  # Representation of the word UBC
array([-0.3828125 , -0.18066406,  0.10644531,  0.4296875 ,  0.21582031,
       -0.10693359,  0.13476562, -0.08740234, -0.14648438, -0.09619141,
        0.02807617,  0.01409912, -0.12890625, -0.21972656, -0.41210938,
       -0.1875    , -0.11914062, -0.22851562,  0.19433594, -0.08642578],
      dtype=float32)
google_news_vectors["UBC"].shape
(300,)

It’s a short and dense vector (we do not see any zeros)!

Finding similar words

  • Given word \(w\), search in the vector space for the word closest to \(w\) as measured by cosine similarity.

google_news_vectors.most_similar("UBC")
[('UVic', 0.788647472858429),
 ('SFU', 0.7588527798652649),
 ('Simon_Fraser', 0.7356575131416321),
 ('UFV', 0.688043475151062),
 ('VIU', 0.6778583526611328),
 ('Kwantlen', 0.6771429181098938),
 ('UBCO', 0.6734487414360046),
 ('UPEI', 0.6731126308441162),
 ('UBC_Okanagan', 0.6709133982658386),
 ('Lakehead_University', 0.6622507572174072)]
google_news_vectors.most_similar("information")
[('info', 0.7363681793212891),
 ('infomation', 0.6800296306610107),
 ('infor_mation', 0.6733849048614502),
 ('informaiton', 0.6639009118080139),
 ('informa_tion', 0.6601256728172302),
 ('informationon', 0.633933424949646),
 ('informationabout', 0.6320979595184326),
 ('Information', 0.6186580061912537),
 ('informaion', 0.6093292236328125),
 ('details', 0.6063088774681091)]

If you want to extract all documents containing words similar to information, you could use this information.

Google News embeddings also support multi-word phrases.

google_news_vectors.most_similar("british_columbia")
[('alberta', 0.6111123561859131),
 ('canadian', 0.6086405515670776),
 ('ontario', 0.6031432151794434),
 ('erik', 0.5993570685386658),
 ('dominican_republic', 0.5925410985946655),
 ('costco', 0.5824530720710754),
 ('rhode_island', 0.5804311633110046),
 ('dreampharmaceuticals', 0.5755444169044495),
 ('canada', 0.5630921721458435),
 ('austin', 0.5623062252998352)]

Finding similarity scores between words

google_news_vectors.similarity("Canada", "hockey")
0.27610135
google_news_vectors.similarity("Japan", "hockey")
0.0019627889
word_pairs = [
    ("height", "tall"),
    ("height", "official"),
    ("pineapple", "mango"),
    ("pineapple", "juice"),
    ("sun", "robot"),
    ("GPU", "hummus"),
]
for pair in word_pairs:
    print(
        "The similarity between %s and %s is %0.3f"
        % (pair[0], pair[1], google_news_vectors.similarity(pair[0], pair[1]))
    )
The similarity between height and tall is 0.473
The similarity between height and official is 0.002
The similarity between pineapple and mango is 0.668
The similarity between pineapple and juice is 0.418
The similarity between sun and robot is 0.029
The similarity between GPU and hummus is 0.094

We are getting reasonable word similarity scores!!

Success of word2vec

  • This analogy example, used by the authors of the method, often comes up when people talk about word2vec.

  • MAN : KING :: WOMAN : ?

    • What is the word that is similar to WOMAN in the same sense as KING is similar to MAN?

  • Perform simple algebraic operations with the vector representations of words. \(\vec{X} = \vec{\text{KING}} − \vec{\text{MAN}} + \vec{\text{WOMAN}}\)

  • Search in the vector space for the word closest to \(\vec{X}\) as measured by cosine similarity.

../../_images/word_analogies1.png

(Credit: Mikolov et al. 2013)

def analogy(word1, word2, word3, model=google_news_vectors):
    """
    Returns analogy word using the given model.

    Parameters
    --------------
    word1 : (str)
        word1 in the analogy relation
    word2 : (str)
        word2 in the analogy relation
    word3 : (str)
        word3 in the analogy relation
    model :
        word embedding model

    Returns
    ---------------
        pd.dataframe
    """
    print("%s : %s :: %s : ?" % (word1, word2, word3))
    sim_words = model.most_similar(positive=[word3, word2], negative=[word1])
    return pd.DataFrame(sim_words, columns=["Analogy word", "Score"])
analogy("man", "king", "woman")
man : king :: woman : ?
Analogy word Score
0 queen 0.711819
1 monarch 0.618967
2 princess 0.590243
3 crown_prince 0.549946
4 prince 0.537732
5 kings 0.523684
6 Queen_Consort 0.523595
7 queens 0.518113
8 sultan 0.509859
9 monarchy 0.508741
analogy("Montreal", "Canadiens", "Vancouver")
Montreal : Canadiens :: Vancouver : ?
Analogy word Score
0 Canucks 0.821327
1 Vancouver_Canucks 0.750401
2 Calgary_Flames 0.705470
3 Leafs 0.695783
4 Maple_Leafs 0.691617
5 Thrashers 0.687504
6 Avs 0.681716
7 Sabres 0.665307
8 Blackhawks 0.664625
9 Habs 0.661023
analogy("Toronto", "UofT", "Vancouver")
Toronto : UofT :: Vancouver : ?
Analogy word Score
0 SFU 0.579245
1 UVic 0.576921
2 UBC 0.571431
3 Simon_Fraser 0.543464
4 Langara_College 0.541347
5 UVIC 0.520495
6 Grant_MacEwan 0.517273
7 UFV 0.514150
8 Ubyssey 0.510421
9 Kwantlen 0.503807
analogy("Gauss", "mathematician", "Bob_Dylan")
Gauss : mathematician :: Bob_Dylan : ?
Analogy word Score
0 singer_songwriter_Bob_Dylan 0.520782
1 poet 0.501191
2 Pete_Seeger 0.497143
3 Joan_Baez 0.492307
4 sitarist_Ravi_Shankar 0.491968
5 bluesman 0.490930
6 jazz_musician 0.489593
7 Joni_Mitchell 0.487740
8 Billie_Holiday 0.486664
9 Johnny_Cash 0.485722
analogy("USA", "pizza", "India") # Just for fun
USA : pizza :: India : ?
Analogy word Score
0 vada_pav 0.554463
1 jalebi 0.547090
2 idlis 0.540039
3 pav_bhaji 0.526046
4 dosas 0.521772
5 samosa 0.520700
6 idli 0.516858
7 pizzas 0.516199
8 tiffin 0.514347
9 chaat 0.508534

So you can imagine these models being useful in many meaning-related tasks.

(Credit: Mikolov et al. 2013)

Examples of semantic and syntactic relationships

../../_images/word_analogies2.png

(Credit: Mikolov 2013)

Implicit biases and stereotypes in word embeddings

analogy("man", "computer_programmer", "woman")
man : computer_programmer :: woman : ?
Analogy word Score
0 homemaker 0.562712
1 housewife 0.510505
2 graphic_designer 0.505180
3 schoolteacher 0.497949
4 businesswoman 0.493489
5 paralegal 0.492551
6 registered_nurse 0.490797
7 saleswoman 0.488163
8 electrical_engineer 0.479773
9 mechanical_engineer 0.475540

Most modern embeddings are debiased for some obvious biases, but you should still watch out for subtler ones.

Other popular methods to get embeddings

fastText

  • NLP library by Facebook research

  • Includes an algorithm which is an extension to word2vec

  • Helps deal with unknown words elegantly

  • Breaks words into several n-gram subwords

  • Example: trigram sub-words for berry are ber, err, rry

  • Embedding(berry) = embedding(ber) + embedding(err) + embedding(rry) (see the sketch below)
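Here is a minimal sketch of this idea using gensim’s FastText implementation. The toy corpus and hyperparameters are made up; the point is only that an out-of-vocabulary word still gets a vector composed from its character n-grams.

# A rough sketch, assuming gensim is installed (toy corpus and settings are made up)
from gensim.models import FastText

toy_sentences = [["fresh", "berry", "juice"],
                 ["berry", "smoothie", "with", "mango"],
                 ["fresh", "mango", "juice"]]

ft = FastText(sentences=toy_sentences, vector_size=25, window=3,
              min_count=1, min_n=3, max_n=3, epochs=50, seed=42)

# "berries" never occurs in the toy corpus, but fastText composes a vector
# for it from its character trigrams, so the lookup does not fail.
print(ft.wv["berries"].shape)
print("similarity(berry, berries):", ft.wv.similarity("berry", "berries"))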

(Optional) GloVe: Global Vectors for Word Representation

  • Starts with the term-term co-occurrence matrix (a toy sketch follows this list)

    • Co-occurrence can be interpreted as an indicator of semantic proximity of words

  • Takes advantage of global count statistics

  • Predicts co-occurrence ratios

  • Loss based on word frequency
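Here is a toy sketch of counting term-term co-occurrences within a small sliding window; arranging these counts into a matrix gives the term-term co-occurrence matrix. The corpus and window size are made up, and real GloVe training works with much larger corpora and weighted counts.

from collections import Counter

# Made-up toy corpus and window size, just to show what co-occurrence counts look like
toy_corpus = ["famous fashion model",
              "elegant fashion model",
              "probabilistic topic model"]
window = 2  # count pairs of words appearing within 2 positions of each other

cooc = Counter()
for doc in toy_corpus:
    tokens = doc.split()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            cooc[tuple(sorted((w, tokens[j])))] += 1

print(cooc.most_common(5))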



Word vectors with spaCy#

  • spaCy is a popular NLP library.

  • spaCy gives you access to word vectors with the bigger models: en_core_web_md or en_core_web_lg

  • spaCy’s pre-trained embeddings are trained on OntoNotes corpus.

  • This corpus has a collection of different styles of texts such as telephone conversations, newswire, newsgroups, broadcast news, broadcast conversation, weblogs, religious texts.

  • Let’s try it out.

import spacy

nlp = spacy.load("en_core_web_md")

doc = nlp("pineapple") # extract all interesting information about the document
doc.vector[:10]
array([ 0.65486 , -2.2584  ,  0.062793,  1.8801  ,  0.207   , -3.3299  ,
       -0.96833 ,  1.5131  , -3.7041  , -0.077749], dtype=float32)
doc.vector.shape
(300,)

Representing documents using word embeddings

  • Assuming that we have reasonable representations of words.

  • How do we represent meaning of paragraphs or documents?

  • Two simple approaches

    • Averaging embeddings

    • Concatenating embeddings

Averaging embeddings

All empty promises

\((embedding(all) + embedding(empty) + embedding(promises))/3\)

Average embeddings with spaCy

  • We can do this conveniently with spaCy.

  • We need en_core_web_md model to access word vectors.

  • You can download the model from the command line, within your course conda environment, as follows.

conda activate cpsc330
python -m spacy download en_core_web_md

We saw above how to access the word vector for an individual word in spaCy.

We can get average embeddings for a sentence or a document in spaCy as follows:

s = "All empty promises"
doc = nlp(s)
avg_sent_emb = doc.vector
print(avg_sent_emb.shape)
print("Vector for: {}\n{}".format((s), (avg_sent_emb[0:10])))
(300,)
Vector for: All empty promises
[-0.459937    1.9785299   1.0319      1.5123      1.4806334   2.73183
  1.204       1.1724668  -3.5227966  -0.05656664]
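As a sanity check, here is a small sketch showing that doc.vector above is just the average of the individual token vectors (spaCy’s default behaviour for models with word vectors).

import numpy as np
import spacy

nlp = spacy.load("en_core_web_md")  # same model as above
doc = nlp("All empty promises")

# Average the token vectors ourselves and compare with spaCy's doc.vector
manual_avg = np.mean([token.vector for token in doc], axis=0)
print(np.allclose(manual_avg, doc.vector))  # expected: True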

Check out Appendix_B to see an example of using these embeddings for text classification. That said, the sentence transformers you used in your clustering homework provide better sentence representations than these averaged word embeddings.
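For comparison, here is a minimal sketch of getting sentence embeddings with the sentence-transformers package, assuming it is installed; the model name "all-MiniLM-L6-v2" below is a commonly available pre-trained model, not something set up earlier in this lecture.

# A rough sketch, assuming the sentence-transformers package is installed
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("all-MiniLM-L6-v2")
sents = ["All empty promises", "Nothing but hollow assurances"]
emb = model.encode(sents)  # one dense vector per sentence

print(emb.shape)
print(cosine_similarity(emb)[0, 1])  # similarity between the two sentences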



Break (5 min)#





Topic modeling#

Why topic modeling?

Topic modeling introduction activity (~5 mins)

  • Consider the following documents.

toy_df = pd.read_csv(DATA_DIR + "toy_clustering.csv")
toy_df
text
0 famous fashion model
1 elegant fashion model
2 fashion model at famous probabilistic topic mo...
3 fresh elegant fashion model
4 famous elegant fashion model
5 probabilistic conference
6 creative probabilistic model
7 model diet apple kiwi nutrition
8 probabilistic model
9 kiwi health nutrition
10 fresh apple kiwi health diet
11 health nutrition
12 fresh apple kiwi juice nutrition
13 probabilistic topic model conference
14 probabilistic topi model

Discuss the following questions with your neighbour

  1. Suppose you are asked to cluster these documents manually. How many clusters would you identify?

  2. What are the prominent words in each cluster?

  3. Are there documents which are a mixture of multiple clusters?

Topic modeling motivation#

  • Humans are pretty good at reading and understanding a document and answering questions such as

    • What is it about?

    • Which documents is it related to?

  • What if you’re given a large collection of documents on a variety of topics?

A corpus of news articles

Example: A corpus of food magazines

A corpus of scientific articles

(Credit: Dave Blei’s presentation)

  • It would take years to read all documents and organize and categorize them so that they are easy to search.

  • You need an automated way

    • to get an idea of what’s going on in the data or

    • to pull documents related to a certain topic

  • Topic modeling gives you an ability to summarize the major themes in a large collection of documents (corpus).

    • Example: The major themes in a collection of news articles could be

      • politics

      • entertainment

      • sports

      • technology

  • Topic modeling is a great EDA tool to get a sense of what’s going on in a large corpus.

  • Some examples

    • If you want to pull documents related to a particular lawsuit.

    • You want to examine people’s sentiment towards a particular candidate and/or political party and so you want to pull tweets or Facebook posts related to election.

How do you do topic modeling?

  • A common tool to solve such problems is unsupervised ML methods.

  • Given the hyperparameter \(K\), the goal of topic modeling is to describe a set of documents using \(K\) “topics”.

  • In unsupervised setting, the input of topic modeling is

    • A large collection of documents

    • A value for the hyperparameter \(K\) (e.g., \(K = 3\))

  • and the output is

    1. Topic-words association

      • For each topic, what words describe that topic?

    2. Document-topics association

      • For each document, what topics are expressed by the document?

Topic modeling: Example

  • Topic-words association

    • For each topic, what words describe that topic?

    • A topic is a mixture of words.

Topic modeling: Example

  • Document-topics association

    • For each document, what topics are expressed by the document?

    • A document is a mixture of topics.

Topic modeling: Input and output

  • Input

    • A large collection of documents

    • A value for the hyperparameter \(K\) (e.g., \(K = 3\))

  • Output

    • For each topic, what words describe that topic?

    • For each document, what topics are expressed by the document?


Topic modeling examples

Topic modeling: Input

Credit: David Blei’s presentation

Topic modeling: output

(Credit: David Blei’s presentation)

Topic modeling: output with interpretation

  • Assigning labels is a human thing.

(Credit: David Blei’s presentation)

LDA topics in Yale Law Journal

(Credit: David Blei’s paper)

LDA topics in social media#

(Credit: Health topics in social media)

  • Based on the tools in your toolbox, what would you use for topic modeling?







In this lecture, I will demonstrate how to perform topic modeling using the Latent Dirichlet Allocation model implemented in sklearn. We won’t delve into the inner workings of the model, as it falls outside the scope of this course. Instead, our objective is to understand how to apply it to your specific problems and comprehend the model’s input and output.

Topic modeling toy example#

Let’s work with a toy example.

toy_df = pd.read_csv(DATA_DIR + "toy_lda_data.csv")
toy_df
doc_id text
0 1 famous fashion model
1 2 fashion model pattern
2 3 fashion model probabilistic topic model confer...
3 4 famous fashion model
4 5 fresh fashion model
5 6 famous fashion model
6 7 famous fashion model
7 8 famous fashion model
8 9 famous fashion model
9 10 creative fashion model
10 11 famous fashion model
11 12 famous fashion model
12 13 fashion model probabilistic topic model confer...
13 14 probabilistic topic model
14 15 probabilistic model pattern
15 16 probabilistic topic model
16 17 probabilistic topic model
17 18 probabilistic topic model
18 19 probabilistic topic model
19 20 probabilistic topic model
20 21 probabilistic topic model
21 22 fashion model probabilistic topic model confer...
22 23 apple kiwi nutrition
23 24 kiwi health nutrition
24 25 fresh apple health
25 26 probabilistic topic model
26 27 creative health nutrition
27 28 probabilistic topic model
28 29 probabilistic topic model
29 30 hidden markov model probabilistic
30 31 probabilistic topic model
31 32 probabilistic topic model
32 33 apple kiwi nutrition
33 34 apple kiwi health
34 35 apple kiwi nutrition
35 36 fresh kiwi health
36 37 apple kiwi nutrition
37 38 apple kiwi nutrition
38 39 apple kiwi nutrition
  • Input to the LDA topic model is bag-of-words representation of text.

  • Let’s create bag-of-words representation of “text” column.

from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer(stop_words="english")
toy_X = vec.fit_transform(toy_df["text"])
toy_X
<39x15 sparse matrix of type '<class 'numpy.int64'>'
	with 124 stored elements in Compressed Sparse Row format>
vocab = vec.get_feature_names_out() # vocabulary
vocab
array(['apple', 'conference', 'creative', 'famous', 'fashion', 'fresh',
       'health', 'hidden', 'kiwi', 'markov', 'model', 'nutrition',
       'pattern', 'probabilistic', 'topic'], dtype=object)
len(vocab)
15

Let’s try to create a topic model with sklearn’s LatentDirichletAllocation.

from sklearn.decomposition import LatentDirichletAllocation

n_topics = 3 # number of topics
lda = LatentDirichletAllocation(
    n_components=n_topics, learning_method="batch", max_iter=10, random_state=0
)
lda.fit(toy_X) 
document_topics = lda.transform(toy_X)
  • Once we have a fitted model we can get the word-topic association and document-topic association

  • Word-topic association

    • lda.components_ gives us the weights associated with each word for each topic. In other words, it tells us which word is important for which topic.

  • Document-topic association

    • Calling transform on the data gives us document-topic association.

lda.components_
array([[ 0.33380754,  3.31038074,  0.33476534,  0.33397112,  0.36695134,
         0.33439238,  0.33381373,  0.35771821,  0.33380649,  0.35771821,
        17.78521263,  0.33380761,  0.3573886 , 17.31634363, 15.32791718],
       [ 8.33224516,  0.33400489,  2.2173627 ,  0.33411086,  0.33732465,
         3.28753559,  5.33223002,  0.33435326,  9.33224759,  0.33435326,
         0.33797555,  8.3322447 ,  0.33462759,  0.33440682,  0.33425967],
       [ 0.3339473 ,  0.35561437,  0.44787197,  8.33191802, 14.29572402,
         0.37807203,  0.33395626,  1.30792853,  0.33394593,  1.30792853,
        13.87681182,  0.33394769,  2.30798381,  0.34924955,  0.33782315]])
print("lda.components_.shape: {}".format(lda.components_.shape))
lda.components_.shape: (3, 15)
import plotly.express as px
plot_lda_w_vectors(lda.components_, ['topic 0', 'topic 1', 'topic 2'], vocab, width=800, height=600)

Let’s look at the words with highest weights for each topic more systematically.

np.argsort(lda.components_, axis=1)
array([[ 8,  0, 11,  6,  3,  5,  2, 12,  7,  9,  4,  1, 14, 13, 10],
       [ 1,  3, 14,  7,  9, 13, 12,  4, 10,  2,  5,  6, 11,  0,  8],
       [ 8,  0, 11,  6, 14, 13,  1,  5,  2,  9,  7, 12,  3, 10,  4]])
sorting = np.argsort(lda.components_, axis=1)[:, ::-1]
feature_names = np.array(vec.get_feature_names_out())
import mglearn

mglearn.tools.print_topics(
    topics=range(3),
    feature_names=feature_names,
    sorting=sorting,
    topics_per_chunk=5,
    n_words=10,
)
topic 0       topic 1       topic 2       
--------      --------      --------      
model         kiwi          fashion       
probabilistic apple         model         
topic         nutrition     famous        
conference    health        pattern       
fashion       fresh         hidden        
markov        creative      markov        
hidden        model         creative      
pattern       fashion       fresh         
creative      pattern       conference    
fresh         probabilistic probabilistic 
  • Here is how we can interpret the topics

    • Topic 0 \(\rightarrow\) ML modeling

    • Topic 1 \(\rightarrow\) fruit and nutrition

    • Topic 2 \(\rightarrow\) fashion

Let’s look at the distribution of topics for a document.

toy_df.iloc[0]['text']
'famous fashion model'
document_topics[0]
array([0.08791477, 0.08338644, 0.82869879])

This document is made up of

  • ~83% topic 2

  • ~9% topic 0

  • ~8% topic 1.

Topic modeling pipeline#

  • Above we worked with toy data. In the real world, we usually need to preprocess the data before passing it to LDA.

  • Here are typical steps if you want to carry out topic modeling on real-world data.

    • Preprocess your corpus.

    • Train LDA.

    • Interpret your topics.

Data

import wikipedia

queries = [
    "Artificial Intelligence",
    "unsupervised learning",
    "Supreme Court of Canada",
    "Peace, Order, and Good Government",
    "Canadian constitutional law",
    "ice hockey",
]
wiki_dict = {"wiki query": [], "text": []}
for i in range(len(queries)):
    wiki_dict["text"].append(wikipedia.page(queries[i]).content)
    wiki_dict["wiki query"].append(queries[i])

wiki_df = pd.DataFrame(wiki_dict)
wiki_df
wiki query text
0 Artificial Intelligence Artificial intelligence (AI), in its broadest ...
1 unsupervised learning Supervised learning (SL) is a paradigm in mach...
2 Supreme Court of Canada The Supreme Court of Canada (SCC; French: Cour...
3 Peace, Order, and Good Government In many Commonwealth jurisdictions, the phrase...
4 Canadian constitutional law Canadian constitutional law (French: droit con...
5 ice hockey Ice hockey (or simply hockey in North America)...

Preprocessing the corpus

  • Preprocessing is crucial!

  • Tokenization, converting text to lower case

  • Removing punctuation and stopwords

  • Discarding words with length < threshold or word frequency < threshold

  • Possibly lemmatization: Consider the lemmas instead of inflected forms.

  • Depending upon your application, restrict to specific parts of speech;

    • For example, only consider nouns, verbs, and adjectives

We’ll use spaCy for preprocessing. Check out available token attributes here.

import spacy

nlp = spacy.load("en_core_web_md", disable=["parser", "ner"])
def preprocess(
    doc,
    min_token_len=2,
    irrelevant_pos=["ADV", "PRON", "CCONJ", "PUNCT", "PART", "DET", "ADP", "SPACE"],
):
    """
    Given text, min_token_len, and irrelevant_pos carry out preprocessing of the text
    and return a preprocessed string.

    Parameters
    -------------
    doc : (spaCy doc object)
        the spacy doc object of the text
    min_token_len : (int)
        min_token_length required
    irrelevant_pos : (list)
        a list of irrelevant pos tags

    Returns
    -------------
    (str) the preprocessed text
    """

    clean_text = []

    for token in doc:
        if (
            token.is_stop == False  # Check if it's not a stopword
            and len(token) > min_token_len  # Check if the word meets minimum threshold
            and token.pos_ not in irrelevant_pos
        ):  # Check if the POS is in the acceptable POS tags
            lemma = token.lemma_  # Take the lemma of the word
            clean_text.append(lemma.lower())
    return " ".join(clean_text)
wiki_df["text_pp"] = [preprocess(text) for text in nlp.pipe(wiki_df["text"])]
wiki_df
wiki query text text_pp
0 Artificial Intelligence Artificial intelligence (AI), in its broadest ... artificial intelligence broad sense intelligen...
1 unsupervised learning Supervised learning (SL) is a paradigm in mach... supervised learning paradigm machine learning ...
2 Supreme Court of Canada The Supreme Court of Canada (SCC; French: Cour... supreme court canada scc french cour suprême c...
3 Peace, Order, and Good Government In many Commonwealth jurisdictions, the phrase... commonwealth jurisdiction phrase peace order g...
4 Canadian constitutional law Canadian constitutional law (French: droit con... canadian constitutional law french droit const...
5 ice hockey Ice hockey (or simply hockey in North America)... ice hockey hockey north america team sport pla...
from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer(stop_words='english')
X = vec.fit_transform(wiki_df["text_pp"])
from sklearn.decomposition import LatentDirichletAllocation

n_topics = 3
lda = LatentDirichletAllocation(
    n_components=n_topics, learning_method="batch", max_iter=10, random_state=0
)
document_topics = lda.fit_transform(X)
print("lda.components_.shape: {}".format(lda.components_.shape))
lda.components_.shape: (3, 4011)
sorting = np.argsort(lda.components_, axis=1)[:, ::-1]
feature_names = np.array(vec.get_feature_names_out())
import mglearn

mglearn.tools.print_topics(
    topics=range(3),
    feature_names=feature_names,
    sorting=sorting,
    topics_per_chunk=5,
    n_words=10,
)
topic 0       topic 1       topic 2       
--------      --------      --------      
intelligence  court         hockey        
problem       law           player        
machine       provincial    ice           
artificial    displaystyle  team          
human         government    league        
datum         power         play          
learning      federal       game          
use           canada        puck          
include       case          penalty       
network       justice       nhl           
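We can also inspect the document-topic association for the Wikipedia articles. Here is a small sketch that puts document_topics from above into a DataFrame, one row per article.

# Document-topic distributions for the Wikipedia articles (using document_topics from above)
doc_topic_df = pd.DataFrame(document_topics, columns=["topic 0", "topic 1", "topic 2"])
doc_topic_df.insert(0, "wiki query", wiki_df["wiki query"])
print(doc_topic_df.round(3))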

Check out some recent topic modeling tools





Basic text preprocessing [video]#

  • Why do we need preprocessing?

    • Text data is unstructured and messy.

    • We need to “normalize” it before we do anything interesting with it.

  • Example:

    • Lemma: Same stem, same part-of-speech, roughly the same meaning

      • Vancouver’s → Vancouver

      • computers → computer

      • rising, rises, rose → rise

Tokenization#

  • Sentence segmentation

    • Split text into sentences

  • Word tokenization

    • Split sentences into words

Sentence segmentation

MDS is a Master's program at UBC in British Columbia. MDS teaching team is truly multicultural!! Dr. George did his Ph.D. in Scotland. Dr. Timbers, Dr. Ostblom, Dr. Rodríguez-Arelis, and Dr. Kolhatkar did theirs in Canada. Dr. Gelbart did his PhD in the U.S.
  • How many sentences are there in this text?

### Let's do sentence segmentation on "."
text = (
    "UBC is one of the well known universities in British Columbia. "
    "UBC CS teaching team is truly multicultural!! "
    "Dr. Toti completed her Ph.D. in Italy."
    "Dr. Moosvi, Dr. Kolhatkar, and Dr. Ola completed theirs in Canada."
    "Dr. Heeren and Dr. Lécuyer completed theirs in the U.S."
)
print(text.split("."))
['UBC is one of the well known universities in British Columbia', ' UBC CS teaching team is truly multicultural!! Dr', ' Toti completed her Ph', 'D', ' in Italy', 'Dr', ' Moosvi, Dr', ' Kolhatkar, and Dr', ' Ola completed theirs in Canada', 'Dr', ' Heeren and Dr', ' Lécuyer completed theirs in the U', 'S', '']
  • In English, period (.) is quite ambiguous. (In Chinese, it is unambiguous.)

    • Abbreviations like Dr., U.S., Inc.

    • Numbers like 60.44%, 0.98

  • ! and ? are relatively ambiguous.

  • How about writing regular expressions? (A rough sketch follows this list.)

  • A common way is using off-the-shelf models for sentence segmentation.
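Before reaching for a library, here is a rough regex sketch. It is not robust: it still splits after abbreviations such as Dr. when they are followed by a capitalized word, which is exactly why off-the-shelf models are preferred.

import re

# A naive rule: split after ., ! or ? when followed by whitespace and a capital letter
naive_sents = re.split(r"(?<=[.!?])\s+(?=[A-Z])", text)
print(naive_sents)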

### Let's try to do sentence segmentation using nltk
from nltk.tokenize import sent_tokenize

sent_tokenized = sent_tokenize(text)
print(sent_tokenized)
['UBC is one of the well known universities in British Columbia.', 'UBC CS teaching team is truly multicultural!!', 'Dr. Toti completed her Ph.D. in Italy.Dr.', 'Moosvi, Dr. Kolhatkar, and Dr. Ola completed theirs in Canada.Dr.', 'Heeren and Dr. Lécuyer completed theirs in the U.S.']

Word tokenization

MDS is a Master's program at UBC in British Columbia.
  • How many words are there in this sentence?

  • Is whitespace a sufficient condition for a word boundary?

MDS is a Master's program at UBC in British Columbia.
  • What’s our definition of a word?

    • Should British Columbia be one word or two words?

    • Should punctuation be considered a separate word?

    • What about the punctuations in U.S.?

    • What do we do with words like Master's?

  • This process of identifying word boundaries is referred to as tokenization.

  • You can use regex but better to do it with off-the-shelf ML models.

### Let's do word segmentation on white spaces
print("Splitting on whitespace: ", [sent.split() for sent in sent_tokenized])

### Let's try to do word segmentation using nltk
from nltk.tokenize import word_tokenize

word_tokenized = [word_tokenize(sent) for sent in sent_tokenized]
# This is similar to the input format of word2vec algorithm
print("\n\n\nTokenized: ", word_tokenized)
Splitting on whitespace:  [['UBC', 'is', 'one', 'of', 'the', 'well', 'known', 'universities', 'in', 'British', 'Columbia.'], ['UBC', 'CS', 'teaching', 'team', 'is', 'truly', 'multicultural!!'], ['Dr.', 'Toti', 'completed', 'her', 'Ph.D.', 'in', 'Italy.Dr.'], ['Moosvi,', 'Dr.', 'Kolhatkar,', 'and', 'Dr.', 'Ola', 'completed', 'theirs', 'in', 'Canada.Dr.'], ['Heeren', 'and', 'Dr.', 'Lécuyer', 'completed', 'theirs', 'in', 'the', 'U.S.']]



Tokenized:  [['UBC', 'is', 'one', 'of', 'the', 'well', 'known', 'universities', 'in', 'British', 'Columbia', '.'], ['UBC', 'CS', 'teaching', 'team', 'is', 'truly', 'multicultural', '!', '!'], ['Dr.', 'Toti', 'completed', 'her', 'Ph.D.', 'in', 'Italy.Dr', '.'], ['Moosvi', ',', 'Dr.', 'Kolhatkar', ',', 'and', 'Dr.', 'Ola', 'completed', 'theirs', 'in', 'Canada.Dr', '.'], ['Heeren', 'and', 'Dr.', 'Lécuyer', 'completed', 'theirs', 'in', 'the', 'U.S', '.']]

Word segmentation

For some languages you need much more sophisticated tokenizers.

  • For languages such as Chinese, there are no spaces between words.

    • jieba is a popular tokenizer for Chinese.

  • German doesn’t separate compound words.

    • Example: rindfleischetikettierungsüberwachungsaufgabenübertragungsgesetz

    • (the law for the delegation of monitoring beef labeling)

Types and tokens

  • Usually in NLP, we talk about

    • Type: an element in the vocabulary

    • Token: an instance of that type in running text

Exercise for you

UBC is located in the beautiful province of British Columbia. It's very close to the U.S. border. You'll get to the USA border in about 45 mins by car.
  • Consider the example above. (A rough counting sketch follows the questions below.)

    • How many types? (task dependent)

    • How many tokens?
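Here is one rough way to count tokens and types with nltk’s word_tokenize. Whether to lowercase and whether punctuation counts as a type are task-dependent choices, which is why the number of types has no single right answer.

from nltk.tokenize import word_tokenize

example = ("UBC is located in the beautiful province of British Columbia. "
           "It's very close to the U.S. border. "
           "You'll get to the USA border in about 45 mins by car.")

tokens = word_tokenize(example)
types = set(token.lower() for token in tokens)  # one of several reasonable definitions
print("Number of tokens:", len(tokens))
print("Number of types:", len(types))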

Other commonly used preprocessing steps#

  • Punctuation and stopword removal

  • Stemming and lemmatization

Punctuation and stopword removal

  • The most frequently occurring words in English are not very useful in many NLP tasks.

    • Example: the, is, a, and punctuation

  • Probably not very informative in many tasks

# Let's use `nltk.stopwords`.
# Add punctuations to the list.
stop_words = list(set(stopwords.words("english")))
import string

punctuation = string.punctuation
stop_words += list(punctuation)
# stop_words.extend(['``','`','br','"',"”", "''", "'s"])
print(stop_words)
['or', 'any', 'same', "wouldn't", "she's", 'having', 'doesn', "isn't", 'there', 'few', 'has', 'in', 'hers', 's', 'and', 'ours', "hasn't", 'off', 'our', 'him', "couldn't", "you'll", 'while', 'out', 'just', 'through', 'isn', 'for', 'a', 'were', 'be', 'are', 'will', "won't", 'you', "needn't", 'the', 'it', 'own', 'above', "wasn't", 'against', 'that', "should've", 'himself', 'very', "didn't", 'which', 'theirs', 'each', 'yourself', 'we', "don't", 'ourselves', 'an', 'her', 'so', 'with', 'couldn', 'who', 'before', 'haven', 'until', 'under', 'once', 'does', 'down', 'needn', 'll', 'to', 'further', 'about', 'myself', 'weren', 'during', 'he', 'should', 'mightn', 'had', "doesn't", 'over', 'whom', 'its', "weren't", 'of', 'such', 'as', 'below', "you're", 'been', "haven't", 'only', 'y', 'if', 'hasn', 'not', 'd', 'between', 'am', 'do', 'themselves', "it's", 'ma', 'this', 'where', 'other', "hadn't", 'nor', 'now', 'she', 'from', 'is', 'here', 'why', 'yourselves', 'wouldn', 'itself', 'didn', 'up', 'me', 'what', 'again', 't', "shan't", 'at', 'shan', 'wasn', 'then', 'herself', 'have', 'm', 'than', 'shouldn', 're', 'into', 'can', "mightn't", 'too', 'won', 'i', 'they', "that'll", "aren't", 'on', 'how', 'aren', "mustn't", 'o', 'hadn', "you'd", 'more', 'both', 'because', 'some', 'when', 'did', 'those', 'no', 'mustn', 'them', 'my', 'ain', "shouldn't", 'being', 'doing', 'by', 'was', 'your', 'their', "you've", 'yours', 'after', 'his', 'most', 'but', 'don', 'all', 'these', 've', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', '{', '|', '}', '~']
### Get rid of stop words
preprocessed = []
for sent in word_tokenized:
    for token in sent:
        token = token.lower()
        if token not in stop_words:
            preprocessed.append(token)
print(preprocessed)
['ubc', 'one', 'well', 'known', 'universities', 'british', 'columbia', 'ubc', 'cs', 'teaching', 'team', 'truly', 'multicultural', 'dr.', 'toti', 'completed', 'ph.d.', 'italy.dr', 'moosvi', 'dr.', 'kolhatkar', 'dr.', 'ola', 'completed', 'canada.dr', 'heeren', 'dr.', 'lécuyer', 'completed', 'u.s']

Lemmatization

  • For many NLP tasks (e.g., web search) we want to ignore morphological differences between words

    • Example: If your search term is “studying for ML quiz” you might want to include pages containing “tips to study for an ML quiz” or “here is how I studied for my ML quiz”

  • Lemmatization converts inflected forms into the base form.

import nltk

nltk.download("wordnet")
[nltk_data] Downloading package wordnet to /Users/kvarada/nltk_data...
[nltk_data]   Package wordnet is already up-to-date!
True
nltk.download('omw-1.4')
[nltk_data] Downloading package omw-1.4 to /Users/kvarada/nltk_data...
[nltk_data]   Package omw-1.4 is already up-to-date!
True
# nltk has a lemmatizer
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()
print("Lemma of studying: ", lemmatizer.lemmatize("studying", "v"))
print("Lemma of studied: ", lemmatizer.lemmatize("studied", "v"))
Lemma of studying:  study
Lemma of studied:  study

Stemming

  • Stemming has a similar purpose, but it is a crude chopping of affixes

    • automates, automatic, automation all reduced to automat.

  • Usually these reduced forms (stems) are not actual words themselves.

  • A popular stemming algorithm for English is PorterStemmer.

  • Beware that it can be aggressive sometimes.

from nltk.stem.porter import PorterStemmer

text = (
    "UBC is located in the beautiful province of British Columbia... "
    "It's very close to the U.S. border."
)
ps = PorterStemmer()
tokenized = word_tokenize(text)
stemmed = [ps.stem(token) for token in tokenized]
print("Before stemming: ", text)
print("\n\nAfter stemming: ", " ".join(stemmed))
Before stemming:  UBC is located in the beautiful province of British Columbia... It's very close to the U.S. border.


After stemming:  ubc is locat in the beauti provinc of british columbia ... it 's veri close to the u.s. border .

Other tools for preprocessing#

spaCy

  • Industrial strength NLP library.

  • Lightweight, fast, and convenient to use.

  • spaCy does many things that we did above in one line of code!

  • Also has multi-lingual support.

import spacy

# Load the model
nlp = spacy.load("en_core_web_md")
text = (
    "MDS is a Master's program at UBC in British Columbia. "
    "MDS teaching team is truly multicultural!! "
    "Dr. George did his Ph.D. in Scotland. "
    "Dr. Timbers, Dr. Ostblom, Dr. Rodríguez-Arelis, and Dr. Kolhatkar did theirs in Canada. "
    "Dr. Gelbart did his PhD in the U.S."
)

doc = nlp(text)
# Accessing tokens
tokens = [token for token in doc]
print("\nTokens: ", tokens)

# Accessing lemma
lemmas = [token.lemma_ for token in doc]
print("\nLemmas: ", lemmas)

# Accessing pos
pos = [token.pos_ for token in doc]
print("\nPOS: ", pos)
Tokens:  [MDS, is, a, Master, 's, program, at, UBC, in, British, Columbia, ., MDS, teaching, team, is, truly, multicultural, !, !, Dr., George, did, his, Ph.D., in, Scotland, ., Dr., Timbers, ,, Dr., Ostblom, ,, Dr., Rodríguez, -, Arelis, ,, and, Dr., Kolhatkar, did, theirs, in, Canada, ., Dr., Gelbart, did, his, PhD, in, the, U.S.]

Lemmas:  ['mds', 'be', 'a', 'Master', "'s", 'program', 'at', 'UBC', 'in', 'British', 'Columbia', '.', 'mds', 'teaching', 'team', 'be', 'truly', 'multicultural', '!', '!', 'Dr.', 'George', 'do', 'his', 'ph.d.', 'in', 'Scotland', '.', 'Dr.', 'Timbers', ',', 'Dr.', 'Ostblom', ',', 'Dr.', 'Rodríguez', '-', 'Arelis', ',', 'and', 'Dr.', 'Kolhatkar', 'do', 'theirs', 'in', 'Canada', '.', 'Dr.', 'Gelbart', 'do', 'his', 'phd', 'in', 'the', 'U.S.']

POS:  ['NOUN', 'AUX', 'DET', 'PROPN', 'PART', 'NOUN', 'ADP', 'PROPN', 'ADP', 'PROPN', 'PROPN', 'PUNCT', 'NOUN', 'NOUN', 'NOUN', 'AUX', 'ADV', 'ADJ', 'PUNCT', 'PUNCT', 'PROPN', 'PROPN', 'VERB', 'PRON', 'NOUN', 'ADP', 'PROPN', 'PUNCT', 'PROPN', 'PROPN', 'PUNCT', 'PROPN', 'PROPN', 'PUNCT', 'PROPN', 'PROPN', 'PUNCT', 'PROPN', 'PUNCT', 'CCONJ', 'PROPN', 'PROPN', 'VERB', 'PRON', 'ADP', 'PROPN', 'PUNCT', 'PROPN', 'PROPN', 'VERB', 'PRON', 'NOUN', 'ADP', 'DET', 'PROPN']

Other typical NLP tasks#

In order to understand text, we usually are interested in extracting information from text. Some common tasks in NLP pipeline are:

  • Part of speech tagging

    • Assigning part-of-speech tags to all words in a sentence.

  • Named entity recognition

    • Labelling named “real-world” objects, like persons, companies or locations.

  • Coreference resolution

    • Deciding whether two strings (e.g., UBC vs University of British Columbia) refer to the same entity

  • Dependency parsing

    • Representing grammatical structure of a sentence

Extracting named-entities using spaCy

from spacy import displacy

doc = nlp(
    "University of British Columbia "
    "is located in the beautiful "
    "province of British Columbia."
)
displacy.render(doc, style="ent")
# Text and label of named entity span
print("Named entities:\n", [(ent.text, ent.label_) for ent in doc.ents])
print("\nORG means: ", spacy.explain("ORG"))
print("GPE means: ", spacy.explain("GPE"))
[displacy rendering: “University of British Columbia” highlighted as ORG and “British Columbia” as GPE]
Named entities:
 [('University of British Columbia', 'ORG'), ('British Columbia', 'GPE')]

ORG means:  Companies, agencies, institutions, etc.
GPE means:  Countries, cities, states

Dependency parsing using spaCy

doc = nlp("I like cats")
displacy.render(doc, style="dep")
[displacy rendering: dependency parse of “I like cats” with I (PRON) as nsubj and cats (NOUN) as dobj of like (VERB)]

Many other things possible

  • spaCy is a powerful tool

  • You can build your own rule-based searches.

  • You can also access word vectors using spaCy with bigger models. (Currently we are using the en_core_web_md model.)





Summary#

  • NLP is a big and very active field.

  • We broadly explored three topics:

    • Word embeddings using pretrained models

    • Topic modeling

    • Basic text preprocessing

Here are some resources if you want to get into NLP.