N-gram language models or how to write scientific papers


The goal of this project is to train a language model on a corpus of ArXiv science articles and see if we can generate a new one! This work is inspired by an exercise from the NLP Course For You.


data by neelshah18 from here

Disclaimer: this has nothing to do with actual science. But it’s fun, so who cares?!

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Alternative manual download link: https://yadi.sk/d/_nGyU2IajjR9-w
# !wget "https://www.dropbox.com/s/99az9n1b57qkd9j/arxivData.json.tar.gz?dl=1" -O arxivData.json.tar.gz
# !tar -xvzf arxivData.json.tar.gz
data = pd.read_json("./arxivData.json")
data.sample(n=5)
# assemble lines: concatenate title and description
lines = data.apply(lambda row: row['title'] + ' ; ' + row['summary'], axis=1).tolist()

sorted(lines, key=len)[:3]

['Differential Contrastive Divergence ; This paper has been retracted.', 'What Does Artificial Life Tell Us About Death? ; Short philosophical essay', 'P=NP ; We claim to resolve the P=?NP problem via a formal argument for P=NP.']

Tokenization

You know the drill. The data is messy. Go clean the data. Use WordPunctTokenizer or something.

# Task: convert lines (in-place) into strings of space-separated tokens. import & use WordPunctTokenizer

# import the WordPunctTokenizer class from nltk
import nltk
from nltk.tokenize import WordPunctTokenizer
tokenizer = nltk.tokenize.WordPunctTokenizer()
#' '.join(tokenizer.tokenize(lines[0]))
lines = [' '.join(tokenizer.tokenize(l.lower())) for l in lines]
sorted(lines, key=len)[0]
assert sorted(lines, key=len)[0] == \
    'differential contrastive divergence ; this paper has been retracted .'
assert sorted(lines, key=len)[2] == \
    'p = np ; we claim to resolve the p =? np problem via a formal argument for p = np .'

N-Gram Language Model

A language model is a probabilistic model that estimates text probability: the joint probability of all tokens \(w_t\) in text \(X\): \(P(X) = P(w_1, \dots, w_T)\).

It can do so by following the chain rule: \(P(w_1, \dots, w_T) = P(w_1)P(w_2 \mid w_1)\dots P(w_T \mid w_1, \dots, w_{T-1}).\)
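
To make this concrete, here is a toy four-token line (made up for illustration, not from the dataset) expanded with the chain rule:

\[P(\text{deep learning works .}) = P(\text{deep}) \cdot P(\text{learning} \mid \text{deep}) \cdot P(\text{works} \mid \text{deep}, \text{learning}) \cdot P(\text{.} \mid \text{deep}, \text{learning}, \text{works})\]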

The problem with such an approach is that the final term \(P(w_T \mid w_1, \dots, w_{T-1})\) depends on all \(T-1\) previous words. This probability is impractical to estimate for long texts, e.g. \(T = 1000\).

One popular approximation is to assume that the next word only depends on a fixed number of previous words:

\[P(w_t \mid w_1, \dots, w_{t - 1}) = P(w_t \mid w_{t - n + 1}, \dots, w_{t - 1})\]

Such a model is called an n-gram language model, where n is a parameter. For example, in a 3-gram language model, each word depends only on the 2 previous words.

\[P(w_1, \dots, w_T) = \prod_t P(w_t \mid w_{t - n + 1}, \dots, w_{t - 1}).\]

You can also sometimes see this approximation referred to as the (n-1)-th order Markov assumption.
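
Under a 3-gram model, the same toy line factorizes with every factor conditioned on at most the 2 previous words (for a line this short, only the last factor actually gets truncated):

\[P(\text{deep learning works .}) \approx P(\text{deep}) \cdot P(\text{learning} \mid \text{deep}) \cdot P(\text{works} \mid \text{deep}, \text{learning}) \cdot P(\text{.} \mid \text{learning}, \text{works})\]

The first two factors condition on fewer than 2 previous words; the code below handles this by padding short prefixes with a special UNK token, and it also scores a special EOS token after the last word.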

The first step in building such a model is counting how many times each word occurs after its (n-1) previous words.

from tqdm import tqdm
from collections import defaultdict, Counter, deque

# special tokens: 
# - unk represents absent tokens, 
# - eos is a special token after the end of sequence

UNK, EOS = "_UNK_", "_EOS_"

def count_ngrams(lines, n):
    """
    Count how many times each word occurred after (n - 1) previous words
    :param lines: an iterable of strings with space-separated tokens
    :returns: a dictionary { tuple(prefix_tokens): {next_token_1: count_1, next_token_2: count_2}}

    When building counts, please consider the following two edge cases
    - if prefix is shorter than (n - 1) tokens, it should be padded with UNK. For n=3,
      empty prefix: "" -> (UNK, UNK)
      short prefix: "the" -> (UNK, the)
      long prefix: "the new approach" -> (new, approach)
    - you should add a special token, EOS, at the end of each sequence
      "... with deep neural networks ." -> (..., with, deep, neural, networks, ., EOS)
      count the probability of this token just like all others.
    """
    counts = defaultdict(Counter)
    # counts[(word1, word2)][word3] = how many times word3 occurred after (word1, word2)

    for l in tqdm(lines):
        # tokens of the line, with the terminal EOS token appended
        tok = l.split() + [EOS]
        # rolling prefix of the last (n - 1) tokens, padded with UNK on the left
        prefix = deque([UNK] * (n - 1), maxlen=n - 1)
        for t in tok:
            counts[tuple(prefix)][t] += 1
            prefix.append(t)
    return counts

# let's test it
dummy_lines = sorted(lines, key=len)[:100]
dummy_counts = count_ngrams(dummy_lines, n=3)
assert set(map(len, dummy_counts.keys())) == {2}, "please only count {n-1}-grams"
assert len(dummy_counts[('_UNK_', '_UNK_')]) == 78
assert dummy_counts['_UNK_', 'a']['note'] == 3
assert dummy_counts['p', '=']['np'] == 2
assert dummy_counts['author', '.']['_EOS_'] == 1

Once we can count N-grams, we can build a probabilistic language model. The simplest way to compute probabilities is in proportion to counts:

\[P(w_t | prefix) = { Count(prefix, w_t) \over \sum_{\hat w} Count(prefix, \hat w) }\]

class NGramLanguageModel:
    def __init__(self, lines, n):
        """ 
        Train a simple count-based language model: 
        compute probabilities P(w_t | prefix) given ngram counts
        
        :param n: computes probability of next token given (n - 1) previous words
        :param lines: an iterable of strings with space-separated tokens
        """
        assert n >= 1
        self.n = n
    
        counts = count_ngrams(lines, self.n)
        
        # compute token probabilities given counts
        self.probs = defaultdict(Counter)
        # probs[(word1, word2)][word3] = P(word3 | word1, word2)
        
        # populate self.probs with actual probabilities
        for prefix, token_counts in tqdm(counts.items()):
            total = sum(token_counts.values())
            for token, count in token_counts.items():
                self.probs[prefix][token] = count / total
            
    def get_possible_next_tokens(self, prefix):
        """
        :param prefix: string with space-separated prefix tokens
        :returns: a dictionary {token : its probability} for all tokens with positive probabilities
        """
        prefix = prefix.split()
        prefix = prefix[max(0, len(prefix) - self.n + 1):]
        prefix = [ UNK ] * (self.n - 1 - len(prefix)) + prefix
        return self.probs[tuple(prefix)]
    
    def get_next_token_prob(self, prefix, next_token):
        """
        :param prefix: string with space-separated prefix tokens
        :param next_token: the next token to predict probability for
        :returns: P(next_token|prefix) a single number, 0 <= P <= 1
        """
        return self.get_possible_next_tokens(prefix).get(next_token, 0)

Let’s test it!

dummy_lm = NGramLanguageModel(dummy_lines, n=3)

p_initial = dummy_lm.get_possible_next_tokens('') # '' -> ['_UNK_', '_UNK_']
assert np.allclose(p_initial['learning'], 0.02)
assert np.allclose(p_initial['a'], 0.13)
assert np.allclose(p_initial.get('meow', 0), 0)
assert np.allclose(sum(p_initial.values()), 1)

p_a = dummy_lm.get_possible_next_tokens('a') # 'a' -> ['_UNK_', 'a']
assert np.allclose(p_a['machine'], 0.15384615)
assert np.allclose(p_a['note'], 0.23076923)
assert np.allclose(p_a.get('the', 0), 0)
assert np.allclose(sum(p_a.values()), 1)

assert np.allclose(dummy_lm.get_possible_next_tokens('a note')['on'], 1)
assert dummy_lm.get_possible_next_tokens('a machine') == \
    dummy_lm.get_possible_next_tokens("there have always been ghosts in a machine"), \
    "your 3-gram model should only depend on 2 previous words"

Now that you’ve got a working n-gram language model, let’s see what sequences it can generate. But first, let’s train it on the whole dataset.

lm = NGramLanguageModel(lines, n=3)

The process of generating sequences is… well, it's sequential. You maintain a list of tokens and iteratively append the next token by sampling from the model's probabilities.

\[X = []\]

forever:

  • \[w_{next} \sim P(w_{next} | X)\]
  • \[X = concat(X, w_{next})\]

Instead of sampling proportionally to the probabilities, one can also try always taking the most likely token, sampling among the top-K most likely tokens, or sampling with temperature. In the latter case (temperature), one samples from

\[w_{next} \sim {P(w_{next} | X) ^ {1 / \tau} \over \sum_{\hat w} P(\hat w | X) ^ {1 / \tau}}\]

where \(\tau > 0\) is the model temperature. If \(\tau \ll 1\), more likely tokens are sampled with even higher probability, while less likely tokens vanish.
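
Before implementing the sampler, here is a tiny standalone sketch (toy numbers, not taken from the model) of how temperature reshapes a distribution:

import numpy as np

# a hypothetical next-token distribution over three tokens
p = np.array([0.7, 0.2, 0.1])

for tau in (2.0, 1.0, 0.5):
    q = p ** (1.0 / tau)
    q /= q.sum()  # renormalize after tempering
    print(tau, np.round(q, 3))

# tau = 2.0 flattens the distribution, tau = 0.5 sharpens it towards the most likely token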

def get_next_token(lm, prefix, temperature=1.0):
    """
    return next token after prefix;
    :param temperature: samples proportionally to lm probabilities ^ (1 / temperature)
        if temperature == 0, always takes most likely token. Break ties arbitrarily.
    """
    possible = lm.get_possible_next_tokens(prefix)
    if temperature == 0.:
        # greedy decoding: return the single most likely next token
        return max(possible, key=possible.get)

    tokens = list(possible.keys())
    probs = np.array([lm.get_next_token_prob(prefix, token) for token in tokens])
    # temper the distribution and renormalize so the probabilities sum to 1
    probs = probs ** (1.0 / temperature)
    probs /= probs.sum()
    return np.random.choice(tokens, p=probs)

from collections import Counter
test_freqs = Counter([get_next_token(lm, 'there have') for _ in range(10000)])
assert 250 < test_freqs['not'] < 450
assert 8500 < test_freqs['been'] < 9500
assert 1 < test_freqs['lately'] < 200

test_freqs = Counter([get_next_token(lm, 'deep', temperature=1.0) for _ in range(10000)])
assert 1500 < test_freqs['learning'] < 3000
test_freqs = Counter([get_next_token(lm, 'deep', temperature=0.5) for _ in range(10000)])
assert 8000 < test_freqs['learning'] < 9000
test_freqs = Counter([get_next_token(lm, 'deep', temperature=0.0) for _ in range(10000)])
assert test_freqs['learning'] == 10000

print("Looks nice!")

Let’s have fun with this model

prefix = 'artificial' # <- your ideas :)

for i in range(100):
    prefix += ' ' + get_next_token(lm, prefix)
    if prefix.endswith(EOS) or len(lm.get_possible_next_tokens(prefix)) == 0:
        break
        
print(prefix)

Here is the output from the model with the prefix ‘artificial’:

artificial immune system can have a good document format , or phonological components . we suggest that automated health surveillance systems based on spatial - frequency oscillatory artifacts , via dirichlet processes ; recent works that explore partially the search process . the proposed methods can be unconsciously or maliciously introduce biases that models correlation structure ; this article presents a novel feature descriptors . the wasserstein loss ; in this paper , we use the hinge - loss showed to be represented in suitable fragments of a deep neural network ( cnn ) which monsters attempted to improve the sample

prefix = 'bridging the' # <- more of your ideas

for i in range(100):
    prefix += ' ' + get_next_token(lm, prefix, temperature=0.5)
    if prefix.endswith(EOS) or len(lm.get_possible_next_tokens(prefix)) == 0:
        break
        
print(prefix)

Here is the output from the model with the prefix ‘bridging the’:

bridging the gap between the two - step approach to characterize the information from the best of our approach is that the proposed method is compared to the original image is decomposed into a single machine . _EOS_

Evaluating language models: perplexity

Perplexity measures how well your model approximates the true probability distribution behind the data. Smaller perplexity = better model.

To compute perplexity on one sentence, use: \({\mathbb{P}}(w_1 \dots w_N) = P(w_1, \dots, w_N)^{-\frac{1}{N}} = \left( \prod_t P(w_t \mid w_{t - n + 1}, \dots, w_{t - 1})\right)^{-\frac{1}{N}},\) where \(N\) is the number of tokens in the sentence.

At the corpus level, perplexity is the product of the probabilities of all tokens in all sentences, raised to the power \(-\frac{1}{N}\), where \(N\) is the total number of tokens in the corpus.

This product can quickly get too small for float32/float64 precision, so we recommend first computing the log-perplexity (from log-probabilities) and then taking the exponent.
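
As a quick sanity check of the formula (toy numbers, not computed from the model), a model that assigns every token a probability of 0.1 should have a perplexity of exactly 10:

import numpy as np

# four tokens, each assigned probability 0.1 by the hypothetical model
logprobs = np.log([0.1, 0.1, 0.1, 0.1])
print(np.exp(-logprobs.mean()))  # ~10.0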

def perplexity(lm, lines, min_logprob=np.log(10 ** -50.)):
    """
    :param lines: a list of strings with space-separated tokens
    :param min_logprob: if log(P(w | ...)) is smaller than min_logprob, set it equal to min_logprob
    :returns: corpora-level perplexity - a single scalar number from the formula above

    Note: do not forget to compute P(w_first | empty) and P(eos | full_sequence)

    PLEASE USE lm.get_next_token_prob and NOT lm.get_possible_next_tokens
    """
    log_prob_sum = 0.0
    n_tokens = 0  # total number of scored tokens (including EOS) across all lines
    for line in lines:
        # append EOS so that the end-of-sequence token is scored like any other token
        tokens = line.split() + [EOS]
        for k, token in enumerate(tokens):
            # the model truncates the prefix to its last (n - 1) tokens internally
            prefix = ' '.join(tokens[:k])
            log_prob_sum += max(np.log(lm.get_next_token_prob(prefix, token)), min_logprob)
            n_tokens += 1
    # perplexity is the exponent of the negative average log-probability
    return np.exp(-log_prob_sum / n_tokens)

lm1 = NGramLanguageModel(dummy_lines, n=1)
lm3 = NGramLanguageModel(dummy_lines, n=3)
lm10 = NGramLanguageModel(dummy_lines, n=10)

ppx1 = perplexity(lm1, dummy_lines)
ppx3 = perplexity(lm3, dummy_lines)
ppx10 = perplexity(lm10, dummy_lines)
ppx_missing = perplexity(lm3, ['the jabberwock , with eyes of flame , '])  # thanks, L. Carroll

print("Perplexities: ppx1=%.3f ppx3=%.3f ppx10=%.3f" % (ppx1, ppx3, ppx10))

assert all(0 < ppx < 500 for ppx in (ppx1, ppx3, ppx10)), "perplexity should be nonnegative and reasonably small"
assert ppx1 > ppx3 > ppx10, "higher N models should overfit and reach lower perplexity on the training data"
assert np.isfinite(ppx_missing) and ppx_missing > 10 ** 6, "missing words should have large but finite perplexity. " \
    " Make sure you use min_logprob right"

Now let’s measure the actual perplexity: we’ll split the data into train and test and score model on test data only.

from sklearn.model_selection import train_test_split
train_lines, test_lines = train_test_split(lines, test_size=0.25, random_state=42)

for n in (1, 2, 3):
    lm = NGramLanguageModel(n=n, lines=train_lines)
    ppx = perplexity(lm, test_lines)
    print("N = %i, Perplexity = %.5f" % (n, ppx))

# whoops, it just blew up :)

LM Smoothing

The problem with our simple language model is that whenever it encounters an n-gram it has never seen before, it assigns it a probability of 0. Every time this happens, perplexity explodes.

To battle this issue, there's a technique called smoothing. The core idea is to modify the counts in a way that prevents probabilities from getting too low. The simplest algorithm here is additive smoothing (aka Laplace smoothing):

\[P(w_t | prefix) = { Count(prefix, w_t) + \delta \over \sum_{\hat w} (Count(prefix, \hat w) + \delta) }\]

If the counts for a given prefix are low, additive smoothing will adjust the probabilities towards a more uniform distribution. Note that the summation in the denominator goes over all words in the vocabulary.
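
For a quick numeric illustration (toy counts, not from the corpus): if a prefix has been followed by "learning" 3 times and "networks" once, and the vocabulary contains 10 words, then with \(\delta = 1\) the smoothed probability of "learning" is \((3 + 1) / (4 + 1 \cdot 10) \approx 0.29\) instead of \(0.75\), and every unseen word gets \(1 / 14 \approx 0.07\) instead of 0.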

Here’s an example code we’ve implemented for you:

class LaplaceLanguageModel(NGramLanguageModel): 
    """ this code is an example, no need to change anything """
    def __init__(self, lines, n, delta=1.0):
        self.n = n
        counts = count_ngrams(lines, self.n)
        self.vocab = set(token for token_counts in counts.values() for token in token_counts)
        self.probs = defaultdict(Counter)

        for prefix in counts:
            token_counts = counts[prefix]
            total_count = sum(token_counts.values()) + delta * len(self.vocab)
            self.probs[prefix] = {token: (token_counts[token] + delta) / total_count
                                          for token in token_counts}
    def get_possible_next_tokens(self, prefix):
        token_probs = super().get_possible_next_tokens(prefix)
        missing_prob_total = 1.0 - sum(token_probs.values())
        missing_prob = missing_prob_total / max(1, len(self.vocab) - len(token_probs))
        return {token: token_probs.get(token, missing_prob) for token in self.vocab}
    
    def get_next_token_prob(self, prefix, next_token):
        token_probs = super().get_possible_next_tokens(prefix)
        if next_token in token_probs:
            return token_probs[next_token]
        else:
            missing_prob_total = 1.0 - sum(token_probs.values())
            missing_prob_total = max(0, missing_prob_total) # prevent rounding errors
            return missing_prob_total / max(1, len(self.vocab) - len(token_probs))
        
#test that it's a valid probability model
for n in (1, 2, 3):
    dummy_lm = LaplaceLanguageModel(dummy_lines, n=n)
    assert np.allclose(sum([dummy_lm.get_next_token_prob('a', w_i) for w_i in dummy_lm.vocab]), 1), "I told you not to break anything! :)"
for n in (1, 2, 3):
    lm = LaplaceLanguageModel(train_lines, n=n, delta=0.1)
    ppx = perplexity(lm, test_lines)
    print("N = %i, Perplexity = %.5f" % (n, ppx))