Using the GMM Specializer
Here we describe how to use the GMM specializer in your Python code and give examples. Please refer to our HotPar'11 and ASRU'11 papers for details on the specializer and the speaker diarization application, respectively. Our specializer uses numpy to store and manipulate arrays. Contact egonina at eecs dot berkeley dot edu with questions and comments.
After installing Asp and the GMM specializer, you need to import it in your Python script like so:
from em import *
Creating a GMM object is just like creating an object of any class in Python. You can create an empty GMM object by specifying only its dimensions (M = number of components, D = dimension of the observation vectors):
gmm = GMM(M, D)
In this case, the parameters will be initialized randomly from the data when the train() function is called (see below). Alternatively, a GMM can be initialized with existing parameters, like so:
gmm = GMM(M, D, means, vars, weights)
Here means, vars and weights are numpy arrays. Note: training the GMM overwrites these parameters with the newly trained values. If you are reusing parameters from a different GMM, make a copy of the parameters first and pass the copy to the GMM constructor, as in the sketch below.
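For example, here is a minimal sketch of reusing parameters safely. (old_gmm, M, D and data are illustrative names assumed to exist already: old_gmm is a previously trained GMM with M components and D dimensions, and data is an N by D observation array as described below.)

import numpy as np
from em import *

# Copy the parameter arrays so that training the new GMM does not
# overwrite old_gmm's parameters.
means = np.copy(old_gmm.components.means)
vars = np.copy(old_gmm.components.covars)
weights = np.copy(old_gmm.components.weights)

new_gmm = GMM(M, D, means, vars, weights)
new_gmm.train(data) # old_gmm's parameters remain intact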
To train the GMM object on a set of observations, use the train()
function:
lkld = gmm.train(data)
Here data is an N by D numpy array of observation vectors (N vectors, each of D dimensions). The function returns the likelihood of the trained GMM fitting the data.
To compute the log-likelihood of the trained GMM on a new set of observations, use the score() function:
log_lklds = gmm.score(data)
Here data is an N by D numpy array. The function returns a numpy array of N log-likelihoods, one for each observation vector. To get cumulative statistics about the data, you can use numpy.average() or numpy.sum(), as shown below.
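For example (a short sketch, assuming gmm has been trained and data is an N by D numpy array as above):

import numpy as np

log_lklds = gmm.score(data)          # per-observation log-likelihoods, shape (N,)
avg_log_lkld = np.average(log_lklds) # average log-likelihood over the dataset
total_log_lkld = np.sum(log_lklds)   # cumulative log-likelihood of the dataset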
You can access the GMM mean, covariance and weight parameters like so:
means = gmm.components.means
covariance = gmm.components.covars
weights = gmm.components.weights
Here means is an M by D array (number of components by number of dimensions), covariance is an M by D by D array (number of components by number of dimensions by number of dimensions), and weights is an array of size M (number of components).
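For instance, you can sanity-check the parameter shapes after training (assuming gmm is a trained GMM object, as above):

means = gmm.components.means
covariance = gmm.components.covars
weights = gmm.components.weights

print means.shape      # (M, D)
print covariance.shape # (M, D, D)
print weights.shape    # (M,)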
This is a simple example that takes a training dataset training_data, creates a 32-component GMM, trains it on the data, and then computes the average log-likelihood of a testing dataset:
from em import *
import numpy as np
training_data = np.array(get_training_data()) # training_data.shape = (N1, D)
testing_data = np.array(get_testing_data()) # testing_data.shape = (N2, D)
M = 32
D = training_data.shape[1] # get the D dimension from the data
gmm = GMM(M, D) # create new GMM object
gmm.train(training_data) # train the GMM on the training data
log_lklds = gmm.score(testing_data) # compute the log likelihoods of the testing data observations
print "Average log likelihood for testing data = ", np.average(log_lklds)
The gmm/tests/ directory includes two example applications:
- song_recommendation.py
- cluster.py