I have some trouble understanding exactly how one should implement the structured perceptron for part-of-speech tagging. Could you please confirm or correct my thoughts, and/or fill in any missing gaps?
So, basically, the structured perceptron is a variant of the multiclass perceptron, except in how you find the best-scoring prediction. A first-order Markov assumption is made, saying that the tag at the current position depends only on the tag at the previous position. The input is an entire sequence of words, instead of just one word as it would be in the non-structured case, together with the set of all possible labels (y). The function f(x, y) returns a guessed label sequence for the given word sequence.
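To make my current understanding concrete, here is a rough sketch of what I imagine the prediction and update look like. I'm enumerating all tag sequences by brute force here, purely to make the argmax explicit (this is exactly the exponential blow-up that Viterbi is supposed to avoid); all the names and the feature scheme are my own placeholders, not from any real library.

```python
# Brute-force sketch of structured prediction + the perceptron update.
# features() is my guess at a "global" feature vector for a (words, tags) pair:
# word/tag (emission) features plus tag-bigram (transition) features.
from itertools import product
from collections import defaultdict

def features(words, tags):
    """Global feature counts for a full tagged sentence (placeholder scheme)."""
    feats = defaultdict(int)
    prev = "<START>"
    for word, tag in zip(words, tags):
        feats[("emit", tag, word)] += 1   # how well this tag fits this word
        feats[("trans", prev, tag)] += 1  # how well this tag follows the previous one
        prev = tag
    return feats

def predict_bruteforce(words, tagset, weights):
    """argmax over ALL |tagset|^len(words) sequences -- only viable for tiny inputs."""
    best, best_score = None, float("-inf")
    for tags in product(tagset, repeat=len(words)):
        score = sum(weights[f] * v for f, v in features(words, tags).items())
        if score > best_score:
            best, best_score = list(tags), score
    return best

def perceptron_update(weights, words, gold, guess):
    """If the guess is wrong: reward the gold sequence's features, punish the guess's."""
    if guess != gold:
        for f, v in features(words, gold).items():
            weights[f] += v
        for f, v in features(words, guess).items():
            weights[f] -= v
```

On a toy sentence, one wrong guess followed by one update is already enough to make the model prefer the gold sequence, which is the behavior I expect from the perceptron.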
In a multiclass perceptron, getting the best score is easily done through iteration, since we only deal with assigning one label to one instance. The problem with classifying entire sequences is that the number of possible labelings grows exponentially with the sequence length. This is where the Viterbi algorithm is needed, which recursively finds the best path using two feature sets: one for determining how likely a given POS tag is for a certain word, and one for determining how likely a certain POS tag is to come directly after another POS tag. The score from each of these feature sets is multiplied by a unique weight for each state. If the chosen path is wrong, the weights in the states of the wrong path are punished, and the weights in the correct path are rewarded.
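Here is my attempt at sketching the Viterbi part itself, again with placeholder names of my own. The score for moving into a tag adds the transition weight (previous tag → tag) and the emission weight (tag/word); the DP table keeps, for each position and tag, the best score of any partial sequence ending in that tag, plus a backpointer for recovering the path:

```python
# Viterbi decoding sketch over emission + transition weights (my own naming).
from collections import defaultdict

def viterbi(words, tagset, weights):
    def score(prev, tag, word):
        # Combined score for arriving at `tag` from `prev` while emitting `word`.
        return weights[("trans", prev, tag)] + weights[("emit", tag, word)]

    # delta[i][t] = best score of any tag sequence for words[:i+1] ending in t
    delta = [{t: score("<START>", t, words[0]) for t in tagset}]
    back = [{}]
    for i in range(1, len(words)):
        delta.append({})
        back.append({})
        for t in tagset:
            # Pick the best previous tag to transition from.
            prev_best = max(tagset,
                            key=lambda p: delta[i - 1][p] + score(p, t, words[i]))
            delta[i][t] = delta[i - 1][prev_best] + score(prev_best, t, words[i])
            back[i][t] = prev_best

    # Follow backpointers from the best final tag to recover the full path.
    last = max(tagset, key=lambda t: delta[-1][t])
    path = [last]
    for i in range(len(words) - 1, 0, -1):
        path.append(back[i][path[-1]])
    return path[::-1]
```

This runs in O(n * |tagset|^2) instead of enumerating all sequences, which is my understanding of why Viterbi makes the structured perceptron tractable.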
This is about how far I have (hopefully) understood. My biggest questions right now are how the features are structured (is the previous tag sequence part of the features?) and how to actually implement the Viterbi algorithm. Also, is there an implementation of a POS tagger using a structured perceptron anywhere that I could analyze (preferably in Java)?
I would be very grateful if you could give me some hints or resources to look into!