TY - GEN
T1 - Empirical co-occurrence rate networks for sequence labeling
AU - Zhu, Zhemin
AU - Hiemstra, Djoerd
AU - Apers, Peter M.G.
AU - Wombacher, Andreas
N1 - eemcs-eprint-24073
PY - 2013/11/29
Y1 - 2013/11/29
N2 - Sequence labeling has wide applications in many areas. For example, most named entity recognition tasks, which extract named entities or events from unstructured data, can be formalized as sequence labeling problems. Sequence labeling has been studied extensively in different communities, such as data mining, natural language processing, and machine learning. Many powerful and popular models have been developed, such as hidden Markov models (HMMs) [4], conditional Markov models (CMMs) [3], and conditional random fields (CRFs) [2]. Despite their successes, they suffer from some known problems: (i) HMMs are generative models that suffer from the mismatch problem, and it is difficult to incorporate overlapping, non-independent features into an HMM explicitly; (ii) CMMs suffer from the label bias problem; (iii) CRFs overcome the problems of HMMs and CMMs, but the global normalization of CRFs can be very expensive, which prevents them from being applied to big datasets (e.g. Tweets).
In this paper, we propose empirical Co-occurrence Rate Networks (ECRNs) [5] for sequence labeling. CRNs avoid the problems of the existing models mentioned above. To make the training of CRNs as efficient as possible, we simply use the empirical distribution for parameter estimation. The resulting ECRNs can be trained orders of magnitude faster while still obtaining accuracy competitive with the existing models. ECRNs have been applied as a component of the University of Twente system [1] for the concept extraction challenge at #MSM2013, which won the best challenge submission award. ECRNs can be very useful for practitioners working on big data.
AB - Sequence labeling has wide applications in many areas. For example, most named entity recognition tasks, which extract named entities or events from unstructured data, can be formalized as sequence labeling problems. Sequence labeling has been studied extensively in different communities, such as data mining, natural language processing, and machine learning. Many powerful and popular models have been developed, such as hidden Markov models (HMMs) [4], conditional Markov models (CMMs) [3], and conditional random fields (CRFs) [2]. Despite their successes, they suffer from some known problems: (i) HMMs are generative models that suffer from the mismatch problem, and it is difficult to incorporate overlapping, non-independent features into an HMM explicitly; (ii) CMMs suffer from the label bias problem; (iii) CRFs overcome the problems of HMMs and CMMs, but the global normalization of CRFs can be very expensive, which prevents them from being applied to big datasets (e.g. Tweets).
In this paper, we propose empirical Co-occurrence Rate Networks (ECRNs) [5] for sequence labeling. CRNs avoid the problems of the existing models mentioned above. To make the training of CRNs as efficient as possible, we simply use the empirical distribution for parameter estimation. The resulting ECRNs can be trained orders of magnitude faster while still obtaining accuracy competitive with the existing models. ECRNs have been applied as a component of the University of Twente system [1] for the concept extraction challenge at #MSM2013, which won the best challenge submission award. ECRNs can be very useful for practitioners working on big data.
KW - EWI-24073
KW - METIS-302560
KW - IR-88490
KW - DB-DM: DATA MINING
M3 - Conference contribution
SN - not assigned
SP - 10
BT - Dutch-Belgian Database Day, DBDBD 2013
PB - Erasmus University Rotterdam
CY - Rotterdam, The Netherlands
Y2 - 29 November 2013 through 29 November 2013
ER -
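As a rough sketch of the idea summarized in the abstract (estimating co-occurrence rates of adjacent labels directly from the empirical distribution of the training data), the Python snippet below computes CR(y1; y2) = P(y1, y2) / (P(y1) P(y2)) from label unigram and bigram frequencies. It is a minimal illustration under that assumed definition; the function name, toy label sequences, and smoothing constant are assumptions for illustration, not the authors' implementation.

    from collections import Counter

    def empirical_cooccurrence_rates(label_sequences, smooth=1e-12):
        """Estimate CR(y1; y2) = P(y1, y2) / (P(y1) * P(y2)) for adjacent
        labels from relative frequencies in training label sequences.
        Illustrative sketch only; not the authors' implementation."""
        unigrams = Counter()
        bigrams = Counter()
        for seq in label_sequences:
            unigrams.update(seq)                  # count single labels
            bigrams.update(zip(seq, seq[1:]))     # count adjacent label pairs
        n_uni = sum(unigrams.values())
        n_bi = sum(bigrams.values()) or 1
        p_uni = {y: c / n_uni for y, c in unigrams.items()}
        cr = {}
        for (y1, y2), c in bigrams.items():
            p_joint = c / n_bi
            cr[(y1, y2)] = p_joint / (p_uni[y1] * p_uni[y2] + smooth)
        return cr

    # Toy usage with hypothetical NER label sequences.
    train_labels = [["O", "B-PER", "I-PER", "O"], ["O", "B-LOC", "O", "O"]]
    print(empirical_cooccurrence_rates(train_labels))

Because the estimates are plain relative frequencies, training reduces to counting, which is consistent with the abstract's claim that ECRNs can be trained orders of magnitude faster than globally normalized models.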