Effective Codebooks for Human Action Categorization / Ballan, Lamberto; Bertini, Marco; Del Bimbo, Alberto; Seidenari, Lorenzo; Serra, Giuseppe. - Print. - (2009), pp. 506-513. (Paper presented at the 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops 2009, held in Kyoto, Japan, Sept. 27 - Oct. 4, 2009) [10.1109/ICCVW.2009.5457658].

Effective Codebooks for Human Action Categorization

SERRA, GIUSEPPE
2009

Abstract

In this paper we propose a new method for human action categorization based on an effective combination of novel gradient and optic flow descriptors, and on a more effective codebook that models the ambiguity of feature assignment in the traditional bag-of-words model. Recent approaches have represented video sequences using a bag of spatio-temporal visual words, following the successful results achieved in object and scene classification. Codebooks are usually obtained by k-means clustering and hard assignment of visual features to the best-representing codeword. Our main contribution is two-fold. First, we define a new 3D gradient descriptor that, combined with optic flow, outperforms the state of the art without requiring fine parameter tuning. Second, we show that for spatio-temporal features the popular k-means algorithm is insufficient because cluster centers are attracted by the denser regions of the sample distribution, providing a non-uniform description of the feature space and thus failing to code other informative regions. Therefore, we apply a radius-based clustering method and a soft assignment that considers the information of two or more relevant candidates. This approach generates a more effective codebook, resulting in a further improvement of classification performance. We extensively test our approach on the standard KTH and Weizmann action datasets, showing its validity and outperforming other recent approaches.
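
To make the codebook construction and soft assignment described in the abstract more concrete, the following is a minimal illustrative sketch in Python/NumPy. It is not the authors' implementation: the greedy radius-based clustering, the Gaussian-kernel weighting over the top-k nearest codewords, and all parameter values (radius, sigma, descriptor dimensionality, toy data) are assumptions chosen only to show the general idea of replacing k-means with radius-based codewords and hard assignment with soft assignment.

```python
# Illustrative sketch only (not the authors' code): radius-based codebook
# construction and soft assignment of descriptors to codewords.
import numpy as np

def radius_based_codebook(descriptors, radius):
    """Greedy radius-based clustering: a descriptor starts a new codeword
    only if it lies farther than `radius` from every existing codeword."""
    codewords = []
    for d in descriptors:
        if not codewords:
            codewords.append(d)
            continue
        dists = np.linalg.norm(np.asarray(codewords) - d, axis=1)
        if dists.min() > radius:
            codewords.append(d)
    return np.asarray(codewords)

def soft_assign_histogram(descriptors, codewords, sigma, top_k=2):
    """Bag-of-words histogram with soft assignment: each descriptor spreads
    Gaussian-kernel weights over its `top_k` nearest codewords instead of
    voting for a single best-matching one."""
    hist = np.zeros(len(codewords))
    for d in descriptors:
        dists = np.linalg.norm(codewords - d, axis=1)
        nearest = np.argsort(dists)[:top_k]
        weights = np.exp(-dists[nearest] ** 2 / (2 * sigma ** 2))
        weights /= weights.sum()
        hist[nearest] += weights
    return hist / max(len(descriptors), 1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(500, 64))   # stand-in for spatio-temporal descriptors
    codebook = radius_based_codebook(feats, radius=11.0)   # radius is arbitrary here
    bow = soft_assign_histogram(feats, codebook, sigma=4.0, top_k=2)
    print(codebook.shape, bow.sum())
```

In contrast to k-means, whose centers gravitate toward the densest regions of the sample distribution, a radius-based scheme like the one sketched above spaces codewords by a minimum distance, so sparser but still informative regions of the feature space also receive codewords; the soft assignment then accounts for descriptors that fall between two or more plausible codewords.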
Year: 2009
Conference: 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops 2009
Location: Kyoto, Japan
Dates: Sept. 27 - Oct. 4, 2009
Pages: 506-513
Authors: Ballan, Lamberto; Bertini, Marco; Del Bimbo, Alberto; Seidenari, Lorenzo; Serra, Giuseppe
Files in this record:
There are no files associated with this record.

Creative Commons License
The metadata in IRIS UNIMORE are released under a Creative Commons CC0 1.0 Universal license, while publication files are released under an Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, please contact Iris Support.

Use this identifier to cite or link to this record: https://hdl.handle.net/11380/979907
Citations
  • PMC: not available
  • Scopus: 29
  • Web of Science (ISI): not available