
Estimating Geospatial Trajectory of a Moving Camera / Hakeem, A.; Vezzani, Roberto; Shah, S.; Cucchiara, Rita. - PRINT. - 2:(2006), pp. 82-87. (Paper presented at the 18th International Conference on Pattern Recognition, ICPR 2006, held in Hong Kong, 20-24 Aug) [10.1109/ICPR.2006.499].

Estimating Geospatial Trajectory of a Moving Camera

VEZZANI, Roberto; CUCCHIARA, Rita
2006

Abstract

This paper proposes a novel method for estimating the geospatial trajectory of a moving camera. The proposed method uses a set of reference images with known GPS (Global Positioning System) locations to recover the trajectory of a moving camera using geometric constraints. The method has three main steps. First, scale-invariant feature transform (SIFT) features are detected and matched between the reference images and the video frames to compute a weighted adjacency matrix (WAM) based on the number of SIFT matches. Second, using the estimated WAM, the best-matching reference image is selected for the current video frame and used to estimate the relative pose (rotation and translation) of the video frame via the fundamental matrix constraint. The relative pose is recovered only up to a scale factor, so a triangulation between the video frame and two reference images is performed to resolve the scale ambiguity. Third, an outlier-rejection and trajectory-smoothing (B-spline) post-processing step is applied, because the estimated camera locations may be noisy due to bad point correspondences or degenerate fundamental matrix estimates. Results of recovering the camera trajectory are reported for real sequences.
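
The pipeline described in the abstract can be illustrated with a minimal sketch using OpenCV and SciPy. This is not the authors' implementation: the helper names (sift_matches, build_wam, relative_pose, smooth_trajectory), the intrinsic matrix K, and the RANSAC thresholds are assumptions made here for illustration, and the triangulation-based scale resolution and outlier-rejection steps are omitted.

# Illustrative sketch only (not the authors' code). Assumes OpenCV with SIFT
# support, SciPy, known camera intrinsics K, and grayscale input images.
import cv2
import numpy as np
from scipy.interpolate import splev, splprep

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher()

def sift_matches(img_a, img_b, ratio=0.75):
    # Detect SIFT keypoints and keep matches passing Lowe's ratio test.
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    pairs = []
    for m, n in matcher.knnMatch(des_a, des_b, k=2):
        if m.distance < ratio * n.distance:
            pairs.append((kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt))
    return pairs

def build_wam(frames, references):
    # Step 1: weighted adjacency matrix of SIFT match counts
    # (rows: video frames, columns: reference images).
    wam = np.zeros((len(frames), len(references)), dtype=int)
    for i, frame in enumerate(frames):
        for j, ref in enumerate(references):
            wam[i, j] = len(sift_matches(frame, ref))
    return wam

def relative_pose(frame, reference, K):
    # Step 2: relative rotation and translation of the frame with respect to
    # its best-matching reference image, via the fundamental matrix constraint.
    pairs = sift_matches(frame, reference)
    pts_f = np.float32([p for p, _ in pairs])
    pts_r = np.float32([q for _, q in pairs])
    F, _ = cv2.findFundamentalMat(pts_f, pts_r, cv2.FM_RANSAC, 3.0, 0.99)
    E = K.T @ F @ K                      # essential matrix from F and intrinsics
    _, R, t, _ = cv2.recoverPose(E, pts_f, pts_r, K)
    return R, t                          # t is unit-norm: scale remains ambiguous

def smooth_trajectory(positions, s=1.0):
    # Step 3: fit a smoothing B-spline through the (outlier-filtered) camera
    # locations and resample it to obtain the final trajectory.
    tck, u = splprep(np.asarray(positions).T, s=s)
    return np.array(splev(u, tck)).T

In this sketch, the best-matching reference for a frame would be taken as the column-wise maximum of that frame's WAM row and passed to relative_pose; in the paper, a triangulation against a second reference image then fixes the translation scale before smoothing.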
Year: 2006
Conference: 18th International Conference on Pattern Recognition, ICPR 2006
Location: Hong Kong
Dates: 20-24 Aug
Volume: 2
First page: 82
Last page: 87
Authors: Hakeem, A.; Vezzani, Roberto; Shah, S.; Cucchiara, Rita

Use this identifier to cite or link to this document: https://hdl.handle.net/11380/464388
Citations
  • Scopus: 24
  • Web of Science (ISI): 17