Audio-Assisted Trajectory Estimation in Non-Overlapping Multi-Camera Networks

Executive Summary

The authors present an algorithm for improving trajectory estimation in networks of non-overlapping cameras by incorporating audio measurements. The algorithm fuses audio-visual cues within each camera's field of view and recovers trajectories in unobserved regions using microphones alone. Audio source localization is performed with the STereo Audio and Cycloptic Vision (STAC) sensor by estimating the Time Difference Of Arrival (TDOA) between the microphone pair from the peak of their cross-correlation. The audio estimates are then smoothed with a Kalman filter, and audio-visual fusion is performed using a dynamic weighting strategy. The authors show that a multi-modal sensor combining a narrow visual field of view with a wider audio field of view enables extended target tracking in non-overlapping camera settings. In particular, the weighting scheme improves performance in the regions where the audio and visual fields of view overlap.
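The two core audio steps described above, TDOA estimation via cross-correlation and weighted audio-visual fusion, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, and the variance-based weighting shown here is one common choice of dynamic weighting, which may differ from the authors' exact scheme.

```python
import numpy as np

def estimate_tdoa(mic_a, mic_b, fs):
    """Estimate the time difference of arrival (seconds) between two
    microphone signals from the peak of their cross-correlation.
    Positive values mean the source reaches mic_a later than mic_b."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = np.argmax(corr) - (len(mic_b) - 1)
    return lag / fs

def fuse_estimates(audio_pos, video_pos, audio_var, video_var):
    """Hypothetical variance-based dynamic weighting: the modality with
    lower uncertainty receives the higher weight."""
    w_audio = video_var / (audio_var + video_var)
    return w_audio * audio_pos + (1.0 - w_audio) * video_pos

# Example: a signal delayed by 7 samples between the two microphones.
fs = 8000
sig = np.zeros(512)
sig[50] = 1.0
delayed = np.roll(sig, 7)           # arrives 7 samples later at mic A
tdoa = estimate_tdoa(delayed, sig, fs)
fused = fuse_estimates(np.array([1.0]), np.array([2.0]),
                       audio_var=1.0, video_var=3.0)
```

In a full pipeline, the TDOA would be converted to a bearing or position estimate before smoothing with a Kalman filter, and the per-modality variances driving the weights could come from the filter's covariance.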
