An automated system for tracking objects across non-overlapping views from multiple uncalibrated cameras using space-time cues
Automated surveillance of a wide area requires a network of cameras. Ideally, these cameras would have overlapping views so that the location of every object in the environment can be determined at each instant. For this reason, much of the work on automated multi-camera surveillance assumes overlapping views. In practice, however, cameras deployed for wide-area surveillance rarely have that luxury. The challenge is then to develop accurate algorithms that associate a real-world object seen in one camera with the same object seen in another, since its appearance may differ because of changes in illumination, position, and each camera's particular properties. It is also desirable that the tracking algorithms require neither calibrated cameras nor complete site models, as these are rarely available in surveillance settings.
UCF scientists have developed such a tracking system, which can establish object correspondence across uncalibrated cameras whose views need not overlap. The system combines established techniques, including Parzen-window density estimation, the Bhattacharyya distance, and the Maximum A Posteriori (MAP) statistical estimation framework, to greatly extend the capabilities of currently deployed surveillance systems.
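The named components can be sketched in a minimal form: the Bhattacharyya distance compares appearance histograms of objects seen in different cameras, a Parzen (kernel density) window estimates the likelihood of an observed inter-camera travel time from past examples, and a MAP rule picks the prior track that maximizes the combined score. The function names, bandwidth, and uniform-prior assumption below are illustrative only, not the actual UCF implementation.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Distance between two normalized appearance histograms (0 = identical)."""
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))  # Bhattacharyya coefficient
    return np.sqrt(max(0.0, 1.0 - bc))

def parzen_density(x, samples, bandwidth=1.0):
    """Gaussian-kernel Parzen window estimate of a density at x,
    learned from observed samples (e.g. inter-camera travel times)."""
    diffs = (x - np.asarray(samples, dtype=float)) / bandwidth
    return np.mean(np.exp(-0.5 * diffs ** 2)) / (bandwidth * np.sqrt(2.0 * np.pi))

def map_correspondence(obs_hist, obs_travel_time, candidates, travel_samples):
    """MAP-style matching: return the id of the candidate track that maximizes
    the product of appearance similarity and space-time likelihood
    (proportional to the posterior under a uniform prior -- an assumption here)."""
    best_id, best_score = None, -np.inf
    for cid, hist in candidates.items():
        appearance = 1.0 - bhattacharyya_distance(obs_hist, hist)
        spacetime = parzen_density(obs_travel_time, travel_samples[cid], bandwidth=2.0)
        score = appearance * spacetime
        if score > best_score:
            best_id, best_score = cid, score
    return best_id
```

For example, an object reappearing after roughly the travel time typical of track "A", with a color histogram close to A's, would be matched to A rather than to a dissimilar track B.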
Benefits:
- Does not require overlapping camera views
- Efficient computations enable real-time tracking
- Does not require expensive full camera calibration
Applications:
- Law enforcement
- National defense
- Border control
- Airport security