Abstract
Calibration of camera networks is a well-studied problem. However, most previous approaches assume that all cameras in the network are synchronized, which is especially difficult to achieve over large distances. In this paper we present a simple method to fully calibrate unsynchronized cameras with differing frame rates directly from the acquired video content. The presented method uses either tracked scene features or, alternatively, a light marker, together with epipolar or homography-based constraints, to estimate the synchronization as well as the intrinsic and extrinsic camera parameters. We assume two cameras within the network to be pre-calibrated (intrinsics only) using standard approaches. We validate our method with extensive simulations for noise analysis as well as with real experiments. Furthermore, we show how our approach can be used for robust 3D reconstruction despite the use of unsynchronized cameras.