Abstract
We describe algorithms for authoring and viewing high-resolution immersive videos. Given a set of cameras designed to share approximately the same nodal point, we first present a process for seamlessly stitching the synchronized video streams into a single immersive video, corresponding to the output of an abstract multi-head camera. We describe a general technique for registration onto geometric envelopes, based on minimizing a novel, well-suited objective function, and detail our composite image synthesis algorithm for multi-head cameras. Several new low-discrepancy environment maps are presented. Finally, we give details of the viewer implementation. Experimental results on both real and synthetic immersive videos are shown.