Date Added: Oct 2011
In this paper, the authors present a system they have developed for mobile audio-video collaboration centered around distributed datasets. In their approach, all data are processed remotely on dedicated servers, where they are rendered off-screen and compressed with a video codec. The signals captured by the users' cameras are transferred to the server in real time, where they are combined with the data frames into single video streams. Depending on the device's capabilities and the current network bandwidth, each session participant receives an individually customized stream, which alternately presents the remote data and the camera view of the currently selected presenter.
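
A minimal sketch, in Python, of how such per-client stream customization and alternating frame composition might be organized; the names (ClientProfile, select_stream_params, compose_frame) and the bitrate policy are illustrative assumptions, not the authors' actual implementation:

    from dataclasses import dataclass

    @dataclass
    class ClientProfile:
        max_width: int        # largest frame width the device can decode/display
        max_height: int
        bandwidth_kbps: int   # most recent bandwidth estimate for this client

    def select_stream_params(profile: ClientProfile) -> dict:
        """Pick per-client encoding parameters from device capability and
        the current bandwidth estimate (rough illustrative policy)."""
        width, height = profile.max_width, profile.max_height
        if profile.bandwidth_kbps < 500:
            # Constrained link: halve the resolution and leave bitrate headroom.
            width, height = width // 2, height // 2
            bitrate = int(profile.bandwidth_kbps * 0.8)
        else:
            bitrate = min(profile.bandwidth_kbps, 2000)
        return {"width": width, "height": height, "bitrate_kbps": bitrate}

    def compose_frame(data_frame, camera_frame, show_presenter: bool):
        """Server-side composition: alternate between the off-screen-rendered
        data view and the current presenter's camera view."""
        return camera_frame if show_presenter else data_frame

    # Example: a low-bandwidth mobile client receives a downscaled, low-bitrate stream.
    print(select_stream_params(ClientProfile(640, 480, 350)))
    # {'width': 320, 'height': 240, 'bitrate_kbps': 280}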