The performance of existing TCP implementations over Optical Burst Switched (OBS) networks is unsatisfactory because they suffer from false congestion detection. Since contention-induced losses are more common than congestion-induced losses in OBS networks, TCP reduces its congestion window even when no congestion exists, which in turn unnecessarily lowers TCP throughput. It is therefore crucial to differentiate contention-induced losses from congestion-induced losses in such networks. This paper proposes a mechanism that uses short-term RTT variation and the assembly times of individual TCP segments to distinguish congestion-induced losses from contention-induced ones.
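The intuition behind RTT-based loss differentiation can be sketched as follows: congestion builds router queues and inflates the round-trip time, while a random burst-contention drop in an OBS core leaves the RTT near its baseline. The function, the `inflation_threshold` parameter, and its value below are illustrative assumptions, not details taken from the paper.

```python
from statistics import mean

def classify_loss(recent_rtts, baseline_rtt, inflation_threshold=1.25):
    """Heuristic sketch: classify a detected segment loss.

    recent_rtts: short-term RTT samples observed around the loss event.
    baseline_rtt: long-term (smoothed) RTT estimate.
    inflation_threshold: hypothetical tuning knob; if the short-term RTT
    exceeds baseline * threshold, queues are assumed to be building.
    """
    short_term_rtt = mean(recent_rtts)
    if short_term_rtt > inflation_threshold * baseline_rtt:
        # RTT inflated: likely congestion, so reducing cwnd is justified
        return "congestion"
    # RTT near baseline: likely burst contention, so cwnd can be preserved
    return "contention"

# RTT samples close to the 100 ms baseline at loss time -> contention
print(classify_loss([0.102, 0.099, 0.101], baseline_rtt=0.100))
```

A full implementation would also factor in per-segment burst assembly times, as the abstract notes, since assembly delay at the OBS edge node perturbs the RTT independently of congestion.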
- Format: PDF
- Size: 559.7 KB