ADAPTATION OF TCP WINDOW
The first phase of a TCP session is establishment of the connection. This requires a three-way handshake, ensuring that both sides of the connection have an unambiguous understanding of the sequence number space of the remote side for this session. The handshake proceeds as follows:
· The local system sends an initial sequence number to the remote port, using a SYN packet.
· The remote system responds with an ACK of the initial sequence number, together with the remote end's own initial sequence number, in a responding SYN packet.
· The local end responds with an ACK of this remote sequence number.
The performance implication of this protocol exchange is that it takes one and a half round-trip times (RTTs) for the two systems to synchronize state before any data can be sent; a minimal sketch of the exchange appears below.
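To make the sequence-number exchange concrete, the following Python sketch models the three segments of the handshake. The segment representation and the way the initial sequence numbers are chosen here are illustrative assumptions, not a real TCP implementation.

# Minimal sketch of the TCP three-way handshake sequence-number exchange.
# The segment format and ISN choices are illustrative, not real TCP code.

import random

def three_way_handshake():
    # Each side picks its own initial sequence number (ISN).
    client_isn = random.randrange(2**32)
    server_isn = random.randrange(2**32)

    # 1. Client -> Server: SYN carrying the client's ISN.
    syn = {"flags": "SYN", "seq": client_isn}

    # 2. Server -> Client: SYN+ACK carrying the server's ISN and
    #    acknowledging the client's ISN + 1.
    syn_ack = {"flags": "SYN+ACK", "seq": server_isn, "ack": syn["seq"] + 1}

    # 3. Client -> Server: ACK of the server's ISN + 1. After these three
    #    segments (1.5 RTTs) both sides agree on the sequence number space.
    ack = {"flags": "ACK", "seq": client_isn + 1, "ack": syn_ack["seq"] + 1}

    return syn, syn_ack, ack

for segment in three_way_handshake():
    print(segment)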
After the connection has been established, TCP manages the reliable exchange of data between the two systems. The algorithms that determine the various retransmission timers have been redefined numerous times. TCP is a sliding-window protocol, and its general principle of flow control is based on managing the advertised window size and the retransmission timeouts, attempting to optimize protocol performance within the observed delay and loss parameters of the connection.
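As one illustration of how a retransmission timer is derived, the sketch below follows the widely used smoothed-RTT estimator standardized in RFC 6298 (the Jacobson/Karels algorithm): each new round-trip-time sample updates a smoothed mean and a variance term, and the retransmission timeout is set from both. The sample RTT values are invented for illustration.

# Sketch of a retransmission-timer estimate in the style of RFC 6298.
# Variable names follow the RFC; the sample RTTs are invented.

ALPHA = 1 / 8   # gain for the smoothed RTT
BETA = 1 / 4    # gain for the RTT variance

def update_rto(srtt, rttvar, sample_rtt):
    """Fold one new round-trip-time measurement into the RTO estimate."""
    if srtt is None:
        # First measurement initializes the estimators.
        srtt = sample_rtt
        rttvar = sample_rtt / 2
    else:
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample_rtt)
        srtt = (1 - ALPHA) * srtt + ALPHA * sample_rtt
    rto = srtt + 4 * rttvar
    return srtt, rttvar, max(rto, 1.0)   # RFC 6298 floors the RTO at 1 second

srtt = rttvar = None
for sample in [0.120, 0.100, 0.300, 0.110]:   # seconds, illustrative samples
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(f"sample={sample:.3f}s  srtt={srtt:.3f}s  rto={rto:.3f}s")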
Tuning a TCP protocol stack for optimal performance over a very low-delay, high-bandwidth LAN requires different settings from those that give optimal performance over a dialup Internet connection, which in turn differ from the requirements of a high-speed wide-area network. Although TCP attempts to discover the delay-bandwidth product of the connection and to optimize its flow rates automatically within the estimated parameters of the network path, some estimates will not be accurate, and the corresponding efforts by TCP to optimize behavior may not be completely successful.
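A rough sense of why one set of window parameters cannot fit every path comes from the delay-bandwidth product, which determines how many bytes must be in flight to keep a path full. The sketch below computes this for three example paths; the bandwidth and RTT figures are assumptions chosen only to show the range involved.

# Sketch: the window needed to keep a path full is the bandwidth-delay
# product. The example paths and their figures are illustrative assumptions.

def window_for_path(bandwidth_bps, rtt_seconds):
    """Bytes that must be in flight to fill the path (bandwidth * delay)."""
    return bandwidth_bps * rtt_seconds / 8

paths = {
    "local LAN (1 Gb/s, 0.5 ms RTT)": (1_000_000_000, 0.0005),
    "dialup (56 kb/s, 150 ms RTT)": (56_000, 0.150),
    "high-speed WAN (100 Mb/s, 100 ms RTT)": (100_000_000, 0.100),
}

for name, (bw, rtt) in paths.items():
    print(f"{name}: ~{window_for_path(bw, rtt):,.0f} bytes of window")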
Another critical aspect is that TCP is an adaptive
flow-control protocol. TCP uses a basic flow-control algorithm of increasing
the data-flow rate until the network signals that some form of saturation level
has been reached (normally indicated by data loss). When the sender receives an
indication of data loss, the TCP flow rate is reduced; when reliable
transmission is reestablished, the flow rate slowly increases again.
If no reliable flow is reestablished, the flow rate backs off further to an initial probe of a single packet, and the entire adaptive flow-control process starts again.
This process has numerous results relevant to service quality. First, TCP behaves adaptively, rather than predictively. The flow-control algorithms are
intended to increase the data-flow rate to fill all available network path
capacity, but they are also intended to quickly back off if the available
capacity changes because of interaction with other traffic, or if a dynamic
change occurs in the end-to-end network path.
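A highly simplified model of this increase-until-loss, back-off-on-loss cycle is sketched below. It captures only the additive-increase/multiplicative-decrease pattern; real TCP stacks add slow start, fast retransmit, and selective acknowledgment, and the path capacity used here is an invented figure.

# Toy model of TCP's adaptive rate control: grow the window until the path
# signals loss, then back off. Not the algorithm of any real TCP stack.

PATH_CAPACITY = 40          # segments the path can hold before dropping
cwnd = 1                    # congestion window, in segments

for rtt in range(30):
    lost = cwnd > PATH_CAPACITY          # loss signals saturation
    if lost:
        cwnd = max(cwnd // 2, 1)         # multiplicative decrease on loss
    else:
        cwnd += 1                        # additive increase per RTT
    print(f"RTT {rtt:2d}: cwnd = {cwnd} segments{'  (loss)' if lost else ''}")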
For
example, a single TCP flow across an otherwise idle network attempts to fill
the network path with data, optimizing the flow rate within the available
network capacity. If a second TCP flow opens up across the same path, the two flow-control algorithms will interact so that each flow stabilizes at approximately half of the available capacity. The objective of the TCP
algorithms is to adapt so that the network is fully used whenever one or more
data flows are present. In design, tension always exists between the efficiency
of network use and the enforcement of predictable session performance. With
TCP, you give up predictable throughput but gain a highly utilized, efficient
network.
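The convergence of two competing flows toward roughly equal shares can be illustrated by extending the same toy model to two windows that grow together and halve together whenever their combined load exceeds the bottleneck capacity. The capacity and the deliberately unequal starting windows are arbitrary assumptions; real interactions are far noisier, but the windows still drift toward parity.

# Sketch of two AIMD flows sharing one bottleneck: both increase by one
# segment per RTT and halve when the combined load exceeds capacity.

CAPACITY = 100                    # bottleneck capacity in segments per RTT
flows = [5, 60]                   # deliberately unequal starting windows

for rtt in range(200):
    if sum(flows) > CAPACITY:
        flows = [max(w // 2, 1) for w in flows]    # both see loss, both halve
    else:
        flows = [w + 1 for w in flows]             # both grow additively

print(f"After 200 RTTs the two windows are {flows[0]} and {flows[1]} segments")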