In Part VI, we dove into the Nagle algorithm - perhaps (or hopefully)
something you'll never see. In Part VII, we get back to "pure" network and
TCP roots as we examine how the TCP receive window interacts with WAN links.
TCP Window Size
Each node participating in a TCP connection advertises its available buffer
space using the TCP window size field. This value identifies the maximum
amount of data a sender can transmit without receiving a window update via a
TCP acknowledgement; in other words, this is the maximum number of "bytes in
flight" - bytes that have been sent, are traversing the network, but remain
unacknowledged. Once the sender has exhausted the receive window, it must
stop and wait for a window update.
The sender transmits a full window, then waits for window updates before
continuing. As these window updates arrive, the sen... (more)
When we think of application performance problems that are network-related,
we often immediately think of bandwidth and congestion as likely culprits;
faster speeds and less traffic will solve everything, right? This is
reminiscent of the recent ISP wars: which is better, DSL or cable modems? Cable
modem proponents touted the higher bandwidth while DSL proponents warned of
the dangers of sharing the network with your potentially bandwidth-hogging
neighbors. In this blog entry, we'll examine these two closely-related
constraints, beginning the series of performance analyses using the ... (more)
In Part II, we discussed performance constraints caused by both bandwidth and
congestion. Purposely omitted was a discussion of packet loss - which is
often an inevitable result of heavy network congestion. I'll use this blog
entry on TCP slow-start to introduce the Congestion Window (CWD), which is
fundamental for Part IV's in-depth review of Packet Loss.
TCP uses a slow-start algorithm as it tries to understand the characteristics
(bandwidth, latency, congestion) of the path supporting a new TCP connection.
In most cases, TCP has no inherent understanding of th... (more)
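Slow-start's probing behavior can be sketched as exponential growth of the Congestion Window (CWD): one segment at first, roughly doubling each round trip until it reaches the slow-start threshold. This is a simplified model of the classic algorithm; the segment counts and threshold below are illustrative, not from the article.

```python
def slow_start_windows(ssthresh_segments: int) -> list:
    """Congestion window size (in segments) at each round trip during
    slow-start, assuming no packet loss along the way."""
    windows = []
    cwd = 1  # start with one segment
    while cwd < ssthresh_segments:
        windows.append(cwd)
        cwd *= 2  # each ACK grows CWD by one segment -> doubling per RTT
    windows.append(min(cwd, ssthresh_segments))
    return windows

print(slow_start_windows(16))  # [1, 2, 4, 8, 16]
```

The takeaway for performance analysis: early round trips carry very little data, so short transfers on long-latency paths spend most of their time in slow-start rather than at the path's full bandwidth.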
We know that losing packets is not a good thing; retransmissions cause
delays. We also know that TCP ensures reliable data delivery, masking the
impact of packet loss. So why are some applications seemingly unaffected by
the same packet loss rate that seems to cripple others? From a performance
analysis perspective, how do you understand the relevance of packet loss and
avoid chasing red herrings?
In Part II, we examined two closely related constraints - bandwidth and
congestion. In Part III, we discussed TCP slow-start and introduced the
Congestion Window (CWD). In Part IV, we'... (more)
In Part IV, we wrapped up our discussions on bandwidth, congestion and packet
loss. In Part V, we examine the four types of processing delays visible on
the network, using the request/reply paradigm we outlined in Part I.
Server Processing (Between Flows)
From the network's perspective, we allocate the time period between the end
of a request flow and the beginning of the corresponding reply flow to server
processing. Generally speaking, the server doesn't begin processing a request
until it has received the entire flow, i.e., the last packet in the request
message; similarly, th... (more)
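The measurement just described reduces to a timestamp subtraction in a packet capture: server processing time is the gap between the last packet of the request flow and the first packet of the corresponding reply flow. The capture times below are hypothetical, purely for illustration.

```python
def server_processing_time(request_pkt_times, reply_pkt_times):
    """Seconds between the end of the request flow (last request
    packet seen) and the start of the reply flow (first reply packet)."""
    return min(reply_pkt_times) - max(request_pkt_times)

# Request packets captured at t=0.000..0.012 s; reply begins at t=0.212 s
delay = server_processing_time([0.000, 0.004, 0.012], [0.212, 0.216])
print(f"{delay * 1000:.0f} ms server processing")  # 200 ms
```

Note that this is the network's view of "server time": it lumps together queuing, application logic, and any back-end calls the server makes before its first reply packet hits the wire.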