We know that losing packets is not a good thing; retransmissions cause
delays. We also know that TCP ensures reliable data delivery, masking the
impact of packet loss. So why are some applications seemingly unaffected by
the same packet loss rate that seems to cripple others? From a performance
analysis perspective, how do you understand the relevance of packet loss and
avoid chasing red herrings?
In Part II, we examined two closely related constraints - bandwidth and
congestion. In Part III, we discussed TCP slow-start and introduced the
Congestion Window (CWND). In Part IV, we'll focus on packet loss, building
on the concepts from those two entries.
TCP ensures reliable delivery of data through its sliding window approach to
managing byte sequences and acknowledgements; among other things, this
sequencing allows a receiver to inform the send... (more)
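The cumulative-ACK mechanism behind this sequencing can be sketched in a few lines of Python. This is a toy model, not any real TCP stack: the receiver acknowledges the next byte it expects, so a gap left by a lost segment shows up implicitly in the ACK number.

```python
# Toy model of TCP cumulative acknowledgements (illustrative names only).
def cumulative_ack(received_segments):
    """Given (seq, length) pairs, return the next expected byte number."""
    expected = 0
    for seq, length in sorted(received_segments):
        if seq > expected:   # gap: a segment before this one was lost
            break
        expected = max(expected, seq + length)
    return expected

# Segments 0-999 and 2000-2999 arrived; 1000-1999 was lost.
print(cumulative_ack([(0, 1000), (2000, 1000)]))  # -> 1000
# No gaps: everything through byte 1999 is acknowledged.
print(cumulative_ack([(0, 1000), (1000, 1000)]))  # -> 2000
```

The repeated ACK of 1000 in the first case is what lets the sender detect and retransmit the missing segment.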
In Part II, we discussed performance constraints caused by both bandwidth and
congestion. Purposely omitted was a discussion about packet loss - which is
often an inevitable result of heavy network congestion. I'll use this blog
entry on TCP slow-start to introduce the Congestion Window (CWND), which is
fundamental for Part IV's in-depth review of Packet Loss.
TCP uses a slow-start algorithm as it tries to understand the characteristics
(bandwidth, latency, congestion) of the path supporting a new TCP connection.
In most cases, TCP has no inherent understanding of th... (more)
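The growth pattern slow-start probes with can be sketched as follows. The segment counts and threshold are illustrative (real stacks vary the initial window and ssthresh): CWND roughly doubles each round trip until it reaches the slow-start threshold, then grows about one segment per RTT in congestion avoidance.

```python
def slow_start_cwnd(rtts, init_cwnd=1, ssthresh=64):
    """Segments the sender may have in flight after each round trip:
    CWND doubles per RTT (slow start) until ssthresh, then grows by
    roughly one segment per RTT (congestion avoidance)."""
    cwnd, history = init_cwnd, []
    for _ in range(rtts):
        history.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return history

print(slow_start_cwnd(8))  # [1, 2, 4, 8, 16, 32, 64, 65]
```

The exponential ramp is why a new connection takes several round trips to reach full speed - the cost slow-start pays to learn the path's capacity.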
In Part VI, we dove into the Nagle algorithm - perhaps (or hopefully)
something you'll never see. In Part VII, we get back to "pure" network and
TCP roots as we examine how the TCP receive window interacts with WAN links.
TCP Window Size
Each node participating in a TCP connection advertises its available buffer
space using the TCP window size field. This value identifies the maximum
amount of data a sender can transmit without receiving a window update via a
TCP acknowledgement; in other words, this is the maximum number of "bytes in
flight" - bytes that have been sent, are traver... (more)
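This "bytes in flight" ceiling translates directly into a throughput limit: the window divided by the round-trip time. A quick back-of-the-envelope helper (a sketch assuming a fixed RTT and steady state, ignoring slow-start and loss):

```python
def max_throughput_bps(window_bytes, rtt_seconds):
    """With at most window_bytes in flight per round trip, throughput
    cannot exceed window / RTT, regardless of link bandwidth."""
    return window_bytes * 8 / rtt_seconds

# The classic 64 KB window on a 50 ms WAN round trip:
print(max_throughput_bps(65535, 0.050) / 1e6)  # ~10.5 Mbps
```

On a gigabit WAN link with that RTT, a 64 KB receive window caps a single connection at roughly 1% of the link's capacity - which is why window scaling matters so much on long, fat networks.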
In Part V, we discussed processing delays caused by "slow" client and server
nodes. In Part VI, we'll discuss the Nagle algorithm, a behavior that can
have a devastating impact on performance and, in many ways, appear to be
a... (more)
Common TCP ACK Timing
Beyond being important for (reasonably) accurate packet flow diagrams,
understanding "normal" TCP ACK timing can help in the effective diagnosis of
certain types of performance problems. These include those introduced by the
Nagle algorithm, which we will discuss here, and application windowing, to be
discussed in Par... (more)
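The damage the Nagle algorithm can do when it meets delayed ACKs can be sketched as a toy timeline. The timer values below are illustrative (a common delayed-ACK timer is on the order of 200 ms): Nagle holds a second sub-MSS segment until the first is acknowledged, while the receiver delays its ACK of a lone segment hoping to piggyback it - so each small write after the first stalls for a round trip plus the delayed-ACK timer.

```python
def write_delay_ms(small_writes, rtt_ms=10, delayed_ack_ms=200):
    """Toy model: time to deliver small_writes sub-MSS application
    writes when the sender uses Nagle and the receiver delays ACKs."""
    t = 0.0
    for i in range(small_writes):
        t += rtt_ms / 2              # segment reaches the receiver
        if i < small_writes - 1:
            t += delayed_ack_ms      # lone segment: ACK is delayed
            t += rtt_ms / 2          # ACK returns, releasing next write
    return t

print(write_delay_ms(1))  # 5.0 ms  - a single write is unaffected
print(write_delay_ms(4))  # 635.0 ms - each extra write stalls ~210 ms
```

Milliseconds of data becoming hundreds of milliseconds of wait is exactly the kind of symptom that ACK-timing analysis exposes.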
In Part IV, we wrapped up our discussions on bandwidth, congestion and packet
loss. In Part V, we examine the four types of processing delays visible on
the network, using the request/reply paradigm we outlined in Part I.
Server Processing (Between Flows)
From the network's perspective, we allocate the time period between the end
of a request flow and the beginning of the corresponding reply flow to server
processing. Generally speaking, the server doesn't begin processing a request
until it has received the entire flow, i.e., the last packet in the request
message; similarly, th... (more)
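From a trace, this allocation is just timestamp arithmetic. A minimal sketch, assuming each captured packet carries a capture timestamp (the function and field names here are illustrative, not from any analysis tool):

```python
def server_processing_s(request_pkts, reply_pkts):
    """Network-visible server processing time: from the last packet of
    the request flow to the first packet of the corresponding reply.
    Packets are modeled as (timestamp_seconds,) tuples."""
    return min(t for t, *_ in reply_pkts) - max(t for t, *_ in request_pkts)

request = [(0.000,), (0.012,), (0.025,)]   # request finishes at t=0.025
reply   = [(0.275,), (0.290,)]             # reply begins at t=0.275
print(round(server_processing_s(request, reply), 3))  # 0.25
```

Note that this is the processing time as seen from the capture point; capturing near the client folds one network round trip into the measurement, which is why capture location matters when attributing delay.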