Gary Kaiser

Top Stories by Gary Kaiser

Understanding Application Performance on the Network | Part 2

When we think of network-related application performance problems, bandwidth and congestion often come to mind first as the likely culprits; faster speeds and less traffic will solve everything, right? This is reminiscent of the recent ISP wars over which is better, DSL or cable modems: cable modem proponents touted the higher bandwidth, while DSL proponents warned of the dangers of sharing the network with your potentially bandwidth-hogging neighbors. In this blog entry, we'll examine these two closely related constraints, beginning the series of performance analyses using the framework we introduced in Part I. I'll use graphics from Compuware's application-centric protocol analyzer - Transaction Trace - as illustrations.

Bandwidth

We define bandwidth delay as the serialization delay encountered as bits are clocked out onto the network medium. Most important for pe... (more)
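
As a back-of-the-envelope illustration (mine, not from the original post), the sketch below computes serialization delay as message bits divided by link rate; the 500 KB message size and the link speeds are assumed values chosen to echo the DSL-versus-cable comparison.

    # Serialization (bandwidth) delay: the time to clock a message's bits
    # onto the medium at a given link rate. All values are illustrative.

    def serialization_delay(message_bytes: int, link_bps: float) -> float:
        """Seconds required to place message_bytes onto a link of link_bps."""
        return (message_bytes * 8) / link_bps

    # The same 500 KB reply on three assumed access links:
    for label, bps in [("1.5 Mbps DSL", 1.5e6),
                       ("10 Mbps cable", 10e6),
                       ("100 Mbps LAN", 100e6)]:
        print(f"{label}: {serialization_delay(500_000, bps):.2f} s")

On the assumed 1.5 Mbps link the message alone costs about 2.7 seconds of serialization delay, before any congestion or latency enters the picture.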

Understanding Application Performance on the Network | Part 4

We know that losing packets is not a good thing; retransmissions cause delays. We also know that TCP ensures reliable data delivery, masking the impact of packet loss. So why are some applications seemingly unaffected by the same packet loss rate that cripples others? From a performance analysis perspective, how do you evaluate the relevance of packet loss and avoid chasing red herrings? In Part II, we examined two closely related constraints - bandwidth and congestion. In Part III, we discussed TCP slow-start and introduced the Congestion Window (CWD). In Part IV, we'... (more)
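
As a rough way to think about why identical loss rates can hurt applications so differently, here is a speculative sketch of my own: it assumes a loss recovered by fast retransmit costs about one round trip, while one recovered by a retransmission timeout stalls the sender for a full RTO. The function name, the rates, and the timeout_fraction parameter are illustrative assumptions, not measurements.

    # Rough model (assumptions, not measurements): the cost of a lost packet
    # depends on how it is recovered. Fast retransmit costs roughly one RTT;
    # a retransmission timeout (RTO) stalls the sender far longer.

    def loss_penalty(packets: int, loss_rate: float, rtt: float,
                     rto: float, timeout_fraction: float) -> float:
        """Estimated extra seconds added by packet loss for one transfer."""
        losses = packets * loss_rate
        timeouts = losses * timeout_fraction   # losses recovered via RTO
        fast = losses - timeouts               # losses recovered via fast retransmit
        return fast * rtt + timeouts * rto

    # The same 1% loss rate, very different outcomes (illustrative numbers):
    chatty = loss_penalty(packets=200, loss_rate=0.01, rtt=0.05,
                          rto=1.0, timeout_fraction=0.5)
    bulk = loss_penalty(packets=200, loss_rate=0.01, rtt=0.05,
                        rto=1.0, timeout_fraction=0.1)
    print(f"short flows (often timeouts): ~{chatty:.2f} s added")
    print(f"bulk flow (mostly fast retransmit): ~{bulk:.2f} s added")

The point of the model is only this: small, chatty flows rarely have enough packets in flight to trigger fast retransmit, so the same loss rate costs them far more.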

Understanding Application Performance on the Network | Part 5

In Part IV, we wrapped up our discussions of bandwidth, congestion and packet loss. In Part V, we examine the four types of processing delays visible on the network, using the request/reply paradigm we outlined in Part I.

Server Processing (Between Flows)

From the network's perspective, we allocate the time period between the end of a request flow and the beginning of the corresponding reply flow to server processing. Generally speaking, the server doesn't begin processing a request until it has received the entire flow, i.e., the last packet in the request message; similarly, th... (more)
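
Here is a minimal sketch of that "between flows" allocation, assuming packet timestamps have already been extracted from a capture by other tooling; the helper name and the sample timestamps are hypothetical.

    # "Between flows" server processing: the gap between the last packet of
    # the request flow and the first packet of the corresponding reply flow.

    def server_processing_time(request_pkt_times: list[float],
                               reply_pkt_times: list[float]) -> float:
        """Seconds between end of request flow and start of reply flow."""
        return min(reply_pkt_times) - max(request_pkt_times)

    # Hypothetical capture timestamps (seconds) for one request/reply pair:
    request = [10.000, 10.002, 10.004]   # request flow packets
    reply = [10.254, 10.256, 10.258]     # reply flow packets
    print(f"server processing: {server_processing_time(request, reply) * 1000:.0f} ms")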

Understanding Application Performance on the Network | Part 3

In Part II, we discussed performance constraints caused by both bandwidth and congestion. Purposely omitted was a discussion of packet loss, which is often an inevitable result of heavy network congestion. I'll use this blog entry on TCP slow-start to introduce the Congestion Window (CWD), which is fundamental to Part IV's in-depth review of Packet Loss.

TCP Slow-Start

TCP uses a slow-start algorithm as it tries to understand the characteristics (bandwidth, latency, congestion) of the path supporting a new TCP connection. In most cases, TCP has no inherent understanding of th... (more)
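
To see why slow-start matters for short transfers, here is a simplified model of the exponential growth phase; it assumes an initial window of two segments and a doubling of the window each round trip, and it deliberately ignores ssthresh and loss.

    # Illustrative model of TCP slow-start: the congestion window (CWD)
    # starts small and doubles each round trip until the transfer completes
    # (ssthresh and loss are ignored to keep the sketch minimal).

    def slow_start_rounds(total_segments: int, initial_cwd: int = 2) -> int:
        """Round trips needed to send total_segments under pure slow-start."""
        sent, cwd, rounds = 0, initial_cwd, 0
        while sent < total_segments:
            sent += cwd    # one window of segments per round trip
            cwd *= 2       # exponential growth phase
            rounds += 1
        return rounds

    # A 100-segment reply needs several RTTs even on an uncongested path:
    rtt = 0.05  # assumed 50 ms round-trip time
    rounds = slow_start_rounds(100)
    print(f"{rounds} round trips, roughly {rounds * rtt * 1000:.0f} ms of ramp-up")

Under these assumptions, latency (not bandwidth) dominates the first few hundred milliseconds of the transfer, which is exactly the behavior slow-start analysis is meant to expose.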

Understanding Application Performance on the Network | Part 6

In Part V, we discussed processing delays caused by "slow" client and server nodes. In Part VI, we'll discuss the Nagle algorithm, a behavior that can have a devastating impact on performance and can, in many ways, appear to be a processing delay.

Common TCP ACK Timing

Beyond being important for (reasonably) accurate packet flow diagrams, understanding "normal" TCP ACK timing can help in the effective diagnosis of certain types of performance problems. These include those introduced by the Nagle algorithm, which we will discuss here, and application windowing, to be discussed in Par... (more)
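
Where the Nagle algorithm is confirmed as the culprit, one common (if application-dependent) remedy is to disable it per socket with the TCP_NODELAY option, so small writes go out immediately instead of waiting for an outstanding ACK. A minimal sketch using Python's standard socket module follows; the peer address and request are placeholders, not from the original post.

    # Disable the Nagle algorithm on a socket with TCP_NODELAY. Whether this
    # is appropriate depends on the application's write pattern; it trades
    # fewer coalesced segments for lower per-write latency.

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
    sock.connect(("example.com", 80))  # hypothetical peer
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(sock.recv(200).decode(errors="replace"))
    sock.close()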