Gary Kaiser

Top Stories by Gary Kaiser

Why a discussion around Application Performance Analytics? There's a lot of buzz in this industry around the topic of performance analytics - an informal subset of IT operations analytics (ITOA) - as a solution to the growing mountains of monitoring data and the increasing complexity of application and network architectures. At the same time, there exist many purpose-built performance analysis solutions. Many are domain-centric - server monitoring and network monitoring, for example - while some exhibit a key ITOA characteristic by incorporating and correlating data from multiple sources. Most perform some level of analysis to expose predefined insights.

Application Performance Analytics: Viewed Through a Simple Framework

In this blog, I'll outline a simple analytics framework that illustrates how network and application metrics can be derived from a network probe ("w... (more)

Understanding Application Performance on the Network | Part 3

In Part II, we discussed performance constraints caused by both bandwidth and congestion. Purposely omitted was a discussion about packet loss - which is often an inevitable result of heavy network congestion. I'll use this blog entry on TCP slow-start to introduce the Congestion Window (CWD), which is fundamental for Part IV's in-depth review of Packet Loss.

TCP Slow-Start

TCP uses a slow-start algorithm as it tries to understand the characteristics (bandwidth, latency, congestion) of the path supporting a new TCP connection. In most cases, TCP has no inherent understanding of th... (more)
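To make the slow-start behavior concrete, here is a minimal Python sketch of how the congestion window might grow per round trip. The initial window, slow-start threshold, and round count are illustrative assumptions, not values taken from the blog series.

```python
# Minimal sketch of TCP slow-start growth (illustrative only; the initial
# window and slow-start threshold below are assumptions, not figures from
# the blog series).

def slow_start_rounds(initial_cwnd_segments=2, ssthresh_segments=64, rounds=8):
    """Show how the congestion window (CWD) grows per round trip.

    The window roughly doubles each round trip during slow-start (one extra
    segment per ACK received), then grows linearly once it passes the
    slow-start threshold (congestion avoidance).
    """
    cwnd = initial_cwnd_segments
    history = []
    for rtt in range(1, rounds + 1):
        history.append((rtt, cwnd))
        if cwnd < ssthresh_segments:
            cwnd *= 2      # exponential growth during slow-start
        else:
            cwnd += 1      # linear growth after reaching ssthresh
    return history

if __name__ == "__main__":
    for rtt, cwnd in slow_start_rounds():
        print(f"RTT {rtt}: congestion window = {cwnd} segments")
```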

Understanding Application Performance on the Network | Part 1

As a network professional, one of your newer roles is likely troubleshooting poor application performance. For most of us, our jobs have advanced beyond network "health," towards sharing - if not owning - responsibility for application delivery. There are many reasons for this, more justifiable than the adage that the network is always the first to be blamed for performance problems. (Your application and system peers feel they are first to be blamed as well.) Two related influencing trends come to mind: Increased globalization, coupled with (in fact facilitated by) inexpensive bandwidth me... (more)

Understanding Application Performance on the Network | Part 2

When we think of application performance problems that are network-related, we often immediately think of bandwidth and congestion as likely culprits; faster speeds and less traffic will solve everything, right? This is reminiscent of the recent ISP wars: which is better, DSL or cable modems? Cable modem proponents touted the higher bandwidth, while DSL proponents warned of the dangers of sharing the network with your potentially bandwidth-hogging neighbors. In this blog entry, we'll examine these two closely-related constraints, beginning the series of performance analyses using the ... (more)
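As a rough illustration of the bandwidth constraint, the sketch below estimates the best-case (serialization-only) transfer time for a payload over a few link speeds. The payload size, link rates, and overhead factor are assumptions chosen for illustration, not figures from the blog.

```python
# A minimal sketch of the bandwidth constraint: how long would serialization
# alone take? This is the floor that adding bandwidth can improve; it ignores
# latency, congestion, and TCP dynamics entirely.

def min_transfer_time(payload_bytes, link_bps, protocol_overhead=1.05):
    """Best-case seconds to move a payload across a link of a given rate."""
    bits = payload_bytes * 8 * protocol_overhead
    return bits / link_bps

if __name__ == "__main__":
    payload = 2 * 1024 * 1024          # a 2 MB reply flow (assumed)
    links = [("1.5 Mbps DSL", 1.5e6), ("10 Mbps cable", 10e6), ("100 Mbps LAN", 100e6)]
    for label, rate in links:
        print(f"{label}: {min_transfer_time(payload, rate):.2f} s minimum")
```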

Understanding Application Performance on the Network | Part 5

In Part IV, we wrapped up our discussions on bandwidth, congestion and packet loss. In Part V, we examine the four types of processing delays visible on the network, using the request/reply paradigm we outlined in Part I.

Server Processing (Between Flows)

From the network's perspective, we allocate the time period between the end of a request flow and the beginning of the corresponding reply flow to server processing. Generally speaking, the server doesn't begin processing a request until it has received the entire flow, i.e., the last packet in the request message; similarly, th... (more)
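To show how this allocation could be computed from a packet capture, here is a hypothetical sketch that takes the timestamp of the last request packet and the first reply packet and reports the gap as server processing time. The Packet class and the sample timestamps are invented for illustration, not data from the blog.

```python
# A minimal sketch of the measurement described above: from the network's
# vantage point, "server processing" is the gap between the last packet of a
# request flow and the first packet of the corresponding reply flow.

from dataclasses import dataclass

@dataclass
class Packet:
    timestamp: float   # seconds since capture start
    direction: str     # "request" or "reply"

def server_processing_time(packets):
    """Return the delay the network would attribute to server processing."""
    last_request = max(p.timestamp for p in packets if p.direction == "request")
    first_reply = min(p.timestamp for p in packets if p.direction == "reply")
    return first_reply - last_request

if __name__ == "__main__":
    capture = [
        Packet(0.000, "request"),
        Packet(0.004, "request"),   # last packet of the request flow
        Packet(0.254, "reply"),     # first packet of the reply flow
        Packet(0.258, "reply"),
    ]
    print(f"Server processing: {server_processing_time(capture) * 1000:.0f} ms")
```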