Welcome!

Gary Kaiser



Top Stories by Gary Kaiser

IT's Little Secret: Too Little End-User Experience (EUE) Data Use

Many of us are grappling with the modern demands of digital business: developing new mobile apps, evaluating security in the face of IoT, moving to hybrid clouds, testing approaches to defining networks through software. It's all part of the hard trend toward service-oriented IT, with a primary goal of delivering a premium end-user experience (EUE) to all your users - internal, partners, and customers - with the speed, quality and agility the business demands. How do you meet these elevated expectations? As modern data centers evolve rapidly to meet these agility demands, network and application architectures are becoming increasingly complex, complicating efforts to understand service quality from infrastructure and application monitoring alone. Virtualization can obscure critical performance visibil... (more)

Transaction-Centric NPM | @CloudExpo #APM #DevOps #Microservices

Transaction-Centric NPM: Enabling IT Operations/Development Collaboration In my last post, I wrote about the value of IT/business collaboration, and the importance of a common language, a common definition of end-user experience - user transaction response time - as the one performance metric both IT and business have in common. In it, I provided some background on the importance of understanding exactly how we define response time, since this definition dictates the usefulness of the measurement. For the sake of brevity, I'll summarize three common definitions here: Session-la... (more)
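The "end-to-end" sense of user transaction response time described above can be illustrated with a minimal sketch: timing a transaction the way the end user experiences it, from request submission to final response. The function and the stand-in transaction here are hypothetical, not from any monitoring product.

```python
import time

def measure_response_time(transaction):
    """Time a user transaction end to end, as a stopwatch (or the
    end user) would see it. 'transaction' is any callable standing
    in for the real operation."""
    start = time.perf_counter()
    transaction()
    return time.perf_counter() - start

# Hypothetical transaction: a 50 ms sleep stands in for a real request.
elapsed = measure_response_time(lambda: time.sleep(0.05))
```

The point of the sketch is the definition, not the mechanism: whichever tool produces the number, usefulness depends on it covering the full span the user actually waits through.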

Understanding Application Performance on the Network | Part 5

In Part IV, we wrapped up our discussions on bandwidth, congestion and packet loss. In Part V, we examine the four types of processing delays visible on the network, using the request/reply paradigm we outlined in Part I. Server Processing (Between Flows) From the network's perspective, we allocate the time period between the end of a request flow and the beginning of the corresponding reply flow to server processing. Generally speaking, the server doesn't begin processing a request until it has received the entire flow, i.e., the last packet in the request message; similarly, th... (more)
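The measurement rule described here - server processing is the gap between the last packet of the request flow and the first packet of the reply flow - can be sketched in a few lines. The packet representation below (timestamp/direction tuples) is an assumption for illustration, not any capture tool's API.

```python
# Illustrative only: a capture reduced to (timestamp, direction) tuples,
# where direction is "req" (client -> server) or "rep" (server -> client).

def server_processing_time(packets):
    """Return the time allocated to server processing: the gap between
    the end of the request flow and the start of the reply flow."""
    last_request = max(t for t, d in packets if d == "req")
    first_reply = min(t for t, d in packets if d == "rep")
    return first_reply - last_request

capture = [
    (0.000, "req"),  # first request packet
    (0.012, "req"),  # last packet of the request message
    (0.162, "rep"),  # server begins replying 150 ms later
    (0.175, "rep"),
]
delay = server_processing_time(capture)  # ~0.150 seconds
```

Note this matches the caveat in the text: the interval only approximates true server processing, since the server generally cannot start work until the entire request has arrived.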

Understanding Application Performance on the Network | Part 2

When we think of application performance problems that are network-related, we often immediately think of bandwidth and congestion as likely culprits; faster speeds and less traffic will solve everything, right? This is reminiscent of the recent ISP wars: which is better, DSL or cable modems? Cable modem proponents touted the higher bandwidth while DSL proponents warned of the dangers of sharing the network with your potentially bandwidth-hogging neighbors. In this blog entry, we'll examine these two closely related constraints, beginning the series of performance analyses using the ... (more)
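As a quick sanity check on the bandwidth constraint discussed above, the floor on transfer time is simply payload bits divided by link speed; a hedged sketch (the function name and figures are illustrative, not from the original series):

```python
def transfer_seconds(payload_bytes, link_bps):
    """Minimum time to serialize a payload onto a link: bits / bandwidth.
    Ignores latency, protocol overhead, and congestion, so it is a
    floor, not a prediction."""
    return payload_bytes * 8 / link_bps

# A 5 MB download over a 10 Mbps link vs. a 100 Mbps link.
slow = transfer_seconds(5_000_000, 10_000_000)   # 4.0 seconds
fast = transfer_seconds(5_000_000, 100_000_000)  # 0.4 seconds
```

The gap between this floor and observed transfer time is where congestion (queuing on shared links) and the other constraints in this series come in.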

Understanding Application Performance on the Network | Part 3

In Part II, we discussed performance constraints caused by both bandwidth and congestion. Purposely omitted was a discussion about packet loss - which is often an inevitable result of heavy network congestion. I'll use this blog entry on TCP slow-start to introduce the Congestion Window (CWND), which is fundamental for Part IV's in-depth review of Packet Loss. TCP Slow-Start TCP uses a slow-start algorithm as it tries to understand the characteristics (bandwidth, latency, congestion) of the path supporting a new TCP connection. In most cases, TCP has no inherent understanding of th... (more)
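The cost of slow-start is easy to see with a toy model: the congestion window starts small and doubles each round trip (absent loss), so early round trips carry very few segments. A minimal sketch, assuming classic slow-start with an initial window of one segment and no loss or receiver-window limit:

```python
def slow_start_rounds(total_segments, initial_cwnd=1):
    """Count round trips needed to deliver 'total_segments' segments
    when the congestion window starts at 'initial_cwnd' segments and
    doubles each RTT (classic slow-start; no loss, no rwnd limit)."""
    cwnd, sent, rounds = initial_cwnd, 0, 0
    while sent < total_segments:
        sent += cwnd   # one window's worth per round trip
        cwnd *= 2      # exponential growth phase
        rounds += 1
    return rounds

# 1 + 2 + 4 + 8 + 16 = 31 segments in 5 round trips.
rounds = slow_start_rounds(31)  # 5
```

This is why short transfers on long-latency paths are dominated by round trips rather than bandwidth: most of the transfer happens before the window has grown large enough to fill the pipe.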