Can Wire Data Be APM?
I recently read something – a blog, a tweet, a LinkedIn article perhaps –
describing the use of wire data to analyze application performance. I
remember that the author’s use of the term “APM” in this context caused
one reader to comment, complaining that “you can’t call wire data APM.”
This was around the same time I referred casually to Dynatrace’s wire data
offering (Data Center Real User Monitoring, or DC RUM) as both “APM for IT
Operations” and “probe-based APM.” So that complaint has stuck with me,
prompting me to ask – and offer an answer to – the question.
It depends, of course, on answers to related questions. How do you define
APM? What role does APM play in your organization? What APM insights can wire
data provide? Let’s take a brief look at each of these.
What is APM to you?
In very general terms (Wikipedia is great for this), APM ... (more)
In Part IV, we wrapped up our discussions on bandwidth, congestion and packet
loss. In Part V, we examine the four types of processing delays visible on
the network, using the request/reply paradigm we outlined in Part I.
Server Processing (Between Flows)
From the network's perspective, we allocate the time period between the end
of a request flow and the beginning of the corresponding reply flow to server
processing. Generally speaking, the server doesn't begin processing a request
until it has received the entire flow, i.e., the last packet in the request
message; similarly, th... (more)
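The measurement described above can be sketched in a few lines. This is an illustrative example, not DC RUM's implementation: it assumes we have already classified captured packet timestamps into a request flow and its corresponding reply flow, and it simply measures the gap between the last request packet and the first reply packet.

```python
def server_processing_time(request_pkts, reply_pkts):
    """Estimate server processing delay from the network's perspective.

    Each argument is a list of packet capture timestamps (in seconds)
    belonging to one message flow. The server-processing interval runs
    from the end of the request flow (its last packet) to the beginning
    of the reply flow (its first packet).
    """
    last_request = max(request_pkts)  # last packet of the request message
    first_reply = min(reply_pkts)     # first packet of the reply message
    return first_reply - last_request

# Hypothetical capture: the request finishes arriving at t=1.250s
# and the reply begins at t=1.600s.
delay = server_processing_time([1.100, 1.180, 1.250], [1.600, 1.610])
print(f"Server processing: {delay:.3f} seconds")
```

Note that this attributes the entire gap to the server; in practice, some of that interval may include delays the capture point cannot see, which is why where you capture matters.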
Transaction-Centric NPM: Enabling IT Operations/Development Collaboration
In my last post, I wrote about the value of IT / business collaboration, and
the importance of a common language, a common definition of end-user
experience - user transaction response time - as the one performance metric
both IT and business have in common. In it, I provided some background on the
importance of understanding exactly how we define response time, since this
definition dictates the usefulness of the measurement. For the sake of
brevity, I'll summarize three common definitions here:
As a network professional, one of your newer roles is likely troubleshooting
poor application performance. For most of us, our jobs have advanced beyond
network "health," towards sharing - if not owning - responsibility for
application delivery. There are many reasons for this, more justifiable than
the adage that the network is the first to be blamed for performance problems.
(Your application and system peers feel they are first to be blamed as well.)
Two related influencing trends come to mind:
Increased globalization, coupled with (in fact facilitated by) inexpensive
bandwidth me... (more)
In Part II, we discussed performance constraints caused by both bandwidth and
congestion. We purposely omitted a discussion of packet loss, which is often
an inevitable result of heavy network congestion. I'll use this blog entry on
TCP slow-start to introduce the Congestion Window (CWND), which is fundamental
to Part IV's in-depth review of packet loss.
TCP uses a slow-start algorithm as it tries to understand the characteristics
(bandwidth, latency, congestion) of the path supporting a new TCP connection.
In most cases, TCP has no inherent understanding of th... (more)
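The probing behavior of slow-start can be illustrated with a short sketch. This is a simplified model, not a real TCP stack: it assumes no packet loss, one ACK per segment (so CWND roughly doubles each round trip), and a hypothetical slow-start threshold (ssthresh) at which growth switches to the linear congestion-avoidance phase.

```python
def slow_start(initial_cwnd=1, ssthresh=64, rtts=8):
    """Return the congestion window (in segments) at the start of each
    round trip, under the simplifying assumptions noted above."""
    cwnd = initial_cwnd
    history = []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd < ssthresh:
            # Slow-start phase: each ACKed segment grows cwnd by one
            # segment, doubling the window every round trip.
            cwnd = min(cwnd * 2, ssthresh)
        else:
            # Congestion avoidance: roughly one extra segment per RTT.
            cwnd += 1
    return history

print(slow_start())  # [1, 2, 4, 8, 16, 32, 64, 65]
```

The exponential ramp in the first few round trips is why short-lived connections on high-latency paths often never reach the link's available bandwidth, a theme that matters for the packet loss discussion in Part IV.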