Gary Kaiser

IT Operations and Digital Disruption | @DevOps Summit #APM #DevOps

Are you driven primarily by infrastructure and application performance metrics, or by end-user experience?

Transaction-Centric NPM: Enabling IT/Business Collaboration

One of the consequences of digital disruption is that IT is propelled much closer to users, who expect applications and services to be available and to perform well anytime, anywhere, on any device. Communicating with the business now more than ever requires communicating with these users, and effective communication requires a clear understanding of their experience with the application services you deliver. But how well do you understand end-user experience? To answer that question, let's first define end-user experience as the transaction response time a user receives from an application service; "click to glass" seems to be the term in vogue. Alternatively, you can look at the question from a different perspective: How is the quality of your application services measured? Are you driven primarily by infrastructure and application performance metrics, or by end-user experience?
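As a concrete illustration of "click to glass," the sketch below times a complete request/response cycle from the client side. This is a minimal approximation only, not how any particular APM product measures it: the URL is hypothetical, and a real end-user measurement would also include browser rendering time, which a simple HTTP timer cannot see.

```python
import time
import urllib.request

def measure_transaction(url, timeout=10):
    """Approximate 'click to glass': the wall-clock time from issuing
    a request to receiving the complete response body."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read()  # include full payload delivery in the measurement
    return time.monotonic() - start

# Hypothetical usage against one of your own endpoints:
# elapsed = measure_transaction("https://shop.example.com/checkout")
```

Note that the timer brackets the entire exchange, not just the first byte; from the user's perspective, the transaction isn't done until the whole response arrives.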

Whether your data center supports customer-facing applications that directly generate revenue or enterprise applications that automate and manage critical business processes - or, increasingly, both - application performance matters. It is the way your service quality is perceived by your users - consumers or employees. Business owners understand this intrinsically; poor ecommerce site performance results in lost revenue and damaged loyalty, while internally, poor application performance leads to decreased productivity and lost business opportunities.

APM's Three Aspects
Application performance monitoring can be segmented into three aspects, or disciplines:

  • Device monitoring provides critical visibility into the health of the infrastructure components - the servers, disks, switches, routers, firewalls, desktops, etc. required to deliver application services.
  • Application monitoring provides visibility into critical application components, such as application containers, methods, databases, APIs, etc.
  • End-user experience monitoring, within its performance context, provides visibility into user productivity, and provides a top-down business-centric perspective of both device and application performance.

It's clear that the first two aspects - device and application monitoring - are fundamentally important. We should also be able to agree that, to answer our earlier question about service quality, you must also measure the end-user's experience. When users complain, they speak in terms of transaction speed, not network latency, interface utilization, database queries, Java Server Pages, or CPU speed. (Well, if you're on the network team, you'll claim they often will say the network is slow, but we generally know better.)

The user's experience can be considered the intersection between business metrics (productivity) and IT metrics (device and application); it's the one metric both groups have in common.

Most of us are pretty good at device and application monitoring; this is often not the case when we consider end-user experience monitoring. So what are the penalties if you're not measuring end-user experience?

  • You won't know users are having a problem until they call you (unless something catastrophic happens).
  • You will chase after problems that don't affect users (because you're monitoring dozens, or hundreds, of metrics of varying impact).
  • You won't have a description of the problem that matches your metrics (and therefore don't have a validated starting point for troubleshooting).
  • You won't know when or if you've resolved the problem (without asking the users).

At Cisco Live this week, an IT manager told me of his frustration with all-too-frequent 3 a.m. infrastructure or application alerts: should he get up and investigate? He had no idea if the problem of the moment had any impact on users. Only by adopting end-user experience monitoring was he able to qualify and prioritize his response.

Don't We Already Measure End-User Experience?
It's true that many applications - particularly those based on Java and .NET platforms - may already be instrumented with APM agents, some of which provide exactly this insight into end-user experience. However:

  • These APM solutions are often not used by operations teams.
  • Not all Java and .NET apps will be instrumented (and if you're not using Dynatrace, you might only be sampling transactions).
  • Many application architectures don't lend themselves to agent-based instrumentation.

IT operations teams therefore usually rely on more traditional infrastructure-centric device and network monitoring solutions. The rise of application awareness (primarily in Application Aware Network Performance Monitoring - AA NPM - solutions, but also in device management offerings) has given IT varying degrees of insight into application behavior - and sometimes a degree of insight into application performance. However, without visibility into end-user experience, without a user transaction-centric starting point, these tools do little to foster the communication and collaboration we mentioned earlier.

As I pointed out in a recent webcast, "Top Five Benefits of Transaction-Centric NPM," AA NPM solutions are generally quite limited in their ability to measure actual end-user experience, especially across a broad range of application architectures. Instead, these tools use key infrastructure measurements such as network latency, packet loss, and jitter as indicators or hints of application performance as experienced by end users. These metrics may be quite meaningful to IT specialists, but they aren't end-user experience and don't provide a basis for effective communication.
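The gap between infrastructure indicators and actual end-user experience is easy to illustrate with a hedged sketch. The numbers below are invented for illustration: per-packet network round trips look uniformly healthy, while the click-to-glass transaction times reveal a serious tail problem that the network metrics never hint at.

```python
import math
import statistics

def percentile(samples, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[max(0, rank)]

# Illustrative (made-up) measurements for one application, one interval:
network_rtt_ms = [12, 11, 13, 12, 14, 12, 11, 13]            # per-packet round trips
transaction_ms = [295, 300, 305, 310, 315, 330, 1800, 2400]  # click-to-glass times

avg_rtt = statistics.mean(network_rtt_ms)   # infrastructure view: looks healthy
p95_txn = percentile(transaction_ms, 95)    # user view: a severe tail problem
```

An operations team watching only the round-trip average would see a network performing well within tolerance; a team watching the transaction-time tail would see users waiting seconds for responses. That divergence is why infrastructure metrics alone can't stand in for end-user experience.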


More Stories By Gary Kaiser

Gary Kaiser is a Subject Matter Expert in Network Performance Analytics at Dynatrace, responsible for DC RUM’s technical marketing programs. He is a co-inventor of multiple performance analysis features, and continues to champion the value of network performance analytics. He is the author of Network Application Performance Analysis (WalrusInk, 2014).
