
Tuesday, December 27, 2011

IT/EV-Charts as an Application Signature: CMG'11 Trip Report, Part 1


I have attended the following CMG’11 presentation (see my previous post):

Application Signature: A Way to Identify, Quantify and Report Change
Richard Gimarc (CA Technologies, Inc.) and Kiran Chennuri (Aetna Life Insurance Company)

Identifying change in application performance is a time consuming task. Businesses today have
hundreds of applications and each application has hundreds of metrics. How do you wade
through that mass of data to find an indication of change? This paper describes the use of an
Application Signature to identify, quantify and report change. A Signature is a compact
description of application performance that is used much like a template to judge if a change has
occurred. There are a concise set of visual indicators generated by the Signature that supports
the identification of change in a timely manner.

Here are my comments.

I like the idea of building an application characteristic called an Application Signature. As described in the paper, it is actually based on typical (standard) deviations of capacity usage during the peak hours of a day.

Looking closely at the approach, I see it is similar to the one I developed for SEDS, but it is a bit too simplified. Anyway, it is a great attempt to use the SEDS methodology to watch application capacity usage.

I think the weekly IT-CONTROL CHART (see my other previous post) is a way to compare the usual weekly profile with the last 168 hours of data (baseline vs. actual), so the baseline in the format of an IT-Control Chart without the actual data IS AN APPLICATION SIGNATURE, only in a much more accurate form. It even looks like somebody's signature:

The actual data could be significantly different, as seen below:

That difference should be automatically captured by a SEDS-like system as exceptions, and the size of the difference from the "Signature" can be calculated using the EV meta-metric, either as a weekly sum of each hour's EV value or as an EV-Control Chart like the one shown here.
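To make the mechanics concrete, here is a minimal sketch (in Python, my own illustration rather than a production SEDS implementation) of the two steps: building the 168-hour signature (baseline) from a few past weeks of hourly CPU data, and then summing the hourly EV values for the most recent week. The data layout (a dict keyed by week and hour-of-week) and the 3-sigma control limits are assumptions made just for this example.

```python
# Minimal sketch of the IT-Signature / EV idea (illustrative assumptions:
# hourly CPU data keyed by (week, hour_of_week), 3-sigma control limits).
from statistics import mean, stdev

HOURS_PER_WEEK = 168  # 24 hours x 7 days

def build_signature(hourly_cpu, n_weeks, n_sigmas=3.0):
    """Baseline ("signature"): mean, UCL and LCL for each hour of the week,
    computed over the past n_weeks of hourly CPU-usage samples."""
    signature = {}
    for hour in range(HOURS_PER_WEEK):
        samples = [hourly_cpu[(week, hour)] for week in range(n_weeks)]
        m, s = mean(samples), stdev(samples)
        signature[hour] = {
            "mean": m,
            "ucl": m + n_sigmas * s,            # upper control limit
            "lcl": max(0.0, m - n_sigmas * s),  # lower control limit, not below zero
        }
    return signature

def hourly_ev(actual, limits):
    """Exception Value for one hour: how far the actual value falls
    outside the control limits (zero if it stays inside)."""
    if actual > limits["ucl"]:
        return actual - limits["ucl"]
    if actual < limits["lcl"]:
        return limits["lcl"] - actual
    return 0.0

def weekly_ev(actual_week, signature):
    """Weekly EV meta-metric: sum of the 168 hourly EV values
    (actual last week vs. the signature)."""
    return sum(hourly_ev(actual_week[h], signature[h]) for h in range(HOURS_PER_WEEK))
```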

For instance, in this example week the application took a bit more than 23 unusual CPU-hours, as calculated below:

So, if the weekly EV number is 0, it means that most recently the application (or server, LPAR and so on) stayed within the IT-Signature, which is GOOD: no changes happened!

The paper also shows the "calendar view" report, which consists of a set of daily control charts. It is another good idea. I used to use that approach before I switched to weekly IT-Charts that cover 1/4 of a month, or bi-weekly ones that cover 1/2 of a month. So if you have IT-Charts, there is no need for the "calendar view", which is sometimes not easy to read.

Another feature could be important for capacity usage estimates: the balance of hourly capacity usage over the day or week vs. the overall average (e.g. weekdays vs. weekends, or the daily "cowboy hat" profile with its lunch-time drop). That could be an additional IT-Signature feature. Another CMG'11 paper presented an interesting approach to analyzing and calculating that; I plan to publish my comments about that paper, so please check my next post soon...
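As an illustration of what I mean by that balance (a sketch of my own, not the other paper's method), the hourly profile can be expressed as the ratio of each hour's baseline mean to the overall weekly average, reusing the signature structure from the sketch above:

```python
from statistics import mean

def hourly_balance(signature):
    """Ratio of each hour-of-week's baseline mean to the overall average;
    values above 1 mark the busy hours, values below 1 the quiet ones
    (weekends, nights, the lunch-time drop)."""
    overall = mean(signature[hour]["mean"] for hour in range(168))
    return {hour: signature[hour]["mean"] / overall for hour in range(168)}
```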

Tuesday, December 6, 2011

Application Signature: some of my SEDS ideas are at work

I am at the CMG'11 conference now (in DC), presenting nothing this year (the first time in the last 11 years!), but I am enjoying the conference, especially when my work is referenced.

Here is an example from the paper called "Application Signature: A Way to Identify, Quantify and Report Change", which is being presented today at 4 pm by Richard Gimarc from CA Technologies, Inc. and Kiran Chennuri from Aetna Life Insurance Company:

'...We readily admit that we are “standing on the shoulders of giants”; leveraging the work of others in the field to develop our own interpretation, implementation and use of an Application Signature....
... Perhaps the most influential work is by Igor Trubin. Starting in 2001, Trubin built on the ideas proposed by Buzen and Shum to develop the Statistical Exception Detection System (SEDS). Basically, SEDS “is used for automatically scanning through large volumes of performance data and identifying measurements of global metrics that differ significantly from their expected values”. Again, we see common ground with our use of an Application Signature. The points we leverage from Trubin’s work are:
  • Identify when performance metrics exceed or fall below expectation
  • Note and record the exceptions
  • Estimate the size of each exception rather than just recording its occurrence
  • Use control charts as a visual tool for examining current performance versus expected performance
 ...
What do you do when a change is identified?
  • Quantify the change. Does your current measurement exceed the Signature by 5%, or 100%? We are considering implementing a technique similar to what was described by Trubin.
  • Grade the change as either good or bad. If a metric increases, is that an indication of a bad change? Not always. Consider workload throughput; an increase in workload throughput is probably a good change. We need to find a way to customize each Application Signature metric to recognize and highlight both good and bad changes.
  • Develop a historical record of changes. Again, this is an idea developed by Trubin. A historical record will provide the application development and support staff with a quantitative description of sensitive application characteristics that may warrant improvement. 
...'
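The "quantify and grade" idea from the quote above could look something like this small sketch (my own illustration of the percentage comparison the authors describe, not their implementation; the per-metric direction-of-goodness flag is an assumption):

```python
def grade_change(actual, signature_value, higher_is_better=False):
    """Percent deviation of the current measurement from the signature value,
    graded as 'good' or 'bad' depending on which direction is desirable for
    this metric (e.g. throughput up = good, response time up = bad)."""
    percent = (actual - signature_value) / signature_value * 100.0
    if percent == 0.0:
        return percent, "no change"
    good = (percent > 0) == higher_is_better
    return percent, "good" if good else "bad"
```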
Some other authors' work is referenced as well. I need to read it carefully and will report on it here in other posts. Looking forward to attending that presentation!

Richard and Kiran, thank you for referencing my work!