The more you read, study, implement, collect, and analyze metrics and data, the more your interest in web analytics grows.
Data is abundant, but you need to be careful about what exactly to look for. That's the key. An interesting case study I am currently thinking about: how does an IT staff measure the benefit achieved by creating knowledge base articles, quick pointers to common issues, and a help webpage? If one user stays on the web page for 50 minutes and another for 5 minutes, which one is delivering benefit to the company? And which one is exiting the website happy?
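A minimal sketch of how one might start probing this from raw page-view logs (the log format, the pages, and the visitor IDs below are all hypothetical, invented for illustration, not the output of any particular analytics tool): compute dwell time per visit, then notice that the number alone cannot say whether 5 minutes means "found the answer fast" or 50 minutes means "deeply engaged".

```python
from datetime import datetime

# Hypothetical page-view log: (visitor_id, timestamp, page).
# In reality this would come from your analytics tool's raw export.
log = [
    ("u1", "2010-01-06 10:00:00", "/kb/reset-password"),
    ("u1", "2010-01-06 10:05:00", "/exit"),                # left after 5 min
    ("u2", "2010-01-06 11:00:00", "/kb/reset-password"),
    ("u2", "2010-01-06 11:50:00", "/kb/reset-password"),   # still there at 50 min
]

def dwell_times(log):
    """Minutes spent on each page, per visitor."""
    by_visitor = {}
    for visitor, ts, page in log:
        stamp = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
        by_visitor.setdefault(visitor, []).append((stamp, page))
    times = {}
    for visitor, views in by_visitor.items():
        views.sort()
        # The last view of a session has no "next" view, so its dwell
        # time is unknowable -- a classic web-analytics caveat.
        for (t1, page), (t2, _) in zip(views, views[1:]):
            times[(visitor, page)] = (t2 - t1).total_seconds() / 60
    return times

print(dwell_times(log))
# {('u1', '/kb/reset-password'): 5.0, ('u2', '/kb/reset-password'): 50.0}
# 5 minutes followed by an exit may mean "found the answer fast";
# 50 minutes may mean deep engagement -- or a lost, frustrated user.
# Dwell time alone cannot tell you which visit benefited the company;
# you need an outcome signal (e.g. a "was this helpful?" click).
```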
Think
Wednesday, January 6, 2010
Tuesday, January 5, 2010
My International Publications Reach 5 :)
1) Jain, A. (2009): “E3 – 3 effective ways of increasing test coverage”, International Conference of Information Systems and Software Engineering, India 2009.
2) Jain, A. (2009): “Retrospective Analysis and Prioritization Areas for Beta Release Planning Improvement”, 27th Annual Pacific Northwest Software Quality Conference (PNSQC), USA, 2009.
3) Jain, A. (2009): “Sprint Retrospective Checklist – Method and Mechanism to Track Project Health and Dev‐QE Goal”, 9th International Software Testing Conference (STC), India 2009.
4) Jain, A. (2009): “Well Processed, Well Done”, Software Process Improvement and Capability Determination Conference, Finland, TUCS General Publication No. 54, 2009.
5) Jain, A. (2008): “Power of Glide Path: Statistical Approach for Controlling and Adding Predictability in a Testing Project”, 8th International Software Testing Conference (STC), India 2008.
Enter the Mysterious World of Web Analytics
That's my latest love on the technology front: Web Analytics, a hot, happening, and interesting subject. I would like to share the 10/90 rule as called out by Avinash Kaushik (courtesy: Avinash's blog). I will be sharing more on web analytics practicalities, optimization opportunities, and building your website better for each user.
Goal: Highest value from Web Analytics implementation.
Cost of analytics tool & vendor professional services: $10.
Required investment in “intelligent resources/analysts”: $90.
Bottom line for magnificent success: It's the people.
The rule works quite simply. If you are paying your web analytics vendor (Omniture, WebTrends, ClickTracks, CoreMetrics, HBX, etc.) $25,000 for an annual contract, you need to invest $225,000 in people to extract value from that data. If you are actually paying Omniture, WebTrends, HBX, etc. $225,000 each year, then… well, you can do the math.
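The arithmetic behind the rule is simply a 1:9 split; a trivial sketch of it (the 9x multiplier comes straight from the rule itself, everything else is illustration):

```python
def people_investment(tool_cost):
    """The 10/90 rule: for every $10 spent on tools and vendors,
    spend $90 on analysts -- i.e., 9x the tool cost on people."""
    return 9 * tool_cost

# The example from the rule: a $25,000 annual vendor contract
# implies a $225,000 investment in people.
assert people_investment(25_000) == 225_000
```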
Most people reading this post probably think this is way overblown or silly or just plain stupid. I can understand that. Here are some of the reasons I have come to formulate this rule:
If your website has more than 100 pages and you get more than 10,000 visitors a month, you can imagine the complexity of the interactions happening on your website. Add in marketing campaigns, a dynamic site, SEM, more pages, more traffic, promotions, and offers, and you have a very tough situation to understand.
Most web analytics tools will spew out data like there is no tomorrow. It seems to be a rat race: one vendor says "I can do 100 reports", the next says 250, and the one after that says "I can measure the eye color of people who look at your web pages", and on and on. The bottom line is that it will take a lot of intelligence to figure out what is real in all this data, what is noise, and what, if anything in the canned reports, is meaningful.
It is a given that if you open most web analytics tools, they show the exact same metrics, almost all of them measured and computed differently! You are going to have to sort this out.
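To make that concrete, here is a small illustration (the session data and both definitions are made-up examples of common industry variants, not the formulas of any specific vendor): "bounce rate" counted as single-page sessions versus sessions under 10 seconds gives different numbers on the same traffic.

```python
# Hypothetical sessions: (pages_viewed, duration_in_seconds).
sessions = [(1, 5), (1, 40), (1, 90), (3, 8), (2, 120)]

# Variant A: a "bounce" is a single-page session.
bounce_a = sum(1 for pages, _ in sessions if pages == 1) / len(sessions)

# Variant B: a "bounce" is any session shorter than 10 seconds.
bounce_b = sum(1 for _, secs in sessions if secs < 10) / len(sessions)

print(f"Bounce rate, single-page definition: {bounce_a:.0%}")  # 60%
print(f"Bounce rate, under-10s definition:   {bounce_b:.0%}")  # 40%
# Same traffic, same metric name, two defensible definitions,
# two different numbers -- someone has to sort this out.
```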
Finally, actionable web insights (or, as I have now copyrighted them: KIAs, key insights analysis) do not come simply from clickstream. You are going to need people who are smart and have business acumen, people who can tie clickstream behavior to other sources of data, information, and company happenings.
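As a sketch of what "tying clickstream to other sources" can look like in practice (both datasets, the article paths, and the counts are hypothetical): join page-level traffic with support-ticket volume from a separate helpdesk system, so the views start to say something about business outcomes.

```python
# Hypothetical clickstream aggregate: article -> monthly page views.
page_views = {
    "/kb/reset-password": 4200,
    "/kb/install-guide": 900,
    "/kb/billing-faq": 150,
}

# Hypothetical support-ticket counts for the same topics, pulled
# from a separate system (CRM / helpdesk), keyed the same way.
tickets = {
    "/kb/reset-password": 30,
    "/kb/install-guide": 250,
    "/kb/billing-faq": 10,
}

# Tie the two together: many views with few tickets suggests the
# article is deflecting support load; many tickets despite the views
# suggests the article is not actually answering the question.
for article, views in page_views.items():
    n_tickets = tickets.get(article, 0)
    print(f"{article}: {views} views, {n_tickets} tickets, "
          f"{n_tickets / views:.3f} tickets per view")
```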
Wednesday, November 12, 2008
10 habits continued...
Recently I submitted this topic to a conference and am waiting to see if it will be published :)
Anyway, moving on to the habits that make a tester inefficient, here we discuss the next habit.
// This is a hypothetical portrayal, as if an interview were being conducted with an inefficient tester.
Habit # 3: Ask yourself: “Why should I Test?”
This habit motivates me to ask the very first question whenever I start a project. The question is very thought-provoking and pushes my senses to think again and again about why I am testing at all. I am surrounded by lots and lots of good friends and effective developers. Believe me, I trust all my friends: I trust my developer friends, I trust my designers, I trust my product management folks. This trust makes me doubt whether my testing is needed. I trust all the people contributing to the software development life cycle, I trust their activities and contributions, and as their friend, I should not be identifying faults in their deliverables.
When I think of these points, I get my answer to the question that comes up as soon as a new project arrives: why should I test, when I know my experienced friend has developed the code and already unit tested it?
Thursday, September 18, 2008
10 habits of a highly inefficient tester
1) Believing in assumptions: A core aim of testing is to defy assumptions. If the tester holds assumptions (and, worse, never seeks clarification around them), that is very dangerous.
Testing means trying different scenarios where some bugs may or may not be expected.
If the tester assumes that the developer is right, or that the code has already been tested once, or that the end customer does not use this platform, or that "this application has been running for the last 10 months, why would it crash now"... then you can be sure the product will be left untested (see the sketch after the next habit).
2) Not using one's thinking power: if the tester cannot imagine scenarios, if he cannot put himself in the user's place, then he is not an efficient tester...
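A minimal sketch of what defying an assumption can look like in code (the function under test and every input here are hypothetical, invented only to illustrate the habit): instead of checking only the scenario the developer assumed, try the scenarios the assumption quietly excludes.

```python
# Hypothetical function under test. The developer's assumption:
# "every full name is exactly two space-separated words."
def split_name(full_name):
    first, last = full_name.strip().split(" ", 1)
    return first, last

# The inefficient tester checks only the assumed scenario and stops:
assert split_name("Ada Lovelace") == ("Ada", "Lovelace")

# Defying the assumption means also trying what it quietly excludes.
# Each scenario may or may not surface a bug -- that is the point.
tricky_inputs = [
    "Madonna",             # one word: unpacking raises ValueError
    "  Jean  van Damme ",  # extra spaces: last name keeps a leading blank
    "",                    # empty string: also raises ValueError
]
for name in tricky_inputs:
    try:
        print(repr(name), "->", split_name(name))
    except ValueError as err:
        print(f"possible bug: {name!r} crashed with: {err}")
```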
The remaining habits will be shared later... Enjoy reading!