Archive for the Agile Metrics Category

Individual Performance in Agile Team, Assessment and Individual Burndown Charts

Posted in Agile, Agile Metrics, People Management on January 7, 2009 by vikramadhiman

Every few months, on almost any Agile discussion thread, the issue of assessing individual performance comes up: checking how each person is performing, identifying whether someone is performing well or badly, how to do appraisals, and so on. Most people respond by saying that at the heart of these discussions is a “desire to control as well as a fear of not being able to channelize”. Some of the posts where authors take this viewpoint are Review Process for Agile Team Member, Abolishing Performance Appraisals and Performance Management Trouble. I will not take that viewpoint in this post. I will tackle the issue from the standpoint that yes, there is a problem: there is an erring employee and now you need to take action.

One of the key things with this premise [there is an erring employee] is to be factually correct as well as to appear correct. The first is most often dealt with by making the expectations clear to the appraisee, sometimes in writing, along with effective methods of measuring performance against those expectations. The general problem here is communication. Managers are often myopic about what to expect from people: they lay down expectations that push a person to chase personal brownie points rather than help the team grow, or to become too much of a good samaritan, hardly growing himself/herself. The challenge in an Agile team [or indeed any team] is to achieve this balance. Hence, your expectations would be a mix of individual as well as team behavior. Once this is achieved, the next challenge is to get the team member to agree to your expectations. There is always the “either you agree or you go” strategy; you can leverage this only if you feel you have drafted fair expectations and the employee is lowering standards unreasonably. Once the buy-in is there and your expectations are clear and accepted, the bigger challenge awaits: collecting data. The best data is the data you collect automatically, not the data others enter in spreadsheets; manually entered data is easy to fake and practically invites users to fudge it. If automatic collection is not possible, the next best bet is to use Jeff Sutherland’s technique: basically, tweak 360 degree feedback into something better and more meaningful.

If you can get both the above points going, it will be as transparent as clean air who is performing and who is not, or better still, who is an asset and who is not. Now you have two options: firing or improvement. I personally try the latter. This means you have to really get to the heart of why someone is performing badly [the parameters on which the score is bad lead you to this further analysis]. Once someone is beyond repair, either they will have got the hint already or you just need to do the needful.
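The post’s title mentions individual burndown charts; as a minimal sketch of the “collect data automatically” idea, the snapshots below stand in for per-person remaining-work figures exported from a task tracker. All names and numbers are hypothetical illustrations, not a prescribed format.

```python
# Minimal sketch: build a per-person burndown from task-tracker snapshots.
# The data here is hypothetical; in practice it would be exported
# automatically from a tracker rather than typed into a spreadsheet.
from collections import defaultdict

# (person, day, hours_remaining_on_their_tasks)
snapshots = [
    ("asha", 1, 40), ("asha", 2, 34), ("asha", 3, 30),
    ("ravi", 1, 40), ("ravi", 2, 39), ("ravi", 3, 38),
]

burndown = defaultdict(dict)
for person, day, remaining in snapshots:
    burndown[person][day] = remaining

for person, series in burndown.items():
    days = sorted(series)
    burned = series[days[0]] - series[days[-1]]
    print(person, [series[d] for d in days], "burned:", burned)
```

A flat or barely-sloping individual line is exactly the kind of signal that should trigger a conversation, not an automatic verdict.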

All things said and done, I believe, and my own experience suggests, that there is a better technique to manage individual and team performance. I call it the SCO technique: Small Team, Close Team, Open Team.

  • Small Team : Not more than 9 people, with a manager who does not manage 99% of the time
  • Close Team : The team generally works free of interference from the outside environment
  • Open Team : You can sense when there is a problem and discuss it openly

Managers know their people better in this case, and you can have a heart-to-heart talk and decide the future course of action. But this is not without its problems: people can sometimes become so close that they cannot make the distinction between their relationship and their work. It’s funny how this is also the crux of the Bhagavad Gita – you should know what your karma is and do it, not allowing anything you long for or love to interfere. If the team, and particularly the manager, is able to follow this, it is a good model. My guess is most people are not strong enough or evolved enough to follow it; hence, they keep a distance, do not let go, and hide behind setting expectations and measuring them.

The Nokia Test for Scrum

Posted in Agile Metrics, SCRUM on October 18, 2008 by vikramadhiman

Probably the most popular method for a team to assess at what level [if at all] they are doing Scrum [arguably the most popular Agile framework] is the “Nokia Test”. The test was first conceived by Certified Scrum Trainer Bas Vodde in 2005. It is simple [as you can see below] and strict [almost 80% of teams claiming to do Scrum do not even pass Level 1]. The team just answers Yes/No to these questions [any ambiguity means No]:

LEVEL 1

  • Do your sprints start and end on planned dates? [some teams ask: Are iterations time-boxed to less than six weeks?]
  • Is the software completely tested and working at the end of an iteration? [some teams would ask: Is there a definition of shippable code that everyone agrees on and abides by to declare a feature done?]
  • Can the iteration start before the specification is complete?

LEVEL 2

  • Does the team know who the product owner is?
  • Is there a product backlog prioritized by business value?
  • Does the product backlog have estimates created by the team?
  • Does the team generate its burndown charts and know its velocity?
  • Is the team free of outside people disrupting its work during the sprint?

The Nokia Test is not a bible, and there have been criticisms of it [like this one from InformIT]. However, it may be noted that the Nokia Test was devised at Nokia [keeping the work culture at Nokia in mind]; it might not work for everyone. That it is minimal and worked for all the teams at Nokia is a good indication, and as long as teams take it as Step 1 of their goal [rather than the goal itself], it should be a good starting point. You can also use slightly elaborated versions like the Nokia Test for Scrum Certification and the Nokia Test Scoring System.
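The pass/fail logic above is mechanical enough to sketch in code. The snippet below is a hypothetical illustration of Nokia-Test-style scoring, not the official scoring system; the question texts are paraphrased from the lists above, and the last Level 2 question is phrased so that a clear Yes is the good answer.

```python
# Minimal sketch of Nokia-Test-style scoring: any ambiguity counts as No,
# and a level passes only if every question at that level is a clear Yes.
LEVELS = {
    1: ["sprints start/end on planned dates",
        "software tested and working at end of iteration",
        "iteration can start before specification is complete"],
    2: ["team knows the product owner",
        "backlog prioritized by business value",
        "backlog estimates created by the team",
        "team generates burndown charts and knows velocity",
        "no outside disruption during the sprint"],
}

def nokia_level(answers):
    """answers maps question text -> True only for an unambiguous Yes."""
    reached = 0
    for level in sorted(LEVELS):
        if all(answers.get(q, False) for q in LEVELS[level]):
            reached = level
        else:
            break  # levels are cumulative; a failed level stops the climb
    return reached
```

A team that clears Level 1 but misses any Level 2 question scores 1, which matches the strict spirit of the test.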

Agile Metrics – Technical/ Code Quality Measures

Posted in Agile Metrics on April 1, 2008 by vikramadhiman
I have been discussing with a few friends their favorite Agile metrics. The metrics can be divided into the following categories:
  • Technical/ Code related measures
  • Business Perspective
  • Process Perspective
Here is a list we have come up with to measure Code Quality:
  • Code Coverage : Code coverage is a measure used in software testing. It describes the degree to which the source code of a program has been exercised by tests. Because it inspects the code directly, it is a form of white-box testing
  • Cohesion : In computer programming, cohesion is a measure of how strongly-related and focused the various responsibilities of a software module are. Cohesion is an ordinal type of measurement and is usually expressed as “high cohesion” or “low cohesion” when being discussed. Modules with high cohesion tend to be preferable because high cohesion is associated with several desirable traits of software including robustness, reliability, reusability, and understandability whereas low cohesion is associated with undesirable traits such as being difficult to maintain, difficult to test, difficult to reuse, and even difficult to understand.
  • Low Coupling : In computer science, coupling or dependency is the degree to which each program module relies on each of the other modules. Coupling is usually contrasted with cohesion: low coupling often correlates with high cohesion, and vice versa.
  • Cyclomatic Complexity: Cyclomatic complexity is a software metric (measurement). It is used to measure the complexity of a program. It directly measures the number of linearly independent paths through a program’s source code.
  • Number of unit test cases per feature, method or class : A minimum of 4 test cases per feature [positive, negative, stress, and system interaction]
  • Defect Density : Number of defects per line of code [often normalized per thousand lines, i.e. per KLOC]
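Defect density is simple arithmetic; here is a minimal sketch using the common per-KLOC normalization (the 12-defects/8,000-lines figures are made up for illustration):

```python
# Minimal sketch: defect density normalized per thousand lines of code (KLOC).
def defect_density(defects, lines_of_code):
    """Defects per KLOC; lines_of_code must be positive."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return defects / (lines_of_code / 1000)

# e.g. 12 defects found in a hypothetical 8,000-line module
print(defect_density(12, 8000))  # 1.5 defects per KLOC
```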
Here are some tools you can explore further:
  • .Net – NDepend
  • Java – JDepend
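Of the measures above, cyclomatic complexity is the easiest to compute yourself. As a rough sketch (a common approximation, not the tools’ exact algorithm), it can be estimated from Python source by counting decision points:

```python
# Minimal sketch: estimate McCabe cyclomatic complexity of Python source
# by counting decision points (approximation: 1 + number of branches).
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    complexity = 1  # one path exists even with no branches
    for node in ast.walk(tree):
        if isinstance(node, ast.BoolOp):
            # 'a and b and c' adds len(values) - 1 decision points
            complexity += len(node.values) - 1
        elif isinstance(node, BRANCH_NODES):
            complexity += 1
    return complexity

src = (
    "def f(x):\n"
    "    if x > 0 and x < 10:\n"
    "        return 1\n"
    "    return 0\n"
)
print(cyclomatic_complexity(src))  # 3: base path + if + boolean operand
```

Dedicated tools report the same kind of number per method, which is what makes the metric easy to track sprint over sprint.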

Agile Metrics – Part I

Posted in Agile Metrics on July 26, 2007 by vikramadhiman

In the recent past we have been wondering about what metrics to track. Our aim from the metrics study is to:
a. Improve the process by which things get done
b. Foster project collaboration practices and environment
c. Create an energized and informative workspace

Our current thinking is that we must track the following metrics:

a. Project Backlog – does it exist, when was it last updated, does it reflect current realities, does everyone have access to it, does everyone contribute to it, are changes in different versions recorded?
b. Sprint Backlog – mostly the same as above, but is it accompanied by sprint retrospectives for previous sprints and sprint plans for previous as well as current sprints?
c. Sprint Plans – are developer availability [hours] and total work output documented and communicated to the management and the client?
d. Retrospectives – how often are they done, who attends them, and what actions are taken on identified items?
e. Engineering Practices – how does the team actually build software, how much of the workspace is automated, how does information flow, what percentage of code is driven by unit tests?
f. Trainings – how often is the team trained in new soft and hard skills?
g. Innovation – what did the team innovate, and how?
h. Knowledge Management – how did the learning translate into knowledge, and was this knowledge available and used?
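Since much of the list above is really a set of yes/no sub-questions, one low-ceremony way to track it is as plain data. This is a hypothetical sketch (the sub-questions are paraphrased and the scores invented), not a prescribed tool:

```python
# Minimal sketch: record the checklist above as data and report the gaps.
# Metric names follow the list above; sub-questions and answers are
# hypothetical illustrations.
checklist = {
    "Project Backlog": {"exists": True, "updated recently": True,
                        "everyone has access": False},
    "Retrospectives":  {"held every sprint": True,
                        "actions taken on items": False},
}

def gaps(checklist):
    """Return (metric, sub_question) pairs that are not yet satisfied."""
    return [(metric, q)
            for metric, answers in checklist.items()
            for q, ok in answers.items() if not ok]

for metric, question in gaps(checklist):
    print(f"{metric}: improve '{question}'")
```

Even this crude form answers the “how to measure” question for the binary items; the harder metrics [innovation, knowledge use] still need human judgement.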

We have tried to keep it simple and not unnecessarily complicated. However, even the above seems like quite a handful to measure. A team member also pointed out that “how to measure” is an interesting aspect of the debate as well.

Part I is rarely a destination; it’s just the start of a journey. Here is hoping for an exciting and adventure-filled journey.