Wednesday, June 10, 2009

How to measure Tester/QE Performance?

I recently read an article by Cem Kaner in which he talked about measuring the performance of testers. I am sharing my thoughts on this topic.

“We cannot measure the performance of a quality engineer/test engineer.” We often hear such statements from project managers or QA managers. So the question arises: how can we measure the performance of a QE?

We could consider “bug count” as a measure of QE performance, but consider a situation where we assign two different modules, Module A and Module B, to two testers. Note the following points about the assigned modules.

· Module A is quite mature: it has gone through a stable release, so it has certainly been tested thoroughly already.
· The developer who worked on this module is quite competent; he covered all major scenarios and spent a good amount of time unit testing his module.

On the other hand, Module B has the following characteristics:
· This module has been pushed to the QA department for the first time.
· The module was developed with many assumptions, and the requirements were not clear to the developers.
· Due to a strict deadline, the development team could not run unit tests.

There may be other aspects to these modules as well, but we can see that Module A will obviously show a lower bug count than Module B. So can we make any judgment about the performance of the testers working on these modules? I think certainly not.
I think we can instead measure a tester's performance, efficiency, and ability by considering the following points.


1. Ability to find bugs as early as possible in the SDLC.

2. Ability to write effective, clear bug reports.

3. Ability to convince developers and management about a particular bug.

4. Working with developers without ego (find bugs in the program, not in the programmer).

5. Ability to write good test plans and test strategies.

6. Ability to lead a QA team.

7. Good knowledge of different concepts such as database optimization, application security, and web applications, and, most importantly, of the framework the product is based on.

8. Ability to find out-of-the-box test scenarios.

9. Innovative attitude and willingness to improve the quality of the product.

10. A good code-breaking attitude.

11. Ability to write automation scripts.
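To make point 11 concrete, here is a minimal sketch of the kind of data-driven automated check a tester might write. The validation rule and function names are hypothetical, invented purely for illustration:

```python
# Hypothetical system-under-test: a username validation rule
# (3-20 alphanumeric characters). Invented for this example.
def is_valid_username(name: str) -> bool:
    return name.isalnum() and 3 <= len(name) <= 20


def run_checks() -> list:
    """Run a small table of boundary-value cases; return inputs that failed."""
    cases = [
        ("abc", True),        # lower boundary: exactly 3 chars
        ("a" * 20, True),     # upper boundary: exactly 20 chars
        ("ab", False),        # below lower boundary
        ("a" * 21, False),    # above upper boundary
        ("user name", False), # space is not alphanumeric
    ]
    failures = []
    for value, expected in cases:
        if is_valid_username(value) != expected:
            failures.append(value)
    return failures


if __name__ == "__main__":
    print("failing cases:", run_checks())
```

Even a tiny table-driven script like this exercises the boundary values a manual pass can easily miss, and it can be re-run on every build.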

We can judge QE performance by analyzing these points. One thing is quite certain: it is not possible to measure QE performance by bug count alone.
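One way to turn such a multi-point analysis into a number is a weighted scorecard. The sketch below is purely illustrative: the criteria names, weights, and sample scores are all invented, not a prescribed scheme:

```python
# Hypothetical weights for a subset of the criteria above
# (relative importance on an arbitrary scale).
WEIGHTS = {
    "finds bugs early": 3,
    "bug report quality": 3,
    "test planning": 2,
    "domain/framework knowledge": 2,
    "automation skills": 2,
}


def overall_score(scores: dict, weights: dict = WEIGHTS) -> float:
    """Weighted average of per-criterion scores (each on a 0-5 scale)."""
    total_weight = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total_weight


# Invented sample review for one tester.
sample = {
    "finds bugs early": 4,
    "bug report quality": 5,
    "test planning": 3,
    "domain/framework knowledge": 4,
    "automation skills": 2,
}

if __name__ == "__main__":
    print(round(overall_score(sample), 2))  # prints 3.75
```

The point is not the particular weights but that a review built from several criteria is far harder to game than a single raw bug count.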

Looking forward to your comments :)

5 comments:

  1. Nice Post Kashif.

    I would like to ask a couple of questions.
    Q. What is your take on statistical testing methods?
    Q. What do you think is the scope of model based testing like Markov Model being used to test products?

    Keep up the good work.

    ReplyDelete
  2. Thanks Zahid,

    I will definitely answer your queries in my next posts.

    Keep in touch:)

    ReplyDelete
  3. While I agree with most of what you said, I thought I should add something. Regarding your first point, I think if you're part of a Scrum team it becomes hard to find good bugs early in the Sprint. Also, the ability to bridge the gap between the user model and the system model is helpful. Further, the ability to identify areas that need automation (likely not 100% of software can be automated). Also, IMO, testers should actively take part in grooming the product backlog (though that's not their job). Overall, it's a great post and it got me thinking about the issue. Keep posting.

    ReplyDelete
  4. I think these criteria aren't tangible enough; they only focus on non-functional skills! I think what's needed is a report in terms of physical statistics concerning the bugs, and broad consideration of all aspects of the item/CR the tester is assigned to, which means there must be an experienced reviewer of the test-case creation and the test execution!

    ReplyDelete
  5. Thanks Nabwy for your valuable feedback. I believe points 1, 2, and 5 are quite tangible criteria.

    What are your thoughts?

    ReplyDelete