Wednesday, August 26, 2009
"The best tester isn’t the one who finds the most bugs or who embarrasses the most Programmers. The best tester is the one who gets the most bugs fixed."
Enough for today's post :) — looking forward to your feedback.
Saturday, August 22, 2009
- Finding the same silly bugs year after year, whatever program you are testing: leaving an input box blank, finding that 99999999999 overflows, etc.
- Calling testers "QA" (Quality Assurance); QA is a process, not a job title.
- Unreliable test environments, where most of my time and effort is spent identifying environment issues rather than product issues.
- Code that isn't architected in a testable manner, forcing testers to write integration tests when simple unit tests could find the same bugs.
- Testers who get comfortable with what they already know and stop pushing themselves to learn more.
- Developers believing they know HOW to test, WHEN to test and WHAT to test. (Note: not aimed at ALL devs, just some of them. And if they do know testing, why do we find so many bugs?)
- Delivery date stays the same when development time spills over (way over).
- No/Bad/Clueless responses from project managers.
- Sometimes we view ourselves as second-class citizens.
- The perception that a career in testing is somehow of less value.
- The belief that successful testing is about passing tests.
- Testers testing on a developer's PC, and bad lab environment for testing.
- Generating and sharing meaningless reports/metrics.
- Waiting on the development team to deliver a release.
- The testing team not being informed about changes.
- Testing time being crunched because development ran over.
You can read the detailed thread here
Tuesday, August 18, 2009
In my last post, I described "kaizen", a continuous improvement process. Now the question is how we can introduce "kaizen" into our lives, and particularly into software development. It is always good practice to review your work, whether you are working in agile mode or in some other incremental approach. Some organizations produce a Post Project Report; in agile, we have sprints, and product owners often call for a sprint review meeting.
Let's suppose we faced a critical issue at the end of a sprint while deploying our application to production: one of the key components was throwing an exception on the server but working fine in our environment with the same code. The team suffered a lot due to this issue. Suppose we solved the problem on a priority basis in order to complete our sprint; now is the time to analyze the issue using a five-why analysis.
Why was the component not working in the production environment?
The component depended on a utility whose files (DLLs, EXEs, etc.) were not installed on the server.
Why was the utility not installed on the server?
During server setup, the team missed this utility because it was not mentioned in the checklist prepared for the setup.
Why was it not in the checklist?
Because the utility's code had not been checked into the code repository by the development team, and the person who prepared the checklist listed only the components available in the repository.
Why was it not checked into the code repository?
Because the developer did not push the utility to the code repository; he ran his unit tests on his own machine and was himself unaware of the dependency.
How did the testing team miss that dependency?
Because the utility had been deployed to the QA server a long time ago, so the testing team was unaware of it.
After these questions, we concluded that:
- There should be a properly reviewed installation checklist for server setup.
- Before production, there should be a pre-production server, where the deployment team sets up the environment by working from the code repository and executing the checklist.
This is just one example; there are many scenarios. By conducting a five-why analysis at the end of each milestone, we can close our gaps, and if this practice continues, so does our improvement process.
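The second conclusion above can even be partially automated. As a minimal sketch (all component names and the checklist itself are invented for illustration; a real deployment script would query the actual server and repository), comparing the reviewed checklist against what is actually installed catches exactly the gap from the five-why example:

```python
# Hypothetical sketch: verify a server against an installation checklist,
# in the spirit of the five-why conclusions above.

def missing_components(checklist, installed):
    """Return checklist items that are not present on the server."""
    return sorted(set(checklist) - set(installed))

# Invented example data: the utility from the story is on the checklist
# but was never installed on the server.
checklist = ["web_app.dll", "scheduler.exe", "report_utility.dll"]
installed = ["web_app.dll", "scheduler.exe"]

gaps = missing_components(checklist, installed)
if gaps:
    print("Deployment blocked, missing:", gaps)
```

Run against a pre-production server first, a check like this turns the checklist from a document people can forget into a gate the deployment cannot pass without.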
You can read more about the five whys here
Thursday, August 13, 2009
Kaizen is a daily activity, the purpose of which goes beyond simple productivity improvement. It is also a process that, when done correctly, humanizes the workplace, eliminates overly hard work, and teaches people how to perform experiments on their work.
In software development, people talk a lot about the incremental approach, especially in agile mode, where we plan sprints and release a product with minimal features, then keep improving the product/application in each sprint. This is where we implement "kaizen" in software development.
There are ways to keep this improvement process going, which I will write about in my coming posts :)
Tuesday, August 11, 2009
We compile various status reports during our testing phases: the number of test cases executed, the number failed or passed, and so on. These status reports are quite important to QA managers and project managers for deciding things like release timing, but they do not reflect the business impact of the bugs found during the testing cycle.
So one type of data that would be of value is direct business impact statistics like:
1. Sales missed due to bugs
2. Customer renewal rate and relative cost of defects
3. Defect density
This type of information is like gold to decision-makers. A brief description of defect density and containment is given below.
Defect Density: the number of bugs relative to the size of a particular project module or phase. For example, if a requirements spec for a project is 120 pages and 52 defects were logged against that spec, the defect density is 0.43 defects/page. Similarly, you can measure defect density per thousand lines of code, or per object or module design.
Containment: This measures the success of preventing errors from propagating. As we all know, the earlier a bug is identified, the lower the cost of fixing it. Measuring containment, therefore, can tell us where we can save money and time.
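The two metrics above reduce to simple ratios. This sketch reproduces the worked example from the text (52 defects against a 120-page spec); the containment figures are invented sample numbers, since the post defines containment without giving an example:

```python
# Defect density and containment as described above.

def defect_density(defects, size):
    """Defects per unit of size (pages, KLOC, modules, ...)."""
    return defects / size

def containment_rate(found_before_release, total_defects):
    """Fraction of defects caught before they propagated to release."""
    return found_before_release / total_defects

# The example from the text: 52 defects against a 120-page spec.
print(round(defect_density(52, 120), 2))   # 0.43 defects per page

# Hypothetical containment: 8 of 10 defects caught before release.
print(containment_rate(8, 10))             # 0.8
```

The same `defect_density` call works for any size unit, so density per KLOC is just `defect_density(defects, kloc)`.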
You can read the detailed article at http://utest.com
Friday, August 7, 2009
- Getting access to all the material that can provide input on the intended product behavior. This can include input provided to the developer to build the application, a feature list, a brief write-up on application capabilities, partially written use cases, etc.
- Knowledge transfer from product owners and subject matter experts (SME).
- Gaining insights into the capabilities needed from the product under test by exploration of comparable products, reading user manuals and user documentation, exploring help text via user interface and taking an application tour through the GUI.
- The testing team should be well trained to scope the work, ask the right questions, and deliver a valuable output within a few hours. Exploratory test engineers should be able to analyze a product, think critically to evaluate risks and craft test cases that systematically explore the product.
Saturday, August 1, 2009
Definition Of Testing
It is quite important that the purpose and objective be clear before starting any activity, and the same is true of software testing. People use different definitions of testing; e.g., we come across statements like:
"Testing is the process of making sure that errors are not present in the software."
"The purpose of testing is to show that the program is performing its intended functionality."
"Testing is the process of establishing confidence that a program does what it is supposed to do."
If we analyze these statements, we notice that they reflect the exact opposite of the objective of software testing. Testing is an activity that consumes human resources and effort, so one should ask: what should the outcome of software testing be?
Consider the definition "Testing is the process of making sure that errors are not present in the software." This definition has a psychological effect: no software is error free, so the goal is impossible to achieve. When testers start with this mindset, it will not end well, because psychology studies show that humans perform poorly at a job they know is infeasible or impossible. Defining software testing as a process of uncovering errors makes it a feasible task, and thus overcomes the psychological problem.
In the same way, consider the definition "The purpose of testing is to show that the program is performing its intended functionality." It is quite evident that errors are present if the software is not doing what it is supposed to do, but errors can also be present if the software is doing what it is not supposed to do.
Correct Definition of Testing
The outcome of software testing should be reliable software. Software can only be reliable when its errors have been found and removed, and errors can only be found when testing starts from the assumption that the software contains them. If we start testing with this frame of mind, testing can add a lot of value to software/application development. So the correct definition of software testing can be stated as:
"Testing is the process of executing the program with the intent of finding errors."
Here human psychology comes into play: humans are highly goal oriented, and establishing a proper goal has an important psychological effect. If testing is done with the goal of showing that the program has no errors, testers will be subconsciously steered towards that goal and will unconsciously select test data with a low probability of finding errors. On the other hand, if testing is done with the goal of showing that errors exist in the application, testers are unconsciously steered towards test cases and test data with a high probability of finding errors, and testing adds great value to the application.
To summarize this vital discussion: software testing is a destructive, even sadistic, process of finding the errors hidden in a program. It is human nature that most of us are geared towards building things rather than ripping them apart, which is why testing is a difficult job.
A successful test case is one that progresses in this direction and finds an error in the software, because the purpose of software testing is to establish confidence that the software does what it is supposed to do, and does not do what it is not supposed to do.
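The mindset difference described above shows up directly in how test data is chosen. As a minimal illustration (the `parse_age` function and its inputs are invented for this sketch, not taken from the post), a tester working with the intent of finding errors attacks the input most likely to break the code rather than confirming the happy path:

```python
# Hypothetical example of testing with the intent of finding errors.

def parse_age(text):
    """Parse an age field; raise ValueError for anything unreasonable."""
    value = int(text)          # non-numeric or blank input raises ValueError
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Destructive test data: blank input, a huge number, a negative, garbage.
# A "successful" test case here is one that exercises the error path.
for bad in ["", "99999999999", "-1", "abc"]:
    try:
        parse_age(bad)
        print("BUG: accepted", repr(bad))
    except ValueError:
        pass  # the input was rejected, as it should be
```

A tester aiming to show the program "works" would feed it "30" and stop; the destructive data above is what actually probes the boundaries where bugs live.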