Agile development and selective implementation of methodologies

Abraham Lincoln said, “If you call a tail a leg, how many legs does a dog have? Four. Because calling it a leg doesn’t make it a leg.”

In the same vein, calling a project team agile does not really make it agile. Let's admit it: "agile" is a buzzword that is cool to use. Sometimes (or should it be oftentimes?) folks pick up the aspects of a methodology that suit them and try to fit them into their existing process, with even worse results than before. And then they wonder why the new methodology isn't working! The same thing happens with the adoption of agile. In this entry we look at some of the agile principles and practices that are easy to selectively pick and choose.

a) An agile team is capable of releasing the software to customers at the end of each iteration (generally two to four weeks long). Yes, able to ship it to the customers: developed, integrated, tested, wrapped up and mailed. Customers get to see working software, with features delivered in increments, and understand the progress being made. Of course, customers can also provide quick feedback to enable any course corrections needed. Decisions can be made on whether to add features, change existing functionality, or even stop further development, without having to wait for the complete release time frame.

When an organization selectively implements this aspect of agile without following the other aspects of the chosen agile methodology, it can lead to compressed release schedules and squeeze activities such as development and testing in order to get a product out sooner. Merely compressing schedules while otherwise following non-agile methods will only lead to worse results than before.

b) Agile does away with distinct phases of development. With agile, you are no longer required to have distinct phases such as coding followed by testing and so on. Developers and testers (along with other required functions) work together in parallel as one team. However, this can easily be misinterpreted as a license to keep coding until the last minute before the release is due. And we can easily imagine what happens when development keeps coding until the end of the release while the rest of the process remains non-agile.

c) In the agile context, the software is a moving target. In agile, change is the norm; developers can add new features or make changes at any time. In traditional methods, testers push for an early code freeze so they have time to test software that is feature-complete. In an agile context this is not usually possible, which poses significant challenges for testing. At the same time, this freedom can easily be misused to let developers keep making arbitrary changes to the software at any time without fully embracing an agile methodology. It isn't hard to imagine the consequences when developers make changes all through the release, right up to the end, while the organization follows a mostly non-agile methodology or a mix of agile and non-agile techniques.

d) Agile values working software over comprehensive documentation. The reduced emphasis on documentation in agile is compensated for by increased face-to-face communication and collaboration. For example, in Scrum (an agile methodology) there are daily stand-up meetings where the team (developers, testers, etc.) gets a common understanding of where each member is, any obstacles to progress, achievements, and so on. Techniques such as retrospective meetings and co-location also foster a level of communication that documents cannot match. It is, however, easy to pick just this one aspect as an excuse to do away with documentation entirely or reduce it to an insignificant activity that sits on the back burner. Needless to say, a traditional methodology minus good documentation is a recipe for a poor-quality product.

e) An agile software development team can add features in any order. Yes, but this can quickly get out of hand if it is not implemented right. In agile development, features are added in the order that makes the most business sense. That is a significant change from allowing developers to choose the features they wish to add; the natural propensity of developers is to add features they think are best, or easy, or cool to build, which may not be the best order for the customers or the business. Given that there is limited time and there are limited resources to deliver a product with a set number of features, the best thing to do is to add the features that have the most relevance to the customer and the business within the available time. Focus on adding the important features (from the customer's point of view) as early as possible, leaving the less important features towards the end of the project. That way, if the project runs out of time or resources, the features that get dropped from the release are the ones of lesser priority. A small sketch of this idea follows.
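Here is a minimal sketch of that ordering idea in Python. The feature names, business-value scores and estimates are hypothetical, purely to illustrate planning by value rather than by developer preference:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    business_value: int  # e.g. 1 (low) to 10 (high), as judged by the product owner
    estimate: int        # effort in story points

# Hypothetical backlog; in practice the values come from the business/customer.
backlog = [
    Feature("export-to-csv",  business_value=4, estimate=5),
    Feature("single-sign-on", business_value=9, estimate=8),
    Feature("dark-mode",      business_value=2, estimate=3),
    Feature("audit-logging",  business_value=7, estimate=5),
]

capacity = 13  # story points the team expects to complete before the deadline

# Take the highest-value features first; whatever does not fit is what gets dropped.
planned, remaining = [], capacity
for feature in sorted(backlog, key=lambda f: f.business_value, reverse=True):
    if feature.estimate <= remaining:
        planned.append(feature.name)
        remaining -= feature.estimate

print(planned)  # ['single-sign-on', 'audit-logging'] with the numbers above
```

If the schedule shrinks, only the low-value tail of the backlog is affected; the features the customer cares about most are already in.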

These are some of the agile principles that are easy to pick and adopt, albeit incorrectly and inappropriately. As Alexander Pope's poem puts it, "A little learning is a dang'rous thing; / Drink deep, or taste not the Pierian spring: / There shallow draughts intoxicate the brain, / And drinking largely sobers us again."
***

Software Testing & Buying stuff to create a new habit!

I enjoy reading and do not stop at software testing or technical subjects. My reading covers a variety of topics and sources, including other blogs, books, magazines and websites. One of the topics in this mix is personal finance. Recently, while reading on this subject, I came across an article about "buying stuff to create a new habit". It struck a chord, and I could relate to what was being said.

The article talks about a human tendency to buy stuff in the desire to create a new way of life or a new habit. For example, let's say that one day you feel drawn towards exercising. You decide to work out at the gym regularly, so you enroll in a gym that requires a minimum membership of, say, six months. At that moment you are very passionate about exercising and pay up the membership fees. You plan to get up early each morning and hit the gym for about an hour of fitness training.

Early the next morning you are all perked up and ready to begin a new routine. The first day at the gym feels great and you think enrolling was a good decision; this is something you want to do regularly from now on. If you are like most of us regular people, after a week or two you might find that it becomes harder to get out of bed in the morning and hit the gym. You find that you have a lot of tasks that require your attention and very little time to spend exercising. Or some other reason causes you to postpone going to the gym to another day. You might think, "After all, I am just skipping the gym for a day. I'll make up for it tomorrow; probably do a few extra sets of exercises." Before long your passion and interest in exercising wanes, and before you realize it you are busy with something else.

In hindsight, you might observe that you spent a significant sum of money on the gym membership (plus probably on other paraphernalia, such as suitable clothing and shoes for working out) without really having used it much. You have lost a lot of money chasing a passing interest. Of course, there are exceptions, and some folks continue to pursue such interests with sustained passion. On average, though, people tend to engage in similar activities (not necessarily exercising; it could be any of a range of other passions) that affect their financial well-being without bringing in any real benefit.

There are many such examples and they are not hard to find. Look at your own life and see if you can find instances where you paid money or bought stuff in the hope that you would get better at something, or be someone else, or acquire a new habit. For example, have you spent money on expensive sporting gear hoping that it would improve your game? Or enrolled for courses or programs in the hope that you would somehow be miraculously transformed into whatever you were hoping to be? Or purchased some equipment or tools with a similar hope in mind? I am sure that if we look hard enough, such instances will show up. This period of fascination with our new attraction is termed the "honeymoon period". Because of our focus on the new activity or project, we tend to think we will want to continue it for a long time to come. Consequently, we feel the urge to equip ourselves for the long haul and "invest" in outfitting ourselves appropriately.



If you are still reading this and wondering why I am talking about personal finance on a testing blog, worry not. I am drawing a parallel to how organizations behave with regard to tools used in software testing, and test automation in particular. The organization may have set lofty goals for test automation and is looking to obtain a tool. It may be that a vendor has sold the decision makers on the virtues of their automation tool and convinced them of the extraordinary (probably bordering on the supernatural) capabilities of their automation software.

Sales pitches may expound the tool's "simple record and play" automation capabilities, its ability to drastically cut test cycle time from weeks or days to a few hours, to reduce or even eliminate the need for human testers, to automate all of the tests, to deliver a "one-click" automation suite in very little time irrespective of the complexity and nature of the software being tested, and many other entertaining claims. What happens next is that, without much due diligence, the organization pays significant sums of money to procure licenses for the tool, and the mandate goes out across the testing group to start automating with the shiny new toy. What are the chances that the tool will meet the requirements of the testing group, be compatible with the software being tested, have a short and easy learning curve, be able to automate the existing tests (web based, client-server, GUI/CLI/API, etc.), handle the volume and load of real-world testing, support the software's requirements (multiple browsers and versions in the case of web-based software, different OS platforms, databases, environments, etc.), and satisfy the many other needs that determine whether the tool will really be of use or turn out to be a waste of money, time and resources?



From experience, I have come across instances where tools procured by various groups were either under-utilized or remained unused despite the huge sums of money paid for them. Once realization dawns that the purchase was a mistake or not the right choice, it often tends to be a downhill ride from there for automation and for usage of the tool. The impact of choosing an inappropriate tool can be the subject of another blog post. Suffice it to say that procuring a tool without analyzing your specific needs and requirements, evaluating various tools and vendors, performing due diligence, trying out a trial version of the tool against your own product, and setting the right expectations for automation can be an expensive proposition. A rough sketch of such an evaluation follows.
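To make that evaluation step concrete, here is a hypothetical weighted-scoring sketch in Python. The criteria, weights and scores are illustrative placeholders; the point is simply that candidate tools are compared against your group's own requirements, ideally after a hands-on trial, before any licences are bought:

```python
# Weight: how much each criterion matters to *your* group (here on a 1-5 scale).
criteria_weights = {
    "supports our tech stack (web, client-server, GUI/CLI/API)": 5,
    "covers our browser/OS/database matrix":                     4,
    "learning curve for the current team":                       4,
    "licence and maintenance cost":                              3,
    "results of a trial run against our own product":            5,
}

# Scores from a hands-on evaluation, 1 (poor) to 5 (excellent), per candidate tool.
candidate_scores = {
    "Tool A":             [4, 3, 2, 2, 4],
    "Tool B":             [3, 4, 4, 4, 3],
    "Open-source option": [3, 3, 3, 5, 3],
}

weights = list(criteria_weights.values())
max_score = sum(w * 5 for w in weights)

for tool, scores in candidate_scores.items():
    total = sum(w * s for w, s in zip(weights, scores))
    print(f"{tool}: {total} / {max_score}")
```

The numbers matter far less than the exercise itself: writing down your requirements and scoring each tool against them after a trial run is what prevents the "honeymoon purchase" described above.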
***

Software Testing: To fail is to succeed

Here's an entry that goes back to the basics of software testing. If you are wondering what the title of this entry means, let us quickly revisit the definition of testing: "Testing is the process of executing a program with the intent of finding errors."

Testers can sometimes find themselves entangled in the definitions of "success" and "failure" in relation to testing. The question is, "When is testing considered successful?" To add to their woes, many stakeholders tend to refer to a successful test campaign as one in which the tests passed without finding any issues. Going by the definition above, though, testing is performed with the intent of unearthing issues. Which is to say that success in testing is when a test fails rather than when it passes: a test that has failed, and thereby found an issue, has actually succeeded.

As a basic analogy, suppose you take your car, which rattles and probably leaks some oil, to a mechanic for inspection. The mechanic runs a battery of tests that find no issues, and based on those results your car is certified to be in perfect condition. Would you call the result of such testing successful? You still have the problem, plus you have incurred the expense of testing the vehicle, which turned out to be fruitless. If the tests had found an issue, you would naturally consider your investment in testing worthwhile.
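The same idea in code, as a minimal pytest sketch (the discount function and its bug are made up for illustration): the test that "fails" here is the one doing its job, because the failure exposes a real defect.

```python
def apply_discount(price, percent):
    # Buggy implementation: subtracts the percentage value itself instead of
    # that percentage of the price (e.g. 200 - 5 instead of 200 - 5% of 200).
    return price - percent

def test_discount_is_proportional_to_price():
    # Fails against the implementation above (195 != 190), revealing the bug.
    # By the definition quoted earlier, this failing test has succeeded.
    assert apply_discount(200, 5) == 190
```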
***

QA, QC and Testing ...

In an earlier blog entry that talked about QA (Quality Assurance) and QC (Quality Control), I said that testing was a QC activity. Recently I received a message that asked, "If QC is the same as testing: In what way, by means of testing, are you controlling the quality?"

Software testing is one of the QC techniques. Other QC techniques include inspections, reviews and walk-throughs of work products such as requirements, designs, code and documentation.

QA aims to assure that quality work and quality deliverables will be built in before the work is completed. QA focuses on the ability of a process to produce or deliver a quality product or service. The intent of QC is to determine, after the work has been completed, whether quality work was done. For software, the QC function may involve checking the software against a set of requirements and verifying that it meets them. QC examines the results of a process to determine the degree to which they conform to expectations. The "control" in QC involves detecting problems with a product, or catching "poor quality", before shipping to customers. Looked at another way, when QC finds instances of "poor quality", it means the group has already spent time and resources producing a product with poor quality built in.

QC includes all the tactical activities necessary to produce a quality product or service, while QA looks at quality from a strategic perspective. QC focuses on identifying problems after they occur; QA is focused on preventing problems from occurring. Inputs from QC can feed into the QA process. For example, when QC finds recurring issues in an area, QA can look at improving the processes involved in producing that functionality or feature, to minimize the occurrence of similar issues going forward.

Regarding the debate on whether it is appropriate to call testing either QA or QC, I tend to agree with Michael Bolton's view of testing and testers: "We don't own quality; we're helping the people who are responsible for quality and the things that influence it. Quality assistance; that's what we do."
***

Software Testing & Boredom at work

[A copy of this article is available freely for download here.]

The motivation for this blog entry came from a tester who recently told me that he was bored and his job seemed monotonous.

Before jumping right in to suggest possible solutions, let's digress a bit and take a closer look at the concept of boredom (as if it weren't boring enough to talk about)!

According to psychoanalyst Otto Fenichel, boredom occurs, "When we must not do what we want to do, or must do what we do not want to do." Though the feelings of being bored by routine tasks are often transitory, longer-term boredom can set in from a lack of meaning or purpose in life.

Most people blame boredom on their circumstances, but psychologists say this emotion is highly subjective, rooted in aspects of consciousness, and that levels of boredom vary from person to person: some individuals are considerably more prone to boredom than others. Boredom is not a unified concept but may comprise several varieties, including the transient type that occurs while waiting in line and so-called existential boredom that accompanies a profound dissatisfaction with life. Boredom is linked to both emotional factors and personality traits. A person may feel bored when the individual
  • perceives that there is little value in doing the job
  • feels that what is being done is not challenging
  • feels there is not much to contribute
These are not the only reasons for boredom. Boredom could occur when a person feels that their skills or talents are not being used, their efforts are not valued or what they do is of little or no value. Sometimes people lose motivation, are closed to new ideas or only consider things that fit into their “comfort zone”. Some people have problems with attention that also play a role, and thus techniques that improve a person's ability to focus may diminish boredom.

Boredom can be a motivator too: it may be telling you that it is time to wake up and make some changes to what you are doing.

Can software testing ever be boring?

As someone who claims to be a software testing professional, I am tempted to say "never"! Testing is fun and challenging. The fact is, however, that testing can at times seem tedious and monotonous. Good software testers must be able and willing to accept a certain degree of repetitive activity as part of the job. Monotony becomes a problem when it becomes a regular feature of the tester's work. There may be times when your job requires performing tasks that seem boring; for example, being asked to execute the same set of tests manually on the same version of a product across multiple platforms to check compatibility. When boredom sets in, the normal human tendency is to short-circuit the testing activity: executing fewer tests than required, not paying close attention to the results, assuming that a test that passed in the previous run will pass now, not being open to any new issues that may be lurking around, and so on; all of which are detrimental to the quality of the product being tested.

A generally suggested solution to overcome boredom in manual test execution is to automate the tests. My advice is to not jump in immediately to automate tests. Manual testing has a lot of value. Of course, on the face of it test automation does make sense but there are considerations to be made before embarking on such an exercise. Test automation must be approached with the same rigor and discipline as your organization would approach a software development project for its customers.
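For the cross-platform example above, a minimal sketch of what such automation might look like with pytest is shown below. The platform list and the launch_app helper are hypothetical stand-ins; a real suite would drive the product through Selenium, Appium or a similar driver:

```python
import pytest

# Hypothetical platform/browser combinations that were being checked by hand.
PLATFORMS = ["windows-chrome", "windows-firefox", "macos-safari", "linux-chrome"]

def launch_app(platform):
    """Stand-in for starting the product on the given platform/browser."""
    return {"platform": platform, "title": "My Product - Login"}

@pytest.mark.parametrize("platform", PLATFORMS)
def test_login_page_loads(platform):
    app = launch_app(platform)
    # The same check runs once per platform instead of being repeated manually.
    assert app["title"].startswith("My Product")
```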

Also, having your tests automated does not mean that testers will no longer face tedium or boredom. Ask testers who have to write and regularly maintain test automation, and you will find that some element of fatigue and boredom creeps in even with automated testing.

So, do we resign ourselves to the fact that some part of a tester's job will always be boring? Is there a choice? What can you do if you're frequently bored?

"Don't blame your job, the traffic or your mindless chores," says Anna Gosline in a December 2007 article at Scientific American. Instead, look to yourself for options you may have to relieve boredom. Find a way to inject variety and stimulation into routine tasks.

We operate at our best when we are utilizing our strengths. Look for creative ways to alter your tasks or the way you approach your work to utilize your strengths.

If you feel that you do not have enough to do, talk with your manager and ask for more work, including more challenging responsibilities. A positive conversation with your manager could lead to an altered job description, which may alleviate any feeling of boredom you might have.

If you are part of a larger testing team, check if you can exchange some tasks with your colleagues. That way both of you get to work on something different.

Constantly ask yourself how you can add more value to what you are doing. Adding value will result in at least two things: one, you will feel much better and more satisfied, and two, the organization will view you as enthusiastic, interested and proactive. These are very good qualities to exhibit and may in turn lead to better responsibilities going forward.

The power of your thoughts cannot be emphasized enough. Begin to think differently about your work. Realize how your thoughts drive your actions.

When working continuously, take a break. It often helps to clear your mind and relieve some stress.

Be the employee who takes active interest in not just individual performance but also keeps the larger interest of the group or organization in mind. Look for opportunities to suggest changes and try something different.

It is a good idea to find a mentor. Someone who can coach and guide you will prove very useful. Do not think that coaches and mentors are just for junior employees; even CEOs need coaches. Everyone, at any level in the organization, can benefit from a good mentoring relationship.

Of course, a list of tips to alleviate boredom at work should have this one: learn more about your product and area of work. Do you already know your product well? If yes, set a goal to be the master of your product; strive to learn all that you possibly can. If no, start learning more. Being in software testing does not put you at a disadvantage in mastering your product. As a matter of fact, you have a head start, since you will have a greater breadth of knowledge across most areas of the product than your development counterparts may have. Strive to become the "go-to" person of your group, the one everyone turns to when they need information about the product. I have been, and have seen, such an expert many times, and the satisfaction you get is tremendous.

Finally, always aim to improvise. "Restructure the job in your own mind," says renowned psychology professor Mihaly Csikszentmihalyi. "Approach it with the discipline of an Olympic athlete. Develop strategies for doing it as fast and as elegantly as you can. Constantly strive to improve performance - doing it in the fewest moves, with the least effort, and with the least time between moves."
***


Software Testing, where conflict is normal

In our daily lives, conflict is generally viewed as undesirable, in both personal and professional relationships. In producing software, however, conflict and problem solving are key to delivering quality products. Organizations, for their part, must encourage constructive conflict while keeping a structure in place to manage it.

Let's face it: if you are the type who prefers a stress-free, non-confrontational role, then software testing is not for you. Software testing is not just about having the requisite technical competencies and analytical skills. Software Testers need a set of soft skills and a mental make-up that enables them to survive and thrive amidst conflict. If you like being out in front, dealing with conflict, and are not worried about how folks will react to the information you convey, then you could be on the way to being a software tester. Software Testers must not shy away from taking up an adversarial position … when required.

Software Testers report problems. Testers are the bearers of "bad news". The recipients of that news may react in myriad ways, which can be ego-deflating, sarcastic or plain rude. Software Testers need to walk a fine line between being overly zealous or judgmental about the issues they have observed on one hand, and going soft on the other, worrying whether they should invite conflict by even relaying information about the issue.

For new software testers, their initiation into the process of finding a defect and reporting it can be an experience to remember. Over time, as testers discover more defects of differing severities, they become more confident in their abilities while building a rapport with developers. Instead of viewing the inherent conflict as confrontational, testers begin to engage in meaningful discussion about the problems identified and how to address them. All software testers will experience some form of push-back from their development counterparts. This is not bad and can actually be very healthy. When either side (testing or development) simply agrees with what the other says without debating and clarifying the issue, it could be a sign that something is amiss. A certain degree of healthy conflict and debate helps in the thorough analysis of the issues being reported and in the development of better quality solutions.
***

Software Testers focus on Software Users

Knowing what the user wants is important in producing software.
Also, knowing how the user would use the software is important to producing quality software.

Software testers should focus on both: understanding user requirements as well as how a user might interact with and use the software. This understanding helps testers test scenarios that are closer to reality.

It is usually straightforward to come up with the positive tests. Even developers can do that to a large extent! When developers build software, they have an expectation of how users "should" use and interact with the product, which is tied to how they have designed the system.

Smart testers design their tests, especially their negative and error tests, keeping in mind how users actually behave. In the real world, users will make mistakes as part of the learning process, will not read the complete documentation or manuals for your product, will interact with the product in ways the developers do not expect, will provide inputs that are not the values your system expects, and will do various other things that can expose chinks in your product's armor. In fact, we all make "mistakes" as users of different software products; as we familiarize ourselves with or explore the product's features, we end up doing things that the developers may not have envisaged. Testers need to incorporate testing for errors, "nonsensical" actions, invalid inputs and the like to mimic real-world actions by users, as in the small sketch below.
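A minimal pytest sketch of such negative tests (parse_age is a hypothetical input-validation routine, used only to illustrate the kinds of invalid values real users end up supplying):

```python
import pytest

def parse_age(value):
    """Hypothetical validation routine for a user-supplied age field."""
    age = int(value)               # raises ValueError for "", "abc", "12.5", "  "
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

# Inputs a real user might plausibly type into an "age" field.
@pytest.mark.parametrize("bad_input", ["", "abc", "12.5", "-1", "999", "  "])
def test_rejects_invalid_age(bad_input):
    with pytest.raises(ValueError):
        parse_age(bad_input)
```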

This user focus translates into how defects are reported. Testers assign a severity value to each defect, reflecting the tester's estimate of the defect's impact on the user; severity also factors in the likelihood or frequency of users hitting the issue. To maintain this focus on users, testers need to be encouraged to think independently and not just go with what developers think testers must test. Developers come with a perspective of how the system is designed and their own baggage of expectations about how the software should be used. Exposing testers to customers, users or customer-facing groups can help them approach testing with the user's perspective in mind.
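As a toy illustration of factoring both user impact and likelihood into a single figure, here is a hypothetical sketch; the scales and the simple product are illustrative only, and most teams will tune or replace such a scheme with their own judgement:

```python
# Hypothetical ordinal scales; real projects define their own.
SEVERITY = {"cosmetic": 1, "minor": 2, "major": 3, "critical": 4}
LIKELIHOOD = {"rare": 1, "occasional": 2, "frequent": 3, "always": 4}

def defect_priority(severity, likelihood):
    """Higher score suggests fixing sooner: a simple impact x likelihood product."""
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

# A crash almost no user will hit vs. a minor glitch every user sees:
print(defect_priority("critical", "rare"))   # 4
print(defect_priority("minor", "always"))    # 8
```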

Software Testers as Generalists

A generalist may often seem to be the proverbial "Jack of all trades but master of none". In producing software, one of the significant differences between the software testing and software development functions is the presence or absence of generalists. Typically, a software developer is a specialist, expected to specialize in a particular area; the emphasis in software development is on depth of knowledge. There is little scope for a generalist there unless you move up the food chain into a senior managerial position. In contrast, the software testing function values generalists, and the emphasis is on breadth of knowledge rather than depth alone.

A generalist software tester is able to test and comment on a product or feature without needing to know its internal workings. Generalist software testers are often required to come up to speed quickly on a new product or feature and test it from an end-user perspective. This requires them to gain a broad understanding of the various aspects of the product in a short time. These testers bring in a different perspective from that of the software developers.

On the face of it, this emphasis on breadth over depth of knowledge may cause generalist software testers to be viewed as "ignorant". However, it is this very "ignorance" that helps them examine the application under test (AUT) the way a user would, without being too familiar with its internal workings or technological underpinnings. Generalist software testers do, however, need to be well aware of their customers' usage, domain and environment. This domain knowledge, coupled with a broad understanding of the product, helps generalist software testers add significant value to the organization.

Software Testing and Software Development

Two significant functions are involved in producing software; different as chalk and cheese, yet co-dependent, and they must work together to produce a quality deliverable.

Speaking of dependence, both functions depend very much on one another. On close examination, software testing may seem to have the greater degree of dependence on software development: from needing relevant documentation from development to kick-start test planning and test development, to having testability built into the software, to getting builds to test and timely fixes for issues, the testing function relies heavily on development. When testers hit test-stopper issues, testing is halted until those issues are addressed. But software development is just as dependent on the software testing function. Testing provides critical and valuable information to development (and stakeholders) about the software being developed. Software testing is by its very nature a support function, providing services to the overall organization, and a valuable output from testing is information about the software being produced, which enables informed decision making.

It is not uncommon to see testers face a time crunch more often than developers. This usually happens because software development normally takes place before formal and extensive testing. Even when testing is distributed across the phases of the software development life cycle, the more rigorous and formal testing activities are generally slated to occur post feature-complete, i.e. once development has finished implementing the planned features. When development slips its estimated schedule on a project with a fixed release date, the time available for downstream activities such as testing usually gets cut back. This tends to support the view that developers have more flexibility (time-wise) than testers. The other side of this is that testers need better contingency planning to handle changes to schedules and staffing requirements.

The general view is that software development is a constructive function, leading to the development of a product or feature, while software testing is often viewed as a destructive function that attempts in various ways to break what has been developed. Yet these opposing functions and viewpoints are both necessary to deliver a quality product, much like the Yin and Yang of Chinese philosophy.
***