Corporate fire fighting

What does it mean to you?

A sense of being busy, on the run, panic, stress, a rush of adrenaline, hardly any time to stop and think, and of course, heroic efforts to douse fires that seem to be springing up everywhere. If any or all of these sound familiar, you aren't alone. Many organizations have integrated fire fighting into their very DNA, so much so that you could be considered a slacker if you appear calm and unruffled. Lack of time is often cited as a reason for short-circuiting adequate planning and risk management. There is never enough time to do the job right, they say; we'll somehow find the time to fix issues and patch things up later. The constant busyness and worry shift the focus away from preventing fires and towards combating them.

When fighting fires is a regular part of work, organizations often tend to reward and recognize the heroes, the ones who are adept at dousing these fires. It is useful to remember that what you reward is what you will get more of. Also, bear in mind that in a corporate setting, some of the best firefighters could be the best arsonists too. Regularly rewarding and recognizing the heroes who fight fires, rather than those who have not caused any fires, can quickly lead to a flaming inferno that's hard to manage.

What? This doesn't happen? Look at your group's reward structure. Whom do you recognize and reward: the individual who works all night to meet a deadline while producing average quality code, the individual who stays up late to fix issues in code they themselves have produced, or the individual who delivers solid, adequately tested output within the given time? It is easy to miss a hard worker who delivers without much fanfare while doing the right thing.

When faced with a fire, take a step back to see the big picture. It is easy to miss the forest for the trees here. Divide your resources; it is not advisable to pull in all your resources to fight a fire unless the situation truly demands it. Some project staff need to be insulated from firefighting so they continue to deliver on critical areas. Some fires may not really need to be doused. Evaluate the consequence of letting a fire burn. What is the opportunity cost of involving your resources to fight the fire versus letting it burn? Whom does the fire impact most, and how important is it to them? Such and related questions should help you create a strategy to fight your fire.

While it is strongly recommended that we prevent fires, there will be emergency situations. What we must do is perform a thorough post-mortem of each fire: analyze its cause, the factors contributing to the fire, the cost of the fire in terms of both the damage and the effort involved in fighting it, and the steps to prevent such a fire from happening again. An often cited requirement for preventing fires is more time: we are very busy fighting fires now; give us more time and we will work on preventing fires. Frankly, that most often does not work. Work and busyness have a tendency to expand to fill any available time.

The Energy Bus

I recently read the book, “The Energy Bus” by Jon Gordon. It is an interesting read and offers ten rules for infusing your life, work and team with positive energy. The book is in the style of a fable that takes readers on an inspiring and insightful ride while revealing ten rules for life and work.

It's Monday morning and George walks out of his home to his car and finds a flat tire. A great way to start the week, but this is probably the least of his problems. His home and family life are in shambles, while his team at work is disillusioned and looking set to fail. With a big new product launch coming up in just two weeks, George has to find a way to pull it off or risk losing both his job and his marriage. Trying to fix the flat tire reveals other problems with the car requiring additional repairs, which literally force George to take the bus to work. Here, he meets a special bus driver and a diverse mix of co-passengers who, over the course of two weeks, share the ten rules for his life and work. In the process, they help George turn his work and life around from failure and destruction.

As the book says, everyone faces challenges. And every person, organization, company and team has to overcome negativity and adversity to define themselves and create their success. No one goes through life untested and the answer to these tests is positive energy; the kind of positive energy that consists of the vision, trust, optimism, enthusiasm, purpose and spirit that defines great leaders and their dreams. The book provides an actionable plan for overcoming life and work obstacles and bringing out the best in yourself and your team.

The 10 Rules to Fuel Your Life, Work, and Team with Positive Energy

1.    You’re the Driver of the Bus.
2.    Desire, Vision and Focus move your bus in the right direction.
3.    Fuel your Ride with Positive Energy.
4.    Invite People on Your Bus and Share your Vision for the Road Ahead.
5.    Don’t Waste Your Energy on those who don’t get on your Bus.
6.    Post a Sign that says “No Energy Vampires Allowed” on your Bus.
7.    Enthusiasm attracts more Passengers and Energizes them during the Ride.
8.    Love your Passengers.
9.    Drive with Purpose.
10.  Have Fun and Enjoy the Ride.

Testing as part of Development activity

Testing should be a part of the development activity and not be relegated to a separate function performed after development. Here are a few “things to do” to enable this.

Involve testers from the start

It makes a lot of business sense to involve testers right from the requirements phase. Testers can better understand the product to be developed, test the requirements, and help clarify them. As is common knowledge, it is least expensive to fix a defect at this early phase; at a later stage, such as post-implementation testing or beyond, the cost of fixing defects escalates drastically. Testers can begin working on test plans while also checking the testability of requirements.

Require Developer Testing

The minimum requirement from development should be to perform unit testing. Testing groups should ideally receive a report of the tests run and their results, along with information on any open issues or workarounds, before accepting a build for more formal testing. Unit testing helps catch issues much sooner, with a shorter turnaround time involved in addressing them. A “stable” build also enables testers to be more effective and reduces time spent on test / fix / test cycles.

Other useful practices include test-driven development, which is part of the agile family of practices. Having developers run a set of integration tests, with their module integrated into the larger application, can also be quite useful in identifying the more common and basic issues. These integration tests need not be extensive or complex; a basic set of tests will do. Developers often tend to stop with unit testing their module or area of work; however, on integration with the larger application, new issues tend to show up, so running integration tests in addition to the module level tests can be very useful.

The idea is to avoid delivering a “broken” or “poor quality” build to the testing group. Having testers blocked on basic features wastes a lot of time and effort and involves a lot of back-and-forth interaction to communicate, analyze, fix, check in, re-build and re-test.
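As a rough illustration of the minimum bar described above, here is a sketch of a developer-level unit test run whose results could accompany a build handed to the testing group. `apply_discount` is a made-up example function, not from any real codebase.

```python
# A hypothetical unit under development: a small pricing helper.
def apply_discount(price, percent):
    """Return price reduced by percent, rounded to 2 decimal places."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)


def run_unit_tests():
    """Run the developer's unit checks and return a simple pass/fail report,
    the kind of summary a testing group could ask for before accepting a build."""
    results = {}
    results["typical_discount"] = apply_discount(200.0, 10) == 180.0
    results["zero_discount"] = apply_discount(99.99, 0) == 99.99
    try:
        apply_discount(100.0, 150)
        results["rejects_bad_percent"] = False  # should not get here
    except ValueError:
        results["rejects_bad_percent"] = True
    return results


report = run_unit_tests()
assert all(report.values()), f"unit tests failed: {report}"
```

The point is not the framework (a real team would likely use unittest or pytest); it is that a build arrives with evidence of which tests ran and what they found.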

Leverage test automation

Test automation is not just for testers. In fact, developers can and do leverage test automation to test their work. Having testers and developers work together on a common automation framework to develop and run tests is a good idea. Tools should be chosen that support such a scenario and do not involve a steep learning curve to pick up a new language required by the tool. Tests may be added incrementally: developers can add new tests as they develop new code, while testers can use the framework to build more complex tests. A common framework eases communication and helps both groups benefit from the synergies of working together. Other good practices include building automated test suites (generally regression) and running them against regular builds, often nightly.
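One way such a shared framework can look, sketched very loosely: developers contribute small, named building blocks, and testers compose them into higher-level scenario tests without needing to learn a separate tool language. Everything here (the registry, the action names, the session dict) is illustrative, not any real framework's API.

```python
# Hypothetical shared automation framework: developers register reusable
# actions; testers compose them into scenario tests.
ACTIONS = {}

def action(name):
    """Decorator developers use to expose a step to the shared framework."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

# Developer-contributed building blocks (stand-ins for real app drivers).
@action("login")
def login(session, user):
    session["user"] = user
    return session

@action("add_to_cart")
def add_to_cart(session, item):
    session.setdefault("cart", []).append(item)
    return session

# Tester-composed scenario: run named steps in order against a fresh session.
def run_scenario(steps):
    session = {}
    for name, arg in steps:
        session = ACTIONS[name](session, arg)
    return session

result = run_scenario([("login", "alice"), ("add_to_cart", "book")])
assert result["user"] == "alice" and result["cart"] == ["book"]
```

Because both groups write against the same registry, a tester's scenario breaks loudly when a developer renames or changes a step, which is exactly the kind of early feedback a common framework is meant to provide.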
If you are not already using one, you might want to consider a continuous build / integration system and tying your automated regression suite into it. When a build is generated, you can have a set of automated tests run against it, mark the build according to the results, observe the stability of builds, analyze test failures and be notified of any failures. We use Hudson at my present organization.
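The "run tests, then mark the build" step can be sketched as a small post-build hook. This is only a minimal illustration of the idea, not how Hudson itself works internally; the directory layout, marker file names and the regression command are all assumptions you would adapt to your own CI setup.

```python
# Hypothetical post-build hook: run the regression suite against a fresh
# build and record a status marker plus a report the CI server can surface.
import pathlib
import subprocess

def mark_build(build_dir, test_cmd):
    """Run test_cmd, write STABLE/UNSTABLE into build-status, and save output."""
    build = pathlib.Path(build_dir)
    build.mkdir(parents=True, exist_ok=True)
    result = subprocess.run(test_cmd, capture_output=True, text=True)
    status = "STABLE" if result.returncode == 0 else "UNSTABLE"
    (build / "build-status").write_text(status)
    (build / "regression-report.txt").write_text(result.stdout + result.stderr)
    return status

# Example (illustrative): run a unittest-style regression suite.
# mark_build("build", ["python", "-m", "unittest", "discover", "-s", "tests"])
```

A CI server can then watch the marker (or simply the test command's exit code) to flag the build, track stability over time and send out failure notifications.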
All of the above relates to a point I made in an earlier post: software quality is the responsibility of everyone involved in producing the software. It is not confined to just the Quality / Testing team. Quality must be built into the product, and the Development team (as well as the Testing team) has an important role to play in building a quality product.

Testers and Developers

It is interesting to observe testers, especially folks new to testing, consider developers as the "other" group, the one testers are up against. Some credit for such a notion must go to the organizations themselves for fostering this silo mentality and antagonistic behavior instead of channeling the efforts of both groups towards the common goal of releasing a quality product.

While newer methods such as Agile development require testers and developers to be partners and work together as one team, work needs to be done to make needed cultural changes and facilitate smooth cross-functional interactions and team work.

Looking at the relationship between testers and developers, one could probably say that testers exist because of what developers do! There are many more such statements about this relationship. Here's one more: "If debugging is the process of removing software bugs, then programming must be the process of putting them in." And yet another helpful quote from a developer: "My software never has bugs. It just develops random features."

Producing and delivering a quality software product calls for all involved to work together towards a common goal while realizing that quality cannot be an afterthought or confined to a QA or QC team; quality must be the responsibility of both developers as well as testers and quality must be built into the product.

Thoughts on agile development, continued ...

Continuing on the subject of Agile development -
Agile makes the assumption that neither customers nor software producers have a full understanding of what needs to be built. Contrast this with the traditional approach, where the aim is to have the requirements defined and signed off at the beginning of the software development lifecycle.
Agile projects involve learning throughout the project, leading to changes in requirements and in how the system gets built. While traditional models are about "controlling" and "mitigating" change, agile introduces a different paradigm of accepting and accommodating change. Traditional models involve extensive planning and regular monitoring to minimize deviations; any deviations are viewed as undesirable, and attempts are made to bring actual observed results into conformance with the plan. In an agile model, deviations from the plan are treated as sources of additional information that help modify the plan to conform to reality.

Thoughts on Agile Software Development

Agile development presents an alternative to document-driven and rigorous process oriented software development methods. Agile, contrary to what is often believed, values planning, documentation, processes and tools. Before you wonder if this isn't a contradiction, let me clarify.
While agile does value all of the items stated above, an organization that practices agile development must be able to state what it values more. When push comes to shove, something must give, and the organization needs to be clear on what is important and what gives. In an agile context, greater value is placed on working software.
While Agile practices may be applied to a wide range of projects, they are best suited to complex projects involving risk, uncertainty and change.

Organizations that intend to adopt agile development must realize that the benefits from agile, such as increased productivity, shorter time to release, better quality, ability to embrace change, accrue from working differently and not just by working quicker. So, unless your organization is willing to change the way it works, going agile may not prove to be all that it's supposed to be.

Training Camp: What the Best Do Better Than Everyone Else

Just finished reading the book “Training Camp: What the Best Do Better Than Everyone Else” by Jon Gordon. It is an interesting read and here's a brief summary.

This book looks at what makes someone great in their field of work. The best in any field - sales, sports, business, etc. share a set of similar characteristics. There are things that the best do that others do not and things that they do better than everyone else. There is a way that the best of the best approach their life and work and craft that makes them stand out from the rest.

The book, in the words of the author, tries to inspire the reader to strive to be your best and bring out the best in your team - be it at work or elsewhere. The book is in the form of an engaging story of an un-drafted rookie footballer, Martin Jones, trying to make it to the NFL. Martin has spent his entire life proving to critics that a small guy with a big heart can succeed against the odds. In his first pre-season game, Martin stuns everyone with his performance and gains attention. However, during the game, Martin sprains his ankle pretty badly and is out of action. When he thinks that his dream of making it to the NFL is lost, he meets a special coach who shares eleven life changing lessons that could make him the best of the best. It is an inspiring story filled with nuggets of wisdom and insights on what it takes to excel as individuals and teams.

Irrespective of the field you are in, these eleven lessons have wide applicability.

1. The Best know what they truly want
2. The Best not only know what they want, but they want it more
3. The Best are always striving to get better
4. The Best don't do anything different. They just do the ordinary things better
5. The Best zoom‐focus
6. The Best are mentally tougher
7. The Best overcome their fear
8. The Best seize the moment
9. The Best tap into a power greater than themselves
10. The Best leave a legacy
11. The Best make everyone around them Better

The book has several interesting insights to offer. Some, such as getting out of your comfort zone, push folks to overcome their sense of inertia. If you are always striving to be better, then you are growing, which in turn means that you are not comfortable with the status quo. To be the best, you have to be willing to move out of your comfort zone and embrace discomfort as part of the process of growth. The book also tries to break a popular myth about overnight success. Many people believe that star athletes, top performers and others were born that way or simply stumbled on their success overnight. The best tend to make what they do look so easy and effortless that people either think anyone can do it, or that these are the few chosen ones who alone can do it. People see the outcome and not the countless hours of toil, dedication, practice and preparation that lead to greatness. Do not settle for mediocrity, but strive for excellence every day.

Readers are exhorted to not focus on the past, nor look to the future. Focus on the "now". Success, rewards, fame are merely by-products for those who are able to seize the moment. Ironically, to enjoy success you must not focus on it. Instead, you must focus on the process that produces success. While striving to be the best, you must not ask what your greatness means to you but what impact it makes on others. The success you achieve now is temporary, but the legacy you leave behind is eternal.

Greatness, ultimately is a life mission and being the best really is not about being better than anyone else but about striving to be the best you can be and bringing out the best in others.

Google Wave

I've been wanting to play with Google Wave for a while but haven't had much time. I took some time off today to create waves and explore it a bit.

Initial observations of some features I thought were cool: playback of conversations and changes to the wave, inline replying, easy drag and drop of images into a wave (presently requires Gears), creating new waves derived from existing waves, seemingly real-time instant messaging, collaborative authoring and editing that makes wikis seem dated, wave links, and some interesting extensions.

I hope to spend more time waving and exploring in the days ahead.

Testing in the Agile World (Final part - 4)

Testing in the agile world needs to adapt to change. The concurrency and shorter cycles introduced by agile development require not just testers but also their tools and processes to be adaptive. Testers need to have a big-picture view and keep the customer's perspective in mind at all times. The tester mindset has to move from being “custodians” or even “gatekeepers” of quality to being participants in a larger group involved in defining and maintaining quality. Agile values the concept of a “whole team”, with everyone on the team being responsible for quality. Here, developers and testers are not pitted against each other as in some other models; on the contrary, these functions work together as partners to deliver quality artefacts.
The way in which tests get developed also needs to change: from developing tests in isolation, based on documents such as requirements specifications and designs, to developing tests alongside code development. Asking relevant questions, gaining quick understanding, refining tests on the go, being able to quickly automate tests, and testing partial implementations rather than waiting for a completed artefact or feature are all skills that are very much needed in the agile world. Testers do not necessarily get involved only when a feature is complete; being able to pick up and test unfinished, in-process pieces while providing quick and useful feedback is important.

Testers should regularly communicate closely with customers or their representatives, both to apprise them of the status of testing and to obtain valuable feedback into the test activity.

Traditional models sometimes had formal QA teams that recommended processes and practices in a preventive way, while the QC or testing group tested the finished product. In the agile world, teams are generally not keen on following extensively laid down processes, although they do place significant emphasis on testing and its importance in product development.

In addition to the above, here are some more pointers for testers to add greater value in an agile world.
  • Performing incremental tests on work products as they are being produced. This does not in any way mean that testers perform unit testing, nor that testers duplicate the unit test efforts of developers. Testers should bring their expertise into play to develop and execute business-focused tests that could be exploratory in nature and augment the unit test efforts
  • Testers in the agile world need to be familiar with the tools of the trade. Being able to go beyond their areas of specialization is expected. Testers should be comfortable with the development environment and handle tasks such as checking out source code from the repository, use the version control system, build system, use an IDE, know the language and be familiar with technologies used in developing the product, be aware of the frameworks used and so on. Testers on agile teams cannot afford to remain detached from the tools and technologies involved in producing software
  • Testers need to be able to work from limited requirements specifications and communicate effectively with product owners / customers to better understand the requirements and clarify assumptions. The ability to integrate and work well with a cross-functional team is a required skill in the agile world
  • Testing in the agile world is not merely about doing exploratory manual tests. Testers perform various test types that would be performed in a traditional model, albeit based on the importance and requirement as assessed by the needs of the customer. Also, test automation is an important activity that happens in addition to manual test efforts
  • Testers should focus on tests that tend to integrate different features and operations. Maintaining a solutions approach to testing helps identify any issues which may not be captured by way of feature or unit focused test efforts
Quality in an agile world is the responsibility of the entire team. Agile testers need to learn and apply agile principles to enable the whole team to produce a quality product. Agile testing requires testers to be pro-active, creative, willing to take up different tasks, quick to learn and adaptable to change; in short, to be Agile!
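The "solutions approach" from the pointers above, tests that cut across features rather than staying inside one unit, can be sketched as follows. The Inventory and OrderService classes are hypothetical stand-ins for two features a tester might exercise together; no real system is implied.

```python
# Hypothetical pair of features that an integration-focused tester test
# would exercise together, rather than each in isolation.
class Inventory:
    def __init__(self, stock):
        self.stock = dict(stock)

    def reserve(self, item, qty):
        if self.stock.get(item, 0) < qty:
            raise RuntimeError(f"insufficient stock for {item}")
        self.stock[item] -= qty


class OrderService:
    def __init__(self, inventory):
        self.inventory = inventory

    def place_order(self, item, qty, unit_price):
        self.inventory.reserve(item, qty)  # cross-feature interaction
        return {"item": item, "qty": qty, "total": qty * unit_price}


def check_order_flow():
    """A business-focused test: placing an order must both total correctly
    and reduce stock, a defect that unit tests of either class could miss."""
    inv = Inventory({"book": 5})
    order = OrderService(inv).place_order("book", 2, 10.0)
    assert order["total"] == 20.0
    assert inv.stock["book"] == 3


check_order_flow()
```

Unit tests for Inventory and OrderService separately would each pass even if, say, the order service forgot to reserve stock; only a test spanning both features catches that class of issue.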

Testing in the Agile World (Part 3)

Testing in agile is not something that happens at “the end” of the development or implementation phase; testing happens as development happens. Incremental testing is the norm: each increment of functionality is tested as it is developed, and issues are reported immediately. The short, quick feedback loop helps deliver better quality code and avoids the higher cost of fixing defects much later. Also, the ability to have a working, demonstrable piece of software at the end of each sprint is a huge benefit: customers and product owners can quickly review and “play around” with the developed artefact and provide feedback. This helps ensure that the agile team is developing what the customer needs and also keeps the customer apprised of the team's progress in real time.

An important aspect of agile development is the ability to “release” after each iteration or sprint: a working copy of the product is expected to be ready for “release” at the end of each iteration. This ability to release frequently is important. The team may not wish to actually release after every iteration, but having the capability and working towards delivering working software by the end of each iteration is key. Customers may choose to pick up a deliverable after an iteration (which could span a few days or a few weeks) and either verify that development is on track or even deploy in increments. The customer gets regular visibility into the development, and the team can incorporate feedback quickly rather than wait for the complete product to be developed and then released.

Customers get a better say in how development happens. The product owner can make decisions to stop further development in some areas or suggest changes where needed. The focus on getting working software out at the end of each iteration also brings the various functions together as a close-knit team: everything from development to installation, documentation and testing needs to be taken care of, rather than leaving any item for later in the release. Issues are identified sooner and, as stated earlier, the short feedback loop helps incrementally deliver better quality software faster.

Agile development also involves less documentation than traditional models of development. Agile methods focus on face-to-face interactions and meetings to keep the communication channels open and clear. In the Scrum methodology, which we follow, daily stand-up meetings are conducted where all members of the agile team share their status updates, plans and impediments encountered. In addition, planning and retrospective meetings are held at the start and end of each sprint. Testers work alongside their counterpart developers, regularly testing every testable bit of work product and providing feedback to ensure a better quality feature goes in. Communication on agile teams tends to be quick and direct, with agile methodologies favouring co-located teams and human interactions. It is far more effective and easier to pop in to your neighbouring cubicle and get something clarified than to start an email thread and await responses.

Agile development requires a customer representative to be part of the team. This is better than having to rely on a requirements document: you can always ask the representative for clarification directly and get first-hand feedback. Requirements are prioritized based on what is most important to the customer and listed in what is normally known as the product backlog. The agile team goes through the backlog in order and picks up the items that it can commit to delivering within the iteration.
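The backlog-picking step described above can be sketched as a tiny planning routine: walk the prioritized list in order and commit to items until the team's capacity is used up. The item names, the use of story points and the skip-if-it-doesn't-fit rule are all illustrative assumptions; real teams negotiate this in the planning meeting rather than compute it.

```python
# Hedged sketch of sprint planning: commit to prioritized backlog items
# until the team's capacity (in points) runs out.
def plan_sprint(backlog, capacity):
    """backlog: list of (name, points) tuples, highest priority first.
    Returns the names the team commits to for this sprint."""
    committed = []
    remaining = capacity
    for name, points in backlog:
        if points <= remaining:  # item fits in what's left of the sprint
            committed.append(name)
            remaining -= points
    return committed


backlog = [("checkout flow", 5), ("search filters", 8), ("audit log", 3)]
# With capacity 10: "checkout flow" fits, "search filters" (8 > 5 left) is
# skipped for a later sprint, and "audit log" still fits.
assert plan_sprint(backlog, 10) == ["checkout flow", "audit log"]
```

The essential property the sketch captures is that priority order drives selection, and anything not picked simply stays at the top of the backlog for the next iteration.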

Testing in the Agile World (Part 2)

Continuing from the previous post


In such a scenario, the role of individuals who consider themselves dedicated testers may be questioned. When agile development already emphasizes practices such as test-first development and developers writing unit tests, is there a need for dedicated or specialist testers on agile teams? The answer is very much a resounding Yes!

Testers bring to the table a range of special skill sets and abilities that help enhance the quality of the work product. Testers can perform testing that goes beyond the unit and component level tests which developers / programmers may perform. Like their counterparts in traditional models, testers on agile teams can add a lot of value by performing tests from a customer / end user perspective and by developing and executing a variety of test types such as performance, functional, security, interoperability, compatibility and so on.

In our group at my current organization, we follow an agile development method called Scrum. In brief, product development happens in short iterations called “sprints”, which may be of a few weeks' duration (generally up to ~4 weeks). Members from the different functional groups come together and form a single team that works on delivering the features the team commits to. The list of features, enhancements and defects to be addressed is put up in a prioritized list known as the backlog. The sprint team picks up tasks from this list that members think they can accomplish during the sprint. The Scrum process is co-ordinated and facilitated by an individual who dons the role of “scrum master”. Daily stand-up meetings happen where members share information on their achievements since the last meeting, any obstacles faced and plans for the next day. Reports such as burn-down charts and information captured during the meetings help introduce a greater degree of transparency into development activities compared to the traditional models that were followed. Testers are paired with developers: normally a tester works together with a developer on a particular area, working in tandem to produce the product.

Testing in the Agile World (Part 1)

In this and the next few posts, I shall share content from my paper on Agile testing, which was recently published by the Quality Assurance Institute. We start with a look at the concept of Agile development and progress towards testing in the agile context, with specific emphasis on the Scrum model.

Agile Software Development refers to a philosophy, a mind-set based on iterative development. Agile methodologies support the agile values based on this philosophy. The Agile Manifesto lists the following agile values:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

The manifesto goes on to state that while there is value in the items on the right, agile development values items on the left more.

Greater transparency into how software is produced, better predictability, faster time-to-market, frequent releases, increased productivity, higher levels of quality ... all this and more lure organizations that have been following “traditional” development models to “try out” agile methodologies.

Traditional methods of development generally follow models that define phases of activity: release planning, requirements definition, sign-off, design, implementation, testing and so on. Work products are passed on from one phase to the next. Experience shows that these models tend to involve fairly long release cycles, thereby delaying time-to-market, and to be quite inflexible to change through the development lifecycle. The general aim is to deliver all functionality captured in the initial requirements specification as a completed, finished product. Testers in this case prepare by creating test plans based on the requirements and other available documentation, and then await the finished work product after implementation is complete to begin formal testing.

Agile methods, however, introduce a paradigm shift in how products are produced. Development happens in short cycles of a few weeks' duration; at the end of a cycle a working product or artefact is ready and available to be shown to the customer, and even shipped if need be. The product owner / customer sets the quality criteria for each iteration or sprint. This definition of quality signifies what is important from the customer's viewpoint rather than what a formal testing team may choose to define. For example, issues which testers might think important to address immediately may actually be deferred to a later iteration if the customer does not consider them a priority. Agile development does not usually encourage specialized roles such as tester or developer. Members from various groups are drawn together to form an agile team, which could comprise representatives from development, testing, technical writing, internationalization, etc. as required for producing the product.

Coming up ...

In the next and subsequent few blog entries, I intend to post from my paper published by QAI on the subject of "Testing in the Agile World". We will look at the concept of Agile development, how testers fit in, and the qualities testers need to succeed in this interesting world.

Thanks for reading.

- John

De-centralized / distributed test teams

Here, testers are organized by line of business or product. Having testers assigned to specific products enables them to develop subject matter expertise to a level that is not usually possible in a centralized test team. Test coverage and the effectiveness of testing are enhanced, since testers have greater knowledge of the product and use that knowledge to test wider and more complex scenarios. The interaction between development and testing also tends to be better: the two groups mostly work together closely and interact often throughout the product life cycle. Processes for engagement and interaction between functional groups are fairly well set and operate more smoothly than in a centralized model.

Testers normally report into the development organization or to the application owner. Though considered peers of development at a product / project level, testers are often viewed as part of the development engineering team at a higher level.

Resource constraints can more easily affect test teams in such a model, since there isn't a pool from which resources may be drawn when needed. Managing resources through demand highs and lows can be challenging. Also, the processes, tools and techniques followed tend to be local to individual test teams, with little consistency across product groups. Some element of redundancy exists, and issues can crop up when trying to integrate different products. Owing to the generally smaller size of de-centralized teams, the opportunities to specialize may also be fewer.

Having looked at both the centralized and de-centralized approaches, organizations may choose either of these or even a mixed approach: centralizing areas that can benefit from a central group while de-centralizing areas that work best as part of the product group. Irrespective of the approach taken, the testing team needs to be allowed to function as independently as possible, be responsible for important decisions affecting testing, and receive sufficient senior management support.

Centralized Software Testing - some drawbacks

Continuing from our earlier post on centralized testing, let's look at some drawbacks of this type of test group organization.

Organizing testing into a centralized group tends to promote silo behavior and creates barriers to effective collaboration between cross-functional teams, mainly with development. Testing tends to happen "later". Development creates a piece of code and often "throws poor quality code" over the wall to testing. Rather than working together closely as partners, development and testing often tend to be pitted against each other. Formal boundaries between functional groups lead to longer defect detection and fixing cycles, affecting schedules.

The nature of shared resource pools can hinder development of subject matter expertise. Since resource allocation happens on an as-needed basis, there is not much time for testers to really specialize in a particular domain. Transient test teams could also mean that building and maintaining robust regression test suites for the various projects is a challenge.

QAI Software Testing Certifications -CMST, CSTE and CAST

While attending the recently concluded "9th Annual International Software Testing Conference in India 2009" organized by QAI, I came across a "new" manager-level certification program called Certified Manager of Software Testing (CMST). I thought I'd share it with readers of this blog while summarizing the software testing certifications currently offered by the Quality Assurance Institute.

QAI presently has three certification programs for Software Testers.

1.    CAST (Certified Associate in Software Testing)
  • foundation level

  • targeted at folks who are relatively new to testing

  • costs approximately US$ 200

  • exam format: 1.5 hours examination, two parts of 45 minutes each, multiple choice questions

2.    CSTE (Certified Software Tester)
  • practitioner level

  • targeted at experienced software testers, test leads and test architects

  • costs approximately US$ 350

  • exam format: 4 hours examination, 2 subjective parts of 75 minutes each, 2 objective parts of 60 minutes each

3.    CMST (Certified Manager of Software Testing)

  • managerial level

  • targets Software Test Managers and Software Project Managers – both folks who are presently working at these levels or expected to work at the management level

  • costs approximately US$ 600

  • exam format: Written documentation supporting real-world experience in Software Testing, four part subjective examination

Note that the prices are indicative and can vary. The CMST exam is presently being offered at an introductory price of US$ 450. All programs offer a PDF version of the CBoK. For more and updated information on these programs, eligibility requirements, etc., refer to the Software Certifications web site: http://softwarecertifications.org

Centralized Software Testing

A common question that arises amongst testing professionals is around the ideal way to organize test teams: should organizations have centralized or de-centralized test groups? In this and the next set of posts, let us look briefly at each of these types of test group organization, along with some of their benefits and drawbacks.

Today's post will look at "Centralized test groups" and their benefits.

Centralized test groups comprise a pool of resources that are shared across applications and projects. Each tester may work on one or more projects at a time. While developing a centralized testing group, it is important to assemble testers with a diverse set of technical and other skills. Testers may get assigned to projects on a part-time or full-time basis depending on the project requirements. As the need for testers increases, more specialized testers may be assigned to support the project. Centralized test groups have a defined test leadership hierarchy in place.

Advantages
  • Process consistency - deployment of standard testing methodology & processes helps improve quality and efficiency of testing
  • Benefits from economies of scale and centralized spending
  • Hardware and software licensing consolidation helps reduce costs
  • Centralized groups can more easily & consistently implement practices such as CMM, continuous improvement and common metrics
  • Allows better focus and specialization in test processes and tools
  • Affords flexibility in resource utilization, with on-demand allocation of resources to projects ensuring better utilization
  • Better sharing & leveraging of best practices
  • Better career paths for testers to build a “career” in testing. More opportunities for testers to gain expertise and skills on a wide range of applications, tools and techniques
  • Testers have more opportunities for specialization
  • Testers have better mentoring opportunities from more senior testers in the centralized organization
  • Better objectivity in testing. Since testers do not report into a development organization or to project managers, they remain insulated from outside pressures or influence. Testing can better position itself as a peer to development in the organization's reporting hierarchy

Coming up ...

I've been away attending the "9th Annual International Software Testing Conference in India (STC 2009)" organized by the Quality Assurance Institute (QAI).

A paper on agile testing that I put together has been published by QAI. I shall post the contents of that paper in subsequent entries on this blog.

Meanwhile, while the last blog post talked about business strategy ("Blue Ocean Strategy"), the next post is on centralized vs. de-centralized testing. This is a common question that pops up: what are the benefits of either approach (and some of their drawbacks)? I hope to have this posted tomorrow.

Thanks for reading,

- John

Blue Ocean Strategy

I recently participated in reviewing the book "Blue Ocean Strategy" and found it to be a pretty interesting exercise. The book is about business strategy and is written by W. Chan Kim and Renée Mauborgne of the INSEAD business school. Here's a brief summary.

The book classifies the business universe as consisting of two distinct kinds of spaces: red and blue oceans. Red oceans represent all the industries in existence today, the known market space. In red oceans, industry boundaries are defined and accepted, and the competitive rules of the game are well understood. Here, companies try to outperform their rivals in order to grab a greater share of existing demand. As the space gets more and more crowded, prospects for profits and growth are reduced. Products turn into commodities, and increasing competition turns the water bloody.

Blue oceans denote all the industries "not" in existence today, the unknown market space, untainted by competition. In blue oceans, demand is created rather than fought over. There is ample opportunity for growth that is both profitable and rapid. There are two ways to create blue oceans. In a few cases, companies can give rise to completely new industries, as eBay did with the online auction industry. But in most cases, a blue ocean is created from within a red ocean when a company alters the boundaries of an existing industry.

Blue oceans differ from traditional models, which are focused on competing in the existing market space. While the term "blue oceans" may be new, the concept has always been around. Look back over the past century and ask how many of today's industries were then unknown. Many of today's fundamental industries, such as automobiles, aviation, petrochemicals and pharmaceuticals, were not just unheard of; people then would not have even thought these industries were possible. If a hundred years seems long, look back just a few decades and ask the same question. You are sure to find several new industries, such as mobile phones, biotechnology, satellite television and internet start-ups, that were not around then. Now, look ahead and ask yourself: how many industries that are unknown today will exist a decade or two from now? If the past is any indicator of the future, the answer is obvious: we are sure to have many new industries that we are not aware of now.

Organizations have a tremendous capacity to create new industries and recreate existing ones. Various factors, such as rapid technological advances, enhanced industrial productivity, falling trade barriers between nations and regions, and the ready global availability of information on products and prices, are contributing to the contraction of niche markets and monopolies. Prospects in many established market spaces (red oceans) are steadily declining. This situation has sped up the commoditization of products and services, led to price wars and reduced profit margins. With commoditization, most brands across categories tend to become more and more alike, leading consumers to increasingly base purchase decisions on price. In overcrowded market spaces, differentiation between brands becomes harder.

So, why do organizations still focus their strategies so heavily on red oceans? A possible answer lies in the roots of corporate strategy, which seems heavily influenced by military strategy. References to officers, headquarters, troops, the front line, etc. are borrowed from the military. Strategy in the military context is all about red ocean competition: fighting an opponent and taking over the battlefield or limited territory. Blue ocean strategy, however, is about doing business where there is no competition. It is about creating new land, not dividing existing land. A red ocean focus implies accepting the limitations of war: limited land and the requirement to beat an enemy in order to succeed.

Blue ocean strategy rejects a fundamental principle of traditional strategy: the trade-off between cost and value. According to conventional strategy, organizations can either create greater value for customers at a higher cost or create moderate value at a lower cost; value and cost appear proportionally linked, higher value driven by higher cost and vice versa. However, organizations that have successfully followed blue ocean strategy pursue both value differentiation and lower costs together, not as a trade-off. Blue ocean strategy works when organizations adopt a total-system approach wherein all systems of the organization, such as the value offering, price and costs, are well aligned. Observation of companies that have created blue oceans shows that they are able to benefit without facing strong challenges for over a decade. This is because blue ocean strategy creates significant economic and cognitive barriers to competition.

Both blue and red oceans have always existed and will continue to do so. When organizations understand the rationale behind both types of strategies, they will be better able to balance their efforts across both strategy types and create more blue oceans.

Coming up ...

Well, after having looked at the waterfall development model in the earlier two posts, let's look at a different topic: business strategy. I'm putting together a post on "Blue Ocean Strategy", based on the book by W. Chan Kim and Renée Mauborgne of the INSEAD business school. The concept of blue oceans is not limited to large corporations and can very well be applied by each of us in our respective areas of activity. I hope to post this by tomorrow at the latest.

Thanks for reading,

- John

Advantages / disadvantages of the Waterfall model

Continuing from the previous blog entry on the waterfall model, this post presents some of its advantages and disadvantages.

Some advantages of the Waterfall model
  • Clearly divides the problem into distinct phases that may be performed independently
  • Simple concept
  • Natural approach to solving the problem
  • Fits well into a contractual setting where each phase is considered a milestone
Some of the drawbacks of the Waterfall model

In many projects, the strict sequencing of phases advocated by the waterfall model is not followed. The model assumes that one builds an entire system all at once, performing end-to-end testing after all the design and most of the coding is completed. In reality, feedback from downstream phases is passed upstream to make refinements. For example, while implementing a design, issues with the design may be observed which require the design to be improved; the same applies to other phases. There could be quite a few such iterations to firm up requirements and design and get to actual implementation.

Evidence of failures in practicing the waterfall model comes from one of its most frequent users, the US Department of Defense (DoD). The DoD required most of its projects to follow the waterfall model, which was documented in the standard DOD-STD-2167. A report on project failure rates showed that up to 75 percent of the projects failed or were never used. Subsequent analysis recommended replacing the waterfall model with an iterative and incremental approach to development.

Some of the assumptions in the waterfall model include
  • A well-defined set of requirements is available. These are assumed to be reasonably well stated and the attempt is to freeze these early. The onus is then on making sure these requirements are well-understood and implemented
  • Any changes to defined requirements would be small enough to be able to be managed without having to make significant changes to the development plans or schedule
  • Software development and associated research & development activity can fit into a predictable schedule
  • Integration of the various pieces of the monolithic system, their behavior, performance and other attributes are predictable and that the architectural plans and designs would be able to handle any integration issues
In real-world development, it is not feasible to assume the above. Having a clear set of requirements firmed up at the outset is nearly impossible, and assuming that requirements thus defined are unlikely to change much is another fallacy. Experience shows that requirements do change, and in many cases change significantly, requiring re-work and re-design. The greater the time between gathering requirements and delivery of the finished product, the greater the likelihood of changes to the requirements. While trying to integrate the various pieces of the system, even thorough analysis and planning cannot accurately predict or control the process; often, assumptions made around integration turn out to be wrong. Any upstream slippage in schedule tends to compress the time available for later phases and, importantly, for adequate system integration testing. The model can also lead to early finalization of technological and hardware-related decisions which may not turn out to be the most appropriate. Real-world observations of software development highlight the fact that the "big-bang" approach of trying to deliver a monolithic solution is too risky and prone to cost and schedule overruns.

The Waterfall model

The waterfall model is generally attributed to Royce (1970). The model encourages the product development team to specify what the software is supposed to do (gather & define requirements) before implementing the system. Product development is split into multiple sequential steps (design, implement, test) with intermediate deliverables leading to a final product.

To ensure proper execution with good quality, each step has defined entry and exit criteria. The ETVX (Entry-Task-Validation-eXit) model proposed by IBM fits the waterfall approach wherein each phase may be considered as an activity structured using the model.
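The ETVX idea can be sketched as a small data model. Here is a minimal, hedged illustration in Python; the phase names and criteria below are hypothetical examples, not taken from IBM's formal definition:

```python
from dataclasses import dataclass

@dataclass
class Phase:
    """One waterfall phase structured per ETVX: Entry, Task, Validation, eXit."""
    name: str
    entry_criteria: list   # conditions that must hold before work starts
    tasks: list            # the work performed in the phase
    validation: list       # checks that the work was done correctly
    exit_criteria: list    # conditions that must hold before the next phase

def can_enter(phase, completed):
    # A phase may start only when all its entry criteria are satisfied.
    return all(c in completed for c in phase.entry_criteria)

design = Phase(
    name="Design",
    entry_criteria=["requirements approved"],
    tasks=["produce high-level design", "produce detailed design"],
    validation=["design review held"],
    exit_criteria=["design document signed off"],
)

print(can_enter(design, {"requirements approved"}))  # True
print(can_enter(design, set()))                      # False
```

The same gate check would apply at exit: the next phase's entry criteria typically reference the previous phase's exit criteria, which is what enforces the strict sequencing.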

The waterfall model may be viewed as a divide-and-conquer approach to development. It allows project progress to be tracked across phases and forces the organization to adopt a more structured approach to developing software. The model also requires documentation to be generated, which will be used to test and maintain the system. The waterfall model emphasizes careful analysis before building the system; the idea is to avoid wasted effort in building something which does not meet the customer's requirements. Hence, an attempt is made to fully specify and finalize customer requirements as early as possible. Requirements are documented in the requirements specification document, and in subsequent phases "verification" activities are performed to ensure conformance to the requirements listed there. A problem with this reliance on the requirements document is the possibility of incomplete or incorrect requirements being specified; adequate "validation" with the customer is required.

At a high level, the waterfall model comprises the following phases.
  • Requirements
  • Design
  • Development / Implementation
  • Testing
  • Maintenance
In the next blog post, we shall briefly look at some of the advantages and disadvantages of this model.


Coming up ...

I'm putting together a blog entry on the waterfall model. I know it's a "traditional" model and most folks would have worked with it. Yet, I am sure it is useful to revisit and post a fairly detailed analysis of the model, including a look at some of its advantages and disadvantages. I hope to post this by end of day today or by tomorrow at the latest.

- John

Dealing with information overload

Continuing from the previous post, help is at hand for dealing with “information overload” and the problems it presents. Various technological methods are available to deal with the problem and to aid both individuals and organizations. In addition there are some non-tech methods that involve changes to mind-set and culture, requiring individuals and organizations to modify current thinking and behaviour towards managing information.

Listed below are some tips, summarized from the HBR issue, to “reduce e-mail overload”.

As a recipient
  • To avoid constant distractions, turn off automatic notifications of incoming mails. Establish specific times during the day when you check and take action on messages
  • Do not waste time sorting messages into folders. Today's powerful inbox search features make that unnecessary
  • Do not highlight messages you intend to deal with later by marking them as “unread”. In email clients such as Microsoft Outlook, accidentally typing in the wrong keyboard shortcut will irrevocably designate every item in your inbox as “read”
  • If you will not be able to respond to an email for several days, acknowledge receipt and tell the sender when you are likely to get to it
As a sender
  • Make messages easy to digest by writing a clear subject line and starting the body with the key point
  • To eliminate the need for recipients to open very short messages, put the entire contents in the subject line, followed by “eom” (end of message)
  • Whenever possible, paste the contents of an attachment into the body of the message
  • Minimize email ping-pong by making suggestions such as “should we meet at x time ?” rather than asking open ended questions such as “when should we meet?”
  • Before you choose “reply to all”, stop and consider the email burden on each recipient
  • Send less email: an outgoing message generates, on average, roughly two responses
The above represent a few of the many suggestions and tips to manage information. There's a lot more information available online on how to manage information overload !

Information Overload

Based on the Harvard Business Review, September 2009 article on “Death by Information Overload”

The article talks about the phenomenon of information overload which most of us would be familiar with. I have attempted to summarize points from the article for the benefit of readers of this blog.

In the knowledge economy, information is considered to be our most valuable commodity. And these days, it's available in infinite abundance, delivered automatically to our electronic devices or easily accessible. Current research suggests that the surging volume of available information, and its interruption of people's work, can adversely affect not only personal well-being but also decision making, innovation and productivity. Today, information rushes at us in seemingly infinite formats: email, text messages, Twitter tweets, Facebook alerts, voice mail, instant messaging, RSS feeds and so many other ways. People are drawn towards information that in the past did not exist or that we did not have access to; now that it's available, we dare not ignore it.

What does this deluge of information mean for individuals ?

The stress of not being able to process information as fast as it arrives, combined with the personal and social expectation that, say, you will answer every message, can deplete and demoralize you. Edward Hallowell, a psychiatrist and expert on attention deficit disorders, argues that the modern workplace induces what he calls "attention deficit trait", with characteristics similar to those of the genetically based disorder. Also, a study commissioned by Hewlett-Packard reported that the IQ scores of knowledge workers distracted by email and phone calls fell from their normal level by an average of 10 points, twice the decline recorded for those smoking marijuana! While some people feel overwhelmed by information overload, others seem to be stimulated by it and display what is termed "information addiction". An AOL survey of 4,000 email users in the United States reported that 46% of those surveyed were "hooked" on e-mail. We must also be aware of the tendency of always-available information to blur the boundaries between work and home, affecting personal lives in unexpected ways.

What does this information overload mean for companies ?

An email notification or a message alert means more than just time spent reading and responding to the message; there's also the time required to recover from the interruption and re-focus attention. A study by Microsoft researchers tracking the email habits of coworkers found that once their work was interrupted by an email notification, people took, on average, 24 minutes to return to the suspended task. Why is so much time lost if all that needs to be done is to read a message? Studies further indicate that dealing with the message accounted for only a portion of the time off task. People used the interruption as an opportunity to read other unopened messages and engage in unrelated activities such as surfing the web and text-messaging. Surprisingly, over half the time was spent cycling through open applications to determine what they had been doing when interrupted, and re-establishing their state of mind once they finally arrived at the application they had abandoned. Distractions caused by email and other types of information also have more subtle consequences: research has identified reduced creative activity on days when work is fragmented by interruptions.

While it is not easy to quantify the costs of the consequences of information overload, one calculation by researchers put Intel's annual cost of reduced efficiency, in the form of time lost to handling unnecessary email and recovering from information interruptions, at nearly $1 billion. The researchers go on to say that organizations ignore that kind of number at their peril.

In the next post, we'll look at some ways to manage this information overload.

A change in this blog

As this blog continues to grow and evolve, you will notice a few changes and tweaks in the days ahead. The idea is to make the blog more insightful and appealing to a larger audience.

One of the more significant changes to this blog is its location.

This blog is now available at the url: http://www.techmanageronline.com

Please do update any bookmarks and references to the brand new location. Of course, the blog is in transition to the new location and is expected to be available at both the new and the existing blogspot locations for some time.

Also, I plan to include content on subjects that touch upon different aspects of software development, quality, management and allied areas, plus a few quick reviews too. Expect to see a few more changes, or "improvements" if you will, as we go along. Thanks for reading! Your inputs are most welcome.

Can test automation run without human intervention ?

A common assumption with regard to test automation is that automated test suites can be executed with zero human intervention. After all, isn't that what the tool vendors claim their products can do? Theoretically, you should be able to move your human testers to other tasks once they complete automating their tests.

In the real world, automation does not make human testers redundant. Almost all automated test suites require human intervention in order to remain effective. Consider two simple instances requiring skilled human intervention: analyzing the results of automated test execution, and maintaining the automated tests. Also, one must realize that, in practice, getting a complex automation test suite to execute without issues is itself a difficult task.

When the underlying product being tested changes, it is only natural that the automation which tests the product is affected. Even seemingly minor changes to the product can require fixes to the automated tests. Regularly monitoring the changes and the automated tests requires skilled human testers. In the real world, it is common to find that external factors such as issues with the file system, memory, networking, product dependencies, etc. can also easily disrupt smooth execution of automated tests.

We must also remember that test automation development is very much a software development project in itself and must be treated as such. Like any software that is developed, automated tests are not bug-free. Regular testing of the automated tests and monitoring of their execution is essential, so you know whether your automated tests are doing what you expect them to do. Any changes to the automated tests must follow a process akin to that for a comparable change in the software product, requiring reviews and testing to make sure fixes do not introduce additional defects.
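As an illustration of treating automation code as software in its own right, here is a minimal sketch in Python's unittest style, with unit tests written for a hypothetical automation helper; the helper function and the log format are invented for this example:

```python
import unittest

def extract_error_lines(log_text):
    """Hypothetical automation helper: pull ERROR lines out of a test-run log."""
    return [line for line in log_text.splitlines() if "ERROR" in line]

class TestExtractErrorLines(unittest.TestCase):
    """The automation code gets tested too, just like product code."""

    def test_finds_error_lines(self):
        log = "INFO start\nERROR disk full\nINFO done"
        self.assertEqual(extract_error_lines(log), ["ERROR disk full"])

    def test_clean_log_yields_nothing(self):
        self.assertEqual(extract_error_lines("INFO start\nINFO done"), [])

unittest.main(argv=["tests"], exit=False)
```

A bug in a helper like this would silently corrupt the results of every automated run that uses it, which is exactly why the automation itself deserves review and tests.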

Exploratory testing

Exploratory testing assumes significance in the context of agile development. Testers in agile need to plan to explore the product during each iteration. The goals of exploratory testing are to (obviously) unearth bugs, look for missing functionality, and identify areas for improvement.

Exploratory testing is a simultaneous process of discovery and learning, followed by dynamic development of tests and their execution. It is useful when testers are trying to go beyond what is known, or when not much is known about the software; information gathered through exploratory tests can help design new tests or improve existing ones. Documentation can only help so much; the tester needs to use the software to best understand it.

Agile testers ... role and requirements

Given the close association of testers with developers in agile, and the nature of incremental testing of partial work products, the tasks that testers perform may sometimes seem hazy. Would testers do the unit testing on the partially implemented or incremental bits of code? Or would testers duplicate the unit tests that developers have already run?

The role of testers in agile is neither to perform the programmers' unit tests nor to duplicate them. Agile testers perform a significant amount of manual (yes, manual) exploratory-type testing. The purpose of these tests is to reveal issues the unit tests would not have discovered. The exploratory tests need to be as wide as possible, meaning as close to end-to-end as possible. Whereas unit tests focus on a specific module or area of code, exploratory tests that cover the interactions between modules and user scenarios tend to surface issues that were not found earlier. Such end-to-end tests find issues quicker.

For testers to operate successfully in an agile environment, it is important that they be familiar with the tools of the trade. Testers need to know the language used in development; be able to check out and build the source code; work with the development environment (IDEs, version control, continuous integration build systems, unit test frameworks such as xUnit, etc.); be able to configure the system and its dependencies; where needed, write code or scripts to work around any as-yet-undeveloped interfaces or harnesses; add to the existing automation suites as needed; and be able to work together and communicate comfortably with programmers.
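For example, writing a small stub to stand in for an interface the team has not built yet is a typical tester task. Here is a hedged Python sketch; the payment service, the order flow, and all the names are hypothetical:

```python
# A hypothetical stand-in for a payment service that is not implemented yet,
# so end-to-end tests of the order flow can run against a predictable stub.
class PaymentServiceStub:
    def __init__(self):
        self.charges = []  # record of every charge attempted, for later checks

    def charge(self, account, amount):
        # Record the call and pretend the charge succeeded.
        self.charges.append((account, amount))
        return {"status": "approved", "account": account, "amount": amount}

def place_order(payment_service, account, amount):
    """Hypothetical code under test: places an order via the payment service."""
    result = payment_service.charge(account, amount)
    return result["status"] == "approved"

# The tester can now exercise the order flow and inspect what was charged.
stub = PaymentServiceStub()
assert place_order(stub, "acct-1", 25.0)
assert stub.charges == [("acct-1", 25.0)]
```

When the real payment interface arrives, the same order-flow tests can be pointed at it, and the stub's recorded calls double as documentation of what the tests expect.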

Agile testing ...

One of the challenges for testers in agile is that the definition of the software is not fixed; it is a moving target. In the non-agile (traditional) models, testers await the code freeze to perform extensive system testing. However, this is not really an option in the agile world. Agile makes change inevitable. There may be iterations where no new features are developed and developers focus on fixing bugs. Testers need to embrace change and start testing as early as possible, rather than holding off a major chunk of testing until all the requirements are firmed up and implementation is complete.

Agile testers need to make the most of available time for testing. Unlike the traditional models where testers are allotted several weeks to perform testing, in the agile world the iterations / sprints are short. Tester feedback needs to be earlier and faster. It is not enough to stick to trying to accomplish the same type of testing in a faster manner. Testers in agile need to rethink their processes and how testing is performed, to make this happen.

Agile testers ...

One of the characteristics of agile teams is that they test – early, often and continuously. Most agile teams perform extensive unit testing and collaborate with users on creating automated acceptance tests. Agile teams that practice test-first development tend to write automated unit tests before writing the code those tests will exercise.
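Test-first development in miniature might look like the following Python sketch. The `apply_discount` function and its behaviour are illustrative, not from any particular project: the tests are written first to pin down the expected behaviour, and only then is just enough code written to make them pass.

```python
import unittest

class TestDiscount(unittest.TestCase):
    # Written first, before apply_discount existed: these tests are the spec.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_no_discount(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

# Only then is the code written, just enough to make the tests pass.
def apply_discount(price, percent):
    return price * (100 - percent) / 100.0

unittest.main(argv=["tests"], exit=False)
```

The payoff is that the tests exist from the first line of product code, so every later change runs against them automatically.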

Quality in agile is the responsibility of the entire team. Agile developers hold themselves accountable for the quality of the code, and therefore view testing as a core part of software development, not a separate activity to be performed after implementation is completed. However, things have not all been smooth with regard to accepting testers as a valuable part of the agile team. Earlier, some agile practitioners suggested that early unit testing and automated customer-driven acceptance testing reduced the need for independent testers. Things have however changed over time. Testers today are viewed as a definite value-addition to the agile team. Experience shows that professional and skilled testers can detect useful defects that do not show up during the developer tests or the automated tests.

Dilbert !

This would not exactly qualify as a "Software and Quality" subject. Yet, in the spirit of adding an element of fun to this blog, I have included the Dilbert widget on the side-bar. Take a look and catch up on your daily dose of Dilbert right here!

Agile ... continued

Continuing on the subject of agile ... agile teams accept change as inevitable and adapt their processes to manage change. Short iterations mean that stakeholders see steady progress and can provide frequent feedback. The emphasis on working software means that stakeholders can see and use a working prototype rather than merely look at metrics and documentation describing the team's status. Continuous integration means that if one part of the system isn't playing nicely with others, the team finds out almost immediately. Merciless refactoring, where programmers improve the code internally without changing its external behaviour, prevents code from becoming fragile over time. Extensive automated unit tests help ensure that fixing one bug won't introduce further regressions.


Agile

The subject of this post is probably not a suitable indicator of what's in this entry. I was intending to talk about agile testing and then shifted gears to begin by trying to shed some light on the basic question - what is Agile? In subsequent posts, I hope to focus more on testing in an agile context.

What is Agile (in the context of software development)? Is it a buzzword? Is it what the dictionary defines "agile" to be - adaptable, able to move quickly, respond quickly? Well, if the dictionary definition were true, then I'd say almost everyone involved with producing software would want to be agile.

Agile refers to a collection of methodologies that enable agility in producing software. Common agile methods include Scrum, XP, etc. The Agile manifesto describes the values of the agile community. These are listed below.

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

While agile methods do not intend to do away with the items on the right, they lay a greater emphasis on the items on the left and that can present a significant difference in how software is produced vis-a-vis the non-agile / traditional models of development (and testing).

Testing needs to be business driven

Testing needs to be business driven and customer focused. Testers are often reluctant to accept that their test efforts are not really driven by what is important to the business or customer. It is fairly commonplace for quality criteria to be defined by the testing group alone, without consulting stakeholders or checking whether the criteria match what the business or customers think quality should be. The customer or business must decide and define the scope of the term quality in the context of what is being delivered, and testing must focus on this definition.

Deciding what needs to be tested, prioritizing tests and managing testing risks all need input from the customer or their representatives. It should not be the case that the testing group alone decides what to test, the scope and extent of testing, the areas to be tested more or less, and so on. In the agile world, the ability of testing as a function to adapt to delivering what the business needs is critical for the group to add significant value. Test planning and development should not be limited to the testing team; they should involve other stakeholders to understand what is important to the business as opposed to what the group thinks might be important.

NASSCOM Product Conclave 2009

I was away for a couple of days attending the NASSCOM Product Conclave 2009.

I liked the talk by Guy Kawasaki. There were a few other thought provoking sessions and some interesting panelists / speakers.

Testing vs field observed defects

Myers put forth a counter-intuitive principle in software testing: the more defects found during formal testing, the more that remain to be found later.

There is a positive correlation between the rate of defects found during formal testing and the rate of defects reported from the field. A higher rate of defects during a formal testing exercise usually means either that there has been a higher rate of error injection during the development process, or that a new and more effective approach to testing has been followed. It could also be that a lot of additional, extraordinary test effort was expended, resulting in the higher rate of defects found.

A popular analogy for the relationship between defect rates in formal testing and in the field is to picture the overall defect rate as an iceberg. The visible tip is likened to the defects found during testing, and the submerged portion to the latent field defect rate. The overall size of the iceberg is determined by the level of error injection during development. Formal testing normally happens once the code is developed and integrated, by which time the "iceberg" is already formed. The larger the visible tip, the larger the entire iceberg is likely to be.

This does not mean we simply accept the latent defects that will be revealed during field usage. We can take steps to reduce the extent of the latent defects and bring more of the iceberg above water. Managing the quality of the development process is important and can reduce the rate of error injection. Prevention is definitely better than trying to find and fix defects (probably introducing other defects in the process). Even with robust processes, some amount of error injection cannot be ruled out, and this is where practices such as good design & code reviews and inspections are needed. Additionally, unit and integration tests by developers prior to checking code into the repository should reduce the number of defects left lurking around. The testing team must also continually enhance their tests, improve coverage and analyse defect rates & trends across releases to make sure testing is finding as many issues as it can.

Defect & Effort

It is observed that, keeping other factors such as skill levels, processes, tools and technology constant, there tends to be a linear relationship between defects and effort. Human errors cause defects to be introduced at a roughly constant rate. The rate of introducing defects may, however, be altered by improvements in the development process, better training, changed schedules, improved staffing, use of better tools and techniques and so on. This can also be looked at in terms of the relationship of the defect arrival rate to the code development rate, which in turn is related to effort.
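The linear relationship can be sketched as a toy model. The rates used below (defects per person-day, defects per KLOC) are purely illustrative assumptions, not empirical figures; any real use would require calibration against a team's own historical data:

```python
def predicted_defects(effort_person_days: float,
                      injection_rate: float = 0.9) -> float:
    # Illustrative linear model: defects introduced are proportional to
    # development effort. The injection rate is a hypothetical figure.
    return effort_person_days * injection_rate

def arrival_from_code_rate(loc_per_day: float, days: float,
                           defects_per_kloc: float = 15.0) -> float:
    # The equivalent view via the code development rate: defects arrive
    # in proportion to code produced (rate per KLOC is illustrative only).
    return loc_per_day * days * defects_per_kloc / 1000.0
```

Improving the process, training or tooling corresponds to lowering the rate parameter, which shifts the slope of the line rather than breaking its linearity.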


Privacy of test data

Privacy of data used in testing is something organizations must consider. It is not uncommon to see organizations using a copy of their production data to facilitate testing of their applications. This usage automatically exposes private data to internal constituents such as testers, database administrators, developers and others who have access to the data. Organizations tend to assume that since the test data and its associated environment reside within the organization's firewall, the data is safe. In addition, securing test environments is often not high on the priority list. However, the fact remains that employees now have access to private data including credit card information, financial data, SSNs, etc. Providing such access violates privacy regulations, enables data theft and misuse by internal staff, and even exposes the data to external hacking. Given the typically low levels of security surrounding a test environment, all that hackers need to do is break into the corporate network and help themselves to the data mine resident in the test databases.

The reasoning for using production data in testing is to perform real-life, comprehensive testing of the application. While this may be true, organizations cannot ignore the risks involved in simply using a copy of production data as-is in the test databases. A couple of techniques that may be followed to mitigate the risks are to generate test data and to mask sensitive data.

Generating test data eliminates the need to use copies of production data. Organizations may choose to use a mix of production (non-sensitive) data along with generated data for sensitive fields such as card numbers. Test data generation is not as simple as it sounds; generating data that represents the various possible real-life use cases is not easy. The greater the complexity of the application being tested, the greater the difficulty in generating suitable test data.
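As one small example of generating a sensitive field rather than copying it from production, card numbers can be synthesized so they are structurally valid (they pass the Luhn checksum that card-handling code typically validates) while corresponding to no real account. The prefix and length defaults below are arbitrary choices for illustration:

```python
import random

def luhn_check_digit(partial: str) -> str:
    # Compute the Luhn check digit for a digit string. Once the check
    # digit is appended, every second digit from the right is doubled,
    # so here the doubling starts at the rightmost digit of `partial`.
    total = 0
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def generate_test_card_number(prefix: str = "400000", length: int = 16) -> str:
    # Fill with random digits after the prefix, then append the check digit.
    body = prefix + "".join(random.choice("0123456789")
                            for _ in range(length - len(prefix) - 1))
    return body + luhn_check_digit(body)
```

Realistic structure matters here: a test card number that fails the application's own validation would never exercise the downstream payment paths at all.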

Masking of production data is another technique that may be used to maintain data privacy. Masking is also known as scrubbing or sanitization of data. Sensitive data is masked using various algorithms so that private data remains hidden from view. Several vendors offer data masking solutions. The advantage of masking data is that testing can happen with real data. However, data masking for larger and complex applications requires considerable effort and expense to implement.
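A minimal sketch of one masking approach follows. The field names are hypothetical, and real masking products use a range of algorithms (substitution, shuffling, format-preserving encryption); this sketch simply replaces sensitive values with a deterministic, irreversible token:

```python
import hashlib

# Hypothetical set of sensitive column names for illustration.
SENSITIVE_FIELDS = {"card_number", "ssn", "email"}

def mask_record(record: dict) -> dict:
    # Replace sensitive values with an irreversible token. Using a
    # deterministic hash preserves referential integrity: the same
    # production value always maps to the same masked token, so joins
    # across tables still line up in the test database.
    masked = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS and value is not None:
            digest = hashlib.sha256(str(value).encode()).hexdigest()
            masked[key] = "MASKED-" + digest[:12]
        else:
            masked[key] = value
    return masked
```

Note that plain hashing does not preserve the format of the original field; where the application validates field formats, a format-preserving technique (or generated data as above) would be needed instead.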

The pesticide paradox

An interesting analogy comparing Software Testing with the use of pesticides in farming was presented by Beizer in his book on Software Testing techniques. He called it the pesticide paradox.

Repetitive use of the same pesticide mix to eliminate insects in farming will, over time, lead to the insects developing resistance to the pesticide, rendering the mix ineffective. A similar phenomenon is seen while testing software: as testers keep repeating the same set of tests over and over again, the software being tested develops "immunity" to these tests and fewer defects show up, until eventually nothing new is revealed by the tests.

Further, every method you use to prevent or find bugs leaves a residue of subtler bugs against which those methods are ineffectual. To overcome the pesticide paradox, testers must regularly develop newer tests exercising the various parts of the system and their inter-connections to find additional defects. Also, testers cannot forever rely on existing test techniques or methods and must be on the lookout to continually improve existing methods to make testing more effective.
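The paradox can be illustrated with a toy simulation. All numbers here are invented: each test is modelled as deterministically catching a fixed subset of a program's defects, so re-running an unchanged suite finds nothing new, while adding fresh tests each round keeps uncovering defects:

```python
import random

def defects_found_per_round(rounds: int, new_tests_per_round: int,
                            total_defects: int = 100, seed: int = 1) -> list:
    # Toy model of the pesticide paradox (illustrative, not empirical).
    rng = random.Random(seed)

    def make_test():
        # Each test catches a fixed ~30% subset of the defect population.
        return {d for d in range(total_defects) if rng.random() < 0.3}

    suite = [make_test() for _ in range(5)]   # the initial regression suite
    found, cumulative = set(), []
    for _ in range(rounds):
        for test in suite:
            found |= test                      # defects caught this round
        cumulative.append(len(found))
        suite += [make_test() for _ in range(new_tests_per_round)]
    return cumulative
```

With `new_tests_per_round=0` the cumulative count flattens after the first round (the "immune" software); with a positive value it keeps climbing, which is the behaviour the paradox says testers must engineer deliberately.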

Defects are useful

Defects provide real, observable data regarding a project's progress, both in terms of quality and schedule. I once came across an interesting analogy comparing defects to pain in the human body. Pain is the body's way of providing feedback, without which we could cause ourselves serious harm without even realizing it. Defects are the software equivalent. While both defects and pain are things we wish to avoid and eliminate, their presence signifies underlying symptoms that need a cure. Analysis of defects, like diagnosis of pain by a competent professional, leads to unearthing and fixing the issues, which in turn ensures the better health of the system.

A characteristic of defects is that they are real, observable manifestations. Defects help indicate the progress of software development, the effectiveness of the development process, the potential for improvement and the quality of the product being developed. Defects can be counted, charted and predicted, and provide a wealth of information and insight into the product as well as the development effort.

Wide band Delphi (WBD)

Wide band Delphi (WBD) is a structured estimation technique involving an expert group. There is a lot of literature around the details of implementing this technique. In brief, this technique involves getting a group of “experts” to make estimates, discuss their assumptions and arrive at a consensus estimate. The estimates made by a group of experts with their varied perspectives are expected to be better than that made by any single individual who may not have the breadth or depth of understanding about the various activities involved.

In this technique, the team of experts begins by analysing the scope / specification of the work being estimated, brainstorms assumptions and creates a work-breakdown structure (WBS). Members of the team then make estimates individually for items in the WBS and note any further changes to the WBS and assumptions. The team then meets together to arrive at a consensus on the estimates. The meeting is facilitated by a moderator who charts the estimates without revealing the estimators and guides the group towards understanding the range of estimates, clarifying any assumptions, revising estimates, in a cyclical process until a consensus is reached.

While implementing the WBD technique, it is important to assemble the appropriate team to generate estimates. It is a good idea to involve representatives from different functions who have a stake in the product, so that together they can agree upon the estimates and feel a sense of ownership of the plan. The technique is useful for new projects or projects with multiple factors and uncertainty. WBD helps refine and develop the WBS as well as clarify assumptions around estimates. The technique does, however, take time and requires multiple experts to come together to make estimates.
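The moderator's bookkeeping for each round can be sketched as follows. The role names, numbers and convergence rule are invented for illustration; real WBD sessions converge through discussion, with the arithmetic only charting the spread:

```python
from statistics import mean

def summarize_round(estimates: dict) -> dict:
    # Chart one round of estimates for the group without revealing
    # which estimator produced which number.
    values = sorted(estimates.values())
    return {"low": values[0], "high": values[-1],
            "mean": round(mean(values), 1),
            "spread": values[-1] - values[0]}

def has_consensus(estimates: dict, tolerance: float = 2.0) -> bool:
    # A simple stopping rule: consensus when the range of estimates
    # falls within an agreed tolerance (units are illustrative).
    values = list(estimates.values())
    return max(values) - min(values) <= tolerance
```

For example, a first round of `{"dev": 10, "qa": 18, "arch": 12}` has a spread of 8 and no consensus; after assumptions are clarified and estimates revised, a second round of `{"dev": 13, "qa": 14, "arch": 12}` converges and the cycle stops.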

Software Complexity

In the previous post we looked at a Software complexity metric known as Cyclomatic Complexity. In addition to this measure, there are various other metrics used to measure complexity and these include - measure of size by way of lines of code or function points (in turn translated to lines of code), Halstead's Complexity measure, Information Flow Complexity (IFC - IEEE 982.2), metrics for measuring complexity of Object Oriented code, etc.

The basic theory behind measuring complexity is – the greater the complexity of the code, the more difficult it is to test & maintain. Increased complexity leads to higher probability of defects and greater difficulty with maintaining the code. Complexity metrics are used to predict defect proneness and maintenance productivity, help identify code that needs to be simplified as well as areas at greater risk of defects and areas where additional testing may be needed.

While these metrics focus on complexity of the structure of the software, we must also remember that software complexity is not limited to structure or design and includes aspects such as complexity of the computation being performed (Computational Complexity – from a computational standpoint, not necessarily the human perspective) and complexity in understanding (Conceptual Complexity – from a human programmer standpoint).

Cyclomatic Complexity

One of the more popular complexity measures is McCabe's Cyclomatic Complexity (CC). The theory behind CC is simple: CC is a measure of the number of control flows within a module. A module is defined as a set of executable code that has an entrance and an exit. Control flow helps determine the number of paths through the module. The greater the number of paths through the module, the greater is the module's complexity.

The cyclomatic number for a module is equivalent to the number of linearly independent paths through the module and can be used to determine the minimum number of distinct tests that must be executed to test every executable statement at least once.

CC may be measured in either of two ways:

1. by counting the nodes (corresponding to the corners) and edges (corresponding to the bodies of the arrows) of the module's control-flow graph:
CC = # of edges - # of nodes + 2
2. by counting the number of binary decision points:
CC = # of binary decisions + 1

After we calculate the CC number for a module, what do we do with it and what does it mean? Stated simply, a higher CC signifies greater complexity of the module and corresponds to greater difficulty in testing and maintaining it. Rules have been put forth for interpreting CC numbers. One such rule holds that CC > 20 signifies a high degree of complexity and a risk of the code being prone to defects. There are also rules that use the CC number to predict the probability of introducing regressions or inserting defects while trying to fix another defect. Here too, a higher CC corresponds to a greater probability of introducing new defects while making fixes. CC is helpful in gaining insight into how difficult code will be to maintain and test.

The following are extensions of Cyclomatic Complexity.
  • CCD (Cyclomatic Complexity Density) is used to predict maintenance productivity and is derived by dividing CC by LOC (Lines of Code). Higher CCD corresponds to lower maintenance productivity.
  • ECC (Essential Cyclomatic Complexity) measures the cyclomatic complexity after the structured constructs (such as if, while, case, sequence) are removed.

I've been away

I've been away from blogging and fairly busy over the past few weeks. Hope to be back with posts and updates on the very interesting subject of Software Quality & Testing.

Feel free to send any comments or feedback my way.

Simplified V-model

The picture below depicts the V life-cycle model, the different phases in which development happens and the corresponding test activities.

Verification and Validation

The terms verification and validation are often used in the context of Software Testing practice. Here's a brief look at these terms.

Verification involves evaluating a system or component to determine whether the output of a given life cycle phase satisfies the conditions imposed at the start of that phase. Verification attempts to answer the question: are we building the system or component right? Examples of verification activities include reviews, inspections, static analysis, walkthroughs, etc.

Validation involves evaluating a system or component to determine whether it meets specified requirements. Validation attempts to answer the question: are we building the right system or component? Validation generally takes place after verification is performed.

In simple terms, it would be fair to equate the preventive Quality Assurance activities to Verification and the reactive Quality Control / Testing activities to Validation.

Verification, when done thoroughly, helps eliminate defects earlier and lets validation activities such as unit, integration, system and acceptance testing focus better on determining whether the system or component being built meets the real needs of the user.

Project Management in an Agile World

This is from an article that I put together, published in the September '09 issue of the Project Management Institute's magazine - available online at http://www.pmi.org.in/

Managing projects in the agile world requires the ability to balance stability with flexibility, order with chaos, planning with execution, optimization with exploration and control with speed, while dealing with project unpredictability and dynamism by recognizing and constantly adapting to change.

In their whitepaper "The New Product Development Game", Takeuchi and Nonaka suggest that "the rules of the game in product development are changing." Under the traditional approach, product development moved like a relay race, sequentially from one phase to the next: requirements, design, development and so on. Problems could occur at the points where one group passed the project to the next, and a bottleneck in one phase could slow the entire development process.

Takeuchi and Nonaka discuss the "rugby approach" of dedicated, self-organizing teams whose members, like an actual rugby scrum working together to gain control of the ball and move it up the field, all work together to deliver the product. The new approach has characteristics such as built-in instability, self-organizing project teams and overlapping development phases. These self-controlled, self-organizing teams require little direct project management as we know it.

Agile projects value working software, which is a profoundly different emphasis from traditional projects. Traditionally, one would measure a project's progress by the percentage completion of functional milestones (analysis complete, documentation complete, code complete ...). In agile projects, however, working software is the ultimate quantification of project status. At the end of each short iteration, a working product is delivered and available for review.

While agile methodologies have gained popularity, the role of the project manager (PM) in many groups remains unclear. Traditionally, the project manager is "the outsider" who controls the team's progress and makes assignments. In the agile world, PMs are expected to be part of the team, functioning from within the team's boundary itself while acting as a facilitator who collaborates with the team.

To be more specific, in the case of Scrum (a popular agile methodology which we follow in our group), it would be safe to state that the responsibilities of a traditional project manager have been distributed among the Scrum master, the product owner and the team. In Scrum, the project team meets at a sprint planning meeting where the team itself plans and schedules its own work using a sprint backlog - a list of tasks to be tackled during the sprint (~4 weeks). The project manager generally plays the role of the Scrum master, who facilitates the daily meetings of the team, understands any impediments and works to remove them. Skills needed for the role include influencing, negotiation and facilitation, which come into play when dealing with a team comprising representatives from various functional areas and when working across organizational hierarchies and divisions to resolve impediments the team faces. The Scrum master can be viewed as a servant-leader who works to help the team become productive. The team decides what tasks to take up and estimates the time needed to complete them; team members derive metrics based on their daily activities and report them. Some of the responsibilities of the project manager in the agile world include the following.

Remove impediments: These could be administrative, requirements or technology challenges. Impediments are reported during the daily meetings by the team members. The project manager (acting as the scrum master) takes note of the issues, tries to remove them and reports back on status.

Facilitate sprint planning meetings: Before starting a sprint (iteration), the PM facilitates the planning meeting to get the team to decide & commit to tasks they will perform. Any dependencies between requirements could also be considered and a plan for the sprint is prepared.

Facilitate sprint retrospective meetings: Unlike the "lessons learnt" meetings that used to happen at the end of a project, retrospectives happen after each iteration and are facilitated by the project manager.

Facilitate, track and monitor estimation: The agile team makes the estimates while the project manager captures and tracks them. The project manager's focus is on leading the project rather than micro-managing the team's activities.

Handle reporting: The team generates most of the data in the course of their normal work. The project manager can take this input and present it in a way that's appropriate for the different entities interested in this information.

Facilitate daily meetings: Running the daily meeting as per the rules and timelines, keeping the team focussed, facilitating status reporting by all members and capturing action items are part of the PM's profile.

The project manager in the agile world is called to lead. The PM has to keep the team on track, help resolve issues, have good inter-personal skills to handle any people issues within the team, communicate & negotiate with stakeholders and report on project status. The project manager represents the team to the world outside and is responsible for protecting the team from external influence and distractions.

Errors - Load, Hardware

LOAD CONDITIONS
  • Required resource not available
  • Doesn't return a resource
    • Doesn't indicate that it's done with a device
    • Doesn't erase old files from mass storage
    • Doesn't return unused memory
    • Wastes computer time
  • No available large memory areas
  • Input buffer or queue not deep enough
  • Doesn't clear items from queue, buffer, or stack
  • Lost messages
  • Performance costs
  • Race condition windows expand
  • Doesn't abbreviate under load
  • Doesn't recognize that another process abbreviates output under load
  • Low priority tasks not put off
  • Low priority tasks never done

HARDWARE
  • Wrong device
  • Wrong device address
  • Device unavailable
  • Device returned to wrong type of pool
  • Device use forbidden to caller
  • Specifies wrong privilege level for a device
  • Noisy channel
  • Channel goes down
  • Time-out problems
  • Wrong storage device
  • Doesn't check directory of current disk
  • Doesn't close a file
  • Unexpected end of file
  • Disk sector bugs and other length-dependent errors
  • Wrong operation or instruction codes
  • Misunderstood status or return code
  • Device protocol error
  • Underutilizes device intelligence
  • Paging mechanism ignored or misunderstood
  • Ignores channel throughput limits
  • Assumes device is or isn't, or should be or shouldn't be initialized
  • Assumes programmable function keys are programmed correctly