Testing in the Agile World (Part 3)

Testing in agile is not something that happens at “the end” of the development or implementation phase. Testing happens as development happens. Incremental testing is the norm: each increment of functionality is tested as it is developed, and issues are reported immediately. This short, quick feedback loop helps deliver better quality code and reduces the cost of defects, which would be far more expensive to fix later. The ability to have a working, demonstrable piece of software at the end of each sprint is also a huge benefit – customers and product owners can quickly review and “play around” with the developed artefact and provide feedback. This helps ensure that the agile team is developing what the customer needs and also keeps the customer apprised of the team's progress in real time.

An important aspect of agile development is the ability to “release” after each iteration or sprint. A working copy of the product is expected to be ready for “release” at the end of each iteration. The team may not wish to actually release after every iteration, but having the capability – and working towards delivering working software by the end of each iteration – is key. Customers may choose to pick up a deliverable after an iteration (which could be every few days or weeks) and either verify that development is on track or even deploy in increments. The agile team gets regular feedback on its development and can incorporate it quickly, rather than waiting for the complete product to be developed and then released to customers.

Customers get a better say in how development happens. The product owner can decide to stop further development in some areas or suggest changes where needed. The focus on getting working software out at the end of each iteration also brings the various functions together as a close-knit team – everything from development to installation, documentation and testing needs to be taken care of, rather than leaving any item for later in the release. Issues are identified sooner and, as stated earlier, the short feedback loop helps incrementally deliver better quality software faster.

Agile development also involves less documentation than traditional models of development. Agile methods focus on face-to-face interactions and meetings to keep the communication channels open and clear. In the Scrum methodology which we follow, daily stand-up meetings are conducted where all members of the agile team share their status updates, plans and impediments encountered. In addition, planning and retrospective meetings are held at the start and end of each sprint. Testers work alongside their counterpart developers and regularly test every testable bit of work product, providing feedback to ensure a better quality feature goes in. Communication on agile teams tends to be quick and direct, with agile methodologies favouring co-located teams and human interactions. It is far easier and more effective to pop into your neighbouring cubicle and get something clarified than to start an email thread and await responses.

Agile development requires a customer representative to be part of the team. This works better than having to rely solely on a requirements document: you can ask the representative for clarifications directly and get first-hand feedback. Requirements are prioritized based on what is most important to the customer and listed in what is normally known as the product backlog. The agile team goes through the backlog in order and picks up the items it can commit to delivering within the iteration.
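The backlog-picking step described above can be sketched in a few lines of code. This is a hedged illustration, not a prescribed Scrum practice: the item names, point values and the greedy selection rule are all invented for the example.

```python
# Hypothetical sketch: walk a prioritized backlog in order and commit to
# items that still fit within the team's capacity for the iteration.
def pick_sprint_items(backlog, capacity):
    """Return the items the team can commit to, in priority order."""
    committed = []
    remaining = capacity
    for item, points in backlog:
        if points <= remaining:
            committed.append(item)
            remaining -= points
    return committed

# Invented example backlog: (item, estimated effort in points)
backlog = [("login page", 5), ("report export", 8), ("search filter", 3)]
print(pick_sprint_items(backlog, 10))  # → ['login page', 'search filter']
```

Real teams negotiate their commitment in the sprint planning meeting rather than applying a rule mechanically; the sketch only illustrates the "in priority order, up to capacity" idea.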

Testing in the Agile World (Part 2)

Continuing from the previous post


In such a scenario, the role of individuals who consider themselves dedicated testers may be questioned. When agile development already emphasizes practices such as test-first development and developers writing unit tests, is there a need for dedicated or specialist testers on agile teams? The answer is a resounding: Yes!
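As a rough illustration of the test-first practice mentioned above, here is a minimal sketch in Python. The function, its behaviour and the numbers are all invented for this example; the point is simply that the checks are written before, and drive, the implementation.

```python
# Test-first sketch: these checks are (hypothetically) written first, fail,
# and then pass once apply_discount is implemented to satisfy them.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, to 2 decimal places."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(200.0, 10) == 180.0   # 10% off 200 is 180
    assert apply_discount(99.99, 0) == 99.99    # zero discount changes nothing

test_apply_discount()
print("unit tests passed")
```

Such developer-written checks cover the unit level; the specialist testing discussed below starts where these leave off.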

Testers bring to the table a range of special skills and abilities that help enhance the quality of the work product. Testers can perform testing that goes beyond the unit and component level tests which developers / programmers may perform. Like their counterparts in traditional models, testers on agile teams can add a lot of value by testing from a customer / end-user perspective and by developing and executing a variety of test types such as performance, functional, security, interoperability, compatibility and so on.

In our group at my current organization, we follow an agile development method called Scrum. In brief, product development happens in short iterations called “sprints”, which may be a few weeks in duration (generally up to ~4 weeks). Members from the different functional groups come together to form a single team that works on delivering the features the team commits to. The list of features, enhancements and defects to be addressed is put up in a prioritized list known as the backlog. The sprint team picks up tasks from this list which members think they can accomplish during the sprint. The Scrum process is coordinated and facilitated by an individual who dons the role of “Scrum master”. Daily stand-up meetings happen where members share information on their achievements since the last meeting, any obstacles faced and plans for the next day. Reports such as burn-down charts and information captured during the meetings introduce a greater degree of transparency into development activities compared to traditional models. Testers are paired with developers: normally a tester works together with a developer on a particular area and the two work in tandem on producing the product.
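The burn-down charts mentioned above are simple arithmetic over the sprint's remaining work. Here is a rough sketch with invented numbers (40 committed points, of which 5 days of progress are recorded against a 10-day ideal burn):

```python
# Remaining work after each day, given points completed per day.
def burn_down(total_points, completed_per_day):
    remaining = [total_points]
    for done in completed_per_day:
        remaining.append(remaining[-1] - done)
    return remaining

# The "ideal" straight-line burn from total_points down to zero.
def ideal_line(total_points, days):
    return [total_points - total_points * d / days for d in range(days + 1)]

print(burn_down(40, [5, 3, 0, 8, 6]))  # → [40, 35, 32, 32, 24, 18]
print(ideal_line(40, 10))              # straight line from 40.0 down to 0.0
```

Plotting the actual series against the ideal line gives the familiar burn-down chart; a flat stretch (day 3 above) is an immediate, visible signal that the sprint may be off track.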

Testing in the Agile World (Part 1)

In this and the next few posts, I shall post content from my paper on Agile testing, recently published by the Quality Assurance Institute. We start with a look at the concept of Agile development and progress towards testing in the agile context, with specific emphasis on the Scrum model.

Agile Software Development refers to a philosophy, a mind-set based on iterative development. Agile methodologies support the agile values based on this philosophy. The Agile Manifesto lists the following agile values:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

The manifesto goes on to state that while there is value in the items on the right, agile development values items on the left more.

Greater transparency into how software is produced, better predictability, faster time-to-market, frequent releases, increased productivity, higher levels of quality ... all this and more lures organizations that have been following “traditional” development models to “try out” agile methodologies.

Traditional methods of development generally follow models that define phases of activity: release planning, requirements definition, sign-off, design, implementation, testing and so on. Work products are passed on from one phase to the next. Experience shows that these models tend to involve fairly long release cycles, delaying time-to-market, and are rather inflexible to changes through the development lifecycle. The general aim tends to be to deliver all functionality captured in the initial requirements specification as a completed, finished product. Testers in this case prepare by creating test plans based on the requirements and other available documentation, and then await the finished work product after implementation is complete to begin formal testing.

Agile methods, however, introduce a paradigm shift in how products are produced. Development happens in short cycles of a few weeks' duration; at the end of a cycle a working product or artefact is ready and available to be demonstrated to the customer and even shipped if need be. The product owner / customer sets the quality criteria for each iteration or sprint. This definition of quality signifies what is important from the customer's viewpoint rather than what a formal testing team may choose to define. For example, issues which testers think should be addressed immediately may actually be deferred to a later iteration if the customer does not consider them a priority. Agile development does not usually encourage specialized roles such as tester or developer. Members from various groups are drawn together to form an agile team, which could comprise representatives from development, testing, technical writing, internationalization, etc. as required for producing the product.

Coming up ...

In the next few blog entries, I intend to post from my paper published by QAI on the subject of "Testing in the Agile World". We will look at the concept of Agile development, how testers fit in, and the qualities testers need to succeed in this interesting world.

Thanks for reading.

- John

De-centralized / distributed test teams

Here, testers are organized by line of business or product. Having testers assigned to specific products enables them to develop subject matter expertise to a level that is not usually possible in a centralized test team. Test coverage and effectiveness are enhanced, since testers have greater knowledge of the product and use that knowledge to test wider and more complex scenarios. The interaction between development and testing also tends to be better: both groups work closely together and interact often throughout the product life cycle. Processes for engagement and interaction between functional groups are well established and operate more smoothly than in a centralized model.

Testers normally report into the development organization or to the application owner. Though considered peers of development at a product / project level, testers are often viewed as part of the development engineering team at a higher level.

Resource constraints can more easily affect test teams in such a model, since there isn't a pool from which resources may be drawn when needed. Managing resources through the highs and lows of demand can be challenging. Also, the processes, tools and techniques followed tend to be local to individual test teams, with little consistency across product groups. Some element of redundancy exists, and issues can crop up when trying to integrate different products. Owing to the generally smaller size of de-centralized teams, the opportunities to specialize may also be fewer.

Having looked at both the centralized and de-centralized approaches, organizations may choose to follow either of these or even a mixed approach by centralizing some areas that can benefit from a central group while de-centralizing areas that would work best by being part of the product group. Irrespective of the approach taken, the testing team needs to be allowed to function as independently as possible, be responsible for important decisions affecting testing and receive sufficient senior management support.

Centralized Software Testing - some drawbacks

Continuing from our earlier post on centralized testing, let's look at some drawbacks of this type of test group organization.

Organizing testing into a centralized group tends to promote silo behavior and creates barriers to effective collaboration between cross-functional teams, mainly with development. Testing tends to happen “later”: development creates a piece of code and often “throws poor quality code” over the wall to testing. Rather than working closely together as partners, development and testing often end up pitted against each other. Formal boundaries between functional groups lead to longer defect detection and fixing cycles, affecting schedules.

The nature of shared resource pools can hinder the development of subject matter expertise. Since resource allocation happens on an as-needed basis, testers have little time to really specialize in a particular domain. Transient test teams also mean that building and maintaining robust regression test suites for the various projects can be a challenge.

QAI Software Testing Certifications -CMST, CSTE and CAST

While attending the recently concluded “9th Annual International Software Testing Conference in India 2009” organized by QAI, I came across a "new" manager-level certification program called Certified Manager of Software Testing (CMST). I thought I'd share it with readers of this blog while summarizing the current software testing certifications offered by the Quality Assurance Institute.

QAI presently has three certification programs for Software Testers.

1.    CAST (Certified Associate in Software Testing)
  • foundation level

  • targeted at folks who are relatively new to testing

  • costs approximately US$ 200

  • exam format: 1.5 hours examination, two parts of 45 minutes each, multiple choice questions

2.    CSTE (Certified Software Tester)
  • practitioner level

  • targeted at experienced software testers, test leads and test architects

  • costs approximately US$ 350

  • exam format: 4 hours examination, 2 subjective parts of 75 minutes each, 2 objective parts of 60 minutes each

3.    CMST (Certified Manager of Software Testing)

  • managerial level

  • targets Software Test Managers and Software Project Managers – both folks who are presently working at these levels or expected to work at the management level

  • costs approximately US$ 600

  • exam format: written documentation supporting real-world experience in Software Testing, plus a four-part subjective examination

Note that the prices are indicative and can vary. The CMST exam is presently being offered at an introductory price of US$ 450. All programs offer a PDF version of the CBoK. For more & updated information on these programs, eligibility requirements, etc., refer to the Software Certifications web site - http://softwarecertifications.org

Centralized Software Testing

A common question amongst testing professionals is the ideal way to organize test teams - should organizations have centralized or de-centralized test groups? In this and the next set of posts, let us look briefly at each of these types of test group organization, along with some of their benefits and drawbacks.

Today's post will look at "Centralized test groups" and their benefits.

Centralized test groups comprise a pool of resources shared across applications and projects. Each tester may work on one or more projects at a time. While building a centralized testing group, it is important to assemble testers with a diverse set of technical and other skills. Testers may be assigned to projects on a part-time or full-time basis depending on project requirements. As the need for testers increases, more specialized testers may be assigned to support the project. Centralized test groups have a defined test leadership hierarchy in place.

Advantages
  • Process consistency - deployment of standard testing methodology & processes helps improve quality and efficiency of testing
  • Benefits from economies of scale and centralized spending
  • Hardware and software licensing consolidation helps reduce costs
  • Centralized groups can more easily & consistently implement practices such as CMM, continuous improvement and common metrics
  • Allows better focus and specialization in test processes and tools
  • Affords flexibility in the utilization of resources – on-demand allocation to projects ensures better resource utilization
  • Better sharing & leveraging of best practices
  • Better career paths – more opportunities for testers to build a career in testing and gain expertise and skills on a wide range of applications, tools and techniques
  • Testers have more opportunities for specialization
  • Testers have better mentoring opportunities from more senior testers in the centralized organization
  • Better objectivity in testing. Since testers do not report into a development organization or to project managers, they remain insulated from outside pressures or influence. Testing can better position itself as a peer to development in the organization's reporting hierarchy

Coming up ...

I've been away attending the "9th Annual International Software Testing Conference in India (STC 2009)" being organized by the Quality Assurance Institute (QAI).

A paper on Agile Testing that I put together has been published by QAI. I shall post the contents of that paper in subsequent posts on this blog.

Meanwhile, while the last blog post talked about business strategy ("Blue Ocean Strategy"), the next post is on the subject of centralized vs. de-centralized testing. This is a common question that pops up - what are the benefits of either approach (and some of their drawbacks)? I hope to have this posted tomorrow.

Thanks for reading,

- John

Blue Ocean Strategy

I recently participated in reviewing the book “Blue Ocean Strategy” and found it to be a pretty interesting exercise. The book is about business strategy and is written by W. Chan Kim and Renée Mauborgne of the INSEAD business school. Here's a brief summary.

The book classifies the business universe as consisting of two distinct kinds of spaces - red and blue oceans. Red oceans represent all the industries in existence today – the known market space. In red oceans, industry boundaries are defined and accepted, and the competitive rules of the game are well understood. Here, companies try to outperform their rivals in order to grab a greater share of existing demand. As the space gets more and more crowded, prospects for profits and growth are reduced. Products turn into commodities, and increasing competition turns the water bloody.

Blue oceans denote all the industries “not” in existence today – the unknown market space, untainted by competition. In blue oceans, demand is created rather than fought over. There is ample opportunity for growth that is both profitable and rapid. There are two ways to create blue oceans. In a few cases, companies can give rise to completely new industries, as eBay did with online auctions. But in most cases, a blue ocean is created from within a red ocean when a company alters the boundaries of an existing industry.

Blue ocean thinking differs from traditional models, which focus on competing in the existing market space. While the term “blue oceans” may be new, the concept has always been around. Look back over the past century and ask how many of today's industries were then unknown. Many of today's fundamental industries – automobiles, aviation, petrochemicals, pharmaceuticals and many others – were not just unheard of; people then would not even have thought these industries were possibilities. If a hundred years seems long, look back just a few decades and ask the same question. You are sure to find several new industries – mobile phones, biotechnology, satellite television, internet start-ups and many more – that did not exist then. Now, look ahead and ask yourself: how many industries that are unknown today will exist a decade or two from now? If the past is any indicator of the future, the answer is obvious – we are sure to have many new industries that we are not aware of now.

Organizations have a tremendous capacity to create new industries and recreate existing ones. Factors such as rapid technological advances, enhanced industrial productivity, falling trade barriers between nations and regions, and the ready global availability of information on products and prices are contributing to the contraction of niche markets and monopolies. Prospects in many established market spaces, a.k.a. red oceans, are steadily declining. This situation has sped up the commoditization of products and services, led to price wars and reduced profit margins. With commoditization, most brands across categories tend to become more and more alike, leading consumers to increasingly base purchase decisions on price. In overcrowded market spaces, differentiation between brands becomes harder.

So, why do organizations still focus their strategies so heavily on red oceans? A possible answer lies in the roots of corporate strategy, which seems heavily influenced by military strategy. References to officers, headquarters, troops, front lines, etc. are borrowed from the military. Strategy in the military context is all about red ocean competition – fighting an opponent and taking over a battlefield of limited territory. Blue ocean strategy, however, is about doing business where there is no competition: creating new land rather than dividing existing land. A red ocean focus implies accepting the limitations of war – limited land and the requirement to beat an enemy to succeed.

Blue ocean strategy rejects a fundamental principle of traditional strategy – the trade-off between cost and value. According to conventional strategy, organizations can either create greater value for customers at a higher cost or create moderate value at a lower cost; the relationship between value and cost seems proportional, with higher value driven by higher cost and vice versa. However, organizations that have successfully followed blue ocean strategy pursue both value differentiation and lower costs together, not as a trade-off. Blue ocean strategy works when organizations adopt a total-system approach wherein all systems of the organization – the value offering, price and costs – are well aligned. Observation of companies that have created blue oceans shows that they are able to benefit without facing strong challenges for over a decade. This is due to the nature of blue ocean strategy, which creates significant economic and cognitive barriers to competition.

Both blue and red oceans have always existed and will continue to do so. When organizations understand the rationale behind both types of strategies, they will be better able to balance their efforts across both strategy types and create more blue oceans.

Coming up ...

Well, after having looked at the Waterfall development model in the previous two posts, let's look at a different topic - business strategy. I'm putting together a post on "Blue Ocean Strategy", based on the book by W. Chan Kim and Renée Mauborgne of the INSEAD business school. The concept of blue oceans is not limited to large corporations and can very well be applied by each of us in our respective areas of activity. I hope to post this by tomorrow at the latest.

Thanks for reading,

- John

Advantages / disadvantages of the Waterfall model

Continuing from the previous blog entry on the Waterfall model, this post presents some of its advantages and disadvantages.

Some advantages of the Waterfall model
  • Clearly divides the problem into distinct phases that may be performed independently
  • Simple concept
  • Natural approach to solving the problem
  • Fits well into a contractual setting where each phase is considered a milestone
Some of the drawbacks of the Waterfall model

In many projects, the strict sequencing of phases advocated by the waterfall model is not followed. The model assumes that one builds an entire system all at once, performing end-to-end testing after the design and most of the coding are completed. In reality, feedback from downstream phases is passed upstream to make refinements. For example, while implementing a design, issues may be observed which require the design to be improved; the same applies to other phases. There can be quite a few such iterations to firm up requirements and design and get to actual implementation.

Evidence of failures in practicing the waterfall model comes from one of its most frequent users, the US Department of Defense (DoD). The DoD required most of its projects to follow the waterfall model, as documented in the standard DOD-STD-2167. A report on project failure rates showed that up to 75 percent of the projects failed or were never used. Subsequent analysis recommended replacing the waterfall model with an iterative and incremental approach to development.

Some of the assumptions in the waterfall model include
  • A well-defined set of requirements is available. These are assumed to be reasonably well stated and the attempt is to freeze these early. The onus is then on making sure these requirements are well-understood and implemented
  • Any changes to defined requirements would be small enough to be able to be managed without having to make significant changes to the development plans or schedule
  • Software development and associated research & development activity can fit into a predictable schedule
  • The behavior, performance and other attributes of the various pieces of the monolithic system are predictable on integration, and the architectural plans and designs can handle any integration issues
In real-world development, it is not feasible to assume the above. Having a clear set of requirements firmed up at the outset is nearly impossible, and assuming that requirements thus defined are unlikely to change much is another fallacy. Experience shows that requirements do change, and in many cases change significantly, requiring re-work and re-design. The greater the time between gathering requirements and delivering the finished product, the greater the likelihood of changes to the requirements. While trying to integrate the various pieces of the system, even thorough analysis and planning cannot accurately predict or control the process; often, assumptions made around integration turn out to be wrong. Any upstream slippages in schedule compress the time available for later phases and, importantly, for adequate system integration testing. The model can also lead to early finalization of technological and hardware decisions which may not turn out to be the most appropriate. Real-world observation of software development highlights the fact that the “big-bang” approach of trying to deliver a monolithic solution is too risky and prone to cost and schedule overruns.

The Waterfall model

The waterfall model is generally attributed to Royce (1970). The model encourages the product development team to specify what the software is supposed to do (gather & define requirements) before implementing the system. Product development is split into multiple sequential steps (design, implement, test) with intermediate deliverables leading to a final product.

To ensure proper execution with good quality, each step has defined entry and exit criteria. The ETVX (Entry-Task-Validation-eXit) model proposed by IBM fits the waterfall approach wherein each phase may be considered as an activity structured using the model.
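The ETVX structure can be sketched as a small harness: a phase runs only if its entry criteria hold, and exits only if validation of its output passes. The phase names and criteria below are illustrative assumptions, not taken from the IBM model itself.

```python
# Minimal ETVX-style phase harness (Entry -> Task -> Validation -> eXit).
def run_phase(name, entry_ok, task, validate):
    if not entry_ok():
        raise RuntimeError(f"{name}: entry criteria not met")
    result = task()            # the Task step produces the work product
    if not validate(result):   # Validation gates the eXit
        raise RuntimeError(f"{name}: validation failed, cannot exit")
    return result

# Illustrative use for a hypothetical requirements phase.
spec = run_phase(
    "Requirements",
    entry_ok=lambda: True,                 # e.g. stakeholders identified
    task=lambda: ["login", "search"],      # e.g. gather a requirements list
    validate=lambda r: len(r) > 0,         # e.g. spec reviewed and signed off
)
print(spec)  # → ['login', 'search']
```

The validated output of one phase would then satisfy the entry criteria of the next, which is exactly how work products flow between waterfall phases.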

The waterfall model may be viewed as a divide-and-conquer approach to development. It allows for tracking project progress across phases and forces the organization to adopt a more structured approach to developing software. The model also requires documentation to be generated, which is later used to test and maintain the system. The waterfall model emphasizes careful analysis before building the system; the idea is to avoid wasted effort in building something which does not meet the customer's requirements. Hence, an attempt is made to fully specify and finalize customer requirements as early as possible. Requirements are documented in the requirements specification document. In subsequent phases, “verification” activities are performed to ensure conformance to the requirements listed in that document. A problem with this reliance on the requirements document is the possibility of incomplete or incorrect requirements being specified; adequate “validation” with the customer is also required.

At a high level, the waterfall model comprises the following phases.
  • Requirements
  • Design
  • Development / Implementation
  • Testing
  • Maintenance
In the next blog post, we shall briefly look at some of the advantages and disadvantages of this model.


Coming up ...

I'm putting together a blog entry on the waterfall model. I know it's a "traditional" model and most folks would have worked with it. Yet, I am sure it is useful to revisit it and post a fairly detailed analysis of the model, including a look at some of its advantages and disadvantages. I hope to post this by end of day today or by tomorrow at the latest.

- John

Dealing with information overload

Continuing from the previous post, help is at hand for dealing with “information overload” and the problems it presents. Various technological methods are available to deal with the problem and aid both individuals and organizations. In addition, there are some non-tech methods involving changes to mind-set and culture, requiring individuals and organizations to modify their current thinking and behaviour towards managing information.

Listed below are some tips, summarized from the HBR issue, to “reduce e-mail overload”.

As a recipient
  • To avoid constant distractions, turn off automatic notifications of incoming mails. Establish specific times during the day when you check and take action on messages
  • Do not waste time sorting messages into folders. Today's powerful inbox search features make that unnecessary
  • Do not highlight messages you intend to deal with later by marking them as “unread”. In email clients such as Microsoft Outlook, accidentally typing in the wrong keyboard shortcut will irrevocably designate every item in your inbox as “read”
  • If you will not be able to respond to an email for several days, acknowledge receipt and tell the sender when you are likely to get to it
As a sender
  • Make messages easy to digest by writing a clear subject line and starting the body with the key point
  • To eliminate the need for recipients to open very short messages, put the entire contents in the subject line, followed by “eom” (end of message)
  • Whenever possible, paste the contents of an attachment into the body of the message
  • Minimize email ping-pong by making suggestions such as “should we meet at x time?” rather than asking open-ended questions such as “when should we meet?”
  • Before you choose “reply to all”, stop and consider the email burden on each recipient
  • Send less email: an outgoing message generates, on average, roughly two responses
The above represent a few of the many suggestions and tips to manage information. There's a lot more information available online on how to manage information overload !

Information Overload

Based on the Harvard Business Review, September 2009 article on “Death by Information Overload”

The article talks about the phenomenon of information overload which most of us would be familiar with. I have attempted to summarize points from the article for the benefit of readers of this blog.

In the knowledge economy, information is considered our most valuable commodity. And these days it's available in infinite abundance, delivered automatically to our electronic devices or easily accessible. Current research suggests that the surging volume of available information, and its interruption of people's work, can adversely affect not only personal well-being but also decision making, innovation and productivity. Today, information rushes at us in seemingly infinite formats – email, text messages, Twitter tweets, Facebook alerts, voice mail, instant messaging, RSS feeds and so many other ways. We are drawn towards information that in the past did not exist, or that we did not have access to; now that it's available, we dare not ignore it.

What does this deluge of information mean for individuals?

The stress of not being able to process information as fast as it arrives – combined with the personal and social expectation that, say, you will answer every message – can deplete and demoralize you. Edward Hallowell, a psychiatrist and expert on attention deficit disorders, argues that the modern workplace induces what he calls “attention deficit trait”, with characteristics similar to those of the genetically based disorder. A study commissioned by Hewlett-Packard reported that the IQ scores of knowledge workers distracted by email and phone calls fell from their normal level by an average of 10 points – twice the decline recorded for those smoking marijuana! While some people feel overwhelmed by this overload, others seem to be stimulated by it and display what is termed “information addiction”. An AOL survey of 4,000 email users in the United States reported that 46% of those surveyed were “hooked” on email. We must also be aware of the tendency of always-available information to blur the boundaries between work and home, affecting personal lives in unexpected ways.

What does this information overload mean for companies?

An email notification or a message alert costs more than just the time spent reading and responding to the message. There is also the time required to recover from the interruption and refocus attention. A study by Microsoft researchers tracking the email habits of coworkers found that once their work was interrupted by an email notification, people took, on average, 24 minutes to return to the suspended task. Why is so much time lost if all that needs to be done is to simply read a message? The studies indicate that dealing with the message accounted for only a portion of the time off task. People used the interruption as an opportunity to read other unopened messages or to engage in unrelated activities such as surfing the web and text-messaging. Surprisingly, over half the time was spent cycling through open applications on their computers to determine what they had been doing when interrupted, and re-establishing their state of mind once they finally returned to the application they had abandoned. Distractions caused by email and other types of information also have subtler consequences: research has identified reduced creative activity on days when work is fragmented by interruptions.

While it is not easy to quantify the costs of information overload, one calculation by researchers put Intel's annual cost of reduced efficiency – in the form of time lost to handling unnecessary email and recovering from information interruptions – at nearly $1 billion. The researchers add that organizations ignore that kind of number at their peril.

In the next post, we'll look at some ways to manage this information overload.

A change in this blog

As this blog continues to grow and evolve, you will notice a few changes and tweaks in the days ahead. The idea is to make the blog more insightful and appealing to a larger audience.

One of the more significant changes to this blog is its location.

This blog is now available at the url: http://www.techmanageronline.com

Please do update any bookmarks and references to point to the new location. Of course, the blog is in transition and will remain available at both the new URL and the existing blogspot location for some time.

Also, I plan to include content on subjects that touch upon different aspects of software development, quality, management and allied areas, plus a few quick reviews too. Expect to see a few more changes – or “improvements”, if you will – as we go along. Thanks for reading! Your inputs are most welcome.

Can test automation run without human intervention?

A common assumption about test automation is that automated test suites can be executed with zero human intervention. After all, isn't that what the tool vendors claim their products can do? In theory, you should be able to move your human testers to other tasks once they have finished automating their tests.

In the real world, automation does not make human testers redundant. Almost all automated test suites require human intervention in order to remain effective. Consider two simple activities requiring skilled human involvement: analyzing the results of automated test execution, and maintaining the automated tests. One must also realize that, in practice, simply getting a complex automation suite to execute without issues is itself a difficult task.

When the underlying product being tested changes, the automation that tests it is naturally affected. Even seemingly minor changes to the product can require fixes to the automated tests. Regularly monitoring product changes and the corresponding automated tests requires skilled human testers. In the real world, it is also common to find that external factors – issues with the file system, memory, networking, product dependencies and the like – can easily disrupt the smooth execution of automated tests.

We must also remember that test automation development is very much a software development project in itself and must be treated as such. Like any software that is developed, automated tests are not bug-free. Regular testing of the automated tests and monitoring of their execution are essential so that you know whether your automated tests are doing what you expect them to do. Any change to the automation tests must follow a process akin to a comparable change in the software product, with reviews and testing to make sure fixes do not introduce additional defects.
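To make the point concrete, here is a minimal sketch of testing the automation code itself. The helper `parse_result_line` and its log format are hypothetical, invented for illustration – the idea is simply that a utility used by an automation suite deserves its own checks.

```python
# Hypothetical helper from an automation suite: parses one
# "test_name: status" line out of an automation run log.
def parse_result_line(line: str):
    """Return (test_name, STATUS) parsed from a result line."""
    name, sep, status = line.partition(":")
    if not sep or not status.strip():
        raise ValueError(f"malformed result line: {line!r}")
    return name.strip(), status.strip().upper()

# Tests for the test-automation code itself:
assert parse_result_line("login_smoke: pass") == ("login_smoke", "PASS")
assert parse_result_line("checkout_flow:FAIL") == ("checkout_flow", "FAIL")

try:
    parse_result_line("no status here")
except ValueError:
    pass  # malformed input is rejected, as expected
else:
    raise AssertionError("expected ValueError for a malformed line")
```

If the log format ever changes, these checks fail first – flagging that the automation, not the product, needs fixing.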

Exploratory testing

Exploratory testing assumes significance in the context of agile development. Testers in agile need to plan to explore the product during each iteration. The goals of exploratory testing are to unearth bugs (obviously), to look for missing functionality and to identify areas for improvement.

Exploratory testing is a simultaneous process of discovery and learning, followed by the dynamic design and execution of tests. It is useful when testers are trying to go beyond what is known, or when not much is known about the software; the information gathered through exploratory tests can help design new tests or improve existing ones. Documentation can only help so much – the tester needs to use the software to understand it best.

Agile testers ... role and requirements

Given the close association of testers with developers in agile, and the nature of incremental testing of partial work products, the tasks that testers perform may sometimes seem hazy. Should testers do the unit testing on the partially implemented, incremental bits of code? Or should testers duplicate the unit tests that developers have already run?

The role of testers in agile is neither to perform the programmers' unit tests nor to duplicate them. Agile testers perform a significant amount of manual (yes, manual) exploratory-type testing. The purpose of these tests is to reveal issues that the unit tests would not have discovered. Exploratory tests need to be as broad as possible, covering scenarios that come as close to end-to-end as possible. Since unit tests focus on a specific module or area of code, exploratory tests that exercise the interactions between modules and real user scenarios tend to surface issues that were not found earlier. Such end-to-end tests find these issues quicker.

For testers to operate successfully in an agile environment, it is important that they be familiar with the tools of the trade. Testers need to know the language used in development; be able to check out and build the source code; work with the development environment (IDEs, version control, continuous integration systems, unit test frameworks such as xUnit, etc.); be able to configure the system and its dependencies; where needed, write code or scripts to work around any as-yet-undeveloped interfaces or harnesses; add to the existing automation suites as needed; and be able to work together and communicate comfortably with programmers.
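As an illustration of working around an as-yet-undeveloped interface, here is a small sketch of a hand-written stub. Everything here – the payment gateway, the `checkout` flow, the token name – is hypothetical; the point is that a tester can write a stand-in so end-to-end tests can run before the real component exists.

```python
# Hypothetical sketch: stubbing an undeveloped payment interface so
# end-to-end tests of a checkout flow can run before the real service exists.

class PaymentGatewayStub:
    """Stand-in for the real (not yet implemented) payment gateway."""

    def __init__(self, approve: bool = True):
        self.approve = approve
        self.charges = []  # record calls so tests can inspect them later

    def charge(self, amount_cents: int, card_token: str) -> dict:
        self.charges.append((amount_cents, card_token))
        status = "approved" if self.approve else "declined"
        return {"status": status, "amount": amount_cents}


def checkout(cart_total_cents: int, gateway) -> bool:
    """Illustrative checkout flow under test."""
    result = gateway.charge(cart_total_cents, card_token="tok_test")
    return result["status"] == "approved"


# Usage in a test: drive the flow through the stub and inspect it.
gateway = PaymentGatewayStub(approve=True)
assert checkout(1999, gateway) is True
assert gateway.charges == [(1999, "tok_test")]
```

Recording the calls inside the stub lets the tester assert not just on the outcome but on how the missing interface was exercised.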

Agile testing ...

One of the challenges for testers in agile is that the definition of the software is not fixed – it is a moving target. In the traditional (non-agile) models, testers await the code freeze to perform extensive system testing. That is not really an option in the agile world: agile makes change inevitable. There may be iterations where no new features are developed and developers focus on fixing bugs. Testers need to embrace change and start testing as early as possible rather than holding off a major chunk of testing until all the requirements are firmed up and implementation is complete.

Agile testers need to make the most of the available time for testing. Unlike traditional models where testers are allotted several weeks to perform testing, in the agile world the iterations or sprints are short, and tester feedback needs to come earlier and faster. It is not enough to try to accomplish the same type of testing in a faster manner; testers in agile need to rethink their processes and how testing is performed to make this happen.

Agile testers ...

One of the characteristics of agile teams is that they test – early, often and continuously. Most agile teams perform extensive unit testing and collaborate with users on creating automated acceptance tests. Agile teams that practice test-first development tend to write automated unit tests before writing the code those tests will exercise.
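A minimal sketch of the test-first style mentioned above, in Python's xUnit-family `unittest` framework. The `is_valid_sku` function and its validation rule are hypothetical, made up for illustration: in test-first development the `IsValidSkuTest` cases would be written first, fail, and only then drive the implementation.

```python
import re
import unittest


def is_valid_sku(sku: str) -> bool:
    """Implementation written after the tests: 3 letters, a dash, 4 digits."""
    return re.fullmatch(r"[A-Z]{3}-\d{4}", sku) is not None


class IsValidSkuTest(unittest.TestCase):
    # In test-first style, these expectations came first and drove
    # the implementation of is_valid_sku above.
    def test_accepts_well_formed_sku(self):
        self.assertTrue(is_valid_sku("ABC-1234"))

    def test_rejects_malformed_sku(self):
        self.assertFalse(is_valid_sku("abc-12"))
        self.assertFalse(is_valid_sku("ABCD-1234"))
```

Run with `python -m unittest <module>`; a red-then-green cycle like this is exactly the kind of automated unit suite agile teams lean on every iteration.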

Quality in agile is the responsibility of the entire team. Agile developers hold themselves accountable for the quality of the code, and therefore view testing as a core part of software development, not a separate activity to be performed after implementation is complete. However, accepting testers as a valuable part of the agile team has not always been smooth. Early on, some agile practitioners suggested that early unit testing and automated customer-driven acceptance testing reduced the need for independent testers. Things have changed over time, and testers today are viewed as a definite value-addition to the agile team: experience shows that professional, skilled testers can detect useful defects that do not show up during the developer tests or the automated tests.

Dilbert !

This would not exactly qualify as a "Software and Quality" subject. Yet, in the spirit of adding an element of fun to this blog, I have included the Dilbert widget on the sidebar. Take a look and catch up on your daily dose of Dilbert right here!

Agile ... continued

Continuing on the subject of agile: agile teams accept change as inevitable and adapt their processes to manage it. Short iterations imply that stakeholders can see steady progress and provide frequent feedback. The emphasis on working software means that stakeholders can see and use the working prototype rather than merely look at metrics and documentation describing the team's status. Continuous integration means that if one part of the system isn't playing nicely with the others, the team will find out almost immediately. Merciless refactoring, where programmers improve the code internally without changing its external behaviour, prevents code from becoming fragile over time. Extensive automated unit tests ensure that fixing one bug won't introduce further regressions.