Work Breakdown Structure (WBS)

The Work Breakdown Structure (WBS) is an estimation tool in the project manager's arsenal. Using this method, the project's scope is broken down into manageable tasks. In other words, a WBS is a deliverable-oriented hierarchy of the work that must be performed to accomplish the project's objectives and create its deliverables.

The principle behind the WBS is simple - a complex task may be subdivided into smaller tasks until you reach a level where further subdivision is no longer practical. At this level of subdivision, estimating the time and cost to accomplish the task is easier than at higher levels.

The WBS decomposes the project work into manageable pieces (work packages) that can be assigned to individuals. This helps define the responsibilities for the team members and is the starting point for building the schedule. Decomposition is a technique for subdividing the project deliverables into smaller, manageable tasks called work packages. The WBS is a hierarchical structure with work packages at the lowest level of each branch. Based on their complexity, different deliverables can have different levels of decomposition. Each component in the WBS hierarchy, including work packages, is assigned a unique identifier called a code of account identifier. These identifiers can then be used in estimating costs, scheduling, and assigning resources.

The WBS covers the entire scope of the project. If a task is not included in the WBS, it will not be done as part of the project. The WBS is a good way to portray the scope of a project. A question that crops up while creating a WBS is: when do you stop the decomposition? A useful guideline is to stop when you reach a level where you can estimate the time and cost of the work at the desired level of accuracy, or when the work takes no more than the smallest unit of time you intend to schedule.

The WBS links the entire project and portrays its scope graphically. It enables resource assignments and estimates of time and cost to be prepared, and it provides inputs to schedule and budget planning.
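To make the idea concrete, here is a minimal sketch (in Python) of a WBS as a hierarchy with code-of-account identifiers, where estimates are made at the work-package level and rolled up to higher levels. The identifiers, tasks and effort numbers below are hypothetical examples.

from dataclasses import dataclass, field
from typing import List

@dataclass
class WBSNode:
    code: str                      # code of account identifier, e.g. "1.2.1"
    name: str
    effort_hours: float = 0.0      # estimated only at the work-package (leaf) level
    children: List["WBSNode"] = field(default_factory=list)

    def rollup_effort(self) -> float:
        # Leaf nodes are work packages; higher levels sum up their children.
        if not self.children:
            return self.effort_hours
        return sum(child.rollup_effort() for child in self.children)

# Hypothetical decomposition of one project into deliverables and work packages
project = WBSNode("1", "Online Store Release", children=[
    WBSNode("1.1", "Shopping Cart", children=[
        WBSNode("1.1.1", "Design cart module", effort_hours=24),
        WBSNode("1.1.2", "Implement cart module", effort_hours=60),
        WBSNode("1.1.3", "Test cart module", effort_hours=40),
    ]),
    WBSNode("1.2", "Checkout", children=[
        WBSNode("1.2.1", "Design checkout flow", effort_hours=16),
        WBSNode("1.2.2", "Implement checkout flow", effort_hours=48),
    ]),
])

print(project.rollup_effort())     # 188.0 hours, summed from the work packages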

Function Point Analysis overview



One of the more popular software estimation methods is the Function Point Analysis (FPA) method developed by Allan Albrecht at IBM. Here's a high-level overview. FPA helps measure the size of a software application based on the functions expected to be delivered. Measuring the size of the software helps us derive other important metrics such as schedule, effort, cost, quality metrics, productivity, etc. The output of FPA is a count expressed in function points (FP). The size measured via FPA is independent of the technology used to develop the software.

A function point is a unit of business functionality delivered through the software being measured. If an application has 100 function points, it denotes that an equivalent number of business functions (100 in this case) are being delivered to the user. The FPA method relies on five operational attributes for sizing any software application – External Inputs (EI), External Outputs (EO), External Queries (EQ), Internal Logical Files (ILF) and External Interface Files (EIF). The FPA method defines an additional set of fourteen General System Characteristics (GSC), which in turn define the complexity of the application being measured.

The International Function Point Users Group (IFPUG) was formed to promote and encourage the effective management of application software development and maintenance activities through the use of Function Point Analysis and other software measurement techniques. IFPUG maintains a standard Function Point Counting Practices Manual (CPM), established FPs as an International Standards Organization (ISO) standard, and offers professional FP certifications.

Steps in performing FPA include the following (a small worked example of the arithmetic appears after the list) -
  • Defining the boundary of the application - the boundary encloses the application's internal files and logic, while the users and external systems that interact with the application remain outside the boundary
  • Counting the data functions (ILF, EIF) - ILF and EIF are further classified by complexity as Low, Average and High, and each level maps to a constant function point count (for ILF the FP counts are 7, 10 and 15, and for EIF they are 5, 7 and 10 for Low, Average and High respectively). Refer to the IFPUG tables for complexity levels and to assign point values to each countable function
  • Counting the transaction functions (EI, EO, EQ) - as with the data functions, each transaction function is classified as Low, Average or High complexity and maps to a constant function point count (for EI the FP counts are 3, 4 and 6; for EO they are 4, 5 and 7; and for EQ they are 3, 4 and 6 for Low, Average and High respectively). Refer to the IFPUG tables for complexity levels and to assign point values to each countable function
  • The FP count arrived at from the above two activities (counting data and transaction functions) is called the Unadjusted Function Point (UFP) count
  • Compute the Value Adjustment Factor (VAF) and apply to the UFP to get the Adjusted Function Points (AFP)
    • The GSC, mentioned earlier, represent the technical attributes of the software being measured and may be specified in terms of their degree of impact on a scale of 0 to 5 (0 implying no impact and 5 implying maximum impact)
    • The Total Degree of Influence (TDI) is the sum of the impact values for the 14 GSCs. If the TDI = 0 (meaning that all 14 GSCs have no impact), then the VAF is 0.65; if the TDI = 70 (meaning that all 14 GSCs have maximum impact), then the VAF is 1.35. There is a variation of 0.70 between the two extreme VAF values. [VAF = (TDI * 0.01) + 0.65]
  • VAF is used to re-calibrate the UFP and compute the final AFP count [AFP = UFP * VAF]
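As a small worked example of the arithmetic above, the sketch below (in Python) applies the standard IFPUG weights to a set of counted functions, computes the UFP, derives the VAF from fourteen GSC ratings and produces the AFP. The counts and GSC ratings fed in are hypothetical.

# Function point weights per complexity level (Low, Average, High)
WEIGHTS = {
    "EI":  {"Low": 3, "Average": 4, "High": 6},
    "EO":  {"Low": 4, "Average": 5, "High": 7},
    "EQ":  {"Low": 3, "Average": 4, "High": 6},
    "ILF": {"Low": 7, "Average": 10, "High": 15},
    "EIF": {"Low": 5, "Average": 7, "High": 10},
}

def unadjusted_fp(counts):
    # counts: {attribute: {complexity level: number of functions}}
    return sum(
        WEIGHTS[attr][level] * n
        for attr, by_level in counts.items()
        for level, n in by_level.items()
    )

def value_adjustment_factor(gsc_ratings):
    # gsc_ratings: fourteen degree-of-influence values, each 0..5
    tdi = sum(gsc_ratings)          # Total Degree of Influence, 0..70
    return (tdi * 0.01) + 0.65      # VAF ranges from 0.65 to 1.35

# Hypothetical application: counted functions and GSC ratings for illustration
counts = {
    "EI":  {"Low": 5, "Average": 3},
    "EO":  {"Average": 4},
    "EQ":  {"Low": 2},
    "ILF": {"Average": 3},
    "EIF": {"Low": 1},
}
gsc = [3, 2, 4, 1, 0, 3, 2, 5, 1, 2, 3, 0, 1, 2]

ufp = unadjusted_fp(counts)                 # 88
vaf = value_adjustment_factor(gsc)          # 0.94
afp = ufp * vaf
print(ufp, round(vaf, 2), round(afp, 2))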

Test Architect




Testing as a career offers multiple paths for testers to traverse in pursuit of their career goals and aspirations. Testers generally take the regular path from junior engineer to fairly senior quality engineering positions, such as test leadership roles, before coming across a fork in the road: one path leading to Management and the other pointing to a Technical growth path. While the Management career path is fairly well known and Managers seem a dime a dozen, the Technical path tends to seem a little unclear, especially if you have few Senior Technical Leaders in the Testing space to look up to in your organization. The purpose of this post is to hopefully shed some light on the Test Architect role, which is part of the Technical growth path in Testing.

Without much ado, let's dive right in.

The Test Architect (TA) role is a senior position in the organization and is treated on par with equivalent Management positions in terms of rewards, recognition, visibility and influence. However, one basic factor that distinguishes a TA from a Manager is the absence of direct responsibility for managing people. While Management tends to have people management as a core feature of the job, the TA does not directly manage people. However, this in no way lets the TA off the hook, so to speak, from influencing, mentoring, coaching and providing direction to members of the Testing Organization - all very important responsibilities of the TA.

The Test Architect -
  • provides Technical Leadership and Strategic Direction to the Testing Organization (TO)
  • is responsible for Test Strategy formulation
  • helps Formulate & Develop effective Test Architecture per organizational needs
  • is Technically responsible for all the Testing performed by the TO
  • is the foremost Technical Authority and is responsible for the overall Quality of deliverables across all parameters, both functional and non-functional including performance, security, usability, etc.
  • is expected to pro-actively analyze current processes and practices and suggest/ drive improvements. Also, defines processes as needed
  • has wide-reaching scope, impact and influence extending beyond the confines of the TO and spans across the entire product organization
  • is the counterpart to the development architect
  • is involved in driving organization-wide Quality Process initiatives and their implementation to ensure Quality of deliverables
  • maintains a “big and complete” picture view of the product, its dependencies, organizational goals, technology arena, etc. and helps guide & direct the functioning of the TO appropriately
  • influences the product organization's future direction, strategy and planning
  • collaborates effectively & on an on-going basis with all constituents involved in product development & release activity including development, testing, technical publications, marketing, program management and other entities to ensure execution & deliverables per plan
  • is involved in customer engagements and provides customer-facing organizations with the necessary technical product support for presentations, demos, POCs, etc. Also receives and analyzes existing customer feedback to identify gaps, and works with deployment / sustaining organizations as needed. Customer engagement activity also spans alpha / beta trial opportunities; the TA acts as a liaison with customers and partners while ensuring the Test strategy is aligned appropriately
  • helps with Test plan development
  • is responsible for design & development of the TO's Test Automation framework / harness and any in-house tools required. Where tools do not fully meet requirements of the TO, the TA writes code / develops components that can extend available tools or even design & develop tools as needed
  • is involved in understanding Business requirements and works with the development architect to translate requirements into solution architecture designs. Reviews requirements and seeks clarity as required, participates in product design reviews and works with the development architect and development team to make any design improvements and refinement as needed. Also helps incorporate Testability requirements into design
  • analyzes competitive products and technologies and makes appropriate suggestions (may use demos, POCs) to influence product / technology direction
  • has overall product knowledge and is able to guide both junior and senior team members
  • influences Technical direction and use of technologies after making necessary evaluations
  • involved in hiring activities for the TO and mentoring of TO team members
  • pro-actively seeks to make continuous improvements to Test coverage, execution and automation
  • is results oriented and has a high degree of accountability, commitment and responsibility. The expectation is that involving a TA in a project is a guarantee of obtaining positive outcomes
  • participates in test planning for all products handled by the TO and owns the test artifacts such as test specs, code, etc.
  • Growth upwards from the TA level is towards a more senior role with a wider scope of activity and influence across the organization. Needless to say, there is considerable enhancement in responsibilities and charter as progress is made along the growth path
Some of the attributes expected of a Test Architect -
  • Extensive Technical skills covering Product, Technologies and Competitive knowledge. Sound knowledge of the domain / areas being handled is essential. It's not sufficient to be a specialist in any one area or technology; the role requires a wide and fairly deep understanding of a gamut of technologies and tools
  • Knowledge of current industry wide Quality & Test processes and practices, Tools and techniques
  • Ability to work with teams. This point cannot be emphasized enough since at this level, the last thing that would be acceptable is silo behavior or merely trying to be an individual star performer. Being able to get the team to perform at an outstanding level is absolutely essential here. The ability to influence despite not having direct reporting relationships is key. In this position, a high EQ is as much a necessity as a high IQ. The ability to collaborate and co-operate is important
  • Excellent communication skills – within and outside of the TO, across teams, with customers, both horizontally and vertically – are important. Effective negotiation skills are very important too
  • Another facet that is extremely important is an excellent working relationship with the Manager. No, I don't say this because I'm viewed as being on the Management side! The fact is that being a successful TA requires working in tandem and in close co-operation with Management - keeping Management abreast of developments, seeking and providing inputs and feedback, regular reporting, etc. This attribute cannot be stressed enough
  • Ability to focus and prioritize is important. Understanding the distinction between the urgent and the important and effectively prioritizing tasks is key
  • Needs to focus on the explicit & implicit customer / user needs
  • Self-management is a key attribute expected of a TA. Being able to work without the need for follow-up or “too much” management is important. The TA should be self-motivated and a self-starter. No, this does not absolve the Manager of the responsibility of managing the TA as needed, but a TA should require very little following up to get things done. The expectation is that when a TA is assigned to a product, project or specific area, positive and agreed-upon results are almost always guaranteed
  • Ability to motivate self and others is important. Also, vital is being able to set a good example for the other members of the TO to follow
  • Ability to set goals is also key. In many instances, the TA will need to define and set goals including stretch-goals as appropriate
  • Patience and a touch of humility are valuable, especially in all dealings with team members. This is especially true when mentoring or guiding other team members: the ability to articulate in ways that are understood by the listener at their level is necessary, as are good listening skills. The humility to acknowledge the need for continuous learning, and to undertake a program of learning to constantly update skills and keep abreast of current developments in the industry, is vital
  • Ability to strategize and look ahead and at the big picture
  • A great deal of maturity, accountability, a high degree of integrity, the highest levels of pro-active behavior, the ability to take initiative and professional behavior are naturally expected of a TA
  • Sound Project Management abilities are important
  • Software Analysis & Design knowledge/experience is needed, along with a solid background in Software Quality & Testing. Must have hands-on experience performing both functional and non-functional testing and be able to review requirements, design and even code as needed
I hope the above is useful in gaining a general understanding of the Test Architect role and some of the expectations surrounding this position. The above list is in no way complete, nor a full representation of the responsibilities / requirements of the Test Architect role. Each organization, and even groups within the larger organization, will have its own expectations that form part of the Test Architect's charter. However, most or all of the elements listed above would be present in one form or another.

Test Closure

If you think your testing team has completed its test activities for the release, check if the test closure tasks have been completed too. These include -

Making sure that testing is indeed complete – verify the testing that has been performed against your test plan; check items such as whether the areas planned to be tested have all been tested, coverage is as planned, tests have been executed and nothing is skipped (unless known & agreed upon), and defects have been addressed (fixed, evaluated & deferred, or documented as a known issue), etc.

Conduct retrospective meetings, discuss lessons learnt from the testing performed and document the findings to enable improvements in future test campaigns. Evaluate how the estimates fared against the actual time taken and effort expended. Identify reasons for any deviation and possible steps to handle this in future test efforts. Look at defect trends and issues such as finding defects late in the cycle, areas for improvement in test processes, better tools usage and so on. Identify any unplanned risks that affected testing and take steps to account for them in future campaigns.

Also, make sure to archive the test artifacts produced during the test campaign in a configuration management system. The system should be able to link the artifacts to the version of the system that was tested. Artifacts would include the test reports, test plan, test data, results of testing each build, log files, test cases and other work products.

"Those who don't know history are destined to repeat it" - Edmund Burke

IEEE 829 Test Plan



A popular template for Test Plan preparation is the format specified by the IEEE 829 standard for Software Test Documentation.

Before we look at the contents of the template, we should bear in mind that templates are broad guidelines; they should not lead users of the template to stop thinking and focus on just filling in the blanks in the template document. While using the template, one should understand the organization's requirements and evaluate whether the template fits your specific requirements or needs modification. Sticking to the stock template may result in information that needs to be captured being left out.

With that short note, let's look at the template itself. The IEEE 829 Test plan template includes
the following sections.
  • Test plan identifier : A unique name by which the test plan may be identified and may include version information
  • Introduction : Summary of the test plan, including type of testing, level of testing (master test plan, component test plan, unit test plan ...), any references to other documents, scope of testing and so on
  • Test items : The artifacts that will be tested
  • Features to be tested : The features or items of the specification that will be tested
  • Features not to be tested : The features or items part of the specification that will not be tested
  • Approach : Addresses “how” the testing will be performed
  • Item pass/fail criteria : This could be viewed as the criteria for completion of testing per this plan.
  • Suspension criteria and resumption requirements : List the criteria for pausing or resuming testing
  • Test deliverables : The artifacts created by the testing team that will be delivered as per this plan. Examples include - test cases, test design specifications, output from tools, test reports, etc.
  • Testing tasks : The testing tasks involved, their dependencies if any, time they will take and resource requirements
  • Environmental needs : List needs such as hardware, software and other environmental requirements for testing
  • Responsibilities : List the people responsible for the various parts of the plan
  • Staffing and training needs : The people & skill sets needed to carry out the test activities
  • Schedule : List the schedule dates when testing will take place. A safe bet is to tie the schedule to the development schedule in a relative manner without listing hard dates since slippages upstream in development will mean that testing slips correspondingly. Hard dates would result in any development slippages causing compression of testing time.
  • Risks and contingencies : Identify the risks, likelihood and impact as well as possible mitigation steps
  • Approvals : Sign-off by the stakeholders denoting agreement
For a more detailed look at the IEEE 829 Test plan, view this comprehensive article - http://www.techmanageronline.com/2010/02/ieee-std-829-software-test-plan-ieee.html 
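As a simple illustration, the section list above can double as a completeness checklist for a draft plan. The sketch below (in Python) is a minimal, hypothetical example and not part of the IEEE standard itself.

# Section names follow the IEEE 829 list above; the draft content is a made-up example.
IEEE_829_SECTIONS = [
    "Test plan identifier", "Introduction", "Test items",
    "Features to be tested", "Features not to be tested", "Approach",
    "Item pass/fail criteria", "Suspension criteria and resumption requirements",
    "Test deliverables", "Testing tasks", "Environmental needs",
    "Responsibilities", "Staffing and training needs", "Schedule",
    "Risks and contingencies", "Approvals",
]

def missing_sections(draft):
    # Return the template sections not yet filled in the draft plan
    return [section for section in IEEE_829_SECTIONS if not draft.get(section)]

draft_plan = {"Test plan identifier": "ProjX-MTP-v0.1", "Introduction": "Master test plan for ProjX 2.0"}
print(missing_sections(draft_plan))   # everything still to be written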

    Test Strategy

    What is that supposed to be? Test Policy, Test Plan and now Test Strategy? Why all these terms? Why not just pick up the build and try to find as many bugs as we possibly can within the time we've been allotted? These are questions you'll hear if you talk to enough folks in the industry. Testing, they say, is a simple task that anyone can do with little need for process or practice. Sure, as long as I'm not your customer. With that interesting insight, let's get back to taking a quick look at our topic for today – Test Strategy!

    The test strategy addresses the “how” of testing in an organization at a high level and is not specific to a particular project or product. It covers: the methods of testing; quality and project risks, their relation to test goals, and how risks will be managed; the types of testing across the different phases and the groups responsible for each type; common entry and exit criteria for the different test types; a high-level statement of the activities performed in each type of testing; alignment with the test policy; common test requirements for each type of testing (such as what to look for while testing, dependent on the test approach chosen); the general approach to testing; any standards to be followed (mandatory vs recommended); automation types; the test environment for each test type; methods for test control and reporting; metrics and measurement details and the frequency of reporting them; defect tracking and management mechanisms; the test configuration management process; and cross-functional interactions and sharing of deliverables across functional groups (such as Testing and Development). You could also call it a road map or blueprint that seeks to provide direction to your test efforts and to clarify how Testing will be performed.

    When working with products that fit into a suite and share common characteristics, or with projects of a similar nature, such a strategy document helps reduce the need for project-specific documentation. Although the Test Strategy is meant to be generic and span projects, it can be further tailored to meet the special requirements of individual projects / products.

    Developing a Test Strategy involves asking a lot of questions and communicating with various stakeholders as you piece together the elements of your specific plan. One of the positive outcomes of this exercise is gaining a better understanding of stakeholder expectations from Testing and helping avoid expectation mismatch.

    Quoting from published material on the subject, here are some of the types of Test Strategy – analytical strategies, model-based strategies, methodical strategies, process- or standards-compliant strategies, dynamic or heuristic strategies, consultative strategies and regression test strategies. In real-world use, rather than sticking to any one type of strategy, strategies are generally combined to suit the requirements of the organization.

    An obvious attribute of a Test strategy is that it's not etched in stone, meaning it can and most likely will undergo changes. You start developing your strategy with the best information available at a given point in time. Of course you could wait until you have all the information you need, but by then it might be too late to be planning. Start with what you have and try to get at least the high-level pieces together. You can add the details as you move ahead and gain more information.

    Testing based on analysis of Quality Risks

    is an approach to testing wherein the identified risks, along with the level associated with each risk, are used to plan testing. This blog post gives an overview of testing based on analysis of risks and their levels, plus a look at an informal method of performing the analysis.

    To begin, we look at what constitutes a risk. The definition states that risk is the possibility of a negative or undesirable event or outcome. From a testing perspective, we are concerned with two categories of risks. The first category is the Quality risk that affects the product, such as potential defects that cause it to crash, lose data, etc. The second category is the Project risk that relates to management of the project and includes items such as inadequate resourcing, insufficient time to test, late-binding features, etc.

    Our focus here is on the Quality risks. Risks, once identified, need to be classified and ordered according to their risk level. A risk level signifies the importance of a risk as defined by its likelihood of occurrence and business impact. Risk level can be expressed as high, medium or low, or in terms of a number. Risk levels help in determining the extent of testing to be performed against the particular risk. You would naturally want to focus the greater part of your test efforts on the areas that have the higher levels of risk. As testing progresses, risks are re-assessed and reports apprise stakeholders of the residual risk.

    Identification and analysis of risks can happen at each phase – requirements, design, development. It may also be viewed as a form of review to determine what the product might do that it should not be doing. Informal techniques of risk analysis may be used for most projects that do not require heavyweight formal assessment techniques. These require a much smaller commitment of time and effort and also need little documentation. Here, stakeholder inputs based on their knowledge of requirements and experience, any historical information, and checklists of risks are used to identify and classify risks. Since inputs from stakeholders are important, getting the right folks to participate is key so that risks are rightly identified and risk levels correctly assessed.
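    As a minimal sketch of the informal approach, each identified risk can be rated for likelihood and business impact, and the product of the two used as the risk level to order the test effort. The risk items and ratings below (in Python) are made up purely for illustration.

# Informal quality risk analysis: likelihood and impact rated 1 (low) to 5 (high);
# their product is used as the risk level to decide where to focus test effort.
risks = [
    # (risk item, likelihood, business impact) -- example values only
    ("Data loss during bulk import",       3, 5),
    ("Report totals rounded incorrectly",  4, 3),
    ("Slow page load on the dashboard",    2, 2),
]

prioritized = sorted(
    ((item, likelihood * impact) for item, likelihood, impact in risks),
    key=lambda pair: pair[1],
    reverse=True,
)

for item, level in prioritized:
    # Higher risk level -> a greater share of the test effort
    print(f"{level:>2}  {item}")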

    A Thousand Tests equals Zero Coverage

    That title was indeed intended to draw your attention!

    However, the title is not far from the truth. We often hear testing teams claim test coverage numbers close to 100 per cent. The same numbers may be looked at as equaling zero coverage. It all depends on what you base your measurement of test coverage on. This is something most folks in testing realize.

    If test coverage is measured as a percentage of all the tests that could be run for a regular real world enterprise application, the coverage number would be near or equal to zero. It does not matter if you have executed a thousand or more tests for the application. This is due to the fact that the number of possible tests you could execute is almost infinite.

    Since a true hundred percent of testing of the application is neither feasible nor possible, the testing team has to make intelligent choices of subsets of tests to be executed. This subset would normally target the areas that are at higher risk levels (likelihood and impact) and represent most value for the stakeholders.

    Quality Risk Analysis: FMEA



    Continuing from the earlier post on Quality Risk Analysis, here's an approach known as FMEA (Failure Modes and Effects Analysis). FMEA is a proactive approach to defect prevention. FMEA involves analyzing failure modes, potential or actual, rating and ranking the risk to the software and taking appropriate actions to mitigate the risk. FMEA is used to improve the quality of the work products during the development life cycle and help reduce defects.

    Failure Modes are the ways or modes in which failures occur. Failures are potential or actual errors or defects. Effect Analysis is studying the consequences of these failures. Failures are prioritized according to how serious their consequences are, how frequently they occur and how easily they can be detected. This technique helps product teams anticipate product failure modes and assess their associated risks. Prioritized by potential risk, the riskiest failure modes can then be targeted to design them out of the software or at least mitigate their effects. Failure modes and effects analysis also documents current knowledge and actions about the risks of failures, for use in continuous improvement. Potential failure modes can be identified from many different sources. Some of them include – Brainstorming, Bug and triage data, Defect taxonomy, Root cause analysis, Security vulnerabilities and threat models, Customer feedback, Sustaining engineering fixes, Support issues and fixes and Static analysis tools.

    Software FMEA ROI is calculated in terms of a cost avoidance factor – the amount of cost avoided by identifying issues early in the life cycle. This is calculated by multiplying the number of issues found by the Software cost value of addressing these issues during a specific phase. The main purpose of doing a Software FMEA is to catch Software defects in the associated development phases: catching Requirements defects in Requirements phase, Design defects in Design phase, etc.

    Some benefits of Software FMEA
    • More robust and reliable software; better quality of software
    • Focus on defect prevention by identifying and eliminating defects in the software design stage helps to drive quality upstream
    • Reduced cost of Testing when measured in terms of cost of poor quality. Proactive identification and elimination of software defects saves time and money. If a defect cannot occur, there will be no need to fix it
    • Enhanced productivity by way of developing higher quality software in lesser time. Prioritization of potential failures based on risk helps support the most effective allocation of people and resources to prevent them
    • Since the technique requires detailed analysis of expected failures, it results in a complete view of potential issues leading to an informed and clearer understanding of risks in the system. Engineering knowledge is persisted for use in future software development projects and iterations. This helps an organization avoid relearning what is already known
    • Helps guide design and development decisions
    • Helps guide testing to focus on areas where more testing is needed and test design requirements
    Some watch areas affecting FMEA
    • The potential time commitment required can discourage participation.
    • Focus area documentation does not exist prior to the FMEA session and needs to be created, adding to the time requirements
    • Generally, the more knowledgeable and experienced the session participants are, the better the FMEA results. The risk is that key individuals are often busy and therefore unable or unwilling to participate and commit their time for the process
    High level summary of the Software FMEA process (a small worked sketch of the RPN calculation follows the list)
    • After the potential failure modes are identified, they are further analyzed, by potential causes and potential effects of the failure mode (Causes and Effects Analysis)
    • For each failure mode, a Risk Priority number (RPN) is assigned based on:
      • Occurrence Rating, Range 1-10; the higher the occurrence probability, the higher the rating
      • Severity Rating, Range 1-10; the higher the severity associated with the potential failure mode, the higher the rating
      • Detectability Rating, Range 1-10; the lower the detectability, the higher the rating
    • Another method is to use a rating scale of High, Medium and Low for Occurrence, Severity and Detectability Ratings
      • High: 9
      • Medium: 6
      • Low: 3
    • RPN = Occurrence * Severity * Detection; (Maximum = 1000, Minimum = 1)
    • For all potential failures identified with an RPN score of 150 or greater, the FMEA team will propose recommended actions to be completed within the phase the failure was found
    • A resulting RPN score must be recomputed after each recommended action to show that the risk has been significantly mitigated
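    Here is a minimal, hypothetical sketch (in Python) of the RPN calculation and the 150 threshold described above; the failure modes and ratings are made-up examples.

ACTION_THRESHOLD = 150    # per the process above: RPN >= 150 needs recommended actions

failure_modes = [
    # (failure mode, occurrence 1-10, severity 1-10, detectability rating 1-10)
    ("Config file corrupted on upgrade",           4, 9, 6),
    ("Session token not invalidated on logout",    3, 8, 4),
    ("Tooltip text truncated on small screens",    6, 2, 2),
]

for name, occurrence, severity, detection in failure_modes:
    rpn = occurrence * severity * detection     # ranges from 1 to 1000
    needs_action = rpn >= ACTION_THRESHOLD
    print(f"RPN {rpn:>4}  action needed: {needs_action}  {name}")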

    Quality Risk Analysis: Cost of Exposure

    There are different techniques to perform Quality Risk Analysis. One of these is the Cost of Exposure technique.

    The cost of exposure concept is borrowed from the financial world wherein the cost calculated is equal to the likelihood of risk occurring multiplied by the average cost of each occurrence of the risk. In a financial scenario, given a large sample of risks for a long time frame, the expectation is that the total amount lost tends towards the total costs calculated for all the risks.

    Cost of exposure quality risk analysis focuses on identifying the expected losses associated with the different risks and on determining how much should be spent to reduce those risks. The cost of exposure technique allows the project management team to make economic decisions about testing.

    For each quality risk identified, the cost of testing as well as the cost of not testing i.e. the cost involved in taking the risk should be estimated. If the cost of testing is less than the cost of not testing then we expect testing to save us money in relation to that specific risk. If the cost of testing were estimated to be higher than the cost of taking the risk (not testing), testing would not be the right thing to do from a monetary perspective.

    The above estimates, when expressed in terms of money, tend to make business sense. However, the ability to effectively use this technique depends on being able to make reasonably accurate predictions of the likelihood of risk occurrence and the cost. This requires sufficient data to be able to make any probable estimates. Also, the technique focuses primarily on the monetary aspect when deciding whether to test something and, if so, how much to test. In many cases, the impact may not be easily quantifiable in monetary terms - examples include loss of further business, tarnishing of the organization's brand or image, and loss of trust. This technique is useful in the financial world where, given sufficient data and tools, one could attempt to make reasonable predictions. It is normally not recommended for testing of critical software applications.
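    As a minimal, hypothetical sketch of the comparison described above, the expected loss (likelihood multiplied by the average cost per occurrence) is weighed against the estimated cost of testing; the probabilities and costs below (in Python) are made up purely for illustration.

def cost_of_exposure(likelihood, avg_cost_per_occurrence):
    # Expected loss from taking the risk (i.e. not testing for it)
    return likelihood * avg_cost_per_occurrence

# Hypothetical risk: 20% chance of a billing defect costing about 50,000 to handle
exposure = cost_of_exposure(likelihood=0.20, avg_cost_per_occurrence=50_000)
cost_of_testing = 6_000    # estimated cost of testing this risk area

if cost_of_testing < exposure:
    print(f"Test it: testing ({cost_of_testing}) costs less than the exposure ({exposure})")
else:
    print(f"From a purely monetary view, testing ({cost_of_testing}) exceeds the exposure ({exposure})")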

    Testing in an Agile world

    There are a few areas to watch out for while pursuing an Incremental or Iterative model. Here are some of them, drawn from experience using a specific model – Scrum.

    Incremental or Iterative models of development, such as agile, have a fairly common theme – delivering an integrated, working system earlier in the life-cycle than would be possible with a sequential model. The catch is that only a part of the functionality and features is available in the delivered builds, and functions are added incrementally in chunks or increments. While having integrated and working systems available for testing early on does seem like a good idea, there are some areas to look out for.

    The load on testing tends to increase after the first chunk or increment. The testing team is usually called upon to perform dual roles - helping with on-going incremental testing (pair/buddy testing along with development counterparts as part of an agile team) while also ensuring that the entire integrated system is thoroughly tested. After the first increment, the testing team has to ensure that a full regression test cycle is executed to test all features and functionality delivered in the previous increments. These regression tests (on the previous increment's deliverable) are executed by the team while also working in parallel to help with the on-going incremental test efforts for new features being developed in the current increment.

    Scope for regressions increases. Typically in incremental development projects, the important and usually complex features and functionality are addressed in the initial increments. It is important to ensure that this priority functionality is not broken in subsequent increments. Given the nature of incremental development efforts, invasive changes to the code base, often with wider impact, cannot be ruled out and should be expected. In such a scenario, each new increment introduces a fair amount of changed and new code, which increases the risk of regression.

    Overlap of tasks across increments and bug handling – as discussed in the earlier points, while the testing team is busy testing the last increment, development and others are working on the current and subsequent increments. There is an overlap of activities across increments. One of the challenges here is when testing finds many defects that need to be addressed by development. Given that development has already begun work on the current increment based on their plans and the information available at its start, such new bug-fix activity tends to get pushed out to subsequent increments (unless the defect is of a stopper nature). Also, work can quickly pile up on development's plate if testing finds many bugs to be addressed. This would necessitate re-planning and, in some instances, even cutting back on some features to accommodate bug fixes. For the testing team, bugs that are not show-stoppers but are still important may get addressed much later in the cycle. The amount of work to be done by the testing team (including verifying the fixes, checking for regressions, etc.) can easily build up nearing the end of the release cycle and requires active monitoring, planning and control.

    Of course, the above are not barriers to adopting an incremental model and can be handled and managed while working closely with all constituents involved in the project or product development and delivery.

    QA is not Testing (QA vs QC)

    QA, QC, Testing – terms often used interchangeably and generally meant to imply "Testing".

    Let's get our facts straight - QA is not Testing; QC is not only about Testing; Testing is QC.
    More explanation follows.

    Quality Control (QC) - is oriented towards detecting defects and correction of these defects. QC works on the product rather than the process of producing the product. QC involves a set of tasks carried out to evaluate the product that has been developed. QC is normally the responsibility of the testing team and is considered to be a line function. Although testing is a QC activity, it is not the only type of QC activity. QC includes any activity that examines products to determine if they meet their requirements. Examples of QC activities apart from testing – inspections, reviews, walk-throughs of work products like requirements, designs, code and documentation.

    Quality Assurance (QA) - is oriented towards defect prevention and focuses on the process by which the product or application is built. QA involves a set of tasks to ensure that the development process is adequate to produce a system that meets its requirements. QA activities include reviewing design activities, setting standards to be followed in coding and such other process requirements aimed at ensuring that a quality product is built. QA ensures that the process is well defined and looks at methodology and standards development. QA is performed through the life cycle of the product and applies to all involved in developing the product. QA is normally considered to be a staff function. QA looks at items such as identifying areas for improvements in current methods and processes being followed, making processes effective, ensuring consistency in the way these are followed and so on. While QC evaluates the product, QA evaluates the activities involved in creating the product.

    Errors - Human vs Automation

    Test automation is only as good as the human testers who created it. Test automation can help minimize the chances of human error in situations that require human testers to perform repetitive and mundane activities. However, any errors that creep into an automation suite tend to be magnified far beyond what a human tester could achieve. Errors in automation suites are easy to miss and tend to manifest themselves every time the suite is executed. There have been many instances where errors in automation have gone unnoticed for a long while and things have seemed OK when there were issues lurking around. A few examples of such errors include logical errors in the automation, missing coverage of scenarios that are likely to have defects, hard-coded data (or even results!), and others that contribute to giving a false impression of normality.

    As Paul Ehrlich said, "To err is human, but to really foul things up you need a computer". Human testing does have its share of errors, but these tend to be relatively easier to detect through simple examination of test reporting and documentation.

    Test Automation should be treated just like any other full-fledged Software development effort. Due diligence needs to be done to incorporate sound Software development practices. Extensive testing of the automated tests needs to be performed. Testing and verification of the automation suite is an on-going effort that needs to be factored in while planning for automation. Regular testing helps monitor the continued relevance of the automated tests and detect any changes needed in line with changes to the application being tested or its environment.

    One thing humans can do is think. Human testers automatically interpret system behaviors and evaluate results based on a diverse awareness of the system being tested, its operating environment, inter-dependencies, the context, potential for changes and so on. In many cases, human testers may not fully realize their ability to model program behavior and adapt to changes. Human testers can observe much more than what an automation suite can.

    As the quote goes, “The question of whether computers can think is like the question of whether submarines can swim.”

    Human testers have their share of shortcomings – automated systems can run tests faster, handle large volumes of data and interpret instructions quicker. Also, automated systems can better and more efficiently investigate internal system data such as execution threads, variables, program states, etc. Humans can get fatigued and lose focus especially when tasks become repetitive, take a long time or are mundane.

    Automation ... eliminating the need for manual testers?

    Will test automation eliminate the need for manual testers?

    There's a line of thought that suggests that increased test automation should be able to eliminate or reduce the number of manual testers. All I can say to this is - very untrue!

    Automation test scripts are only as good as the tests they are based upon. Test development is done by manual testers who know the application and its dependencies thoroughly. The automation tool helps testers do their jobs better and be more effective. Testers use test automation to move away from performing mundane repetitive tasks including running the same set of tests across multiple platforms. Testers can thus focus on developing and executing more complex and useful tests.

    Test automation scripts need regular maintenance to deal with changes and enhancements which in turn need testers assigned. Testers also need to regularly verify execution of test automation scripts, check failures, report issues and so on. All of this is manual work.

    The role of the manual tester is important. The tester helps with test planning; identifies risks and requirements; designs and develops test scenarios and cases; specifies the test requirements needed to achieve the necessary test coverage; helps determine which areas may be automated and which may not; and writes, maintains and manually executes the non-automated tests.

    Automation does not eliminate or reduce the need for testers. Testers use automation to increase test coverage and perform more complex (and useful) scenarios that generate greater ROI from testing.

    Goal of testing

    Testers test a program to demonstrate the existence of a fault and not the absence of it.

    Fact: all software has bugs.

    If all your tests pass and do not detect any bugs, it does not change the fact that the software still has undiscovered bugs. Take the example of a car mechanic. You hear a strange noise and ask the mechanic to investigate. The mechanic runs a battery of tests on your car and reports that all of the tests passed and there is no problem, while in fact there is a problem.

    Mere passing of an existing set of tests does not prove the absence of bugs. The tester has to anticipate customer mistakes and verify that the product can handle them gracefully. Ultimately, as a tester you cannot possibly detect all bugs in the software within the constraints of time and resources. What you can do is find the ones that are most important to stakeholders.

    Because the goal is to discover faults, a test campaign is truly successful when a defect is discovered or a failure occurs. Defect detection is the process of identifying the defect and determining the cause of failure; Defect correction or removal is the process of making changes to the system to remove the defect.

    Functional Specification

    • Is a blueprint for the product or feature
    • Describes how a piece of software works
    • Describes the product or feature from an external perspective and indicates how it may be used or invoked, the interactions possible, and what the module will look like
    • Is useful for the development team - provides members of that team with all the information they need to begin designing an application
    • Helps testers to develop black box test cases
    • Allows parallelism in testing and development activities. While development designs and implements the feature, testing has the tests ready to begin testing
    • Helps clarify requirements for the feature or product
    • Communicates to stakeholders and sets the right expectations
    • Helps focus product development on client / stakeholder requirements. The functional specification is reviewed by clients / stakeholders, and development can work in the knowledge that the requirements have been clarified and expectations set appropriately

    Bumper stickers for Testers

    On Testers
    • Software Testers: Always looking for trouble.
    • Software Testers don't break software; it's broken when we get it.
    • Software Testers: We break it because we care.

    On Testing
    • Software Testing is Like Fishing, But You Get Paid.
    • Software Testing: Where failure is always an option.
    • Software Testing: You make it, we break it.

    Software Testing

    Software Testing IS
    • Operating a System or Application under controlled conditions, evaluating the results and checking if performance meets expectations
    • "Organized Skepticism"; an inherent belief that things may not be as they seem
    • The process of executing a program with the intent of finding an error
    • Comparing the ambiguous to the invisible, so as to avoid the unthinkable happening to the unknown
    • About reducing uncertainty about the perceived state of the product
    • A vital support function that helps developers look good by finding their mistakes before anyone else does
    • A key element in determining whether a product release receives brickbats or bouquets
    On the lighter side, testers have their own set of quotes to pat themselves (ourselves!) on the back. Here are a couple of them -
    "To err is human; to find the errors requires a tester"
    "Software Testers: Improving the world, one Bug at a time"

    In subsequent posts, we will look at Software Testing in more detail.