Errors - Load, Hardware

LOAD CONDITIONS
  • Required resource not available
  • Doesn't return a resource
    • Doesn't indicate that it's done with a device
    • Doesn't erase old files from mass storage
    • Doesn't return unused memory
    • Wastes computer time
  • No available large memory areas
  • Input buffer or queue not deep enough
  • Doesn't clear items from queue, buffer, or stack
  • Lost messages
  • Performance costs
  • Race condition windows expand
  • Doesn't abbreviate under load
  • Doesn't recognize that another process abbreviates output under load
  • Low priority tasks not put off
  • Low priority tasks never done

HARDWARE
  • Wrong device
  • Wrong device address
  • Device unavailable
  • Device returned to wrong type of pool
  • Device use forbidden to caller
  • Specifies wrong privilege level for a device
  • Noisy channel
  • Channel goes down
  • Time-out problems
  • Wrong storage device
  • Doesn't check directory of current disk
  • Doesn't close a file
  • Unexpected end of file
  • Disk sector bugs and other length-dependent errors
  • Wrong operation or instruction codes
  • Misunderstood status or return code
  • Device protocol error
  • Underutilizes device intelligence
  • Paging mechanism ignored or misunderstood
  • Ignores channel throughput limits
  • Assumes device is or isn't, or should be or shouldn't be initialized
  • Assumes programmable function keys are programmed correctly

Errors - Data Handling, Race Conditions

ERRORS IN HANDLING OR INTERPRETING DATA

Problems when passing data between routines
  • Parameter list variables out of order or missing
  • Data type errors
  • Aliases and shifting interpretations of the same area of memory
  • Misunderstood data values
  • Inadequate error information
  • Failure to clean up data on exception-handling exit
  • Outdated copies of data
  • Related variables get out of synch
  • Local setting of global data
  • Global use of local variables
  • Wrong mask in bit field
  • Wrong value from a table
Data Boundaries
  • Unterminated null terminated strings
  • Early end of string
  • Read/write past end of a data structure, or an element in it
  • Read outside the limits of a message buffer
  • Compiler padding to word boundaries
  • Value stack under/overflow
  • Trampling another process' code or data
Messaging Problems
  • Messages sent to wrong process or port
  • Failure to validate an incoming message
  • Lost or out of synch messages
  • Message sent to only N of N+1 processes
Data Storage Corruption
  • Overwritten changes
  • Data entry not saved
  • Too much data for receiving process to handle
  • Overwriting a file after an error exit or user abort

RACE CONDITIONS
  • Races in updating data
  • Assumption that one event or task has finished before another begins
  • Assumption that input won't occur during a brief processing interval
  • Assumption that interrupts won't occur during a brief interval
  • Resource races: the resource has just become unavailable
  • Assumption that a person, device, or process will respond quickly
  • Options out of synch during a display change
  • Task starts before its prerequisites are met
  • Messages cross or don't arrive in the order sent
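
To make the first item above concrete, here is a minimal sketch in Python of a race in updating shared data: two threads increment a shared counter without a lock, so updates can interleave and be lost. The names are illustrative only.

```python
import threading

counter = 0  # shared data updated by both threads

def increment_many(times):
    global counter
    for _ in range(times):
        # Read-modify-write is not atomic: another thread can run between
        # the read and the write, so one of the two updates can be lost.
        counter += 1

threads = [threading.Thread(target=increment_many, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Expected 200000; depending on how the interpreter schedules the threads,
# the printed value may come up short. Guarding the increment with a
# threading.Lock removes the race.
print(counter)
```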


Errors - Control Flow

PROGRAM RUNS AMOK
  • GOTO somewhere
  • Come-from logic errors
  • Problems in table-driven programs
  • Executing data
  • Jumping to a routine that isn't resident
  • Re-entrance
  • Variables contain embedded command names
  • Wrong returning state assumed
  • Exception-handling based exits
  • Return to wrong place
    • Corrupted stack
    • Stack under/overflow
    • GOTO rather than RETURN from a subroutine
  • Interrupts
    • Wrong interrupt vector
    • Failure to restore or update interrupt vector
    • Failure to block or unblock interrupts
    • Invalid restart after an interrupt
PROGRAM STOPS
  • Dead crash
  • Syntax errors reported at run-time
  • Waits for impossible condition, or combination of conditions
  • Wrong user or process priority
LOOPS
  • Infinite loop
  • Wrong starting value for the loop control variable
  • Accidental change of the loop control variable
  • Wrong criterion for ending the loop
  • Commands that do or don't belong inside the loop
  • Improper loop nesting
IF, THEN, ELSE, OR MAYBE NOT
  • Wrong inequalities (e.g., > instead of >=)
  • Comparison sometimes yields wrong result
  • Not equal versus equal when there are three cases
  • Testing floating point values for equality
  • Confusing inclusive and exclusive OR
  • Incorrectly negating a logical expression
  • Assignment-equal instead of test-equal
  • Commands belong inside the THEN or ELSE clause
  • Commands that don't belong inside either clause
  • Failure to test a flag
  • Failure to clear a flag
MULTIPLE CASES
  • Missing default
  • Wrong default
  • Missing cases
  • Case should be subdivided
  • Overlapping cases
  • Invalid or impossible cases
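
A couple of the loop and comparison items above are easy to see in a few lines of Python; the snippet below is purely illustrative.

```python
import math

# Wrong criterion for ending the loop: the intent is to sum 1 through 10,
# but range(1, 10) stops at 9 (an off-by-one boundary error).
wrong_total = sum(range(1, 10))      # 45, not the intended 55
right_total = sum(range(1, 10 + 1))  # 55

# Testing floating point values for equality:
print(0.1 + 0.2 == 0.3)              # False, because of binary rounding
print(math.isclose(0.1 + 0.2, 0.3))  # True: compare with a tolerance instead
```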

Errors - Boundary, Calculation, States

Boundary related errors
  • Numeric boundaries
  • Equality as a boundary
  • Boundaries on numerosity
  • Boundaries in space
  • Boundaries in time
  • Boundaries in loops
  • Boundaries in memory
  • Boundaries within data structures
  • Hardware-related boundaries
  • Invisible boundaries
Calculation Errors
  • Outdated constants
  • Impossible parentheses
  • Wrong order of operators
  • Bad underlying function
  • Overflow and underflow
  • Truncation and roundoff error
  • Confusion about the representation of the data
  • Incorrect conversion from one data representation to another
  • Wrong formula
  • Incorrect approximation
Initial and Later states
  • Failure to set a data item to 0
  • Failure to initialize a loop-control variable
  • Failure to initialize (or reinitialize) a pointer
  • Failure to clear a string
  • Failure to initialize (or reinitialize) registers
  • Failure to clear a flag
  • Data were supposed to be initialized elsewhere
  • Failure to reinitialize
  • Assumption that data were not reinitialized
  • Confusion between static and dynamic storage
  • Data modification by side-effect
  • Incorrect initialization
  • Reliance on tools the customer may not have or understand
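
Two of the state-related items above, failure to reinitialize and data modification by side-effect, show up in a classic Python pitfall: a mutable default argument is created once and silently reused across calls. A small illustrative sketch:

```python
def append_item_buggy(item, items=[]):
    # The default list is created once, at function definition time,
    # so it is never reinitialized between calls.
    items.append(item)
    return items

print(append_item_buggy(1))  # [1]
print(append_item_buggy(2))  # [1, 2]  (state leaked from the previous call)

def append_item_fixed(item, items=None):
    # Reinitialize on every call when no list is supplied.
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2]
```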

Error Handling issues ... continued from earlier post

ERROR PREVENTION
  • Inadequate initial state validation
  • Inadequate tests of user input
  • Inadequate protection against corrupted data
  • Inadequate tests of passed parameters
  • Inadequate protection against operating system bugs
  • Inadequate version control
  • Inadequate protection against malicious use
ERROR DETECTION
  • Ignores overflow
  • Ignores impossible values
  • Ignores implausible values
  • Ignores error flag
  • Ignores hardware fault or error conditions
  • Data comparisons
ERROR RECOVERY
  • Automatic error correction
  • Failure to report an error
  • Failure to set an error flag
  • Where does the program go back to?
  • Aborting errors
  • Recovery from hardware problems
  • No escape from missing disk
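
As an illustration of a few of the prevention and detection items above (tests of user input, impossible and implausible values), here is a small Python sketch; the function and the limits are assumptions chosen for the example, not a prescribed pattern.

```python
def parse_age(raw: str) -> int:
    """Validate user input instead of trusting it."""
    try:
        age = int(raw)
    except ValueError:
        # Don't silently ignore bad input; report it.
        raise ValueError(f"not a number: {raw!r}")
    if age < 0:
        raise ValueError(f"impossible value: {age}")   # can never be valid
    if age > 130:
        raise ValueError(f"implausible value: {age}")  # technically possible, worth flagging
    return age

print(parse_age("42"))  # 42; "abc", "-5", or "200" would each raise a descriptive error
```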

User Interface Errors ... continued from earlier post

FUNCTIONALITY
  • Excessive functionality
  • Inflated impression of functionality
  • Inadequacy for the task at hand
  • Missing function
  • Wrong function
  • Functionality must be created by the user
  • Doesn't do what the user expects
COMMUNICATION
  • Missing information
    • No onscreen instructions
    • Assuming printed documentation is readily available
    • Undocumented features
    • States that appear impossible to exit
    • No cursor
    • Failure to acknowledge input
    • Failure to show activity during long delays
    • Failure to advise when a change will take effect
    • Failure to check for the same document being opened more than once
  • Wrong, misleading, or confusing information
    • Simple factual errors
    • Spelling errors
    • Inaccurate simplifications
    • Invalid metaphors
    • Confusing feature names
    • More than one name for the same feature
    • Information overload
    • When are data saved?
    • Poor external modularity
  • Help text and error messages
    • Inappropriate reading level
    • Verbosity
    • Inappropriate emotional tone
    • Factual errors
    • Context errors
    • Failure to identify the source of an error
    • Hex dumps are not error messages
    • Forbidding a resource without saying why
    • Reporting non-errors
  • Display bugs
    • Two cursors
    • Disappearing cursor
    • Cursor displayed in the wrong place
    • Cursor moves out of data entry area
    • Writing to the wrong screen segment
    • Failure to clear part of the screen
    • Failure to highlight part of the screen
    • Failure to clear highlighting
    • Wrong or partial string displayed
    • Messages displayed for too long or not long enough
  • Display layout
    • Poor aesthetics in the screen layout
    • Menu layout errors
    • Dialog box layout errors
    • Obscured instructions
    • Misuse of flash
    • Misuse of color
    • Heavy reliance on color
    • Inconsistent with the style of the environment
    • Cannot get rid of onscreen information
COMMAND STRUCTURE AND ENTRY
  • Inconsistencies
    • "Optimizations"
    • Inconsistent syntax
    • Inconsistent command entry style
    • Inconsistent abbreviations
    • Inconsistent termination rule
    • Inconsistent command options
    • Similarly named commands
    • Inconsistent capitalization
    • Inconsistent menu position
    • Inconsistent function key usage
    • Inconsistent error handling rules
    • Inconsistent editing rules
    • Inconsistent data saving rules
  • Time-wasters
    • Garden paths
    • Choices that can't be taken
    • Are you really, really sure?
    • Obscurely or idiosyncratically named commands
  • Menus
    • Excessively complex menu hierarchy
    • Inadequate menu navigation options
    • Too many paths to the same place
    • You can't get there from here
    • Related commands relegated to unrelated menus
    • Unrelated commands tossed under the same menu
  • Command lines
    • Forced distinction between uppercase and lowercase
    • Reversed parameters
    • Full command names not allowed
    • Abbreviations not allowed
    • Demands complex input on one line
    • No batch input
    • Can't edit commands
  • Inappropriate use of the keyboard
    • Failure to use cursor, edit, or function keys
    • Non-standard use of cursor and edit keys
    • Non-standard use of function keys
    • Failure to filter invalid keys
    • Failure to indicate keyboard state changes
    • Failure to scan for function or control keys
MISSING COMMANDS
  • State transitions
    • Can't do nothing and leave
    • Can't quit mid-program
    • Can't stop mid-command
    • Can't pause
  • Disaster prevention
    • No backup facility
    • No undo
    • No Are you sure?
    • No incremental saves
  • Error handling by the user
    • No user-specifiable filters
    • Awkward error correction
    • Can't include comments
    • Can't display relationships between variables
  • Miscellaneous nuisances
    • Inadequate privacy or security
    • Obsession with security
    • Can't hide menus
    • Doesn't support standard O/S features
    • Doesn't allow long names
PROGRAM RIGIDITY
  • User tailorability
    • Can't turn off the noise
    • Can't turn off case sensitivity
    • Can't tailor to hardware at hand
    • Can't change device initialization
    • Can't turn off automatic saves
    • Can't slow down (speed up) scrolling
    • Can't do what you did last time
    • Can't find out what you did last time
    • Failure to execute a customization command
    • Failure to save customization commands
    • Side-effects of feature changes
    • Infinite tailorability
  • Who's in control
    • Unnecessary imposition of a conceptual style
    • Novice-friendly, experienced-hostile
    • Artificial intelligence and automated stupidity
    • Superfluous or redundant information required
    • Unnecessary repetition of steps
    • Unnecessary limits
PERFORMANCE
  • Slow program
  • Slow echoing
  • How to reduce user throughput
  • Poor responsiveness
  • No type-ahead
  • No warning that an operation will take a long time
  • No progress reports
  • Problems with time-outs
  • Program pesters you
  • Do you really want help and graphics at a specific rate?
OUTPUT
  • Can't output certain data
  • Can't redirect output
  • Format incompatible with a follow-up process
  • Must output too little or too much
  • Can't control output layout
  • Absurd printed level of precision
  • Can't control labeling of tables or figures
  • Can't control scaling of graphs

Common Software Errors

I thought it would be useful to put up a list of common software errors that can serve as a useful aid for identifying errors while testing.

Common Error types
  • User Interface Errors
  • Error Handling
  • Boundary related errors
  • Calculation errors
  • Initial and Later states
  • Control flow errors
  • Errors in Handling or Interpreting Data
  • Race Conditions
  • Load Conditions
  • Hardware
  • Source, Version and ID Control
  • Testing Errors
You may want to refer to the book Testing Computer Software (Wiley) for more details on this subject. I'll try to provide additional information on these error types in subsequent posts.

Question: Test types and automation

What do you think? Are there test types that may not be good candidates for automation?

Smoke testing

The term is supposed to have originated in the electrical engineering domain. When a newly developed circuit was attached to a power source, any major issue with the circuit would cause smoke to emanate; no further testing would need to be performed on such a circuit.

In the software engineering world, smoke testing is usually a smaller subset of your regression test suite. The tests are automated and exercise the important high-level functionality to make sure it operates correctly. The idea is to verify the breadth of an application's functionality rather than to test in depth. Smoke tests are run against each build: when a build completes, the automated smoke test suite is executed against it. The process would involve setting up the application and components, verifying that all components have been installed, performing any needed initializations, making necessary configurations, and verifying critical functionality.

Since smoke testing happens after unit testing has completed, it may appear that smoke testing is a duplication of effort. Unit testing is normally executed in isolation and on the developers' desktops / in the development environment. Smoke testing is a high-level integration test effort whose primary aim is to verify that the system's basic functions/features do what they are intended to do after the software system build is installed in the system test environment. A smoke test must be performed for each new build that is turned over to the test group. Successful smoke testing should be regarded as part of the entrance criteria for beginning the testing phase for the build.
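
As a rough illustration of this kind of build verification, here is a minimal smoke test sketch in Python's unittest style. The functions stand in for an application's real entry points; the names (start_app, health_check, open_document) are hypothetical.

```python
import unittest

# Stand-ins for the real application's entry points; in practice these would
# be imported from the product under test (the names here are made up).
def start_app():
    return {"running": True}

def health_check(component):
    return "ok"

def open_document(name):
    return {"name": name}


class SmokeTests(unittest.TestCase):
    """Broad, shallow checks run against every new build."""

    def test_application_starts(self):
        self.assertTrue(start_app()["running"])

    def test_core_components_respond(self):
        # Breadth over depth: just confirm the critical pieces answer at all.
        for component in ("database", "web_server", "licensing"):
            self.assertEqual(health_check(component), "ok")

    def test_basic_workflow(self):
        self.assertIsNotNone(open_document("sample.txt"))


if __name__ == "__main__":
    unittest.main()
```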

Boundary Value Analysis (BVA)

Boundary Value Analysis (BVA) is a technique for designing test cases. It can also be described as a technique for choosing test data such that values lying along the extremes or boundaries are selected. Such boundary data include the maximum, the minimum, error values, and values just inside or outside the valid range. In terms of equivalence classes, boundary values are data on, above, and below the edges of input and output equivalence classes. Experience shows that tests for boundary conditions find more defects than simply selecting an arbitrary element of an equivalence class as a representative of the class. BVA focuses on both input and output conditions and requires that elements be selected so that the edges of each equivalence class are tested.

Examples include:
  • If the input or output condition states that the number of values must lie within a specific range, then
    • Create two positive tests, one at either end of the range.
    • Create two negative tests, one just beyond the valid range at the lower end and one just beyond the upper end.
  • If an input parameter accepts a range of values, such as the numbers 1 to 10, test with 0, 1, 10, and 11
  • In a similar vein, if an application outputs a set of data records, say a maximum of 10 records for a given query, write tests that cause the application to output 0, 1, and 10 records, as well as a test that could cause the application to incorrectly return more than 10 records. The idea is to examine the boundaries of the output conditions. It may not always be possible to generate an output outside the range, but it is definitely something to consider when developing tests (a code sketch of the range example follows this list).
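
Here is a sketch of the boundary tests for the 1-to-10 range above, in Python's unittest style; accepts_value is a hypothetical stand-in for the system under test.

```python
import unittest

def accepts_value(n: int) -> bool:
    """Hypothetical system under test: accepts numbers from 1 to 10 inclusive."""
    return 1 <= n <= 10


class BoundaryValueTests(unittest.TestCase):
    def test_lower_boundary(self):
        self.assertFalse(accepts_value(0))   # just below the valid range
        self.assertTrue(accepts_value(1))    # lowest valid value

    def test_upper_boundary(self):
        self.assertTrue(accepts_value(10))   # highest valid value
        self.assertFalse(accepts_value(11))  # just above the valid range


if __name__ == "__main__":
    unittest.main()
```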

Equivalence Partitioning / Classing

It's a straightforward technique that involves partitioning the possible set of factors (generally inputs to the system, outputs, etc.) into classes, or partitions, that are handled equivalently by the system. Tests are then developed using at least one item from each equivalence class.

For example, if I had to test a mobile application with subscribers on different carriers, and I am sure that for any given carrier all of its subscribers are treated the same way by the application, a simplistic way to partition the subscribers would be by carrier. Each equivalence class in this case corresponds to a carrier and contains the subscribers using that carrier. I could then pick a subscriber from each class and test with a fairly representative sample covering the different carriers.

An equivalence class is a set of data values that the tester expects the system to handle in the same manner. Testing any one representative from an equivalence class is considered sufficient, since the system's behavior should not differ for other values from the same class. When trying to create equivalence partitions, look for ways to group similar inputs, similar outputs, and similar operations of the software; these groups represent the equivalence partitions.

So, why do equivalence partitioning? The answer is fairly obvious: the number of tests you could potentially develop and run is nearly infinite. One of the important tasks of a tester is to select a manageable subset of tests that is still effective. The objective of equivalence partitioning is to reduce the number of tests you need to execute while still getting the software adequately tested. There is an element of risk in using this technique: you are choosing to limit your testing to representatives from each class you develop. Developing classes can be subjective, and classes should be reviewed to gain agreement that they achieve the desired level of coverage.
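
A minimal sketch of the carrier-based partitioning described above, using made-up subscriber data; one representative is picked from each equivalence class.

```python
from collections import defaultdict

# Hypothetical subscriber data: (subscriber_id, carrier)
subscribers = [
    ("sub-001", "CarrierA"),
    ("sub-002", "CarrierB"),
    ("sub-003", "CarrierA"),
    ("sub-004", "CarrierC"),
    ("sub-005", "CarrierB"),
]

# Partition subscribers into equivalence classes keyed by carrier.
classes = defaultdict(list)
for subscriber_id, carrier in subscribers:
    classes[carrier].append(subscriber_id)

# Pick one representative per class to test with.
representatives = {carrier: members[0] for carrier, members in classes.items()}
print(representatives)
# e.g. {'CarrierA': 'sub-001', 'CarrierB': 'sub-002', 'CarrierC': 'sub-004'}
```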

Test Design techniques

Test design techniques are listed below. We shall look at some of the commonly used ones in subsequent posts.

Blackbox techniques
  • Equivalence partitioning
  • Boundary value analysis
  • Orthogonal arrays
  • State transition testing
  • Cause-effect graphing & Decision Table
  • Use Case testing
  • Syntax testing
  • Random testing
  • Smoke testing
Whitebox techniques
  • Statement testing
  • Branch/decision testing
  • Branch condition testing
  • Branch condition combination testing
  • Condition decision testing
  • Linear Code Sequence and Jump testing
  • Data flow testing

Bugs are social creatures

Bugs like company; at least, this is what we observe as we test applications of various shapes and sizes. Continuing on the Pareto principle from an earlier post, the likelihood of finding bugs in an area where you have already found many is high: a few areas contribute most of the bugs.

Normally, an analysis of defects shows that the distribution tends to be concentrated in certain areas of the application. This concentration can be due to a host of factors, including the complexity of those areas, constantly changing or unclear requirements, poor coding standards or reviews, coder (in)experience, and so on.

Given that bugs like to be together, we should focus a larger part of our test effort on the areas where most defects have been reported. In addition, defect analysis is useful when planning testing for the next release of the application: we would put more effort into the areas that carry a higher risk of potential bugs relative to the more stable areas.

Common test types / responsibility chart


Test Type                   | Developers | Testers          | Customers
Unit Testing                | Yes        |                  |
Integration Testing         | Yes        |                  |
Component Testing           | Yes        | Yes (additional) |
System Testing              |            | Yes              |
System Integration Testing  |            | Yes              |
User Acceptance Testing     |            |                  | Yes

While the above is an ideal expectation, the actual assignment of responsibilities can vary across organizations. As indicated in the table, unit testing is the responsibility of the developers. Automated unit test suites may be leveraged to test the stability of nightly builds. Component and integration testing of the various components is often the responsibility of the development team. Developers may choose to have standard code review and check-in procedures (as part of a static testing / analysis strategy) to ensure code quality is maintained. System testing and system integration testing are the primary responsibility of the QA team, while UAT is ideally done by the customers or their representatives. Testers would be involved in guiding the UAT process.

The above may seem pretty simplistic and not a detailed reflection of the tasks a testing organization would perform. A test type such as system testing, for instance, may be further divided into GUI testing, regression testing, performance, stress, and reliability testing, compatibility testing, security testing, installation and upgrade testing, and so on. In addition to the test tasks listed above, there are various other activities that a testing group is responsible for.

Pareto chart

A Pareto chart is a histogram that can be used to prioritize problems or their causes. Without such data, we might focus on problems we think are important but that are not, in reality, the most pressing items.

The Pareto chart helps with
  • Prioritizing and selecting the greatest problem areas or largest areas of opportunities
  • Analyzing the frequency of an event (in terms of occurrences or number of items) and identifying the biggest contributors
  • Communicating, in summary, how 80% of the problem comes from 20% of the causes
Pareto charts are based on the Pareto Principle, which is named after the nineteenth-century Italian economist Vilfredo Pareto. The basis of the Pareto Principle is that roughly 80 percent of effects are produced by 20 percent of the causes. In terms of quality management, this translates to 80 percent of defects being produced by 20 percent of the features, code base, people, and so on. Pareto charts, as Juran put it, help to separate the "vital few" from the "trivial many". Preparing a Pareto chart includes the following steps.
  • Identify the categories (problem areas or causes) for which data is to be collected
  • Gather the data needed for analysis over a period of time
  • Sort the categories according to frequency of occurrence
  • Mark the axes of the chart. List the categories in descending order on the horizontal axis and use an appropriate scale for the data on the vertical axis
  • Draw the histogram
  • Create the chart's percentage line representing the cumulative percent of occurrences. The line representing the cumulative percent will display across all categories
The chart displays, graphically and in an easy-to-read manner, the categories that matter most. Maximum return on investment may be obtained by focusing improvement efforts on these categories. Pareto charts are used widely in software development; one common use is to analyze defect metrics and determine which category of issues to address first. To do this, create Pareto charts based on bug types. This enables the team to determine the categories in which most of the bugs occur and focus its efforts there, rather than addressing issues randomly with little knowledge of the overall project impact.
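
As a small sketch of the calculation behind the chart, here are made-up defect counts by category, sorted by frequency with a running cumulative percentage; these are the numbers the bars and the cumulative line are drawn from.

```python
# Hypothetical defect counts per category (e.g., bug types).
defect_counts = {
    "UI": 42,
    "Error handling": 25,
    "Boundary": 15,
    "Calculation": 8,
    "Race conditions": 5,
    "Data handling": 5,
}

total = sum(defect_counts.values())
cumulative = 0

# Sort categories by frequency, descending, then accumulate percentages.
for category, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:15s} {count:4d}  {100 * cumulative / total:5.1f}% cumulative")
```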


Defect Lifecycle; IEEE Standard Classification for Software Anomalies (1044-1993)



Here's a pictorial representation of a defect management lifecycle that maps to the anomaly classification process proposed by the IEEE Standard Classification for Software Anomalies (IEEE Std. 1044-1993).

The IEEE 1044-1993 definition of anomaly: "Any condition that deviates from expectations based on requirements specifications, design documents, user documents, standards, etc., or from someone's perceptions or experiences. Anomalies may be found during, but not limited to, the review, test, analysis, compilation, or use of software products or applicable documentation."

The process is divided into the following four sequential steps.
  • Step 1: Recognition
  • Step 2: Investigation
  • Step 3: Action
  • Step 4: Disposition
Example Defect Lifecycle
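
As a rough illustration only, the sketch below maps the four IEEE 1044 steps onto a hypothetical defect tracker's workflow states; the tracker state names are assumptions, not part of the standard.

```python
from enum import Enum

class AnomalyStep(Enum):
    RECOGNITION = 1    # the anomaly is observed and recorded
    INVESTIGATION = 2  # the anomaly is analyzed and classified
    ACTION = 3         # a resolution (fix, defer, reject) is carried out
    DISPOSITION = 4    # the anomaly is closed out and documented

# Hypothetical defect-tracker states mapped onto the IEEE 1044 steps.
tracker_state_to_step = {
    "New": AnomalyStep.RECOGNITION,
    "Open / Assigned": AnomalyStep.INVESTIGATION,
    "Fixed / Deferred": AnomalyStep.ACTION,
    "Verified / Closed": AnomalyStep.DISPOSITION,
}
```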


Defects, Bugs, Failures, Errors, Mistakes, Incidents

The word soup of defects, bugs, failures, errors, mistakes, and incidents is often used interchangeably to mean, generally, issues with the software. In software testing, however, these terms have distinct meanings. Descriptions from the certification glossary have been incorporated as needed to keep the definitions consistent.

Defect / bug: A flaw in a component or system that can cause the component or system to fail to perform its required function. Defect refers to something that is wrong with the program, design, requirements, specifications or other documentation. A defect, if encountered during execution, may cause a failure of the component or system.

Error / mistake: A human action that produces an incorrect result. Errors are incorrect things that people do. People make errors, while software has defects. For example, when engineers make errors that result in defects, we say they are injecting defects.

Failure: Deviation of the component or system from its expected delivery, service, or result.

Incident: Any event occurring that requires investigation.

Static testing, IEEE 1028 standard for Software Reviews

Testing need not be just about test execution. Static testing, such as reviews, is also testing. Let's look at the types of reviews and the related IEEE standard.

Reviews can be classified as formal or informal. A formal review process has specific roles such as manager, moderator, reviewers, scribe, and author. The IEEE 1028 standard lists the following types of reviews:
  • Management reviews
  • Technical reviews
  • Inspections
  • Walk-throughs
  • Audits
In the real world, organizations may choose to combine elements from the different review types to suit their specific needs. Reviews should ideally be done before dynamic testing. The value of reviews correlates with the cost of fixing defects at different points in the software development life cycle: the sooner a defect is found (as in reviews), rather than later (during system integration testing or after release), the less expensive it is to address. Defects found during reviews also tend to take less time to resolve than defects found much later, after implementation and during test execution.

More on the IEEE 1028 standard below.

The abstract of the IEEE 1028-1997 standard for Software Reviews states, “This standard defines five types of software reviews, together with procedures required for the execution of each review type. This standard is concerned only with the reviews; it does not define procedures for determining the necessity of a review, nor does it specify the disposition of the results of the review.”

Snapshot of content of the IEEE 1028 standard for Software Reviews.

1. Overview – Purpose, Scope, Conformance, Organization of standard, Application of standard
2. References
3. Definitions

4. Management reviews – Introduction, Responsibilities (Decision maker, Review leader, Recorder, Management staff, Technical staff, Customer or user representative), Input, Entry criteria (Authorization, Preconditions), Procedures (Management preparation, Planning the review, Overview of review procedures, Preparation, Examination, Rework/follow-up), Exit criteria, Output

5. Technical reviews – Introduction, Responsibilities (Decision maker, Review leader, Recorder, Technical staff, Management staff, Customer or user representative), Input, Entry criteria (Authorization, Preconditions), Procedures (Management preparation, Planning the review, Overview of review procedures, Overview of the software product, Preparation, Examination, Rework/follow-up), Exit criteria, Output

6. Inspections – Introduction, Responsibilities (Inspection leader, Recorder, Reader, Author, Inspector), Input, Entry criteria (Authorization, Preconditions, Minimum entry criteria), Procedures (Management preparation, Planning the inspection, Overview of inspection procedures, Preparation, Examination, Rework/follow-up), Exit criteria, Output, Data collection recommendations (Anomaly classification, Anomaly classes, Anomaly ranking), Improvement

7. Walk-throughs – Introduction, Responsibilities (Walk-through leader, Recorder, Author), Input, Entry criteria (Authorization, Preconditions), Procedures (Management preparation, Planning the walk-through, Overview, Preparation, Examination, Rework/follow-up), Exit criteria, Output, Data collection recommendations (Anomaly classification, Anomaly classes, Anomaly ranking), Improvement

8. Audits – Introduction, Responsibilities (Lead auditor, Recorder, Auditor, Initiator, Audited organization), Input, Entry criteria (Authorization, Preconditions), Procedures (Management preparation, Planning the audit, Opening meeting, Preparation, Examination, Follow-up), Exit criteria, Output

Function Point Analysis ... continued

In an earlier post, we looked at the Function Point Analysis (FPA) method of estimation. Let's now look at some of the critiques of this method.

FP analysis tends to be complex and expensive for large projects. The Counting Practices Manual (CPM) published by IFPUG is voluminous, and mastering it can take significant time and effort. While it may be possible to learn the basics of FP analysis in short classroom sessions, implementing it effectively on an enterprise-class project requires much deeper understanding and extensive real-world experience.

Although FP analysis tends to standardize measurement, it is a measure that many professionals are unaware of. An FP number may not communicate the scope or complexity of the work involved unless the audience has itself been exposed to FP analysis. If your organization intends to adopt FP analysis, it might be a good idea to bring in external FP consultants to run the process in the initial stages until the required level of expertise is developed in-house. Trying to learn or experiment on a real project could lead to cost overruns, schedule slippage, and so on.

FP analysis may not be the most suitable estimation method in the early stages of a product, when the requirements and design have not been firmed up. For FP counts to be accurate, there needs to be a clear set of requirements and a design document that explains how the requirements will be implemented.