Advantages / disadvantages of the Waterfall model

Continuing from the previous blog entry on the Waterfall model, this post presents some of its advantages and disadvantages.

Some advantages of the Waterfall model
  • Clearly divides the problem into distinct phases that may be performed independently
  • Simple concept
  • Natural approach to solving the problem
  • Fits well into a contractual setting where each phase is considered a milestone
Some of the drawbacks of the Waterfall model

In many projects, the strict sequencing of phases advocated by the waterfall model is not followed. The model assumes that one builds the entire system at once and performs end-to-end testing only after all the design and most of the coding is complete. In reality, feedback from downstream phases is passed upstream to make refinements. For example, while implementing a design, issues may be observed that require the design to be improved; the same happens in other phases. There could be quite a few such iterations to firm up the requirements and design and get to the actual implementation.

Evidence of failures in practicing the waterfall model comes from one of its most frequent users, the US Department of Defense (DoD). The DoD required most of its projects to follow the waterfall model, a requirement documented in the standard DOD-STD-2167. A report on project failure rates showed that up to 75 percent of the projects failed or were never used. Subsequent analysis recommended replacing the waterfall model with an iterative and incremental approach to development.

Some of the assumptions in the waterfall model include:
  • A well-defined set of requirements is available. These are assumed to be reasonably well stated, and the attempt is to freeze them early; the onus is then on making sure they are well understood and implemented
  • Any changes to the defined requirements would be small enough to manage without significant changes to the development plans or schedule
  • Software development and associated research & development activity can fit into a predictable schedule
  • Integration of the various pieces of the monolithic system is predictable, as are their behavior, performance and other attributes, and the architectural plans and designs can handle any integration issues
In real-world development, it is not feasible to make these assumptions. Having a clear set of requirements firmed up at the outset is nearly impossible, and assuming that requirements, once defined, are unlikely to change much is another fallacy. Experience shows that requirements do change, often significantly, requiring rework and redesign. The greater the time between gathering requirements and delivering the finished product, the greater the likelihood that the requirements will change.

When the various pieces of the system are finally integrated, even thorough analysis and planning cannot accurately predict or control the process; assumptions made about integration often turn out to be wrong. Any upstream slippage in the schedule tends to compress the time available for later phases, and in particular for adequate system integration testing. The model can also lead to early finalization of technology and hardware decisions that may not turn out to be the most appropriate. Real-world observations of software development highlight that the “big-bang” approach of trying to deliver a monolithic solution is too risky and prone to cost and schedule overruns.