
Testing in the RUP

IBM Rational's Rational Unified Process, or RUP®, is a "software engineering process framework designed to be tailored for multiple types of software development and deployment projects" (1). This article tries to place testing and testers in the context of the RUP.

Testing is a "discipline" in the RUP. Guidance is given on the activities, roles and artifacts required to implement testing in an RUP environment. As with most things in the RUP, the guidance is not prescriptive or set in stone for all projects, or even all organisations. Instead, each organisation has to decide on the level of ceremony it is going to have in the process. Nor need every practice, role or artifact be used: each organisation should adopt only those it requires at its current level of maturity.

Organisations implementing any kind of software development face two main problems.

  1) When should the product be released? Which in effect leads to "when is it good enough?"
  2) How do we stop a bad product from being released? Which in turn leads to "how do I inform everyone?"

Of course in the real world, perfection is never actually achieved. Even highly regulated, life- and safety-critical systems will have some level of faults. Thus a really well-maintained operating system might have an uptime of 99.999% when running 24/7. However, this still means that the owning organisation can expect around 5 minutes 15 seconds of downtime per year. For the operating system of a communication satellite, 99.999% might be a realistic goal. For a word processing application, however, it would not make good business sense.
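The downtime arithmetic above is easy to verify. A minimal Python sketch (the availability figure is the "five nines" quoted in the text; the function name is ours):

```python
# Expected annual downtime for a 24/7 system at a given availability level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def annual_downtime_minutes(availability: float) -> float:
    """Return the expected downtime in minutes per year."""
    return (1.0 - availability) * MINUTES_PER_YEAR

downtime = annual_downtime_minutes(0.99999)  # "five nines"
minutes = int(downtime)
seconds = round((downtime - minutes) * 60)
print(f"{minutes} min {seconds} s per year")  # prints "5 min 15 s per year"
```

The same function shows why the quality target is a business decision: dropping to 99.9% availability raises the expected downtime to nearly nine hours a year, which may be perfectly acceptable for a word processor.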

The second question, of how to stop a bad product being released, is a question of communication and process.

RUP Testing
Testing in the RUP consists of:-
   - Finding gaps in quality
   - Advising team members
   - Validating decisions in design and requirements
   - Validating the functionality of the software
   - Validating that the requirements have been implemented appropriately

The concept of quality, indeed the whole question of "When to release the software" is based on the "Good Enough Quality" (GEQ) work of James Bach. GEQ is based on the concept that we can never reach perfection, and that quality is driven by the perceptions of the customer.

This ethos of GEQ can be seen permeating the four phases of the RUP. For instance, in the Inception phase the customer and all other stakeholders buy in to the project. The level of testing can be determined at this point: for example, whether legal or regulatory templates, such as medical or military requirements, are to be used. Here the broad brush strokes of the quality standard to be met are decided. The customer agrees to what is in the project, and what is to be built. Thus business risk is largely mitigated.

How does this affect our two questions? Firstly we have a very good idea of when we can stop testing. Of course at the inception stage, this might be slightly vague. But at least it can be seen on the horizon. Finer granularity will be introduced in the elaboration, construction and transition phases.

Good Enough Quality can be expressed in many different ways:-
  *Perfection. The Space Shuttle and its 0.1 defects per 1000 lines of code.
  *Support. Support has never received a report of that defect, so why fix it?
  *Defined Process. As long as we follow the "process" we will have a good product.
  *Requirements. We satisfy the requirements spec, irrespective of whether the requirements were captured correctly.
  *Advocacy. "We will make every effort".
  *Bottom line. We want quality, as long as it does not impact too much on profitability.

Obviously there are many more paradigms of quality that can be applied. Suffice to say though that most organisations will at some stage apply one.

Secondly, how to inform? Keeping everyone informed is a question of communication and artifacts. This is the same as in any organisational process, you might say. The RUP, however, has its own testing philosophy. Its principles are:-

Low Up Front Documentation.
Detailed planning of testing is done on a phase by phase basis. Thus in Elaboration the documentation will centre on the architecture of the solution. As the project moves into Construction, the granularity of the test planning is finer. Contrast this with the Waterfall method with its huge test plans and specifications.

Holistic Approach. The test design is not drawn solely from the requirements.

Iterative Development. As the project progresses, not only does the application grow; the test suite does also. Thus in Inception we have a small number of tests, designed to test the very bare bones. Once the project moves to Elaboration, further tests are added to test the architecture, so the regression suite grows to include both Inception and Elaboration tests. With Construction the regression suite grows again, and the granularity of the new tests is again finer.

Automation. Due to the growing complexity of the test suite, early and ongoing automation is essential.
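One way to picture this phase-by-phase growth of an automated regression suite is to tag each test with the phase that introduced it and always run the cumulative set. The phase names below follow the RUP; the registry, decorator and runner are purely an illustrative sketch, not RUP-mandated tooling:

```python
# Sketch: a regression suite that accumulates tests phase by phase.
PHASES = ["Inception", "Elaboration", "Construction", "Transition"]

suite = []  # list of (phase, name, test_function) tuples

def regression_test(phase, name):
    """Register a test under the RUP phase in which it was written."""
    def register(fn):
        suite.append((phase, name, fn))
        return fn
    return register

@regression_test("Inception", "bare-bones smoke test")
def test_smoke():
    assert True  # e.g. the application starts at all

@regression_test("Elaboration", "architecture test")
def test_architecture():
    assert 2 + 2 == 4  # e.g. key components communicate

def run_through(phase):
    """Run every test added up to and including the given phase."""
    cutoff = PHASES.index(phase)
    ran = []
    for p, name, fn in suite:
        if PHASES.index(p) <= cutoff:
            fn()  # raises AssertionError on failure
            ran.append((p, name))
    return ran

# By Elaboration the cumulative suite already includes the Inception tests.
print(run_through("Elaboration"))
```

The point of the sketch is the `run_through` call: later phases never run a smaller suite than earlier ones, which is exactly why manual execution stops scaling and early automation pays off.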

Testing in the RUP is centred on finding weaknesses in the Software Under Test. However, when taken in conjunction with the other disciplines, it plays a full part in mitigating the risk of failure.

The next article on Testing in the RUP will focus on the different testing roles and artifacts.

(1) Clay Nelson, "Integrating Rational Unified Process and Six Sigma", The Rational Edge, November 2003.
http://www.therationaledge.com/content/nov_03/m_rupsix_nm.jsp
