3.1 Sequential Development Models
As the name suggests, a sequential development model arranges the activities involved in the development process in a linear fashion. The assumption here is that development of the product and its feature set is finished when all the phases of the development model have been completed. This model does not envisage overlaps between phases or product iterations. The planned delivery date for projects run this way can lie months—or even years—in the future.
3.1.1 The Waterfall Model
An early model was the so-called “waterfall model” [Royce 70]. It is impressively simple and, in the past, enjoyed a high degree of popularity. Each development phase can only begin once the previous phase has been completed, hence the model’s name. However, the model can produce feedback loops between neighboring phases that require changes to be made in a previous phase. Figure 3-1 shows the phases incorporated in Royce’s original model:

Fig. 3-1 The waterfall model according to Royce
The major shortcoming of this model is that it bundles testing as a single activity at the end of the project. Testing only takes place once all other development activities have been completed, and is thus seen as a kind of “final check” akin to inspecting goods that leave a factory. In this case, testing is not seen as an activity that takes place throughout the development process.
3.1.2 The V-Model
The V-model is an extension of the waterfall model (see [Boehm 79], [ISO/IEC 12207]). The advent of this model made a huge and lasting difference to the way testing is viewed within the development process. Every tester and every developer should learn the V-model and learn about how it integrates the testing process. Even if a project is based on a different development model, the principles illustrated here can still be applied.
The basic idea is that development and testing are corresponding activities of equal value. In the diagram, they are illustrated by the two branches of the “V”:

Fig. 3-2 The V-model
The left-hand branch represents the steps that are required to design and develop the system with increasing detail up to the point at which it is actually coded.
The constructional activities in the left-hand branch correspond to the activities outlined in the waterfall model:
■ Definition of requirements
This is where the customer and end-user requirements are collected, specified, and approved. The purpose and proposed features of the system are now stipulated.
■ Functional design
The requirements are mapped to specific features and dialog flows.
■ Technical design
The functional design is mapped to a technical design that includes definition of the required interfaces to the outside world, and divides the system into easily manageable components that can be developed independently (i.e., the system architecture is drafted).
■ Component specification
The task, behavior, internal construction, and interfaces to other components are defined for each component.
■ Programming
Each specified component is programmed (i.e., implemented as a module, unit, class etc.) using a specific programming language.
Because it is easiest to identify defects at the level of abstraction on which they occur, each of the steps in the left-hand branch is given a corresponding testing step in the right-hand branch. The right-hand branch therefore represents an integration and testing flow during which system components are successively put together (i.e., integrated) to build increasingly large subsystems that are then tested to ensure that they fulfill their proposed functions. The integration and testing process ends with acceptance testing for the complete system.
■ Component tests ensure that each individual component fulfills its specified requirements.
■ Integration tests ensure that groups of components interact as specified by the technical design.
■ System tests ensure that the system as a whole functions according to its specified requirements.
■ The acceptance test checks that the system as a whole adheres to the contractually agreed customer and end-user criteria.
These test steps represent a lot more than just a chronological order. Each test level checks the product (and/or specific work products) at a different level of abstraction and follows different testing objectives.
This is why the various test levels involve different testing techniques, different testing tools, and specialized personnel. Section 3.4 presents more details regarding each of these test levels.
In general, each test level can include verification and validation tests:
Did we build the system right?
■ Verification involves checking that the test object fulfills its specifications completely and correctly. In other words, the test object (i.e., the output of the corresponding development phase) is checked to see whether it was “correctly” developed according to its specifications (the input for the corresponding phase).
Did we build the right system?
■ Validation involves checking that the test object is actually usable within its intended context. In other words, the tester checks whether the test object actually solves the problem assigned to it and whether it is suited to its intended use.
Practically speaking, every test includes both aspects, although the validation share increases with each level of abstraction. Component tests are largely focused on verification, whereas an acceptance test is mainly about validation.
The V-model’s hallmarks
To summarize, the most important characteristics of the V-model are:
■ Development and test activities take place separately (indicated by the left and right-hand branches) but are equally important to the success of the project.
■ The model’s “V” shape helps to visualize the verification/validation aspects of testing.
■ It differentiates between cooperating test levels, with each level testing against its corresponding development level.
The principle of early testing
The V-model creates the impression that testing begins late in the development process, following implementation. This is wrong! The test levels in the right-hand branch of the model represent the distinct phases of test execution. Test preparation (planning, analysis, and design) must begin within the corresponding development step in the left-hand branch.
3.2 Iterative and Incremental Development Models
Iterative development
The basic idea behind iterative development is that the development team can use the experience they gain from previous development stages along with real-world and customer feedback from earlier system versions to improve the product in future iterations. Such improvements can take the form of fault corrections or the alteration, extension or addition of specific features. The primary objective of all these scenarios is to improve the product step by step in order to meet customer expectations increasingly accurately.
Incremental development
The idea behind incremental development is to develop a product in preplanned stages, with each completed stage offering a more full-featured version (increment) of the product. Increments can vary greatly in size— for example from changing a simple web page layout to adding a complete new module with additional functionality. The primary objective of incremental development is to minimize time to market—i.e., to release a simple product version (or a simple version of a feature) to provide the customer as quickly as possible with a working version of the product or feature. Further enhancements will then be offered continually depending on the customer’s responses and wishes.
Iterative-incremental development
In practice, the borders between these two methodologies are blurred and they are often referred to together as iterative-incremental development. A defining characteristic of both is that each product release enables you to receive regular, early feedback from the customer and/or end-user. This reduces the risk of developing a system that doesn’t meet the customer’s expectations.
Examples of combined iterative-incremental models are: the spiral model [Boehm 86], Rapid Application Development (RAD) [Martin 91], Rational Unified Process (RUP) [Kruchten 03], and Evolutionary Development [Gilb 05].
Agile software development
All forms of agile software development are iterative-incremental development models. The best-known agile models are: Extreme Programming (XP) [Beck 04], Kanban [URL: Kanban], and Scrum [Beedle 02], [URL: Scrum Guide]. In recent years, Scrum has become the most popular of these and is extremely widespread.

Fig. 3-3 Scrum-based agile development
Testing to the rhythm of the iterations
The pace at which new increments/releases are created varies from model to model. While non-agile iterative-incremental projects tend to foresee releases at intervals of six months to a year, or sometimes even longer, agile models in contrast attempt to reduce the release cycle to a quarterly, monthly, or even weekly rhythm.
Here, testing has to be adapted to fit such short release cycles. For example, this means that every component requires re-usable test cases that can be easily and instantly repeated for each new increment. If this condition is not met, you risk reducing system reliability from increment to increment.
Each increment also requires new test cases that cover any additional functionality, which means the number of test cases you need to maintain and execute (on each release) increases over time. The shorter the release cycle, the harder it becomes to execute all test cases satisfactorily within the allotted release timeframe, even though doing so remains critical. Test automation is therefore an important tool when adapting your testing to agile development.
Continuous Integration and Continuous Deployment
Once you have set up a reliable automated test environment that executes your test cases with sufficient speed, you can use it for every new build. When a component is modified, it is integrated into the previous complete build, followed by a fresh automated test run. Any failures that appear should be fixed in the short term. This way, the project always has a fully integrated and tested system running within its test environment. This approach is called “Continuous Integration” (CI).
This approach can be augmented using “Continuous Deployment” (CD): if the test run (during CI) is fault-free, the tested system is automatically copied to the production environment, installed there, and thus deployed in a ready-to-run state.
Continuous Delivery = Continuous Testing
Combining CI and CD results in a process called “Continuous Delivery”. These techniques can only be successfully applied if you have a largely automated testing environment at your disposal which enables you to perform “continuous testing”.
Continuous testing and other critical agile testing techniques are explained in detail in [Crispin 08] and [Linz 14].
3.3 Software Development in Project and Product Contexts
The requirements for planning and traceability of development and testing vary according to the context. Likewise, the appropriateness of a particular lifecycle model for the development of a specific product also depends on the contexts within which it is developed and used. The following project- and product-based factors play a role in deciding which model to use:
■ The company’s business priorities, project objectives, and risk profile. For example, if time-to-market is a primary requirement.
■ The type of product being developed. A small (perhaps department-internal) system has a less demanding development process than a large system designed for multi-year use by a huge customer base, such as our VSR-II case study project. Such large products are often developed using multiple models.
■ The market conditions and technical environment in which the product is used. For example, a product family developed for use in the Internet of Things (IoT) can consist of multiple types of objects (devices, services, platforms, and so on), each of which is developed using a specific and suitable lifecycle model. Because IoT objects are used for long periods of time in large numbers, it makes sense if their operational usage (distribution, updates, decommissioning, and so on) is mirrored in specific phases or catalogs of tasks within the lifecycle model. This makes developing new versions of such a system particularly challenging.
■ Identified product risks. For example, the safety aspects involved in designing and implementing a vehicle braking system.
■ Organizational and cultural aspects. For example, the difficulties generated by communication within international teams can make iterative or agile development more difficult.
Case Study: Mixing development models in the VSR-II project
One of the objectives of the VSR-II project is to make it “as agile as possible”, so the DreamCar module and all the browser-based front-end components and subsystems are developed in an agile Scrum environment. However, because they are safety-critical, the ConnectedCar components are to be developed using the traditional V-model.
Prototyping [URL: Prototyping] is also an option early on in a project and, once the experimental phase is complete, you can switch to an incremental approach for the rest of the project.
Tailoring
A development model can and should be adapted and customized for use within a specific project. This adaptation process is called “tailoring”.
Tailoring can involve combining test levels or certain testing activities and organizing them especially to suit the project at hand. For example, when integrating off-the-shelf commercial software into a larger system, interoperability tests at the integration testing stage (for example, when integrating with existing infrastructure or systems) can be performed by the customer rather than the supplier, as can acceptance testing (functional and non-functional operational and customer acceptance tests). For more detail, see section 3.4 and 3.5.
The tailored development model then comprises a view of the required activities, timescales, and objectives that is binding for all project participants. Any detailed planning (schedules, staffing, and infrastructure allocation) can then utilize and build upon the tailored development model.
Attributes of good testing
Regardless of which lifecycle model you choose, your tailoring should support good and effective testing. Your testing approach should include the following attributes:
■ Testing and its associated activities are included as early as possible in the lifecycle—for example, drafting test cases and setting up the test environment (see the principle of early testing above).
■ For every development activity, a corresponding test activity is planned and executed.
■ Test activities are planned and managed specifically to suit the objectives of the test level they belong to.
■ Test analysis and test design begin within the corresponding development phase.
■ As soon as work products (requirements, user stories, design documents, code etc.) exist, testers take part in discussions that refine them. Testers should participate early and continuously in this refinement process.
3.4 Testing Levels
A software system is usually composed of a number of subsystems, which in turn are made up of multiple components often referred to as units or modules. The resulting system structure is also called the system’s “software architecture” or simply its “architecture”. Designing an architecture that perfectly supports the system’s requirements is a critical part of the software development process.
During testing, a system has to be examined and tested on each level of its architecture, from the most elementary component right up to the complete, integrated system. The test activities that relate to a particular level of the architecture are known as a testing “level”, and each testing level is a single instance of the test process.
The following sections detail the differences between the various test levels with regard to their different test objects, test objectives, testing techniques, and responsibilities/roles.
3.4.1 Component Testing
Terminology
Component testing involves systematically checking the lowest-level components in a system’s architecture. Depending on the programming language used to create them, these components have various names, such as “units”, “modules” or (in the case of object-oriented programming) “classes”. The corresponding tests are therefore called “module tests”, “unit tests”, or “class tests”.
Components and component testing
Regardless of which programming language is used, the resulting software building blocks are the “components” and the corresponding tests are called “component tests”.
The test basis
The component-specific requirements and the component’s design (i.e., its specifications) form the test basis. In order to design white-box tests or to evaluate code coverage, you must analyze the component’s source code and use it as an additional test basis. However, to judge whether a component reacts correctly to a test case, you have to refer to the design or requirements documentation.
Test objects
As detailed above, modules, units, or classes are typical test objects. However, things like shell scripts, database scripts, data conversion and migration procedures, database content, and configuration files can all be test objects too.
A component test verifies a component’s internal functionality
A component test typically tests only a single component in isolation from the rest of the system. This isolation serves to exclude external influences during testing: If a test reveals a failure, it is then obviously attributable to the component you are testing. It also simplifies design and automation of the test cases, due to their narrowly focused scope.
A component can itself consist of multiple building blocks. The important aspect is that the component test has to check only the internal functionality of the component in question, not its interaction with components external to it. The latter is the subject of integration testing. Component test objects generally arrive “fresh from the programmer’s hard disk”, making this level of testing very closely allied to development work. Component testers therefore require adequate programming skills to do their job properly.
The following example illustrates the point:
Case Study: Testing the calculate_price class
According to its specifications, the VSR-II DreamCar module calculates a vehicle’s price as follows:
We start with the list price (baseprice) minus the dealer discount (discount). The special edition markup (specialprice) and the price of any additional extras (extraprice) are then added. If three or more extras not included with the special edition are added (extras), these extras receive a 10% discount. For five extras or more, the discount increases to 15%. The dealer discount is subtracted from the list price, while the accessory discount is only applied to the extras. The two discounts cannot be applied together.
The resulting price is calculated using the following C++ method:

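The original listing is not reproduced in this excerpt. The following is a sketch reconstructed from the specification above and from the defect noted in the text; the parameter and variable names are assumptions:

```cpp
#include <cmath>

// Reconstructed sketch of calculate_price(); names are assumptions based on
// the specification text. The defect discussed in the text is preserved:
// because extras >= 3 is checked first, the branch for extras >= 5 can
// never be reached, so the 15% discount is never granted.
double calculate_price(double baseprice, double specialprice,
                       double extraprice, int extras, double discount)
{
    double addon_discount;   // discount on the extras, in percent

    if (extras >= 3)
        addon_discount = 10;
    else if (extras >= 5)    // dead code: extras >= 3 already matched
        addon_discount = 15;
    else
        addon_discount = 0;

    if (discount > addon_discount)
        addon_discount = discount;

    // the dealer discount applies to the list price only,
    // the accessory discount applies to the extras only
    return baseprice / 100.0 * (100.0 - discount)
         + specialprice
         + extraprice / 100.0 * (100.0 - addon_discount);
}
```

Note that the ordering of the two extras checks is precisely the defect referred to in the text: the 15% branch is unreachable.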
The test environment
In order to test this calculation, the tester uses the corresponding class interface by calling the calculate_price() method and providing it with appropriate test data. The tester then records the component’s reaction to this call—i.e., the value returned by the method call is read and logged.
(Note: this piece of code is buggy; the code for calculating the discount for ≥ 5 extras can never be reached. This coding error serves as an example to explain the white-box analysis detailed in Chapter 5.)
To do this you need a “test driver”. A test driver is a separate program that makes the required interface call and logs the test object’s reaction (see also Chapter 5).
For the calculate_price() test object, a simple test driver could look like this:

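The driver listing is likewise not reproduced in this excerpt; a minimal sketch, assuming the calculate_price() signature described above, could be:

```cpp
#include <cmath>
#include <cstdio>

// calculate_price() as described in the specification above, repeated here
// only so that the driver compiles stand-alone (names are assumptions).
double calculate_price(double baseprice, double specialprice,
                       double extraprice, int extras, double discount)
{
    double addon_discount;
    if (extras >= 3)      addon_discount = 10;
    else if (extras >= 5) addon_discount = 15;  // unreachable, see the note above
    else                  addon_discount = 0;
    if (discount > addon_discount)
        addon_discount = discount;
    return baseprice / 100.0 * (100.0 - discount)
         + specialprice
         + extraprice / 100.0 * (100.0 - addon_discount);
}

// A minimal test driver: it calls the test object with the given test data,
// compares the returned price against the expected value, and logs a verdict.
bool test_calculate_price(double baseprice, double specialprice,
                          double extraprice, int extras, double discount,
                          double expected)
{
    double price = calculate_price(baseprice, specialprice,
                                   extraprice, extras, discount);
    bool ok = std::fabs(price - expected) < 0.01;
    std::printf("calculate_price(...) = %.2f, expected %.2f -> %s\n",
                price, expected, ok ? "PASS" : "FAIL");
    return ok;
}
```

Calling test_calculate_price(10000.0, 2000.0, 1000.0, 5, 0.0, 12850.0) reproduces the failure discussed below: the driver expects the 15% discount on the extras, but the test object applies only 10%.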
The test driver in our example is very simple and could, for example, be extended to log the test data and the results with a timestamp, or to input the test data from an external data table.
Developer tests
To write a test driver you need programming skills. You also have to study and understand the test object’s code (or at least, that of its interface) in order to program a test driver that correctly calls the test object. In other words, you have to master the programming language involved and you need access to appropriate programming tools. This is why component testing is often performed by the component’s developers themselves. Such a test is then often referred to as a “developer test”, even though “component testing” is what is actually meant. The disadvantages of developers testing their own code are discussed in section 2.4.
Testing vs. debugging
Component tests are often confused with debugging. However, debugging involves eliminating defects, while testing involves systematically checking the system for failures (see section 2.1.2).
Our Tip
Use component test frameworks
■ Using component test frameworks (see [URL: xUnit]) significantly reduces the effort involved in programming test drivers, and creates a consistent component test architecture throughout the project. Using standardized test drivers also makes it easier for other members of the team who aren’t familiar with the individual components or the test environment to perform component tests. These kinds of test drivers can be controlled via a command-line interface and provide mechanisms for handling test data, and for logging and evaluating test results. Because all test data and logs are identically structured, it is possible to evaluate the results across multiple (or all) tested components.
Component test objectives
The component testing level is characterized not only by the type of test objects and the test environment, but also by very specific testing objectives.
Testing functionality
The most important task of a component test is checking that the test object fully and correctly implements the functionality defined in its specifications (such tests are also known as “function tests” or “functional tests”). In this case, functionality equates to the test object’s input/output behavior. In order to check the completeness and correctness of the implementation, the component is subjected to a series of test cases, with each covering a specific combination of input and output data.
Case Study: Testing VSR-II’s price calculations
This kind of testing of input/output data combinations is nicely illustrated by the test cases in the example shown above. Each test case inputs a specific price combined with a specific number of extras. The test case then checks whether the test object calculates the correct total price.
For example, test case #2 checks the “discount for five or more extras”. When test case #2 is executed, the test object outputs an incorrect total price. Test case #2 produces a failure, indicating that the test object does not fulfill its specified requirements for this input data combination.
Typical failures revealed by component testing are faulty calculations or missing (or badly chosen) program paths (for example, overlooked or wrongly interpreted special cases).
Testing for robustness
At run time, a software component has to interact and swap data with multiple neighboring components, and it cannot be guaranteed that the component won’t be accessed and used wrongly (i.e., contrary to its specification). In such cases, the wrongly addressed component should not simply stop working and crash the system, but should instead react “reasonably” and robustly. Testing for robustness is therefore another important aspect of component testing. The process is very similar to that of an ordinary functional test, but serves the component under test with invalid input data instead of valid data. Such test cases are also referred to as “negative tests” and assume that the component will produce suitable exception handling as output. If adequate exception handling is not built in, the component may produce runtime errors, such as division by zero or null pointer access, that cause the system to crash.
Case Study: Negative tests
For the price calculation example we used previously, a negative test would involve testing with negative input values or a false data type (for example, char instead of int):

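The table of negative test cases is not reproduced in this excerpt. As a sketch of the idea, covering the negative-value cases (a wrong data type would already be rejected by the C++ compiler, so it is omitted here), a robustness-checking variant of the calculation could look like this. The ERR_CODE values and the checked signature are assumptions based on the discussion below:

```cpp
#include <cmath>

// Hypothetical robustness sketch: the text refers to an ERR_CODE that the
// extended test driver evaluates, so this variant of the price calculation
// is assumed to validate its inputs and return such a code instead of
// crashing or producing a nonsensical price. Names are illustrative only.
enum ERR_CODE { OK = 0, ERR_NEGATIVE_INPUT, ERR_BAD_EXTRAS };

ERR_CODE calculate_price_checked(double baseprice, double specialprice,
                                 double extraprice, int extras,
                                 double discount, double* result)
{
    // Negative tests target exactly these guards: invalid input must lead
    // to controlled exception handling, not a runtime error.
    if (baseprice < 0.0 || specialprice < 0.0 || extraprice < 0.0)
        return ERR_NEGATIVE_INPUT;
    if (extras < 0)
        return ERR_BAD_EXTRAS;

    double addon_discount = 0.0;
    if (extras >= 5)
        addon_discount = 15.0;           // thresholds in the corrected order
    else if (extras >= 3)
        addon_discount = 10.0;
    if (discount > addon_discount)
        addon_discount = discount;

    *result = baseprice / 100.0 * (100.0 - discount)
            + specialprice
            + extraprice / 100.0 * (100.0 - addon_discount);
    return OK;
}
```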
Various interesting things come to light:
■ Because the number of possible “bad” input values is virtually limitless, it is much easier to design “negative tests” than it is to design “positive tests”.
■ The test driver has to be extended in order to evaluate the exception handling produced by the test object.
■ Exception handling within the test object (evaluating ERR_CODE in our example) requires additional functionality. In practice, you will often find that half of the source code (or sometimes more) is designed to deal with exceptions. Robustness comes at a price.
Alongside functionality and robustness, component testing can also be used to check other attributes of a component that influence its quality and that can only be tested (if at all) using a lot of additional effort at higher test levels. Examples are the non-functional attributes “efficiency” and “maintainability”.
Testing for efficiency
The efficiency attribute indicates how economically a component interacts with the available computing resources. This includes aspects such as memory use, processor use, or the time required to execute functions or algorithms. Unlike most other test objectives, the efficiency of a test object can be evaluated precisely using suitable test criteria, such as kilobytes of memory or response times measured in milliseconds. Efficiency testing is rarely performed for all the components in a system. It is usually restricted to components with specific efficiency requirements defined in the requirements catalog or the component’s specification: for example, when limited hardware resources are available in an embedded system, or for a real-time system that has to guarantee predefined response-time limits.
Testing for maintainability
Maintainability incorporates all of the attributes that influence how easy (or difficult) it is to enhance or extend a program. The critical factor here is the amount of effort that is required for a developer (or team) to get a grasp of the existing program and its context. This is just as valid for a developer who needs to modify a system that they programmed years ago as for someone who is taking over code from a colleague.
The main aspects of maintainability that need to be tested are code structure, modularity, code commenting, comprehensibility and up-to-dateness of the documentation, and so on.
Case Study: Code that is difficult to maintain
The sample calculate_price() code contains a number of maintainability issues. For example, there are no code comments at all, and numerical constants have not been declared as such and are instead hard-coded. If such a constant needs to be modified, it isn’t clear if and where else in the system it needs to be changed, forcing the developer to make huge efforts figuring this out.
Attributes like maintainability cannot of course be checked using dynamic tests (see Chapter 5). Instead, you will need to analyze the system’s specifications and its codebase using static tests and review sessions (see section 4.3). However, because you are checking attributes of individual components, this kind of analysis has to be carried out within the context of component testing.
Testing strategies
As already mentioned, component testing is highly development-oriented. The tester usually has access to the source code, supporting a white-box oriented testing technique in component testing. Here, a tester can design test cases using existing knowledge of a component’s internal structure, methods, and variables (see section 5.2).
White-box tests
The availability of the source code is also an advantage during test execution, as you can use appropriate debugging tools (see section 7.1.4) to observe the behavior of variables during testing and see whether the component functions properly or not. A debugger also enables you to manipulate the internal state of a component, so you can deliberately initiate exceptions when you are testing for robustness.
Case Study: Code as test basis
The calculate_price() code includes the following test-worthy statement:

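The statement itself is missing from this excerpt; judging by the condition discussed next, it is presumably the point at which the dealer discount overrides the accessory discount. Wrapped in a hypothetical helper so it can be exercised in isolation:

```cpp
// Presumed test-worthy statement, wrapped in a helper so it can be called
// directly: whenever the dealer discount exceeds the accessory discount, it
// replaces it. The price calculation specification says nothing about this
// case, which is exactly what makes the statement worth testing.
double effective_addon_discount(double discount, double addon_discount)
{
    if (discount > addon_discount)
        addon_discount = discount;
    return addon_discount;
}
```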
Additional test cases that fulfill the condition (discount > addon_discount) are simple to derive from the code. But the price calculation specification contains no relevant information, and the corresponding functionality is not part of the requirements. A code review can reveal a deficiency like this, enabling you to check whether the code is correct and the specification needs to be changed, or whether the code needs to be modified to fit the specification.
However, in many real-world situations, component tests are “only” performed as black-box tests—in other words, test cases are not based on the component’s inner structure. Software systems often consist of hundreds or thousands of individual building blocks, so analyzing code is only really practical for selected components.
During integration, individual components are increasingly combined into larger units. These integrated units may already be too large to inspect their code thoroughly. Whether component testing is done on the individual components or on larger units (made up of multiple components) is an important decision that has to be made as part of the integration and test planning process.
Test-first
“Test-first” is the state-of-the-art approach to component testing (and, increasingly, on higher testing levels too). The idea is to first design and automate your test cases, and to program the code that implements the component as a second step. This approach is strongly iterative: you test your code with the test cases you have already designed, and you then extend and improve your product code in small steps, repeating until the code fulfills your tests. This process is referred to as “test-first programming” or “test-driven development” (often abbreviated to TDD—see also [URL: TDD], [Linz 14]). If you derive your test cases systematically using well-founded test design techniques (see Chapter 5), this approach produces even more benefits—for example, negative tests, too, will be drafted before you begin programming, and the team is forced to clarify the intended product behavior for these cases.
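As a sketch of this rhythm (all names hypothetical, not taken from VSR-II): the test function below is written and automated first, and the implementation is then grown in small steps until every assertion passes. Fixing the thresholds in the right order from the start is exactly the kind of benefit test-first provides, compared with the dead-code defect seen earlier in this chapter:

```cpp
#include <cassert>

// Test-first sketch (hypothetical helper): test_extras_discount() is drafted
// before extras_discount() exists; the implementation is then written in
// small steps until every assertion holds.
double extras_discount(int extras)       // percent discount on the extras
{
    if (extras >= 5) return 15.0;        // checking the larger threshold
    if (extras >= 3) return 10.0;        // first keeps both branches reachable
    return 0.0;
}

void test_extras_discount()              // written before the implementation
{
    assert(extras_discount(0) == 0.0);
    assert(extras_discount(2) == 0.0);   // boundary just below first threshold
    assert(extras_discount(3) == 10.0);
    assert(extras_discount(4) == 10.0);  // boundary just below second threshold
    assert(extras_discount(5) == 15.0);  // this case would have exposed the
    assert(extras_discount(9) == 15.0);  // wrongly ordered if-cascade
}
```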