Latest change: 3 August 2013
Software Test Design and Analysis
A comprehensive handbook of test analysis and design techniques
© Bogdan Bereza

To my mother, who taught me to doubt
To Magda, who taught me to trust
To Anna and to Robert, who still learn to tell right from wrong
To my father, who wanted to calculate


Review and Support Team   ||  Your ideas are welcome  ||  The process of writing this book


Now working on
2.12.17 Equivalence Partitioning (this work is not visible here)


Contents
Introduction [~5%]
For Whom and Why Is This Book Needed?
What Other Software Testing Knowledge Exists, and What Is Missing?
How to Read and Use This Book
Review and Support Team
Part 1: The Context [~10%]
Chapter 1.1: Production Process and Quality Control
Chapter 1.2: History of Software Engineering and Software Testing
Chapter 1.3: Basic Principles of Quality Assurance and Quality Control
Chapter 1.4: Product, Project and Process Testing
Part 2: What to Test? Test Analysis and Design Methods [~70%]
Chapter 2.1 How Much Checking Makes Sense? Risk-based Answer
Chapter 2.2 Basic Classification Is the Entry Point to Understanding
Chapter 2.3 The Sources of Test Design Knowledge
Chapter 2.4 Who Shall Design?
Chapter 2.5 How to Design Test Cases? Algorithms, Heuristics, Intuition, Creativity, Randomness and Quackery
Chapter 2.6 When to Design
Chapter 2.7 Test Design for Different Test Goals
Chapter 2.8 Test Design for Various Domains
Chapter 2.9 Test Design for Various Technologies and Architectures
Chapter 2.10 Test Coverage and Test Design
Chapter 2.11 Test Design Techniques Based On Various Mathematical Models
Chapter 2.12 Test Design Techniques for Various Model Contents
Chapter 2.13 Experience-Based Test Design
Chapter 2.14 What Methods To Choose?
Part 3: Using Test Design to Achieve Business Goals [~15%]
3.1 Test Case Organization and Specification
3.2 What Is the Business Value of Good Testing?
3.3 Back to Basics: Comparative Value of Various SQA Methods
3.4 How Do the Contents of This Book Make Your Business Better?
3.5 I Know How to Design Tests in Many Ways, Now Help Me Decide What and When

Sources

Key to the Exercises

Index


Introduction [~5%]


For Whom and Why Is This Book Needed?
What Is It Needed For?
Testers need to know how to design test cases. Testers' main responsibility is to choose, from an infinite - or at least prohibitively huge - number of possible combinations of paths and data values, those few - terrifyingly few - test cases that will really be exercised during test execution and thus become the final bulwark of defence against the risk of failure, loss and catastrophe.

If it were not for test design, testing would be just requirements engineering turned upside-down, plus some knowledge of how to manage bug reports, plus a handful of tools with funny names; a kind of last-resort fire brigade, needed only when error prevention fails. But it is not so, because testing means test design. Test design (and analysis, which always precedes design) is the very essence of testing; it is what makes testing a separate, special set of skills.

However, a comprehensive book on test analysis and design techniques is missing. There are several reasons for this.

One reason is that testers and their employers alike typically look for information that is needed here and now, and which provides immediate relief. Therefore, many books, web pages and other information sources on testing are in one respect sadly similar to job advertisements: they value detailed technical or domain knowledge more than sound, generic test knowledge. Advertisements for a "test analyst with Selenium skill and insurance industry experience" are incomparably more common than those seeking specialists with general test knowledge, capable and willing to learn the required domain and technical skills fast.
The result is that many testers learn test design techniques on the fly, and only within given technology and domain boundaries, and so fail to see the whole picture.

Who cares? Well, everybody should, because this way the transfer of knowledge and skill is minimal. Next time, the tester who has already learned to design test cases based on UML activity diagrams for a banking system will need to learn from scratch how to design test cases based on UML state transition diagrams for an embedded system. This way, the waste of time and money is maximal.

I can understand, even if not completely forgive, such an approach in hasty, we-need-it-tomorrow project situations, but it is definitely the wrong approach to learning and teaching testing. If we want really good practitioners, we need effective textbooks for them. These textbooks must not mix up test design skills with everything else, including technology and domain knowledge. Even if test practitioners need to mix these topics in project realities, they must not mix them in their heads!

The second reason is that readers', authors' and publishers' short-sighted motives take precedence over a far-sighted approach when they choose to put together test design, test organization, test management and even test automation knowledge in one volume, on one web site, or in one presentation. This sells better, probably because it gives an impression of no-nonsense pragmatism, while treating these subjects separately may give an impression of being theoretical and academic. Again, in order to become really high-quality professionals, capable of applying our skills in many different situations, we should learn - and teach - these areas separately. Professionals need to be able to use their skills multi-dimensionally, not only as one or two or even ten specific sequences.

To avoid this trap myself, I will keep Part 1 "The Context" and Part 3 "Using Test Design to Achieve Business Goals" of my book as short as possible, and focus 70% on Part 2 "What to Test? Test Analysis and Design Methods".

The third reason is that more than half of the book market for testing is monopolized by ISTQB-based textbooks. There is only a limited number of test book copies people can buy, and since books with "ISTQB" on the title page are a safer bet for publishers, they dominate. Good-bye to serious test design!

And of course, the main reason is, THICK VOLUMES SELL BADLY!

By learning test analysis and design techniques separately, you'll be able to use them more effectively and really efficiently within different test strategies, processes and test organizations, equally well for various technologies, and in all conceivable domains and businesses.

What Is Available, and What Is Not?
I searched for the phrase "test design techniques" on amazon.com, and what I got highest up on the list was... "Exploratory Software Testing: Tips, Tricks, Tours, and Techniques", "Agile Testing: A Practical Guide for Testers and Agile Teams", and some more similar titles. Even keeping my general reservations about exploratory and agile approaches aside for a while, these books were not what I was looking for: not comprehensive textbooks on test design techniques. The ones I found mix test design with many other issues, and make no attempt to cover all available techniques. And of course, "tips, tricks and tours" are always easier to sell than boring lectures.
Let the search go on, back to good old names. Boris Beizer's "Software Testing Techniques" and "Black Box Testing" by the same author are very good, yet far from complete. Glenford Myers' "The Art of Software Testing" (first published as early as 35 years ago!) still is and always will be the ultimate in describing some techniques, but definitely not all that are worth knowing.

Then of course there is Lee Copeland's "A Practitioner's Guide to Software Test Design". Great book, in my opinion one of the best there is on this subject. However, it is still not fully what I believe testers need today; it does not offer a really comprehensive view. Saying this, I do not mean just omitting a few rather exotic and unimportant techniques; I mean omitting a lot. For example, UML contains fourteen types of diagrams, which are extremely useful for designing test cases; Lee's book covers two of them: use case diagrams and state transition diagrams. That was the author's choice, and I respect it, but I think more is needed by many testers.

There are books based on the world's most popular test certification scheme: the ISTQB Advanced Level syllabi for test analysts and technical test analysts. Even keeping my general reservations about the contents and structure of the ISTQB syllabi aside for a while, the simple truth is that the design techniques with ISTQB blessing represent just a small part of the total number of test design techniques. For example, they hardly cover techniques specific to non-functional testing, and they are mostly limited to test design for dynamic test execution, leaving out almost completely the rules of static testing.

Paul C. Jorgensen's "Software Testing: A Craftsman's Approach" is exceptional, since it refers to mathematical concepts and logical reasoning. At last, a book that dares to use the words "discrete math" and "graph theory"! To tell the truth, I find it hard to understand why all other books choose to teach test design without mathematics. Perhaps because the context-driven school pretends testing is more a social skill than anything mathematical? However, Paul's book is very much focused on functional testing and formal test design: good, but I believe that non-functional testing and experience-based testing must not be left to intuitive magicians; they must be taken care of in the same volume.

Last but not least, one must not forget Geoff Quentin's worthwhile attempt "The Tester's Handbook", whose chapter 7 "Specific Test techniques" contains the most comprehensive list of test techniques available now. However, I do not find it complete, either.

This Book Fills a Market Gap
Please note: I do not criticize these books, nor do I think I am somehow wiser than their authors. On the contrary, I will make maximal use of the knowledge they have already provided. My wish is to fill a market and educational gap, not a knowledge gap.
That is it: my ambition is to provide the missing link. Not for the sake of theory, nor for the sake of academic ambitions - I will hardly present anything that is not already known and described somewhere. My goal is to provide a fully practical handbook, gathering in one place as many good techniques for designing test cases as possible - not just a chosen few. This book will help testers of all kinds: exploratory and scripted, agile and traditional, waterfall and iterative, manual and automated. It should be equally useful for regression testing and for confirmation testing, for unit level and system level, for testing embedded and database systems written in object-oriented or in assembly languages, for static and dynamic testing, and for testing software, designs, models and documents.

Let us disregard the misleading dichotomies of white-box versus black-box, formal versus informal, web testing versus database testing, etc. There are some issues specific to some domains, but first of all, there are test design techniques common to all of them. In project realities, domain and technology knowledge helps, but the lack of general test design knowledge kills. All testers need to make the same kind of decisions: what to actually check, and what to leave alone. This book will help them make the right decisions.


Contents


What Other Software Testing Knowledge Exists, and What Is Missing?
[the relationship / correlation / causal relationship between test design techniques plus test coverage and failure risk (probability) || any knowledge of intellectual and organizational difficulties of using / deploying various test design methods]
Contents



How to Read and Use This Book
The book is intended to be used either as a textbook, providing as complete and comprehensive a guide as possible, or as a reference book, where each area and each technique can be consulted to some extent independently.
[...]
For the techniques described or analyzed, most of the following issues will be covered:

Introduction:
1. A practical situation where this test analysis and design technique is helpful (including  comparative strengths & weaknesses vs. other techniques which could be used in the same situation)
2. How well known and widespread is the use of the technique? (a kind of "social history" of the technique, with links to known test syllabi and approaches, such as exploratory testing).
Body:
 
3. Detailed description of the technique (including a way to measure the % coverage that any given set of test cases will obtain)
4. Practical case of using the technique
5. Exercise
References:
6. Some technical and organizational aspects of using the technique: tools, organizational and process constraints. Is a lightweight version of the technique useful and used, perhaps under a different, or even misleading, name? For example, testing invalid equivalence classes is often used as so-called negative testing (see the sketch after this list).
7. Cross-reference example: example usage of the technique with another technique, or some other test design techniques.
A complete reference matrix might be tempting, but too huge for practical purposes. To cover each pair of technique combinations, there would be a total of (N-1) x (N-1) such pairs (e.g. 29 x 29 = 841 pairs), and for all conceivable combinations, N^N combinations (e.g. 29^29 ≈ 2.6e+42)... just joking.

Only less obvious but practically promising situations will be described. For example, all flow-graph-based techniques (e.g. test design based on use case diagrams, activity diagrams or state transition diagrams) can be - and usually are - combined with equivalence class partitioning plus boundary value analysis, so stating this fact many times would not be practical.
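As a concrete illustration of item 6 and of the remark above, here is a minimal sketch, in Python, of equivalence partitioning combined with boundary value analysis; the invalid classes are exactly what is often sold as "negative testing". The requirement (a withdrawal amount accepted between 10 and 1000 inclusive) and the function name accept_withdrawal are hypothetical, invented purely for this illustration.

    # Hypothetical rule under test: withdrawal amounts from 10 to 1000 inclusive are accepted.
    def accept_withdrawal(amount: int) -> bool:
        return 10 <= amount <= 1000

    # Step 1: equivalence partitioning - one representative value per class.
    # The two invalid classes are what is commonly called "negative" tests.
    partitions = {
        "valid: 10..1000":        (500,  True),
        "invalid: below minimum": (3,    False),
        "invalid: above maximum": (5000, False),
    }

    # Step 2: boundary value analysis - values on and just outside each boundary.
    boundaries = {
        "just below minimum": (9,    False),
        "minimum":            (10,   True),
        "maximum":            (1000, True),
        "just above maximum": (1001, False),
    }

    if __name__ == "__main__":
        for name, (value, expected) in {**partitions, **boundaries}.items():
            verdict = "PASS" if accept_withdrawal(value) == expected else "FAIL"
            print(f"{verdict}  {name:<22}  amount={value}")

In a flow-graph-based design, each of these values would then be fed into the step of a use case, activity or state transition path where the amount is entered, rather than being exercised in isolation.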

Contents


Review and Support Team
Thank you to all members of the review team for your willingness to help and support.

In order to make this book as good and useful as possible, I intend to make the process of writing it very transparent and interactive. The details of how the co-operation between myself and the Review Team members will be organized are still to be decided, possibly influenced by the - as yet unknown - publisher's requirements and rules.

However, the final result of this process is to be a book, not a blog or a discussion forum. It will have only myself as the responsible author, and it will not change every day or with every tempting thought or idea that appears now or some time in the future. I do not think that a living book, constantly changing, is a good idea, because focus will easily be lost, and achieving its goal will then be postponed for ever. Besides, such an approach may lead to the loss of clear responsibility, or motivation, or both.

Whether it will be published in paper or in electronic form, or forms, and using which model of copyright, will be decided together with the publisher, too.

Review team:
Adam Jadczak, IT-journalist, Poland
Avigdor M. Mevorach, Process QA Manager, Mobileye
David Hayman, Chair of ANZTB, Quality Assurance Practice Manager, Vodafone New Zealand
Declan Kavanagh, Strategic Business & IT Services Ltd.
Derk-Jan de Grood, Holland
Edward Bishop, UK
Florian Fieber, Senior Consultant and Trainer, Loyal Team GmbH
Ladislau Szilagyi, EuroQST, www.euroqst.ro, ladislau_szilagyi AT euroqst.ro
Richard Taylor, Independent Consultant, UK
Tilo Linz, imbus AG, Germany
Supporters:
Aleksander Lipski, Poland
Dorothy Graham, software testing consultant, speaker and author, UK
Ewa Wardzała, Independent Consultant, Poland
Martin Hynie
Mats Wessberg, Sweden
Paul Mansell, www.thetestingrebel.com, UK
Piotr Kundu, Sweden
Putcha V. Narasimham
Contents


Part 1: The Context [~10%]


Chapter 1.1: Production Process and Quality Control
1.1.1 From Tribal Tools to Software Industry: Development or Stagnation?

1.1.2 From Art and Craft to Mass Production and Back

1.1.3 The Deming Revolution

1.1.4 Is Software Really Special?
Contents


Chapter 1.2: History of Software Engineering and Software Testing
1.2.1 Changing Business Context of Software and Computers

1.2.2 From Manhattan Project to TDD and Exploratory: the Evolution

1.2.3 Social and Political History of Software Testing: Trends, People and Organizations

1.2.4 Is It Programming, Computer Science or System Engineering?

1.2.5 The Rise and Fall of Software Testing Profession

1.2.6 Is Agile Really Agile, Is Exploratory Really Special?

Contents



Chapter 1.3: Basic Principles of Quality Assurance and Quality Control
1.3.1 Basic Principles, or Basic Misunderstandings?

1.3.2 How Do We Know It? Is This Knowledge True, Is It Scientific?

1.3.3 The Confusion: Project Management, Risk Management, Testing, Requirements Engineering
Contents


Chapter 1.4: Product, Project and Process Testing

1.4.1 What Is Product Quality?

1.4.2 Testing as Product Quality Measurement

1.4.3 Testing and Project Measurement

1.4.4 Testing as Process Measurement and Improvement


Contents

Part 2: What to Test? Test Analysis and Design Methods [~70%]


Chapter 2.1 How Much Checking Makes Sense? Risk-based Answer

Contents



Chapter 2.2 Basic Classification Is the Entry Point to Understanding

Contents



Chapter 2.3 The Sources of Test Design Knowledge
2.3.1 Black and White and Many Shades of Grey

2.3.2 Models, Half-Models and Mental Models

2.3.3 Test Basis? What Test Basis?
Contents



2.4 Who Shall Design?
2.4.1 The Myth of Independent Testing

2.4.2 Stakeholders

2.4.3 Is It Test Design, or Test Organization?

2.4.4 Model-Based Testing and Automatic Test Case Generation

Contents


2.5 How to Design Test Cases? Algorithms, Heuristics, Intuition, Creativity, Randomness and Quackery

Contents



2.6 When to Design
2.6.1 As Early As Possible?


2.6.2 Depending on the Life Cycle?


2.6.3 Agile Testing, Is There Any?


2.6.4 Yes, During Test Execution as Well
Contents


2.7 Test Design for Different Test Goals

2.7.1 Test Design for Different Quality Attributes
2.7.1.A Principles of Performance Test Design

2.7.1.B Principles of Usability Testing

2.7.1.C Security Testing - Is Penetration Testing Exploratory?

2.7.1.D There Are Hundreds of Quality Attributes - So What?

2.7.1.E Testing Tests: Bug Mutations

2.7.2 Test Design for Various Project Goals
2.7.2.A Dynamic Analysis, Regression, Confirmation, Smoke, and Acceptance Testing - Do They Require Special Test Design?

2.7.2.B Test Design for Dynamic and Static Testing

2.7.2.C Rules of Static Analysis

2.7.2.D Designing for Different Test Levels and Test Objects

Contents


2.8 Test Design for Various Domains
2.8.1 Testing Safety-Critical Systems

2.8.2 How to Test Real-Time Systems?

2.8.3 Test Design Principles for Embedded Systems

2.8.4 From Here to Infinity: Business, Financial, Medical, Insurance, Games, Web Applications...

Contents


2.9 Test Design for Various Technologies and Architectures
2.9.1 Web Testing, Mobile Testing, Database Testing - Where Does It All End?

2.9.2 Testing Documents and Models

2.9.3 SOA, Web Services, Cloud

2.9.4 Hardware Platform: Boards, Displays, Data Communication

2.9.5 Programming Languages

2.9.6 Operating Systems

Contents


2.10 Test Coverage and Test Design
2.10.1 Is Test Coverage a Test Attribute, or a Test Design Technique?

2.10.2 Coverage-Based Test Design for Static and for Dynamic Testing

2.10.3 Test Design for Functional and Structural Test Coverage

2.10.4 Special Case: Test Design for Model and for Source Code Coverage

Contents



2.11 Test Design Techniques Based On Various Mathematical Models
2.11.1 Algebra and Mathematical Analysis

2.11.2 Geometry

2.11.3 Statistics

2.11.4 Probability Theory

2.11.5 Graph Theory

2.11.6 Set Theory

2.11.7 Combinatorics

2.11.8 Numerical Analysis

2.11.9 Formal Logic


Contents

 

2.12 Test Design Techniques for Various Model Contents
2.12.1 Class Diagrams

2.12.2 Component Diagrams

2.12.3 Composite Structure Diagrams

2.12.4 Deployment Diagrams

2.12.5 Object Diagrams

2.12.6 Package Diagrams

2.12.7 Profile Diagrams

2.12.8 Activity Diagrams

2.12.9 Communication Diagrams

2.12.10 Interaction Diagrams

2.12.11 Sequence Diagrams

2.12.12 State Diagrams

2.12.13 Timing Diagrams

2.12.14 Use Case Diagrams

2.12.15 Entity Relationship Diagrams

2.12.16 Data Flow Diagrams

2.12.17 Equivalence Partitioning

2.12.18 Syntax Diagrams

2.12.19 Cause-Effect Diagrams

2.12.20 Control Flow Diagrams
Contents


2.13 Experience-Based Test Design
2.13.1 Pseudoscience - Exploratory Testing

2.13.2 Corporate Culture and Test Design

2.13.3 Group-think and Sociological Aspects of Test Design

2.13.4 Psychological Principles for What to Test, and How Much

2.13.5 Special Rules - Are Bugs Really Social Creatures?

Contents


2.14 What Methods To Choose?
2.14.1 Here Be Dragons! Our Knowledge Is Anecdotal

2.14.2 What Do Standards Say? More Anecdotes

2.14.3 How to Learn More: Connecting Test Design Techniques with Risk Estimation

2.14.4 Bayesian Belief Nets to Discover the Truth
Contents


Part 3: Using Test Design to Achieve Business Goals [~15%]


3.1 Test Case Organization and Specification
3.1.2 Conceptual versus Executed Test Cases


3.1.3 Test Case Types
[Test conditions, test cases, test scripts, test scenarios, test instructions, test sequences, test matrix, test data... more?]

Contents



3.2 What Is the Business Value of Good Testing?

Contents


 


3.3 Back to Basics: Comparative Value of Various SQA Methods

Contents



3.4 How Do the Contents of This Book Make Your Business Better?


Contents




3.5 I Know How to Design Tests in Many Ways, Now Help Me Decide What and When

Contents



Sources
Contents


Key to the Exercises

Contents


Index


Contents