Test Design – Readability versus Writeability

Test Design Risks for Both Manual and Automated Testing

The growing interest in test automation is offset by concerns about risk. One such risk is inadequate test design, which cannot serve as a basis for developing and executing automated test scripts. Keep in mind, however, that the risk of inadequate test design is a problem for testing as a whole, not just for test automation.

High-quality test design can simplify and speed up the work of the:
  • Manual tester (test execution)
  • Automation test engineer (script development and execution)
  • Test manager (manual and automated test result analysis)

Test Design and Requirements

Let’s go over some test design basics. Testing is always performed against requirements, which must be:
  • Complete
  • Consistent
  • Unambiguous
  • Traceable
  • Testable

To verify all of these properties, it’s necessary to start test design activities as early as possible. Requirements typically change during a project, and these changes must be reflected in the test design and test scripts.

Data-driven testing is based on the separation of test steps and test data, which allows for two things:
  • Increasing the number of test data sets (a typical step in test automation) without updating the test script
  • Decreasing test execution time through directed data selection based on equivalence partitioning, boundary values, pairwise testing, etc.
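The separation of steps and data can be sketched as follows. This is a minimal illustration, not code from the article: the login scenario, function names, and data values are invented for the example.

```python
# Data-driven testing: the test steps stay fixed, only the data varies.
# Each tuple is one data set: (username, password, expected_outcome).
# Adding a data set extends coverage without touching the test logic.
TEST_DATA = [
    ("alice", "correct-pass", "success"),   # valid credentials
    ("alice", "wrong-pass", "error"),       # negative case: bad password
    ("", "any-pass", "error"),              # boundary case: empty username
]

def login(username: str, password: str) -> str:
    """Stand-in for the system under test."""
    if username == "alice" and password == "correct-pass":
        return "success"
    return "error"

def run_login_tests() -> int:
    """Execute the same steps against every data set; return the failure count."""
    failures = 0
    for username, password, expected in TEST_DATA:
        actual = login(username, password)
        if actual != expected:
            failures += 1
    return failures

print(run_login_tests())  # 0 when all data sets pass
```

Note how directed data selection (boundary values, equivalence partitions) shows up only in the data table, never in the steps themselves.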

What is a Test Case?

A test case consists of format and content. The basic format is the same everywhere:
  1. Step #
  2. Action
  3. Expected result

The Devil is in the Details

The main question is: “Is the content adequate to the format?”

Consider the typical test case (Fig. 1):


Fig. 1 The typical test case

The questions for this test case are:
  • What do the terms “few,” “any,” or “for example” mean?
  • How can we verify the price?
  • How can we confirm that it’s not a defect when the button is unavailable?
  • How many times do we repeat steps 2–7, and how do we find data for these repetitions?
Note that these questions arise in both manual and automated testing.

What Does the Updated Test Case Look Like?

We can modify the test case structure as follows:
  • Step # (no changes)
  • Action (no changes)
  • Input data (everything the user can type/select/press/etc.)
  • Expected result (all checks performed when the action is completed)

The result is a better test case (Fig. 2):


Fig. 2 A better test case
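The updated four-column structure can be modeled as a simple record. This is a sketch only; the class and field names are my own, and the shopping-cart step is an invented example, not the one from the figure.

```python
from dataclasses import dataclass

@dataclass
class TestStep:
    number: int                  # Step #
    action: str                  # what the tester (or script) does
    input_data: str              # everything the user types/selects/presses
    expected_results: list[str]  # all checks performed after the action

# A concrete step: explicit input data instead of "any" or "for example".
step = TestStep(
    number=2,
    action="Add the product to the shopping cart",
    input_data='Product: "Blue T-shirt", Quantity: 2',
    expected_results=[
        'The cart shows "Blue T-shirt" with quantity 2',
        "The cart total equals 2 x the unit price",
    ],
)

print(len(step.expected_results))  # 2
```

Keeping input data and expected results as explicit fields makes each step directly executable by an automated script and unambiguous for a manual tester.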

Repetitions and Wording

Some of the aforementioned issues are now resolved, but more remain. The next steps are to get rid of the following:
  • Constructions like “Repeat steps from … to …,” which may cause confusion during manual testing and make automated test scripts more complicated and harder to update (don’t forget about requirement changes)
  • Words such as “corresponding,” “any,” “appropriate,” etc. – see Figure 3.


Fig. 3 Sample of the use of “corresponding” within the text


Feel the Difference Between Actions and Expected Results

The next activity is to separate mixed actions and expected results (Fig. 4).



Fig. 4 Mixed actions and expected results



The rules are:
  • All actions (for example, step 12) are placed in the “Action” column.
  • All expected results (for example, steps 13 and 14) are placed in the “Expected results” column.

This reduces the number of test case steps.
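In executable terms, the rule means the action performs and every expected result asserts. The sketch below mirrors the idea from the figure (one action, formerly steps 13 and 14 folded in as its checks); the order-submission scenario and function names are invented for illustration.

```python
def submit_order(cart):
    """The action (step 12): submit the order; returns the resulting page state."""
    return {"message": "Order placed", "items": len(cart)}

def run_step_12(cart):
    """One row of the test case: Action column performs, Expected results column checks."""
    page = submit_order(cart)                 # the action
    checks = [                                # former steps 13 and 14,
        page["message"] == "Order placed",    # now expected results of step 12
        page["items"] == len(cart),
    ]
    return all(checks)

print(run_step_12(["item-a", "item-b"]))  # True
```

Collapsing check-only steps into the expected results of the action that triggers them is exactly what shrinks the step count.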

“Le Mieux Est L’ennemi du Bien,” or BIEGE (The Best Is the Enemy of the Good Enough)

The next step is to get rid of universal scripts where the UI depends on the test data (Fig. 5):



Fig. 5 UI depends on test data



Figure 6 shows another example of universal scripts:



Fig. 6 One more example of universal scripts


There is a universal “Execute action” command at test case step 5 that is executed as “Move 1 element …” for data set GU-01 and as “Move all elements …” for data set GU-02. This means a mix of test case steps and test data, which leads to more complicated manual test actions and automated test script code. In such cases it may be better to develop several test cases that are simpler and easier to understand, automate, and maintain.
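Splitting the universal script into dedicated test cases might look like the sketch below. The `move_elements` helper and the interpretation of data sets GU-01/GU-02 are assumptions based on the figure’s description, not actual code from the article.

```python
def move_elements(elements, count):
    """Stand-in for the UI operation: move `count` elements; returns how many moved."""
    return min(count, len(elements))

# Instead of one "Execute action" step whose meaning depends on the data set,
# each behavior gets its own simple, explicit test case.

def test_move_one_element():
    """Replaces data set GU-01: move exactly one element."""
    moved = move_elements(["a", "b", "c"], count=1)
    return moved == 1

def test_move_all_elements():
    """Replaces data set GU-02: move every element."""
    elements = ["a", "b", "c"]
    moved = move_elements(elements, count=len(elements))
    return moved == len(elements)

print(test_move_one_element() and test_move_all_elements())  # True
```

Two short, single-purpose test cases cost a little duplication but remove the data-dependent branching from both the manual steps and the automation code.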

What Test Case is Recommended in Data-Driven Testing?

Test Case Format:
  • Step #
  • Action
  • Input data (everything the user can type/select/press/etc.)
  • Expected result (all checks performed when the action is completed)

Test Case Content:
  • No explicit cycles – use several data sets instead
  • No ambiguous constructions – use explicit data values or references instead

Data Set:
  • Roles, values
  • References to data storage items

Data Preparation:
  • Preconditions (database content)
  • SQL query execution
  • Ad hoc test cases
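Data preparation via SQL can be sketched as below, using an in-memory SQLite database as a stand-in for the system’s data store. The table, column names, and values are invented for illustration.

```python
import sqlite3

def prepare_database(conn):
    """Precondition: put the database into the exact state the test case expects."""
    conn.execute("CREATE TABLE IF NOT EXISTS products (name TEXT, price REAL)")
    conn.execute("DELETE FROM products")  # guarantee a known-clean starting point
    conn.executemany(
        "INSERT INTO products (name, price) VALUES (?, ?)",
        [("Blue T-shirt", 9.99), ("Red Cap", 4.50)],
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
prepare_database(conn)
count = conn.execute("SELECT COUNT(*) FROM products").fetchone()[0]
print(count)  # 2
```

Running such a preparation step before every execution makes the test repeatable: the expected results can reference concrete database content rather than “corresponding” values.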

How to Know if You’re On the Right Path

Criteria:
  • There is a reasonable balance between test case step complexity and test data complexity
  • Sometimes it’s a good idea to develop two or three similar test cases, each with a strictly reduced test data volume

Benefits

The pluses are:
  • Manual testing – increased test execution reliability
  • Automated testing – better insight into test automation efficiency and results


Interested in upgrading your skills? Check out our software testing training courses.
