8.1 Generate Tests Often and Keep 100% Coverage
As stated in the modelling section, frequent test generation is key to completing a model successfully. Lack of coverage rarely “fixes itself”; it requires conscious action. Lack of coverage is not a problem as such – it actually allows you to learn more about the functionality you are modelling, e.g., the interplay of data with control flow, which in turn helps you find inconsistencies in the specification or in your own thinking about the application to be tested. The following sections give more detail on how to work with coverage.
...
Recommendation: Generate test cases frequently and keep coverage at 100%.
Once a model starts to become complex (see MODEL COMPLEXITY) and test generation time exceeds your threshold of patience, start generating tests from subactivity diagrams, i.e., set a subactivity diagram to be the “main diagram” and generate tests from it. Use this technique in particular to study newly added functionality in isolation before you evaluate it as part of the full model. Note that you may need to prepare the activity diagrams before using them in isolation (e.g., temporarily copy and initialize [state] variables into the isolated diagrams).
...
Recommendation: Generate tests for new functionality by generating tests from the related activity diagrams in isolation.
8.2 Review Testing Target Settings before Analyzing Your Model
By default, Conformiq sets test generation targets to cover all requirements, all nodes, all control flows, all data objects (including all alternatives), all conditions, and all combinatorial checkpoints; i.e., generated tests must cover all of these test targets to achieve 100% coverage.
The tool sets targets automatically without analyzing whether they are actually achievable, i.e., some test targets may be unreachable even in a correctly constructed model. This is intentional: it challenges users to question and verify their model construction, since a tool cannot decide automatically what is correct or incorrect. Such unreachable test targets frequently occur when using combinatorial data, where by default all pairs or all combinations are targets. Another source of “expected” unreachable targets is activity diagrams that make decisions based on state variables; in the latter case the impact on coverage is generally much more severe.
The number of default test targets is intentionally quite large, since a computer cannot automatically decide whether a lack of coverage is “ok by specification” or “an unintentional modelling error”. The user is expected to decide what is and what is not a legitimate test target by fine-tuning the default settings at any time in the Test Targets view, i.e., by changing targets from “TARGET” to “DON’T_CARE” or “BLOCK”.
...
Recommendation: Whenever coverage after test generation is less than 100%, first review all uncovered test target settings by expanding the nodes in the Test Targets view, and modify them if required. Set them to “DON’T_CARE” if they are not crucial to your coverage or, in the case of combinatorial data, are known to be “invalid”.
...
Recommendation: The “BLOCK” test target setting should only be used by advanced users. Generally, use the “DON’T_CARE” setting for test targets instead of “BLOCK”.
Users generally underestimate the impact of a blocked test target on the coverage of the remainder of the model. Blocking a test target automatically makes all model structure and all other test targets following the point of the blocked target unreachable. Use the “BLOCK” setting only if a test covering this specific target would break your application or would clearly generate an invalid test.
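To illustrate why blocking cascades – Conformiq models involve no user code, so the following Python sketch is purely illustrative and the graph is hypothetical – picture the activity diagram as a directed graph: every target downstream of a blocked one becomes unreachable.

    from collections import deque

    # Toy control flow: start -> a -> b -> c -> return (all names invented).
    edges = {"start": ["a"], "a": ["b"], "b": ["c"], "c": ["return"]}

    def reachable(start, blocked):
        """Collect all nodes reachable from `start` without entering `blocked`."""
        seen, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in seen or node in blocked:
                continue
            seen.add(node)
            queue.extend(edges.get(node, []))
        return seen

    print(reachable("start", blocked=set()))   # all five nodes are reachable
    print(reachable("start", blocked={"b"}))   # only 'start' and 'a' remain:
                                               # blocking b also loses c and return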
8.3 Limit Use of Combinatorial Data to Avoid Test Case Explosion
A frequent temptation for modelers is to use combinatorial data settings extensively without understanding their impact on test generation (see also the separate Conformiq guide “Combinatorial Testing with Conformiq Creator” in References). Unlike in pure test data generation tools, all combinatorial data coverage settings in Creator are independent, i.e., not linked to each other. A common misunderstanding is that a “Data Coverage” setting in one action and a “Data Coverage” setting in another action mean that the alternative values in both actions will be combined with each other – instead, test generation produces the product of two independent combinations, i.e., test case explosion.
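The multiplication is easy to see with a back-of-the-envelope sketch; the Python snippet below is purely illustrative (the alternative values are invented, and this is not how Conformiq computes combinations internally):

    from itertools import product

    # Two actions, each with its own independent "Data Coverage" setting.
    action_a = ["Chrome", "Firefox", "Edge"]             # 3 alternatives
    action_b = ["card", "invoice", "paypal", "voucher"]  # 4 alternatives

    combined = list(product(action_a, action_b))
    # Independent settings yield the Cartesian product: 3 * 4 = 12 tests,
    # not the max(3, 4) = 4 tests users often expect from a "linked" setting.
    print(len(combined))  # 12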
...
Recommendation: To avoid test case explosion, use the combinatorial action in your modeling. Do not use the “Data Coverage” option.
Exploratory “Data Coverage” options are not meant for general use but to help reach coverage of all combinations in very specific modeling situations. In general, exploratory options multiply the number of tests generated by the number of combinations that can be generated from the input constraint.
...
Recommendation: Avoid using the “All combinations (exploratory)” option more than once in your model.
8.4 Use Multiple Test Design Configurations to Optimize for Different Test Targets
Test optimization during test generation can be affected by a) selecting test targets as “TARGET”, “BLOCK” or “DON’T_CARE” in the Test Targets view and b) the Test Case Selection option chosen in the Conformiq Options dialog. In a Creator project, test target settings are stored as part of a so-called Test Design Configuration (TDC). When creating a new Creator project there is always one such TDC in the project, but users can add as many as they want. Using multiple Test Design Configurations allows you to evaluate test optimizations for different test target selections. For example, one TDC can have the default (full) selection of test targets, whereas another selects only a subset as “TARGET” and sets to “BLOCK” some targets that are “TARGET” in the first TDC. This way, users can obtain from each single test generation run one set of tests optimized for “regression” testing and one for “progression” testing. Notice that scripting backends need to be attached separately for each TDC.
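As a purely illustrative sketch (Conformiq stores TDCs in the project, not in user code; all target names below are invented), two TDCs can be pictured as two assignments of settings over the same target list:

    # Same targets, two hypothetical Test Design Configurations.
    targets = ["login", "checkout", "refund_new", "loyalty_new"]

    tdc_regression = {t: ("BLOCK" if t.endswith("_new") else "TARGET") for t in targets}
    tdc_progression = {t: ("TARGET" if t.endswith("_new") else "DON'T_CARE") for t in targets}

    for name, tdc in (("regression", tdc_regression), ("progression", tdc_progression)):
        active = [t for t, setting in tdc.items() if setting == "TARGET"]
        print(name, "->", active)
    # regression -> ['login', 'checkout']
    # progression -> ['refund_new', 'loyalty_new']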
...
Recommendation: Use multiple test design configurations to study the impact of different test target settings on test generation, or to create multiple optimized test suites (one per TDC) for export from the same test generation run. Use “BLOCK” target settings to reduce the number of tests generated in a particular TDC.
...
Recommendation: To use “BLOCK” target settings efficiently, “BLOCK” all related targets that will be impacted by the first blocked target. Removing targets that cannot be covered anyway reduces the total number of targets to be covered, which results in better coverage statistics and quicker test generation.
8.5 Use Advanced Test Generation Concepts Only Once Functionality is Complete
Advanced test generation concepts are known to require more processing power and time to resolve, particularly as model complexity increases during model development. Delaying the use of such concepts simplifies troubleshooting, in particular in cases where test generation does not reach the expected coverage.
To make the most effective use of time, users are advised to delay the coverage of large data sets and the use of advanced test generation options until the model is in principle complete and coverage with basic test generation options is 100%, i.e., the basic correctness of the model has been validated. Note that using large data sets, including combinatorial data, generally does not increase your functional coverage but purely increases data coverage.
...
Recommendation: First validate all functionality to be tested and make sure it is covered by tests before using modeling constructs like large external spreadsheets or combinatorial data, and advanced test generation options like “Enable Only Finalized Runs” or “Enable Data Distribution”.
Faithfully following this approach of validating the model without advanced test generation concepts significantly simplifies troubleshooting the possible causes for a lack of coverage:
a) A default test target setting that can actually never be covered due to the model construction, which can be resolved by setting the target to “DON’T_CARE” in the Test Targets view of Conformiq Test Design
b) Model construction or complexity, which can be resolved either by restructuring the model or by increasing the test generation time limit
When using advanced test generation concepts, users should expect longer test generation times compared to basic test generation options; i.e., using advanced test generation options generally never reduces test generation complexity or time.
...
Recommendation: Ensure you have suitable hardware and use the 64-bit Creator installation when using advanced test generation concepts.
With the “Enable Only Finalized Runs” option, Conformiq Creator only keeps a generated test if, after covering its targets, the test can also be ended in an explicit or implicit return node; i.e., even if a test target can be covered, when no return node can be reached afterwards the test is disregarded and the target is reported as not covered.
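The rule can be pictured with a small reachability check; the sketch below is not the tool’s algorithm, just an illustration on an invented graph where one branch after a target leads into a dead end:

    # Toy flow: after covering target 't1' a run may continue to 'return'
    # or get stuck in 'dead_end' (all names hypothetical).
    edges = {"start": ["t1"], "t1": ["dead_end", "return"], "dead_end": []}

    def finalizable(node, seen=None):
        """Can a run continue from `node` to an explicit/implicit return node?"""
        if node == "return":
            return True
        seen = set() if seen is None else seen
        seen.add(node)
        return any(finalizable(n, seen) for n in edges.get(node, []) if n not in seen)

    print(finalizable("t1"))        # True: the run can still be finalized
    print(finalizable("dead_end"))  # False: such a run is disregarded and the
                                    # target reported as not covered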
...
Recommendation: Always ensure first that 100% coverage is achievable with a model before generating tests with the “Enable Only Finalized Runs” option. Be aware that test generation may take significantly longer, so increase your time limit settings. For model validation or coverage analysis, generally disable the option to reduce test generation time.
Surprisingly, it is often not data conditions or constraints that lead to this increase in test generation time, but the number of explicit and implicit merge nodes encountered in control flows before reaching a return node – the more merge nodes are encountered, the longer test generation takes. The best way to reduce test generation time is to restructure the model, either by reducing control flow merges in activity diagrams or even by splitting the modelled functionality across two Conformiq projects instead of keeping it all in one.
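A rough back-of-the-envelope calculation shows why: if each decision/merge “diamond” on the way to the return node has two branches, the number of distinct control flow paths doubles with every merge (the numbers below are illustrative, not a Conformiq-internal formula):

    # Paths to the return node grow exponentially with sequential merges.
    def paths(merges, branches_per_decision=2):
        return branches_per_decision ** merges

    for m in (1, 5, 10, 20):
        print(m, "merges ->", paths(m), "paths")
    # 1 -> 2 paths, 5 -> 32, 10 -> 1024, 20 -> 1048576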
...
Recommendation: If test generation time with “Enable Only Finalized Runs” becomes too long, restructure your model by refactoring common control flows into subactivity diagrams and reducing the number of control flow merges. In the case of high model complexity, split the model into multiple models.
Ultimately, you can always accept the optimized test set produced without using “Enable Only Finalized Runs”.
The “Enable Data Distribution” option has been introduced to generate tests that distribute the alternative values specified in input action constraints which are not used in any flow branching or other decision making in the model; i.e., as soon as, for example, a decision node compares an input action result against an alternative value, that value can no longer be freely distributed. Use of this advanced option can also significantly impact test generation time for larger models and should therefore only be used in the final stages of modeling.
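The idea behind the option can be illustrated with a hypothetical sketch (the test case and value names are invented, and this is not Conformiq’s implementation): values that no decision depends on are “free” and can simply be spread over the generated tests:

    from itertools import cycle

    tests = ["TC_1", "TC_2", "TC_3", "TC_4", "TC_5"]
    free_values = ["red", "green", "blue"]  # never compared in any decision node

    # Round-robin distribution of the free alternative values over the tests.
    assignment = dict(zip(tests, cycle(free_values)))
    print(assignment)
    # {'TC_1': 'red', 'TC_2': 'green', 'TC_3': 'blue', 'TC_4': 'red', 'TC_5': 'green'}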
...
Recommendation: Use the “Enable Data Distribution” option only for test generation with validated models, and only when there are alternative values that can actually be distributed.
8.6 Handling of Lookahead Depth (Applicable to Releases Prior to Creator 2.4 Only)
The “Lookahead Depth” knob in the Conformiq Options dialog defines a test generation end condition in terms of how “deep” the state space of infinite possibilities should be explored to find tests. Too low a lookahead may leave test targets uncovered. Too high a lookahead does not promise 100% coverage either and may (depending on the hardware used for test generation) run the computation server out of memory. Note also that generating the same test set for full coverage usually takes longer with a higher-than-“perfect” lookahead.
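The end condition can be pictured as a depth-limited breadth-first exploration; the following toy sketch (hypothetical graph, not the tool’s search) shows how a target three input actions deep is missed when the limit is two:

    # Each edge stands for one input action (names invented).
    edges = {"s": ["i1"], "i1": ["i2"], "i2": ["target"]}

    def min_depth(goal, start="s"):
        """Breadth-first search: how many steps does `goal` lie from `start`?"""
        depth, frontier = 0, {start}
        while frontier:
            if goal in frontier:
                return depth
            frontier = {n for f in frontier for n in edges.get(f, ())}
            depth += 1
        return None  # unreachable

    print(min_depth("target"))       # 3: the target is three input actions deep
    print(min_depth("target") <= 2)  # False: a lookahead limit of 2 misses it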
...
Recommendation: Starting from Creator 2.4, lookahead depth handling is performed automatically. If you are facing challenges finding good lookahead depth settings, we highly recommend upgrading to that version.
Generally speaking, lookahead depth is often not the (main) reason for tests failing to cover targets – the more common case is that the model construction, either unintentionally or on purpose, prevents Conformiq from generating tests that cover these targets. Generating tests from large and/or complex models, and/or using advanced test generation concepts, usually requires modifying the lookahead from the default (which is 1).
Setting the lookahead depth “just right” is a non-trivial endeavor that only experts in test generation can manage. Roughly, a perfect setting can be described as the maximum number of input actions required anywhere in your model logic to cover a new test target or, alternatively, to reach an end node such as a return or flow final node (in cases where the “Enable Only Finalized Runs” test generation option is used). Instead of attempting to understand and reproduce the perfect lookahead depth computation, we generally advise adjusting the setting using a much more abstract technique.
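The rule of thumb above can be written down as a toy calculation (the target names and input action counts below are invented; this is not how the tool computes lookahead):

    # Input actions each test needs before it can cover its target.
    inputs_per_target = {
        "target_login_ok":   ["input_user", "input_password"],           # 2 inputs
        "target_order_done": ["input_user", "input_item", "input_pay"],  # 3 inputs
    }

    perfect_lookahead = max(len(v) for v in inputs_per_target.values())
    print(perfect_lookahead)  # 3 -- a lower setting could miss 'target_order_done'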
...
Recommendation: When working with versions before Creator 2.4, generally keep the “Lookahead Depth” setting as low as possible. Always investigate first what causes other than the lookahead depth could prevent target coverage. Once lookahead depth can no longer be ruled out as a reason, increase the setting gradually, e.g., 1, 2, 4, 6, 8. For small or medium-size models a lookahead of 8 should suffice – settings beyond that value should be a “last resort”.
When coverage does not improve after a few increments, this usually indicates that issues other than the lookahead depth prevent full coverage. Remember that besides the knob you can type the lookahead depth directly into the number field – including values beyond 64.
Remember that lookahead is related to the number of input actions between the storing of data as part of an input action and its later use in the model logic (e.g., as part of a condition). Therefore, if your model does not have model logic based on input action data (i.e., decisions or action preconditions), then a lack of coverage cannot be caused by a too-low lookahead depth setting.