
7.1      Reuse Existing Assets to Create Models Automatically

Over the past years, numerous importers have been developed to create activity diagrams with matching structure diagrams from various existing assets such as business process models, flowcharts, ALM test plans, business process tests, manual tests, Gherkin feature files, and soon even automated Tosca tests. The importers for manual test assets and Gherkin also optimize the imported functionality and remove redundant information from it.

Recommendation: Instead of starting with an empty project and blank initial diagrams, save valuable time in modelling by making use of the various available importers to create or complement a starting point for model specification.

If you have any specification, application, or testing assets in a format which is currently not supported by the importers, reach out to support@conformiq.com for help.

Users often have thousands of (manual) test cases in repositories that they would like to import and work with in modeling. Apply the same thinking as in modeling from scratch and respect model complexity limits when importing, e.g., avoid importing more than 100 tests into a given Creator project. Filter or group tests by functionality before import, then create a Creator project and model for each group and import it.

Recommendation: Generally, avoid importing more than 100 tests with the test asset importers. If you have more than 100 tests to import, split them into groups of at most 100 tests based on the functional aspect they test and import each group into a different Creator project.
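The grouping step before import can be sketched as follows. This is a minimal illustration, not tool code: the representation of tests as (functional area, id) pairs and the `area_of` accessor are invented for the example; in practice the functional area would come from your repository's metadata.

```python
from collections import defaultdict

MAX_TESTS_PER_PROJECT = 100  # complexity limit recommended above

def group_for_import(tests, area_of):
    """Group tests by functional area, then split each area into
    batches of at most MAX_TESTS_PER_PROJECT for separate Creator projects."""
    by_area = defaultdict(list)
    for test in tests:
        by_area[area_of(test)].append(test)
    batches = []
    for area, items in sorted(by_area.items()):
        for i in range(0, len(items), MAX_TESTS_PER_PROJECT):
            batches.append((area, items[i:i + MAX_TESTS_PER_PROJECT]))
    return batches

# Example: 250 hypothetical "login" tests and 80 "checkout" tests
tests = [("login", n) for n in range(250)] + [("checkout", n) for n in range(80)]
batches = group_for_import(tests, area_of=lambda t: t[0])
print([(area, len(items)) for area, items in batches])
# → [('checkout', 80), ('login', 100), ('login', 100), ('login', 50)]
```

Each resulting batch stays within the 100-test limit and contains only tests of one functional area, so it maps naturally to one Creator project.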

7.2      Validate Functionality and Test Data Modifications Frequently

Experience has shown that it is more effective to iterate between working on the model and test generation instead of investing a lot of time into model changes and then "seeing what happens" when you click the test generation button on the entire model after days of modelling. Chances are that, especially in more complex models (see section MODEL COMPLEXITY), there are basic modelling construction issues that either prevent test generation from achieving 100% coverage or unnecessarily increase test generation time. Remember to generate tests from subactivity diagrams instead of only the complete model to speed up and simplify analysis (see section Generate Tests Often and Keep 100% Coverage).

Recommendation: Generate tests frequently after completing (partial) changes to your model, e.g., to one of your many activity diagrams, to ensure that the changes do not trigger unintentional or unexpected behavior and work with the rest of your model. Verify the steps of the partial flow, even if the tests do not make sense in the overall model context.

“Longer” test generation times generally indicate conflicts in model logic (which may or may not be intentional). Stop test generation and understand the tests generated up to that point to find easily overlooked cases or unintentional mismatches between input constraints and condition specifications.

Recommendation: Save time and stop test generation manually when validating smaller changes to a model (i.e., before you have "finalized" that part of the model). For example, stop test generation if it suddenly takes significantly longer after a small change, or when you see that the tool has covered what you wanted to see covered. Chances are that you have unintentionally created a conflicting model definition.

Note that test generation runs as a background process. You can continue modelling work on other parts while tests generate.

7.3      Structure Functionality using Multiple Activity Diagrams

When an activity diagram can no longer be easily reviewed because of the sheer number of nodes and control flows or when encountering repetition of action sequences and logic, it is recommended to break the diagram up into multiple activity diagrams. Simply create a new activity diagram, copy the part of the original diagram to be isolated into the new diagram, and replace the selected part in the original diagram with a single activity that refers to the new diagram as a sub activity diagram.

Recommendation: To manage activity diagram complexity, readability, and maintainability, move parts (in particular repeated ones) of an activity diagram into subactivity diagrams.

Recommendation: If a subactivity diagram is complete on its own and needs to be shared across multiple Creator projects, its folders/files can be "linked" with the source.

Note that refactoring into sub diagrams can also be used to restructure diagrams for test generation (see section Merge Nodes and Finalized Runs).

Recommendation: Avoid nesting subactivity diagrams deeper than 5 levels (i.e., an AD contains an AD which contains an AD which contains ... etc.) as it affects readability. Deep nesting levels indicate concerning model complexity: think about splitting off part of the modelled functionality and moving it into a new Creator project.

7.4     Refer to Requirements from the Model

In general, tests should be specified based on requirements for how an application should operate. Requirement coverage is the easiest means to communicate to other stakeholders, like managers, what you are working on, what the level of complexity of the functionality to be tested is, and why some models take more time than others to complete.

If requirements are stored in ALM tools or Excel files, they can be imported into Creator using Requirement Providers and then dragged and dropped from the Requirements view in the Conformiq Modeling perspective onto activities. Often, however, users are confronted with a lack of (properly) documented requirements; in these cases we encourage users to use Excel or to create ad-hoc Creator requirements in models to capture and trace what they test.

Recommendation: In case an ALM system is not suitable or applicable, create (and share) a requirements catalogue yourself in Excel, import it using the Excel Requirement Provider, and drag and drop the requirements onto your flows. Use requirements to communicate to stakeholders exactly what you are testing in this model and where (in which test) you are testing specific requirements.

A known challenge in modeling with requirement references (see section Measuring Model Complexity) is the granularity of requirements. In particular, when requirements come from different sources, their scope and clarity may vastly differ. Some authors write six requirements in one. Some requirements are not testable. Some are not clearly defined. Finally, testers generally do not have the right to modify requirements in ALM systems.

Recommendation: Review requirements for their testability. Engage other stakeholders to point out deficiencies or ambiguities in requirements. Use ad-hoc requirements to break up requirements you cannot modify or as a temporary solution until ALM requirements are updated.

Requirements can also be understood and used as "waypoints" or "[pure] testing requirements" in model coverage. They can be useful to track specific data coverage or even to troubleshoot coverage issues. For example, adding an ad-hoc requirement with a skip action pre-condition "Username is Stephan and password is 123" after a fill form action makes it easy to pinpoint, via the Test Targets view in the Conformiq Test Design perspective, all tests where this username and password are used at this point of the control flow.

Recommendation: Use ad-hoc requirement actions to specify and track pure testing requirements, or to troubleshoot test coverage. Use requirement categories to separate them from other requirements.

7.5      Risk Based Testing with Requirement Priorities

Conformiq Creator allows you to model risk by specifying requirement priorities. These priorities are then considered during test generation and allow you to mark aspects of the application that are considered more critical or "riskier" than others, i.e., Conformiq Creator tests these riskier paths more extensively than other paths through the model.

The optional Priority property of a requirement action should be used to specify the priority. The priority mass (or overall test priority) is calculated for each path through the model by summing up the requirement priorities encountered on that path: the initial mass is always 1, and after that every requirement action with a priority that has not yet been encountered increments the total by its priority value. Thus, the priority mass of a path which includes two requirement actions with the Priority property set to 5 and 2 is 1 + 5 + 2 = 8. The bigger the number, the more important the given model part is and the higher the risk associated with the requirement.
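The calculation can be sketched in a few lines. This is a minimal illustration of the summation rule described above, not tool code; the requirement names are invented.

```python
def priority_mass(path_priorities):
    """Priority mass of a path: initial mass 1 plus the priority of every
    distinct requirement action encountered on the path (counted once)."""
    mass = 1                      # initial mass is always 1
    seen = set()
    for requirement, priority in path_priorities:
        if requirement not in seen:   # a requirement action already
            seen.add(requirement)     # encountered does not add again
            mass += priority
    return mass

# Path through two requirement actions with priorities 5 and 2:
print(priority_mass([("REQ-A", 5), ("REQ-B", 2)]))                # → 8
# Revisiting the same requirement action adds no further mass:
print(priority_mass([("REQ-A", 5), ("REQ-A", 5), ("REQ-B", 2)]))  # → 8
```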

Recommendation: Whether a risk-based testing approach is used or not, it is worthwhile to provide the Priority for each requirement action as per the agreed risk definition.

Requirements which have been downloaded via a Requirement Provider from a 3rd party requirement management tool may also have a separate, external priority (set by the requirement management system). This external priority is used by Conformiq Creator as the default value for the priority in the requirement action, but the value can be overridden in the Priority property in the Properties view, e.g., to manage the relative priority between requirements of the same priority for testing purposes.

7.6     Return Node Use

The Conformiq Creator modeling language has three different concepts to model the end of a flow: the explicit "return node", the explicit "flow final" node, and the implicit return node (an activity with no outgoing flows). Return nodes (explicit as well as implicit) imply that control flow execution continues in the parent activity diagram; only flow final nodes really terminate a flow. A special case where there is no noticeable difference is a project with one single activity diagram, where "by default" control flows are terminated by implicit return nodes.

Recommendation: Avoid using implicit return nodes altogether and limit the use of return nodes to a minimum of meaningful return points, i.e., terminate all flows ending "successful" behavior in the same return node. Name that return node "Success".

Note that improper modeling with return nodes can easily lead to invalid results in test generation, i.e., the parent diagram continuing with actions even though the test should stop because an error condition was handled in the subactivity diagram. Remember that no tool can automatically "tell" or differentiate which control flows should terminate; the user must express this knowledge by using flow final nodes.

Recommendation: Always use flow final nodes to terminate control flow branches that model handling of errors or invalid behavior by the system to be tested, i.e., contain the modeling of error handling in the same activity diagram. Name flow final nodes after the error or terminating condition they represent.

7.7      Model Combinatorial Data with Combinatorial Action

Modeling of combinatorial data with combinatorial modeling constructs is an advanced modeling concept which allows selected input data (i.e., alternative values from input constraints) to be combined in a combinatorial action within activity diagrams. In versions prior to Creator 2.4, the "Data Coverage" option in input action constraints was used for that purpose.

Recommendation: Use only the combinatorial action to model combinatorial data since it makes the use of combinatorial data more explicit. Try to avoid using the "Data Coverage" option that is part of input action constraints, and instead specify a combinatorial action after the input action, referring to its input action results.

Even though it may be possible and tempting to specify a combination of all alternative values in all input constraints against each other with one single combinatorial action, we advise against such modeling. This approach usually also combines alternative values which are not intended to be combined, e.g., inputs to the same form in different control flow branches. Exclusions of certain pairs or combinations cannot be specified in the model but must be done in the Test Target view. The "single combinatorial action" approach leads to generating (a lot of) test targets that by model construction cannot be reached and need to be manually set to "DON'T_CARE".

Recommendation: Instead of a single combinatorial action per activity diagram, specify one combinatorial action per control flow branch which includes input actions that are to be combined.
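The effect of the two approaches can be illustrated with a toy example. Assume two mutually exclusive branches, each with its own input fields; the field names and values below are invented, and the full cross product is used as a simplified stand-in for the tool's actual combination semantics.

```python
from itertools import product

# Branch 1 inputs (payment form) and branch 2 inputs (refund form);
# the two forms never appear on the same path through the model.
payment_method = ["card", "wire", "paypal"]
currency       = ["EUR", "USD"]
refund_reason  = ["damaged", "late", "wrong item"]

# One combinatorial action over all fields: 3 * 2 * 3 = 18 targets,
# but every target pairing a payment value with a refund value is
# unreachable by model construction.
single_action = list(product(payment_method, currency, refund_reason))
print(len(single_action))  # → 18

# One combinatorial action per branch: only feasible combinations.
branch1 = list(product(payment_method, currency))  # 6 targets
branch2 = [(reason,) for reason in refund_reason]  # 3 targets
print(len(branch1) + len(branch2))  # → 9
```

The per-branch variant produces 9 reachable targets instead of 18 targets of which many can never be covered, which is exactly the manual "DON'T_CARE" cleanup the recommendation avoids.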

The recommendation of one combinatorial action per control flow branch reduces the need to modify test target settings but does not necessarily eliminate it completely. Test target setting modifications are required in cases where some pairs or combinations involve conditional input actions (i.e., actions with an action pre-condition) which may prevent (by construction) covering some of the pairs or combinations. Note that after adding or modifying a combinatorial action, all new combinatorial test targets in the Test Target view are set to "TARGET", no matter whether that is possible by model construction or not.

Recommendation: Always review settings of combinatorial data test targets in Test Target view of the Conformiq Test Design perspective and mark invalid pairs or combinations as “BLOCK” or uninteresting pairs or combinations as “DON’T_CARE”.

7.8     Model Combinatorial Data Only After Completing Modelling of Targeted Functionality

The use of combinatorial data is part of a set of advanced test generation concepts which generally lead to longer test generation. Note that adding combinatorial data coverage also rarely helps to improve coverage for models. To work most effectively and simplify model and test analysis, combinatorial data coverage should only be used in the final stages of model development.

Recommendation: Model combinatorial data only after you have completed modelling the entire targeted functionality and achieved 100% coverage in test generation. Similarly, disable combinatorial data coverage settings if you maintain or significantly extend a model from a previous delivery and only bring back the settings once you have achieved 100% coverage with the modified model.

As explained in detail in the separate document "Combinatorial Testing with Conformiq Creator" (see References), multiple combinatorial data coverage settings multiply the effect of combinatorial data on test generation, which not only increases test generation time and memory usage but also leads to very many tests that need to be validated. A common misunderstanding is that setting data coverage in multiple actions would combine the data of these actions; the only way this can be modeled is with one combinatorial action. The "exploratory" settings for the "Data Coverage" options should only be used by expert users.

Recommendation: Limit the use of the combinatorial action (and the combinatorial "Data Coverage" options) to a few uses per model, i.e., not every input action of your model. In particular, limit the use of the "exploratory" setting in data coverage options to no more than one or two places per model. Validate that the use of these options really adds value to your generated tests, e.g., if the use of "exploratory" generates "too many" tests for you, then remove or reduce the setting.

7.9      Use Action Results Instead of Variable Data Objects

Experience has shown that the concept of using variables to store data before using it elsewhere (like in conditions) is not easy to adopt for users without an automation or programming background. Starting from Creator 2.3, the Creator modelling language has been extended and simplified by allowing direct use of the results of input actions that produce data (like fill form, receive message, custom input actions, etc.) if the input action and the data use are in the same activity diagram (see Creator Manual section Creating Models with Conformiq Creator > Activity Node > Actions). In other words, it enables modeling data flows and data handling without variable definitions.

Recommendation: Use action results instead of variable data objects to model storage and access of data with fewer clicks. The absence of variable data objects also makes your diagrams easier to understand for others.

Note that variable data objects still need to be used in cases where data has to be stored in one activity diagram and then used in another activity diagram. Also, the use of action results does impact readability: there is currently no visual indication in the source action that its action results are being used elsewhere.

7.10     Merge Nodes and Finalized Runs

Merges can be used to combine control flows within activity diagrams at given points of the specification of functionality to be tested. Merges can be explicit, i.e., merge nodes, or implicit, i.e., activity nodes with multiple incoming control flows.

Recommendation: Use explicit merge nodes only to simplify the analysis of activity diagrams.

Control flows are often merged with the good intention of reusing common steps or actions in different flow variants. If overused, however, merges can lead to unnecessary model complexity. When tests are generated using the advanced "Enable Only Finalized Runs" option, each merge node encountered on a path leading to a test increases the test generation complexity and the time required to complete test generation.

If the intent is that all flows passing through the merge should take the following action, then a common technique to avoid merge nodes is to isolate the path after the merge node into a subdiagram and then, instead of merging or connecting <n> control flows to one activity calling that subdiagram, create and connect the <n> control flows to <n> activities calling that same activity diagram. Note that this remodeling also multiplies the number of generated test targets by <n>, i.e., there will be separate targets for each action, data item, etc. in the subdiagram for each control flow branch, even though the model browser always shows the same diagram underlying all <n> activities.

Recommendation: Minimize the use of control flow merges when the “Enable Only Finalized Runs” option must be used for test generation, to reduce test generation time.

Often a high number of merges is an indicator of trying to model "too much functionality" with a single model. In these cases, other symptoms commonly include a high model complexity (see section Model Complexity) as well as the use of state variables (see section Avoid Overuse of State Variables). Here, the good intentions of reuse backfire into unnecessary model complexity: (multiple) state variables are deployed to "remember" which (combination of) control flow branches have been "taken" prior to the merge(s), which quickly increases the risk of modeling errors as well as diagram complexity, and reduces understandability. In these cases, generally the best option is to split the model into multiple models and this way reduce the merges required.

Recommendation: Avoid overusing control flow merging to get reuse "at any cost". Instead, isolate different aspects of functionality and split them into different models.

7.11      Avoid Overuse of State Variables

State variables are an advanced modeling concept which enables the use of a simple, explicit concept of "state" in a world of activity diagrams where state is only implicit in a control flow. They are intended to be used to keep track of "what has happened before", similar to the "memory" concept in electronic calculators. Overuse of state variables, e.g., creating and managing a lot of them in parallel, makes models hard to follow. In particular, in large models it is easy to overlook the proper setting of state variables in all flow branches, and it gets difficult to find the root cause of unexpected test generation results. A good indicator of overuse of state variables is when you get increasingly confronted with variable initialization warnings in the Problems view of the Conformiq Modeling perspective.

Recommendation: Keep things simple and use as few state variables as possible. Avoid complex logic or maintaining multiple state variables in parallel.

To reduce your use of state variables, restructure your control flows and manage state by reducing the number of explicit and implicit merge nodes in your control flows.

Recommendation: Avoid "masking data" by assigning variables (including fields of variable data objects) to other (differently named) state variables and, even worse, converting data types. Instead, (re)use action results or variable data object fields directly.

Another common use of state variables is to parameterize or influence the selection of paths in a (very generic) subdiagram. This practice can easily backfire and create situations where it will never be possible to reach 100% coverage with default target settings, e.g., when a subactivity diagram is called with the state variable always and only set to one value while the subactivity has flows covering multiple settings. Note that expert users can remedy this situation by manually setting all the test targets related to the subactivity diagram in the Test Targets view of the Conformiq Test Generation perspective to "DON'T_CARE" (see "Review Testing Target Settings before Analyzing Your Model").

Recommendation: Avoid using state variables to select a subset of different flows in a subactivity diagram. Simplify your model, especially to ease review with others: instead of using a state variable with a generic activity diagram, split the generic activity diagram into one activity diagram per case and eliminate the state variable.
