Measuring Model Complexity
There is no single, simple way to define or measure model complexity. Clearly there is a point in the lifetime of a model where it becomes “too big” for even its author to understand, i.e., where analysis such as “why is this target not covered” becomes too time-consuming. A simple rule of thumb: if a peer cannot easily review and understand how generated tests map back to your model, the model is very likely too complex.
Note that increased test generation times generally go hand in hand with higher model complexity, but ironically, model complexity is often not the actual root cause of longer test generation times. Improper use of (in particular) advanced modelling language constructs is commonly the main cause.
Four simple ways for measuring model complexity are:
a. Number of testable requirements referenced in a model
Benefits: Requirements (and their complexity) are generally well understood by all stakeholders. Reviewing poorly worded or open-ended requirements with other stakeholders has the added benefit that when requirements are improved, all stakeholders gain, not just the tester. Requirements and their coverage are easy to monitor and trace through the entire testing process.
Challenges: The complexity and quality of requirements often differ depending on their source, i.e., one requirement can actually amount to ten or more requirements from a testing point of view (such as “the application should work well”). Another challenge is that many testers still lack access to requirements in a format that can easily be processed automatically, e.g., ALM or Excel. This measure captures overall model complexity but cannot help identify diagram complexity or unused definitions.
Recommendation: Before counting requirements or using them in a model, normalize them to a common granularity, i.e., break complex requirements up into smaller, actually testable requirements. Use a spreadsheet to keep track of testable requirements if you do not have access to a requirements management system.
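If you track testable requirements in a spreadsheet as recommended above, counting how many each model references can be automated. The sketch below is a minimal, illustrative example: the CSV column names (model, requirement_id, testable) are assumptions for illustration, not a format prescribed by Creator.

```python
import csv
from collections import Counter

def count_testable_requirements(path):
    """Count testable requirements per model from a simple CSV tracker.

    Assumed (hypothetical) columns: model, requirement_id, testable.
    Only rows marked testable=yes are counted toward the model's total.
    """
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["testable"].strip().lower() == "yes":
                counts[row["model"].strip()] += 1
    return dict(counts)
```

A model whose count grows far beyond its peers is a candidate for splitting into smaller models.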
b. Number of tests generated from a model
Benefits: Extremely easy to measure and understand by all stakeholders.
Challenges: This measure is easily “polluted” by the use of external spreadsheets and/or combinatorial test data. It captures overall model complexity but cannot help identify diagram complexity or unused definitions.
c. Diagram structure, i.e., number/nesting depth of activity diagrams, number of nodes/data objects/variables per diagram, control-flow fan-in/fan-out per diagram, number of advanced modelling constructs, number of widgets per screen, etc.
Benefits: Easy to measure; note that some aspects of diagram complexity are already measured and reported in the activity diagram editor today (but usually ignored). Helps to catch and reduce flaws introduced by model complexity.
Challenges: Requires an understanding of the model to interpret the metric; it is meaningful to the modeller but not really to other stakeholders such as managers. It is also difficult to completely avoid complex diagrams when the functionality under test is itself complex.
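Structural metrics such as fan-in/fan-out can be computed from any edge-list export of a diagram. The sketch below assumes a generic list of (source, target) control-flow edges; the export format itself is an assumption for illustration, not a Creator feature.

```python
from collections import defaultdict

def diagram_metrics(edges):
    """Compute control-flow fan-in/fan-out per node from an edge list.

    `edges` is a list of (source, target) node-name pairs; a node with a
    high fan-out is a branching hotspot worth reviewing for complexity.
    """
    fan_in = defaultdict(int)
    fan_out = defaultdict(int)
    for src, dst in edges:
        fan_out[src] += 1
        fan_in[dst] += 1
    nodes = set(fan_in) | set(fan_out)
    return {n: {"fan_in": fan_in[n], "fan_out": fan_out[n]} for n in nodes}
```

Running this over each diagram and flagging nodes above a team-agreed threshold is one way to make diagram complexity reviewable.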
d. Test generation time, i.e., the time taken for test generation to provide 100% test target coverage and produce an optimized test suite.
Benefits: Easy to measure. Test generation time is a function of various parameters and reflects the complete model, including the diagram structure, combinatorial data, number of requirements, etc. Since Creator applies the same internal logic to every model while generating tests, it is the only constant across different models, and the total time taken therefore serves as a good comparative measure.
Challenges: Test generation time is not known until the model is complete and the test generation process is invoked. Test generation time can also vary considerably depending on Conformiq options. Test generation only traverses those activity diagrams that are referenced (directly or indirectly) by the Main activity diagram; however, Creator includes all files in the “model” folder at various levels of processing.
Recommendation: Generate tests frequently to review test generation time, rather than waiting until the model is complete. Also, set identical Conformiq options across models when comparing complexity between models.
Recommendation: Measure model complexity based on a mix of criteria that allows all stakeholders to understand the complexity of each model.
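Mixing criteria into a single comparable number can be sketched as a weighted score. The measure names and weights below are purely illustrative assumptions; each team should pick the measures and weights that match its own project.

```python
def complexity_score(metrics, weights):
    """Combine complexity measures, each normalized to [0, 1], into one
    weighted score in [0, 1]. Measure names and weights are illustrative.
    """
    total = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in weights) / total

# Hypothetical normalized measures for one model:
example = {"requirements": 0.5, "tests": 0.25, "diagram": 1.0, "gen_time": 0.25}
example_weights = {"requirements": 2, "tests": 1, "diagram": 1, "gen_time": 2}
```

A single score makes trends visible to managers, while the per-measure values remain available to modellers for diagnosis.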
We encourage users to derive their own measures to complement the measures above, in particular more project-specific measures, e.g., the number of “modules”, product requirement definitions, etc., to be covered per Creator project.
Copyright © 2023 Conformiq and its subsidiaries. All rights reserved.