Seminar on Design and Architecture Patterns

Ošlejšek: Lessons learned

Lessons learned from the application of analysis patterns

General: 

  • One term, e.g., metric or programming language, should not appear in multiple forms in the problem domain model (as an instance note and as an analytical class at the same time, for instance).

Generic feedback to the Organization hierarchy/structure patterns: 

  • There is no clear organizational hierarchy evident in the QualityIS system, except for the enterprise module, where "the roles can change during the lifetime of a specific SW project at all levels of management". Therefore, the necessary context is the project plus a timestamp. 
  • We model and clarify the ("static") hierarchy of organization units (classes), not instances. Although "the roles can change during the lifetime", the structure of the hierarchy itself never changes: a tester is always managed by a quality assurance manager, who is in turn managed by a business assurance manager (see the sketch after this list).
  • Possibly, the author vs. co-author relationships from the "core" application can also be considered an organizational hierarchy. In this case, we can either encode them as two organization structure types (core system vs. enterprise module) or as a single organizational hierarchy with two different identification schemes for role names (the hierarchies are the same; only the role names differ, e.g., an author is called a quality assurance manager; see the Identification scheme below).
  • Another example: the hierarchical organization of project code, with possible parallel structures (the source code structure can differ from the Maven project structure, for instance).
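
A minimal Java sketch of the "static" hierarchy: the chain of command is a property of the role types, not of concrete people. The role names are assumptions taken from the bullets above, not the actual QualityIS model.

```java
// Sketch: an organizational hierarchy encoded between role types.
public class OrganizationStructureSketch {

    enum Role {
        BUSINESS_ASSURANCE_MANAGER,
        QUALITY_ASSURANCE_MANAGER,
        TESTER;

        /** The fixed superior of each role; this structure never changes. */
        Role managedBy() {
            switch (this) {
                case TESTER: return QUALITY_ASSURANCE_MANAGER;
                case QUALITY_ASSURANCE_MANAGER: return BUSINESS_ASSURANCE_MANAGER;
                default: return null; // top of the hierarchy
            }
        }
    }

    public static void main(String[] args) {
        // Walk the chain of command from the bottom up.
        for (Role r = Role.TESTER; r != null; r = r.managedBy()) {
            System.out.println(r);
        }
    }
}
```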

Generic feedback to the Accountability pattern:

  • Using this pattern is a bit of an overkill for this project. However, we could use it to clarify responsibilities related to the life cycle of a SW project uploaded to and tested in QualityIS.
  • Accountability types: project ownership, code upload (developer, external system), analysis run (who can run an analysis: a developer?), quality configuration (who is responsible for defining quality standards for the project); see the sketch below.
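
A minimal Java sketch of how these accountability types could be captured. Party and Accountability follow Fowler's pattern; the type values are the hypothetical ones listed above.

```java
import java.util.List;

// Sketch: the Accountability pattern applied to QualityIS.
public class AccountabilitySketch {

    record Party(String name) {}

    enum AccountabilityType { PROJECT_OWNERSHIP, CODE_UPLOAD, ANALYSIS_RUN, QUALITY_CONFIGURATION }

    // "responsible" is accountable to "commissioner" for the given type.
    record Accountability(AccountabilityType type, Party commissioner, Party responsible) {}

    public static void main(String[] args) {
        Party owner = new Party("project owner");
        Party developer = new Party("developer");
        List<Accountability> model = List.of(
                new Accountability(AccountabilityType.CODE_UPLOAD, owner, developer),
                new Accountability(AccountabilityType.ANALYSIS_RUN, owner, developer));
        model.forEach(System.out::println);
    }
}
```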

Generic feedback to the Identification scheme pattern:

  • In the enterprise module, the quality assurance manager = author and the testers = co-authors. Therefore, we have two identification schemes: one for the core system and one for the enterprise module. The identified object is the role. Hence, we can combine this pattern with the organizational hierarchy to capture aliases of roles (see the sketch after this list).
  • Another example: internal versioning (2.3.4) vs. public release versioning (Windows 10 SP1)
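
A minimal Java sketch of role aliases under two identification schemes; the scheme and role names are illustrative assumptions, not the real model.

```java
import java.util.Map;

// Sketch: one organizational role identified under two identification schemes.
public class IdentificationSchemeSketch {

    enum Scheme { CORE_SYSTEM, ENTERPRISE_MODULE }

    // The identified object is the role; each scheme assigns it a name.
    record Role(Map<Scheme, String> namePerScheme) {
        String nameIn(Scheme scheme) { return namePerScheme.get(scheme); }
    }

    public static void main(String[] args) {
        Role role = new Role(Map.of(
                Scheme.CORE_SYSTEM, "author",
                Scheme.ENTERPRISE_MODULE, "quality assurance manager"));
        // The same role, two aliases:
        System.out.println(role.nameIn(Scheme.CORE_SYSTEM));
        System.out.println(role.nameIn(Scheme.ENTERPRISE_MODULE));
    }
}
```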

Generic feedback to the Account pattern:

  • The "balance" is a number(!) that changes in time. Entries store the history of such changes.
  • If the Account is a SW product, what could the balance represent (we are looking for a number that varies over time)?
    • Example 1: a SW quality indicator (a percentage, the number of metrics with unsatisfactory results, ...). Entries = analyses. Each analysis remembers which metrics were measured and how many of them failed. => The entries represent the history of the SW project, while the quality indicator (the "balance") is the current state.
    • Example 2: the number of development branches. Entries = project uploads. Each upload knows how many branches were removed (merged) and how many new branches were introduced. The "balance" attribute in the account captures the current state (see the sketch after this list).
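
A minimal Java sketch of Example 2: the entries (project uploads) keep the full history, while the balance caches the current number of branches. All names and the initial value are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the Account pattern tracking the number of development branches.
public class BranchAccountSketch {

    /** One entry: a project upload that added and removed some branches. */
    record Upload(int branchesAdded, int branchesRemoved) {
        int delta() { return branchesAdded - branchesRemoved; }
    }

    static class Account {
        private final List<Upload> entries = new ArrayList<>();
        private int balance = 1; // current number of branches (assumed initial main branch)

        void post(Upload upload) {
            entries.add(upload);       // entries keep the full history...
            balance += upload.delta(); // ...while the balance is the current state
        }

        int balance() { return balance; }
    }

    public static void main(String[] args) {
        Account branches = new Account();
        branches.post(new Upload(2, 0)); // two feature branches opened
        branches.post(new Upload(1, 2)); // two merged away, one opened
        System.out.println(branches.balance()); // prints 2
    }
}
```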

Generic feedback to the Observation and Measurements pattern:

  • Successful usage requires correct mapping of the pattern's classes to terms of the application domain.
  • Common mistake: a mismapped Metric class. The metric itself is, for instance, "the number of lines of code", while a Measurement is a concrete measured number.
  • Common mistake: PhenomenonType = "metric" and "issue"; Phenomena = "lines of code", "functional points", ... and issues such as "insufficient documentation", ... This leads to nonsense:
    • Consider that you measure 2000 lines of code; this value would then have to be stored as a Measurement record linked to ...
  • Common mistake: PhenomenonType = issue, Phenomenon = major, minor, critical. Where is the name of the issue? It would be possible only if the CategoryObservation were linked to the context of the issue (its name, etc.). However, that spreads the definition of issues across two different parts of the model unnecessarily.
  • => PhenomenonType = the name of a metric, plus "major issue", "minor issue", and "critical issue". Phenomenon = the names of concrete issues (can contain "other"). See the sketch after this list.
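
A minimal Java sketch of this recommended mapping (all concrete names are illustrative): the measured number becomes a Measurement of a quantitative PhenomenonType, while issue names become Phenomena of a severity PhenomenonType.

```java
// Sketch: the Observation and Measurements classes mapped to the domain.
public class ObservationSketch {

    record PhenomenonType(String name) {}
    record Phenomenon(String name, PhenomenonType type) {}

    // Quantitative observation: a measured number of a given type.
    record Measurement(PhenomenonType type, double value) {}

    // Qualitative observation: a phenomenon that was observed to hold.
    record CategoryObservation(Phenomenon phenomenon) {}

    public static void main(String[] args) {
        PhenomenonType linesOfCode = new PhenomenonType("lines of code");
        PhenomenonType majorIssue = new PhenomenonType("major issue");

        // 2000 lines of code is a Measurement, not a Phenomenon:
        Measurement m = new Measurement(linesOfCode, 2000);

        // Issue names are Phenomena of the severity type:
        Phenomenon missingDocs = new Phenomenon("insufficient documentation", majorIssue);
        CategoryObservation o = new CategoryObservation(missingDocs);

        System.out.println(m);
        System.out.println(o);
    }
}
```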

Generic feedback to the Enterprise Segment pattern:

  • The reflexive association of the DimensionElement is important and cannot be missing in the model because it denotes the classification tree (see the sketch after this list). 
  • The requirement was: "Which types/categories of quality metrics are applied to different SW products and/or in different geographical locations, and also how often?" Can we consider (and then model) the time/frequency as a dimension element? Yes (years - months - weeks - days - ...).
  • Classifications are usually trees (only sometimes lists). Reflect it in the documentation of dimension elements.
  • A dimension element related to quality has to be associated somehow with the Observation & Measurement part of the model because we aim to classify metrics.
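
A minimal Java sketch of a DimensionElement with the reflexive association forming the classification tree, shown on the time dimension discussed above (names are illustrative).

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: a dimension element classification tree built via a reflexive association.
public class DimensionElementSketch {

    static class DimensionElement {
        final String name;
        final DimensionElement parent;              // reflexive association
        final List<DimensionElement> children = new ArrayList<>();

        DimensionElement(String name, DimensionElement parent) {
            this.name = name;
            this.parent = parent;
            if (parent != null) parent.children.add(this);
        }
    }

    public static void main(String[] args) {
        DimensionElement year = new DimensionElement("2024", null);
        DimensionElement month = new DimensionElement("May", year);
        DimensionElement week = new DimensionElement("week 19", month);
        // Walk up the classification tree:
        for (DimensionElement e = week; e != null; e = e.parent) {
            System.out.println(e.name);
        }
    }
}
```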

Generic feedback to the Planning pattern:

  • The associations handling a sequence of states (predecessor) are important!
  • Name the Action and its sub-classes properly, reflecting the terms of the application domain.
  • The PlannedAction (properly renamed :-) stores info about the aspects of the SW project that will be tested. Therefore, it has to be associated with the related class(es) of the Observation & Measurement part of the model.
  • Similarly, the ImplementedAction (or its CompletedAction sub-class) stores info about measured values, i.e., it has to be associated with (other) parts of the already existing Observation & Measurement decomposition (see the sketch after this list).
  • Check for duplicate terms in your model. Do you have a "planned analysis" associated with metrics/issues as well as with QualityConfiguration? Is there any difference?
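
A minimal Java sketch of the renamed planning classes and their links to the Observation & Measurement part; class names such as PlannedAnalysis and CompletedAnalysis are assumptions, not the real model.

```java
import java.util.List;

// Sketch: the Planning pattern renamed to the application domain.
public class PlanningSketch {

    record Metric(String name) {}                        // from the O&M part of the model
    record Measurement(Metric metric, double value) {}

    // A planned analysis knows which metrics to measure and its predecessor
    // (the association handling the execution order).
    record PlannedAnalysis(List<Metric> metricsToMeasure, PlannedAnalysis predecessor) {}

    // A completed analysis references the plan and the resulting measurements.
    record CompletedAnalysis(PlannedAnalysis plan, List<Measurement> results) {}

    public static void main(String[] args) {
        Metric loc = new Metric("lines of code");
        PlannedAnalysis first = new PlannedAnalysis(List.of(loc), null);
        PlannedAnalysis second = new PlannedAnalysis(List.of(loc), first); // runs after 'first'
        CompletedAnalysis done = new CompletedAnalysis(first, List.of(new Measurement(loc, 2000)));
        System.out.println(done);
        System.out.println(second.predecessor() == first); // true
    }
}
```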

Other notes:

  • How do terms like analysis report or quality configuration relate to the Planning and Observation & Measurement parts? Do you have all these terms captured by the problem domain model (not necessarily as explicit classes)?
  • Has anyone asked whether the analyses are independent? Or do we need to plan their execution order? Where and how to store such ordering?