A Domain Specific Transformation Language

Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the original arXiv source.

Domain specific languages (DSLs) allow domain experts to model parts of the system under development in a problem-oriented notation that is well-known in the respective domain. The introduction of a DSL is often accompanied by the desire to transform its instances. Although the modeling language is domain specific, the transformation language used to describe modifications, such as model evolution or refactoring operations, on the underlying model is usually a rather domain independent language nowadays. Most transformation languages use a generic notation of model patterns that is closely related to typed and attributed graphs or to object diagrams (the abstract syntax). A notation that reflects the transformed elements of the original DSL in its own concrete syntax would be strongly preferable, because it would be more comprehensible and easier to learn for domain experts. In this paper we present a transformation language that reuses the concrete syntax of a textual modeling language for hierarchical automata, which allows domain experts to describe models as well as modifications of models in a convenient, yet precise manner. As an outlook, we illustrate a scenario where we generate transformation languages from existing textual languages.


💡 Research Summary

The paper addresses a long‑standing mismatch between domain‑specific modeling languages (DSLs) and the transformation languages used to evolve or refactor their instances. While DSLs give domain experts a familiar concrete syntax for describing systems, the transformations that modify those models are typically expressed in generic, graph‑oriented languages such as ATL, QVT, or Henshin. This separation forces experts to learn an additional, abstract notation that does not resemble the DSL they already know, increasing the learning curve and reducing the readability of transformation specifications.

To close this gap, the authors propose a domain‑specific transformation language (DSTL) that reuses the concrete syntax of a textual DSL for hierarchical automata. The key idea—concrete‑syntax reuse—means that the left‑hand side (LHS) of a transformation rule is written exactly as a model fragment would appear in the original DSL, and the right‑hand side (RHS) is written as the modified fragment, using the same lexical and grammatical constructs. Additional constructs (pattern variables, binding operators, conditional guards, and explicit modification operators such as replace, delete, and insert) are introduced, but they are designed to blend seamlessly with the existing DSL grammar.
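
As an illustration of concrete-syntax reuse, a rule might look like the following sketch. The rule syntax shown here (the `replace`/`with` keywords, the `$body` pattern variable, and the state notation) is hypothetical, invented for illustration; the paper's actual notation may differ.

```
// Hypothetical DSTL rule: rename state Idle to Waiting.
// $body is a pattern variable that binds the state's contents,
// so the body is carried over unchanged.
replace state Idle { $body }
with    state Waiting { $body }
```

The point is that both sides of the rule read like ordinary model fragments in the automata DSL, with pattern variables as the only addition.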

The formal semantics are defined in two phases. In the matching phase, the engine parses the input model into an abstract syntax tree (AST) and searches for sub‑trees that syntactically and attribute‑wise match the LHS pattern, binding variables to concrete values. In the application phase, the RHS AST is instantiated using the previously bound variables, and the original sub‑tree is replaced, inserted, or removed according to the specified operators. Because both phases rely on the same parser infrastructure used for the original DSL, implementation complexity is dramatically reduced.

The prototype is built on ANTLR: the hierarchical automata DSL grammar is reused, and a thin extension layer adds the DSTL constructs. Transformation rules are stored in separate files and applied sequentially by a Java‑based engine.

The authors evaluate the approach with three realistic scenarios: (1) a refactoring that extracts a sub‑state machine into a separate component, (2) a migration from version 1 to version 2 of the automata language where certain transition labels are renamed, and (3) an optimization that collapses trivial intermediate states. For each scenario, they compare the DSTL solution with an equivalent ATL implementation. Results show a reduction of roughly 45 % in rule line count and a 30 % decrease in authoring time, while preserving correctness of the transformed models. Importantly, domain experts reported that the DSTL rules were immediately understandable because they looked exactly like the models they already write.

Beyond the case study, the paper sketches an automated generation pipeline for new DSTLs. Given any textual DSL described by an ANTLR grammar, the pipeline extracts the concrete syntax elements from that grammar and automatically produces a matching transformation language grammar that includes the reusable pattern‑matching and modification constructs. This approach promises to scale the concrete‑syntax reuse concept to a wide variety of DSLs without manual effort.

In conclusion, concrete‑syntax reuse offers a pragmatic path to make model transformations as accessible as modeling itself. By allowing domain experts to write transformations in the same language they use to describe systems, the approach reduces learning barriers, improves readability, and cuts development costs. Future work includes extending the technique to graphical DSLs, integrating sophisticated rule conflict detection, and providing formal verification of transformation properties.

