Fancy Some Chips for Your TeaStore? Modeling the Control of an Adaptable Discrete System
When designing new web applications, developers must cope with different kinds of constraints related to the resources they rely on: software, hardware, network, online micro-services, or any combination of these. Together, these entities form a complex system of communicating interdependent processes, physical or logical. It is highly desirable that such a system be robust, so as to provide a good quality of service. In this paper we introduce Chips, a language that aims at facilitating the design of models made of various entwined components. It allows applications to be described in the form of functional blocks. Chips mixes notions from control theory and from general-purpose programming languages to generate robust component-based models. This paper presents how Chips can be used to systematically design, model and analyse a complex system project, using a variation of the Adaptable TeaStore application as a running example.
💡 Research Summary
The paper introduces Chips, a domain‑specific language designed to model, simulate, and generate code for adaptable discrete systems, especially cloud‑based web applications. Chips stands for Control of Hierarchical Interconnected Programmable Systems and deliberately blends two well‑established engineering concepts: Control Theory (CT) and Aggregate Programming (AP).
From the CT perspective, each component is represented as a mathematical function, with explicit goal signals (e.g., desired response time) and knob signals (actuators that can be tuned at runtime). The system’s behavior is thus expressed as a set of differential‑like equations, even though the underlying implementation is discrete. From the AP side, Chips inherits the idea of field‑based computation: signals can be split or merged across many components using the splitplug and mergeplug constructs, allowing collective communication patterns that are resilient to structural changes.
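The split/merge idea can be illustrated with a small Python sketch (not Chips syntax; the function bodies here are assumptions that only mimic the collective-communication pattern the paper attributes to splitplug and mergeplug):

```python
# Hypothetical Python sketch of AP-style signal splitting/merging.
# Chips' actual splitplug/mergeplug semantics are defined by the
# language; this only illustrates the fan-out/fan-in idea.

def splitplug(signal, n):
    """Fan one upstream signal out to n downstream components."""
    return [signal] * n

def mergeplug(signals, reduce=max):
    """Fan many component signals in to a single aggregate value."""
    return reduce(signals)

# Example: aggregate per-replica response times. The merge works for
# any number of replicas, which is what makes the pattern resilient
# to structural changes such as replicas joining or leaving.
replica_latencies = [120, 95, 310]            # ms, one per live replica
worst_case = mergeplug(replica_latencies)     # worst observed latency
goal_inputs = splitplug(worst_case, len(replica_latencies))
```

Because the aggregation is written over a collection rather than over named peers, adding or removing a replica changes only the length of the input list, not the wiring.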
Technically, Chips is a partially synchronous language. It defines three categories of functions:
- pure – reusable expressions without side effects, used for constants or helper calculations;
- logical – stateful blocks that may contain sequential C‑like statements, modelling software modules;
- physical – blocks that describe actual hardware resources (CPU cores, memory, sensors, actuators).
All blocks execute atomically within a global “then” phase, guaranteeing that outputs become visible to other blocks only after the whole phase finishes. This design preserves the simplicity of synchronous data‑flow while allowing developers to write familiar imperative code inside each block.
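One way to picture the atomic "then" phase is double buffering: every block reads the pre-phase state, and all outputs are committed at once when the phase ends. The sketch below is an assumption about the mechanism, written in Python rather than Chips, purely to show why execution order inside a phase cannot matter:

```python
# Double-buffered synchronous step: a sketch of the rule that block
# outputs become visible only after the whole "then" phase finishes.
# (Chips' runtime may realise the phase differently; this is only an
# illustration of the observable semantics.)

def run_then_phase(blocks, state):
    """Run every block against the same pre-phase state; commit after."""
    staged = {}
    for name, block in blocks.items():
        # Each block reads the *old* state, so iteration order is
        # irrelevant to the result.
        staged[name] = block(state)
    state.update(staged)   # outputs become visible only now
    return state

# Two blocks that each read the other's output: with double buffering,
# both see the values from the previous phase, never a half-updated mix.
blocks = {
    "a": lambda s: s["b"] + 1,
    "b": lambda s: s["a"] + 1,
}
state = run_then_phase(blocks, {"a": 0, "b": 0})
```

After one phase both signals equal 1, regardless of which block ran first; that is the data-flow simplicity the paragraph above refers to.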
A distinctive feature is the tight integration of hardware description files. Using an import statement, a developer can attach a JSON‑style specification that lists processor count, clock frequency, memory size, etc. The link operator in the SYSTEM section binds logical/physical blocks to concrete devices, enabling compile‑time checks for resource adequacy and automatic placement decisions.
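A hardware description of the kind mentioned above might look like the following fragment. The field names and layout are illustrative assumptions, not Chips' actual schema, which the paper defines:

```json
{
  "device": "app-server-1",
  "cpu": { "cores": 8, "frequency_ghz": 2.4 },
  "memory_mb": 16384
}
```

Binding blocks to such a specification via the link operator is what lets the compiler check, before deployment, that the declared resources can actually host the logical blocks placed on them.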
Chips models are compiled to BIP (Behavior Interaction Priority), a mature framework for interacting automata. The compilation yields a BIP model that can be fed to BIP’s verification tools (state‑space exploration, deadlock detection) and to its code‑generation backend, producing correct‑by‑construction C or Java code. For structural reconfiguration, Chips can target DR‑BIP, an extension that supports dynamic addition/removal of components at runtime.
The paper demonstrates the entire workflow on a variant of the Adaptable TeaStore application, a typical e‑commerce site. The authors follow a six‑step CT‑based methodology:
- Goal identification – keep user‑perceived response time within a target bound.
- Knob identification – cache size, image resizing toggle, authentication mode, recommendation algorithm parameters, etc. (Table 1 lists each module and its associated knob).
- Model design – each TeaStore module (Web UI, Persistence, Image Provider, Authentication, Recommender) is expressed as a logical block; the physical server is a physical block; pure functions factor out common calculations.
- Controller design – a logical controller reads the current response‑time signal and computes new knob values using a proportional‑integral (PI) law; the controller itself is a pure function that can be swapped for more sophisticated adaptive algorithms.
- Integration – the controller’s outputs are wired to the appropriate module knobs via splitplug/mergeplug; hardware links are declared, and the whole system is wrapped in a SYSTEM block.
- Testing & validation – the Chips model is compiled to BIP, simulated with varying request loads, and the adaptive behavior is measured. Results show that dynamic cache resizing reduces average response time by roughly 30 % under peak load, while maintaining service availability when individual servers become temporarily unavailable.
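The controller-design step above can be sketched with a minimal discrete PI law in Python. The gains, set-point, and the mapping of the output to a cache-size knob are illustrative assumptions; in the paper the controller is written as a Chips function and tuned for the TeaStore:

```python
# Minimal discrete proportional-integral (PI) controller, sketching
# the controller-design step. Gains kp/ki and the set-point are
# illustrative, not the paper's tuning.

class PIController:
    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki = kp, ki
        self.setpoint = setpoint     # goal signal: target response time (ms)
        self.integral = 0.0          # accumulated error across phases

    def step(self, measured):
        """Compute a knob correction from the current measurement."""
        error = self.setpoint - measured
        self.integral += error
        return self.kp * error + self.ki * self.integral

ctl = PIController(kp=0.5, ki=0.1, setpoint=200.0)
u1 = ctl.step(300.0)   # response time above target -> negative correction
u2 = ctl.step(250.0)   # error shrinking, but integral term keeps pushing
```

The proportional term reacts to the current error while the integral term removes steady-state offset; swapping this law for a more sophisticated adaptive one only changes the controller block, exactly as the methodology intends.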
Beyond the case study, the authors discuss how Chips differs from existing synchronous languages (Lustre, Heptagon) by allowing imperative inner code and by providing explicit hardware‑software co‑design facilities. They also compare with other model‑based adaptation frameworks, highlighting Chips’ unique combination of CT‑driven goal‑knob formalism, AP‑inspired signal aggregation, and BIP‑based formal verification.
The paper concludes with future work: extending the language to support asynchronous events, richer transactional semantics, tighter integration with DR‑BIP for on‑the‑fly reconfiguration, and applying Chips to more complex cloud‑native micro‑service architectures. Overall, Chips is positioned as a bridge between high‑level control‑theoretic design and low‑level implementation, offering a unified pipeline from specification to verified executable code for adaptable distributed systems.