A Brief Review on Models for Performance Evaluation in DSS Architecture
Distributed software systems are now widely used in real-time operations and modern enterprise applications. Performance is one of the most important and essential quality-of-service attributes of distributed software. Performance models can be employed at early stages of the software development cycle to characterize the quantitative behavior of software systems. In this research, performance models based on the fuzzy logic approach, the queuing network approach, and the Petri net approach are reviewed briefly. One of the most common practices in the performance analysis of distributed software systems is to translate UML diagrams into mathematical modeling languages that describe distributed systems, such as queuing networks or Petri nets; several such approaches are covered in this review. The attributes used for performance modeling in the literature are mostly machine-based, whereas end-user and client parameters for performance evaluation are not covered extensively. Future research could therefore focus on developing hybrid models that capture user decision variables, making system performance evaluation more user-driven.
💡 Research Summary
The paper provides a concise yet comprehensive review of performance evaluation models applicable to Distributed Software Systems (DSS), which are increasingly prevalent in real‑time operations and modern enterprise applications. It begins by underscoring performance as a critical quality‑of‑service attribute and argues that early‑stage quantitative modeling can guide design decisions, resource allocation, and capacity planning. Three principal modeling paradigms are examined: fuzzy‑logic‑based models, classical queuing‑network (QN) models, and Petri‑net models.
Fuzzy‑logic approaches excel at handling uncertainty and imprecise inputs, making them suitable for incorporating user‑perceived metrics, environmental variability, and subjective quality criteria. Their main drawback lies in the heavy reliance on expert‑crafted membership functions and rule bases, which can limit reproducibility and automation.

QN models, on the other hand, offer a well‑established analytical framework for estimating throughput, response time, and utilization by representing service requests as stochastic flows through interconnected service stations. While powerful for machine‑centric performance indicators, QNs struggle with complex concurrency patterns, non‑linear resource constraints, and highly dynamic workloads, often requiring approximations or model extensions.

Petri‑net models provide a graphical, state‑transition perspective that captures concurrency, synchronization, and resource contention with high fidelity. However, they suffer from state‑space explosion in large‑scale systems, making verification and quantitative analysis computationally intensive.
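The analytical flavor of QN models can be illustrated with the simplest case, a single M/M/1 service station, whose steady‑state utilization, mean response time, and mean queue length follow directly from the arrival rate λ and service rate μ. This is a minimal sketch of the standard textbook formulas; the numeric rates below are illustrative and do not come from the paper under review.

```python
def mm1_metrics(arrival_rate: float, service_rate: float):
    """Steady-state metrics of an M/M/1 queue.

    arrival_rate: lambda, mean request arrivals per unit time
    service_rate: mu, mean requests the server completes per unit time
    """
    if arrival_rate >= service_rate:
        raise ValueError("Unstable queue: arrival rate must be below service rate")
    utilization = arrival_rate / service_rate            # rho = lambda / mu
    mean_response = 1.0 / (service_rate - arrival_rate)  # W = 1 / (mu - lambda)
    mean_queue_len = utilization**2 / (1.0 - utilization)  # Lq = rho^2 / (1 - rho)
    return utilization, mean_response, mean_queue_len

# Illustrative workload: 8 requests/s arriving at a server handling 10 requests/s
rho, W, Lq = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
# rho = 0.8, W = 0.5 s, Lq = 3.2 waiting requests
```

Multi-station QN models extend this idea by composing such stations and routing request flows between them, which is where the approximations mentioned above become necessary.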
The authors also discuss the prevalent practice of translating UML artifacts—such as Use‑Case, Sequence, and Deployment diagrams—into mathematical representations like QNs or Petri nets. This translation bridges the gap between developer‑friendly visual models and rigorous performance analysis tools, but the mapping rules predominantly focus on machine‑level attributes (CPU cycles, memory size, network bandwidth). Consequently, end‑user and client‑side parameters—such as perceived response‑time thresholds, interaction frequencies, and priority weights—are largely omitted from existing literature.
Identifying this gap, the paper advocates for hybrid modeling strategies that combine the uncertainty handling of fuzzy logic with the analytical strength of QNs and the concurrency modeling of Petri nets. Such hybrids would explicitly incorporate user decision variables, enabling performance evaluation that is more user‑driven and reflective of real‑world service quality. Potential research directions include automated extraction of fuzzy rules from user behavior logs, weighted queuing models that embed user‑defined importance factors, and extended Petri‑net formulations where tokens represent user sessions with individualized performance expectations.
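As a minimal sketch of how fuzzy sets could encode user‑perceived response time in such a hybrid model, consider triangular and ramp membership functions over measured latency. The set names ("fast", "acceptable", "slow") and the threshold values are hypothetical assumptions for illustration, not definitions taken from the paper.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function: 0 outside (a, c), peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def perceived_speed(latency_s: float) -> dict:
    """Map a measured response time (seconds) to fuzzy membership degrees.

    The break-points (0.5 s, 0.2-2.0 s, 1.5 s) are hypothetical; in a real
    hybrid model they might be learned from user behavior logs, as the
    paper's future-work discussion suggests.
    """
    return {
        "fast": max(0.0, 1.0 - latency_s / 0.5),           # ramp down to 0 at 0.5 s
        "acceptable": triangular(latency_s, 0.2, 1.0, 2.0),
        "slow": min(1.0, max(0.0, (latency_s - 1.5) / 1.0)),  # ramp up past 1.5 s
    }

degrees = perceived_speed(0.8)
# degrees == {"fast": 0.0, "acceptable": 0.75, "slow": 0.0}
```

Degrees like these could then weight queue priorities or token attributes, giving the user‑driven evaluation the paper calls for.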
In conclusion, while each of the surveyed models contributes valuable capabilities, their current implementations are biased toward machine‑centric metrics and overlook the nuanced influence of end‑user behavior. Future work should therefore focus on developing integrated, user‑aware performance models that can provide more accurate predictions and support adaptive optimization in distributed software environments.