Adaptive Tuning Algorithm for Performance tuning of Database Management System

Notice: This research summary and analysis were automatically generated using AI technology. For full accuracy, please refer to the original arXiv source.

Performance tuning of Database Management Systems (DBMS) is both complex and challenging, as it involves identifying and altering several key tuning parameters. The quality of tuning and the extent of the performance gain achieved depend greatly on the skill and experience of the Database Administrator (DBA). Neural networks, with their ability to learn and to adapt to dynamically changing inputs, are ideal candidates for this tuning task. In this paper, a novel tuning algorithm based on neural-network-estimated tuning parameters is presented. Key performance indicators are proactively monitored and fed as input to the neural network, and the trained network estimates suitable sizes for the buffer cache, shared pool, and redo log buffer. The tuner then alters these tuning parameters, applying the estimated values through a rate-change computing algorithm. Preliminary results show that the proposed method is effective in improving query response time for a variety of workload types.


💡 Research Summary

The paper proposes an autonomous performance‑tuning framework for database management systems that leverages a feed‑forward neural network to predict optimal values for key memory parameters such as buffer cache, shared pool, and redo‑log buffer. System‑level key performance indicators—buffer miss ratio, number of active processes, and average table size—are continuously extracted from the DBMS log by a data‑mining component and compressed into a lightweight feature set. These features serve as inputs to a multilayer perceptron with three inputs, a hidden layer of 100 sigmoid neurons, and two outputs (buffer cache size and shared pool size). The network is trained on a modest data set of 100 samples (epoch = 100, learning rate = 0.4).
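The network described above can be sketched in plain NumPy. The architecture and hyperparameters (3 inputs, 100 sigmoid hidden units, 2 outputs, 100 training samples, 100 epochs, learning rate 0.4) are taken from the summary; the synthetic training data, the linear output layer, and the mean-squared-error objective are assumptions, since the paper does not specify them.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Architecture from the paper: 3 inputs (buffer miss ratio, active
# processes, average table size), 100 sigmoid hidden units, 2 outputs
# (buffer cache size, shared pool size).
n_in, n_hidden, n_out = 3, 100, 2
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
b2 = np.zeros(n_out)

# Placeholder training set: 100 samples, as in the paper. Real inputs
# would be KPI vectors mined from the DBMS log, normalized to [0, 1].
X = rng.random((100, n_in))
Y = rng.random((100, n_out))

lr = 0.4                               # learning rate from the paper
for epoch in range(100):               # epoch = 100, as in the paper
    H = sigmoid(X @ W1 + b1)           # hidden-layer activations
    P = H @ W2 + b2                    # linear output layer (assumption)
    err = P - Y
    # Backpropagate the mean-squared error.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * H * (1.0 - H)  # sigmoid derivative
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def estimate(kpis):
    """Predict (buffer cache size, shared pool size) from a KPI vector."""
    return sigmoid(np.asarray(kpis) @ W1 + b1) @ W2 + b2
```

In deployment the two outputs would be denormalized back into memory sizes before being handed to the tuner.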

Once the network produces an estimate, a simple rate‑change algorithm adjusts the actual DBMS parameters. If the observed response‑time change (ΔRtime) exceeds a predefined threshold (Rth), the buffer size is increased to the next granule; if ΔRtime is negative and below the threshold, the size is decreased. This rule‑based tuner limits the frequency of adjustments to avoid destabilizing the system.
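The rate-change rule can be expressed as a small pure function. The granule size, the symmetric dead band around Rth, and the min/max clamps are assumptions added here; the paper only states that the buffer grows or shrinks by one granule when ΔRtime crosses the threshold.

```python
GRANULE_MB = 4  # adjustment step; one granule per change (size assumed)

def tune_buffer(current_mb, delta_rtime, r_th, min_mb=4, max_mb=64):
    """One step of the rate-change rule (a sketch, not the paper's code).

    delta_rtime: observed change in query response time since the last
    adjustment; r_th: the predefined response-time threshold Rth.
    """
    if delta_rtime > r_th:
        # Response time degraded past the threshold: grow by one granule.
        return min(current_mb + GRANULE_MB, max_mb)
    if delta_rtime < -r_th:
        # Response time improved well past the threshold: reclaim memory.
        return max(current_mb - GRANULE_MB, min_mb)
    return current_mb  # inside the dead band: leave the parameter alone
```

Holding the size fixed inside the dead band is what limits the frequency of adjustments and keeps the tuner from oscillating.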

Experimental validation was performed on Oracle 9i using a TPC‑C‑like OLTP workload. Results show that increasing the buffer cache from 4 MB to 16 MB reduces query response time from roughly 120 ms to near zero, and the neural network correctly anticipates the need for larger buffers when the number of concurrent users exceeds twelve. The approach demonstrates a measurable reduction in response time while imposing minimal monitoring overhead, thereby relieving the DBA from manual tuning tasks.

The authors acknowledge several limitations: the training set is small and specific to a single DBMS, the granularity of parameter changes is coarse, and the impact of log compression on system resources is not fully quantified. They suggest future work that includes expanding the training corpus across multiple DBMS platforms, incorporating multi‑objective optimization (e.g., balancing latency, memory usage, and I/O), refining the granularity of adjustments, and adopting online learning techniques for continuous adaptation.

