Benchmarking OODBs with a Generic Tool
📝 Abstract
We present in this paper a generic object-oriented benchmark (OCB: the Object Clustering Benchmark) that has been designed to evaluate the performance of Object-Oriented Databases (OODBs), and more specifically the performance of clustering policies within OODBs. OCB is generic because its sample database may be customized to fit any of the databases introduced by the main existing benchmarks, e.g., OO1 (Object Operation 1) or OO7. The first version of OCB was purposely clustering-oriented due to a clustering-oriented workload, but OCB has since been thoroughly extended to suit other purposes. Finally, OCB's code is compact and easily portable. OCB has been validated through two implementations: one within the O2 OODB and another within the Texas persistent object store. The performance of a specific clustering policy called DSTC (Dynamic, Statistical, Tunable Clustering) has also been evaluated with OCB.
📄 Content
Benchmarking OODBs with a Generic Tool – Submission to JDM
Jérôme Darmont, Michel Schneider
Laboratoire d'Informatique (LIMOS)
Université Blaise Pascal – Clermont-Ferrand II
Complexe Scientifique des Cézeaux
63177 Aubière Cedex, FRANCE
E-mail: darmont@libd2.univ-bpclermont.fr, schneider@cicsun.univ-bpclermont.fr
Phone: (33) 473-407-740 – Fax: (33) 473-407-444
Abstract: We present in this paper a generic object-oriented benchmark (OCB: the Object Clustering Benchmark) that has been designed to evaluate the performance of Object-Oriented Databases (OODBs), and more specifically the performance of clustering policies within OODBs. OCB is generic because its sample database may be customized to fit any of the databases introduced by the main existing benchmarks, e.g., OO1 (Object Operation 1) or OO7. The first version of OCB was purposely clustering-oriented due to a clustering-oriented workload, but OCB has since been thoroughly extended to suit other purposes. Finally, OCB's code is compact and easily portable. OCB has been validated through two implementations: one within the O2 OODB and another within the Texas persistent object store. The performance of a specific clustering policy called DSTC (Dynamic, Statistical, Tunable Clustering) has also been evaluated with OCB.

Keywords: Object-Oriented Databases, Clustering, Performance Evaluation, Benchmarking.
INTRODUCTION
The need to evaluate the performance of Object-Oriented Database Management Systems (OODBMSs) is important to both designers and users. Performance evaluation helps designers determine elements of architecture, choose between caching strategies, and select an Object Identifier (OID) type, among other decisions. It helps them validate or refute hypotheses regarding the actual behavior of an OODBMS. Thus, performance evaluation is an essential component in the development process of efficient and well-designed object stores. Users may also employ performance evaluation, either to compare the efficiency of different technologies before selecting an OODBMS or to tune a system.

The work presented in this paper was initially motivated by the evaluation of object clustering techniques. The benefits induced by such techniques on global performance are widely acknowledged, and numerous clustering strategies have been proposed. As far as we know, however, there is no generic approach allowing for their comparison. This problem is of interest to both designers (to set up the corresponding functionalities in the system kernel) and users (for performance tuning).

There are different approaches to evaluating the performance of a given system: experimentation, simulation, and mathematical analysis. This paper focuses only on the first two. Mathematical analysis is not considered because it invariably relies on strong simplifying hypotheses (Benzaken, 1990; Gardarin et al., 1995) and its results may well differ from reality.
Experimentation on the real system is the most natural approach and a priori the simplest to carry out. However, the studied system must have been acquired and installed, and a real database must have been implanted in it. This database must also be representative of the future exploitation of the system. Total investment and exploitation costs may be quite high, which can be a drawback when selecting a product.

Simulation is often used as a substitute for, or a complement to, experimentation. It requires neither acquiring nor installing the real system. It can even be performed on a system still in development (a priori evaluation). The execution of a simulation program is generally much faster than experimentation, and investment and exploitation costs are very low. However, this approach requires the design of a functioning model of the studied system, and the reliability of the results directly depends on the quality and validity of this model. Thus, the main difficulty is to elaborate and validate the model; a modelling methodology can help and secure these tasks.

Experimentation and simulation both require a workload model (a database and the operations to run on it) and a set of performance metrics. These elements are traditionally provided by benchmarks. Though the value of benchmarks is well recognized for experimentation, simulation studies usually use workloads dedicated to a given study rather than workloads suited to performance comparisons. We believe that benchmarking techniques can also be useful in simulation. Benchmarking can help validate a simulation model against experimental results, or support a mixed approach in which performance criteria requiring precision are measured by experimentation while criteria that do not are evaluated by simulation.
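To make the notion of a workload model concrete, the sketch below shows what a parameterizable benchmark harness might look like: a set of tunable database parameters, a generated object graph standing in for the sample database, random reference traversals standing in for the operations, and simple timing metrics. This is a minimal illustration only; the parameter names, sizes, and operations are hypothetical and are not OCB's actual specification.

```python
import random
import time
from dataclasses import dataclass

@dataclass
class WorkloadParams:
    """Tunable parameters of a generic sample database (illustrative values)."""
    num_objects: int = 1000    # size of the sample database
    fan_out: int = 3           # inter-object references per object
    num_traversals: int = 100  # operations to run against the database
    depth: int = 4             # length of each random traversal
    seed: int = 42             # fixed seed, so runs are repeatable

def build_database(p: WorkloadParams) -> list:
    """Generate a random object graph: entry i lists the ids object i references."""
    rng = random.Random(p.seed)
    return [[rng.randrange(p.num_objects) for _ in range(p.fan_out)]
            for _ in range(p.num_objects)]

def run_workload(db: list, p: WorkloadParams) -> dict:
    """Run random reference traversals and collect simple performance metrics."""
    rng = random.Random(p.seed + 1)
    visited = 0
    start = time.perf_counter()
    for _ in range(p.num_traversals):
        obj = rng.randrange(len(db))
        for _ in range(p.depth):
            obj = rng.choice(db[obj])  # follow one random reference
            visited += 1
    elapsed = time.perf_counter() - start
    return {"objects_accessed": visited, "elapsed_s": elapsed}

params = WorkloadParams()
db = build_database(params)
metrics = run_workload(db, params)
```

Because every parameter is explicit, the same harness can be re-run with different database shapes or operation mixes, which is the property that makes a benchmark generic rather than tied to one schema.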