📝 Original Info
- Title: Impact of Data-Oriented and Object-Oriented Design on Performance and Cache Utilization with Artificial Intelligence Algorithms in Multi-Threaded CPUs
- ArXiv ID: 2512.07841
- Date: 2025-11-22
- Authors: Gabriel M. Arantes, Richard F. Pinto, Bruno L. Dalmazo, Eduardo N. Borges, Viviane L. D. de Mattos, Rafael A. Berri (FURG); Giancarlo Lucca (UCPel); Fabian C. Cardoso (UniRV)
📝 Abstract
The growing performance gap between multi-core CPUs and main memory necessitates hardware-aware software design paradigms. This study provides a comprehensive performance analysis of Data-Oriented Design (DOD) versus the traditional Object-Oriented Design (OOD), focusing on cache utilization and efficiency in multi-threaded environments. We developed and compared four distinct versions of the A* search algorithm: single-threaded OOD (ST-OOD), single-threaded DOD (ST-DOD), multi-threaded OOD (MT-OOD), and multi-threaded DOD (MT-DOD). The evaluation was based on metrics including execution time, memory usage, and CPU cache misses. In multi-threaded tests, the DOD implementation demonstrated considerable performance gains, with faster execution times and a lower number of raw system calls and cache misses. While OOD occasionally showed marginal advantages in memory usage or percentage-based cache miss rates, DOD's efficiency in data-intensive operations was more evident. Furthermore, our findings reveal that for a fine-grained task like the A* algorithm, the overhead associated with thread management led to single-threaded versions significantly outperforming their multi-threaded counterparts in both paradigms. We conclude that even when performance differences appear subtle in simple algorithms, the consistent advantages of DOD in critical metrics highlight its foundational architectural superiority, suggesting it is a more effective approach for maximizing hardware efficiency in complex, large-scale AI and parallel computing tasks.
📄 Full Content
Impact of Data-Oriented and Object-Oriented Design on Performance and Cache Utilization with Artificial Intelligence Algorithms in Multi-Threaded CPUs
Gabriel M. Arantes∗, Richard F. Pinto∗, Bruno L. Dalmazo∗, Eduardo N. Borges∗, Viviane L. D. de Mattos∗, Rafael A. Berri∗
∗Federal University of Rio Grande (FURG), Rio Grande, Brazil
Giancarlo Lucca†
†Catholic University of Pelotas (UCPel), Pelotas, Brazil
Fabian C. Cardoso‡
‡University of Rio Verde (UniRV), Rio Verde, Brazil
Index Terms—data-oriented design, object-oriented design, multi-threading, performance optimization, cache efficiency
I. INTRODUCTION
In the modern world, computers are ubiquitous, used in nearly every aspect of daily life. Their performance and capabilities improve continuously, which makes their efficient use increasingly necessary and important [1].
The authors would like to thank FAPERGS (24/2551-0001396-2, 23/2551-0000773-8), CNPq (305805/2021-5) and FAPERGS/CNPq (23/2551-0000126-8).
In particular, the CPU (Central Processing Unit) is evolving at a rapid pace, with annual improvements in processing speed and in its on-chip memory, known in the CPU context as the cache. The cache is organized into levels that trade off speed against size, whereas other components have not evolved at the same rate [2]. Storage outside the CPU is significantly slower; in this paper we focus on RAM, which is approximately 100 times slower than the CPU [3].
A solution to mitigate this performance gap may lie in more effective use of the cache through the adoption of Data-Oriented Design/Data-Oriented Programming (DOD/DOP) patterns, as opposed to the current industry standard of Object-Oriented Design/Object-Oriented Programming (OOD/OOP). OOP tends to use the cache inefficiently, which hurts performance and scalability, whereas DOD avoids these issues [4]. OOP organizes data manipulation around object classes and well-encapsulated functions with multiple abstraction layers [5]. In contrast, DOD makes better use of data by separating it from the code, thus improving data access patterns and allowing greater scalability and easier code maintenance [6].
The remainder of the paper is organized as follows. Section
II presents a literature review. The methodology is presented
in Section III, and details of its results in Section IV. Finally,
in Section V, some final remarks are made, and directions for
future research are indicated.
II. LITERATURE REVIEW
This section presents the results of a review of the scientific literature conducted to develop this paper.
A. Articles and Books
Computer architecture is a vast and constantly evolving
field, with numerous significant contributions over the decades.
In this context, the book "Computer Architecture" by Hennessy and Patterson [7] highlights the importance of memory in computing, particularly the role of the cache in enhancing overall system performance. They argue that systems with more efficient caches tend to perform better overall; although they do not provide a study focused specifically on the cache, they support this assertion through a broad analysis of computer architectures.
The relevance of cache is corroborated by various other
studies and publications. For example, Culler et al. [8], in
”Parallel C
…(Full text truncated)…