Using Information Theory to Study the Efficiency and Capacity of Computers and Similar Devices

Reading time: 6 minutes

📝 Original Info

  • Title: Using Information Theory to Study the Efficiency and Capacity of Computers and Similar Devices
  • ArXiv ID: 1003.3619
  • Date: 2010-03-19
  • Authors: **Boris Ryabko** (Siberian State University of Telecommunications and Informatics; Institute of Computational Technologies of the Siberian Branch of the Russian Academy of Sciences, Novosibirsk, Russia)

📝 Abstract

We address the problems of estimating the computer efficiency and the computer capacity. We define the computer efficiency and capacity and suggest a method for their estimation based on the analysis of processor instructions and the kinds of accessible memory. It is shown how the suggested method can be applied to estimate the computer capacity. In particular, this consideration gives a new look at the organization of the memory of a computer. The obtained results can be of some interest for practical applications.

📄 Full Content

Keywords: computer capacity, computer efficiency, Shannon theory, cache memory, Information Theory

1 Introduction

We address the problem of what the efficiency (or performance) and the capacity of a computer are and how they can be estimated. More precisely, we consider a computer with a certain set of instructions and several kinds of memory. What is the computer capacity if we know the execution time of each instruction and the speed of each kind of memory? What is the computer efficiency if the computer is used for solving problems of a certain kind (say, matrix multiplications)? On the one hand, the questions about the computer efficiency and capacity are quite natural; on the other hand, to the best of our knowledge, computer science does not give answers to those questions.

The first goal of this paper is to suggest a reasonable definition of the computer efficiency and capacity and methods for their estimation. We will mainly consider computers, but our approach can be applied to all devices which contain processors, memory and instructions. (Among those devices we mention mobile telephones and routers.) Second, we describe a method for estimating the computer capacity and apply it to several examples which are of some theoretical and practical interest. In particular, this consideration gives a new look at the organization of a computer memory. The suggested approach is based on the concept of Shannon entropy, the capacity of a discrete noiseless channel and some other ideas of C. Shannon [12] that underlie Information Theory.

2 The computer efficiency and capacity

2.1 The basic concepts and definitions

Let us first briefly describe the main point of the suggested approach and the definitions. For a start, we will consider a simplified variant of a computer, which consists of a set of instructions $I$ and an accessible memory $M$.

We suppose that at the initial moment there are a program and data, which can be considered as binary words $P$ and $D$, located in the memory of the computer $M$. In what follows we will call the pair $P$ and $D$ a computer task. A computer task $\langle P, D \rangle$ determines a certain sequence of instructions $X(P, D) = x_1 x_2 x_3 \ldots$, $x_i \in I$. (It is supposed that an instruction may contain an address of a memory location in $M$, the index of a register, etc.) For example, if the program $P$ contains a loop which will be executed ten times, then the sequence $X$ will contain the body of this loop repeated ten times. We say that two computer tasks $\langle P_1, D_1 \rangle$ and $\langle P_2, D_2 \rangle$ are different if the sequences $X(P_1, D_1)$ and $X(P_2, D_2)$ are different.

Let us denote the execution time of an instruction $x$ by $\tau(x)$. Then the execution time $\tau(X)$ of a sequence of instructions $X = x_1 x_2 x_3 \ldots x_t$ is given by

$$\tau(X) = \sum_{i=1}^{t} \tau(x_i).$$

The key observation is as follows: the number of different computer tasks whose execution time equals $T$ is upper-bounded by the size of the set of all instruction sequences whose execution time equals $T$, i.e.

$$\nu(T) \le N(T), \qquad (1)$$

where $\nu(T)$ is the number of different problems whose execution time equals $T$, and

$$N(T) = |\{X : \tau(X) = T\}|. \qquad (2)$$

Hence,

$$\log \nu(T) \le \log N(T). \qquad (3)$$

(Here and below $\log x \equiv \log_2 x$, and $|Y|$ is the number of elements of $Y$ if $Y$ is a set, and the length of $Y$ if $Y$ is a word.) In other words, the total number of computer tasks executed in time $T$ is upper-bounded by (2). Based on this consideration we give the following definition.

Definition 1. Let there be a computer with a set of instructions $I$, and let $\tau(x)$ be the execution time of an instruction $x \in I$. The computer capacity $C(I)$ is defined as follows:

$$C(I) = \lim_{T \to \infty} \frac{\log N(T)}{T}, \qquad (4)$$

where $N(T)$ is defined in (2). (That this limit always exists can be proven based on the lemma of M. Fekete [9].)
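Definition 1 lends itself to a direct numerical check. Below is a minimal Python sketch under stated assumptions: the instruction set, mnemonics and cycle counts are invented for illustration (the paper does not fix a particular processor), and execution times are integers, so $N(T)$ can be tabulated by splitting off the last instruction of a sequence, which gives the recursion $N(T) = \sum_{x \in I} N(T - \tau(x))$ with $N(0) = 1$. For comparison, the sketch also computes $\log X_0$, where $X_0$ is the largest real solution of $\sum_{x \in I} X^{-\tau(x)} = 1$; that closed form is Shannon's capacity of a discrete noiseless channel [12], on which the paper's approach builds, though its exact role here lies in the truncated part of the text, so treat that comparison as an assumption.

```python
from math import log2

# Hypothetical instruction set I with execution times tau(x) in clock
# cycles. Mnemonics and timings are invented purely for illustration.
TAU = {"add": 1, "sub": 1, "jmp": 2, "mul": 3, "load": 4, "store": 4}

def tau_of_sequence(X):
    """tau(X): execution time of an instruction sequence X = x1 ... xt,
    i.e. the sum of the individual instruction times."""
    return sum(TAU[x] for x in X)

def count_sequences(T):
    """N(t) for t = 0..T, where N(t) = |{X : tau(X) = t}|.
    Splitting off the last instruction of a sequence gives
    N(t) = sum over x in I of N(t - tau(x)), with N(0) = 1."""
    N = [0] * (T + 1)
    N[0] = 1
    for t in range(1, T + 1):
        N[t] = sum(N[t - c] for c in TAU.values() if c <= t)
    return N

def capacity_root(tol=1e-12):
    """log2(X0), where X0 is the largest real solution of
    sum over x in I of X^(-tau(x)) = 1; found by bisection,
    since the left-hand side is strictly decreasing for X > 1."""
    f = lambda X: sum(X ** (-c) for c in TAU.values()) - 1.0
    lo, hi = 1.0 + 1e-9, 2.0
    while f(hi) > 0:            # expand until the root is bracketed
        hi *= 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return log2((lo + hi) / 2.0)

# A toy "computer task": the instruction trace X(P, D) of some <P, D>.
print(tau_of_sequence(["load", "load", "mul", "add", "store", "jmp"]))  # 18

# Finite-T estimate of the capacity (4) versus the closed-form root.
T = 2000
N = count_sequences(T)
print(f"log N(T)/T at T = {T}: {log2(N[T]) / T:.6f} bits/cycle")
print(f"log2 of characteristic root: {capacity_root():.6f} bits/cycle")
```

For this toy instruction set the finite-$T$ ratio $\log N(T)/T$ approaches the root-based value as $T$ grows, which is exactly the convergence the limit in (4) asserts.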
The next question to be investigated is the definition of the computer efficiency (or performance) when a computer is used for solving problems of a certain kind. For example, one computer can be a Web server, another can be used for solving differential equations, etc. Certainly, the computer efficiency depends on the problems the computer has to solve. In order to model this sit-

…(Full text truncated)…

Reference

This content is AI-processed based on ArXiv data.
