How good are MatLab, Octave and Scilab for Computational Modelling?

How good are MatLab, Octave and Scilab for Computational Modelling?
Notice: This research summary and analysis were automatically generated using AI technology. For absolute accuracy, please refer to the [Original Paper Viewer] below or the Original ArXiv Source.

In this article we test the accuracy of three platforms used in computational modelling: MatLab, Octave and Scilab, running on i386 architecture and three operating systems (Windows, Ubuntu and Mac OS). We submitted them to numerical tests using standard data sets and using the functions provided by each platform. A Monte Carlo study was conducted in some of the datasets in order to verify the stability of the results with respect to small departures from the original input. We propose a set of operations which include the computation of matrix determinants and eigenvalues, whose results are known. We also used data provided by NIST (National Institute of Standards and Technology), a protocol which includes the computation of basic univariate statistics (mean, standard deviation and first-lag correlation), linear regression and extremes of probability distributions. The assessment was made comparing the results computed by the platforms with certified values, that is, known results, computing the number of correct significant digits.


💡 Research Summary

The paper conducts a systematic assessment of numerical accuracy across three widely used computational platforms—MatLab, Octave, and Scilab—on identical i386 hardware running three operating systems (Windows 10, Ubuntu 20.04, macOS Catalina). Using the latest releases of each environment, the authors evaluate both linear‑algebraic and statistical functions against certified reference values supplied by the National Institute of Standards and Technology (NIST).

For linear algebra, randomly generated integer and floating‑point matrices ranging from 3 × 3 to 10 × 10 are processed to compute determinants, eigenvalues, and eigenvectors. Results are compared to NIST’s high‑precision benchmarks, and the number of correct significant digits is recorded. In the statistical domain, the study examines basic univariate statistics (mean, standard deviation, first‑lag autocorrelation), simple linear regression coefficients, and extreme quantiles of four probability distributions (normal, exponential, beta, chi‑square). Again, certified NIST values serve as the ground truth.

Accuracy is quantified by counting the matching significant digits after accounting for both absolute and relative error, effectively measuring how many of the ~15‑digit IEEE‑754 double‑precision mantissa are reliable. The authors also perform a Monte‑Carlo sensitivity analysis by adding ±1 × 10⁻⁶ perturbations to input data, thereby gauging the stability of each platform’s results under small numerical noise.

Overall, the three platforms deliver comparable performance for most routine tasks, achieving 14–15 correct digits in means, standard deviations, regression coefficients, and well‑conditioned eigenvalue problems. MatLab consistently yields the highest fidelity, especially for determinants of large integer matrices and for extreme quantiles, where Octave and Scilab sometimes drop to 13–14 correct digits. Scilab shows a noticeable decline in determinant accuracy for matrices larger than 8 × 8, likely due to differences in pivoting strategies during LU decomposition. Octave’s eigenvalue routine returns values in ascending order, requiring post‑processing to align with reference data.

The Monte‑Carlo experiments reveal that all three environments lose roughly 0.5–1.2 significant digits when inputs are perturbed, with the loss correlating strongly with the condition number of the problem. High‑condition‑number matrices (cond > 10¹²) cause the greatest variability, particularly in Scilab.

The authors conclude that MatLab, Octave, and Scilab are all suitable for standard scientific and engineering modeling, but MatLab offers the most robust performance for high‑precision or ill‑conditioned problems. Octave and Scilab remain cost‑effective alternatives, provided users verify critical results with independent checks or higher‑precision tools.


Comments & Academic Discussion

Loading comments...

Leave a Comment