Numerical application and Turbo C program using the Gauss-Jordan Method
The article presents the general notions and the algorithm of the Gauss-Jordan method. A worked example is given, and a Turbo C program illustrates the method. We conclude that the method yields the determinant through simple calculations while reducing rounding errors.
Research Summary
The paper provides a comprehensive exposition of the Gauss-Jordan elimination method, focusing on its dual capability to compute both the inverse of a matrix and its determinant in a single systematic procedure. After a brief introduction that situates the method within the broader context of linear-algebra solvers, the authors outline the theoretical foundations: augmentation of the original matrix with the identity matrix, selection of a non-zero pivot element, row swapping (partial pivoting) to avoid division by near-zero values, scaling of the pivot row to make the pivot equal to one, and elimination of the pivot column from all other rows. The algorithm is presented in clear pseudocode, emphasizing that the three nested loops together give an overall computational complexity of O(n³).
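The steps outlined above can be sketched in C as follows. This is a minimal illustration, not the paper's own source code: the fixed size N, the tolerance, and the function name are assumptions made for the example.

```c
/* Minimal sketch of Gauss-Jordan inversion: augment with the identity,
   pivot partially, scale the pivot row, eliminate the pivot column. */
#include <math.h>

#define N 3  /* illustrative fixed size; the paper's code sizes differ */

/* Invert a[N][N] into inv[N][N] (a is destroyed); returns 0 if singular. */
int gauss_jordan_inverse(double a[N][N], double inv[N][N])
{
    int i, j, k, pivot_row;
    double pivot, factor, tmp;

    /* Augmentation step: inv starts as the identity matrix. */
    for (i = 0; i < N; i++)
        for (j = 0; j < N; j++)
            inv[i][j] = (i == j) ? 1.0 : 0.0;

    for (k = 0; k < N; k++) {
        /* Partial pivoting: pick the largest |a[i][k]| at or below row k. */
        pivot_row = k;
        for (i = k + 1; i < N; i++)
            if (fabs(a[i][k]) > fabs(a[pivot_row][k]))
                pivot_row = i;
        if (fabs(a[pivot_row][k]) < 1e-12)
            return 0;  /* no usable pivot: matrix is singular */
        if (pivot_row != k) {
            for (j = 0; j < N; j++) {
                tmp = a[k][j];   a[k][j] = a[pivot_row][j];   a[pivot_row][j] = tmp;
                tmp = inv[k][j]; inv[k][j] = inv[pivot_row][j]; inv[pivot_row][j] = tmp;
            }
        }
        /* Scale the pivot row so the pivot equals one. */
        pivot = a[k][k];
        for (j = 0; j < N; j++) { a[k][j] /= pivot; inv[k][j] /= pivot; }
        /* Eliminate the pivot column from every other row. */
        for (i = 0; i < N; i++) {
            if (i == k) continue;
            factor = a[i][k];
            for (j = 0; j < N; j++) {
                a[i][j]   -= factor * a[k][j];
                inv[i][j] -= factor * inv[k][j];
            }
        }
    }
    return 1;
}
```

Because the elimination clears the pivot column from *all* rows, not just those below, no back-substitution pass is needed: when the loop ends, `a` has been reduced to the identity and `inv` holds the inverse.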
A key contribution of the work is the explicit treatment of determinant calculation within the Gauss-Jordan framework. By tracking the number of row interchanges performed during pivoting and multiplying the diagonal entries of the resulting upper-triangular matrix, the determinant is obtained as the product of the pivots adjusted for the sign change introduced by each row swap. This approach eliminates the need for separate cofactor expansion or LU decomposition, thereby reducing computational overhead while preserving accuracy.
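The bookkeeping just described can be shown in a short C sketch: forward elimination with partial pivoting, flipping the sign of an accumulator once per row interchange and multiplying in each pivot. The function name and tolerance are illustrative assumptions, not taken from the paper.

```c
/* Determinant via elimination: det = (-1)^(number of swaps) * product of pivots. */
#include <math.h>

#define N 3  /* illustrative fixed size */

double determinant(double a[N][N])  /* a is destroyed */
{
    int i, j, k, pivot_row;
    double det = 1.0, factor, tmp;

    for (k = 0; k < N; k++) {
        /* Partial pivoting, as in the elimination itself. */
        pivot_row = k;
        for (i = k + 1; i < N; i++)
            if (fabs(a[i][k]) > fabs(a[pivot_row][k]))
                pivot_row = i;
        if (fabs(a[pivot_row][k]) < 1e-12)
            return 0.0;               /* a zero pivot column means det = 0 */
        if (pivot_row != k) {
            for (j = 0; j < N; j++) {
                tmp = a[k][j]; a[k][j] = a[pivot_row][j]; a[pivot_row][j] = tmp;
            }
            det = -det;               /* each row swap flips the sign */
        }
        det *= a[k][k];               /* accumulate the pivot */
        /* Clear the column below the pivot (upper-triangular reduction). */
        for (i = k + 1; i < N; i++) {
            factor = a[i][k] / a[k][k];
            for (j = k; j < N; j++)
                a[i][j] -= factor * a[k][j];
        }
    }
    return det;
}
```

Note that the pivots must be multiplied in *before* any row is normalized to 1; in a full Gauss-Jordan pass the same accumulator is simply updated at the moment each pivot is selected.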
To illustrate the method, the authors work through a detailed example using a 3 × 3 real matrix. They present the initial matrix, each intermediate augmented matrix after pivot normalization and elimination, and the final results: the inverse matrix and the determinant. Numerical values are displayed with six decimal places, allowing a direct comparison between the Gauss-Jordan outcome and the result obtained by conventional Gaussian elimination followed by back-substitution. The comparison demonstrates that the Gauss-Jordan process yields a determinant that matches the theoretical value and an inverse whose entries exhibit significantly lower cumulative rounding error.
The implementation is carried out in Turbo C, a legacy development environment that imposes constraints such as static memory allocation and limited standard library support. The source code is organized into several functions: one for reading the matrix from the user, one for performing the Gauss-Jordan elimination, auxiliary routines for row swapping, scaling, and elimination, and a final routine for printing the inverse and determinant. Although the primary data type is float, the program promotes intermediate calculations to double to improve numerical stability. The authors discuss how the static two-dimensional array is sized to accommodate matrices up to a modest dimension (e.g., 10 × 10) and how the program handles input validation, zero-pivot detection, and error messages.
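The storage conventions described above might look like the following sketch. The paper's actual identifiers are not reproduced here; `MAXN`, `normalize_row`, and the zero-pivot tolerance are assumptions made for illustration.

```c
/* Static float storage sized for a maximum dimension, with intermediate
   arithmetic promoted to double -- the pattern described for the Turbo C code. */
#include <math.h>
#include <stdio.h>

#define MAXN 10                     /* maximum supported matrix dimension */

static float mat[MAXN][MAXN];       /* static allocation, Turbo C style */

/* Divide row k of an n x n matrix by its pivot; returns 0 on a zero pivot. */
int normalize_row(float m[MAXN][MAXN], int n, int k)
{
    double pivot = (double)m[k][k]; /* promote to double before dividing */
    int j;

    if (fabs(pivot) < 1e-9) {
        printf("Error: zero pivot at row %d\n", k + 1);
        return 0;                   /* caller should try a row swap or abort */
    }
    for (j = 0; j < n; j++)
        m[k][j] = (float)((double)m[k][j] / pivot);
    return 1;
}
```

Keeping the array static sidesteps Turbo C's limited heap facilities, at the cost of a hard upper bound on the matrix size; promoting each division to double recovers some of the precision lost by storing entries as float.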
Performance measurements are reported for matrices of size 3, 5, and 10. Execution times remain negligible for the smaller cases, confirming that the algorithm is suitable for educational purposes and low-resource hardware. However, as the matrix dimension grows, the O(n³) nature of the algorithm leads to a noticeable increase in runtime, highlighting the method's limitations for large-scale scientific computing.
In the concluding section, the authors argue that the Gauss-Jordan method, despite its age, remains a valuable pedagogical tool because it provides a transparent, step-by-step view of linear system solution, matrix inversion, and determinant evaluation. The Turbo C implementation demonstrates that even on outdated platforms, the method can be realized without sophisticated numerical libraries. The paper also acknowledges the method's drawbacks, chiefly its computational inefficiency compared to modern LU-based solvers and the lack of built-in support for sparse or structured matrices. Future work is suggested in three directions: (1) integration of higher-precision data types or arbitrary-precision libraries to further mitigate rounding errors; (2) exploration of parallelization strategies (e.g., OpenMP or GPU kernels) to accelerate the elimination steps; and (3) adaptation of the algorithm to handle special matrix classes (symmetric, banded, or sparse) through tailored pivoting and storage schemes. Overall, the study reinforces the relevance of the Gauss-Jordan method as both an instructional example and a baseline algorithm against which more advanced techniques can be benchmarked.