The code for matrix inverse is very similar to the code for solving simultaneous equations. Added effort is needed to find the maximum pivot element, and there must be both row and column interchanges. An example showing the error increasing with the size of the matrix, on a difficult matrix, appears below. Note that for an ill-conditioned 16 by 16 matrix, results using 64-bit IEEE floating point arithmetic may be useless.

Given a matrix A, compute the inverse AI, then check that the product II = A * AI is approximately the identity matrix I. This check is useful and possibly very important. The check used for this case study was to sum the absolute values of the elements of II - I, printing both the sum and the sum divided by n*n, the number of elements in the matrix.

This case study uses the classic, difficult-to-invert variation of the Hilbert matrix, in floating point format, shown here as rational numbers:

    | 1/2  1/3  1/4  1/5 |     using i for the column index and
    | 1/3  1/4  1/5  1/6 |     j for the row index,
    | 1/4  1/5  1/6  1/7 |     the (i,j) element is 1/(i+j)
    | 1/5  1/6  1/7  1/8 |     as a floating point number

A few solutions are (A followed by AI):

    | 1/2 |   n=1   | 2 |

    | 1/2  1/3 |   n=2   |  18  -24 |
    | 1/3  1/4 |         | -24   36 |

    | 1/2  1/3  1/4  1/5 |   n=4   |   200   -1200    2100   -1120 |
    | 1/3  1/4  1/5  1/6 |         | -1200    8100  -15120    8400 |
    | 1/4  1/5  1/6  1/7 |         |  2100  -15120   29400  -16800 |
    | 1/5  1/6  1/7  1/8 |         | -1120    8400  -16800    9800 |

    n=8:

    | 1/2  1/3  1/4  1/5  1/6  1/7  1/8  1/9  |
    | 1/3  1/4  1/5  1/6  1/7  1/8  1/9  1/10 |
    | 1/4  1/5  1/6  1/7  1/8  1/9  1/10 1/11 |
    | 1/5  1/6  1/7  1/8  1/9  1/10 1/11 1/12 |
    | 1/6  1/7  1/8  1/9  1/10 1/11 1/12 1/13 |
    | 1/7  1/8  1/9  1/10 1/11 1/12 1/13 1/14 |
    | 1/8  1/9  1/10 1/11 1/12 1/13 1/14 1/15 |
    | 1/9  1/10 1/11 1/12 1/13 1/14 1/15 1/16 |

    |    2592     -60480     498960    -1995840     4324320     -5189184     3243240   -823680 |
    |  -60480    1587600  -13970880    58212000  -129729600    158918760  -100900800  25945920 |
    |  498960  -13970880  128066400  -548856000  1248647400  -1553872320   998917920 -259459200 |
    | -1995840   58212000 -548856000  2401245000 -5549544000   6992425440 -4540536000 1189188000 |
    |  4324320 -129729600 1248647400 -5549544000 12985932960 -16527551040 10821610800 -2854051200 |
    | -5189184  158918760 -1553872320 6992425440 -16527551040 21210357168 -13984850880 3710266560 |
    |  3243240 -100900800  998917920 -4540536000 10821610800 -13984850880  9275666400 -2473511040 |
    |  -823680   25945920 -259459200  1189188000 -2854051200   3710266560 -2473511040  662547600 |

Note the exponential growth of the size of the numbers in the inverse.

One measure of the difficulty of inverting a matrix is the size of the largest available diagonal (pivot) element at each step of the inversion process. The magnitude of the largest element of the inverse will be approximately of the order of the reciprocal of the smallest of these largest diagonals. The smallest of the largest diagonal for a few cases:

    n =   2   .277E-1
    n =   4   .340E-4
    n =   8   .770E-10
    n =  16   .560E-22
    n =  32   .358E-46
    n =  64   .645E-95
    n = 128   .178E-192
    n = 256   failed with 200-digit multiple precision

The average error, as computed for various precisions, is:

          7-digit    15-digit   166-bit    332-bit     664-bit     MatLab
          IEEE       IEEE       mpf        mpf         mpf         IEEE
      n   32-bit     64-bit     50-digit   100-digit   200-digit   64-bit
      2   1.29E-7    4.61E-17   1.99E-59   1.36E-107   6.37E-204   0.00
      4   7.84E-5    1.37E-13   1.91E-58   3.05E-106   6.12E-203   2.39E-13
      8   3.83E0     9.43E-8    4.44E-50   1.81E-98    3.24E-194   7.31E-7
     16   1.23E1     3.27E-1    8.13E-39   2.77E-87    8.56E-183   2.37E2
     32   5.20E0     3.98E0     4.72E-14   4.45E-62    1.19E-158   1.46E6
     64   6.13E0     1.31E1     1.01E8     2.20E-13    1.20E-109   2.55E8
    128   8.39E0     5.23E0     7.79E7     3.46E8      3.83E-12    2.27E10
    256   1.14E3     1.67E1     1.76E8     8.51E7      1.05E8      8.38E11

Errors bigger than about 1E-5 are very deceiving in the first two columns: they indicate failure to invert. MatLab does indicate failure for n=16 and larger; the other codes had the matrix-singular error suppressed. A reasonable conclusion, for this matrix, is that inverting an n by n matrix needs more than n bits of floating point precision to get a reliable inverse, and roughly twice n bits to get good results.
Before you panic, notice the results for the same test in MatLab on this hard-to-invert matrix versus a pseudo-random matrix:

    n=  2, avgerr=0            , rnderr=4.16334e-017
    n=  4, avgerr=2.39808e-013 , rnderr=1.04246e-016
    n=  8, avgerr=7.31534e-007 , rnderr=7.86263e-016
    n= 16, avgerr=237.479      , rnderr=3.57e-016
    n= 32, avgerr=1.46377e+006 , rnderr=7.5899e-016
    n= 64, avgerr=2.55034e+008 , rnderr=2.34115e-015
    n=128, avgerr=2.2773e+010  , rnderr=6.70795e-015
    n=256, avgerr=8.38903e+011 , rnderr=1.9137e-014

Well conditioned matrices may be inverted for n in the range 10,000 to 20,000 with IEEE 64-bit floating point. Many large matrices are sparse, having many zero elements, and may have only bands of non-zero elements. Unfortunately, the inverse of a sparse matrix is generally not sparse, thus sparse matrix storage techniques may actually be slower.

The MatLab code is, of course, the shortest (stripped here):

    n=1;
    for r=1:8
      n=2*n;
      A=zeros(n,n);
      B=rand(n,n);
      for i=1:n
        for j=1:n
          A(i,j)=1.0/(i+j);
        end
      end
      avgerr=sum(sum(abs(A*inv(A)-eye(n,n))))/(n*n)
      rnderr=sum(sum(abs(B*inv(B)-eye(n,n))))/(n*n)
    end

The detailed code and results are:

    invrnd.m              actual MatLab code
    invrnd_m.out          file output
    invrnd_mm.out         partial screen output
    inverse.f90           basic inverse
    test_inverse.f90      test program
    test_inverse_f90.out  output

Extracted from test_inverse_f90.out is:

    initializing big matrix, n= 2 , n*n= 4
    sum of error= 1.84748050191530E-16 , avg error= 4.61870125478825E-17
    initializing big matrix, n= 4 , n*n= 16
    sum of error= 2.19971263426544E-12 , avg error= 1.37482039641590E-13
    initializing big matrix, n= 8 , n*n= 64
    sum of error= 0.00000604139304982709 , avg error= 9.43967664035483E-8
    initializing big matrix, n= 16 , n*n= 256
    sum of error= 83.9630735209012 , avg error= 0.327980755941020
    initializing big matrix, n= 32 , n*n= 1024
    sum of error= 4079.56590417946 , avg error= 3.98395107830025
    initializing big matrix, n= 64 , n*n= 4096
    sum of error= 53735.8765782488 , avg error= 13.1191104927365
    initializing big matrix, n= 128 , n*n= 16384
    sum of error= 85784.2643647822 , avg error= 5.23585597929579
    initializing big matrix, n= 256 , n*n= 65536
    sum of error= 1097119.16168229 , avg error= 16.7407098645368
    initializing big matrix, n= 512 , n*n= 262144
    sum of error= 1.36281435213093E+7 , avg error= 51.9872418262837
    initializing big matrix, n= 1024 , n*n= 1048576
    sum of error= 1.24247404738082E+9 , avg error= 1184.91558778841

Very similar results from the C version:

    inverse.c  inverse.h  test_inverse.c  test_inverse.out

Similar results using float rather than double in the C version:

    inversef.c  inversef.h  test_inversef.c  test_inversef.out

Exploring results from the 50-digit multiple precision arithmetic version:

    mpf_inverse.c  mpf_inverse.h  test_mpf_inverse.c  test_mpf50_inverse.out

Changing 'digits' to 100-digit multiple precision arithmetic: test_mpf100_inverse.out
Changing 'digits' to 200-digit multiple precision arithmetic: test_mpf200_inverse.out
The 200-digit run with more output: test_mpf_inverse.out

Java version, double:

    inverse.java  test_inverse.java  test_inverse_java.out

Java version, 300-digit BigDecimal:

    Big_inverse.java  test_Big_inverse.java  test_Big_inverse_java.out