How to deal with a 'ZeroDivisionError: float division by zero' error in my gaussian elimination code?


I have code for Gaussian elimination that works for most matrices. However, some matrices, when run through the code, end up with an all-zero last row. Obviously you don't want that; you want a 1 in the last element of the diagonal. What my code then does is try dividing 0 by itself to get 1, which I think is where the 'division by zero' error comes from. How do I make it so that when it encounters a 0 in the last diagonal and tries dividing by 0, it skips that step and still gets the answer? I'm not sure what to do after that. The test matrix I'm using is this:

matrix = [[4, 0, -1, 0], [10, 1, -4, 0], [0, 2, -3, 0]]. 3 rows and 4 columns.

For this code, the matrix A would be:

[[4, 0, -1], [10, 1, -4], [0, 2, -3]]

And the vector b would be: [0, 0, 0]

And the code would augment the vector to the matrix.
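(For context: the reason the last row becomes all zeros is that this particular coefficient matrix is singular, so elimination must produce a zero pivot somewhere. A quick sketch to confirm this, using a small hand-rolled 3x3 determinant rather than any library:)

```python
# The coefficient matrix from the question; if its determinant is 0,
# elimination is guaranteed to hit a zero pivot.
A = [[4, 0, -1], [10, 1, -4], [0, 2, -3]]

def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(A))  # 0 -> A is singular
```

Since b = [0, 0, 0] and A is singular, the system has infinitely many solutions rather than a unique one, which is why no amount of row swapping can produce a nonzero last pivot.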

from fractions import Fraction as frac

# Matrix input
R = int(input("Enter the number of rows: "))
C = int(input("Enter the number of columns: "))
# Initialize matrix
matrix = []
print("Enter the entries row-wise:")

# For user input
for i in range(R):      # Loop for rows
    a = []
    for j in range(C):      # Loop for columns
        a.append(int(input()))
    matrix.append(a)

b = list(map(int, input("Enter your vector, separated by spaces: ").split()))

# Printing matrix
for i in range(R):
    for j in range(C):
        print(matrix[i][j], end=" ")
    print()

A = matrix


def gaussian_elimination(A, b):
    n = len(A)
    # Augment matrix A with vector b
    Ab = [A[i] + [b[i]] for i in range(n)]

    # Elimination phase
    for i in range(n):
        # Attempted fix: skip this column if the diagonal entry is 0
        # (note this also skips the pivot search below)
        if Ab[i][i] == 0:
            continue

        # Find row with largest pivot
        max_row = i
        for j in range(i + 1, n):
            if abs(Ab[j][i]) > abs(Ab[max_row][i]):
                max_row = j

        # Swap current row with row containing largest pivot
        Ab[i], Ab[max_row] = Ab[max_row], Ab[i]

        # Eliminate entries below pivot
        for j in range(i + 1, n):
            factor = Ab[j][i] / Ab[i][i]
            for k in range(i + 1, n + 1):
                Ab[j][k] -= factor * Ab[i][k]
            Ab[j][i] = 0

    # Back substitution phase
    x = [0] * n
    for i in range(n - 1, -1, -1):
        x[i] = Ab[i][n]
        for j in range(i + 1, n):
            x[i] -= Ab[i][j] * x[j]
        x[i] /= Ab[i][i]


    # Turning decimals into fractions if needed
    sol = {}
    for k in range(len(x)):
        xStr = str(x[k])
        xStrFrac = frac(xStr).limit_denominator()
        sol[f"x{k + 1}"] = str(xStrFrac)

    return sol


sol = gaussian_elimination(A, b)

print(sol)

I tried making it so that if the code encountered a 0 in the last diagonal, it would continue and not try any elementary row operations. I'm pretty sure I didn't write the code for this correctly, as the output stayed the same.
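(One possible direction, sketched here as an assumption about what you want rather than a definitive fix: the `continue` in the elimination loop doesn't help because the division happens later, in back substitution. You could instead check each pivot there before dividing. A zero pivot with a zero right-hand side means that variable is free, so there are infinitely many solutions and you can pick any value for it, e.g. 0; a zero pivot with a nonzero right-hand side means the system has no solution at all. The `back_substitute` helper below is my name, not part of your code:)

```python
def back_substitute(Ab, n, eps=1e-12):
    # Back substitution that checks each pivot instead of blindly dividing.
    # Zero pivot + zero RHS    -> free variable; we pick 0 as one solution.
    # Zero pivot + nonzero RHS -> the system is inconsistent.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        rhs = Ab[i][n] - sum(Ab[i][j] * x[j] for j in range(i + 1, n))
        if abs(Ab[i][i]) < eps:
            if abs(rhs) < eps:
                x[i] = 0.0  # free variable: choose 0 arbitrarily
            else:
                raise ValueError("Inconsistent system: no solution")
        else:
            x[i] = rhs / Ab[i][i]
    return x

# The question's matrix after elimination ends with an all-zero last row,
# so x3 is free; this returns one particular solution instead of crashing.
Ab = [[10.0, 1.0, -4.0, 0.0],
      [0.0, 2.0, -3.0, 0.0],
      [0.0, 0.0, 0.0, 0.0]]
print(back_substitute(Ab, 3))  # [0.0, 0.0, 0.0]
```

For your homogeneous b = [0, 0, 0], this would report the trivial solution x1 = x2 = x3 = 0, which is one valid answer out of infinitely many; if you need the full solution set, you'd have to express the free variable symbolically instead.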
