I'm trying to check some endpoints by looping over URLs in a Python script using the requests module.
I have a simple function I call to check each endpoint:
import requests

# Return endpoint status
def get_endpoint_status(url):
    response = requests.get(
        url,
        headers=headers,
        proxies=proxies,
        timeout=10
    )
    return response
And I have main code like this that loops over the environment hostnames:
# Main loop
for environment in environments_list:
    environment_hostname = environment[0]
    # Get endpoint status
    r = get_endpoint_status('https://' + environment_hostname + endpoint)
    # If good
    if r.status_code == 200:
        print(r.status_code)
        print(r.url)
    # If not good
    else:
        print("Failed")
        print(r.status_code)
        print(r.url)
I only care about 200 versus fail, so any HTTP code that is not 200 counts as a fail.
This script works fine when the endpoints return valid http codes.
But sometimes other things happen, like a request timeout or a connection failure. Then the script crashes and exits, and the results output stops.
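To give a concrete idea, a call like this is the kind of thing that kills the script (the hostname here is made up just for illustration; the real ones come from environments_list):

import requests

# Made-up, unreachable hostname just to show the failure mode.
# Instead of returning a response, requests.get() blows up with a long
# traceback and the whole script exits, so the rest of the loop never runs.
r = requests.get('https://does-not-exist.example.invalid/', timeout=10)
print(r.status_code)  # never reached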
I know I need to add some error handling, but I'm not sure what to try to catch or where to catch it?
- Should I be catching a failure from requests itself, or does this failure come from the OS networking layer?
- If there is a timeout or a connection failure of some kind and I catch it, can I just treat it as an HTTP 500 status code and carry on with the script? (I've put a rough sketch of what I mean at the bottom of this post.)
Sorry for all the questions; I'm not a Python developer, so this is all quite new to me.
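For reference, here is roughly the error handling I was imagining, so you can see where my head is at. I guessed at requests.exceptions.RequestException from the requests docs, and returning None (rather than faking a 500) is just my placeholder idea, so please correct me if this is the wrong approach. The headers, proxies, and URL here are placeholders, not my real values:

import requests

headers = {}   # placeholder, my real script sets real headers
proxies = {}   # placeholder, my real script sets real proxies

# Rough idea: catch whatever requests raises on timeouts/connection
# problems and treat it as a failure, so the loop can carry on
# instead of the whole script exiting.
def get_endpoint_status(url):
    try:
        return requests.get(url, headers=headers, proxies=proxies, timeout=10)
    except requests.exceptions.RequestException:
        # No real response object here, so return None and let the
        # caller treat it like any non-200 result.
        return None

r = get_endpoint_status('https://example.com/')   # placeholder URL
if r is not None and r.status_code == 200:
    print(r.status_code)
    print(r.url)
else:
    print("Failed")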