The Python requests module is simple and elegant, but one thing bugs me. It is possible to get a requests.exceptions.ConnectionError with a message like:
Max retries exceeded with url: ...
This implies that requests can attempt to access the data several times, but there is not a single mention of this possibility anywhere in the docs. Looking at the source code, I didn't find any place where I could alter the default (presumably 0) value.
So is it possible to somehow set the maximum number of retries for requests?
This question is related to: python, python-requests
Be careful: Martijn Pieters's answer isn't suitable for requests 1.2.1+. You can't set max_retries globally without patching the library.
You can do this instead:

import requests
from requests.adapters import HTTPAdapter

s = requests.Session()
# Retry up to 5 times on connection failures to these hosts.
s.mount('http://www.github.com', HTTPAdapter(max_retries=5))
s.mount('https://www.github.com', HTTPAdapter(max_retries=5))
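The adapter is used for any URL that starts with the mounted prefix, so mounting on the bare scheme covers every host. A minimal sketch of that (the GitHub API URL is just an example); note that an integer max_retries only retries failed connections (DNS lookups, socket connects, connection timeouts), never requests whose response came back with an error status:

import requests
from requests.adapters import HTTPAdapter

s = requests.Session()
# Every https:// URL now goes through the retrying adapter.
s.mount('https://', HTTPAdapter(max_retries=5))

r = s.get('https://api.github.com/events', timeout=5)
print(r.status_code)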
A cleaner way to gain more control might be to package the retry logic into a function and make that function retriable using a decorator, whitelisting the exceptions that should trigger a retry.
I have created the same here: http://www.praddy.in/retry-decorator-whitelisted-exceptions/
Reproducing the code from that link:
import functools
import time

from urllib3.exceptions import ConnectTimeoutError

def retry(exceptions, delay=0, times=2):
    """
    A decorator for retrying a function call with a specified delay
    in case of a set of exceptions.

    Parameter List
    --------------
    :param exceptions: A tuple of all exceptions that need to be caught for retry,
                       e.g. retry(exceptions=(Timeout, ReadTimeout))
    :param delay: Amount of delay (seconds) needed between successive retries.
    :param times: Number of times the function should be retried.
    """
    def outer_wrapper(function):
        @functools.wraps(function)
        def inner_wrapper(*args, **kwargs):
            final_excep = None
            for counter in range(times):  # xrange on Python 2
                if counter > 0:
                    time.sleep(delay)
                final_excep = None
                try:
                    return function(*args, **kwargs)
                except exceptions as e:
                    final_excep = e  # or log it
            if final_excep is not None:
                raise final_excep
        return inner_wrapper
    return outer_wrapper

@retry(exceptions=(TimeoutError, ConnectTimeoutError), delay=0, times=3)
def call_api():
    # The body of the API call goes here (elided in the original post).
    ...
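For completeness, a minimal sketch of driving the decorated function, reusing the names from the snippet above:

try:
    call_api()
except (TimeoutError, ConnectTimeoutError) as e:
    # All three attempts failed; the decorator re-raised the last exception.
    print('giving up:', e)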
This will not only change max_retries but also enable a backoff strategy which makes requests to all http:// addresses sleep for a period of time before retrying (up to a total of 5 times):
import requests
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter

s = requests.Session()
retries = Retry(total=5,
                backoff_factor=0.1,
                status_forcelist=[500, 502, 503, 504])
s.mount('http://', HTTPAdapter(max_retries=retries))

s.get('http://httpstat.us/500')
As per the documentation for Retry: if the backoff_factor is 0.1, then sleep() will sleep for [0.1s, 0.2s, 0.4s, ...] between retries. It will also force a retry if the status code returned is 500, 502, 503 or 504.
Various other options to Retry allow for more granular control (a combined sketch follows the list):

- total – Total number of retries to allow.
- connect – How many connection-related errors to retry on.
- read – How many times to retry on read errors.
- redirect – How many redirects to perform.
- method_whitelist – Set of uppercased HTTP method verbs that we should retry on.
- status_forcelist – A set of HTTP status codes that we should force a retry on.
- backoff_factor – A backoff factor to apply between attempts.
- raise_on_redirect – Whether, if the number of redirects is exhausted, to raise a MaxRetryError, or to return a response with a response code in the 3xx range.
- raise_on_status – Similar meaning to raise_on_redirect: whether we should raise an exception, or return a response, if the status falls in the status_forcelist range and retries have been exhausted.

NB: raise_on_status is relatively new, and has not made it into a release of urllib3 or requests yet. The raise_on_status keyword argument appears to have made it into the standard library at most in Python version 3.6.
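As an illustration, a hedged sketch combining several of these options (the numbers are arbitrary and would need tuning for a real service):

from urllib3.util.retry import Retry

retries = Retry(
    total=10,               # overall cap across all retry types
    connect=5,              # connection-related errors
    read=2,                 # read errors
    redirect=5,             # how many redirects to follow
    status_forcelist=[500, 502, 503, 504],
    backoff_factor=0.5,     # sleeps grow as 0.5s, 1s, 2s, ...
)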
To make requests retry on specific HTTP status codes, use status_forcelist. For example, status_forcelist=[503] will retry on status code 503 (service unavailable).
By default, the retry only fires for these conditions:

- Could not get a connection from the pool.
- TimeoutError
- HTTPException raised (from http.client in Python 3 else httplib). This seems to be low-level HTTP exceptions, like a URL or protocol not formed correctly.
- SocketError
- ProtocolError
Notice that these are all exceptions that prevent a regular HTTP response from being received. If any regular response is generated, no retry is done. Without using the status_forcelist, even a response with status 500 will not be retried.
To make it behave in a manner which is more intuitive for working with a remote API or web server, I would use the above code snippet, which forces retries on statuses 500, 502, 503 and 504, all of which are not uncommon on the web and (possibly) recoverable given a big enough backoff period.
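When such a Retry finally gives up (for example because the server keeps answering 500), recent versions of requests surface urllib3's MaxRetryError as a requests.exceptions.RetryError, so the caller can catch it. A minimal sketch, reusing the snippet above:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

s = requests.Session()
retries = Retry(total=5, backoff_factor=0.1,
                status_forcelist=[500, 502, 503, 504])
s.mount('http://', HTTPAdapter(max_retries=retries))

try:
    # httpstat.us/500 always answers 500, so the retries get exhausted.
    s.get('http://httpstat.us/500')
except requests.exceptions.RetryError as e:
    print('gave up after retrying:', e)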
EDITED: Import the Retry class directly from urllib3.
After struggling a bit with some of the answers here, I found a library called backoff that worked better for my situation. A basic example:
import backoff
import requests

@backoff.on_exception(
    backoff.expo,
    requests.exceptions.RequestException,
    max_tries=5,
    giveup=lambda e: e.response is not None and e.response.status_code < 500
)
def publish(self, data):
    # url is assumed to be defined elsewhere (e.g. on the enclosing class);
    # giveup stops retrying on 4xx responses, while 5xx responses and
    # connection errors (no response at all) are retried.
    r = requests.post(url, timeout=10, json=data)
    r.raise_for_status()
I'd still recommend giving the library's native functionality a shot, but if you run into any problems or need broader control, backoff is an option.
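If you prefer a self-contained function over a method, a minimal sketch along the same lines (the GitHub URL is only a placeholder; max_time additionally caps the total time spent across all attempts):

import backoff
import requests

@backoff.on_exception(backoff.expo,
                      requests.exceptions.RequestException,
                      max_tries=5,
                      max_time=60)  # also give up after 60 seconds overall
def get_events():
    # Placeholder endpoint, used only for illustration.
    r = requests.get('https://api.github.com/events', timeout=10)
    r.raise_for_status()
    return r.json()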