# Requests > This part of the documentation covers all the interfaces of Requests. For --- # Requests Documentation # Source: https://requests.readthedocs.io/en/latest/api/ # Path: api/ # Developer Interface¶ This part of the documentation covers all the interfaces of Requests. For parts where Requests depends on external libraries, we document the most important right here and provide links to the canonical documentation. ## Main Interface¶ All of Requests’ functionality can be accessed by these 7 methods. They all return an instance of the `Response` object. requests.request(_method_ , _url_ , _** kwargs_)[[source]](../_modules/requests/api/#request)¶ Constructs and sends a `Request`. Parameters: * **method** – method for the new `Request` object: `GET`, `OPTIONS`, `HEAD`, `POST`, `PUT`, `PATCH`, or `DELETE`. * **url** – URL for the new `Request` object. * **params** – (optional) Dictionary, list of tuples or bytes to send in the query string for the `Request`. * **data** – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the `Request`. * **json** – (optional) A JSON serializable Python object to send in the body of the `Request`. * **headers** – (optional) Dictionary of HTTP Headers to send with the `Request`. * **cookies** – (optional) Dict or CookieJar object to send with the `Request`. * **files** – (optional) Dictionary of `'name': file-like-objects` (or `{'name': file-tuple}`) for multipart encoding upload. `file-tuple` can be a 2-tuple `('filename', fileobj)`, 3-tuple `('filename', fileobj, 'content_type')` or a 4-tuple `('filename', fileobj, 'content_type', custom_headers)`, where `'content_type'` is a string defining the content type of the given file and `custom_headers` a dict-like object containing additional headers to add for the file. * **auth** – (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth. * **timeout** ([_float_](https://docs.python.org/3/library/functions.html#float "\(in Python v3.14\)") _or_[ _tuple_](https://docs.python.org/3/library/stdtypes.html#tuple "\(in Python v3.14\)")) – (optional) How many seconds to wait for the server to send data before giving up, as a float, or a [(connect timeout, read timeout)](../user/advanced/#timeouts) tuple. * **allow_redirects** ([_bool_](https://docs.python.org/3/library/functions.html#bool "\(in Python v3.14\)")) – (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to `True`. * **proxies** – (optional) Dictionary mapping protocol to the URL of the proxy. * **verify** – (optional) Either a boolean, in which case it controls whether we verify the server’s TLS certificate, or a string, in which case it must be a path to a CA bundle to use. Defaults to `True`. * **stream** – (optional) if `False`, the response content will be immediately downloaded. * **cert** – (optional) if String, path to ssl client cert file (.pem). If Tuple, (‘cert’, ‘key’) pair. Returns: `Response` object Return type: requests.Response Usage: >>> import requests >>> req = requests.request('GET', 'https://httpbin.org/get') >>> req requests.head(_url_ , _** kwargs_)[[source]](../_modules/requests/api/#head)¶ Sends a HEAD request. Parameters: * **url** – URL for the new `Request` object. * ****kwargs** – Optional arguments that `request` takes. If allow_redirects is not provided, it will be set to False (as opposed to the default `request` behavior). 
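For instance, a minimal sketch of that redirect behaviour, reusing the httpbin.org endpoints from the usage examples above (the specific redirect path is only illustrative):

```python
import requests

# head() defaults allow_redirects to False, unlike requests.request()/get()
r = requests.head('https://httpbin.org/redirect/1')
print(r.status_code)   # 302 -- the redirect is reported, not followed

r = requests.head('https://httpbin.org/redirect/1', allow_redirects=True)
print(r.status_code)   # 200 -- redirect followed when asked explicitly
print(r.history)       # [<Response [302]>]
```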
Returns: `Response` object Return type: requests.Response requests.get(_url_ , _params =None_, _** kwargs_)[[source]](../_modules/requests/api/#get)¶ Sends a GET request. Parameters: * **url** – URL for the new `Request` object. * **params** – (optional) Dictionary, list of tuples or bytes to send in the query string for the `Request`. * ****kwargs** – Optional arguments that `request` takes. Returns: `Response` object Return type: requests.Response requests.post(_url_ , _data =None_, _json =None_, _** kwargs_)[[source]](../_modules/requests/api/#post)¶ Sends a POST request. Parameters: * **url** – URL for the new `Request` object. * **data** – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the `Request`. * **json** – (optional) A JSON serializable Python object to send in the body of the `Request`. * ****kwargs** – Optional arguments that `request` takes. Returns: `Response` object Return type: requests.Response requests.put(_url_ , _data =None_, _** kwargs_)[[source]](../_modules/requests/api/#put)¶ Sends a PUT request. Parameters: * **url** – URL for the new `Request` object. * **data** – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the `Request`. * **json** – (optional) A JSON serializable Python object to send in the body of the `Request`. * ****kwargs** – Optional arguments that `request` takes. Returns: `Response` object Return type: requests.Response requests.patch(_url_ , _data =None_, _** kwargs_)[[source]](../_modules/requests/api/#patch)¶ Sends a PATCH request. Parameters: * **url** – URL for the new `Request` object. * **data** – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the `Request`. * **json** – (optional) A JSON serializable Python object to send in the body of the `Request`. * ****kwargs** – Optional arguments that `request` takes. Returns: `Response` object Return type: requests.Response requests.delete(_url_ , _** kwargs_)[[source]](../_modules/requests/api/#delete)¶ Sends a DELETE request. Parameters: * **url** – URL for the new `Request` object. * ****kwargs** – Optional arguments that `request` takes. Returns: `Response` object Return type: requests.Response ## Exceptions¶ _exception _requests.RequestException(_* args_, _** kwargs_)[[source]](../_modules/requests/exceptions/#RequestException)¶ There was an ambiguous exception that occurred while handling your request. _exception _requests.ConnectionError(_* args_, _** kwargs_)[[source]](../_modules/requests/exceptions/#ConnectionError)¶ A Connection error occurred. _exception _requests.HTTPError(_* args_, _** kwargs_)[[source]](../_modules/requests/exceptions/#HTTPError)¶ An HTTP error occurred. _exception _requests.TooManyRedirects(_* args_, _** kwargs_)[[source]](../_modules/requests/exceptions/#TooManyRedirects)¶ Too many redirects. _exception _requests.ConnectTimeout(_* args_, _** kwargs_)[[source]](../_modules/requests/exceptions/#ConnectTimeout)¶ The request timed out while trying to connect to the remote server. Requests that produced this error are safe to retry. _exception _requests.ReadTimeout(_* args_, _** kwargs_)[[source]](../_modules/requests/exceptions/#ReadTimeout)¶ The server did not send any data in the allotted amount of time. _exception _requests.Timeout(_* args_, _** kwargs_)[[source]](../_modules/requests/exceptions/#Timeout)¶ The request timed out. Catching this error will catch both `ConnectTimeout` and `ReadTimeout` errors. 
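As a rough sketch of how these timeout exceptions relate in practice (httpbin.org and the delay values here are only illustrative):

```python
import requests

url = 'https://httpbin.org/delay/5'   # responds after roughly 5 seconds

# Catch the two cases separately...
try:
    requests.get(url, timeout=(3.05, 1))   # (connect timeout, read timeout)
except requests.ConnectTimeout:
    print('could not connect in time')      # safe to retry
except requests.ReadTimeout:
    print('connected, but no data arrived in time')

# ...or catch requests.Timeout to handle both at once.
try:
    requests.get(url, timeout=1)
except requests.Timeout as exc:
    print('timed out:', exc)
```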
_exception _requests.JSONDecodeError(_* args_, _** kwargs_)[[source]](../_modules/requests/exceptions/#JSONDecodeError)¶ Couldn’t decode the text into json ## Request Sessions¶ _class _requests.Session[[source]](../_modules/requests/sessions/#Session)¶ A Requests session. Provides cookie persistence, connection-pooling, and configuration. Basic Usage: >>> import requests >>> s = requests.Session() >>> s.get('https://httpbin.org/get') Or as a context manager: >>> with requests.Session() as s: ... s.get('https://httpbin.org/get') auth¶ Default Authentication tuple or object to attach to `Request`. cert¶ SSL client certificate default, if String, path to ssl client cert file (.pem). If Tuple, (‘cert’, ‘key’) pair. close()[[source]](../_modules/requests/sessions/#Session.close)¶ Closes all adapters and as such the session cookies¶ A CookieJar containing all currently outstanding cookies set on this session. By default it is a `RequestsCookieJar`, but may be any other `cookielib.CookieJar` compatible object. delete(_url_ , _** kwargs_)[[source]](../_modules/requests/sessions/#Session.delete)¶ Sends a DELETE request. Returns `Response` object. Parameters: * **url** – URL for the new `Request` object. * ****kwargs** – Optional arguments that `request` takes. Return type: requests.Response get(_url_ , _** kwargs_)[[source]](../_modules/requests/sessions/#Session.get)¶ Sends a GET request. Returns `Response` object. Parameters: * **url** – URL for the new `Request` object. * ****kwargs** – Optional arguments that `request` takes. Return type: requests.Response get_adapter(_url_)[[source]](../_modules/requests/sessions/#Session.get_adapter)¶ Returns the appropriate connection adapter for the given URL. Return type: requests.adapters.BaseAdapter get_redirect_target(_resp_)¶ Receives a Response. Returns a redirect URI or `None` head(_url_ , _** kwargs_)[[source]](../_modules/requests/sessions/#Session.head)¶ Sends a HEAD request. Returns `Response` object. Parameters: * **url** – URL for the new `Request` object. * ****kwargs** – Optional arguments that `request` takes. Return type: requests.Response headers¶ A case-insensitive dictionary of headers to be sent on each `Request` sent from this `Session`. hooks¶ Event-handling hooks. max_redirects¶ Maximum number of redirects allowed. If the request exceeds this limit, a `TooManyRedirects` exception is raised. This defaults to requests.models.DEFAULT_REDIRECT_LIMIT, which is 30. merge_environment_settings(_url_ , _proxies_ , _stream_ , _verify_ , _cert_)[[source]](../_modules/requests/sessions/#Session.merge_environment_settings)¶ Check the environment and merge it with some settings. Return type: [dict](https://docs.python.org/3/library/stdtypes.html#dict "\(in Python v3.14\)") mount(_prefix_ , _adapter_)[[source]](../_modules/requests/sessions/#Session.mount)¶ Registers a connection adapter to a prefix. Adapters are sorted in descending order by prefix length. options(_url_ , _** kwargs_)[[source]](../_modules/requests/sessions/#Session.options)¶ Sends a OPTIONS request. Returns `Response` object. Parameters: * **url** – URL for the new `Request` object. * ****kwargs** – Optional arguments that `request` takes. Return type: requests.Response params¶ Dictionary of querystring data to attach to each `Request`. The dictionary values may be lists for representing multivalued query parameters. patch(_url_ , _data =None_, _** kwargs_)[[source]](../_modules/requests/sessions/#Session.patch)¶ Sends a PATCH request. Returns `Response` object. 
Parameters: * **url** – URL for the new `Request` object. * **data** – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the `Request`. * ****kwargs** – Optional arguments that `request` takes. Return type: requests.Response post(_url_ , _data =None_, _json =None_, _** kwargs_)[[source]](../_modules/requests/sessions/#Session.post)¶ Sends a POST request. Returns `Response` object. Parameters: * **url** – URL for the new `Request` object. * **data** – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the `Request`. * **json** – (optional) json to send in the body of the `Request`. * ****kwargs** – Optional arguments that `request` takes. Return type: requests.Response prepare_request(_request_)[[source]](../_modules/requests/sessions/#Session.prepare_request)¶ Constructs a `PreparedRequest` for transmission and returns it. The `PreparedRequest` has settings merged from the `Request` instance and those of the `Session`. Parameters: **request** – `Request` instance to prepare with this session’s settings. Return type: requests.PreparedRequest proxies¶ Dictionary mapping protocol or protocol and host to the URL of the proxy (e.g. {‘http’: ‘foo.bar:3128’, ‘http://host.name’: ‘foo.bar:4012’}) to be used on each `Request`. put(_url_ , _data =None_, _** kwargs_)[[source]](../_modules/requests/sessions/#Session.put)¶ Sends a PUT request. Returns `Response` object. Parameters: * **url** – URL for the new `Request` object. * **data** – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the `Request`. * ****kwargs** – Optional arguments that `request` takes. Return type: requests.Response rebuild_auth(_prepared_request_ , _response_)¶ When being redirected we may want to strip authentication from the request to avoid leaking credentials. This method intelligently removes and reapplies authentication where possible to avoid credential loss. rebuild_method(_prepared_request_ , _response_)¶ When being redirected we may want to change the method of the request based on certain specs or browser behavior. rebuild_proxies(_prepared_request_ , _proxies_)¶ This method re-evaluates the proxy configuration by considering the environment variables. If we are redirected to a URL covered by NO_PROXY, we strip the proxy configuration. Otherwise, we set missing proxy keys for this URL (in case they were stripped by a previous redirect). This method also replaces the Proxy-Authorization header where necessary. Return type: [dict](https://docs.python.org/3/library/stdtypes.html#dict "\(in Python v3.14\)") request(_method_ , _url_ , _params =None_, _data =None_, _headers =None_, _cookies =None_, _files =None_, _auth =None_, _timeout =None_, _allow_redirects =True_, _proxies =None_, _hooks =None_, _stream =None_, _verify =None_, _cert =None_, _json =None_)[[source]](../_modules/requests/sessions/#Session.request)¶ Constructs a `Request`, prepares it and sends it. Returns `Response` object. Parameters: * **method** – method for the new `Request` object. * **url** – URL for the new `Request` object. * **params** – (optional) Dictionary or bytes to be sent in the query string for the `Request`. * **data** – (optional) Dictionary, list of tuples, bytes, or file-like object to send in the body of the `Request`. * **json** – (optional) json to send in the body of the `Request`. * **headers** – (optional) Dictionary of HTTP Headers to send with the `Request`. 
* **cookies** – (optional) Dict or CookieJar object to send with the `Request`. * **files** – (optional) Dictionary of `'filename': file-like-objects` for multipart encoding upload. * **auth** – (optional) Auth tuple or callable to enable Basic/Digest/Custom HTTP Auth. * **timeout** ([_float_](https://docs.python.org/3/library/functions.html#float "\(in Python v3.14\)") _or_[ _tuple_](https://docs.python.org/3/library/stdtypes.html#tuple "\(in Python v3.14\)")) – (optional) How many seconds to wait for the server to send data before giving up, as a float, or a [(connect timeout, read timeout)](../user/advanced/#timeouts) tuple. * **allow_redirects** ([_bool_](https://docs.python.org/3/library/functions.html#bool "\(in Python v3.14\)")) – (optional) Set to True by default. * **proxies** – (optional) Dictionary mapping protocol or protocol and hostname to the URL of the proxy. * **hooks** – (optional) Dictionary mapping hook name to one event or list of events, event must be callable. * **stream** – (optional) whether to immediately download the response content. Defaults to `False`. * **verify** – (optional) Either a boolean, in which case it controls whether we verify the server’s TLS certificate, or a string, in which case it must be a path to a CA bundle to use. Defaults to `True`. When set to `False`, requests will accept any TLS certificate presented by the server, and will ignore hostname mismatches and/or expired certificates, which will make your application vulnerable to man-in-the-middle (MitM) attacks. Setting verify to `False` may be useful during local development or testing. * **cert** – (optional) if String, path to ssl client cert file (.pem). If Tuple, (‘cert’, ‘key’) pair. Return type: requests.Response resolve_redirects(_resp_ , _req_ , _stream =False_, _timeout =None_, _verify =True_, _cert =None_, _proxies =None_, _yield_requests =False_, _** adapter_kwargs_)¶ Receives a Response. Returns a generator of Responses or Requests. send(_request_ , _** kwargs_)[[source]](../_modules/requests/sessions/#Session.send)¶ Send a given PreparedRequest. Return type: requests.Response should_strip_auth(_old_url_ , _new_url_)¶ Decide whether Authorization header should be removed when redirecting stream¶ Stream response content default. trust_env¶ Trust environment settings for proxy configuration, default authentication and similar. verify¶ SSL Verification default. Defaults to True, requiring requests to verify the TLS certificate at the remote end. If verify is set to False, requests will accept any TLS certificate presented by the server, and will ignore hostname mismatches and/or expired certificates, which will make your application vulnerable to man-in-the-middle (MitM) attacks. Only set this to False for testing. ## Lower-Level Classes¶ _class _requests.Request(_method =None_, _url =None_, _headers =None_, _files =None_, _data =None_, _params =None_, _auth =None_, _cookies =None_, _hooks =None_, _json =None_)[[source]](../_modules/requests/models/#Request)¶ A user-created `Request` object. Used to prepare a `PreparedRequest`, which is sent to the server. Parameters: * **method** – HTTP method to use. * **url** – URL to send. * **headers** – dictionary of headers to send. * **files** – dictionary of {filename: fileobject} files to multipart upload. * **data** – the body to attach to the request. If a dictionary or list of tuples `[(key, value)]` is provided, form-encoding will take place. 
* **json** – json for the body to attach to the request (if files or data is not specified). * **params** – URL parameters to append to the URL. If a dictionary or list of tuples `[(key, value)]` is provided, form-encoding will take place. * **auth** – Auth handler or (user, pass) tuple. * **cookies** – dictionary or CookieJar of cookies to attach to this request. * **hooks** – dictionary of callback hooks, for internal usage. Usage: >>> import requests >>> req = requests.Request('GET', 'https://httpbin.org/get') >>> req.prepare() deregister_hook(_event_ , _hook_)¶ Deregister a previously registered hook. Returns True if the hook existed, False if not. prepare()[[source]](../_modules/requests/models/#Request.prepare)¶ Constructs a `PreparedRequest` for transmission and returns it. register_hook(_event_ , _hook_)¶ Properly register a hook. _class _requests.Response[[source]](../_modules/requests/models/#Response)¶ The `Response` object, which contains a server’s response to an HTTP request. _property _apparent_encoding¶ The apparent encoding, provided by the charset_normalizer or chardet libraries. close()[[source]](../_modules/requests/models/#Response.close)¶ Releases the connection back to the pool. Once this method has been called the underlying `raw` object must not be accessed again. _Note: Should not normally need to be called explicitly._ _property _content¶ Content of the response, in bytes. cookies¶ A CookieJar of Cookies the server sent back. elapsed¶ The amount of time elapsed between sending the request and the arrival of the response (as a timedelta). This property specifically measures the time taken between sending the first byte of the request and finishing parsing the headers. It is therefore unaffected by consuming the response content or the value of the `stream` keyword argument. encoding¶ Encoding to decode with when accessing r.text. headers¶ Case-insensitive Dictionary of Response Headers. For example, `headers['content-encoding']` will return the value of a `'Content-Encoding'` response header. history¶ A list of `Response` objects from the history of the Request. Any redirect responses will end up here. The list is sorted from the oldest to the most recent request. _property _is_permanent_redirect¶ True if this Response one of the permanent versions of redirect. _property _is_redirect¶ True if this Response is a well-formed HTTP redirect that could have been processed automatically (by `Session.resolve_redirects`). iter_content(_chunk_size =1_, _decode_unicode =False_)[[source]](../_modules/requests/models/#Response.iter_content)¶ Iterates over the response data. When stream=True is set on the request, this avoids reading the content at once into memory for large responses. The chunk size is the number of bytes it should read into memory. This is not necessarily the length of each item returned as decoding can take place. chunk_size must be of type int or None. A value of None will function differently depending on the value of stream. stream=True will read data as it arrives in whatever size the chunks are received. If stream=False, data is returned as a single chunk. If decode_unicode is True, content will be decoded using the best available encoding based on the response. iter_lines(_chunk_size =512_, _decode_unicode =False_, _delimiter =None_)[[source]](../_modules/requests/models/#Response.iter_lines)¶ Iterates over the response data, one line at a time. 
When stream=True is set on the request, this avoids reading the content at once into memory for large responses. Note This method is not reentrant safe. json(_** kwargs_)[[source]](../_modules/requests/models/#Response.json)¶ Decodes the JSON response body (if any) as a Python object. This may return a dictionary, list, etc. depending on what is in the response. Parameters: ****kwargs** – Optional arguments that `json.loads` takes. Raises: **requests.exceptions.JSONDecodeError** – If the response body does not contain valid json. _property _links¶ Returns the parsed header links of the response, if any. _property _next¶ Returns a PreparedRequest for the next request in a redirect chain, if there is one. _property _ok¶ Returns True if `status_code` is less than 400, False if not. This attribute checks if the status code of the response is between 400 and 600 to see if there was a client error or a server error. If the status code is between 200 and 400, this will return True. This is **not** a check to see if the response code is `200 OK`. raise_for_status()[[source]](../_modules/requests/models/#Response.raise_for_status)¶ Raises `HTTPError`, if one occurred. raw¶ File-like object representation of response (for advanced usage). Use of `raw` requires that `stream=True` be set on the request. This requirement does not apply for use internally to Requests. reason¶ Textual reason of responded HTTP Status, e.g. “Not Found” or “OK”. request¶ The `PreparedRequest` object to which this is a response. status_code¶ Integer Code of responded HTTP Status, e.g. 404 or 200. _property _text¶ Content of the response, in unicode. If Response.encoding is None, encoding will be guessed using `charset_normalizer` or `chardet`. The encoding of the response content is determined based solely on HTTP headers, following RFC 2616 to the letter. If you can take advantage of non- HTTP knowledge to make a better guess at the encoding, you should set `r.encoding` appropriately before accessing this property. url¶ Final URL location of Response. ## Lower-Lower-Level Classes¶ _class _requests.PreparedRequest[[source]](../_modules/requests/models/#PreparedRequest)¶ The fully mutable `PreparedRequest` object, containing the exact bytes that will be sent to the server. Instances are generated from a `Request` object, and should not be instantiated manually; doing so may produce undesirable effects. Usage: >>> import requests >>> req = requests.Request('GET', 'https://httpbin.org/get') >>> r = req.prepare() >>> r >>> s = requests.Session() >>> s.send(r) body¶ request body to send to the server. deregister_hook(_event_ , _hook_)¶ Deregister a previously registered hook. Returns True if the hook existed, False if not. headers¶ dictionary of HTTP headers. hooks¶ dictionary of callback hooks, for internal usage. method¶ HTTP verb to send to the server. _property _path_url¶ Build the path URL to use. prepare(_method =None_, _url =None_, _headers =None_, _files =None_, _data =None_, _params =None_, _auth =None_, _cookies =None_, _hooks =None_, _json =None_)[[source]](../_modules/requests/models/#PreparedRequest.prepare)¶ Prepares the entire request with the given parameters. prepare_auth(_auth_ , _url =''_)[[source]](../_modules/requests/models/#PreparedRequest.prepare_auth)¶ Prepares the given HTTP auth data. prepare_body(_data_ , _files_ , _json =None_)[[source]](../_modules/requests/models/#PreparedRequest.prepare_body)¶ Prepares the given HTTP body data. 
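To show where these prepare_* steps are used in practice, here is a small sketch (the target URL, payload, and header are illustrative) in which a `Session` prepares a `Request`, the result is inspected, and the prepared request is then sent:

```python
import requests

s = requests.Session()
req = requests.Request('POST', 'https://httpbin.org/post',
                       data={'key': 'value'},
                       headers={'X-Example': 'demo'})

# prepare_request() merges session-level settings (headers, auth, cookies)
# and calls the prepare_* methods documented here.
prepped = s.prepare_request(req)
print(prepped.method)                    # 'POST'
print(prepped.headers['Content-Type'])   # 'application/x-www-form-urlencoded'
print(prepped.body)                      # 'key=value'

resp = s.send(prepped, timeout=5)
print(resp.status_code)
```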
prepare_content_length(_body_)[[source]](../_modules/requests/models/#PreparedRequest.prepare_content_length)¶ Prepare Content-Length header based on request method and body prepare_cookies(_cookies_)[[source]](../_modules/requests/models/#PreparedRequest.prepare_cookies)¶ Prepares the given HTTP cookie data. This function eventually generates a `Cookie` header from the given cookies using cookielib. Due to cookielib’s design, the header will not be regenerated if it already exists, meaning this function can only be called once for the life of the `PreparedRequest` object. Any subsequent calls to `prepare_cookies` will have no actual effect, unless the “Cookie” header is removed beforehand. prepare_headers(_headers_)[[source]](../_modules/requests/models/#PreparedRequest.prepare_headers)¶ Prepares the given HTTP headers. prepare_hooks(_hooks_)[[source]](../_modules/requests/models/#PreparedRequest.prepare_hooks)¶ Prepares the given hooks. prepare_method(_method_)[[source]](../_modules/requests/models/#PreparedRequest.prepare_method)¶ Prepares the given HTTP method. prepare_url(_url_ , _params_)[[source]](../_modules/requests/models/#PreparedRequest.prepare_url)¶ Prepares the given HTTP URL. register_hook(_event_ , _hook_)¶ Properly register a hook. url¶ HTTP URL to send the request to. _class _requests.adapters.BaseAdapter[[source]](../_modules/requests/adapters/#BaseAdapter)¶ The Base Transport Adapter close()[[source]](../_modules/requests/adapters/#BaseAdapter.close)¶ Cleans up adapter specific items. send(_request_ , _stream =False_, _timeout =None_, _verify =True_, _cert =None_, _proxies =None_)[[source]](../_modules/requests/adapters/#BaseAdapter.send)¶ Sends PreparedRequest object. Returns Response object. Parameters: * **request** – The `PreparedRequest` being sent. * **stream** – (optional) Whether to stream the request content. * **timeout** ([_float_](https://docs.python.org/3/library/functions.html#float "\(in Python v3.14\)") _or_[ _tuple_](https://docs.python.org/3/library/stdtypes.html#tuple "\(in Python v3.14\)")) – (optional) How long to wait for the server to send data before giving up, as a float, or a [(connect timeout, read timeout)](../user/advanced/#timeouts) tuple. * **verify** – (optional) Either a boolean, in which case it controls whether we verify the server’s TLS certificate, or a string, in which case it must be a path to a CA bundle to use * **cert** – (optional) Any user-provided SSL certificate to be trusted. * **proxies** – (optional) The proxies dictionary to apply to the request. _class _requests.adapters.HTTPAdapter(_pool_connections =10_, _pool_maxsize =10_, _max_retries =0_, _pool_block =False_)[[source]](../_modules/requests/adapters/#HTTPAdapter)¶ The built-in HTTP Adapter for urllib3. Provides a general-case interface for Requests sessions to contact HTTP and HTTPS urls by implementing the Transport Adapter interface. This class will usually be created by the `Session` class under the covers. Parameters: * **pool_connections** – The number of urllib3 connection pools to cache. * **pool_maxsize** – The maximum number of connections to save in the pool. * **max_retries** – The maximum number of retries each connection should attempt. Note, this applies only to failed DNS lookups, socket connections and connection timeouts, never to requests where data has made it to the server. By default, Requests does not retry failed connections. 
If you need granular control over the conditions under which we retry a request, import urllib3’s `Retry` class and pass that instead. * **pool_block** – Whether the connection pool should block for connections. Usage: >>> import requests >>> s = requests.Session() >>> a = requests.adapters.HTTPAdapter(max_retries=3) >>> s.mount('http://', a) add_headers(_request_ , _** kwargs_)[[source]](../_modules/requests/adapters/#HTTPAdapter.add_headers)¶ Add any headers needed by the connection. As of v2.0 this does nothing by default, but is left for overriding by users that subclass the `HTTPAdapter`. This should not be called from user code, and is only exposed for use when subclassing the `HTTPAdapter`. Parameters: * **request** – The `PreparedRequest` to add headers to. * **kwargs** – The keyword arguments from the call to send(). build_connection_pool_key_attributes(_request_ , _verify_ , _cert =None_)[[source]](../_modules/requests/adapters/#HTTPAdapter.build_connection_pool_key_attributes)¶ Build the PoolKey attributes used by urllib3 to return a connection. This looks at the PreparedRequest, the user-specified verify value, and the value of the cert parameter to determine what PoolKey values to use to select a connection from a given urllib3 Connection Pool. The SSL related pool key arguments are not consistently set. As of this writing, use the following to determine what keys may be in that dictionary: * If `verify` is `True`, `"ssl_context"` will be set and will be the default Requests SSL Context * If `verify` is `False`, `"ssl_context"` will not be set but `"cert_reqs"` will be set * If `verify` is a string, (i.e., it is a user-specified trust bundle) `"ca_certs"` will be set if the string is not a directory recognized by [`os.path.isdir`](https://docs.python.org/3/library/os.path.html#os.path.isdir "\(in Python v3.14\)"), otherwise `"ca_cert_dir"` will be set. * If `"cert"` is specified, `"cert_file"` will always be set. If `"cert"` is a tuple with a second item, `"key_file"` will also be present To override these settings, one may subclass this class, call this method and use the above logic to change parameters as desired. For example, if one wishes to use a custom [`ssl.SSLContext`](https://urllib3.readthedocs.io/en/latest/reference/urllib3.util.html#urllib3.util.SSLContext "\(in urllib3 v2.5.1.dev40\)") one must both set `"ssl_context"` and based on what else they require, alter the other keys to ensure the desired behaviour. Parameters: * **request** (`PreparedRequest`) – The PreparedReqest being sent over the connection. * **verify** – Either a boolean, in which case it controls whether we verify the server’s TLS certificate, or a string, in which case it must be a path to a CA bundle to use. * **cert** – (optional) Any user-provided SSL certificate for client authentication (a.k.a., mTLS). This may be a string (i.e., just the path to a file which holds both certificate and key) or a tuple of length 2 with the certificate file path and key file path. Returns: A tuple of two dictionaries. The first is the “host parameters” portion of the Pool Key including scheme, hostname, and port. The second is a dictionary of SSLContext related parameters. build_response(_req_ , _resp_)[[source]](../_modules/requests/adapters/#HTTPAdapter.build_response)¶ Builds a `Response` object from a urllib3 response. This should not be called from user code, and is only exposed for use when subclassing the `HTTPAdapter` Parameters: * **req** – The `PreparedRequest` used to generate the response. 
* **resp** – The urllib3 response object. Return type: requests.Response cert_verify(_conn_ , _url_ , _verify_ , _cert_)[[source]](../_modules/requests/adapters/#HTTPAdapter.cert_verify)¶ Verify a SSL certificate. This method should not be called from user code, and is only exposed for use when subclassing the `HTTPAdapter`. Parameters: * **conn** – The urllib3 connection object associated with the cert. * **url** – The requested URL. * **verify** – Either a boolean, in which case it controls whether we verify the server’s TLS certificate, or a string, in which case it must be a path to a CA bundle to use * **cert** – The SSL certificate to verify. close()[[source]](../_modules/requests/adapters/#HTTPAdapter.close)¶ Disposes of any internal state. Currently, this closes the PoolManager and any active ProxyManager, which closes any pooled connections. get_connection(_url_ , _proxies =None_)[[source]](../_modules/requests/adapters/#HTTPAdapter.get_connection)¶ DEPRECATED: Users should move to get_connection_with_tls_context for all subclasses of HTTPAdapter using Requests>=2.32.2. Returns a urllib3 connection for the given URL. This should not be called from user code, and is only exposed for use when subclassing the `HTTPAdapter`. Parameters: * **url** – The URL to connect to. * **proxies** – (optional) A Requests-style dictionary of proxies used on this request. Return type: urllib3.ConnectionPool get_connection_with_tls_context(_request_ , _verify_ , _proxies =None_, _cert =None_)[[source]](../_modules/requests/adapters/#HTTPAdapter.get_connection_with_tls_context)¶ Returns a urllib3 connection for the given request and TLS settings. This should not be called from user code, and is only exposed for use when subclassing the `HTTPAdapter`. Parameters: * **request** – The `PreparedRequest` object to be sent over the connection. * **verify** – Either a boolean, in which case it controls whether we verify the server’s TLS certificate, or a string, in which case it must be a path to a CA bundle to use. * **proxies** – (optional) The proxies dictionary to apply to the request. * **cert** – (optional) Any user-provided SSL certificate to be used for client authentication (a.k.a., mTLS). Return type: urllib3.ConnectionPool init_poolmanager(_connections_ , _maxsize_ , _block =False_, _** pool_kwargs_)[[source]](../_modules/requests/adapters/#HTTPAdapter.init_poolmanager)¶ Initializes a urllib3 PoolManager. This method should not be called from user code, and is only exposed for use when subclassing the `HTTPAdapter`. Parameters: * **connections** – The number of urllib3 connection pools to cache. * **maxsize** – The maximum number of connections to save in the pool. * **block** – Block when no free connections are available. * **pool_kwargs** – Extra keyword arguments used to initialize the Pool Manager. proxy_headers(_proxy_)[[source]](../_modules/requests/adapters/#HTTPAdapter.proxy_headers)¶ Returns a dictionary of the headers to add to any request sent through a proxy. This works with urllib3 magic to ensure that they are correctly sent to the proxy, rather than in a tunnelled request if CONNECT is being used. This should not be called from user code, and is only exposed for use when subclassing the `HTTPAdapter`. Parameters: **proxy** – The url of the proxy being used for this request. 
Return type: [dict](https://docs.python.org/3/library/stdtypes.html#dict "\(in Python v3.14\)") proxy_manager_for(_proxy_ , _** proxy_kwargs_)[[source]](../_modules/requests/adapters/#HTTPAdapter.proxy_manager_for)¶ Return urllib3 ProxyManager for the given proxy. This method should not be called from user code, and is only exposed for use when subclassing the `HTTPAdapter`. Parameters: * **proxy** – The proxy to return a urllib3 ProxyManager for. * **proxy_kwargs** – Extra keyword arguments used to configure the Proxy Manager. Returns: ProxyManager Return type: [urllib3.ProxyManager](https://urllib3.readthedocs.io/en/latest/reference/urllib3.poolmanager.html#urllib3.ProxyManager "\(in urllib3 v2.5.1.dev40\)") request_url(_request_ , _proxies_)[[source]](../_modules/requests/adapters/#HTTPAdapter.request_url)¶ Obtain the url to use when making the final request. If the message is being sent through a HTTP proxy, the full URL has to be used. Otherwise, we should only use the path portion of the URL. This should not be called from user code, and is only exposed for use when subclassing the `HTTPAdapter`. Parameters: * **request** – The `PreparedRequest` being sent. * **proxies** – A dictionary of schemes or schemes and hosts to proxy URLs. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.14\)") send(_request_ , _stream =False_, _timeout =None_, _verify =True_, _cert =None_, _proxies =None_)[[source]](../_modules/requests/adapters/#HTTPAdapter.send)¶ Sends PreparedRequest object. Returns Response object. Parameters: * **request** – The `PreparedRequest` being sent. * **stream** – (optional) Whether to stream the request content. * **timeout** ([_float_](https://docs.python.org/3/library/functions.html#float "\(in Python v3.14\)") _or_[ _tuple_](https://docs.python.org/3/library/stdtypes.html#tuple "\(in Python v3.14\)") _or_ _urllib3 Timeout object_) – (optional) How long to wait for the server to send data before giving up, as a float, or a [(connect timeout, read timeout)](../user/advanced/#timeouts) tuple. * **verify** – (optional) Either a boolean, in which case it controls whether we verify the server’s TLS certificate, or a string, in which case it must be a path to a CA bundle to use * **cert** – (optional) Any user-provided SSL certificate to be trusted. * **proxies** – (optional) The proxies dictionary to apply to the request. Return type: requests.Response ## Authentication¶ _class _requests.auth.AuthBase[[source]](../_modules/requests/auth/#AuthBase)¶ Base class that all auth implementations derive from _class _requests.auth.HTTPBasicAuth(_username_ , _password_)[[source]](../_modules/requests/auth/#HTTPBasicAuth)¶ Attaches HTTP Basic Authentication to the given Request object. _class _requests.auth.HTTPProxyAuth(_username_ , _password_)[[source]](../_modules/requests/auth/#HTTPProxyAuth)¶ Attaches HTTP Proxy Authentication to a given Request object. _class _requests.auth.HTTPDigestAuth(_username_ , _password_)[[source]](../_modules/requests/auth/#HTTPDigestAuth)¶ Attaches HTTP Digest Authentication to the given Request object. ## Encodings¶ requests.utils.get_encodings_from_content(_content_)[[source]](../_modules/requests/utils/#get_encodings_from_content)¶ Returns encodings from given content string. Parameters: **content** – bytestring to extract encodings from. requests.utils.get_encoding_from_headers(_headers_)[[source]](../_modules/requests/utils/#get_encoding_from_headers)¶ Returns encodings from given HTTP Header Dict. 
Parameters: **headers** – dictionary to extract encoding from. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.14\)") requests.utils.get_unicode_from_response(_r_)[[source]](../_modules/requests/utils/#get_unicode_from_response)¶ Returns the requested content back in unicode. Parameters: **r** – Response object to get unicode content from. Tried: 1. charset from content-type 2. fall back and replace all unicode characters Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "\(in Python v3.14\)") ## Cookies¶ requests.utils.dict_from_cookiejar(_cj_)[[source]](../_modules/requests/utils/#dict_from_cookiejar)¶ Returns a key/value dictionary from a CookieJar. Parameters: **cj** – CookieJar object to extract cookies from. Return type: [dict](https://docs.python.org/3/library/stdtypes.html#dict "\(in Python v3.14\)") requests.utils.add_dict_to_cookiejar(_cj_ , _cookie_dict_)[[source]](../_modules/requests/utils/#add_dict_to_cookiejar)¶ Returns a CookieJar from a key/value dictionary. Parameters: * **cj** – CookieJar to insert cookies into. * **cookie_dict** – Dict of key/values to insert into CookieJar. Return type: CookieJar requests.cookies.cookiejar_from_dict(_cookie_dict_ , _cookiejar =None_, _overwrite =True_)[[source]](../_modules/requests/cookies/#cookiejar_from_dict)¶ Returns a CookieJar from a key/value dictionary. Parameters: * **cookie_dict** – Dict of key/values to insert into CookieJar. * **cookiejar** – (optional) A cookiejar to add the cookies to. * **overwrite** – (optional) If False, will not replace cookies already in the jar with new ones. Return type: CookieJar _class _requests.cookies.RequestsCookieJar(_policy =None_)[[source]](../_modules/requests/cookies/#RequestsCookieJar)¶ Compatibility class; is a http.cookiejar.CookieJar, but exposes a dict interface. This is the CookieJar we create by default for requests and sessions that don’t specify one, since some clients may expect response.cookies and session.cookies to support dict operations. Requests does not use the dict interface internally; it’s just for compatibility with external client code. All requests code should work out of the box with externally provided instances of `CookieJar`, e.g. `LWPCookieJar` and `FileCookieJar`. Unlike a regular CookieJar, this class is pickleable. Warning dictionary operations that are normally O(1) may be O(n). add_cookie_header(_request_)¶ Add correct Cookie: header to request (urllib.request.Request object). The Cookie2 header is also added unless policy.hide_cookie2 is true. clear(_domain =None_, _path =None_, _name =None_)¶ Clear some cookies. Invoking this method without arguments will clear all cookies. If given a single argument, only cookies belonging to that domain will be removed. If given two arguments, cookies belonging to the specified path within that domain are removed. If given three arguments, then the cookie with the specified name, path and domain is removed. Raises KeyError if no matching cookie exists. clear_expired_cookies()¶ Discard all expired cookies. You probably don’t need to call this method: expired cookies are never sent back to the server (provided you’re using DefaultCookiePolicy), this method is called by CookieJar itself every so often, and the .save() method won’t save expired cookies anyway (unless you ask otherwise by passing a true ignore_expires argument). clear_session_cookies()¶ Discard all session cookies. 
Note that the .save() method won’t save session cookies anyway, unless you ask otherwise by passing a true ignore_discard argument. copy()[[source]](../_modules/requests/cookies/#RequestsCookieJar.copy)¶ Return a copy of this RequestsCookieJar. extract_cookies(_response_ , _request_)¶ Extract cookies from response, where allowable given the request. get(_name_ , _default =None_, _domain =None_, _path =None_)[[source]](../_modules/requests/cookies/#RequestsCookieJar.get)¶ Dict-like get() that also supports optional domain and path args in order to resolve naming collisions from using one cookie jar over multiple domains. Warning operation is O(n), not O(1). get_dict(_domain =None_, _path =None_)[[source]](../_modules/requests/cookies/#RequestsCookieJar.get_dict)¶ Takes as an argument an optional domain and path and returns a plain old Python dict of name-value pairs of cookies that meet the requirements. Return type: [dict](https://docs.python.org/3/library/stdtypes.html#dict "\(in Python v3.14\)") get_policy()[[source]](../_modules/requests/cookies/#RequestsCookieJar.get_policy)¶ Return the CookiePolicy instance used. items()[[source]](../_modules/requests/cookies/#RequestsCookieJar.items)¶ Dict-like items() that returns a list of name-value tuples from the jar. Allows client-code to call `dict(RequestsCookieJar)` and get a vanilla python dict of key value pairs. See also keys() and values(). iteritems()[[source]](../_modules/requests/cookies/#RequestsCookieJar.iteritems)¶ Dict-like iteritems() that returns an iterator of name-value tuples from the jar. See also iterkeys() and itervalues(). iterkeys()[[source]](../_modules/requests/cookies/#RequestsCookieJar.iterkeys)¶ Dict-like iterkeys() that returns an iterator of names of cookies from the jar. See also itervalues() and iteritems(). itervalues()[[source]](../_modules/requests/cookies/#RequestsCookieJar.itervalues)¶ Dict-like itervalues() that returns an iterator of values of cookies from the jar. See also iterkeys() and iteritems(). keys()[[source]](../_modules/requests/cookies/#RequestsCookieJar.keys)¶ Dict-like keys() that returns a list of names of cookies from the jar. See also values() and items(). list_domains()[[source]](../_modules/requests/cookies/#RequestsCookieJar.list_domains)¶ Utility method to list all the domains in the jar. list_paths()[[source]](../_modules/requests/cookies/#RequestsCookieJar.list_paths)¶ Utility method to list all the paths in the jar. make_cookies(_response_ , _request_)¶ Return sequence of Cookie objects extracted from response object. multiple_domains()[[source]](../_modules/requests/cookies/#RequestsCookieJar.multiple_domains)¶ Returns True if there are multiple domains in the jar. Returns False otherwise. Return type: [bool](https://docs.python.org/3/library/functions.html#bool "\(in Python v3.14\)") pop(_k_[, _d_]) -> v, remove specified key and return the corresponding value.¶ If key is not found, d is returned if given, otherwise KeyError is raised. popitem() -> (k, v), remove and return some (key, value) pair¶ as a 2-tuple; but raise KeyError if D is empty. set(_name_ , _value_ , _** kwargs_)[[source]](../_modules/requests/cookies/#RequestsCookieJar.set)¶ Dict-like set() that also supports optional domain and path args in order to resolve naming collisions from using one cookie jar over multiple domains. set_cookie(_cookie_ , _* args_, _** kwargs_)[[source]](../_modules/requests/cookies/#RequestsCookieJar.set_cookie)¶ Set a cookie, without checking whether or not it should be set. 
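A brief sketch of the dict-like `set()`/`get()` calls with the optional domain and path arguments (the domains and values here are made up):

```python
from requests.cookies import RequestsCookieJar

jar = RequestsCookieJar()
# set() accepts optional domain/path to disambiguate cookies with the same name
jar.set('session', 'abc123', domain='example.com', path='/app')
jar.set('session', 'xyz789', domain='other.example', path='/')

print(jar.get('session', domain='example.com', path='/app'))  # 'abc123'
print(jar.get_dict(domain='other.example'))                   # {'session': 'xyz789'}
print(jar.list_domains())                                     # ['example.com', 'other.example']
```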
set_cookie_if_ok(_cookie_ , _request_)¶ Set a cookie if policy says it’s OK to do so. setdefault(_k_[, _d_]) -> D.get(k,d), also set D[k]=d if k not in D¶ update(_other_)[[source]](../_modules/requests/cookies/#RequestsCookieJar.update)¶ Updates this jar with cookies from another CookieJar or dict-like values()[[source]](../_modules/requests/cookies/#RequestsCookieJar.values)¶ Dict-like values() that returns a list of values of cookies from the jar. See also keys() and items(). _class _requests.cookies.CookieConflictError[[source]](../_modules/requests/cookies/#CookieConflictError)¶ There are two cookies that meet the criteria specified in the cookie jar. Use .get and .set and include domain and path args in order to be more specific. add_note()¶ Exception.add_note(note) – add a note to the exception with_traceback()¶ Exception.with_traceback(tb) – set self.__traceback__ to tb and return self. ## Status Code Lookup¶ requests.codes¶ alias of {} The `codes` object defines a mapping from common names for HTTP statuses to their numerical codes, accessible either as attributes or as dictionary items. Example: >>> import requests >>> requests.codes['temporary_redirect'] 307 >>> requests.codes.teapot 418 >>> requests.codes['\o/'] 200 Some codes have multiple names, and both upper- and lower-case versions of the names are allowed. For example, `codes.ok`, `codes.OK`, and `codes.okay` all correspond to the HTTP status code 200. * 100: `continue` * 101: `switching_protocols` * 102: `processing`, `early-hints` * 103: `checkpoint` * 122: `uri_too_long`, `request_uri_too_long` * 200: `ok`, `okay`, `all_ok`, `all_okay`, `all_good`, `\o/`, `✓` * 201: `created` * 202: `accepted` * 203: `non_authoritative_info`, `non_authoritative_information` * 204: `no_content` * 205: `reset_content`, `reset` * 206: `partial_content`, `partial` * 207: `multi_status`, `multiple_status`, `multi_stati`, `multiple_stati` * 208: `already_reported` * 226: `im_used` * 300: `multiple_choices` * 301: `moved_permanently`, `moved`, `\o-` * 302: `found` * 303: `see_other`, `other` * 304: `not_modified` * 305: `use_proxy` * 306: `switch_proxy` * 307: `temporary_redirect`, `temporary_moved`, `temporary` * 308: `permanent_redirect`, `resume_incomplete`, `resume` * 400: `bad_request`, `bad` * 401: `unauthorized` * 402: `payment_required`, `payment` * 403: `forbidden` * 404: `not_found`, `-o-` * 405: `method_not_allowed`, `not_allowed` * 406: `not_acceptable` * 407: `proxy_authentication_required`, `proxy_auth`, `proxy_authentication` * 408: `request_timeout`, `timeout` * 409: `conflict` * 410: `gone` * 411: `length_required` * 412: `precondition_failed`, `precondition` * 413: `request_entity_too_large`, `content_too_large` * 414: `request_uri_too_large`, `uri_too_long` * 415: `unsupported_media_type`, `unsupported_media`, `media_type` * 416: `requested_range_not_satisfiable`, `requested_range`, `range_not_satisfiable` * 417: `expectation_failed` * 418: `im_a_teapot`, `teapot`, `i_am_a_teapot` * 421: `misdirected_request` * 422: `unprocessable_entity`, `unprocessable`, `unprocessable_content` * 423: `locked` * 424: `failed_dependency`, `dependency` * 425: `unordered_collection`, `unordered`, `too_early` * 426: `upgrade_required`, `upgrade` * 428: `precondition_required`, `precondition` * 429: `too_many_requests`, `too_many` * 431: `header_fields_too_large`, `fields_too_large` * 444: `no_response`, `none` * 449: `retry_with`, `retry` * 450: `blocked_by_windows_parental_controls`, `parental_controls` * 451: `unavailable_for_legal_reasons`, 
`legal_reasons` * 499: `client_closed_request` * 500: `internal_server_error`, `server_error`, `/o\`, `✗` * 501: `not_implemented` * 502: `bad_gateway` * 503: `service_unavailable`, `unavailable` * 504: `gateway_timeout` * 505: `http_version_not_supported`, `http_version` * 506: `variant_also_negotiates` * 507: `insufficient_storage` * 509: `bandwidth_limit_exceeded`, `bandwidth` * 510: `not_extended` * 511: `network_authentication_required`, `network_auth`, `network_authentication` ## Migrating to 1.x¶ This section details the main differences between 0.x and 1.x and is meant to ease the pain of upgrading. ### API Changes¶ * `Response.json` is now a callable and not a property of a response. import requests r = requests.get('https://api.github.com/events') r.json() # This *call* raises an exception if JSON decoding fails * The `Session` API has changed. Sessions objects no longer take parameters. `Session` is also now capitalized, but it can still be instantiated with a lowercase `session` for backwards compatibility. s = requests.Session() # formerly, session took parameters s.auth = auth s.headers.update(headers) r = s.get('https://httpbin.org/headers') * All request hooks have been removed except ‘response’. * Authentication helpers have been broken out into separate modules. See [requests-oauthlib](https://github.com/requests/requests-oauthlib) and [requests-kerberos](https://github.com/requests/requests-kerberos). * The parameter for streaming requests was changed from `prefetch` to `stream` and the logic was inverted. In addition, `stream` is now required for raw response reading. # in 0.x, passing prefetch=False would accomplish the same thing r = requests.get('https://api.github.com/events', stream=True) for chunk in r.iter_content(8192): ... * The `config` parameter to the requests method has been removed. Some of these options are now configured on a `Session` such as keep-alive and maximum number of redirects. The verbosity option should be handled by configuring logging. import requests import logging # Enabling debugging at http.client level (requests->urllib3->http.client) # you will see the REQUEST, including HEADERS and DATA, and RESPONSE with HEADERS but without DATA. # the only thing missing will be the response.body which is not logged. try: # for Python 3 from http.client import HTTPConnection except ImportError: from httplib import HTTPConnection HTTPConnection.debuglevel = 1 logging.basicConfig() # you need to initialize logging, otherwise you will not see anything from requests logging.getLogger().setLevel(logging.DEBUG) requests_log = logging.getLogger("urllib3") requests_log.setLevel(logging.DEBUG) requests_log.propagate = True requests.get('https://httpbin.org/headers') ### Licensing¶ One key difference that has nothing to do with the API is a change in the license from the [ISC](https://opensource.org/licenses/ISC) license to the [Apache 2.0](https://opensource.org/licenses/Apache-2.0) license. The Apache 2.0 license ensures that contributions to Requests are also covered by the Apache 2.0 license. ## Migrating to 2.x¶ Compared with the 1.0 release, there were relatively few backwards incompatible changes, but there are still a few issues to be aware of with this major release. For more details on the changes in this release including new APIs, links to the relevant GitHub issues and some of the bug fixes, read Cory’s [blog](https://lukasa.co.uk/2013/09/Requests_20/) on the subject. 
### API Changes¶ * There were a couple changes to how Requests handles exceptions. `RequestException` is now a subclass of `IOError` rather than `RuntimeError` as that more accurately categorizes the type of error. In addition, an invalid URL escape sequence now raises a subclass of `RequestException` rather than a `ValueError`. requests.get('http://%zz/') # raises requests.exceptions.InvalidURL Lastly, `httplib.IncompleteRead` exceptions caused by incorrect chunked encoding will now raise a Requests `ChunkedEncodingError` instead. * The proxy API has changed slightly. The scheme for a proxy URL is now required. proxies = { "http": "10.10.1.10:3128", # use http://10.10.1.10:3128 instead } # In requests 1.x, this was legal, in requests 2.x, # this raises requests.exceptions.MissingSchema requests.get("http://example.org", proxies=proxies) ### Behavioural Changes¶ * Keys in the `headers` dictionary are now native strings on all Python versions, i.e. bytestrings on Python 2 and unicode on Python 3. If the keys are not native strings (unicode on Python 2 or bytestrings on Python 3) they will be converted to the native string type assuming UTF-8 encoding. * Values in the `headers` dictionary should always be strings. This has been the project’s position since before 1.0 but a recent change (since version 2.11.0) enforces this more strictly. It’s advised to avoid passing header values as unicode when possible.
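For example, a small sketch of that header-value guidance (the header names and token are made up; httpbin.org simply echoes the request headers back):

```python
import requests

# Header values should be str, not bytes or numbers.
headers = {
    'Authorization': 'Bearer not-a-real-token',
    'X-Request-Count': str(3),   # coerce non-string values yourself
}
r = requests.get('https://httpbin.org/headers', headers=headers)
print(r.json()['headers'])
```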
--- # Requests Documentation # Source: https://requests.readthedocs.io/en/latest/community/faq/ # Path: community/faq/ # Frequently Asked Questions¶ This part of the documentation answers common questions about Requests. ## Encoded Data?¶ Requests automatically decompresses gzip-encoded responses, and does its best to decode response content to unicode when possible. When either the [brotli](https://pypi.org/project/Brotli/) or [brotlicffi](https://pypi.org/project/brotlicffi/) package is installed, requests also decodes Brotli-encoded responses.
You can get direct access to the raw response (and even the socket), if needed as well.

## Custom User-Agents?¶

Requests allows you to easily override User-Agent strings, along with any other HTTP header. See the [documentation about headers](../../user/quickstart/#custom-headers); a short sketch also appears at the end of this FAQ.

## Why not Httplib2?¶

Chris Adams gave an excellent summary on [Hacker News](http://news.ycombinator.com/item?id=2884406):

> httplib2 is part of why you should use requests: it's far more respectable as a client but not as well documented and it still takes way too much code for basic operations. I appreciate what httplib2 is trying to do, that there's a ton of hard low-level annoyances in building a modern HTTP client, but really, just use requests instead. Kenneth Reitz is very motivated and he gets the degree to which simple things should be simple whereas httplib2 feels more like an academic exercise than something people should use to build production systems[1].
>
> Disclosure: I'm listed in the requests AUTHORS file but can claim credit for, oh, about 0.0001% of the awesomeness.
>
> 1\. is a good example: an annoying bug which affect many people, there was a fix available for months, which worked great when I applied it in a fork and pounded a couple TB of data through it, but it took over a year to make it into trunk and even longer to make it onto PyPI where any other project which required "httplib2" would get the working version.

## Python 3 Support?¶

Yes! Requests supports all [officially supported versions of Python](https://devguide.python.org/versions/) and recent releases of PyPy.

## Python 2 Support?¶

No! As of Requests 2.28.0, Requests no longer supports Python 2.7. Users who have been unable to migrate should pin to `requests<2.28`. Full information can be found in [psf/requests#6023](https://github.com/psf/requests/issues/6023).

It is _highly_ recommended that users migrate to a supported Python 3.x version now, since Python 2.7 has not received bug fixes or security updates since January 1, 2020.

## What are "hostname doesn't match" errors?¶

These errors occur when [SSL certificate verification](../../user/advanced/#verification) fails to match the certificate the server responds with to the hostname Requests thinks it's contacting.

If you're certain the server's SSL setup is correct (for example, because you can visit the site with your browser) and you're using Python 2.7, a possible explanation is that you need Server-Name-Indication.

[Server-Name-Indication](https://en.wikipedia.org/wiki/Server_Name_Indication), or SNI, is an official extension to SSL where the client tells the server what hostname it is contacting. This is important when servers are using [Virtual Hosting](https://en.wikipedia.org/wiki/Virtual_hosting). When such servers are hosting more than one SSL site they need to be able to return the appropriate certificate based on the hostname the client is connecting to.

Python 3 already includes native support for SNI in its SSL module.
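As promised in the Custom User-Agents answer above, here is a minimal sketch of overriding the User-Agent header. The application name is a placeholder, and httpbin.org is used only because it echoes the header back:

    import requests

    # Any descriptive string works here; "my-app/0.0.1" is purely illustrative.
    headers = {"User-Agent": "my-app/0.0.1"}
    r = requests.get("https://httpbin.org/user-agent", headers=headers)
    print(r.json())  # {'user-agent': 'my-app/0.0.1'}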
---

# Requests Documentation
# Source: https://requests.readthedocs.io/en/latest/
# Path: index

# Requests: HTTP for Humans™¶

Release v2.32.5. ([Installation](user/install/#install))

[![Requests Downloads Per Month Badge](https://static.pepy.tech/badge/requests/month)](https://pepy.tech/project/requests) [![License Badge](https://img.shields.io/pypi/l/requests.svg)](https://pypi.org/project/requests/) [![Wheel Support Badge](https://img.shields.io/pypi/wheel/requests.svg)](https://pypi.org/project/requests/) [![Python Version Support Badge](https://img.shields.io/pypi/pyversions/requests.svg)](https://pypi.org/project/requests/)

**Requests** is an elegant and simple HTTP library for Python, built for human beings.

* * *

**Behold, the power of Requests**:

    >>> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
    >>> r.status_code
    200
    >>> r.headers['content-type']
    'application/json; charset=utf8'
    >>> r.encoding
    'utf-8'
    >>> r.text
    '{"type":"User"...'
    >>> r.json()
    {'private_gists': 419, 'total_private_repos': 77, ...}

See [similar code, sans Requests](https://gist.github.com/973705).

**Requests** allows you to send HTTP/1.1 requests extremely easily. There's no need to manually add query strings to your URLs, or to form-encode your POST data. Keep-alive and HTTP connection pooling are 100% automatic, thanks to [urllib3](https://github.com/urllib3/urllib3).

## Beloved Features¶

Requests is ready for today's web.

* Keep-Alive & Connection Pooling
* International Domains and URLs
* Sessions with Cookie Persistence
* Browser-style SSL Verification
* Automatic Content Decoding
* Basic/Digest Authentication
* Elegant Key/Value Cookies
* Automatic Decompression
* Unicode Response Bodies
* HTTP(S) Proxy Support
* Multipart File Uploads
* Streaming Downloads
* Connection Timeouts
* Chunked Requests
* `.netrc` Support

Requests officially supports Python 3.9+, and runs great on PyPy.

## The User Guide¶

This part of the documentation, which is mostly prose, begins with some background information about Requests, then focuses on step-by-step instructions for getting the most out of Requests.
* [Installation of Requests](user/install/) * [$ python -m pip install requests](user/install/#python-m-pip-install-requests) * [Get the Source Code](user/install/#get-the-source-code) * [Quickstart](user/quickstart/) * [Make a Request](user/quickstart/#make-a-request) * [Passing Parameters In URLs](user/quickstart/#passing-parameters-in-urls) * [Response Content](user/quickstart/#response-content) * [Binary Response Content](user/quickstart/#binary-response-content) * [JSON Response Content](user/quickstart/#json-response-content) * [Raw Response Content](user/quickstart/#raw-response-content) * [Custom Headers](user/quickstart/#custom-headers) * [More complicated POST requests](user/quickstart/#more-complicated-post-requests) * [POST a Multipart-Encoded File](user/quickstart/#post-a-multipart-encoded-file) * [Response Status Codes](user/quickstart/#response-status-codes) * [Response Headers](user/quickstart/#response-headers) * [Cookies](user/quickstart/#cookies) * [Redirection and History](user/quickstart/#redirection-and-history) * [Timeouts](user/quickstart/#timeouts) * [Errors and Exceptions](user/quickstart/#errors-and-exceptions) * [Advanced Usage](user/advanced/) * [Session Objects](user/advanced/#session-objects) * [Request and Response Objects](user/advanced/#request-and-response-objects) * [Prepared Requests](user/advanced/#prepared-requests) * [SSL Cert Verification](user/advanced/#ssl-cert-verification) * [Client Side Certificates](user/advanced/#client-side-certificates) * [CA Certificates](user/advanced/#ca-certificates) * [Body Content Workflow](user/advanced/#body-content-workflow) * [Keep-Alive](user/advanced/#keep-alive) * [Streaming Uploads](user/advanced/#streaming-uploads) * [Chunk-Encoded Requests](user/advanced/#chunk-encoded-requests) * [POST Multiple Multipart-Encoded Files](user/advanced/#post-multiple-multipart-encoded-files) * [Event Hooks](user/advanced/#event-hooks) * [Custom Authentication](user/advanced/#custom-authentication) * [Streaming Requests](user/advanced/#streaming-requests) * [Proxies](user/advanced/#proxies) * [Compliance](user/advanced/#compliance) * [HTTP Verbs](user/advanced/#http-verbs) * [Custom Verbs](user/advanced/#custom-verbs) * [Link Headers](user/advanced/#link-headers) * [Transport Adapters](user/advanced/#transport-adapters) * [Blocking Or Non-Blocking?](user/advanced/#blocking-or-non-blocking) * [Header Ordering](user/advanced/#header-ordering) * [Timeouts](user/advanced/#timeouts) * [Authentication](user/authentication/) * [Basic Authentication](user/authentication/#basic-authentication) * [Digest Authentication](user/authentication/#digest-authentication) * [OAuth 1 Authentication](user/authentication/#oauth-1-authentication) * [OAuth 2 and OpenID Connect Authentication](user/authentication/#oauth-2-and-openid-connect-authentication) * [Other Authentication](user/authentication/#other-authentication) * [New Forms of Authentication](user/authentication/#new-forms-of-authentication) ## The Community Guide¶ This part of the documentation, which is mostly prose, details the Requests ecosystem and community. 
* [Recommended Packages and Extensions](community/recommended/) * [Certifi CA Bundle](community/recommended/#certifi-ca-bundle) * [CacheControl](community/recommended/#cachecontrol) * [Requests-Toolbelt](community/recommended/#requests-toolbelt) * [Requests-Threads](community/recommended/#requests-threads) * [Requests-OAuthlib](community/recommended/#requests-oauthlib) * [Betamax](community/recommended/#betamax) * [Frequently Asked Questions](community/faq/) * [Encoded Data?](community/faq/#encoded-data) * [Custom User-Agents?](community/faq/#custom-user-agents) * [Why not Httplib2?](community/faq/#why-not-httplib2) * [Python 3 Support?](community/faq/#python-3-support) * [Python 2 Support?](community/faq/#python-2-support) * [What are “hostname doesn’t match” errors?](community/faq/#what-are-hostname-doesn-t-match-errors) * [Integrations](community/out-there/) * [Articles & Talks](community/out-there/#articles-talks) * [Support](community/support/) * [Stack Overflow](community/support/#stack-overflow) * [File an Issue](community/support/#file-an-issue) * [Send a Tweet](community/support/#send-a-tweet) * [Vulnerability Disclosure](community/vulnerabilities/) * [Release Process and Rules](community/release-process/) * [Major Releases](community/release-process/#major-releases) * [Minor Releases](community/release-process/#minor-releases) * [Hotfix Releases](community/release-process/#hotfix-releases) * [Reasoning](community/release-process/#reasoning) * [Community Updates](community/updates/) * [Release History](community/updates/#release-history) ## The API Documentation / Guide¶ If you are looking for information on a specific function, class, or method, this part of the documentation is for you. * [Developer Interface](api/) * [Main Interface](api/#main-interface) * [Exceptions](api/#exceptions) * [Request Sessions](api/#request-sessions) * [Lower-Level Classes](api/#lower-level-classes) * [Lower-Lower-Level Classes](api/#lower-lower-level-classes) * [Authentication](api/#authentication) * [Encodings](api/#encodings) * [Cookies](api/#cookies) * [Status Code Lookup](api/#status-code-lookup) * [Migrating to 1.x](api/#migrating-to-1-x) * [Migrating to 2.x](api/#migrating-to-2-x) ## The Contributor Guide¶ If you want to contribute to the project, this part of the documentation is for you. * [Contributor’s Guide](dev/contributing/) * [Code of Conduct](dev/contributing/#code-of-conduct) * [Get Early Feedback](dev/contributing/#get-early-feedback) * [Contribution Suitability](dev/contributing/#contribution-suitability) * [Code Contributions](dev/contributing/#code-contributions) * [Steps for Submitting Code](dev/contributing/#steps-for-submitting-code) * [Code Review](dev/contributing/#code-review) * [Code Style](dev/contributing/#code-style) * [New Contributors](dev/contributing/#new-contributors) * [Documentation Contributions](dev/contributing/#documentation-contributions) * [Bug Reports](dev/contributing/#bug-reports) * [Feature Requests](dev/contributing/#feature-requests) * [Authors](dev/authors/) * [Keepers of the Crystals](dev/authors/#keepers-of-the-crystals) * [Previous Keepers of Crystals](dev/authors/#previous-keepers-of-crystals) * [Patches and Suggestions](dev/authors/#patches-and-suggestions) There are no more guides. You are now guideless. Good luck. ![Requests logo](_static/requests-sidebar.png) Requests is an elegant and simple HTTP library for Python, built for human beings. 
---

# Requests Documentation
# Source: https://requests.readthedocs.io/en/latest/user/advanced/
# Path: user/advanced/

# Advanced Usage¶

This document covers some of Requests' more advanced features.

## Session Objects¶

The Session object allows you to persist certain parameters across requests. It also persists cookies across all requests made from the Session instance, and will use `urllib3`'s [connection pooling](https://urllib3.readthedocs.io/en/latest/reference/urllib3.connectionpool.html). So if you're making several requests to the same host, the underlying TCP connection will be reused, which can result in a significant performance increase (see [HTTP persistent connection](https://en.wikipedia.org/wiki/HTTP_persistent_connection)).

A Session object has all the methods of the main Requests API.

Let's persist some cookies across requests:

    s = requests.Session()

    s.get('https://httpbin.org/cookies/set/sessioncookie/123456789')
    r = s.get('https://httpbin.org/cookies')

    print(r.text)
    # '{"cookies": {"sessioncookie": "123456789"}}'

Sessions can also be used to provide default data to the request methods. This is done by providing data to the properties on a Session object:

    s = requests.Session()
    s.auth = ('user', 'pass')
    s.headers.update({'x-test': 'true'})

    # both 'x-test' and 'x-test2' are sent
    s.get('https://httpbin.org/headers', headers={'x-test2': 'true'})

Any dictionaries that you pass to a request method will be merged with the session-level values that are set. The method-level parameters override session parameters.

Note, however, that method-level parameters will _not_ be persisted across requests, even if using a session. This example will only send the cookies with the first request, but not the second:

    s = requests.Session()

    r = s.get('https://httpbin.org/cookies', cookies={'from-my': 'browser'})
    print(r.text)
    # '{"cookies": {"from-my": "browser"}}'

    r = s.get('https://httpbin.org/cookies')
    print(r.text)
    # '{"cookies": {}}'

If you want to manually add cookies to your session, use the [Cookie utility functions](../../api/#api-cookies) to manipulate [`Session.cookies`](../../api/#requests.Session.cookies "requests.Session.cookies").

Sessions can also be used as context managers:

    with requests.Session() as s:
        s.get('https://httpbin.org/cookies/set/sessioncookie/123456789')

This will make sure the session is closed as soon as the `with` block is exited, even if unhandled exceptions occurred.

Remove a Value From a Dict Parameter

Sometimes you'll want to omit session-level keys from a dict parameter. To do this, you simply set that key's value to `None` in the method-level parameter. It will automatically be omitted.

All values that are contained within a session are directly available to you. See the [Session API Docs](../../api/#sessionapi) to learn more.
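For instance, a minimal sketch of omitting a session-level header for a single request (httpbin.org is used only because it echoes request headers back):

    import requests

    s = requests.Session()
    s.headers.update({'x-test': 'true'})

    # Setting the key to None at the method level omits it for this request only;
    # the header stays configured on the session for later requests.
    r = s.get('https://httpbin.org/headers', headers={'x-test': None})
    print(r.json()['headers'])  # no 'X-Test' header was sent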
## Request and Response Objects¶ Whenever a call is made to `requests.get()` and friends, you are doing two major things. First, you are constructing a `Request` object which will be sent off to a server to request or query some resource. Second, a `Response` object is generated once Requests gets a response back from the server. The `Response` object contains all of the information returned by the server and also contains the `Request` object you created originally. Here is a simple request to get some very important information from Wikipedia’s servers: >>> r = requests.get('https://en.wikipedia.org/wiki/Monty_Python') If we want to access the headers the server sent back to us, we do this: >>> r.headers {'content-length': '56170', 'x-content-type-options': 'nosniff', 'x-cache': 'HIT from cp1006.eqiad.wmnet, MISS from cp1010.eqiad.wmnet', 'content-encoding': 'gzip', 'age': '3080', 'content-language': 'en', 'vary': 'Accept-Encoding,Cookie', 'server': 'Apache', 'last-modified': 'Wed, 13 Jun 2012 01:33:50 GMT', 'connection': 'close', 'cache-control': 'private, s-maxage=0, max-age=0, must-revalidate', 'date': 'Thu, 14 Jun 2012 12:59:39 GMT', 'content-type': 'text/html; charset=UTF-8', 'x-cache-lookup': 'HIT from cp1006.eqiad.wmnet:3128, MISS from cp1010.eqiad.wmnet:80'} However, if we want to get the headers we sent the server, we simply access the request, and then the request’s headers: >>> r.request.headers {'Accept-Encoding': 'identity, deflate, compress, gzip', 'Accept': '*/*', 'User-Agent': 'python-requests/1.2.0'} ## Prepared Requests¶ Whenever you receive a [`Response`](../../api/#requests.Response "requests.Response") object from an API call or a Session call, the `request` attribute is actually the `PreparedRequest` that was used. In some cases you may wish to do some extra work to the body or headers (or anything else really) before sending a request. The simple recipe for this is the following: from requests import Request, Session s = Session() req = Request('POST', url, data=data, headers=headers) prepped = req.prepare() # do something with prepped.body prepped.body = 'No, I want exactly this as the body.' # do something with prepped.headers del prepped.headers['Content-Type'] resp = s.send(prepped, stream=stream, verify=verify, proxies=proxies, cert=cert, timeout=timeout ) print(resp.status_code) Since you are not doing anything special with the `Request` object, you prepare it immediately and modify the `PreparedRequest` object. You then send that with the other parameters you would have sent to `requests.*` or `Session.*`. However, the above code will lose some of the advantages of having a Requests [`Session`](../../api/#requests.Session "requests.Session") object. In particular, [`Session`](../../api/#requests.Session "requests.Session")-level state such as cookies will not get applied to your request. To get a [`PreparedRequest`](../../api/#requests.PreparedRequest "requests.PreparedRequest") with that state applied, replace the call to [`Request.prepare()`](../../api/#requests.Request.prepare "requests.Request.prepare") with a call to [`Session.prepare_request()`](../../api/#requests.Session.prepare_request "requests.Session.prepare_request"), like this: from requests import Request, Session s = Session() req = Request('GET', url, data=data, headers=headers) prepped = s.prepare_request(req) # do something with prepped.body prepped.body = 'Seriously, send exactly these bytes.' 
# do something with prepped.headers prepped.headers['Keep-Dead'] = 'parrot' resp = s.send(prepped, stream=stream, verify=verify, proxies=proxies, cert=cert, timeout=timeout ) print(resp.status_code) When you are using the prepared request flow, keep in mind that it does not take into account the environment. This can cause problems if you are using environment variables to change the behaviour of requests. For example: Self- signed SSL certificates specified in `REQUESTS_CA_BUNDLE` will not be taken into account. As a result an `SSL: CERTIFICATE_VERIFY_FAILED` is thrown. You can get around this behaviour by explicitly merging the environment settings into your session: from requests import Request, Session s = Session() req = Request('GET', url) prepped = s.prepare_request(req) # Merge environment settings into session settings = s.merge_environment_settings(prepped.url, {}, None, None, None) resp = s.send(prepped, **settings) print(resp.status_code) ## SSL Cert Verification¶ Requests verifies SSL certificates for HTTPS requests, just like a web browser. By default, SSL verification is enabled, and Requests will throw a SSLError if it’s unable to verify the certificate: >>> requests.get('https://requestb.in') requests.exceptions.SSLError: hostname 'requestb.in' doesn't match either of '*.herokuapp.com', 'herokuapp.com' I don’t have SSL setup on this domain, so it throws an exception. Excellent. GitHub does though: >>> requests.get('https://github.com') You can pass `verify` the path to a CA_BUNDLE file or directory with certificates of trusted CAs: >>> requests.get('https://github.com', verify='/path/to/certfile') or persistent: s = requests.Session() s.verify = '/path/to/certfile' Note If `verify` is set to a path to a directory, the directory must have been processed using the `c_rehash` utility supplied with OpenSSL. This list of trusted CAs can also be specified through the `REQUESTS_CA_BUNDLE` environment variable. If `REQUESTS_CA_BUNDLE` is not set, `CURL_CA_BUNDLE` will be used as fallback. Requests can also ignore verifying the SSL certificate if you set `verify` to False: >>> requests.get('https://kennethreitz.org', verify=False) Note that when `verify` is set to `False`, requests will accept any TLS certificate presented by the server, and will ignore hostname mismatches and/or expired certificates, which will make your application vulnerable to man-in-the-middle (MitM) attacks. Setting verify to `False` may be useful during local development or testing. By default, `verify` is set to True. Option `verify` only applies to host certs. ## Client Side Certificates¶ You can also specify a local cert to use as client side certificate, as a single file (containing the private key and the certificate) or as a tuple of both files’ paths: >>> requests.get('https://kennethreitz.org', cert=('/path/client.cert', '/path/client.key')) or persistent: s = requests.Session() s.cert = '/path/client.cert' If you specify a wrong path or an invalid cert, you’ll get a SSLError: >>> requests.get('https://kennethreitz.org', cert='/wrong_path/client.pem') SSLError: [Errno 336265225] _ssl.c:347: error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib Warning The private key to your local certificate _must_ be unencrypted. Currently, Requests does not support using encrypted keys. ## CA Certificates¶ Requests uses certificates from the package [certifi](https://certifiio.readthedocs.io/). This allows for users to update their trusted certificates without changing the version of Requests. 
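If you want to check which bundle is actually in use, a small sketch (passing the bundle explicitly is equivalent to the default verification behaviour):

    import certifi
    import requests

    print(certifi.where())  # filesystem path to certifi's CA bundle

    r = requests.get('https://github.com', verify=certifi.where())
    print(r.status_code)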
Before version 2.16, Requests bundled a set of root CAs that it trusted, sourced from the [Mozilla trust store](https://hg.mozilla.org/mozilla- central/raw-file/tip/security/nss/lib/ckfw/builtins/certdata.txt). The certificates were only updated once for each Requests version. When `certifi` was not installed, this led to extremely out-of-date certificate bundles when using significantly older versions of Requests. For the sake of security we recommend upgrading certifi frequently! ## Body Content Workflow¶ By default, when you make a request, the body of the response is downloaded immediately. You can override this behaviour and defer downloading the response body until you access the [`Response.content`](../../api/#requests.Response.content "requests.Response.content") attribute with the `stream` parameter: tarball_url = 'https://github.com/psf/requests/tarball/main' r = requests.get(tarball_url, stream=True) At this point only the response headers have been downloaded and the connection remains open, hence allowing us to make content retrieval conditional: if int(r.headers['content-length']) < TOO_LONG: content = r.content ... You can further control the workflow by use of the [`Response.iter_content()`](../../api/#requests.Response.iter_content "requests.Response.iter_content") and [`Response.iter_lines()`](../../api/#requests.Response.iter_lines "requests.Response.iter_lines") methods. Alternatively, you can read the undecoded body from the underlying urllib3 [`urllib3.HTTPResponse`](https://urllib3.readthedocs.io/en/latest/reference/urllib3.response.html#urllib3.response.HTTPResponse "\(in urllib3 v2.5.1.dev40\)") at [`Response.raw`](../../api/#requests.Response.raw "requests.Response.raw"). If you set `stream` to `True` when making a request, Requests cannot release the connection back to the pool unless you consume all the data or call [`Response.close`](../../api/#requests.Response.close "requests.Response.close"). This can lead to inefficiency with connections. If you find yourself partially reading request bodies (or not reading them at all) while using `stream=True`, you should make the request within a `with` statement to ensure it’s always closed: with requests.get('https://httpbin.org/get', stream=True) as r: # Do things with the response here. ## Keep-Alive¶ Excellent news — thanks to urllib3, keep-alive is 100% automatic within a session! Any requests that you make within a session will automatically reuse the appropriate connection! Note that connections are only released back to the pool for reuse once all body data has been read; be sure to either set `stream` to `False` or read the `content` property of the `Response` object. ## Streaming Uploads¶ Requests supports streaming uploads, which allow you to send large streams or files without reading them into memory. To stream and upload, simply provide a file-like object for your body: with open('massive-body', 'rb') as f: requests.post('http://some.url/streamed', data=f) Warning It is strongly recommended that you open files in [binary mode](https://docs.python.org/3/tutorial/inputoutput.html#tut-files "\(in Python v3.14\)"). This is because Requests may attempt to provide the `Content-Length` header for you, and if it does this value will be set to the number of _bytes_ in the file. Errors may occur if you open the file in _text mode_. ## Chunk-Encoded Requests¶ Requests also supports Chunked transfer encoding for outgoing and incoming requests. 
To send a chunk-encoded request, simply provide a generator (or any iterator without a length) for your body: def gen(): yield 'hi' yield 'there' requests.post('http://some.url/chunked', data=gen()) For chunked encoded responses, it’s best to iterate over the data using [`Response.iter_content()`](../../api/#requests.Response.iter_content "requests.Response.iter_content"). In an ideal situation you’ll have set `stream=True` on the request, in which case you can iterate chunk-by-chunk by calling `iter_content` with a `chunk_size` parameter of `None`. If you want to set a maximum size of the chunk, you can set a `chunk_size` parameter to any integer. ## POST Multiple Multipart-Encoded Files¶ You can send multiple files in one request. For example, suppose you want to upload image files to an HTML form with a multiple file field ‘images’: To do that, just set files to a list of tuples of `(form_field_name, file_info)`: >>> url = 'https://httpbin.org/post' >>> multiple_files = [ ... ('images', ('foo.png', open('foo.png', 'rb'), 'image/png')), ... ('images', ('bar.png', open('bar.png', 'rb'), 'image/png'))] >>> r = requests.post(url, files=multiple_files) >>> r.text { ... 'files': {'images': 'data:image/png;base64,iVBORw ....'} 'Content-Type': 'multipart/form-data; boundary=3131623adb2043caaeb5538cc7aa0b3a', ... } Warning It is strongly recommended that you open files in [binary mode](https://docs.python.org/3/tutorial/inputoutput.html#tut-files "\(in Python v3.14\)"). This is because Requests may attempt to provide the `Content-Length` header for you, and if it does this value will be set to the number of _bytes_ in the file. Errors may occur if you open the file in _text mode_. ## Event Hooks¶ Requests has a hook system that you can use to manipulate portions of the request process, or signal event handling. Available hooks: `response`: The response generated from a Request. You can assign a hook function on a per-request basis by passing a `{hook_name: callback_function}` dictionary to the `hooks` request parameter: hooks={'response': print_url} That `callback_function` will receive a chunk of data as its first argument. def print_url(r, *args, **kwargs): print(r.url) Your callback function must handle its own exceptions. Any unhandled exception won’t be passed silently and thus should be handled by the code calling Requests. If the callback function returns a value, it is assumed that it is to replace the data that was passed in. If the function doesn’t return anything, nothing else is affected. def record_hook(r, *args, **kwargs): r.hook_called = True return r Let’s print some request method arguments at runtime: >>> requests.get('https://httpbin.org/', hooks={'response': print_url}) https://httpbin.org/ You can add multiple hooks to a single request. Let’s call two hooks at once: >>> r = requests.get('https://httpbin.org/', hooks={'response': [print_url, record_hook]}) >>> r.hook_called True You can also add hooks to a `Session` instance. Any hooks you add will then be called on every request made to the session. For example: >>> s = requests.Session() >>> s.hooks['response'].append(print_url) >>> s.get('https://httpbin.org/') https://httpbin.org/ A `Session` can have multiple hooks, which will be called in the order they are added. ## Custom Authentication¶ Requests allows you to specify your own authentication mechanism. Any callable which is passed as the `auth` argument to a request method will have the opportunity to modify the request before it is dispatched. 
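In the simplest case that callable can be a plain function. A minimal sketch (the `X-Api-Key` header and its value are made up; httpbin.org is used only because it echoes request headers back):

    import requests

    def api_key_auth(r):
        # The callable receives the PreparedRequest and must return it.
        r.headers['X-Api-Key'] = 'not-a-real-key'
        return r

    r = requests.get('https://httpbin.org/headers', auth=api_key_auth)
    print(r.json()['headers'].get('X-Api-Key'))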
Authentication implementations are subclasses of [`AuthBase`](../../api/#requests.auth.AuthBase "requests.auth.AuthBase"), and are easy to define. Requests provides two common authentication scheme implementations in `requests.auth`: [`HTTPBasicAuth`](../../api/#requests.auth.HTTPBasicAuth "requests.auth.HTTPBasicAuth") and [`HTTPDigestAuth`](../../api/#requests.auth.HTTPDigestAuth "requests.auth.HTTPDigestAuth"). Let’s pretend that we have a web service that will only respond if the `X-Pizza` header is set to a password value. Unlikely, but just go with it. from requests.auth import AuthBase class PizzaAuth(AuthBase): """Attaches HTTP Pizza Authentication to the given Request object.""" def __init__(self, username): # setup any auth-related data here self.username = username def __call__(self, r): # modify and return the request r.headers['X-Pizza'] = self.username return r Then, we can make a request using our Pizza Auth: >>> requests.get('http://pizzabin.org/admin', auth=PizzaAuth('kenneth')) ## Streaming Requests¶ With [`Response.iter_lines()`](../../api/#requests.Response.iter_lines "requests.Response.iter_lines") you can easily iterate over streaming APIs such as the [Twitter Streaming API](https://dev.twitter.com/streaming/overview). Simply set `stream` to `True` and iterate over the response with [`iter_lines`](../../api/#requests.Response.iter_lines "requests.Response.iter_lines"): import json import requests r = requests.get('https://httpbin.org/stream/20', stream=True) for line in r.iter_lines(): # filter out keep-alive new lines if line: decoded_line = line.decode('utf-8') print(json.loads(decoded_line)) When using decode_unicode=True with [`Response.iter_lines()`](../../api/#requests.Response.iter_lines "requests.Response.iter_lines") or [`Response.iter_content()`](../../api/#requests.Response.iter_content "requests.Response.iter_content"), you’ll want to provide a fallback encoding in the event the server doesn’t provide one: r = requests.get('https://httpbin.org/stream/20', stream=True) if r.encoding is None: r.encoding = 'utf-8' for line in r.iter_lines(decode_unicode=True): if line: print(json.loads(line)) Warning [`iter_lines`](../../api/#requests.Response.iter_lines "requests.Response.iter_lines") is not reentrant safe. Calling this method multiple times causes some of the received data being lost. In case you need to call it from multiple places, use the resulting iterator object instead: lines = r.iter_lines() # Save the first line for later or just skip it first_line = next(lines) for line in lines: print(line) ## Proxies¶ If you need to use a proxy, you can configure individual requests with the `proxies` argument to any request method: import requests proxies = { 'http': 'http://10.10.1.10:3128', 'https': 'http://10.10.1.10:1080', } requests.get('http://example.org', proxies=proxies) Alternatively you can configure it once for an entire [`Session`](../../api/#requests.Session "requests.Session"): import requests proxies = { 'http': 'http://10.10.1.10:3128', 'https': 'http://10.10.1.10:1080', } session = requests.Session() session.proxies.update(proxies) session.get('http://example.org') Warning Setting `session.proxies` may behave differently than expected. Values provided will be overwritten by environmental proxies (those returned by [urllib.request.getproxies](https://docs.python.org/3/library/urllib.request.html#urllib.request.getproxies)). 
To ensure the use of proxies in the presence of environmental proxies, explicitly specify the `proxies` argument on all individual requests as initially explained above. See [#2018](https://github.com/psf/requests/issues/2018) for details. When the proxies configuration is not overridden per request as shown above, Requests relies on the proxy configuration defined by standard environment variables `http_proxy`, `https_proxy`, `no_proxy`, and `all_proxy`. Uppercase variants of these variables are also supported. You can therefore set them to configure Requests (only set the ones relevant to your needs): $ export HTTP_PROXY="http://10.10.1.10:3128" $ export HTTPS_PROXY="http://10.10.1.10:1080" $ export ALL_PROXY="socks5://10.10.1.10:3434" $ python >>> import requests >>> requests.get('http://example.org') To use HTTP Basic Auth with your proxy, use the http://user:password@host/ syntax in any of the above configuration entries: $ export HTTPS_PROXY="http://user:pass@10.10.1.10:1080" $ python >>> proxies = {'http': 'http://user:pass@10.10.1.10:3128/'} Warning Storing sensitive username and password information in an environment variable or a version-controlled file is a security risk and is highly discouraged. To give a proxy for a specific scheme and host, use the scheme://hostname form for the key. This will match for any request to the given scheme and exact hostname. proxies = {'http://10.20.1.128': 'http://10.10.1.10:5323'} Note that proxy URLs must include the scheme. Finally, note that using a proxy for https connections typically requires your local machine to trust the proxy’s root certificate. By default the list of certificates trusted by Requests can be found with: from requests.utils import DEFAULT_CA_BUNDLE_PATH print(DEFAULT_CA_BUNDLE_PATH) You override this default certificate bundle by setting the `REQUESTS_CA_BUNDLE` (or `CURL_CA_BUNDLE`) environment variable to another file path: $ export REQUESTS_CA_BUNDLE="/usr/local/myproxy_info/cacert.pem" $ export https_proxy="http://10.10.1.10:1080" $ python >>> import requests >>> requests.get('https://example.org') ### SOCKS¶ New in version 2.10.0. In addition to basic HTTP proxies, Requests also supports proxies using the SOCKS protocol. This is an optional feature that requires that additional third-party libraries be installed before use. You can get the dependencies for this feature from `pip`: $ python -m pip install 'requests[socks]' Once you’ve installed those dependencies, using a SOCKS proxy is just as easy as using a HTTP one: proxies = { 'http': 'socks5://user:pass@host:port', 'https': 'socks5://user:pass@host:port' } Using the scheme `socks5` causes the DNS resolution to happen on the client, rather than on the proxy server. This is in line with curl, which uses the scheme to decide whether to do the DNS resolution on the client or proxy. If you want to resolve the domains on the proxy server, use `socks5h` as the scheme. ## Compliance¶ Requests is intended to be compliant with all relevant specifications and RFCs where that compliance will not cause difficulties for users. This attention to the specification can lead to some behaviour that may seem unusual to those not familiar with the relevant specification. ### Encodings¶ When you receive a response, Requests makes a guess at the encoding to use for decoding the response when you access the [`Response.text`](../../api/#requests.Response.text "requests.Response.text") attribute. 
Requests will first check for an encoding in the HTTP header, and if none is present, will use [charset_normalizer](https://pypi.org/project/charset_normalizer/) or [chardet](https://github.com/chardet/chardet) to attempt to guess the encoding. If `chardet` is installed, `requests` uses it, however for python3 `chardet` is no longer a mandatory dependency. The `chardet` library is an LGPL-licenced dependency and some users of requests cannot depend on mandatory LGPL-licensed dependencies. When you install `requests` without specifying `[use_chardet_on_py3]` extra, and `chardet` is not already installed, `requests` uses `charset-normalizer` (MIT-licensed) to guess the encoding. The only time Requests will not guess the encoding is if no explicit charset is present in the HTTP headers **and** the `Content-Type` header contains `text`. In this situation, [RFC 2616](https://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.7.1) specifies that the default charset must be `ISO-8859-1`. Requests follows the specification in this case. If you require a different encoding, you can manually set the [`Response.encoding`](../../api/#requests.Response.encoding "requests.Response.encoding") property, or use the raw [`Response.content`](../../api/#requests.Response.content "requests.Response.content"). ## HTTP Verbs¶ Requests provides access to almost the full range of HTTP verbs: GET, OPTIONS, HEAD, POST, PUT, PATCH and DELETE. The following provides detailed examples of using these various verbs in Requests, using the GitHub API. We will begin with the verb most commonly used: GET. HTTP GET is an idempotent method that returns a resource from a given URL. As a result, it is the verb you ought to use when attempting to retrieve data from a web location. An example usage would be attempting to get information about a specific commit from GitHub. Suppose we wanted commit `a050faf` on Requests. We would get it like so: >>> import requests >>> r = requests.get('https://api.github.com/repos/psf/requests/git/commits/a050faf084662f3a352dd1a941f2c7c9f886d4ad') We should confirm that GitHub responded correctly. If it has, we want to work out what type of content it is. Do this like so: >>> if r.status_code == requests.codes.ok: ... print(r.headers['content-type']) ... application/json; charset=utf-8 So, GitHub returns JSON. That’s great, we can use the [`r.json`](../../api/#requests.Response.json "requests.Response.json") method to parse it into Python objects. >>> commit_data = r.json() >>> print(commit_data.keys()) ['committer', 'author', 'url', 'tree', 'sha', 'parents', 'message'] >>> print(commit_data['committer']) {'date': '2012-05-10T11:10:50-07:00', 'email': 'me@kennethreitz.com', 'name': 'Kenneth Reitz'} >>> print(commit_data['message']) makin' history So far, so simple. Well, let’s investigate the GitHub API a little bit. Now, we could look at the documentation, but we might have a little more fun if we use Requests instead. We can take advantage of the Requests OPTIONS verb to see what kinds of HTTP methods are supported on the url we just used. >>> verbs = requests.options(r.url) >>> verbs.status_code 500 Uh, what? That’s unhelpful! Turns out GitHub, like many API providers, don’t actually implement the OPTIONS method. This is an annoying oversight, but it’s OK, we can just use the boring documentation. If GitHub had correctly implemented OPTIONS, however, they should return the allowed methods in the headers, e.g. 
>>> verbs = requests.options('http://a-good-website.com/api/cats') >>> print(verbs.headers['allow']) GET,HEAD,POST,OPTIONS Turning to the documentation, we see that the only other method allowed for commits is POST, which creates a new commit. As we’re using the Requests repo, we should probably avoid making ham-handed POSTS to it. Instead, let’s play with the Issues feature of GitHub. This documentation was added in response to [Issue #482](https://github.com/psf/requests/issues/482). Given that this issue already exists, we will use it as an example. Let’s start by getting it. >>> r = requests.get('https://api.github.com/repos/psf/requests/issues/482') >>> r.status_code 200 >>> issue = json.loads(r.text) >>> print(issue['title']) Feature any http verb in docs >>> print(issue['comments']) 3 Cool, we have three comments. Let’s take a look at the last of them. >>> r = requests.get(r.url + '/comments') >>> r.status_code 200 >>> comments = r.json() >>> print(comments[0].keys()) ['body', 'url', 'created_at', 'updated_at', 'user', 'id'] >>> print(comments[2]['body']) Probably in the "advanced" section Well, that seems like a silly place. Let’s post a comment telling the poster that he’s silly. Who is the poster, anyway? >>> print(comments[2]['user']['login']) kennethreitz OK, so let’s tell this Kenneth guy that we think this example should go in the quickstart guide instead. According to the GitHub API doc, the way to do this is to POST to the thread. Let’s do it. >>> body = json.dumps({u"body": u"Sounds great! I'll get right on it!"}) >>> url = u"https://api.github.com/repos/psf/requests/issues/482/comments" >>> r = requests.post(url=url, data=body) >>> r.status_code 404 Huh, that’s weird. We probably need to authenticate. That’ll be a pain, right? Wrong. Requests makes it easy to use many forms of authentication, including the very common Basic Auth. >>> from requests.auth import HTTPBasicAuth >>> auth = HTTPBasicAuth('fake@example.com', 'not_a_real_password') >>> r = requests.post(url=url, data=body, auth=auth) >>> r.status_code 201 >>> content = r.json() >>> print(content['body']) Sounds great! I'll get right on it. Brilliant. Oh, wait, no! I meant to add that it would take me a while, because I had to go feed my cat. If only I could edit this comment! Happily, GitHub allows us to use another HTTP verb, PATCH, to edit this comment. Let’s do that. >>> print(content[u"id"]) 5804413 >>> body = json.dumps({u"body": u"Sounds great! I'll get right on it once I feed my cat."}) >>> url = u"https://api.github.com/repos/psf/requests/issues/comments/5804413" >>> r = requests.patch(url=url, data=body, auth=auth) >>> r.status_code 200 Excellent. Now, just to torture this Kenneth guy, I’ve decided to let him sweat and not tell him that I’m working on this. That means I want to delete this comment. GitHub lets us delete comments using the incredibly aptly named DELETE method. Let’s get rid of it. >>> r = requests.delete(url=url, auth=auth) >>> r.status_code 204 >>> r.headers['status'] '204 No Content' Excellent. All gone. The last thing I want to know is how much of my ratelimit I’ve used. Let’s find out. GitHub sends that information in the headers, so rather than download the whole page I’ll send a HEAD request to get the headers. >>> r = requests.head(url=url, auth=auth) >>> print(r.headers) ... 'x-ratelimit-remaining': '4995' 'x-ratelimit-limit': '5000' ... Excellent. Time to write a Python program that abuses the GitHub API in all kinds of exciting ways, 4995 more times. 
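Incidentally, the comment-posting calls above can be written a little more compactly with the `json` parameter, which serializes the body and sets the `Content-Type: application/json` header for you. A hedged sketch, assuming you have a personal access token (GitHub no longer accepts account passwords for the API):

    import requests

    url = 'https://api.github.com/repos/psf/requests/issues/482/comments'
    headers = {'Authorization': 'token <your-token>'}  # placeholder token

    # No json.dumps() needed; requests serializes the dict for you.
    r = requests.post(url, json={'body': "Sounds great! I'll get right on it!"}, headers=headers)
    print(r.status_code)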
## Custom Verbs¶

From time to time you may be working with a server that, for whatever reason, allows use or even requires use of HTTP verbs not covered above. One example of this would be the MKCOL method some WEBDAV servers use. Do not fret, these can still be used with Requests. These make use of the built-in `.request` method. For example:

    >>> r = requests.request('MKCOL', url, data=data)
    >>> r.status_code
    200 # Assuming your call was correct

Utilising this, you can make use of any method verb that your server allows.

## Link Headers¶

Many HTTP APIs feature Link headers. They make APIs more self describing and discoverable. GitHub uses these for [pagination](https://docs.github.com/en/rest/guides/using-pagination-in-the-rest-api) in their API, for example:

    >>> url = 'https://api.github.com/users/kennethreitz/repos?page=1&per_page=10'
    >>> r = requests.head(url=url)
    >>> r.headers['link']
    '<https://api.github.com/users/kennethreitz/repos?page=2&per_page=10>; rel="next", <https://api.github.com/users/kennethreitz/repos?page=7&per_page=10>; rel="last"'

Requests will automatically parse these link headers and make them easily consumable:

    >>> r.links["next"]
    {'url': 'https://api.github.com/users/kennethreitz/repos?page=2&per_page=10', 'rel': 'next'}
    >>> r.links["last"]
    {'url': 'https://api.github.com/users/kennethreitz/repos?page=7&per_page=10', 'rel': 'last'}

## Transport Adapters¶

As of v1.0.0, Requests has moved to a modular internal design using Transport Adapters. These objects provide a mechanism to define interaction methods for an HTTP service. In particular, they allow you to apply per-service configuration.

Requests ships with a single Transport Adapter, the [`HTTPAdapter`](../../api/#requests.adapters.HTTPAdapter "requests.adapters.HTTPAdapter"). This adapter provides the default Requests interaction with HTTP and HTTPS using the powerful [urllib3](https://github.com/urllib3/urllib3) library. Whenever a Requests [`Session`](../../api/#requests.Session "requests.Session") is initialized, one of these is attached to the [`Session`](../../api/#requests.Session "requests.Session") object for HTTP, and one for HTTPS.

Requests enables users to create and use their own Transport Adapters that provide specific functionality. Once created, a Transport Adapter can be mounted to a Session object, along with an indication of which web services it should apply to.

    >>> s = requests.Session()
    >>> s.mount('https://github.com/', MyAdapter())

The mount call registers a specific instance of a Transport Adapter to a prefix. Once mounted, any HTTP request made using that session whose URL starts with the given prefix will use the given Transport Adapter.

Note

The adapter will be chosen based on a longest prefix match. Be mindful prefixes such as `http://localhost` will also match `http://localhost.other.com` or `http://localhost@other.com`. It's recommended to terminate full hostnames with a `/`.

Many of the details of implementing a Transport Adapter are beyond the scope of this documentation, but take a look at the next example for a simple SSL use-case. For more than that, you might look at subclassing the [`BaseAdapter`](../../api/#requests.adapters.BaseAdapter "requests.adapters.BaseAdapter").

### Example: Specific SSL Version¶

The Requests team has made a specific choice to use whatever SSL version is default in the underlying library ([urllib3](https://github.com/urllib3/urllib3)). Normally this is fine, but from time to time, you might find yourself needing to connect to a service-endpoint that uses a version that isn't compatible with the default.
You can use Transport Adapters for this by taking most of the existing implementation of HTTPAdapter, and adding a parameter _ssl_version_ that gets passed-through to urllib3. We’ll make a Transport Adapter that instructs the library to use SSLv3: import ssl from urllib3.poolmanager import PoolManager from requests.adapters import HTTPAdapter class Ssl3HttpAdapter(HTTPAdapter): """"Transport adapter" that allows us to use SSLv3.""" def init_poolmanager(self, connections, maxsize, block=False): self.poolmanager = PoolManager( num_pools=connections, maxsize=maxsize, block=block, ssl_version=ssl.PROTOCOL_SSLv3) ### Example: Automatic Retries¶ By default, Requests does not retry failed connections. However, it is possible to implement automatic retries with a powerful array of features, including backoff, within a Requests [`Session`](../../api/#requests.Session "requests.Session") using the [urllib3.util.Retry](https://urllib3.readthedocs.io/en/stable/reference/urllib3.util.html#urllib3.util.Retry) class: from urllib3.util import Retry from requests import Session from requests.adapters import HTTPAdapter s = Session() retries = Retry( total=3, backoff_factor=0.1, status_forcelist=[502, 503, 504], allowed_methods={'POST'}, ) s.mount('https://', HTTPAdapter(max_retries=retries)) ## Blocking Or Non-Blocking?¶ With the default Transport Adapter in place, Requests does not provide any kind of non-blocking IO. The [`Response.content`](../../api/#requests.Response.content "requests.Response.content") property will block until the entire response has been downloaded. If you require more granularity, the streaming features of the library (see Streaming Requests) allow you to retrieve smaller quantities of the response at a time. However, these calls will still block. If you are concerned about the use of blocking IO, there are lots of projects out there that combine Requests with one of Python’s asynchronicity frameworks. Some excellent examples are [requests- threads](https://github.com/requests/requests-threads), [grequests](https://github.com/spyoungtech/grequests), [requests- futures](https://github.com/ross/requests-futures), and [httpx](https://github.com/encode/httpx). ## Header Ordering¶ In unusual circumstances you may want to provide headers in an ordered manner. If you pass an `OrderedDict` to the `headers` keyword argument, that will provide the headers with an ordering. _However_ , the ordering of the default headers used by Requests will be preferred, which means that if you override default headers in the `headers` keyword argument, they may appear out of order compared to other headers in that keyword argument. If this is problematic, users should consider setting the default headers on a [`Session`](../../api/#requests.Session "requests.Session") object, by setting [`Session.headers`](../../api/#requests.Session.headers "requests.Session.headers") to a custom `OrderedDict`. That ordering will always be preferred. ## Timeouts¶ Most requests to external servers should have a timeout attached, in case the server is not responding in a timely manner. By default, requests do not time out unless a timeout value is set explicitly. Without a timeout, your code may hang for minutes or more. The **connect** timeout is the number of seconds Requests will wait for your client to establish a connection to a remote machine (corresponding to the [connect()](https://linux.die.net/man/2/connect)) call on the socket. 
It's a good practice to set connect timeouts to slightly larger than a multiple of 3, which is the default [TCP packet retransmission window](https://datatracker.ietf.org/doc/html/rfc2988).

Once your client has connected to the server and sent the HTTP request, the **read** timeout is the number of seconds the client will wait for the server to send a response. (Specifically, it's the number of seconds that the client will wait _between_ bytes sent from the server. In 99.9% of cases, this is the time before the server sends the first byte).

If you specify a single value for the timeout, like this:

    r = requests.get('https://github.com', timeout=5)

The timeout value will be applied to both the `connect` and the `read` timeouts. Specify a tuple if you would like to set the values separately:

    r = requests.get('https://github.com', timeout=(3.05, 27))

If the remote server is very slow, you can tell Requests to wait forever for a response, by passing None as a timeout value and then retrieving a cup of coffee.

    r = requests.get('https://github.com', timeout=None)

Note

The connect timeout applies to each connection attempt to an IP address. If multiple addresses exist for a domain name, the underlying `urllib3` will try each address sequentially until one successfully connects. This may lead to an effective total connection timeout _multiple_ times longer than the specified time, e.g. an unresponsive server having both IPv4 and IPv6 addresses will have its perceived timeout _doubled_, so take that into account when setting the connection timeout.

Note

Neither the connect nor read timeouts are [wall clock](https://wiki.php.net/rfc/max_execution_wall_time). This means that if you start a request, and look at the time, and then look at the time when the request finishes or times out, the real-world time may be greater than what you specified.
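In practice you will usually pair a timeout with exception handling; catching the umbrella `Timeout` exception covers both `ConnectTimeout` and `ReadTimeout`. A short sketch:

    import requests
    from requests.exceptions import Timeout

    try:
        # (connect timeout, read timeout) in seconds
        r = requests.get('https://github.com', timeout=(3.05, 27))
        print(r.status_code)
    except Timeout:
        print('The request timed out')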
---

# Requests Documentation
# Source: https://requests.readthedocs.io/en/latest/user/authentication/
# Path: user/authentication/

# Authentication¶

This document discusses using various kinds of authentication with Requests. Many web services require authentication, and there are many different types. Below, we outline various forms of authentication available in Requests, from the simple to the complex.

## Basic Authentication¶

Many web services that require authentication accept HTTP Basic Auth. This is the simplest kind, and Requests supports it straight out of the box.

Making requests with HTTP Basic Auth is very simple:

    >>> from requests.auth import HTTPBasicAuth
    >>> basic = HTTPBasicAuth('user', 'pass')
    >>> requests.get('https://httpbin.org/basic-auth/user/pass', auth=basic)

In fact, HTTP Basic Auth is so common that Requests provides a handy shorthand for using it:

    >>> requests.get('https://httpbin.org/basic-auth/user/pass', auth=('user', 'pass'))

Providing the credentials in a tuple like this is exactly the same as the `HTTPBasicAuth` example above.

### netrc Authentication¶

If no authentication method is given with the `auth` argument, Requests will attempt to get the authentication credentials for the URL's hostname from the user's netrc file. The netrc file overrides raw HTTP authentication headers set with `headers=`.

If credentials for the hostname are found, the request is sent with HTTP Basic Auth.

Requests will search for the netrc file at `~/.netrc`, `~/_netrc`, or at the path specified by the `NETRC` environment variable. `~` denotes the user's home directory, which is `$HOME` on Unix-based systems and `%USERPROFILE%` on Windows.

Usage of the netrc file can be disabled by setting `trust_env` to `False` on the Requests session:

    >>> s = requests.Session()
    >>> s.trust_env = False
    >>> s.get('https://httpbin.org/basic-auth/user/pass')

## Digest Authentication¶

Another very popular form of HTTP Authentication is Digest Authentication, and Requests supports this out of the box as well:

    >>> from requests.auth import HTTPDigestAuth
    >>> url = 'https://httpbin.org/digest-auth/auth/user/pass'
    >>> requests.get(url, auth=HTTPDigestAuth('user', 'pass'))

## OAuth 1 Authentication¶

A common form of authentication for several web APIs is OAuth. The `requests-oauthlib` library allows Requests users to easily make OAuth 1 authenticated requests:

    >>> import requests
    >>> from requests_oauthlib import OAuth1
    >>> url = 'https://api.twitter.com/1.1/account/verify_credentials.json'
    >>> auth = OAuth1('YOUR_APP_KEY', 'YOUR_APP_SECRET',
    ...               'USER_OAUTH_TOKEN', 'USER_OAUTH_TOKEN_SECRET')
    >>> requests.get(url, auth=auth)

For more information on how the OAuth flow works, please see the official [OAuth](https://oauth.net/) website. For examples and documentation on requests-oauthlib, please see the [requests_oauthlib](https://github.com/requests/requests-oauthlib) repository on GitHub.

## OAuth 2 and OpenID Connect Authentication¶

The `requests-oauthlib` library also handles OAuth 2, the authentication mechanism underpinning OpenID Connect.
See the [requests-oauthlib OAuth2 documentation](https://requests-oauthlib.readthedocs.io/en/latest/oauth2_workflow.html) for details of the various OAuth 2 credential management flows:

* [Web Application Flow](https://requests-oauthlib.readthedocs.io/en/latest/oauth2_workflow.html#web-application-flow)
* [Mobile Application Flow](https://requests-oauthlib.readthedocs.io/en/latest/oauth2_workflow.html#mobile-application-flow)
* [Legacy Application Flow](https://requests-oauthlib.readthedocs.io/en/latest/oauth2_workflow.html#legacy-application-flow)
* [Backend Application Flow](https://requests-oauthlib.readthedocs.io/en/latest/oauth2_workflow.html#backend-application-flow)

## Other Authentication¶

Requests is designed to allow other forms of authentication to be easily and quickly plugged in. Members of the open-source community frequently write authentication handlers for more complicated or less commonly-used forms of authentication. Some of the best have been brought together under the [Requests organization](https://github.com/requests), including:

* [Kerberos](https://github.com/requests/requests-kerberos)
* [NTLM](https://github.com/requests/requests-ntlm)

If you want to use any of these forms of authentication, go straight to their GitHub page and follow the instructions.

## New Forms of Authentication¶

If you can’t find a good implementation of the form of authentication you want, you can implement it yourself. Requests makes it easy to add your own forms of authentication. To do so, subclass [`AuthBase`](../../api/#requests.auth.AuthBase "requests.auth.AuthBase") and implement the `__call__()` method:

    >>> import requests
    >>> class MyAuth(requests.auth.AuthBase):
    ...     def __call__(self, r):
    ...         # Implement my authentication
    ...         return r
    ...
    >>> url = 'https://httpbin.org/get'
    >>> requests.get(url, auth=MyAuth())

When an authentication handler is attached to a request, it is called during request setup. The `__call__` method must therefore do whatever is required to make the authentication work. Some forms of authentication will additionally add hooks to provide further functionality.

Further examples can be found under the [Requests organization](https://github.com/requests) and in the `auth.py` file.
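As a concrete, hypothetical illustration of the pattern above, here is a token-based handler that injects an `Authorization` header into the prepared request. The "Bearer" scheme and the token value are assumptions made for the example, not something Requests itself defines.

```python
import requests
from requests.auth import AuthBase

class TokenAuth(AuthBase):
    """Hypothetical handler: attach a bearer-style token to every request."""

    def __init__(self, token):
        self.token = token

    def __call__(self, r):
        # r is the prepared request; modifying its headers here is how custom
        # auth typically does its work before the request is sent.
        r.headers['Authorization'] = 'Bearer ' + self.token
        return r

# Usage: the handler is passed via the usual auth= parameter.
response = requests.get('https://httpbin.org/get', auth=TokenAuth('example-token'))
print(response.json()['headers'].get('Authorization'))
```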
---

# Requests Documentation
# Source: https://requests.readthedocs.io/en/latest/user/install/
# Path: user/install/

# Installation of Requests¶

This part of the documentation covers the installation of Requests. The first step to using any software package is getting it properly installed.

## $ python -m pip install requests¶

To install Requests, simply run this command in your terminal of choice:

    $ python -m pip install requests

## Get the Source Code¶

Requests is actively developed on GitHub, where the code is [always available](https://github.com/psf/requests).

You can either clone the public repository:

    $ git clone https://github.com/psf/requests.git

Or, download the [tarball](https://github.com/psf/requests/tarball/main):

    $ curl -OL https://github.com/psf/requests/tarball/main
    # optionally, zipball is also available (for Windows users).

Once you have a copy of the source, you can embed it in your own Python package, or install it into your site-packages easily:

    $ cd requests
    $ python -m pip install .

---

# Requests Documentation
# Source: https://requests.readthedocs.io/en/latest/user/quickstart/
# Path: user/quickstart/

# Quickstart¶

Eager to get started? This page gives a good introduction to getting started with Requests. First, make sure that:

* Requests is [installed](../install/#install)
* Requests is [up-to-date](../../community/updates/#updates)

Let’s get started with some simple examples.

## Make a Request¶

Making a request with Requests is very simple. Begin by importing the Requests module:

    >>> import requests

Now, let’s try to get a webpage. For this example, let’s get GitHub’s public timeline:

    >>> r = requests.get('https://api.github.com/events')

Now, we have a [`Response`](../../api/#requests.Response "requests.Response") object called `r`. We can get all the information we need from this object.

Requests’ simple API means that all forms of HTTP request are as obvious. For example, this is how you make an HTTP POST request:

    >>> r = requests.post('https://httpbin.org/post', data={'key': 'value'})

Nice, right? What about the other HTTP request types: PUT, DELETE, HEAD and OPTIONS?
These are all just as simple: >>> r = requests.put('https://httpbin.org/put', data={'key': 'value'}) >>> r = requests.delete('https://httpbin.org/delete') >>> r = requests.head('https://httpbin.org/get') >>> r = requests.options('https://httpbin.org/get') That’s all well and good, but it’s also only the start of what Requests can do. ## Passing Parameters In URLs¶ You often want to send some sort of data in the URL’s query string. If you were constructing the URL by hand, this data would be given as key/value pairs in the URL after a question mark, e.g. `httpbin.org/get?key=val`. Requests allows you to provide these arguments as a dictionary of strings, using the `params` keyword argument. As an example, if you wanted to pass `key1=value1` and `key2=value2` to `httpbin.org/get`, you would use the following code: >>> payload = {'key1': 'value1', 'key2': 'value2'} >>> r = requests.get('https://httpbin.org/get', params=payload) You can see that the URL has been correctly encoded by printing the URL: >>> print(r.url) https://httpbin.org/get?key2=value2&key1=value1 Note that any dictionary key whose value is `None` will not be added to the URL’s query string. You can also pass a list of items as a value: >>> payload = {'key1': 'value1', 'key2': ['value2', 'value3']} >>> r = requests.get('https://httpbin.org/get', params=payload) >>> print(r.url) https://httpbin.org/get?key1=value1&key2=value2&key2=value3 ## Response Content¶ We can read the content of the server’s response. Consider the GitHub timeline again: >>> import requests >>> r = requests.get('https://api.github.com/events') >>> r.text '[{"repository":{"open_issues":0,"url":"https://github.com/... Requests will automatically decode content from the server. Most unicode charsets are seamlessly decoded. When you make a request, Requests makes educated guesses about the encoding of the response based on the HTTP headers. The text encoding guessed by Requests is used when you access `r.text`. You can find out what encoding Requests is using, and change it, using the `r.encoding` property: >>> r.encoding 'utf-8' >>> r.encoding = 'ISO-8859-1' If you change the encoding, Requests will use the new value of `r.encoding` whenever you call `r.text`. You might want to do this in any situation where you can apply special logic to work out what the encoding of the content will be. For example, HTML and XML have the ability to specify their encoding in their body. In situations like this, you should use `r.content` to find the encoding, and then set `r.encoding`. This will let you use `r.text` with the correct encoding. Requests will also use custom encodings in the event that you need them. If you have created your own encoding and registered it with the `codecs` module, you can simply use the codec name as the value of `r.encoding` and Requests will handle the decoding for you. ## Binary Response Content¶ You can also access the response body as bytes, for non-text requests: >>> r.content b'[{"repository":{"open_issues":0,"url":"https://github.com/... The `gzip` and `deflate` transfer-encodings are automatically decoded for you. The `br` transfer-encoding is automatically decoded for you if a Brotli library like [brotli](https://pypi.org/project/brotli) or [brotlicffi](https://pypi.org/project/brotlicffi) is installed. 
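Before the image example that follows, here is a minimal sketch of the encoding workflow described above: inspect the raw bytes in `r.content` for a declared charset, set `r.encoding`, and only then read `r.text`. The URL and the deliberately naive charset regex are illustrative assumptions, not part of Requests.

```python
import re
import requests

# Fetch an HTML page; the body may declare its own encoding.
r = requests.get('https://httpbin.org/html')

# Look inside r.content (bytes) for a charset declaration. A real application
# would use an HTML parser; this regex is only a sketch.
match = re.search(rb'charset=["\']?([A-Za-z0-9_-]+)', r.content)
if match:
    r.encoding = match.group(1).decode('ascii')

# r.text is now decoded with the encoding found in the document itself,
# falling back to Requests' own guess if nothing was declared.
print(r.encoding)
print(r.text[:80])
```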
For example, to create an image from binary data returned by a request, you can use the following code: >>> from PIL import Image >>> from io import BytesIO >>> i = Image.open(BytesIO(r.content)) ## JSON Response Content¶ There’s also a builtin JSON decoder, in case you’re dealing with JSON data: >>> import requests >>> r = requests.get('https://api.github.com/events') >>> r.json() [{'repository': {'open_issues': 0, 'url': 'https://github.com/... In case the JSON decoding fails, `r.json()` raises an exception. For example, if the response gets a 204 (No Content), or if the response contains invalid JSON, attempting `r.json()` raises `requests.exceptions.JSONDecodeError`. This wrapper exception provides interoperability for multiple exceptions that may be thrown by different python versions and json serialization libraries. It should be noted that the success of the call to `r.json()` does **not** indicate the success of the response. Some servers may return a JSON object in a failed response (e.g. error details with HTTP 500). Such JSON will be decoded and returned. To check that a request is successful, use `r.raise_for_status()` or check `r.status_code` is what you expect. ## Raw Response Content¶ In the rare case that you’d like to get the raw socket response from the server, you can access `r.raw`. If you want to do this, make sure you set `stream=True` in your initial request. Once you do, you can do this: >>> r = requests.get('https://api.github.com/events', stream=True) >>> r.raw >>> r.raw.read(10) b'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x03' In general, however, you should use a pattern like this to save what is being streamed to a file: with open(filename, 'wb') as fd: for chunk in r.iter_content(chunk_size=128): fd.write(chunk) Using `Response.iter_content` will handle a lot of what you would otherwise have to handle when using `Response.raw` directly. When streaming a download, the above is the preferred and recommended way to retrieve the content. Note that `chunk_size` can be freely adjusted to a number that may better fit your use cases. Note An important note about using `Response.iter_content` versus `Response.raw`. `Response.iter_content` will automatically decode the `gzip` and `deflate` transfer-encodings. `Response.raw` is a raw stream of bytes – it does not transform the response content. If you really need access to the bytes as they were returned, use `Response.raw`. ## Custom Headers¶ If you’d like to add HTTP headers to a request, simply pass in a `dict` to the `headers` parameter. For example, we didn’t specify our user-agent in the previous example: >>> url = 'https://api.github.com/some/endpoint' >>> headers = {'user-agent': 'my-app/0.0.1'} >>> r = requests.get(url, headers=headers) Note: Custom headers are given less precedence than more specific sources of information. For instance: * Authorization headers set with headers= will be overridden if credentials are specified in `.netrc`, which in turn will be overridden by the `auth=` parameter. Requests will search for the netrc file at ~/.netrc, ~/_netrc, or at the path specified by the NETRC environment variable. Check details in [netrc authentication](../authentication/#authentication). * Authorization headers will be removed if you get redirected off-host. * Proxy-Authorization headers will be overridden by proxy credentials provided in the URL. * Content-Length headers will be overridden when we can determine the length of the content. 
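To make the precedence notes above concrete, here is a small sketch of custom headers in practice, assuming an httpbin URL for illustration: headers set as session defaults are merged with per-request headers, and the per-request value wins when both specify the same name.

```python
import requests

# Session-level defaults apply to every request made through the session.
session = requests.Session()
session.headers.update({'user-agent': 'my-app/0.0.1'})

# Per-request headers are merged in; a matching name overrides the session value.
r = session.get('https://httpbin.org/headers',
                headers={'X-Example': 'demo', 'user-agent': 'my-app/0.0.2'})

# httpbin echoes the request headers back, so we can see what was actually sent.
sent = r.json()['headers']
print(sent.get('User-Agent'))   # 'my-app/0.0.2'
print(sent.get('X-Example'))    # 'demo'
```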
Furthermore, Requests does not change its behavior at all based on which custom headers are specified. The headers are simply passed on into the final request.

Note: All header values must be a `string`, bytestring, or unicode. While permitted, it’s advised to avoid passing unicode header values.

## More complicated POST requests¶

Typically, you want to send some form-encoded data — much like an HTML form. To do this, simply pass a dictionary to the `data` argument. Your dictionary of data will automatically be form-encoded when the request is made:

    >>> payload = {'key1': 'value1', 'key2': 'value2'}
    >>> r = requests.post('https://httpbin.org/post', data=payload)
    >>> print(r.text)
    {
      ...
      "form": {
        "key2": "value2",
        "key1": "value1"
      },
      ...
    }

The `data` argument can also have multiple values for each key. This can be done by making `data` either a list of tuples or a dictionary with lists as values. This is particularly useful when the form has multiple elements that use the same key:

    >>> payload_tuples = [('key1', 'value1'), ('key1', 'value2')]
    >>> r1 = requests.post('https://httpbin.org/post', data=payload_tuples)
    >>> payload_dict = {'key1': ['value1', 'value2']}
    >>> r2 = requests.post('https://httpbin.org/post', data=payload_dict)
    >>> print(r1.text)
    {
      ...
      "form": {
        "key1": [
          "value1",
          "value2"
        ]
      },
      ...
    }
    >>> r1.text == r2.text
    True

There are times that you may want to send data that is not form-encoded. If you pass in a `string` instead of a `dict`, that data will be posted directly.

For example, the GitHub API v3 accepts JSON-Encoded POST/PATCH data:

    >>> import json
    >>> url = 'https://api.github.com/some/endpoint'
    >>> payload = {'some': 'data'}
    >>> r = requests.post(url, data=json.dumps(payload))

Please note that the above code will NOT add the `Content-Type` header (so in particular it will NOT set it to `application/json`).

If you need that header set and you don’t want to encode the `dict` yourself, you can also pass it directly using the `json` parameter (added in version 2.4.2) and it will be encoded automatically:

    >>> url = 'https://api.github.com/some/endpoint'
    >>> payload = {'some': 'data'}
    >>> r = requests.post(url, json=payload)

Note, the `json` parameter is ignored if either `data` or `files` is passed.

## POST a Multipart-Encoded File¶

Requests makes it simple to upload Multipart-encoded files:

    >>> url = 'https://httpbin.org/post'
    >>> files = {'file': open('report.xls', 'rb')}
    >>> r = requests.post(url, files=files)
    >>> r.text
    {
      ...
      "files": {
        "file": ""
      },
      ...
    }

You can set the filename, content_type and headers explicitly:

    >>> url = 'https://httpbin.org/post'
    >>> files = {'file': ('report.xls', open('report.xls', 'rb'), 'application/vnd.ms-excel', {'Expires': '0'})}
    >>> r = requests.post(url, files=files)
    >>> r.text
    {
      ...
      "files": {
        "file": ""
      },
      ...
    }

If you want, you can send strings to be received as files:

    >>> url = 'https://httpbin.org/post'
    >>> files = {'file': ('report.csv', 'some,data,to,send\nanother,row,to,send\n')}
    >>> r = requests.post(url, files=files)
    >>> r.text
    {
      ...
      "files": {
        "file": "some,data,to,send\\nanother,row,to,send\\n"
      },
      ...
    }

In the event you are posting a very large file as a `multipart/form-data` request, you may want to stream the request. By default, `requests` does not support this, but there is a separate package which does - `requests-toolbelt`. You should read [the toolbelt’s documentation](https://toolbelt.readthedocs.io) for more details about how to use it.
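As a small, self-contained sketch of the upload pattern above, the snippet below opens the file in binary mode inside a context manager so the handle is closed once the upload finishes. The filename and content type are placeholders for the example.

```python
import requests

url = 'https://httpbin.org/post'

# 'report.csv' is a placeholder; the file must exist locally for this to run.
with open('report.csv', 'rb') as fh:
    # 4-tuple: filename, file object, content type, extra per-file headers.
    files = {'file': ('report.csv', fh, 'text/csv', {'Expires': '0'})}
    r = requests.post(url, files=files)

# httpbin echoes the uploaded parts back in its JSON response.
print(r.status_code)
print(list(r.json()['files'].keys()))
```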
For sending multiple files in one request refer to the [advanced](../advanced/#advanced) section. Warning It is strongly recommended that you open files in [binary mode](https://docs.python.org/3/tutorial/inputoutput.html#tut-files "\(in Python v3.14\)"). This is because Requests may attempt to provide the `Content-Length` header for you, and if it does this value will be set to the number of _bytes_ in the file. Errors may occur if you open the file in _text mode_. ## Response Status Codes¶ We can check the response status code: >>> r = requests.get('https://httpbin.org/get') >>> r.status_code 200 Requests also comes with a built-in status code lookup object for easy reference: >>> r.status_code == requests.codes.ok True If we made a bad request (a 4XX client error or 5XX server error response), we can raise it with [`Response.raise_for_status()`](../../api/#requests.Response.raise_for_status "requests.Response.raise_for_status"): >>> bad_r = requests.get('https://httpbin.org/status/404') >>> bad_r.status_code 404 >>> bad_r.raise_for_status() Traceback (most recent call last): File "requests/models.py", line 832, in raise_for_status raise http_error requests.exceptions.HTTPError: 404 Client Error But, since our `status_code` for `r` was `200`, when we call `raise_for_status()` we get: >>> r.raise_for_status() None All is well. ## Response Headers¶ We can view the server’s response headers using a Python dictionary: >>> r.headers { 'content-encoding': 'gzip', 'transfer-encoding': 'chunked', 'connection': 'close', 'server': 'nginx/1.0.4', 'x-runtime': '148ms', 'etag': '"e1ca502697e5c9317743dc078f67693f"', 'content-type': 'application/json' } The dictionary is special, though: it’s made just for HTTP headers. According to [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.2), HTTP Header names are case-insensitive. So, we can access the headers using any capitalization we want: >>> r.headers['Content-Type'] 'application/json' >>> r.headers.get('content-type') 'application/json' It is also special in that the server could have sent the same header multiple times with different values, but requests combines them so they can be represented in the dictionary within a single mapping, as per [RFC 7230](https://tools.ietf.org/html/rfc7230#section-3.2): > A recipient MAY combine multiple header fields with the same field name into > one “field-name: field-value” pair, without changing the semantics of the > message, by appending each subsequent field value to the combined field > value in order, separated by a comma. ## Cookies¶ If a response contains some Cookies, you can quickly access them: >>> url = 'http://example.com/some/cookie/setting/url' >>> r = requests.get(url) >>> r.cookies['example_cookie_name'] 'example_cookie_value' To send your own cookies to the server, you can use the `cookies` parameter: >>> url = 'https://httpbin.org/cookies' >>> cookies = dict(cookies_are='working') >>> r = requests.get(url, cookies=cookies) >>> r.text '{"cookies": {"cookies_are": "working"}}' Cookies are returned in a [`RequestsCookieJar`](../../api/#requests.cookies.RequestsCookieJar "requests.cookies.RequestsCookieJar"), which acts like a `dict` but also offers a more complete interface, suitable for use over multiple domains or paths. 
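Putting the pieces from this part together, here is a brief sketch that raises on 4XX/5XX responses, reads a header with either capitalization, and iterates over whatever cookies the server set. The httpbin URL is only an illustration.

```python
import requests

r = requests.get('https://httpbin.org/get')
r.raise_for_status()  # no-op for 2XX; raises requests.exceptions.HTTPError otherwise

# Header lookup is case-insensitive, so either spelling works.
print(r.headers['Content-Type'])
print(r.headers.get('content-type'))

# Any cookies the server set are available in the dict-like RequestsCookieJar.
for name, value in r.cookies.items():
    print(name, value)
```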
Cookie jars can also be passed in to requests:

    >>> jar = requests.cookies.RequestsCookieJar()
    >>> jar.set('tasty_cookie', 'yum', domain='httpbin.org', path='/cookies')
    >>> jar.set('gross_cookie', 'blech', domain='httpbin.org', path='/elsewhere')
    >>> url = 'https://httpbin.org/cookies'
    >>> r = requests.get(url, cookies=jar)
    >>> r.text
    '{"cookies": {"tasty_cookie": "yum"}}'

## Redirection and History¶

By default Requests will perform location redirection for all verbs except HEAD.

We can use the `history` property of the Response object to track redirection.

The [`Response.history`](../../api/#requests.Response.history "requests.Response.history") list contains the [`Response`](../../api/#requests.Response "requests.Response") objects that were created in order to complete the request. The list is sorted from the oldest to the most recent response.

For example, GitHub redirects all HTTP requests to HTTPS:

    >>> r = requests.get('http://github.com/')
    >>> r.url
    'https://github.com/'
    >>> r.status_code
    200
    >>> r.history
    [<Response [301]>]

If you’re using GET, OPTIONS, POST, PUT, PATCH or DELETE, you can disable redirection handling with the `allow_redirects` parameter:

    >>> r = requests.get('http://github.com/', allow_redirects=False)
    >>> r.status_code
    301
    >>> r.history
    []

If you’re using HEAD, you can enable redirection as well:

    >>> r = requests.head('http://github.com/', allow_redirects=True)
    >>> r.url
    'https://github.com/'
    >>> r.history
    [<Response [301]>]

## Timeouts¶

You can tell Requests to stop waiting for a response after a given number of seconds with the `timeout` parameter. Nearly all production code should use this parameter in nearly all requests. Failure to do so can cause your program to hang indefinitely:

    >>> requests.get('https://github.com/', timeout=0.001)
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    requests.exceptions.Timeout: HTTPConnectionPool(host='github.com', port=80): Request timed out. (timeout=0.001)

Note

`timeout` is not a time limit on the entire response download; rather, an exception is raised if the server has not issued a response for `timeout` seconds (more precisely, if no bytes have been received on the underlying socket for `timeout` seconds). If no timeout is specified explicitly, requests do not time out.

## Errors and Exceptions¶

In the event of a network problem (e.g. DNS failure, refused connection, etc), Requests will raise a [`ConnectionError`](../../api/#requests.ConnectionError "requests.exceptions.ConnectionError") exception.

[`Response.raise_for_status()`](../../api/#requests.Response.raise_for_status "requests.Response.raise_for_status") will raise an [`HTTPError`](../../api/#requests.HTTPError "requests.exceptions.HTTPError") if the HTTP request returned an unsuccessful status code.

If a request times out, a [`Timeout`](../../api/#requests.Timeout "requests.exceptions.Timeout") exception is raised.

If a request exceeds the configured number of maximum redirections, a [`TooManyRedirects`](../../api/#requests.TooManyRedirects "requests.exceptions.TooManyRedirects") exception is raised.

All exceptions that Requests explicitly raises inherit from [`requests.exceptions.RequestException`](../../api/#requests.RequestException "requests.exceptions.RequestException").

* * *

Ready for more? Check out the [advanced](../advanced/#advanced) section.
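As a closing illustration of the exception hierarchy described above, here is a hedged sketch of a wrapper that turns any Requests failure into a logged `None`. The helper name, timeout values, and URL are illustrative, not part of the library.

```python
import requests

def get_or_none(url, timeout=(3.05, 10)):
    """Illustrative helper: return a Response, or None on any Requests failure."""
    try:
        r = requests.get(url, timeout=timeout)
        r.raise_for_status()  # turns 4XX/5XX responses into HTTPError
        return r
    except requests.exceptions.Timeout:
        print('timed out talking to', url)
    except requests.exceptions.TooManyRedirects:
        print('redirect loop at', url)
    except requests.exceptions.RequestException as exc:
        # Base class: also covers ConnectionError, HTTPError, and friends.
        print('request failed:', exc)
    return None

if __name__ == '__main__':
    resp = get_or_none('https://httpbin.org/status/200')
    print(resp.status_code if resp is not None else 'no response')
```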