Advanced usage of Python requests - timeouts, retries, hooks
The Python HTTP library requests is probably my favourite HTTP utility in all the languages I program in. It's simple, intuitive and ubiquitous in the Python community. Most programs that interface with HTTP use either requests or the lower-level urllib3, the third-party library that requests itself is built on.
While it's easy to be productive with requests right away thanks to its simple API, the library also offers extensibility for advanced use cases. If you're writing an API-heavy client or a web scraper, you'll probably need tolerance for network failures, helpful debugging traces and syntactic sugar.
Below is a summary of features I've found useful in requests when writing web scraping tools or programs that extensively use JSON APIs.
- Request hooks
- Setting base URLs
- Setting default timeouts
- Retry on failure
- Debugging HTTP requests
- Testing and mocking requests
- Mimicking browser behaviors
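To give a flavour of how several of these features compose, here is a minimal sketch that combines a default timeout with retry-on-failure on a single Session. The `TimeoutHTTPAdapter` name is an illustrative helper of my own, not part of requests; the retry behaviour comes from urllib3's `Retry` class, which requests accepts via `HTTPAdapter(max_retries=...)`.

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


class TimeoutHTTPAdapter(HTTPAdapter):
    """Illustrative adapter that applies a default timeout to every request."""

    def __init__(self, *args, timeout=5, **kwargs):
        self.timeout = timeout
        super().__init__(*args, **kwargs)

    def send(self, request, **kwargs):
        # requests passes timeout=None when the caller didn't set one,
        # so fall back to the adapter's default in that case.
        if kwargs.get("timeout") is None:
            kwargs["timeout"] = self.timeout
        return super().send(request, **kwargs)


# Retry up to 3 times on common transient server errors, with backoff.
retries = Retry(total=3, backoff_factor=1,
                status_forcelist=[429, 500, 502, 503, 504])
adapter = TimeoutHTTPAdapter(timeout=5, max_retries=retries)

session = requests.Session()
# Mount the adapter for both schemes so every request through the
# session gets the default timeout and retry policy.
session.mount("https://", adapter)
session.mount("http://", adapter)
```

Each of the sections below looks at pieces like these in more detail.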