For long-running scraping processes, when a path where results should be cached is provided, it would be nice to automatically recover from failures.
This may involve storing progress and picking it up again later (e.g. persisting the URLs left to scrape). A rough sketch of one possible approach is shown below.
Note: this could be handled externally and may be outside the scope of this package. E.g. a "reverse-proxy-like" module could do the high-level planning and orchestration, distributing scraping across a cluster of nodes that each run this package on a subset of URLs without needing persistence themselves.
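For reference, here is a minimal sketch of what in-process recovery could look like, independent of this package's actual API. The `progress.json` file name and the `fetch_page` callback are purely illustrative assumptions; the idea is just to checkpoint the remaining URL queue after each successful fetch so a restarted run resumes where it left off.

```python
# Minimal sketch of resumable scraping via a persisted URL queue.
# All names here (progress.json, fetch_page) are illustrative, not part of this package.
import json
import os

PROGRESS_FILE = "progress.json"  # hypothetical location for the progress snapshot


def load_progress(all_urls):
    """Return the URLs still left to scrape, resuming from a previous run if possible."""
    if os.path.exists(PROGRESS_FILE):
        with open(PROGRESS_FILE) as f:
            return json.load(f)["pending"]
    return list(all_urls)


def save_progress(pending):
    """Atomically persist the remaining URLs so a crash cannot corrupt the snapshot."""
    tmp = PROGRESS_FILE + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"pending": pending}, f)
    os.replace(tmp, PROGRESS_FILE)


def scrape_all(all_urls, fetch_page):
    """Scrape URLs one by one, checkpointing after each success."""
    pending = load_progress(all_urls)
    while pending:
        url = pending[0]
        fetch_page(url)          # user-supplied scraping/caching function
        pending = pending[1:]
        save_progress(pending)   # checkpoint: a restarted run continues from here
    if os.path.exists(PROGRESS_FILE):
        os.remove(PROGRESS_FILE)  # clean up once everything is done
```

The same snapshot format could equally be produced by an external orchestrator, which is why this might belong outside the package itself.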