Rebirth failed requests

Rebirth failed requests from past runs into a pristine state with no retries, so you can rescrape them by resurrecting the run.

There isn't an easy way on Apify to retry fully failed (or already handled) requests. This actor lets you reset those requests to a pristine unhandled state with 0 retries, so you can resurrect the run and process them again.

How it works

This actor scans all requests in a run's queue and recognizes failed requests by their retryCount and errorMessages properties. If your actor deliberately changes these two properties (outside the default crawler behavior), rebirth will not work properly.
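The detection and reset logic described above can be sketched as follows. The retryCount and errorMessages fields come from the text; the QueuedRequest shape and the handledAt field are illustrative assumptions, not the actor's actual implementation.

```typescript
// Subset of a request queue record relevant to rebirth.
// (Field names beyond retryCount/errorMessages are assumptions.)
interface QueuedRequest {
  retryCount: number;
  errorMessages?: string[];
  handledAt?: string; // set once the request has been handled
}

// A request counts as failed when the crawler has retried it
// and recorded error messages for it.
function isFailed(req: QueuedRequest): boolean {
  return req.retryCount > 0 && (req.errorMessages?.length ?? 0) > 0;
}

// "Rebirth" resets the request to a pristine, unhandled state
// so a resurrected run will pick it up again.
function rebirth(req: QueuedRequest): QueuedRequest {
  return { ...req, retryCount: 0, errorMessages: [], handledAt: undefined };
}
```

In the real actor this reset would be written back to the queue via the Apify API; the sketch only shows the per-request transformation.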

Requirements

  • Runs must use a request queue. (Request list support may be added in the future.)
  • The run must be resumable with proper state management (as after an actor migration).

How to run

A detailed input description is available on the actor's page.

  • You can provide either:
    • run IDs to scan for requests to rebirth
    • an actor or task ID combined with a date range, to find all runs in that timespan and scan them for requests to rebirth
  • After requests are rebirthed, the unhandled requests appear in the run's queue, and you can resurrect the runs to process them again
  • You can enable automatic resurrection of runs with a specified concurrency (to stay within your maximum memory limit)
  • You can override the build. Normally, a run is resurrected with the same build it originally used, but you may often want to run the newest version, such as latest
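The run-selection option above (actor or task ID plus a date range) amounts to filtering runs by their start time. A minimal sketch, assuming run summaries carry an ISO 8601 startedAt timestamp (the RunSummary shape is illustrative, not the actor's real data model):

```typescript
// Minimal run summary as returned by a run listing.
// (Shape is an assumption for illustration.)
interface RunSummary {
  id: string;
  startedAt: string; // ISO 8601 timestamp
}

// Select runs started within [from, to] so their queues
// can be scanned for failed requests.
function runsInTimespan(runs: RunSummary[], from: string, to: string): RunSummary[] {
  const fromMs = Date.parse(from);
  const toMs = Date.parse(to);
  return runs.filter((run) => {
    const t = Date.parse(run.startedAt);
    return t >= fromMs && t <= toMs;
  });
}
```

Each selected run's queue would then be scanned, and after rebirth the run can be resurrected (optionally with an overridden build).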
