Example using GitHub Gist

An example of an Apify actor whose source code is hosted in a GitHub Gist. This is useful if you want to publish public code examples accompanied by a working API that anyone can try right away.

This is an example of an Apify actor that is stored as a GitHub Gist. Gists are useful if your actor has multiple source code files but you don't want to create a full Git repository for it, or don't want to host your source files directly on Apify. All files of this Gist are provided under the Apache 2.0 license.

Whenever you edit the Gist, you'll need to rebuild the actor for the changes to take effect.

Are you missing anything? Something not clear? Please let us know at support@apify.com

actor-in-gist-example.js

Contains the source code of the actor in Node.js.
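As a rough illustration, the actor's main file might look like the following minimal sketch. It assumes the classic Apify SDK (`npm install apify`); the actual code in the Gist may differ. `Apify.main()`, `Apify.getInput()`, and `Apify.setValue()` are standard SDK calls for wrapping the actor's logic, reading its input, and storing a result.

```javascript
// Minimal sketch of an Apify actor (assumes the classic Apify SDK).
const Apify = require('apify');

Apify.main(async () => {
    // Read the actor's input from the default key-value store.
    const input = await Apify.getInput();
    console.log('Input:', input);

    // Store the result in the default key-value store under the key OUTPUT.
    await Apify.setValue('OUTPUT', {
        message: `Hello! Received input: ${JSON.stringify(input)}`,
    });
});
```

When run on the Apify platform, the input comes from the actor's run settings and the `OUTPUT` record appears in the run's key-value store.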

package.json

The file used by NPM to maintain metadata about the package, such as the list of dependencies. See the NPM docs for more details.
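A minimal sketch of what such a package.json might contain; the name, version, and dependency version below are illustrative, not the Gist's actual values:

```json
{
    "name": "actor-in-gist-example",
    "version": "0.0.1",
    "description": "Example Apify actor with source code hosted in a GitHub Gist",
    "main": "actor-in-gist-example.js",
    "dependencies": {
        "apify": "^1.0.0"
    },
    "scripts": {
        "start": "node actor-in-gist-example.js"
    },
    "license": "Apache-2.0"
}
```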

Dockerfile

Contains instructions telling Docker how to build the image for the actor. For more information, see the Dockerfile reference.
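A minimal sketch of such a Dockerfile, assuming Apify's official Node.js base image (`apify/actor-node`); the actual Gist's Dockerfile may differ:

```dockerfile
# Use Apify's official Node.js base image (an assumption for this sketch).
FROM apify/actor-node:16

# Copy package manifests and install production dependencies first,
# so Docker can cache this layer between builds.
COPY package*.json ./
RUN npm install --only=prod --no-optional

# Copy the rest of the actor's source code into the image.
COPY . ./

# Run the actor when the container starts.
CMD npm start --silent
```

Copying package.json and running `npm install` before copying the source code means dependency installation is re-run only when the dependencies change, not on every source edit.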

README.md

A Markdown file used to generate the long description of the actor that is displayed in the Apify Library.

Frequently Asked Questions

Is it legal to scrape job listings or public data?

Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.

Do I need to code to use this scraper?

No. This is a no-code tool: just enter a job title and location, then run the scraper directly from your dashboard or the Apify actor page.

What data does it extract?

It extracts job titles, companies, salaries (if available), descriptions, locations, and post dates. You can export all of it to Excel or JSON.

Can I scrape multiple pages or filter by location?

Yes, you can scrape multiple pages and refine by job title, location, keyword, or more depending on the input settings you use.

How do I get started?

Use the Try Now button on this page to open the scraper. You'll be guided to enter a search term and will get structured results. No setup needed!