Watch Database is a comprehensive repository featuring detailed watch specifications, high-quality images, movement and caliber information, and more.
This Apify Actor fetches various watch-related details, including brands, models, families, and individual watch specifications. The Actor supports both GET and POST requests, depending on the selected operation.
The Actor provides access to the following API endpoints:
| Endpoint | Description |
|---|---|
| `get-all-watch-makes` | Retrieves all available watch makes |
| `get-all-watch-models-by-makeid` | Retrieves all watch models for a given make ID |
| `get-all-watch-family-by-makeid-and-modelid` | Retrieves all watch families for a given make ID and model ID |
| `get-watches-by-makeid` | Retrieves watches for a specific make ID, with pagination |
| `get-watches-by-modelid` | Retrieves watches for a specific model ID, with pagination |
| `get-watches-by-familyid` | Retrieves watches for a specific family ID, with pagination |
| `get-watch-details-by-watchid` | Retrieves detailed information about a specific watch by its watch ID |
| `search-reference` | Searches for watches based on a reference term (POST request) |
The Actor expects an input JSON with the following structure:
1{ 2 "selectPageType": "get-all-watch-makes", 3 "watchId": "", 4 "makeId": "", 5 "modelId": "", 6 "familyId": "", 7 "page": 1, 8 "limit": 10, 9 "referenceSearchTerm": "" 10}
Only include the parameters required for the selected `selectPageType` endpoint and leave the rest blank. Populating unnecessary parameters can lead to incorrect API calls; a populated example follows.
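For instance, a minimal sketch of an input for `get-watches-by-makeid` might look like this (the `makeId` value is a placeholder, not a real ID):

```json
{
  "selectPageType": "get-watches-by-makeid",
  "watchId": "",
  "makeId": "123",
  "modelId": "",
  "familyId": "",
  "page": 1,
  "limit": 10,
  "referenceSearchTerm": ""
}
```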
If an invalid `selectPageType` is provided, the Actor will log an error message and terminate execution. If an API request fails, the error details will be logged for debugging.
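If you run the Actor programmatically, such failures surface through the run and its log. Here is a minimal sketch using the `apify-client` package; the Actor ID below is a placeholder, not the real one:

```js
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: process.env.APIFY_TOKEN });

try {
    // 'username/watch-database' is a placeholder; substitute the real Actor ID.
    const run = await client.actor('username/watch-database').call({
        selectPageType: 'get-all-watch-makes',
    });
    // Results are pushed to the run's default dataset.
    const { items } = await client.dataset(run.defaultDatasetId).listItems();
    console.dir(items, { depth: null });
} catch (error) {
    // Invalid selectPageType values or failed API requests surface here
    // and in the run log.
    console.error('Actor run failed:', error.message);
}
```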
The Actor relies on the following dependencies:

- `axios` for making HTTP requests
- `form-data` for handling POST requests with form data
- `apify` for Actor execution

Example output:

```json
[
  {
    "id": 12345,
    "brand": "Rolex",
    "model": "Submariner",
    "family": "Diver",
    "price": "$10,000"
  }
]
```
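To illustrate how `axios` and `form-data` are typically combined for a POST such as `search-reference`, here is a minimal sketch; the endpoint URL and form field name are assumptions for illustration, not the Actor's actual internals:

```js
import axios from 'axios';
import FormData from 'form-data';

// Hypothetical endpoint and field name, for illustration only.
const form = new FormData();
form.append('referenceSearchTerm', '116610LN'); // example reference term

const response = await axios.post('https://example.com/search-reference', form, {
    // form.getHeaders() supplies the multipart/form-data boundary header.
    headers: form.getHeaders(),
});
console.log(response.data);
```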
OTHER INFO
A template for scraping data from a single web page in JavaScript (Node.js). The URL of the web page is passed in via input, which is defined by the input schema. The template uses the Axios client to fetch the HTML of the page and the Cheerio library to parse the data from it. The data are then stored in a dataset where you can easily access them.
The scraped data in this template are page headings, but you can easily edit the code to scrape whatever you want from the page.
- `Actor.getInput()` gets the input, where the page URL is defined.
- `axios.get(url)` fetches the page.
- `cheerio.load(response.data)` loads the page data and enables parsing the headings.
- `$("h1, h2, h3, h4, h5, h6").each((_i, element) => {...});` parses the headings from the page; here you can edit the code to parse whatever you need from the page.
- `Actor.pushData(headings)` stores the headings in the dataset.

These steps are combined in the sketch below.
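Putting those steps together, a minimal sketch of the template's main script (in the Apify SDK v3 style) could look like this:

```js
import { Actor } from 'apify';
import axios from 'axios';
import * as cheerio from 'cheerio';

await Actor.init();

// The page URL is defined in the Actor input.
const { url } = await Actor.getInput();

// Fetch the page HTML.
const response = await axios.get(url);

// Load the HTML into Cheerio for parsing.
const $ = cheerio.load(response.data);

// Collect all headings; edit this selector to scrape whatever you need.
const headings = [];
$('h1, h2, h3, h4, h5, h6').each((_i, element) => {
    headings.push({
        level: element.tagName.toLowerCase(),
        text: $(element).text().trim(),
    });
});

// Store the headings in the default dataset.
await Actor.pushData(headings);

await Actor.exit();
```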
For complete information, see this article. In short, you will install the Apify CLI and pull the Actor to your local machine.

If you would like to develop locally, you can pull the existing Actor from the Apify Console using the Apify CLI:
Install `apify-cli`:

Using Homebrew:

```bash
brew install apify-cli
```

Using NPM:

```bash
npm -g install apify-cli
```
Pull the Actor by its unique `<ActorId>`, which is one of the following:

- the Actor's unique name (e.g. `username/actor-name`)
- the Actor ID

You can find both by clicking on the Actor title at the top of the page, which will open a modal containing both the Actor's unique name and the Actor ID.
This command will copy the Actor into the current directory on your local machine:

```bash
apify pull <ActorId>
```
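For example, if the Actor's unique name were `username/watch-database` (a placeholder), you would run:

```bash
apify pull username/watch-database
```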
To learn more about Apify and Actors, take a look at the Apify documentation: https://docs.apify.com.
**Is it legal to use this scraper?** Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.

**Do I need to know how to code?** No. This is a no-code tool: just enter a job title and location, and run the scraper directly from your dashboard or the Apify Actor page.

**What data does it extract?** It extracts job titles, companies, salaries (if available), descriptions, locations, and post dates. You can export all of it to Excel or JSON.

**Can I refine the results?** Yes, you can scrape multiple pages and refine by job title, location, keyword, or more, depending on the input settings you use.

**How do I get started?** Use the Try Now button on this page to go to the scraper. You'll be guided to input a search term and get structured results. No setup needed!