Browser Pool is a tool that leverages the Apify platform and provides you with headless browsers for building AI agents.
Why use Browser Pool?
Because it relies on well-tested Apify features, such as proxies, which help you circumvent obstacles and let you concentrate on developing your automation.
How to use Browser Pool
Go to the Standby tab and copy the Actor URL.
Replace https:// with wss://: you now have the URL for connecting your Playwright session to Apify through CDP. It looks something like this: wss://marco-gullo--browser-pool.apify.actor?token=$TOKEN
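As a small sketch of this step (the Actor URL is the one from the example below; reading the token from an APIFY_TOKEN environment variable is an assumption), the transformation can be done in code:

```javascript
// Turn the Actor's Standby HTTPS URL into a CDP WebSocket endpoint.
// Storing the Apify token in APIFY_TOKEN is an assumption for this sketch.
const standbyUrl = 'https://marco-gullo--browser-pool.apify.actor';
const wsEndpoint = standbyUrl.replace('https://', 'wss://')
  + `?token=${process.env.APIFY_TOKEN}`;
console.log(wsEndpoint);
```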
Finally, you can use the URL with Playwright. Let's say you want to generate and download the emoji of a smiling rocket on emojikitchen.dev:
import fs from 'fs';
import { chromium } from 'playwright';

console.log('Connecting to a remote browser on the Apify platform');
const wsEndpoint = 'wss://marco-gullo--browser-pool.apify.actor?token=$TOKEN&other_params...';
const browser = await chromium.connect(wsEndpoint);

console.log('Browser connection established, creating context');
const context = await browser.newContext({ viewport: { height: 1000, width: 1600 } });

console.log('Opening new page');
const page = await context.newPage();

const url = 'https://emojikitchen.dev';
const timeout = 60_000;
console.log(`Going to: ${url}. Timeout = ${timeout}ms`);
await page.goto(url, { timeout });

console.log('Selecting emojis');
await page.getByRole('img', { name: 'rocket' }).first().click();
await page.getByRole('img', { name: 'smile', exact: true }).nth(1).click();

console.log('Saving screenshot');
const screenshot = await page.getByRole('img', { name: 'rocket-smile' }).screenshot();
fs.writeFileSync('rocket-smile.png', screenshot);

console.log('Closing the browser');
await context.close();
await browser.close();
This code runs locally, and at the end you will have this nice picture on your computer:
Nevertheless, the browser runs on the Apify platform, so there is no need for you to install Chromium.
Moreover, you can mock your location or try to circumvent blocks using Apify's proxies.
To do so, you need to use search parameters: see below.
Search parameters
You can customize your session using search parameters.
They are designed to be compatible with browserless.io:
proxy: either datacenter or residential; selects the corresponding default proxy groups.
proxyGroups: stringified JSON of an array of Apify proxy groups, e.g., ["RESIDENTIAL5"].
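For instance (a sketch: the proxy and proxyGroups names come from the list above, and passing the token as a query parameter follows the earlier example), you can append these parameters to the connection URL with URLSearchParams:

```javascript
// Build a connection URL that selects Apify residential proxies.
// 'proxy' and 'proxyGroups' are the search parameters listed above;
// the token query parameter follows the earlier example.
const params = new URLSearchParams({
  token: process.env.APIFY_TOKEN ?? '$TOKEN',
  proxy: 'residential',                          // or 'datacenter'
  proxyGroups: JSON.stringify(['RESIDENTIAL5']), // stringified JSON array
});
const wsEndpoint = `wss://marco-gullo--browser-pool.apify.actor?${params}`;
```

You can then pass wsEndpoint to chromium.connect() exactly as in the example above.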
Is it legal to scrape job listings or public data?
Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.
Do I need to code to use this scraper?
No. This is a no-code tool — just enter a job title, location, and run the scraper directly from your dashboard or Apify actor page.
What data does it extract?
It extracts job titles, companies, salaries (if available), descriptions, locations, and post dates. You can export all of it to Excel or JSON.
Can I scrape multiple pages or filter by location?
Yes, you can scrape multiple pages and refine by job title, location, keyword, or more depending on the input settings you use.
How do I get started?
You can use the Try Now button on this page to go to the scraper. You’ll be guided to input a search term and get structured results. No setup needed!