Reddit Scraper Pro is a powerful, user-friendly tool for extracting data from Reddit without API limitations. Offers scraping of posts, users, comments, and communities, advanced search capabilities, and multiple export options. Perfect for brand monitoring, trend tracking, and competitor research.
Reddit Scraper Pro is a comprehensive, user-friendly tool for extracting data from Reddit. It offers flexible, fast scraping, letting you gather large amounts of data without API limitations or authentication requirements. Built on the Apify SDK, it is designed to handle all your Reddit scraping needs efficiently.
We strive to make Reddit Scraper Pro the most comprehensive tool for your Reddit data extraction needs. However, if you find that something is missing or not working as expected:
Report an Issue: You can easily report any issues directly in the Run console. This helps us track and address problems efficiently.
Email Support: For more detailed inquiries or feature requests, feel free to email harshmaur@gmail.com.
Rest assured, you will receive a prompt response to your issue or request. We aim to resolve problems and implement requested features quickly, and your feedback is invaluable in keeping Reddit Scraper Pro the most up-to-date and efficient Reddit scraping tool available.
Start scraping Reddit with Reddit Scraper Pro for $20 per month.
Reddit Scraper Pro doesn't require any coding skills to get started.
Reddit Scraper Pro offers versatile input options to suit your needs:
To scrape Reddit using search terms, enter one or more terms, choose what to search (posts, comments, or communities), and pick a sort order and time range.
Example search configuration:
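A configuration matching the description below might look like this (a sketch assembled from the parameters in the input schema; verify names and defaults against the Input Schema tab):

```json
{
  "searchTerms": ["cryptocurrency", "blockchain"],
  "searchPosts": true,
  "searchComments": true,
  "searchSort": "hot",
  "searchTime": "month",
  "maxPostsCount": 50,
  "maxCommentsCount": 100,
  "maxCommentsPerPost": 20,
  "includeNSFW": false,
  "proxy": {
    "useApifyProxy": true,
    "apifyProxyGroups": ["RESIDENTIAL"]
  }
}
```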
This setup will search for cryptocurrency and blockchain-related content, focusing on hot posts and comments from the last month, excluding NSFW content, and limiting the results to 50 posts with up to 100 total comments (max 20 per post).
To see the full list of parameters, their default values, and how to set your own, head over to the Input Schema tab.
Here are some input examples for different use cases, based on the input schema. Default values are included where applicable:
Scraping posts from a subreddit:
1{ 2 "startUrls": [{ "url": "https://www.reddit.com/r/technology/" }], 3 "crawlCommentsPerPost": false, 4 "maxPostsCount": 10, 5 "maxCommentsPerPost": 10, 6 "includeNSFW": false, 7 "proxy": { 8 "useApifyProxy": true, 9 "apifyProxyGroups": ["RESIDENTIAL"] 10 } 11}
Searching for posts on a specific topic:
1{ 2 "searchTerms": ["artificial intelligence"], 3 "searchPosts": true, 4 "searchComments": false, 5 "searchCommunities": false, 6 "searchSort": "hot", 7 "searchTime": "week", 8 "maxPostsCount": 50, 9 "includeNSFW": false, 10 "proxy": { 11 "useApifyProxy": true, 12 "apifyProxyGroups": ["RESIDENTIAL"] 13 } 14}
Scraping comments from a specific post:
1{ 2 "startUrls": [ 3 { 4 "url": "https://www.reddit.com/r/AskReddit/comments/example_post_id/example_post_title/" 5 } 6 ], 7 "crawlCommentsPerPost": true, 8 "maxCommentsPerPost": 100, 9 "includeNSFW": false, 10 "proxy": { 11 "useApifyProxy": true, 12 "apifyProxyGroups": ["RESIDENTIAL"] 13 } 14}
Extracting community information:
1{ 2 "startUrls": [{ "url": "https://www.reddit.com/r/AskScience/" }], 3 "maxPostsCount": 0, 4 "maxCommentsCount": 0, 5 "includeNSFW": false, 6 "proxy": { 7 "useApifyProxy": true, 8 "apifyProxyGroups": ["RESIDENTIAL"] 9 } 10}
Scraping user posts and comments:
1{ 2 "startUrls": [{ "url": "https://www.reddit.com/user/example_username" }], 3 "maxPostsCount": 20, 4 "maxCommentsCount": 50, 5 "includeNSFW": false, 6 "proxy": { 7 "useApifyProxy": true, 8 "apifyProxyGroups": ["RESIDENTIAL"] 9 } 10}
Searching for comments across Reddit:
1{ 2 "searchTerms": ["climate change"], 3 "searchPosts": false, 4 "searchComments": true, 5 "searchCommunities": false, 6 "searchSort": "top", 7 "searchTime": "month", 8 "maxCommentsCount": 100, 9 "includeNSFW": false, 10 "proxy": { 11 "useApifyProxy": true, 12 "apifyProxyGroups": ["RESIDENTIAL"] 13 } 14}
Scraping popular posts from multiple subreddits:
1{ 2 "startUrls": [ 3 { "url": "https://www.reddit.com/r/news/" }, 4 { "url": "https://www.reddit.com/r/worldnews/" }, 5 { "url": "https://www.reddit.com/r/politics/" } 6 ], 7 "maxPostsCount": 10, 8 "crawlCommentsPerPost": true, 9 "maxCommentsPerPost": 5, 10 "includeNSFW": false, 11 "proxy": { 12 "useApifyProxy": true, 13 "apifyProxyGroups": ["RESIDENTIAL"] 14 } 15}
These examples demonstrate various configurations for different use cases of the Reddit Scraper, adhering to the provided input schema and including default values where applicable.
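You can also supply these inputs programmatically. Below is a sketch using Apify's Python client (`pip install apify-client`); the Actor ID `username/reddit-scraper-pro` is a placeholder, so substitute the real ID from this Actor's page:

```python
import json
import os

# Hypothetical Actor ID -- replace with the real ID from the Actor's Apify page.
ACTOR_ID = "username/reddit-scraper-pro"

# Input mirroring the "Searching for posts on a specific topic" example above.
run_input = {
    "searchTerms": ["artificial intelligence"],
    "searchPosts": True,
    "searchComments": False,
    "searchCommunities": False,
    "searchSort": "hot",
    "searchTime": "week",
    "maxPostsCount": 50,
    "includeNSFW": False,
    "proxy": {"useApifyProxy": True, "apifyProxyGroups": ["RESIDENTIAL"]},
}

# Only contact Apify when a token is configured (requires `pip install apify-client`).
if os.environ.get("APIFY_TOKEN"):
    from apify_client import ApifyClient

    client = ApifyClient(os.environ["APIFY_TOKEN"])
    run = client.actor(ACTOR_ID).call(run_input=run_input)
    # Scraped results land in the run's default dataset.
    for item in client.dataset(run["defaultDatasetId"]).iterate_items():
        print(json.dumps(item, indent=2))
```

The same `run_input` dictionary works for any of the example configurations above; only the keys you set differ.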
If you need to limit the scope of your run, set the maximum number of posts to scrape per community or user, and cap the number of comments per post. You can also limit the number of communities and leaderboards with the following parameters:
1{ 2 "maxPostsCount": 10, 3 "maxCommentsPerPost": 5, 4 "maxCommunitiesCount": 2, 5 "maxCommentsCount": 100, 6 "maxItems": 1000 7}
You can also set maxItems to prevent a very long run of the Actor. This parameter stops the scraper once it reaches the number of results you've indicated, which is useful for testing.
While scraping publicly available data from Reddit is generally allowed, it's important to comply with Reddit's terms of service and respect the site's usage policies. It's recommended to use the scraper responsibly, avoid excessive requests, and ensure that the scraped data is used in compliance with applicable laws and regulations. You can read more about compliance with ToS in our blog post.
No, it is not required. Reddit's data is publicly accessible, and the site does not require users to log in to view it.
Yes. Please use Apify's residential proxies for Reddit scraping.
Yes, if you're scraping publicly available data for personal or internal use. Always review Reddit's Terms of Service before large-scale use or redistribution.
No. This is a no-code tool: just enter a search term or a Reddit URL and run the scraper directly from your dashboard or the Apify Actor page.
It extracts posts, comments, user profiles, and community information. You can export all of it to Excel or JSON.
Yes, you can scrape multiple pages and refine results by search term, sort order, time range, and more, depending on the input settings you use.
You can use the Try Now button on this page to go to the scraper. You’ll be guided to input a search term and get structured results. No setup needed!