💫 Universal GraphQL Scraper
GraphQL is a data query and manipulation language for APIs that allows a client to specify what data it needs ("declarative data fetching"). A GraphQL server can fetch data from separate sources for a single client query and present the results in a unified graph.[2] It is not tied to any specific database or storage engine.
An Actor that tries to be a universal GraphQL scraper
Web scraping is so much easier now, thanks to GraphQL. It’s like you ask websites for data, and they just serve it up on a silver platter, no fuss. Honestly, it's made scraping feel less like a chore and more like a friendly chat with the internet. Now, here's hoping Amazon.com jumps on the GraphQL bandwagon soon—fingers crossed!
Scraping blogs from https://apify.hashnode.dev
GraphQL endpoint: https://apify.hashnode.dev/api/graphql
```graphql
query ($host: String!, $after: String, $first: Int!, $filter: PublicationPostConnectionFilter) {
  publication(host: $host) {
    id
    posts(after: $after, first: $first, filter: $filter) {
      edges {
        node {
          __typename id title subtitle slug publishedAt url brief
        }
        __typename
      }
      pageInfo { hasNextPage endCursor __typename }
      totalDocuments
      __typename
    }
    __typename
  }
}
```
1{ 2 "host": "apify.hashnode.dev" 3}
1{ 2 "limit": 200, 3 4 "url": "https://apify.hashnode.dev/api/graphql", 5 "variables": "{ "host": "apify.hashnode.dev", "first": 10 }", 6 7 "query": "query ($host: String!, $after: String, $first: Int!, $filter: PublicationPostConnectionFilter) {\r
publication(host: $host) {\r
id\r
posts(after: $after, first: $first, filter: $filter) {\r
edges {\r
node {\r
__typename id title subtitle slug publishedAt url brief\r
}\r
__typename\r
}\r
pageInfo { hasNextPage endCursor __typename }\r
totalDocuments\r
__typename\r
}\r
__typename\r
}\r
}", 8 9 "cursor.step" : 25, 10 "cursor.next" : "after", 11 "cursor.limit" : "first", 12 13 "parse.root" : "publication.posts", 14 "parse.list" : "edges", 15 "parse.item" : "node", 16 "parse.total" : "totalDocuments", 17 "parse.next" : "pageInfo.endCursor" 18 19}
```
SYNTAX:
  { commandName(parameter: ARGUMENT) { FIELD_LIST } }
```

Example:

```graphql
{
  search(name:"pants" count:10) { id name __typename }
}
```
JSON Input:

```json
{
  "query": "{ search(name:\"pants\" count:10) { id name __typename } }"
}
```
```
SYNTAX:
  query ($VARIABLE: TYPE) { commandName(parameter: $VARIABLE) { FIELD_LIST } }
```

Example:

```graphql
query ($text:String $limit:Int) {
  search(name:$text count:$limit) { id name __typename }
}
```
JSON Input:

```json
{
  "query": "query ($text:String $limit:Int){ search(name:$text count:$limit){id name __typename} }",
  "variables": { "text": "pants", "limit": 10 }
}
```
```
SYNTAX:
  query ($VARIABLE: TYPE) { commandName(parameter: $VARIABLE) { ... FRAGMENT_NAME } }
  fragment FRAGMENT_NAME on TYPE { FIELD_LIST }
```
Example: Without Fragment
```graphql
query ($text:String $limit:Int) {
  search(name:$text count:$limit) {
    id name sku description url specifications __typename
    variations { id name sku description url __typename }
    similarProducts { id name sku description url __typename }
  }
}
```
Example: With Fragment
```graphql
query ($text:String $limit:Int) {
  search(name:$text count:$limit) {
    ... ProductInfo
    specifications
    variations { ... ProductInfo }
    similarProducts { ... ProductInfo }
  }
}

fragment ProductInfo on Product { id name sku description url __typename }
```
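A fragment is part of the same request document, so when building the body programmatically the fragment definition must be sent along with the query. A hedged sketch (the `Product` schema here is the hypothetical one from the example above):

```python
import json

FRAGMENT = "fragment ProductInfo on Product { id name sku description url __typename }"

QUERY = """
query ($text:String $limit:Int) {
  search(name:$text count:$limit) {
    ... ProductInfo
    specifications
    variations { ... ProductInfo }
    similarProducts { ... ProductInfo }
  }
}
""" + FRAGMENT  # the fragment definition travels in the same document

body = json.dumps({"query": QUERY, "variables": {"text": "pants", "limit": 10}})
```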
Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.
No. This is a no-code tool: just provide the GraphQL endpoint URL and a query, then run the scraper directly from your dashboard or the Apify Actor page.
It extracts whatever fields your query selects; in the blog example above, that means post IDs, titles, subtitles, slugs, publish dates, URLs, and briefs. You can export all of it to Excel or JSON.
Yes, you can scrape multiple pages via the cursor settings and refine results with query variables and filters, depending on the input settings you use.
You can use the Try Now button on this page to go to the scraper. You'll be guided to enter your endpoint and query, and you'll get structured results. No setup needed!