Retrieve information from SoundCloud without restrictions or rate limits. Harvest data about tracks, comments, user profiles, albums, playlists, and more, extremely fast. Track download URLs, monetization methods, like counts, share counts, and much more are ready for you.
Since SoundCloud doesn't provide a free, full-featured public API, this actor helps you retrieve data from it.
The SoundCloud data scraper supports the following features:
Search any keyword - Search for any keyword with no rate limit and retrieve the results with ease.
Search for users, tracks, playlists, or albums - Filter your search results by object type. Albums, Playlists, Tracks, and Users are available for any search.
Get tracks - ID, title, duration, user information, monetization models, repost and like counts, and much more deep-level information is ready for your consumption.
Tracks available for download - Get the download URLs for the SoundCloud tracks.
Comments for each track - Looking for user comments? No problem. Retrieve all the comments, dates, user information, and much more, blazing fast!
This scraper is under active development. If you have any feature requests you can create an issue from here.
The input of this scraper should be JSON containing the list of pages on SoundCloud that should be visited. Possible fields are:
startUrls
: (Required) (Array) List of SoundCloud URLs to start with. Each should be a list, search, track, user, album, or playlist URL.
includeComments
: (Optional) (Boolean) This will add all the comments that SoundCloud provides into the track objects. Please keep in mind that the time and resources the actor uses will increase proportionally to the number of comments.
endPage
: (Optional) (Number) The number of the last page you want to scrape. The default is Infinite. This applies to each search request and each startUrl individually.
maxItems
: (Optional) (Number) Limits the number of scraped items. This is useful when you search through big lists or large search results.
proxy
: (Required) (Proxy Object) Proxy configuration.
This solution requires the use of Proxy servers, either your own proxy servers or you can use Apify Proxy.
When you want to scrape a specific list URL, just copy and paste the link as one of the startUrls.
If you would like to scrape only the first page of a list, provide the link for that page and set endPage to 1.
Using the same approach, you can also fetch any interval of pages. If you provide the URL of the 5th page of a list and set the endPage parameter to 6, you'll get only the 5th and 6th pages.
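As a minimal illustration, an input that stops after the second results page of a keyword search could look like this (the search URL and values are illustrative, not prescriptive):

```json
{
  "startUrls": ["https://soundcloud.com/search/sounds?q=dnb"],
  "endPage": 2,
  "maxItems": 50,
  "proxy": { "useApifyProxy": true }
}
```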
The actor is optimized to run blazing fast and scrape as many items as possible. Therefore, it front-loads all the detailed requests. If the actor isn't blocked too often, it can scrape 100 tracks in about 2 minutes using ~0.01-0.03 compute units.
```json
{
  "startUrls": [
    "https://soundcloud.com/sibel-dar-c/ekin-ekinci-gel-art-k",
    "https://soundcloud.com/sibel-dar-c",
    "https://soundcloud.com/swedish-hiphop-rap-fm/sets/pistoler-poesi-och-sex",
    "https://soundcloud.com/search/sets?q=dnb",
    "https://soundcloud.com/search/people?q=dnb",
    "https://soundcloud.com/search/albums?q=dnb",
    "https://soundcloud.com/search/sounds?q=dnb"
  ],
  "endPage": 3,
  "maxItems": 20,
  "includeComments": false,
  "proxy": {
    "useApifyProxy": true
  }
}
```
During the run, the actor will output messages letting you know what is going on. Each message contains a short label specifying which page from the provided list is currently being processed. When items are loaded from a page, you should see a message with the loaded item count and the total item count for that page.
If you provide incorrect input to the actor, it will immediately stop with a failure state and output an explanation of what is wrong.
During the run, the actor stores results in a dataset. Each scraped object is a separate item in the dataset.
You can consume the results in any language (Python, PHP, Node.js/NPM). See the FAQ or our API reference to learn more about getting results from this SoundCloud actor.
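Programmatic retrieval is a plain HTTP GET against the run's dataset. A minimal Python sketch, assuming the standard Apify dataset-items endpoint (the dataset ID below is hypothetical):

```python
from typing import Optional


def dataset_items_url(dataset_id: str, fmt: str = "json", limit: Optional[int] = None) -> str:
    """Build the Apify API URL that returns a dataset's items."""
    url = f"https://api.apify.com/v2/datasets/{dataset_id}/items?format={fmt}"
    if limit is not None:
        url += f"&limit={limit}"
    return url


# Usage: fetch with any HTTP client, e.g.
#   import json, urllib.request
#   items = json.load(urllib.request.urlopen(dataset_items_url("abc123", limit=10)))
print(dataset_items_url("abc123", limit=10))
```

The same URL works for `format=csv` or `format=xlsx` if you prefer spreadsheet exports.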
The structure of each track item in the dataset looks like this:
```json
{
  "artwork_url": "https://i1.sndcdn.com/artworks-KUd5CFXdByl4gCVn-qYc5Ug-large.jpg",
  "caption": null,
  "commentable": true,
  "comment_count": 754,
  "created_at": "2023-01-24T13:15:38Z",
  "description": "🚨SUBSCRIBE: bit.ly/DnbaSubscribe\n🎫 360 SIGN UP: https://t.me/dnballstars360\n📲FOLLOW: @dnballstars\n📰VISIT: dnballstars.co.uk\n🎧PLAYLISTS: lnk.to/DNBA-hotpick\n💡Enable Alerts 💥",
  "downloadable": false,
  "download_count": 0,
  "duration": 3558922,
  "full_duration": 3558922,
  "embeddable_by": "all",
  "genre": "Drum & Bass",
  "has_downloads_left": false,
  "id": 1431326908,
  "kind": "track",
  "label_name": null,
  "last_modified": "2023-03-13T21:42:46Z",
  "license": "all-rights-reserved",
  "likes_count": 13904,
  "permalink": "hedex-dnb-allstars-360",
  "permalink_url": "https://soundcloud.com/dnballstars/hedex-dnb-allstars-360",
  "playback_count": 307681,
  "public": true,
  "publisher_metadata": {
    "id": 1431326908,
    "urn": "soundcloud:tracks:1431326908",
    "artist": "",
    "contains_music": true
  },
  "purchase_title": "JOIN US",
  "purchase_url": "https://t.me/dnballstars360",
  "release_date": null,
  "reposts_count": 507,
  "secret_token": null,
  "sharing": "public",
  "state": "finished",
  "streamable": true,
  "tag_list": "",
  "title": "Hedex - DnB Allstars 360°",
  "track_format": "single-track",
  "uri": "https://api.soundcloud.com/tracks/1431326908",
  "urn": "soundcloud:tracks:1431326908",
  "user_id": 366752855,
  "visuals": null,
  "waveform_url": "https://wave.sndcdn.com/AExJj2NYeBEy_m.json",
  "display_date": "2023-01-24T19:00:18Z",
  "media": {
    "transcodings": [
      {
        "url": "https://api-v2.soundcloud.com/media/soundcloud:tracks:1431326908/c75cef09-1ef2-46ea-b8a2-d9b1740eaa56/stream/hls",
        "preset": "mp3_1_0",
        "duration": 3558922,
        "snipped": false,
        "format": {
          "protocol": "hls",
          "mime_type": "audio/mpeg"
        },
        "quality": "sq"
      },
      {
        "url": "https://api-v2.soundcloud.com/media/soundcloud:tracks:1431326908/c75cef09-1ef2-46ea-b8a2-d9b1740eaa56/stream/progressive",
        "preset": "mp3_1_0",
        "duration": 3558922,
        "snipped": false,
        "format": {
          "protocol": "progressive",
          "mime_type": "audio/mpeg"
        },
        "quality": "sq"
      },
      {
        "url": "https://api-v2.soundcloud.com/media/soundcloud:tracks:1431326908/2b7c6c4a-2bce-465d-8254-5b440e662328/stream/hls",
        "preset": "opus_0_0",
        "duration": 3558895,
        "snipped": false,
        "format": {
          "protocol": "hls",
          "mime_type": "audio/ogg; codecs=\"opus\""
        },
        "quality": "sq"
      }
    ]
  },
  "station_urn": "soundcloud:system-playlists:track-stations:1431326908",
  "station_permalink": "track-stations:1431326908",
  "track_authorization": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJnZW8iOiJUUiIsInN1YiI6IiIsInJpZCI6IjEwMmU5MWEwLTNkOWEtNGM0ZC04OTllLTZlMTIwZDM2YWRhMCIsImlhdCI6MTY3OTQ3NTA1Mn0.abKMM2OF_Xh4-JDtM4vEFDzJqQiNPQq7dp027Nc6_KQ",
  "monetization_model": "NOT_APPLICABLE",
  "policy": "ALLOW",
  "user": {
    "avatar_url": "https://i1.sndcdn.com/avatars-000367648727-gp601z-large.jpg",
    "first_name": "",
    "followers_count": 84897,
    "full_name": "",
    "id": 366752855,
    "kind": "user",
    "last_modified": "2023-03-20T12:53:28Z",
    "last_name": "",
    "permalink": "dnballstars",
    "permalink_url": "https://soundcloud.com/dnballstars",
    "uri": "https://api.soundcloud.com/users/366752855",
    "urn": "soundcloud:users:366752855",
    "username": "DnB Allstars",
    "verified": true,
    "city": "London",
    "country_code": null,
    "badges": {
      "pro": false,
      "pro_unlimited": true,
      "verified": true
    },
    "station_urn": "soundcloud:system-playlists:artist-stations:366752855",
    "station_permalink": "artist-stations:366752855"
  },
  "comments": [
    {
      "kind": "comment",
      "id": 1900670014,
      "body": "Oh my days 😳😳😳",
      "created_at": "2023-03-16T17:34:43Z",
      "timestamp": 326931,
      "track_id": 1431326908,
      "user_id": 199049081,
      "self": {
        "urn": "soundcloud:comments:1900670014"
      },
      "user": {
        "avatar_url": "https://a1.sndcdn.com/images/default_avatar_large.png",
        "first_name": "",
        "followers_count": 7,
        "full_name": "",
        "id": 199049081,
        "kind": "user",
        "last_modified": "2023-03-16T17:34:38Z",
        "last_name": "",
        "permalink": "user-655659681",
        "permalink_url": "https://soundcloud.com/user-655659681",
        "uri": "https://api.soundcloud.com/users/199049081",
        "urn": "soundcloud:users:199049081",
        "username": "User 655659681",
        "verified": false,
        "city": null,
        "country_code": null,
        "badges": {
          "pro": false,
          "pro_unlimited": false,
          "verified": false
        },
        "station_urn": "soundcloud:system-playlists:artist-stations:199049081",
        "station_permalink": "artist-stations:199049081"
      }
    },
    {
      "kind": "comment",
      "id": 1900505365,
      "body": "🧐🧐🧐🧐",
      "created_at": "2023-03-16T11:42:41Z",
      "timestamp": 124858,
      "track_id": 1431326908,
      "user_id": 1051717858,
      "self": {
        "urn": "soundcloud:comments:1900505365"
      },
      "user": {
        "avatar_url": "https://i1.sndcdn.com/avatars-m0rsLpyetcHHc00w-yXkOwA-large.jpg",
        "first_name": "",
        "followers_count": 4,
        "full_name": "",
        "id": 1051717858,
        "kind": "user",
        "last_modified": "2023-01-13T09:42:44Z",
        "last_name": "",
        "permalink": "user-172639176",
        "permalink_url": "https://soundcloud.com/user-172639176",
        "uri": "https://api.soundcloud.com/users/1051717858",
        "urn": "soundcloud:users:1051717858",
        "username": "tommymoore05",
        "verified": false,
        "city": "",
        "country_code": "GB",
        "badges": {
          "pro": false,
          "pro_unlimited": false,
          "verified": false
        },
        "station_urn": "soundcloud:system-playlists:artist-stations:1051717858",
        "station_permalink": "artist-stations:1051717858"
      }
    }
  ]
}
```
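Once items are in hand, ordinary dict access is enough to post-process them. A small sketch (field names taken from the sample item above; the trimmed-down track below is illustrative) that picks the progressive stream URL and computes a likes-per-play ratio:

```python
from typing import Optional


def progressive_stream_url(item: dict) -> Optional[str]:
    """Return the first progressive transcoding URL from an item, if any."""
    for t in item.get("media", {}).get("transcodings", []):
        if t.get("format", {}).get("protocol") == "progressive":
            return t["url"]
    return None


def likes_per_play(item: dict) -> float:
    """Engagement ratio: likes divided by playback count (0.0 if no plays)."""
    plays = item.get("playback_count") or 0
    return item.get("likes_count", 0) / plays if plays else 0.0


# Example with a trimmed-down item:
track = {
    "likes_count": 13904,
    "playback_count": 307681,
    "media": {"transcodings": [
        {"url": "https://example.com/hls", "format": {"protocol": "hls"}},
        {"url": "https://example.com/progressive", "format": {"protocol": "progressive"}},
    ]},
}
print(progressive_stream_url(track))
print(round(likes_per_play(track), 4))
```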
Please visit epctex.com to see all of our available products. If you are looking for a custom integration, reach out to us through the chat box on epctex.com. Need support? business@epctex.com is at your service.
Yes, if you're scraping publicly available data for personal or internal use. Always review SoundCloud's Terms of Service before large-scale use or redistribution.
No. This is a no-code tool: just enter your SoundCloud URLs or search keywords and run the scraper directly from your dashboard or the Apify actor page.
It extracts track titles, durations, like and repost counts, download URLs, user profiles, comments, and more. You can export all of it to Excel or JSON.
Yes, you can scrape multiple pages and filter by keyword, user, track, playlist, or album depending on the input settings you use.
You can use the Try Now button on this page to go to the scraper. You'll be guided to input a URL or search term and get structured results. No setup needed!