The official Lang & Schwarz Trade Republic Scraper (www.ls-tc.de) is a powerful tool designed to extract public securities data and stock quotes for a given stock ticker. It can retrieve the closing quote of each stock pick dating back to 1992, which can be important for legal reasons.
🔎 What is the Lang & Schwarz Trade Republic Scraper?
The Lang & Schwarz Trade Republic Scraper is a web scraping tool designed to extract historical price data for financial instruments from the Lang & Schwarz Trade Republic website. This actor retrieves data based on specified symbols and time ranges, providing structured data for various analytical and monitoring purposes. It leverages the Lang & Schwarz Trade Republic API for reliable data extraction.
🧾 What data can the Lang & Schwarz Trade Republic Scraper extract?
The Lang & Schwarz Trade Republic Scraper extracts the following data points for each financial instrument:
- Date: the timestamp of the quote
- Price: the quoted price at that time
💼 What use cases does the Lang & Schwarz Trade Republic Scraper support?
The Lang & Schwarz Trade Republic Scraper is valuable for a range of applications where monitoring financial instrument prices is important, such as historical price analysis, ongoing price monitoring, and legal or compliance documentation.
📖 How to use the Lang & Schwarz Trade Republic Scraper?
📥 Input
To run the Lang & Schwarz Trade Republic Scraper, provide the following input parameters:
- symbol (Required): The symbol of the financial instrument to scrape (e.g., "AAPL").
- time_range (Optional): The time range for which to retrieve historical price data. Options include 'today', '3days', '5days', '1week', '1month', '3months', '6months', '1year', '2years', '5years', and 'full'. Default is '1week'.
- start_urls (Optional): An array of start URLs to begin the scraping process. Each object in the array should have a url property.

Example input:

```json
[
  {
    "symbol": "AAPL",
    "time_range": "1week"
  }
]
```
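You can also start the actor programmatically. The snippet below is a minimal sketch using the Apify Python client; the actor identifier is a placeholder, and the exact input shape should follow the actor's input schema shown above.

```python
from apify_client import ApifyClient

# Authenticate with your Apify API token
client = ApifyClient("<YOUR_APIFY_TOKEN>")

# Same fields as in the input example above
run_input = {
    "symbol": "AAPL",
    "time_range": "1week",
}

# "username/lang-schwarz-trade-republic-scraper" is a placeholder actor ID;
# replace it with the real ID from the Apify Console.
run = client.actor("username/lang-schwarz-trade-republic-scraper").call(run_input=run_input)

# Print every item from the run's default dataset
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item["Date"], item["Price"])
```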
🛠️ Technical Details
The Lang & Schwarz Trade Republic Scraper is built on the Lang & Schwarz Trade Republic API for data retrieval. It includes error handling and retry mechanisms to ensure reliable data extraction, and uses exponential backoff for retrying requests in case of 403 Forbidden errors.
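The actor's source is not shown here, but the retry idea can be illustrated with a short, hypothetical sketch using the requests library. The delay values and retry count are illustrative assumptions, not the actor's actual settings.

```python
import random
import time

import requests


def fetch_with_backoff(url: str, max_retries: int = 5, base_delay: float = 1.0):
    """Fetch a URL, retrying with exponential backoff whenever a 403 is returned."""
    for attempt in range(max_retries):
        response = requests.get(url, timeout=30)
        if response.status_code != 403:
            response.raise_for_status()
            return response.json()
        # 403 Forbidden: wait 1s, 2s, 4s, ... plus a little jitter, then retry
        delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
        time.sleep(delay)
    raise RuntimeError(f"Still blocked (403) after {max_retries} attempts: {url}")
```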
📤 Output
The results are stored in the default dataset associated with the actor. Each item is a single quote with the following format:
```json
[
  {
    "Date": "2025-02-16 00:00:00",
    "Price": 233.025
  },
  {
    "Date": "2025-02-17 00:00:00",
    "Price": 234.2
  },
  {
    "Date": "2025-02-18 00:00:00",
    "Price": 234.125
  },
  {
    "Date": "2025-02-19 00:00:00",
    "Price": 234.675
  },
  {
    "Date": "2025-02-20 00:00:00",
    "Price": 233.975
  },
  {
    "Date": "2025-02-21 21:59:50",
    "Price": 234.575
  }
]
```
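Once exported, the dataset is straightforward to analyse. The example below is a small sketch that assumes you have downloaded the dataset as a JSON file named dataset_items.json (the filename is an assumption) and loads the items into a pandas DataFrame.

```python
import json

import pandas as pd

# Load items exported from the actor's default dataset (JSON format)
with open("dataset_items.json", encoding="utf-8") as f:
    items = json.load(f)

df = pd.DataFrame(items)
df["Date"] = pd.to_datetime(df["Date"])
df = df.set_index("Date").sort_index()

print(df["Price"].describe())              # basic price statistics
print(df["Price"].pct_change().dropna())   # period-over-period returns
```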
❓ FAQ

Is it legal to use this scraper?
Yes, if you're scraping publicly available data for personal or internal use. Always review the website's Terms of Service before large-scale use or redistribution.

Do I need to know how to code?
No. This is a no-code tool: just enter a symbol and a time range, then run the scraper directly from your dashboard or the Apify actor page.

What data does it extract?
It extracts the date and price of each quote for the selected instrument and time range. You can export all of it to Excel or JSON.

Can I refine what gets scraped?
Yes, you can scrape multiple symbols and refine the results by time range, depending on the input settings you use.

How do I get started?
You can use the Try Now button on this page to go to the scraper. You'll be guided to input a symbol and get structured results. No setup needed!