This endpoint creates a new squid for a specified crawler by providing the crawler’s hash ID. A squid is a container that groups together related tasks and configurations for a specific scraping operation.
What is a Squid?
A squid acts as a project workspace for your scraping tasks:
- Groups tasks: All URLs or items you want to scrape with the same crawler
- Stores configuration: Crawler parameters, concurrency settings, delivery options
- Manages runs: Tracks execution history and results
- Enables scheduling: Set up automated recurring scrapes
Headers
Authorization: Your API authentication token. Value: Token YOUR_API_KEY
Content-Type: Must be application/json. Value: application/json
Request Body
crawler (required): The unique ID (hash) of the crawler to use for this squid. Example: "4734d096159ef05210e0e1677e8be823"
name (optional): Custom name for the squid. If not provided, an auto-generated name is used. Example: "My Google Maps Scraper"
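The request body can be assembled in a few lines of Python. This is a minimal sketch; the helper name `build_squid_body` is illustrative, not part of the API:

```python
import json

def build_squid_body(crawler_id, name=None):
    # "crawler" is the crawler's hash ID; "name" is optional and can be
    # omitted to let the API auto-generate a name for the squid.
    body = {"crawler": crawler_id}
    if name is not None:
        body["name"] = name
    return json.dumps(body)

payload = build_squid_body("4734d096159ef05210e0e1677e8be823",
                           "My Google Maps Scraper")
```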
Response Field Explanations
id: Unique squid identifier. Example: "c106a44a98044ef18acc59986ae10967"
name: Squid name (auto-generated as "Crawler Name (N)" if not provided). Example: "Google Maps (1)"
crawler: ID of the associated crawler. Example: "4734d096159ef05210e0e1677e8be823"
is_active: Whether the squid is active and can run. Example: true
concurrency: Number of concurrent tasks (default: 1). Example: 1
params: Squid-level parameters with default values inherited from the crawler. Example: {}
schedule: Cron schedule configuration (null if not scheduled). Example: null
to_complete: Whether to stop after all tasks complete (false = run continuously). Example: false
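A client typically inspects a few of these fields after creation. The sketch below uses a hard-coded sample response shaped like the documented fields (values are illustrative):

```python
# Sample response matching the documented response fields.
response = {
    "id": "c106a44a98044ef18acc59986ae10967",
    "name": "Google Maps (1)",
    "crawler": "4734d096159ef05210e0e1677e8be823",
    "is_active": True,
    "concurrency": 1,
    "params": {},       # defaults inherited from the crawler
    "schedule": None,   # null when no cron schedule is configured
    "to_complete": False,
}

# A squid can only run while it is active.
can_run = response["is_active"]
# schedule is null (None in Python) unless a cron configuration was attached.
is_scheduled = response["schedule"] is not None
```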
After creating a squid, you can update its parameters and settings using the Update Squid endpoint before adding tasks.
The squid automatically inherits default parameters from the crawler. Check the response ‘params’ field to see what defaults were applied.
Custom names help organize multiple squids using the same crawler. Without a custom name, squids are numbered sequentially.
Make sure to get the crawler ID from the List Crawlers endpoint first. Using an invalid crawler ID will result in an error.
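Since an invalid crawler ID results in an error, it is worth checking the HTTP status before using the response. The error payload shape and the helper below are assumptions, not documented API behavior:

```python
class SquidCreationError(Exception):
    """Raised when the create-squid call does not succeed."""

def parse_create_response(status_code, body):
    # Hypothetical handling: the exact error format returned for an
    # invalid crawler ID is not documented here and may differ.
    if status_code >= 400:
        raise SquidCreationError(f"create squid failed ({status_code}): {body}")
    return body

squid = parse_create_response(200, {"id": "c106a44a98044ef18acc59986ae10967"})
```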
Code Examples
curl -X POST "https://api.lobstr.io/v1/squids" \
  -H "Authorization: Token YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "crawler": "4734d096159ef05210e0e1677e8be823",
    "name": "My Google Maps Scraper"
  }'
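The same request can be prepared in Python. This sketch only builds the request object with the standard library (the actual send is left commented out) and assumes the same endpoint and headers as the curl example above:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder, substitute your real token

payload = json.dumps({
    "crawler": "4734d096159ef05210e0e1677e8be823",
    "name": "My Google Maps Scraper",
}).encode("utf-8")

req = urllib.request.Request(
    "https://api.lobstr.io/v1/squids",
    data=payload,
    headers={
        "Authorization": f"Token {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually send the request:
# with urllib.request.urlopen(req) as resp:
#     squid = json.load(resp)
```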
Response
{
  "id": "e86b29c032024b66aff529e1d43c2bd7",
  "account": [],
  "concurrency": 1,
  "crawler": "4734d096159ef05210e0e1677e8be823",
  "created_at": "2025-02-03T14:24:23Z",
  "is_active": true,
  "name": "My Google Maps Scraper",
  "params": {
    "max_results": 200,
    "ratings": "Any rating",
    "country": "United States",
    "language": "English (United States)"
  },
  "schedule": null,
  "to_complete": false
}