POST /v1/squids

Create Squid

Create a new squid container for a specific crawler to organize and run scraping tasks. This endpoint creates a new squid for a specified crawler by providing the crawler's hash ID. A squid is a container that groups related tasks and configurations for a specific scraping operation.
What is a Squid?
A squid acts as a project workspace for your scraping tasks:
- Groups tasks: All URLs or items you want to scrape with the same crawler
- Stores configuration: Crawler parameters, concurrency settings, delivery options
- Manages runs: Tracks execution history and results
- Enables scheduling: Set up automated recurring scrapes
Headers
| Key | Value | Required |
|---|---|---|
| Authorization | Token YOUR_API_KEY | Yes |
| Content-Type | application/json | Yes |
Request Body
| Field | Type | Required | Example |
|---|---|---|---|
| crawler | string | Yes | 4734d096159ef05210e0e1677e8be823 |
| name | string | No | My Google Maps Scraper |
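The request can be sketched with Python's standard library. This is a minimal sketch: the `API_BASE` URL is a placeholder (an assumption, not from this document), and the request is built but not sent.

```python
import json
import urllib.request

API_BASE = "https://api.example.com"  # assumption: replace with the real API base URL
API_KEY = "YOUR_API_KEY"

def build_create_squid_request(crawler_id, name=None):
    """Build the POST /v1/squids request with the documented headers and body."""
    body = {"crawler": crawler_id}
    if name is not None:
        body["name"] = name  # optional custom name; otherwise squids are numbered
    return urllib.request.Request(
        f"{API_BASE}/v1/squids",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Token {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_create_squid_request(
    "4734d096159ef05210e0e1677e8be823", name="My Google Maps Scraper"
)
# response = urllib.request.urlopen(req)  # uncomment to actually send the request
```

Passing `name=None` omits the optional field entirely rather than sending `"name": null`.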
Response Field Explanations
| Field | Type | Example |
|---|---|---|
| id | string | c106a44a98044ef18acc59986ae10967 |
| name | string | Google Maps (1) |
| crawler | string | 4734d096159ef05210e0e1677e8be823 |
| is_active | boolean | true |
| concurrency | integer | 1 |
| params | object | {} |
| schedule | object or null | null |
| to_complete | boolean | false |
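Putting the example values above together, a response body might look like the following. This is an illustrative sketch assembled from the field examples, not a captured server response; the fields you typically need next are `id` and `params`.

```python
import json

# Illustrative response body, built from the field examples above
response_body = """
{
  "id": "c106a44a98044ef18acc59986ae10967",
  "name": "Google Maps (1)",
  "crawler": "4734d096159ef05210e0e1677e8be823",
  "is_active": true,
  "concurrency": 1,
  "params": {},
  "schedule": null,
  "to_complete": false
}
"""

squid = json.loads(response_body)
squid_id = squid["id"]       # keep this ID for updating the squid and adding tasks
defaults = squid["params"]   # default parameters inherited from the crawler
```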
Pro Tip
After creating a squid, you can update its parameters and settings using the Update Squid endpoint before adding tasks.
Note
The squid automatically inherits default parameters from the crawler. Check the response 'params' field to see what defaults were applied.
Pro Tip
Custom names help organize multiple squids using the same crawler. Without a custom name, squids are numbered sequentially.
Warning
Make sure to get the crawler ID from the List Crawlers endpoint first. Using an invalid crawler ID will result in an error.
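A small helper can make the lookup explicit. This is a sketch under an assumption: the List Crawlers response is treated here as a list of objects with `id` and `name` keys, so check that endpoint's actual response shape before relying on it.

```python
def find_crawler_id(crawlers, name):
    """Pick a crawler's hash ID out of a List Crawlers response by name.

    Assumes `crawlers` is a list of {"id": ..., "name": ...} objects;
    verify the real response shape against the List Crawlers endpoint.
    """
    for crawler in crawlers:
        if crawler.get("name") == name:
            return crawler["id"]
    raise ValueError(f"no crawler named {name!r}; look it up before creating a squid")

# Illustrative List Crawlers payload (shape is an assumption)
crawlers = [{"id": "4734d096159ef05210e0e1677e8be823", "name": "Google Maps"}]
crawler_id = find_crawler_id(crawlers, "Google Maps")
```

Failing loudly on an unknown name avoids sending an invalid crawler ID and hitting the error described above.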