Crawler chaining lets you connect two squids so that when a run on the source squid completes, its results are automatically extracted and queued as tasks on a target squid, triggering a new run downstream.

Example use case: run a Google Maps Leads scrape to collect place URLs, then automatically pass those URLs into a Google Maps Reviews scrape, with no manual intervention.
Chains are limited to a maximum depth of 3 squids. A squid can only have one outgoing chain at a time.
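For example, the Leads-to-Reviews chain above could be set up with a call along these lines. This is a minimal sketch assuming a REST endpoint: the host, path, squid names, and the `target_crawler` key are hypothetical, while `field_map` and `autostart` are the options documented below.

```python
import requests

# Hypothetical REST call configuring a chain on the source squid so that
# finished Leads runs feed place URLs into the Reviews squid. The endpoint
# URL and squid identifiers are illustrative assumptions.
resp = requests.post(
    "https://api.example.com/squids/google-maps-leads/chain",
    json={
        "target_crawler": "google-maps-reviews",  # crawler behind the target squid
        "field_map": {"url": "url"},              # source result field -> target input
        "autostart": True,                        # start the downstream run right away
    },
    timeout=30,
)
resp.raise_for_status()
```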
1. You configure a chain on a source squid, specifying the target crawler and a `field_map`.
2. When a run on the source squid completes with a status of DONE, the scheduler extracts the mapped fields from its results.
3. Those values are queued as tasks on the target squid (auto-created on first trigger if it doesn’t exist yet).
4. If `autostart` is true, a new run starts immediately on the target squid.
The chain is triggered only when the source run reaches a status of DONE; runs that end with ERROR or any other status do not trigger the downstream chain.
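Putting those steps together, the trigger logic behaves roughly like the sketch below. Every name in it (`Chain`, `trigger_chain`, the in-memory `squids` dict) is an assumption made for illustration; only the DONE gate, the field extraction, the auto-creation of the target, and the autostart branch come from the description above.

```python
from dataclasses import dataclass

@dataclass
class Chain:
    target_crawler: str
    field_map: dict          # source result field -> target input parameter
    autostart: bool = True

def trigger_chain(run_status, results, chain, squids):
    """Hedged sketch of the chaining flow; every name here is assumed."""
    # Only a DONE run triggers the chain; ERROR or any other status
    # leaves the downstream squid untouched.
    if run_status != "DONE":
        return None

    # The target squid is auto-created on the first trigger.
    target = squids.setdefault(chain.target_crawler, {"tasks": [], "runs": []})

    # Extract the mapped fields from the source results and queue
    # one task per result row on the target squid.
    for row in results:
        target["tasks"].append(
            {tgt: row[src] for src, tgt in chain.field_map.items()}
        )

    # Start immediately, or create the run paused for manual review.
    target["runs"].append({"state": "running" if chain.autostart else "paused"})
    return target
```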
`field_map`: Maps a result field from the source crawler to an input parameter on the target crawler. Example: `{"url": "url"}` passes the `url` field from source results as the `url` input on the target.
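Applied to a batch of source results, the mapping could behave like this sketch (assuming the key names the source result field and the value names the target input, and that unmapped fields are simply dropped):

```python
field_map = {"url": "url"}  # source result field -> target input parameter

source_results = [
    {"url": "https://maps.google.com/?cid=123", "name": "Cafe A"},
    {"url": "https://maps.google.com/?cid=456", "name": "Cafe B"},
]

# Each source row becomes one queued task carrying only the mapped fields;
# in this sketch, unmapped fields such as "name" do not reach the target.
tasks = [
    {tgt: row[src] for src, tgt in field_map.items()}
    for row in source_results
]
# tasks == [{"url": "https://maps.google.com/?cid=123"},
#           {"url": "https://maps.google.com/?cid=456"}]
```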
`autostart`: If true, the downstream run starts automatically when tasks are queued. If false, tasks are queued and the run is created as paused for manual review. Defaults to true.
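To illustrate the paused path, here is the hypothetical `trigger_chain` sketch from above run with `autostart` disabled; again, all names are assumptions, not the real API.

```python
# Disabling autostart: tasks are still queued, but the downstream run is
# created paused so the queued tasks can be reviewed before starting.
chain = Chain(target_crawler="google-maps-reviews",
              field_map={"url": "url"},
              autostart=False)

squids = {}
results = [{"url": "https://maps.google.com/?cid=123"}]
target = trigger_chain("DONE", results, chain, squids)
print(target["tasks"])  # [{'url': 'https://maps.google.com/?cid=123'}]
print(target["runs"])   # [{'state': 'paused'}]
```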