Rate Limit Response
When you exceed the rate limit, the API returns a 429 Too Many Requests status code.
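Handling can key off the status code alone. A minimal sketch follows; whether the API sets the standard HTTP Retry-After header is an assumption, hence the default fallback:

```python
def rate_limit_wait(status_code, headers, default=30):
    """Return seconds to wait before retrying, or None if not rate limited.

    Falls back to `default` when no usable Retry-After header is present.
    """
    if status_code != 429:
        return None
    try:
        return int(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default
```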
Rate Limit Strategy
Plan-Based Limits
Rate limits vary based on your subscription plan. Higher-tier plans receive increased rate limits to support larger-scale operations.
Endpoint-Specific Limits
Different endpoints may have different rate limits:
- Scraping endpoints (/scrape, /batch/scrape) - limited by requests per minute
- Crawling endpoint (/crawl) - limited by concurrent crawls and pages per crawl
- Search endpoint (/search) - limited by searches per minute
- Extract endpoint (/extract) - limited by extraction requests per minute
- Research endpoint (/deep-research) - limited by concurrent research operations
Handling Rate Limits
Best Practices
Implement Exponential Backoff
When you receive a 429 response, wait before retrying. Use exponential backoff to gradually increase wait times.
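A minimal sketch of this pattern; the request function is injected so the retry logic stays generic:

```python
import random
import time

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential delay with full jitter: up to ~1s, ~2s, ~4s, ... capped at `cap`."""
    return random.uniform(0, min(cap, base * 2 ** attempt))

def request_with_backoff(send, max_retries=5, sleep=time.sleep):
    """Call `send()` until it returns a non-429 response or retries run out.

    `send` is any zero-argument callable returning an object with a
    `status_code` attribute (e.g. a requests.Response).
    """
    for attempt in range(max_retries):
        response = send()
        if response.status_code != 429:
            return response
        sleep(backoff_delay(attempt))
    return response
```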
Use Batch Operations
Instead of making individual requests, use batch operations when scraping multiple URLs. A batch operation counts as a single request against your rate limit while processing multiple URLs.
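A sketch of the request body for a batch operation; the field names (`urls`, `formats`) are assumptions, so check the API reference for your version:

```python
import json

def build_batch_scrape_payload(urls, formats=("markdown",)):
    """Body for one batch request covering many URLs: one request, one
    rate-limit hit, rather than len(urls) separate scrapes."""
    return {"urls": list(urls), "formats": list(formats)}

payload = build_batch_scrape_payload([
    "https://example.com/a",
    "https://example.com/b",
])
body = json.dumps(payload)  # send as the JSON body of a single POST
```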
Monitor Your Usage
Track your API usage to stay within limits.
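Usage can also be tracked client-side. A small sliding-window counter (the per-minute cap is hypothetical; set it to your plan's limit) keeps you under a requests-per-minute ceiling:

```python
import time
from collections import deque

class RequestTracker:
    """Client-side sliding-window counter for a requests-per-minute limit."""

    def __init__(self, max_per_minute, clock=time.monotonic):
        self.max_per_minute = max_per_minute
        self.clock = clock
        self.timestamps = deque()

    def allow(self):
        """Record and permit a request if under the limit, else refuse it."""
        now = self.clock()
        # Drop timestamps older than the 60-second window.
        while self.timestamps and now - self.timestamps[0] >= 60:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_per_minute:
            self.timestamps.append(now)
            return True
        return False
```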
Use Webhooks for Async Operations
For crawling and batch operations, use webhooks instead of polling for status. This reduces the number of status check requests you need to make.
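A sketch of attaching a webhook to a crawl request; the `webhook` field name and payload shape are assumptions based on this guide:

```python
def build_crawl_payload(url, webhook_url=None, limit=100):
    """Crawl request body; with a webhook set, status updates are pushed
    to your endpoint instead of being polled."""
    payload = {"url": url, "limit": limit}
    if webhook_url:
        payload["webhook"] = webhook_url  # assumed field name
    return payload
```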
Credit System
In addition to rate limits, Firecrawl uses a credit-based system.
Credit Consumption
- Scrape: 1 credit per page
- Crawl: 1 credit per page crawled
- Batch Scrape: 1 credit per URL
- Search: Credits vary based on scraping options
- Extract: Token-based pricing (separate from credits)
- Map: 1 credit per request
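The list above can be turned into a quick pre-flight cost estimate (extract is token-priced and search varies, so both are omitted here):

```python
# Per-unit credit costs from the list above.
CREDIT_COSTS = {
    "scrape": 1,        # per page
    "crawl": 1,         # per page crawled
    "batch_scrape": 1,  # per URL
    "map": 1,           # per request
}

def estimate_credits(operation, units=1):
    """Estimated credits for `units` pages/URLs/requests of `operation`."""
    return CREDIT_COSTS[operation] * units
```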
Insufficient Credits
When you run out of credits, you’ll receive a 402 Payment Required response.
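Rate limiting (429) and credit exhaustion (402) need different handling: backing off will not fix an empty balance, so branch on the status code:

```python
def classify_api_error(status_code):
    """Map HTTP status codes to the action this guide recommends."""
    if status_code == 402:
        return "insufficient_credits"  # top up or upgrade the plan
    if status_code == 429:
        return "rate_limited"          # back off and retry
    if status_code >= 500:
        return "server_error"          # transient; retry with backoff
    return "ok" if status_code < 400 else "client_error"
```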
Crawl-Specific Limits
Concurrent Crawls
The number of simultaneous crawl operations you can run depends on your plan.
Pages Per Crawl
You can limit the number of pages in a single crawl using the limit parameter.
Crawl Delays
Respect website rate limits by adding delays between requests. The delay parameter specifies the number of seconds to wait between scraping pages.
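A sketch of a crawl request combining both knobs; the parameter names limit and delay come from this guide, but the payload shape is an assumption:

```python
def build_polite_crawl(url, max_pages=50, delay_seconds=2):
    """Crawl body capping total pages and pausing between page scrapes."""
    return {
        "url": url,
        "limit": max_pages,      # stop after this many pages
        "delay": delay_seconds,  # seconds to wait between pages
    }
```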
Error Handling
Rate Limit Headers
While not explicitly documented in all responses, monitor HTTP status codes to detect rate limiting.
Server Errors
Occasional 500 Internal Server Error responses may occur. These are different from rate limits and should be retried with exponential backoff.
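A sketch of a retry helper that treats both rate limits and transient server errors as retryable:

```python
import time

RETRYABLE = {429, 500}  # rate limits and transient server errors

def send_with_retries(send, max_retries=4, sleep=time.sleep):
    """Retry retryable statuses with exponential backoff (1s, 2s, 4s, ...).

    `send` is any zero-argument callable returning an object with a
    `status_code` attribute.
    """
    for attempt in range(max_retries):
        response = send()
        if response.status_code not in RETRYABLE:
            return response
        sleep(2 ** attempt)
    return response
```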
Optimizing API Usage
1. Use the Right Endpoint
- Use /map to discover URLs before crawling
- Use /batch/scrape for known URL lists
- Use /crawl for comprehensive site scraping
2. Filter Content Efficiently
Use crawl options to reduce unnecessary requests.
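A sketch of restricting a crawl to relevant paths so fewer pages are fetched; the includePaths/excludePaths field names are assumptions, so check the crawl endpoint reference for your API version:

```python
def build_filtered_crawl(url, include=(), exclude=()):
    """Crawl body with path filters to avoid scraping irrelevant pages."""
    payload = {"url": url}
    if include:
        payload["includePaths"] = list(include)  # assumed field name
    if exclude:
        payload["excludePaths"] = list(exclude)  # assumed field name
    return payload
```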
3. Request Only Needed Formats
Specify only the formats you need.
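Asking for a single format keeps responses small and processing cheap; a sketch, with the `formats` field name an assumption based on this guide:

```python
def build_scrape_payload(url, formats=("markdown",)):
    """Scrape body requesting only the output formats you will actually use."""
    return {"url": url, "formats": list(formats)}
```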
SDK Rate Limit Handling
Our official SDKs include built-in rate limit handling.
Python SDK
Node.js SDK
Contact Support
If you’re experiencing consistent rate limiting issues or need higher limits:
- Review your usage patterns and optimize requests
- Consider upgrading to a higher-tier plan
- Contact [email protected] to discuss custom rate limits