Launch Week I Recap
Introduction
Last week marked an exciting milestone for Firecrawl as we kicked off our inaugural Launch Week, unveiling a series of new features and updates designed to enhance your web scraping experience. Let’s take a look back at the improvements we introduced throughout the week.
We started Launch Week by introducing our highly anticipated Teams feature. Teams enables seamless collaboration on web scraping projects, allowing you to work alongside your colleagues and tackle complex data gathering tasks together. With updated pricing plans to accommodate teams of all sizes, Firecrawl is now an excellent platform for collaborative web scraping.
On Day 2, we improved your data collection capabilities by doubling the rate limits for our /scrape endpoint across all plans. This means you can now gather more data in the same amount of time, enabling you to take on larger projects and scrape more frequently.
Day 3 saw the unveiling of our new Map endpoint, which lets you quickly transform a single URL into a comprehensive map of an entire website. As a fast and easy way to gather all the URLs on a website, the Map endpoint opens up new possibilities for your web scraping projects.
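As a rough sketch of what a Map call looks like over plain HTTP, assuming the `/v1/map` path, a `{"url": ...}` request body, and a `links` field in the response (check the Firecrawl API reference for the authoritative contract):

```python
import requests

MAP_ENDPOINT = "https://api.firecrawl.dev/v1/map"

def map_site(site_url: str, api_key: str) -> list[str]:
    """One request in, a whole site's URL list out (field names assumed)."""
    resp = requests.post(
        MAP_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"url": site_url},
    )
    resp.raise_for_status()
    return resp.json().get("links", [])

def under_path(links: list[str], prefix: str) -> list[str]:
    """Keep only mapped URLs under a given prefix, e.g. the /blog section."""
    return [u for u in links if u.startswith(prefix)]
```

A common follow-up is to feed the filtered list into targeted scrapes, so you only pay for the pages you actually care about.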
Day 4 marked a significant release: Firecrawl /v1. This more reliable and developer-friendly API makes gathering web data easier. With new scrape formats, improved crawl status, enhanced markdown parsing, v1 support for all SDKs (including new Go and Rust SDKs), and an improved developer experience, v1 enhances your web scraping workflow.
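To make the new scrape formats concrete, here is a minimal sketch of a v1 scrape request asking for several output formats at once. The payload and response field names (`formats`, `data`) are assumptions based on the announcement, so verify them against the current API reference:

```python
import requests

def build_scrape_payload(url: str, formats: list[str]) -> dict:
    """Assemble the JSON body for a /v1/scrape request."""
    return {"url": url, "formats": list(formats)}

def scrape_page(url: str, api_key: str, formats: list[str]) -> dict:
    """Scrape one page and return its content in the requested formats."""
    resp = requests.post(
        "https://api.firecrawl.dev/v1/scrape",
        headers={"Authorization": f"Bearer {api_key}"},
        json=build_scrape_payload(url, formats),
    )
    resp.raise_for_status()
    return resp.json()["data"]
```

Requesting `["markdown", "html"]` in a single call avoids scraping the same page twice when you need both representations.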
On Day 5, we introduced a new feature: Real-Time Crawling with WebSockets. Our WebSocket-based method, Crawl URL and Watch, enables real-time data extraction and monitoring, allowing you to process data immediately, react to errors quickly, and know precisely when your crawl is complete.
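The event loop around a watched crawl boils down to dispatching on each streamed message. The sketch below handles hypothetical `document`, `error`, and `done` event types; the actual message shape and the WebSocket transport (e.g. the SDK's Crawl URL and Watch helper) are assumptions omitted here:

```python
def handle_crawl_event(event: dict, pages: list) -> bool:
    """Process one streamed crawl event; return True when the crawl is done."""
    kind = event.get("type")
    if kind == "document":
        pages.append(event.get("data", {}))  # a page arrived: process it immediately
    elif kind == "error":
        print(f"crawl error: {event.get('error')}")  # react to failures right away
    elif kind == "done":
        return True  # the crawl is complete
    return False
```

The payoff of the streaming model is exactly this shape: per-page work happens as results arrive, rather than after the whole crawl finishes.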
Day 6 brought v1 support for LLM Extract, enabling you to extract structured data from web pages using the extract format in /scrape. With the ability to pass a schema or just provide a prompt, LLM extraction is now more flexible and powerful.
We wrapped up Launch Week with the introduction of /crawl webhook support. You can now send notifications to your apps during a crawl, with four types of events: crawl.started, crawl.page, crawl.completed, and crawl.failed. This feature allows for more seamless integration of Firecrawl into your workflows.
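A webhook receiver for the four event types above reduces to a dispatcher like the one below. The payload fields (`type`, `data`) are assumptions about the delivery format; wire the function into the web framework of your choice:

```python
def process_crawl_webhook(payload: dict, store: list) -> str:
    """Route one webhook delivery by its event type; return the type handled."""
    kind = payload.get("type", "")
    if kind == "crawl.started":
        store.clear()  # a fresh crawl began: reset local state
    elif kind == "crawl.page":
        store.append(payload.get("data"))  # one page's results delivered
    elif kind in ("crawl.completed", "crawl.failed"):
        pass  # finalize results or alert on failure as needed
    else:
        raise ValueError(f"unknown event type: {kind!r}")
    return kind
```

Because `crawl.page` fires per page, your app can build up results incrementally instead of polling the crawl status endpoint.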
Wrapping Up
Launch Week showcased our commitment to continually evolving and improving Firecrawl to meet the needs of our users. From collaborative features like Teams to performance improvements like increased rate limits, and from new capabilities like the Map endpoint and LLM Extract to real-time workflows with WebSockets and webhooks, we’ve expanded the possibilities for your web scraping projects.
We’d like to thank our community for your support, feedback, and enthusiasm throughout Launch Week and beyond. Your input drives us to innovate and push the boundaries of what’s possible with web scraping.
Stay tuned for more updates as we continue to shape the future of data gathering together. Happy scraping!
About the Author
Eric Ciarla is the Chief Operating Officer (COO) of Firecrawl and leads marketing. He also worked on Mendable.ai, selling the product to companies like Snapchat, Coinbase, and MongoDB. He previously worked at Ford and Fracta as a Data Scientist. Eric also co-founded SideGuide, a tool for learning code within VS Code with 50,000 users.