Releases: mendableai/firecrawl
v1.4.4
🚀 Features & Enhancements
- Scrape API: Added action & wait time validation (#1146)
- Extraction Improvements: sources now shown out of __experimental (#1180) and multi-entity prompt improvements (#1181)
- Environment Setup: Added Serper & Search API env vars to docker-compose (#1147)
- Credit System Update: Now displays "tokens" instead of "credits" when out of tokens (#1178)
✏️ Examples
- Gemini 2.0 Crawler: Implemented new crawling example (#1161)
- Gemini TrendFinder: https://github.com/mendableai/gemini-trendfinder
- Normal Search to Open Deep Research: https://github.com/nickscamara/open-deep-research
🐛 Fixes
- HTML Transformer: Updated free_string function parameter type (#1163)
- Gemini Crawler: Updated library & improved PDF link extraction (#1175)
- Crawl Queue Worker: Only reports successful page count in num_docs (#1179)
- Scraping & URLs: relative URLs now resolve against the correct base URL (#584); batch scrape uses the scrape rate limit (#1182)
What's Changed
- [FIR-796] feat(api/types): Add action and wait time validation for scrape requests by @ftonato in #1146
- Implemented Gemini 2.0 crawler by @aparupganguly in #1161
- Add Serper and Search API env vars to docker-compose by @RealLukeMartin in #1147
- fix(html-transformer): Update free_string function parameter type by @carterlasalle in #1163
- Add detection of PDF/image sub-links and extract text via Gemini by @mayooear in #1173
- fix: update gemini library. extract pdf links from scraped content by @mayooear in #1175
- feat(v1/checkCredits): say "tokens" instead of "credits" if out of tokens by @mogery in #1178
- feat(v1/extract) Show sources out of __experimental by @nickscamara in #1180
- (feat/extract) Multi-entity prompt improvements by @nickscamara in #1181
- fix(queue-worker/crawl): only report successful page count in num_docs (FIR-960) by @mogery in #1179
- fix: relative url 2 full url use error base url by @dolonfly in #584
- fix(v1/batch/scrape): use scrape rate limit by @mogery in #1182
New Contributors
- @RealLukeMartin made their first contribution in #1147
- @carterlasalle made their first contribution in #1163
- @mayooear made their first contribution in #1173
- @dolonfly made their first contribution in #584
Full Changelog: v1.4.3...v1.4.4
Examples Week - v1.4.3
Summary of changes
- Open Deep Research: An open source version of OpenAI Deep Research.
- R1 Web Extractor Feature: New extraction capability added.
- O3-Mini Web Crawler: Introduces a lightweight crawler for specific use cases.
- Updated Model Parameters: Enhancements to o3-mini_company_researcher.
- URL Deduplication: Fixes handling of URLs ending with /, index.html, index.php, etc.
- Improved URL Blocking: Uses tldts parsing for better blocklist management.
- Valid JSON via rawHtml in Scrape: Ensures valid JSON extraction.
- Product Reviews Summarizer: Implements summarization using o3-mini.
- Scrape Options for Extract: Adds more configuration options for extracting data.
- O3-Mini Job Resource Extractor: Extracts job-related resources using o3-mini.
- Cached Scrapes for Extract Evals: Improves performance by using cached data for extract evals.
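The URL deduplication fix above treats a trailing slash, index.html, and index.php as spellings of the same page. A minimal sketch of the idea (illustrative only, not the actual generateURLPermutations code from #1134):

```python
def url_permutations(url):
    """Return equivalent spellings of a URL so a crawler can dedupe them.

    Sketch of the trailing-slash/index.html/index.php idea; the real
    implementation lives in crawl-redis/generateURLPermutations.
    """
    base = url
    # Strip a directory-index suffix if present
    for suffix in ("/index.html", "/index.php"):
        if base.endswith(suffix):
            base = base[: -len(suffix)]
    # Strip any trailing slash to get the canonical bare form
    base = base.rstrip("/")
    # Every spelling maps to the same permutation set
    return {base, base + "/", base + "/index.html", base + "/index.php"}

# All of these collapse to the same set, so the crawler visits the page once:
assert url_permutations("https://example.com/docs") == \
       url_permutations("https://example.com/docs/index.html")
```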
What's Changed
- You forgot an 'e' by @sami0596 in #1118
- added cached scrapes to extract by @rafaelsideguide in #1107
- Added R1 web extractor feature by @aparupganguly in #1115
- Feature o3-mini web crawler by @aparupganguly in #1120
- Updated Model Parameters (o3-mini_company_researcher) by @aparupganguly in #1130
- Fix corepack and self hosting setup by @rothnic in #1131
- fix(crawl-redis/generateURLPermutations): dedupe index.html/index.php/slash/bare URL ends (FIR-827) by @mogery in #1134
- feat(blocklist): Improve URL blocking with tldts parsing by @ftonato in #1117
- fix(scrape): allow getting valid JSON via rawHtml (FIR-852) by @mogery in #1138
- Implemented product reviews summarizer using o3-mini by @aparupganguly in #1139
- [Feat] Added scrapeOptions to extract by @rafaelsideguide in #1133
- Feature/o3 mini job resource extractor by @aparupganguly in #1144
New Contributors
- @sami0596 made their first contribution in #1118
- @aparupganguly made their first contribution in #1115
- @rothnic made their first contribution in #1131
Full Changelog: v1.4.2...v1.4.3
Extract and API Improvements - v1.4.2
We're excited to announce several new features and improvements:
New Features
- Added web search capabilities to the extract endpoint via the `enableWebSearch` parameter
- Introduced source tracking with the `__experimental_showSources` parameter
- Added configurable webhook events for crawl and batch operations
- New `timeout` parameter for the map endpoint
- Optional ad blocking with the `blockAds` parameter (enabled by default)
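Taken together, the new flags can be sketched as plain JSON request bodies. Only the parameter names come from the notes above; the surrounding body shapes and the timeout units are assumptions, not the documented API:

```python
import json

# Sketch of request bodies using the v1.4.2 flags (illustrative only).
extract_body = {
    "urls": ["https://example.com/*"],
    "prompt": "List every product name on this site",
    "enableWebSearch": True,             # new: supplement extraction with web search
    "__experimental_showSources": True,  # new: include source URLs in the result
}

map_body = {
    "url": "https://example.com",
    "timeout": 30000,  # new timeout parameter for the map endpoint (ms assumed)
}

scrape_body = {
    "url": "https://example.com",
    "blockAds": True,  # ad blocking is enabled by default; set False to disable
}

print(json.dumps(extract_body))
```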
Infrastructure & UI
- Enhanced proxy selection and infrastructure reliability
- Added domain checker tool to cloud platform
- Redesigned LLMs.txt generator interface for better usability
What's Changed
- (feat/extract) Refactor and Reranker improvements by @nickscamara in #1100
- Fix bad WebSocket URL in CrawlWatcher by @ProfHercules in #1053
- (feat/extract) Add sources to the extraction by @nickscamara in #1101
- feat(v1/map): Timeout parameter (FIR-393) by @mogery in #1105
- fix(scrapeURL/fire-engine): default to separate US-generic proxy list if no location is specified (FIR-728) by @mogery in #1104
- feat(scrapeUrl/fire-engine): add blockAds flag (FIR-692) by @mogery in #1106
- (feat/extract) Logs analyzeSchemaAndPrompt output did not match the schema by @nickscamara in #1108
- (feat/extract) Improved completions to use model's limits by @nickscamara in #1109
- feat(v0): store v0 users (team ID) in Redis for collection (FIR-698) by @mogery in #1111
- feat(github/ci): connect to tailscale (FIR-748) by @mogery in #1112
- (feat/conc) Move fully to a concurrency limit system by @nickscamara in #1045
- Added instructions for empty string to extract prompts by @rafaelsideguide in #1114
New Contributors
- @ProfHercules made their first contribution in #1053
Full Changelog: 1.4.1...v1.4.2
Firecrawl website changelog: https://firecrawl.dev/changelog
Extract Improvements - v1.4.1
We've significantly enhanced our data extraction capabilities with several key updates:
- Extract now returns a lot more data due to a new re-ranker system
- Improved infrastructure reliability
- Migrated from Cheerio to a high-performance Rust-based parser for faster and more memory-efficient parsing
- Enhanced crawl cancellation functionality for better control over running jobs
What's Changed
- Added "today" to extract prompts by @rafaelsideguide in #1084
- docs: update cancel crawl response by @ftonato in #1087
- port most of cheerio stuff to rust by @mogery in #1089
- Re-ranker changes by @nickscamara in #1090
- Rerank with lower threshold + back to map if length = 0 by @rafaelsideguide in #1086
Full Changelog: v1.4.0...1.4.1
Introducing /extract - v1.4.0
Get structured web data with /extract
We’re excited to announce the release of /extract - get data from any website with just a prompt. With /extract, you can retrieve any information from anywhere on a website without being limited by scraping roadblocks or the typical context constraints of LLMs.
No more manual copy-pasting, broken scraping scripts, or debugging LLM calls: it's never been easier to enrich your data, create datasets, or power AI applications with clean, structured data from any website.
Companies are already using extract to:
- Enrich CRM data
- Streamline KYB processes
- Monitor competitors
- Supercharge onboarding experiences
- Build targeted prospecting lists
Instead of spending hours manually researching, fixing broken scrapers, or piecing together data from multiple sources, simply specify what information you need and the target website, and let Firecrawl handle the entire retrieval process.
Specifically, you can:
- Extract structured data from entire websites using URL wildcards (https://example.com/*)
- Define custom schemas to capture exactly what you need—from simple product details to complex organizational structures
- Guide the extraction with custom prompts to ensure the LLM focuses on your target information
- Deploy anywhere with comprehensive support for Python, Node, cURL, and other popular tools. For no-code workflows, just connect via Zapier or use our API to set up integrations with other tools.
This versatility translates into a wide range of real-world applications—enabling you to enrich web data for just about any use case.
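For instance, the wildcard-plus-schema workflow above might look like the following request body. This is an illustrative sketch: the schema fields are made up for the example, and the exact body shape is an assumption:

```python
# A custom JSON Schema steering /extract toward exactly the fields you want.
# The field names here are hypothetical; define whatever your use case needs.
schema = {
    "type": "object",
    "properties": {
        "company_name": {"type": "string"},
        "products": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "price": {"type": "string"},
                },
            },
        },
    },
    "required": ["company_name"],
}

request_body = {
    "urls": ["https://example.com/*"],  # URL wildcard: extract across the whole site
    "prompt": "Extract the company name and each product's name and price.",
    "schema": schema,
}
```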
Limitations (and the road ahead)
Let's be honest: while /extract is pretty awesome at grabbing web data, it's not perfect yet. Here's what we're still working on:
- Big sites are tricky: it can't (yet!) grab every single product on Amazon in one go
- Complex searches need work: things like "find all posts from 2025" aren't quite there
- Sometimes it's a bit quirky: results can vary between runs, though it usually gets what you need
But here's the exciting part: we're seeing the future of web scraping take shape.
Try it out
Curious to try /extract out for yourself?
- Visit our playground to try out /extract: you get 500,000 tokens for free
- Dive into our Extract Beta documentation for detailed technical guidance and the API reference
- Want a no-code solution? Connect /extract to thousands of applications through our enhanced Zapier integration
That's all for now! Happy Extracting from the whole Firecrawl team 🔥
Full Changelog: v.1.3.0...v1.4.0
v1.3 - /extract improvements
What's Changed
- feat: new snips test framework (FIR-414) by @mogery in #1033
- (feat/extract) New re-ranker + multi entity extraction by @nickscamara in #1061
- __experimental_streamSteps by @nickscamara in #1063
Full Changelog: v1.2.1...v.1.3.0
v1.2.1 - /extract Beta Improvements
What's Changed
- Indexes, Caching for /extract, Improvements by @nickscamara in #1037
- [SDK] fixed none and undefined on response by @rafaelsideguide in #1034
- feat: use new random user agent instead of the old one by @1101-1 in #1038
- (feat/extract) Move extract to a queue system by @nickscamara in #1044
/extract (beta) changes
- We have updated the /extract endpoint to be asynchronous. When you make a request to /extract, it will return an ID that you can use to check the status of your extract job. If you are using our SDKs, no changes to your code are required, but please update to the latest SDK versions as soon as possible.
- For those using the API directly, we have made it backwards compatible. However, you have 10 days to update your implementation to the new asynchronous model.
- For more details about the parameters, refer to the docs sent to you.
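The asynchronous flow described above boils down to submit-then-poll. A minimal sketch, with the status-fetching call injected since the exact status endpoint and response fields here are assumptions rather than the documented API:

```python
import time

def poll_extract_job(job_id, fetch_status, interval=2.0, max_attempts=30):
    """Poll an async /extract job until it finishes (sketch).

    fetch_status(job_id) should return a dict with a "status" key, e.g. by
    GET-ing the status URL for the ID that /extract returned. The terminal
    status values used here are assumptions.
    """
    for _ in range(max_attempts):
        result = fetch_status(job_id)
        if result.get("status") in ("completed", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"extract job {job_id} did not finish in time")
```

With the official SDKs this loop is handled for you; the sketch only shows what "asynchronous" means for direct API users.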
New Contributors
Full Changelog: v1.2.0...v1.2.1
Changelog: https://www.firecrawl.dev/changelog#/extract-changes
v1.2.0 - v1/search is now available!
/v1/search
The search endpoint combines web search with Firecrawl’s scraping capabilities to return full page content for any query.
Include `scrapeOptions` with `formats: ["markdown"]` to get complete markdown content for each search result; otherwise it defaults to SERP results (url, title, description).
More info in the /v1/search docs.
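The two modes read as follows in a request body. Only `query` and `scrapeOptions.formats` come from the notes above; the helper and surrounding shape are a sketch, not the official SDK:

```python
def build_search_body(query, full_content=False):
    """Build a /v1/search request body (sketch)."""
    body = {"query": query}
    if full_content:
        # With scrapeOptions, each result includes full markdown page content;
        # without it, the endpoint returns SERP results (url, title, description).
        body["scrapeOptions"] = {"formats": ["markdown"]}
    return body

serp_only = build_search_body("firecrawl changelog")
with_markdown = build_search_body("firecrawl changelog", full_content=True)
```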
What's Changed
- /extract URL trace by @nickscamara in #1014
- (feat/v1) Search by @nickscamara in #1032
Full Changelog: v1.1.1...v1.2.0
v1.1.1
What's Changed
- feat(python-sdk): Make API key optional for self-hosted instances by @RutamBhagat in #990
- Sitemap fixes by @mogery in #1010
- fixed optional+default bug on llm schema by @rafaelsideguide in #955
- [FIR-37] feat: extract and return favicon URL during scraping by @ftonato in #1018
- fix: merge mock success data by @yujunhui in #1013
- feat(rust-sdk): Make API key optional for self-hosted instances by @RutamBhagat in #991
- feat(scrapeURL/pdf): switch to MU (FIR-356) by @mogery in #1016
New Contributors
Full Changelog: v1.1.0...v1.1.1
v1.1.0
Starting today we are going to be posting weekly releases here and on firecrawl.dev/changelog. This release is a summary of all the improvements and fixes we have pushed since the v1 release. Thank you all for the contributions!
Changelog Highlights
Feature Enhancements
- New Features:
- Geolocation, mobile scraping, 4x faster parsing, and better webhooks.
- Credit packs, auto-recharges, and batch scraping support.
- Iframe support and query parameter differentiation for URLs.
- Similar URL deduplication.
- Enhanced map ranking and sitemap fetching.
Performance Improvements
- Faster crawl status filtering and improved map ranking algorithm.
- Optimized Kubernetes setup and simplified build processes.
- Improved sitemap discoverability and performance.
Bug Fixes
- Resolved issues:
- Badly formatted JSON, scrolling actions, and encoding errors.
- Crawl limits, relative URLs, and missing error handlers.
- Fixed self-hosted crawling inconsistencies and schema errors.
SDK Updates
- Added dynamic WebSocket imports with fallback support.
- Optional API keys for self-hosted instances.
- Improved error handling across SDKs.
Documentation Updates
- Improved API docs and examples.
- Updated self-hosting URLs and added Kubernetes optimizations.
- Added articles: mastering `/scrape` and `/crawl`.
Miscellaneous
- Added new Firecrawl examples
- Enhanced metadata handling for webhooks and improved sitemap fetching.
- Updated blocklist and streamlined error messages.
What's Changed
- Add docs to api spec example by @ericciarla in #637
- [Docs] upgraded the path of the self-hosted documentation URL to `/v1` by @shige in #635
- Removal of generic classnames/ids from onlyMainContent cleaning by @nickscamara in #638
- Improved team credits check and billing notifications by @nickscamara in #640
- Fixed 500 errors when JSON is badly formatted by @nickscamara in #648
- Better engine for wait + other params by @nickscamara in #649
- fix(py-sdk): removed asyncio package by @rafaelsideguide in #654
- perf(js-sdk): move `dotenv` and `uuid` to `devDependencies`, fix `zod` import by @MonsterDeveloper in #614
- build(js-sdk): simplify build process by @MonsterDeveloper in #611
- fix(v0/crawl-status): don't crash on big crawls when requesting jobs from supa by @mogery in #653
- Manual Rate Limiter for select team ids by @nickscamara in #664
- O1 crawler example by @ericciarla in #676
- [Bug] Fixed screenshot typo and added test for fullpage screenshot by @rafaelsideguide in #677
- v1/map improvements + higher limits by @nickscamara in #674
- Remove print statement in map by @anjor in #612
- fix wrong link to self host documentation by @itasli in #623
- feat: kubernetes example optimization by @yekkhan in #639
- Rust SDK 1.0.0 by @mogery in #689
- feat: Actions by @mogery in #682
- Fix the error message when trying search in v0 by @nickscamara in #690
- remove space in the examples/o1_web_crawler folder name by @h4r5h4 in #679
- o1 job recommender example by @ericciarla in #707
- Move auth and check credits operations into an RPC by @mogery in #704
- bugfix: using onlyIncludeTags and removeTags together by @skeptrunedev in #685
- Concurrency limits by @mogery in #721
- Docs: Remove wait_until_done from python-sdk example by @bytrangle in #728
- Improves error handler in Node SDK to return the status code by @nickscamara in #727
- Fixes crawl failed and webhooks not working properly by @nickscamara in #731
- [BUG] Fixed URLs with params by @rafaelsideguide in #732
- Fixed the self host issues where methods don't work by @nickscamara in #733
- Make sure the entrypoint script has the correct line endings by @busaud in #753
- Rm cluster mode + rm fly deployments by @nickscamara in #754
- Fixed Issue #734 by @Harsh0707005 in #747
- bugfix: self-host crawling doesnt respect limit by @busaud in #755
- [BUG] Fixed missing error handling in JS-SDK by @rafaelsideguide in #759
- [SKD] Cancel Crawl by @rafaelsideguide in #760
- fixed developer.notion special case by @rafaelsideguide in #762
- Spelling Corrections in README by @fadkeabhi in #763
- [RPC] Improvements to credit_usage rpc by @nickscamara in #767
- [BUG] filters failed and unknown jobs now by @rafaelsideguide in #761
- [Doc] Better explained how includePaths and excludePaths work by @rafaelsideguide in #766
- Update README.md by @busaud in #757
- ADDED : Contributors and Back to top by @Ruhi14 in #768
- Retries for ACUC RPC + Price credits fallback by @nickscamara in #773
- [BUG] added check files on crawl by @rafaelsideguide in #779
- [Feat] Performance improvements crawl status filters by @rafaelsideguide in #780
- Admin alerts for high usage by @nickscamara in #783
- Geolocation support for Firecrawl by @nickscamara in #784
- Return all the website metadata by @nickscamara in #785
- Extractor options logging v1 fix by @nickscamara in #788
- Update requirements.txt by @rishi-raj-jain in #790
- Improved /map ranking algorithm for search queries by @nickscamara in #798
- Fix Typos and Grammar in `SELF_HOST.md` by @Mefisto04 in #799
- [Bug] encoding error for special token by @rafaelsideguide in #793
- [BUG-SDK] missing error in response by @rafaelsideguide in #796
- examples: sales web crawler by @rishi-raj-jain in #797
- feat: clear ACUC cache endpoint based on team ID by @mogery in #807
- feat: skipTlsVerification by @tomkosm in #808
- feat: Batch Scrape by @mogery in #789
- feat: Auto Recharge Credits + Credit Packs by @nickscamara in #809
- Remove ph logs for single_urls by @nickscamara in #829
- Bump to gemini-1.5-pro-002 website_qa_with_gemini_caching.ipynb and add flash example by @s-smits in #739
- Add SearchApi as a Web Search Tool by @SebastjanPrachovskij in #628
- RM wait before interacting by @nickscamara in #838
- chore(README.md): use `satisfies` instead of `as` for ts example by @twlite in #831
- Geo-location rename to location by @nickscamara in #830
- concurrency limit fix by @mogery in #824
- [feat] Iframe support by @tomkosm in #855
- Fix go parser by @tomkosm in #856
- Support for the 2 new actions by @nickscamara in #858
- Adds support for mobile web scraping + mobile screenshot by @nickscamara in #847
- [Feat] Added remove base64 images options (true by default) by @rafaelsideguide in #867
- [Fix] Prevent Python Firecrawl logger from interfering with loggers in client applications by @reasonmethis in #613
- [BUG] Added trycatch and removed redundancy by @rafaelsideguide in #869
- Update CONTRIBUTING.md by @swyxio in https://github.com/mendableai/firecrawl/p...