Releases: mendableai/firecrawl

v1.4.4

14 Feb 16:03

🚀 Features & Enhancements

  • Scrape API: Added action & wait time validation (#1146)
  • Extraction Improvements:
    • Added detection of PDF/image sub-links & extracted text via Gemini (#1173)
    • Multi-entity prompt enhancements for extraction (#1181)
    • Show sources out of __experimental in extraction (#1180)
  • Environment Setup: Added Serper & Search API env vars to docker-compose (#1147)
  • Credit System Update: Now displays "tokens" instead of "credits" when out of tokens (#1178)

🐛 Fixes

  • HTML Transformer: Updated free_string function parameter type (#1163)
  • Gemini Crawler: Updated library & improved PDF link extraction (#1175)
  • Crawl Queue Worker: Only reports successful page count in num_docs (#1179)
  • Scraping & URLs:
    • Fixed relative URL conversion (#584)
    • Enforced scrape rate limit in batch scraping (#1182)

What's Changed

  • [FIR-796] feat(api/types): Add action and wait time validation for scrape requests by @ftonato in #1146
  • Implemented Gemini 2.0 crawler by @aparupganguly in #1161
  • Add Serper and Search API env vars to docker-compose by @RealLukeMartin in #1147
  • fix(html-transformer): Update free_string function parameter type by @carterlasalle in #1163
  • Add detection of PDF/image sub-links and extract text via Gemini by @mayooear in #1173
  • fix: update gemini library. extract pdf links from scraped content by @mayooear in #1175
  • feat(v1/checkCredits): say "tokens" instead of "credits" if out of tokens by @mogery in #1178
  • feat(v1/extract) Show sources out of __experimental by @nickscamara in #1180
  • (feat/extract) Multi-entity prompt improvements by @nickscamara in #1181
  • fix(queue-worker/crawl): only report successful page count in num_docs (FIR-960) by @mogery in #1179
  • fix: relative url 2 full url use error base url by @dolonfly in #584
  • fix(v1/batch/scrape): use scrape rate limit by @mogery in #1182

Full Changelog: v1.4.3...v1.4.4

Examples Week - v1.4.3

07 Feb 16:41

Summary of changes

  • Open Deep Research: An open-source version of OpenAI's Deep Research.
  • R1 Web Extractor Feature: New extraction capability added.
  • O3-Mini Web Crawler: Introduces a lightweight crawler for specific use cases.
  • Updated Model Parameters: Enhancements to o3-mini_company_researcher.
  • URL Deduplication: Fixes handling of URLs ending with /, index.html, index.php, etc.
  • Improved URL Blocking: Uses tldts parsing for better blocklist management.
  • Valid JSON via rawHtml in Scrape: Ensures valid JSON extraction.
  • Product Reviews Summarizer: Implements summarization using o3-mini.
  • Scrape Options for Extract: Adds more configuration options for extracting data.
  • O3-Mini Job Resource Extractor: Extracts job-related resources using o3-mini.
  • Cached Scrapes for Extract Evals: Improves performance by using cached data for extraction evals.


Full Changelog: v1.4.2...v1.4.3

Extract and API Improvements - v1.4.2

31 Jan 16:23

We're excited to announce several new features and improvements:

New Features

  • Added web search capabilities to the extract endpoint via the enableWebSearch parameter
  • Introduced source tracking with __experimental_showSources parameter
  • Added configurable webhook events for crawl and batch operations
  • New timeout parameter for map endpoint
  • Optional ad blocking with blockAds parameter (enabled by default)
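
For a concrete picture of how these parameters fit together, here is a minimal sketch using plain HTTP calls. The endpoint paths, payload shapes, and the millisecond unit for timeout are assumptions based on the notes above, not an authoritative API reference:

```python
import requests

API_KEY = "fc-YOUR_API_KEY"  # hypothetical placeholder
headers = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

# /extract with web search enabled and experimental source tracking
extract_job = requests.post(
    "https://api.firecrawl.dev/v1/extract",
    headers=headers,
    json={
        "urls": ["https://example.com/*"],
        "prompt": "Extract the company mission statement.",
        "enableWebSearch": True,             # new in v1.4.2
        "__experimental_showSources": True,  # new in v1.4.2
    },
).json()

# /map with the new timeout parameter (assumed to be milliseconds)
site_map = requests.post(
    "https://api.firecrawl.dev/v1/map",
    headers=headers,
    json={"url": "https://example.com", "timeout": 30000},
).json()

# /scrape with ad blocking explicitly disabled (blockAds is on by default)
page = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers=headers,
    json={"url": "https://example.com", "blockAds": False},
).json()
```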

Infrastructure & UI

  • Enhanced proxy selection and infrastructure reliability
  • Added domain checker tool to cloud platform
  • Redesigned LLMs.txt generator interface for better usability

What's Changed

  • (feat/extract) Refactor and Reranker improvements by @nickscamara in #1100
  • Fix bad WebSocket URL in CrawlWatcher by @ProfHercules in #1053
  • (feat/extract) Add sources to the extraction by @nickscamara in #1101
  • feat(v1/map): Timeout parameter (FIR-393) by @mogery in #1105
  • fix(scrapeURL/fire-engine): default to separate US-generic proxy list if no location is specified (FIR-728) by @mogery in #1104
  • feat(scrapeUrl/fire-engine): add blockAds flag (FIR-692) by @mogery in #1106
  • (feat/extract) Logs analyzeSchemaAndPrompt output did not match the schema by @nickscamara in #1108
  • (feat/extract) Improved completions to use model's limits by @nickscamara in #1109
  • feat(v0): store v0 users (team ID) in Redis for collection (FIR-698) by @mogery in #1111
  • feat(github/ci): connect to tailscale (FIR-748) by @mogery in #1112
  • (feat/conc) Move fully to a concurrency limit system by @nickscamara in #1045
  • Added instructions for empty string to extract prompts by @rafaelsideguide in #1114

Full Changelog: 1.4.1...v1.4.2
Firecrawl website changelog: https://firecrawl.dev/changelog

Extract Improvements - v1.4.1

24 Jan 22:50

We've significantly enhanced our data extraction capabilities with several key updates:

  • Extract now returns a lot more data due to a new re-ranker system
  • Improved infrastructure reliability
  • Migrated from Cheerio to a high-performance Rust-based parser for faster and more memory-efficient parsing
  • Enhanced crawl cancellation functionality for better control over running jobs

Full Changelog: v1.4.0...1.4.1

Introducing /extract - v1.4.0

20 Jan 14:17

Get structured web data with /extract

We’re excited to announce the release of /extract - get data from any website with just a prompt. With /extract, you can retrieve any information from anywhere on a website without being limited by scraping roadblocks or the typical context constraints of LLMs.

No more manual copy-pasting, broken scraping scripts, or debugging LLM calls. It's never been easier to enrich your data, create datasets, or power AI applications with clean, structured data from any website.

Companies are already using /extract to:

  • Enrich CRM data
  • Streamline KYB processes
  • Monitor competitors
  • Supercharge onboarding experiences
  • Build targeted prospecting lists

Instead of spending hours manually researching, fixing broken scrapers, or piecing together data from multiple sources, simply specify the information you need and the target website, and let Firecrawl handle the entire retrieval process.

Specifically, you can:

  • Extract structured data from entire websites using URL wildcards (https://example.com/*)
  • Define custom schemas to capture exactly what you need—from simple product details to complex organizational structures
  • Guide the extraction with custom prompts to ensure the LLM focuses on your target information
  • Deploy anywhere with comprehensive support for Python, Node, cURL, and other popular tools. For no-code workflows, just connect via Zapier or use our API to set up integrations with other tools.

This versatility translates into a wide range of real-world applications—enabling you to enrich web data for just about any use case.
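
As a rough illustration, a wildcard extraction with a custom schema and a guiding prompt might look like the sketch below. The payload shape follows the points above, but the schema fields and endpoint details are illustrative assumptions; check the Extract docs for the authoritative reference:

```python
import requests

API_KEY = "fc-YOUR_API_KEY"  # hypothetical placeholder

response = requests.post(
    "https://api.firecrawl.dev/v1/extract",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        # Wildcard URL: extract across the whole site, not just one page
        "urls": ["https://example.com/*"],
        # Custom prompt to focus the LLM on the target information
        "prompt": "Extract every product name and its listed price.",
        # Custom schema describing exactly the structure you want back
        "schema": {
            "type": "object",
            "properties": {
                "products": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "name": {"type": "string"},
                            "price": {"type": "string"},
                        },
                    },
                },
            },
        },
    },
)
print(response.json())
```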

Limitations - (and the road ahead)

Let's be honest: while /extract is pretty awesome at grabbing web data, it's not perfect yet. Here's what we're still working on:

  • Big sites are tricky - it can't (yet!) grab every single product on Amazon in one go
  • Complex searches need work - things like "find all posts from 2025" aren't quite there
  • Sometimes it's a bit quirky - results can vary between runs, though it usually gets what you need

But here's the exciting part: we're seeing the future of web scraping take shape.

Try it out

Curious to try /extract for yourself?

  • Visit our playground to try out /extract - you get 500,000 tokens for free
  • Dive into our Extract Beta documentation for detailed technical guidance and an API reference
  • Want a no-code solution? Connect /extract to thousands of applications through our enhanced Zapier integration

That's all for now! Happy Extracting from the whole Firecrawl team 🔥

Full Changelog: v.1.3.0...v1.4.0

v1.3 - /extract improvements

14 Jan 22:40

Full Changelog: v1.2.1...v.1.3.0

v1.2.1 - /extract Beta Improvements

10 Jan 17:54

What's Changed

/extract (beta) changes

  • We have updated the /extract endpoint to be asynchronous. When you make a request to /extract, it now returns an ID that you can use to check the status of your extract job. If you are using our SDKs, no changes to your code are required, but please update to the latest SDK versions as soon as possible.

  • For those using the API directly, we have made it backwards compatible. However, you have 10 days to update your implementation to the new asynchronous model.

  • For more details about the parameters, refer to the docs sent to you.
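
For API users, the new flow is submit-then-poll. Here is a minimal sketch, assuming the job submission returns an id field and that status can be read from a GET endpoint at /v1/extract/{id}; verify both against the docs:

```python
import time

import requests

API_KEY = "fc-YOUR_API_KEY"  # hypothetical placeholder
headers = {"Authorization": f"Bearer {API_KEY}"}

# 1. Submit the extract job; the response now contains a job ID.
job = requests.post(
    "https://api.firecrawl.dev/v1/extract",
    headers=headers,
    json={"urls": ["https://example.com/*"], "prompt": "Extract the page title."},
).json()
job_id = job["id"]

# 2. Poll until the job reports a terminal status (status names assumed).
while True:
    status = requests.get(
        f"https://api.firecrawl.dev/v1/extract/{job_id}", headers=headers
    ).json()
    if status.get("status") in ("completed", "failed"):
        break
    time.sleep(2)

print(status)
```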

Full Changelog: v1.2.0...v1.2.1

Changelog: https://www.firecrawl.dev/changelog#/extract-changes

v1.2.0 - v1/search is now available!

02 Jan 23:24

/v1/search

The search endpoint combines web search with Firecrawl’s scraping capabilities to return full page content for any query.

Include scrapeOptions with formats: ["markdown"] to get complete markdown content for each search result; otherwise, the endpoint defaults to returning SERP results (url, title, description).

More info in the /v1/search docs.
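
A hedged sketch of the difference: without scrapeOptions you get SERP-style results (url, title, description); adding formats: ["markdown"] also returns full page markdown for each hit. The payload and response shapes below are assumptions drawn from the description above:

```python
import requests

API_KEY = "fc-YOUR_API_KEY"  # hypothetical placeholder
headers = {"Authorization": f"Bearer {API_KEY}"}

results = requests.post(
    "https://api.firecrawl.dev/v1/search",
    headers=headers,
    json={
        "query": "firecrawl changelog",
        # Omit scrapeOptions to get SERP results only (url, title, description)
        "scrapeOptions": {"formats": ["markdown"]},
    },
).json()

# Assumed response envelope: a "data" list of result objects
for item in results.get("data", []):
    print(item.get("url"), item.get("title"))
```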

Full Changelog: v1.1.1...v1.2.0

v1.1.1

30 Dec 15:30

Full Changelog: v1.1.0...v1.1.1

v1.1.0

27 Dec 18:34

Starting today, we will be posting weekly releases here and on firecrawl.dev/changelog. This release is a summary of all the improvements and fixes we have pushed since the v1 release. Thank you all for the contributions!

Changelog Highlights

Feature Enhancements

  • New Features:
    • Geolocation, mobile scraping, 4x faster parsing, and better webhooks.
    • Credit packs, auto-recharges, and batch scraping support.
    • Iframe support and query parameter differentiation for URLs.
    • Similar URL deduplication.
    • Enhanced map ranking and sitemap fetching.

Performance Improvements

  • Faster crawl status filtering and improved map ranking algorithm.
  • Optimized Kubernetes setup and simplified build processes.
  • Improved sitemap discoverability and performance.

Bug Fixes

  • Resolved issues:
    • Badly formatted JSON, scrolling actions, and encoding errors.
    • Crawl limits, relative URLs, and missing error handlers.
  • Fixed self-hosted crawling inconsistencies and schema errors.

SDK Updates

  • Added dynamic WebSocket imports with fallback support.
  • Optional API keys for self-hosted instances.
  • Improved error handling across SDKs.

Documentation Updates

  • Improved API docs and examples.
  • Updated self-hosting URLs and added Kubernetes optimizations.
  • Added articles on mastering /scrape and /crawl.

Miscellaneous

  • Added new Firecrawl examples
  • Enhanced metadata handling for webhooks and improved sitemap fetching.
  • Updated blocklist and streamlined error messages.
