Screaming Frog SEO Spider Free Deal

Name: Screaming Frog SEO Spider

What it does: It’s the industry-leading website crawler for Windows, macOS and Ubuntu, trusted by thousands of SEOs and agencies worldwide for technical SEO site audits.

Deal Details: Screaming Frog SEO Spider offers a free version limited to 500 URLs, or a paid licence at £199/year for unlimited crawls.

The Screaming Frog SEO Spider is a website crawler that helps improve onsite SEO by auditing for common SEO issues.

The industry-leading website crawler for Windows, macOS and Ubuntu is trusted by thousands of SEOs and agencies worldwide for technical SEO site audits.

Users download and crawl 500 URLs for free, or buy a licence for £199 per year to remove the limit and access advanced features.

SEO Spider Tool capabilities

The Screaming Frog SEO Spider identifies over 300 SEO issues, warnings and opportunities to improve SEO, website health and user experience.

Issues are errors or problems that should ideally be fixed.

Warnings are not necessarily an issue, but should be checked and potentially fixed.

Opportunities are potential areas for optimisation and improvement.

Priorities are based on potential impact according to broadly accepted SEO best practice; they indicate areas that may require more attention, rather than prescribing definitive action.

Issues provide direction to users, who can interpret the data into prioritised actions appropriate to each unique business, website and set of objectives.

Issue guidance is handwritten by professional SEOs, not AI.

Core features and functionality

Find Broken Links

Users crawl a website instantly and find broken links (404s) and server errors.

Teams bulk export the errors and source URLs to fix, or send to a developer.
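
As a rough sketch of what such an audit does under the hood, status codes from a crawl can be bucketed into the conventional 3xx/4xx/5xx categories. This is a simplified illustration, not Screaming Frog’s exact labelling.

```python
# Sketch: bucket HTTP status codes into audit categories, assuming the
# standard 3xx/4xx/5xx conventions (not the SEO Spider's exact labels).
def classify_status(code):
    """Map an HTTP status code (or None for no response) to an audit bucket."""
    if code is None:
        return "no response"
    if 200 <= code < 300:
        return "success"
    if 300 <= code < 400:
        return "redirect"
    if 400 <= code < 500:
        return "client error"
    if 500 <= code < 600:
        return "server error"
    return "other"
```

Errors grouped this way can then be exported alongside their source URLs for a developer to fix.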

Audit Redirects

Users find temporary and permanent redirects, identify redirect chains and loops, or upload a list of URLs to audit in a site migration.

Analyse Page Titles & Meta Data

Teams analyse page titles and meta descriptions during a crawl and identify those that are too long, short, missing, or duplicated across the site.
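
A minimal sketch of this kind of check is below. The 30–60 character thresholds are commonly cited guidelines, not Screaming Frog’s exact pixel-based limits.

```python
# Sketch: flag page titles the way a crawl audit might. Thresholds are
# commonly cited character guidelines (an assumption), not the tool's
# pixel-width checks; the same logic applies to meta descriptions.
def audit_title(title, seen_titles):
    """Return a list of issues for one page title; track duplicates in seen_titles."""
    issues = []
    if not title:
        issues.append("missing")
    else:
        if len(title) > 60:
            issues.append("over 60 characters")
        if len(title) < 30:
            issues.append("below 30 characters")
        if title in seen_titles:
            issues.append("duplicate")
        seen_titles.add(title)
    return issues
```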

Discover Duplicate Content

Users discover exact duplicate URLs with an MD5 hash check, find partially duplicated elements such as page titles, descriptions or headings, and surface low-content pages.
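
The exact-duplicate idea can be sketched in a few lines: pages whose hashed HTML matches are grouped as duplicates. The whitespace normalisation step here is an assumption for illustration.

```python
import hashlib
from collections import defaultdict

# Sketch of an MD5-based exact-duplicate check: pages whose (whitespace-
# normalised) HTML hashes to the same digest are grouped together.
def duplicate_groups(pages):
    """pages: dict of URL -> HTML body. Returns lists of URLs sharing a hash."""
    groups = defaultdict(list)
    for url, html in pages.items():
        digest = hashlib.md5(" ".join(html.split()).encode("utf-8")).hexdigest()
        groups[digest].append(url)
    return [urls for urls in groups.values() if len(urls) > 1]
```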

Extract Data with XPath

Teams collect any data from the HTML of a web page using CSS Path, XPath or regex.

This might include social meta tags, additional headings, prices, SKUs or more.
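
Of the three selector types, regex is the simplest to sketch. The `data-sku` attribute below is a made-up example pattern, not markup any particular site uses.

```python
import re

# Sketch: scraping values out of raw HTML with a regular expression, one
# of the three extraction methods mentioned (CSS Path, XPath, regex).
# The data-sku attribute is a hypothetical example.
def extract_skus(html):
    """Pull data-sku attribute values out of an HTML string."""
    return re.findall(r'data-sku="([^"]+)"', html)
```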

Review Robots & Directives

Users view URLs blocked by robots.txt, meta robots or X-Robots-Tag directives such as noindex or nofollow, as well as canonicals and rel="next" and rel="prev".
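
The robots.txt part of this can be reproduced with Python’s standard-library parser, which implements the same protocol the SEO Spider audits:

```python
from urllib.robotparser import RobotFileParser

# Sketch: checking which URL paths a robots.txt file disallows, using the
# standard-library parser for the robots exclusion protocol.
rules = """\
User-agent: *
Disallow: /private/
"""
parser = RobotFileParser()
parser.parse(rules.splitlines())

blocked = [p for p in ["/private/report", "/blog/post"]
           if not parser.can_fetch("*", p)]
```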

Generate XML Sitemaps

Teams quickly create XML Sitemaps and Image XML Sitemaps, with advanced configuration over URLs to include, last modified, priority and change frequency.
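
The underlying output follows the sitemaps.org protocol, which is simple to generate by hand. This sketch covers `<loc>` and `<lastmod>`; priority and change frequency would be added the same way.

```python
import xml.etree.ElementTree as ET

# Sketch of XML sitemap generation per the sitemaps.org protocol.
def build_sitemap(entries):
    """entries: list of (loc, lastmod) tuples. Returns sitemap XML as a string."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for loc, lastmod in entries:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")
```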

Integrate with GA, GSC & PSI

Users connect to the Google Analytics, Search Console and PageSpeed Insights APIs and fetch user and performance data for all URLs in a crawl for greater insight.

Crawl JavaScript Websites

Teams render web pages using the integrated Chromium WRS to crawl dynamic, JavaScript rich websites and frameworks, such as Angular, React and Vue.js.

Visualise Site Architecture

Users evaluate internal linking and URL structure using interactive crawl and directory force-directed diagrams and tree graph site visualisations.

Schedule Audits

Teams schedule crawls to run at chosen intervals and auto export crawl data to any location, including Google Sheets.

Users automate entirely via command line.
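
A scheduled job might build a headless crawl command like the sketch below. The flag names (`--crawl`, `--headless`, `--save-crawl`, `--output-folder`) follow the SEO Spider’s command-line documentation as commonly described, but should be verified against the user guide for your installed version.

```python
# Sketch: assembling a headless crawl command for scheduled automation.
# Binary and flag names are assumptions based on the documented CLI and
# may differ by OS and version - check the current user guide.
def crawl_command(url, output_dir):
    return [
        "screamingfrogseospider",  # binary name on Linux; differs per OS
        "--crawl", url,
        "--headless",              # run without the UI
        "--save-crawl",            # persist the crawl file
        "--output-folder", output_dir,
    ]
```

The resulting list could be passed to a scheduler or `subprocess.run` to kick off crawls at chosen intervals.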

Compare Crawls & Staging

Users track progress of SEO issues and opportunities and see what’s changed between crawls.

Teams compare staging against production environments using advanced URL Mapping.
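
The idea behind URL mapping can be sketched as a regex rewrite that puts staging URLs onto the production host so equivalent pages can be matched up. The hostnames here are placeholders.

```python
import re

# Sketch of URL mapping for a staging-vs-production comparison: rewrite
# the staging host to the production host so pages pair up. Hostnames
# are hypothetical examples.
def map_to_production(url):
    return re.sub(r"^https://staging\.example\.com", "https://www.example.com", url)
```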

Free vs Paid version features

  • Find Broken Links, Errors & Redirects
  • Analyse Page Titles & Meta Data
  • Review Meta Robots & Directives
  • Audit hreflang Attributes
  • Discover Exact Duplicate Pages
  • Generate XML Sitemaps
  • Site Visualisations
  • Crawl Limit: Free version allows 500 URLs, Paid version allows Unlimited URLs (dependent on allocated memory and storage)
  • Scheduling (Paid only)
  • Crawl Configuration (Paid only)
  • Save & Open Crawls (Paid only)
  • JavaScript Rendering (Paid only)
  • Crawl Comparison (Paid only)
  • Near Duplicate Content (Paid only)
  • Custom robots.txt (Paid only)
  • Mobile Usability (Paid only)
  • AMP Crawling & Validation (Paid only)
  • Structured Data & Validation (Paid only)
  • Spelling & Grammar Checks (Paid only)
  • Custom Source Code Search (Paid only)
  • Custom Extraction (Paid only)
  • Custom JavaScript (Paid only)
  • Crawl with OpenAI & Gemini (Paid only)
  • Google Analytics Integration (Paid only)
  • Search Console Integration (Paid only)
  • PageSpeed Insights Integration (Paid only)
  • Accessibility Auditing (Paid only)
  • Link Metrics Integration (Paid only)
  • Forms Based Authentication (Paid only)
  • Segmentation (Paid only)
  • Looker Studio Crawl Report (Paid only)
  • Free Technical Support

What the SEO Spider crawls and reports on

The Screaming Frog SEO Spider is an SEO auditing tool, built by real SEOs with thousands of users worldwide.

  • Errors – Broken links and failed requests (no responses, 4XX client errors and 5XX server errors).
  • Redirects – Permanent, temporary, JavaScript redirects, meta and HTTP refreshes, redirect chains and loops.
  • Blocked URLs – View and audit URLs disallowed by the robots.txt protocol.
  • Blocked Resources – View and audit blocked resources in rendering mode.
  • External Links – View all external links, their status codes and source pages.
  • Site Structure – Analyse site architecture, indexability and crawl depth by directory.
  • Internal Linking – Analyse internal links, link counts, crawl depth and calculate internal Link Score.
  • Anchor Text – View aggregated and granular anchor text of all links. Identify non-descriptive anchor text to improve for users and search engines.
  • Security – Discover insecure pages, mixed content, insecure forms, missing security headers and more.
  • URL Issues – Non-ASCII characters, underscores, uppercase characters, parameters, long URLs, repetitive paths or broken bookmarks.
  • Page Titles – Identify missing, duplicate, long, short or multiple titles.
  • Meta Description – Identify missing, duplicate, long, short or multiple descriptions.
  • Headings – View h1 and h2 headings, including if any are missing, duplicate, long, short, multiple or non-sequential.
  • Content – View word count, analyse readability and identify low relevance content that deviates from the average content focus of the site.
  • Directives – View directives in meta robots or the X-Robots-Tag header, including noindex, nofollow, none, nosnippet and more.
  • Canonicals – Analyse canonical link elements and canonical HTTP headers.
  • Pagination – View rel="next" and rel="prev" attributes, as well as common set up issues with paginated pages.
  • hreflang Attributes – Audit missing return tags, inconsistent and incorrect language codes, non-200 response hreflang and more.
  • Duplicate Content – Discover exact, near duplicate and semantically similar pages using algorithmic checks and vector embeddings.
  • Rendering – Crawl JavaScript frameworks like AngularJS and React, by crawling the rendered HTML after JavaScript has executed.
  • JavaScript – Identify content, links, page titles, descriptions, headings and other key elements that rely on JavaScript.
  • Images – Find all images and discover those that are too large, missing alt text, background images, missing size attributes and more.
  • Validation – Find issues that can impact search bots from being able to parse and understand a page reliably.
  • User-Agent – Crawl as a search bot such as Googlebot or Bingbot, AI crawlers, or a custom UA.
  • Custom HTTP Headers – Supply any header value in a request, from Accept-Language to cookie.
  • Custom Source Code Search – Search for anything in the source code of a website such as analytics tracking tags, keywords or code.
  • Custom Extraction – Scrape any data from the HTML of a URL using XPath, CSS Path selectors or regex.
  • Custom JavaScript – Run custom JavaScript snippets while crawling, to extract data, trigger mouseover events, scroll a page or nearly anything else users are able to do in the Chrome console.
  • Google Analytics Integration – Connect to the Google Analytics API and pull in user and conversion data directly during a crawl.
  • Google Search Console Integration – Connect to the Google Search Analytics and URL Inspection APIs and collect performance and index status data in bulk.
  • PageSpeed Insights Integration – Connect to the PSI API for Lighthouse metrics, speed opportunities, diagnostics and Chrome User Experience Report (CrUX) data at scale.
  • Mobile Usability – Use Lighthouse to check for common mobile usability issues.
  • External Link Metrics – Pull external link metrics from Majestic, Ahrefs and Moz APIs into a crawl to perform content audits or profile links.
  • XML Sitemap Generation – Create an XML sitemap and an image sitemap using the SEO Spider.
  • Custom robots.txt – Download, edit and test a site’s robots.txt using the new custom robots.txt.
  • Rendered Screen Shots – Fetch, view and analyse the rendered pages crawled.
  • Store & View HTML & Rendered HTML – Essential for analysing the DOM.
  • AMP Crawling & Validation – Crawl AMP URLs and validate them, using the official integrated AMP Validator.
  • XML Sitemap Analysis – Crawl an XML Sitemap independently or part of a crawl, to find missing, non-indexable and orphan pages.
  • Visualisations – Analyse the internal linking and URL structure of the website, using the crawl and directory tree force-directed diagrams and tree graphs.
  • Structured Data & Validation – Extract and validate structured data against Schema.org specifications and Google rich result features.
  • Spelling & Grammar – Spell and grammar check websites in over 25 different languages.
  • Website Accessibility – Use the open-source AXE accessibility rule set for automated accessibility validation to test against Web Content Accessibility Guidelines (WCAG).
  • AI Integration – Set up custom AI prompts with OpenAI, Gemini, Ollama and Anthropic while crawling for insight.
  • Crawl Comparison – Compare crawl data to identify changes and track technical SEO progress. Compare site structure, detect changes in key elements and use URL mapping to compare staging vs production.
  • Looker Studio Crawl Reports – Set up automated Looker Studio crawl reports to monitor site health and trends.
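
The near-duplicate detection mentioned above can be illustrated with a toy version: turn each page’s text into a word-count vector and compare by cosine similarity. This is a simplified stand-in for the tool’s algorithmic and embedding-based checks, not its actual method.

```python
import math
from collections import Counter

# Sketch of a near-duplicate check: pages become word-count vectors and
# are compared by cosine similarity. A simplified stand-in for the real
# algorithmic/embedding checks, not the SEO Spider's implementation.
def similarity(text_a, text_b):
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0
```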

Over 300 SEO issues, warnings and opportunities

The Screaming Frog SEO Spider identifies over 300 SEO issues, warnings and opportunities that can be seen in the Issues tab of the app.

Response Codes

HTTP response status codes indicate whether an HTTP request made during a crawl has been successfully completed.

Users find issues related to URLs that are blocked from being crawled, return a no response, redirect, client or server error.

Security

Website security is important to protect users and reduce risk from common threats.

Teams find issues related to basic security best practices, such as HTTPS, mixed content, and HTTP security headers.
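
A sketch of the security-header part of such a check is below. The header list is a typical baseline, not Screaming Frog’s exact set of checks.

```python
# Sketch: report which common security headers a response is missing.
# The EXPECTED list is a typical baseline (an assumption), not the
# tool's exact checks.
EXPECTED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]

def missing_security_headers(headers):
    """headers: dict of response header names to values (case-insensitive)."""
    present = {name.lower() for name in headers}
    return [h for h in EXPECTED if h.lower() not in present]
```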

URL

Ensuring a website has logical and relevant URLs is vital for users and search engines in understanding website structure, and the content of a page.

Users find URLs in non-optimal formats, or URLs that shouldn’t be discoverable.

Page Titles

Relevant and descriptive page titles are essential, as they help both users and search engines understand the purpose of a page.

Teams find issues related to missing, duplicate, long or even multiple page titles.

Meta Description

Meta descriptions can be used in search engine result snippets, so writing a good meta description can be helpful for users and drive more clicks to a website.

Users find issues related to missing, duplicate, long or even multiple meta descriptions.

H1

Headings help provide structure and organisation to a web page, and can allow users and search engines to better understand the content.

The h1 should describe the main title and purpose of the page.

Teams find issues related to missing, duplicate, long or non-sequential h1s.

H2

Headings are titles and subtitles within the copy of a page to guide users and search engines to better understand the content.

The h2 heading is often used to describe sections within a document and act as signposts for the user.

Users find issues related to missing, duplicate, long or non-sequential h2s.

Content

Ensuring web pages deliver the best on-page content is vital to satisfy users and for SEO.

Teams find issues related to exact and near duplicate content, low content, spelling, grammar and readability.

Images

Imagery is crucial in delivering rich web experiences, whether that’s to support branding, selling products or impactful visuals.

Users find issues related to large images, missing alt text, incorrectly sized images and cumulative layout shift.
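
The missing-alt-text check is straightforward to sketch with Python’s standard-library HTML parser:

```python
from html.parser import HTMLParser

# Sketch: collect the src of any <img> tag that has no alt attribute,
# one of the image checks described above.
class AltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt.append(dict(attrs).get("src", ""))

checker = AltChecker()
checker.feed('<img src="/a.png" alt="Logo"><img src="/b.png">')
```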