Robots.txt and Sitemap for Framer Websites

Regardless of the technology stack used to build a website, the Robots.txt and Sitemap files are essential foundational elements in SEO optimization. They are not only critical “resources” submitted to search engines, but also key references for search engine crawlers to automatically discover and index website content. A properly configured Robots.txt file can effectively guide search engines on which pages are allowed to be crawled and which should be blocked, preventing the indexing of irrelevant or sensitive content. Meanwhile, the Sitemap provides a complete list of the site’s page structure, helping search engines discover and understand the content more efficiently and comprehensively.
For websites built with Framer, properly setting up and ensuring the accessibility of the Robots.txt and Sitemap files is particularly crucial. If either is misconfigured or inaccessible, search engines will not be able to crawl the site properly, which will directly affect its search ranking and visibility. This article will explain in detail the toggle settings and default paths for Robots.txt and Sitemap in Framer, ensuring that your Framer site is visible to search engines and positioned for stronger SEO performance.
Robots.txt and Sitemap Toggle Settings for Framer Websites

In Framer websites, the accessibility of the Robots.txt file and whether a page is allowed to be crawled by search engines are mainly controlled through the search-related options in the Page Settings. Specifically, the toggle is located under the Search section as the "Show page in search engines" option.
1 Toggle Mechanism for Robots.txt Access Permissions
By default, Framer sets the "Show page in search engines" option to enabled, which means the page is allowed to be crawled by search engine bots and the Robots.txt file is accessible, directing search engines to index the site normally. However, during the early stages of website design and development, developers often disable this option to prevent search engines from crawling pages that are incomplete or contain disorganized content.
When this option is turned off, the page's robots meta tag is automatically set to noindex, and access to the page is restricted in the Robots.txt file, preventing search engines from indexing the page content. This is an effective safeguard that keeps unfinished or test pages out of search results.
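For reference, this state corresponds to a robots meta tag in the page's head along the following lines (a generic illustration of the noindex directive, not a capture of Framer's exact markup):

    <meta name="robots" content="noindex">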
2 Impact on Sitemap Generation
In addition to affecting the Robots.txt file and page indexing permissions, this toggle also directly influences the content generated in the Sitemap. When the "Show page in search engines" option is turned off, the sitemap URL remains accessible, but the page's entry is omitted from it; if every page is hidden, the file is effectively blank and contains no valid page information. This is because the toggle also controls whether a page is included in the Sitemap file, and therefore whether search engines can discover it through the Sitemap.
3 Important Notes for the Publishing Stage
Therefore, during the official launch and publishing phase of the website, it is essential to ensure that the "Show page in search engines" option is re-enabled and saved to restore normal access to the Robots.txt file and the page’s eligibility for indexing by search engines. Failing to complete this step will prevent the website from being effectively crawled, severely impacting SEO performance.
Robots.txt for Framer Websites

1 Access Path of the Robots.txt File
In Framer websites, the default access path for the Robots.txt file is fixed and follows the format:

    https://yourdomain.com/robots.txt

Simply enter the above URL in your browser's address bar (replacing yourdomain.com with your own website's domain) to view the contents of the file. If the URL is not accessible, first review the Search toggle described in the previous section and make sure it is enabled. When this toggle is turned off, the Robots.txt file becomes inaccessible, preventing search engine crawlers from indexing the site's content.
2 Default Robots.txt Content in Framer
Using Jane Framer Studio as an example, the Robots.txt content generated by Framer is as follows:

    User-agent: *
    Allow: /
    Sitemap: https://janeui.com/sitemap.xml
Next, we will analyze the purpose and function of each of these directives line by line.
1). The Purpose and Function of [User-agent: *]
Principle: The User-agent directive specifies which type of search engine crawler the rule applies to. Different search engine crawlers have different User-agent identifiers, such as Googlebot (Google), Bingbot (Bing), and Baiduspider (Baidu).
Current Setting: The asterisk (*) represents “all search engine crawlers,” meaning the rules in this file apply regardless of which search engine crawler is visiting.
Impact: All crawlers are treated identically, and every crawler receives the same access permissions.
2). The Purpose and Function of [Allow: /]
Principle: The Allow directive is used to explicitly specify the paths that are allowed to be crawled. The slash (/) represents the root directory of the website and all its files and subdirectories.
Current Setting: Here, the / means that all pages across the entire site are allowed to be crawled by search engines, with no access restrictions.
Additional Note: If certain paths need to be blocked (for example, /private/), websites built on non-Framer architectures can use the Disallow directive to restrict access. However, Framer’s default Robots.txt is fully open to maximize the scope of search engine crawling.
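For reference, on a platform that allows editing robots.txt, blocking such a path might look like this (a generic sketch; as noted, Framer does not support it):

    User-agent: *
    Disallow: /private/
    Allow: /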
3). The Purpose and Function of [Sitemap: https://janeui.com/sitemap.xml]
Principle: The Sitemap directive informs search engines of the URL address of the site map. A sitemap is a structured file (usually in XML format) that lists all the pages on a website that need to be indexed.
Function: It helps search engines discover new pages and updated content more efficiently. Sitemaps are especially important for websites with complex page structures or few internal links, as crawlers may not be able to find all pages through regular link traversal.
Framer Feature: Framer automatically generates and updates the Sitemap file and includes its URL in the Robots.txt file, ensuring crawlers know the location of the sitemap immediately.
3 The Essence of the Robots.txt “Gentleman’s Agreement”
Although Robots.txt is an important part of SEO optimization, it is not a mandatory security barrier but rather a “gentleman’s agreement” for crawler access:
Most mainstream search engines (such as Google, Bing, and Yahoo) comply with the rules specified in the Robots.txt file and do not crawl paths that are disallowed.
However, malicious crawlers or non-compliant bots can completely ignore these restrictions and directly access protected pages.
Therefore, Robots.txt should be used only as a tool for managing search engine crawling and cannot replace security measures. For content that requires real protection, methods such as password protection, access control, or server-level safeguards should be employed.
4 Optimization Recommendations
By default, Framer’s Robots.txt adopts a fully open site-wide policy (Allow: /), which benefits new websites aiming to quickly establish indexing. However, in the following situations, custom restriction rules can be considered (these must be implemented via CDN or reverse proxy layers, as Framer itself does not support modifying Robots.txt; see the sketch after this list):
Block test pages, admin panels, or directories that are not intended for public access.
Set differentiated crawling permissions for different search engines.
Use Crawl-delay to limit crawling frequency and prevent crawlers from consuming excessive server resources.
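A custom robots.txt combining these ideas might look like the following sketch, served from your own CDN or reverse proxy rather than generated by Framer. The /admin/ and /test/ paths are hypothetical, and note that Crawl-delay is a non-standard directive that Google ignores, so treat it only as a hint for the crawlers that honor it:

    User-agent: *
    Disallow: /admin/
    Disallow: /test/

    User-agent: Bingbot
    Crawl-delay: 10

    Sitemap: https://yourdomain.com/sitemap.xml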
Sitemap for Framer Websites

In websites built with Framer, the Sitemap serves as an important "map" for search engines to discover and understand the site structure. This section will explain the default location and access method of the Framer sitemap, provide a detailed breakdown of the sitemap’s XML data structure, and finally offer some common troubleshooting and optimization recommendations.
1 Default and Unchangeable Sitemap URL
Framer automatically generates and publicly exposes the sitemap. The default URL, which usually cannot be directly modified in the Framer backend, is:

    https://yourdomain.com/sitemap.xml
Simply replace yourdomain.com with your own domain and open the URL in a browser to view the sitemap. If the URL returns blank or is inaccessible, please first confirm that the "Show page in search engines" option in the page settings (as mentioned earlier) is enabled and the page is published. When all pages are set not to appear in search engines, the sitemap may be empty or contain very few entries.
Framer also lists the sitemap URL in the robots.txt file to help crawlers quickly locate the sitemap (example: Sitemap: https://janeui.com/sitemap.xml).
2 XML Data Structure of the Sitemap (Item-by-Item Analysis)
Below is a simplified example of a typical Framer sitemap output (the entries are illustrative; your own sitemap lists your published pages), along with explanations of the meaning of each part:
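    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://janeui.com/</loc>
      </url>
      <url>
        <!-- the /about slug is a hypothetical example -->
        <loc>https://janeui.com/about</loc>
      </url>
    </urlset>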
1). <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
Function: The root element of the sitemap, declaring the sitemap XML protocol being used (standard namespace).
Meaning: It informs parsers and search engines that this is a sitemap file compliant with the sitemaps.org protocol.
2). <url> (Container for Each Page)
Function: Each <url> block represents an individual URL entry within the site. A sitemap contains multiple such entries.
3). <loc> (Location, Required Element)
Function: Specifies the absolute URL of the entry (including the protocol, e.g., https://). This is the most common and mandatory element in a sitemap.
Framer Behavior: The value of <loc> is formed by concatenating the published domain name with the page’s slug (i.e., the slug set for the page in Framer is directly reflected in the <loc> value).
4). Other Sitemap Tags Not Included by Default in Framer
<lastmod>: The last modification date of the page (in ISO 8601 format). Helps crawlers determine if the page has been updated.
<changefreq>: Suggested crawl frequency (e.g., daily, weekly). This is only a suggestion; search engines may not fully follow it.
<priority>: Priority level (0.0 - 1.0), indicating relative importance. This is also a hint rather than a strict rule.
Note: The Framer sitemap shown above contains only <loc> entries (i.e., the most minimal form). Additional metadata such as lastmod, changefreq, or priority may be included only if the site provides more detailed data or if the sitemap is generated through other systems.
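For contrast, a single entry emitted by a system that includes this metadata might look like the following (all values are purely illustrative):

    <url>
      <loc>https://janeui.com/blog/post-1</loc>
      <lastmod>2025-01-15</lastmod>
      <changefreq>weekly</changefreq>
      <priority>0.8</priority>
    </url>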
5). Relation to robots.txt and meta tags
The sitemap provides a list of URLs; however, whether these URLs are ultimately indexed is also influenced by robots.txt and the page’s meta robots tags (such as noindex). URLs listed in the sitemap are not guaranteed to be indexed by search engines; the sitemap serves only as a basis for “discovery” and “recommendation.”
3 Framer Sitemap Generation and Update Logic (Practical Guide)
Automatic Generation: Framer automatically generates and updates sitemap.xml whenever the site is published or page statuses change, requiring no manual maintenance (this is Framer’s common default behavior).
Inclusion Rules: Only pages allowed to appear in search engines in the Page Settings are included in the sitemap; pages set to noindex or with "Show page in search engines" turned off typically do not appear in the sitemap (or the sitemap may be empty if the entire site is set to be hidden).
Slug-Based URLs: Each <loc> in the sitemap directly reflects the current slug of the page. Changing a slug generates a new URL entry, so if URLs are modified, a 301 redirect should be set up to preserve existing SEO value.
robots.txt Reference: Framer lists the sitemap URL in the robots.txt file to facilitate crawler discovery (e.g., Sitemap: https://janeui.com/sitemap.xml).
4 Common Questions and Best Practices Related to Sitemaps
How to verify if the sitemap includes the expected pages: Open https://yourdomain.com/sitemap.xml in a browser, or submit the sitemap and review the list of submitted URLs and index coverage in Google Search Console → Sitemaps; a small script can also automate the check (see the sketch after this list).
Sitemap is empty or contains few URLs: First, confirm whether the pages are published, whether the “Show page in search engines” option is enabled, and whether the pages are password-protected or access-restricted (protected pages do not appear in the sitemap).
Handling slug/path changes: Modifying a page’s slug will cause new URLs to appear in the sitemap; be sure to configure 301 redirects in Framer’s Redirects settings or at the domain/CDN level to avoid 404 errors and loss of traffic or ranking.
Considerations for large websites: The Sitemap protocol limits the size and number of URLs per single sitemap (protocol limit is 50,000 URLs or 50MB). For very large sites, sitemap indexes should be used. For details on how Framer handles very large sites, refer to Framer official documentation or contact Framer support for accurate information.
Security Reminder: While sitemaps help search engines discover pages, they do not replace access control. If pages contain sensitive content, use authentication or password protection instead of relying solely on robots.txt or excluding pages from the sitemap.
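As a quick automated version of the verification step above, the following sketch fetches the sitemap and prints every <loc> entry. It assumes Node.js 18+ (for the built-in fetch) and uses the placeholder yourdomain.com:

    // list-sitemap-urls.ts: print every <loc> entry in a sitemap
    // Assumes Node.js 18+ (built-in fetch); run with: npx tsx list-sitemap-urls.ts
    const SITEMAP_URL = "https://yourdomain.com/sitemap.xml"; // replace with your domain

    async function main(): Promise<void> {
      const res = await fetch(SITEMAP_URL);
      if (!res.ok) {
        // An inaccessible sitemap often means "Show page in search engines" is off
        throw new Error(`Sitemap request failed: ${res.status} ${res.statusText}`);
      }
      const xml = await res.text();
      // Extract the text content of every <loc> element with a simple regex
      const locs = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((m) => m[1]);
      console.log(`${locs.length} URL(s) in sitemap:`);
      for (const loc of locs) console.log(loc);
    }

    main().catch((err) => {
      console.error(err);
      process.exit(1);
    });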
Framer provides an out-of-the-box sitemap at https://yourdomain.com/sitemap.xml and automatically includes pages allowed to be crawled by search engines as entries. Understanding the XML structure of the sitemap (especially <urlset>, <url>, and <loc>) along with Framer’s generation logic helps you troubleshoot issues, control page indexing, and optimize your site’s search performance through redirects and webmaster tools when needed.
Does Framer Support Editing the Contents of Robots.txt and Sitemap?
When building websites with Framer, many site owners wonder if they can directly modify robots.txt and sitemap.xml to customize them according to their SEO strategies. The answer is that currently, Framer does not offer a feature to edit these files directly online. Both robots.txt and sitemap.xml are automatically generated by the system, with fixed and unchangeable paths:
1 Robots.txt
The URL is fixed as: https://yourdomain.com/robots.txt
The content is fixed as follows (with your own domain in the Sitemap line):

    User-agent: *
    Allow: /
    Sitemap: https://yourdomain.com/sitemap.xml
It is not possible to add or remove specific User-agent rules, nor to use Disallow to block particular paths. Its accessibility depends on whether the "Show page in search engines" toggle is enabled; if turned off, the robots.txt file becomes inaccessible.
2 Sitemap
The URL is fixed as: https://yourdomain.com/sitemap.xml
The content is automatically generated from published pages allowed to be indexed by search engines. You cannot manually add pages or modify tags such as <lastmod> or <priority>. If a page’s search visibility is turned off, it will be automatically removed from the sitemap.
In other words, Framer only provides toggle-based control (indirectly affecting both via page search visibility settings) but does not support fine-grained editing. If your SEO strategy requires custom rules or a customized sitemap, you must implement this through self-hosted sites or CDN proxy replacements.
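As one illustration of the CDN proxy approach, an edge function such as a Cloudflare Worker routed in front of the Framer site could intercept /robots.txt and serve custom rules while passing every other request through to Framer. The sketch below assumes such a setup, and the Disallow path is hypothetical:

    // worker.ts: serve a custom robots.txt at the edge, in front of a Framer site
    // Assumes a Cloudflare Worker (module syntax) routed on your domain
    const CUSTOM_ROBOTS = [
      "User-agent: *",
      "Disallow: /private/", // hypothetical path to block
      "Allow: /",
      "",
      "Sitemap: https://yourdomain.com/sitemap.xml",
    ].join("\n");

    export default {
      async fetch(request: Request): Promise<Response> {
        const url = new URL(request.url);
        if (url.pathname === "/robots.txt") {
          // Serve the custom rules instead of Framer's fixed file
          return new Response(CUSTOM_ROBOTS, {
            headers: { "content-type": "text/plain; charset=utf-8" },
          });
        }
        // Every other request falls through to the origin (the Framer site)
        return fetch(request);
      },
    };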
Conclusion
Robots.txt and Sitemap are fundamental elements for a website to be correctly crawled and indexed by search engines. For websites built with Framer, although their generation method and paths are fixed, understanding how they work and properly managing the page search visibility toggles can ensure that search engines successfully access your content when the site goes live, thereby improving SEO performance.
In daily maintenance, always remember to verify that search visibility is enabled before publishing official pages, and regularly check the accessibility of your Sitemap and Robots.txt files to keep your website healthy, visible, and competitive in search results.
Jane Framer Studio · Helping your Framer website stand out with creativity and performance. Partner with us today to make both search engines and users fall in love with your site.
Jane will continue to update this section with tutorials and creative notes on Framer. We aim to make this space a reliable learning resource for your Framer journey, and we invite you to follow along with Jane Framer Studio’s latest updates and creative explorations.
Your support helps us create more free tutorials and resources for everyone!