Creation of a robots.txt

To optimise your shop for search engines, you can add a robots file to your file system. This lets you tell search engine crawlers which URLs of your website they may access. Crawlers are programmes that automatically scan websites, following links from one page to the next and analysing and indexing what they find.

The robots file is not created automatically in Shopware 6, but has to be created manually as a text file.
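
Since it is a plain text file, you can create it with any text editor or from the command line; a minimal sketch, assuming you are in the root directory of your Shopware installation:

touch public/robots.txt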

The robots.txt belongs in the public directory. You can insert the example text below into this file:


User-agent: *
Allow: /
Disallow: */?
Disallow: */account/
Disallow: */checkout/
Disallow: */widgets/
Disallow: */navigation/
Disallow: */bundles/

Disallow: */imprint$
Disallow: */privacy$
Disallow: */gtc$

Sitemap: https://YOUR_DOMAIN/sitemap.xml

These rules block the crawling of all pages and directories that are matched by a Disallow rule. If you want to test whether certain URLs of your website are blocked by the robots.txt, you can check this, for example, with the robots.txt tester in the Google Search Console.
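
To illustrate how these rules are applied, here are a few made-up example paths, assuming the pattern matching used by major crawlers such as Googlebot, where * matches any sequence of characters and $ anchors a rule to the end of the URL:

/?order=price      ->  blocked by Disallow: */?
/account/orders    ->  blocked by Disallow: */account/
/imprint           ->  blocked by Disallow: */imprint$
/imprint?page=2    ->  not blocked (the $ rule only matches URLs ending in /imprint)
/example-product   ->  allowed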

Creation of multiple robots.txt files


If your shop runs on several domains, you can serve a different robots.txt for each of them. This can be achieved by extending the .htaccess file in the public folder. Add the following line to the beginning of ~/public/.htaccess:

RewriteRule ^robots\.txt$ robots/%{HTTP_HOST}.txt [NS]

Then you can create one .txt file per domain in the public/robots folder, each of which is treated like a robots.txt for its domain. (You need to create the robots folder in the public folder first.) As an example:

~/public/robots/domain.tld.txt
~/public/robots/subshop.domain.tld.txt
~/public/robots/domain2.tld.txt
...

Each .txt file must be named after the hostname of the respective domain. For example, if you use https://shopware.com/, create "shopware.com.txt" in the robots folder. Note that the robots.txt in the public folder is then no longer used.
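
Each per-domain file uses the same format as the single robots.txt above. A minimal sketch for ~/public/robots/shopware.com.txt, reusing the example rules from the first section with a domain-specific sitemap entry (shopware.com stands in for your own domain):

User-agent: *
Allow: /
Disallow: */?
Disallow: */account/
Disallow: */checkout/
Disallow: */widgets/
Disallow: */navigation/
Disallow: */bundles/

Sitemap: https://shopware.com/sitemap.xml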

Version

6.0.0 or newer