JIRA 4.0 : Using robots.txt to hide from Search Engines
The robots.txt protocol is used to tell search engines (Google, MSN, etc) which parts of a website should not be crawled. For JIRA instances where non-logged-in users are able to view issues, a robots.txt file is useful for preventing unnecessary crawling of the Issue Navigator views (and unnecessary load on your JIRA server).

Editing robots.txt

JIRA (version 3.7 and later) installs the following robots.txt file at the root of the JIRA webapp:

    # robots.txt for JIRA
    # You may specify URLs in this file that will not be crawled by search engines (Google, MSN, etc)
    #
    # By default, all SearchRequestViews in the IssueNavigator (e.g.: Word, XML, RSS, etc) and all IssueViews
    # (XML, Printable and Word) are excluded by the /sr/ and /si/ directives below.
    User-agent: *
    Disallow: /sr/
    Disallow: /si/

Alternatively, if you already have a robots.txt file, simply edit it and add the Disallow: /sr/ and Disallow: /si/ directives to it.

Publishing robots.txt

The robots.txt file needs to be published at the root of your JIRA internet domain, e.g. jira.mycompany.com/robots.txt. Search engines only look for the file at the root of the domain, so if JIRA is deployed under a context path (e.g. jira.mycompany.com/jira), the file must still be served from jira.mycompany.com/robots.txt.
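Once the file is published, you can check that the /sr/ and /si/ paths are actually blocked for crawlers. The following is a minimal sketch using Python's standard-library urllib.robotparser; the host jira.mycompany.com is the placeholder from above, and the paths under /sr/ and /si/ are arbitrary examples — substitute your own domain and URLs.

    from urllib import robotparser

    # Placeholder host from the example above; replace with your own domain.
    BASE = "http://jira.mycompany.com"

    rp = robotparser.RobotFileParser()
    rp.set_url(BASE + "/robots.txt")
    rp.read()  # fetch and parse the live robots.txt from the domain root

    # Anything under /sr/ or /si/ should be disallowed by the directives above:
    print(rp.can_fetch("*", BASE + "/sr/some-search-export"))  # expected: False
    print(rp.can_fetch("*", BASE + "/si/some-issue-view"))     # expected: False

    # Ordinary pages outside those prefixes remain crawlable:
    print(rp.can_fetch("*", BASE + "/browse/TEST-1"))          # expected: True

If the first two calls return True, the file is most likely not being served from the domain root (see Publishing robots.txt above), since crawlers will then never see the Disallow rules.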