
To our knowledge, JIRA does not have any memory leaks. We know of various public high-usage JIRA instances (eg. 40k issues, 100+ new issues/day, 22 pages/min in 750Mb of memory) that run for months without problems. When memory problems do occur, the following checklist can help you identify the cause.

Too little memory allocated?

Check the System Info page (see Increasing JIRA memory) after a period of sustained JIRA usage to determine how much memory is allocated.

Checklist

  • Set the minimum amount of memory (--JvmMs for the Windows service, -Xms otherwise). See the example below.
  • Restart JIRA.
  • Go to Admin -> System Info, and ensure that Total Memory matches the minimum you set.
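
For example, with JIRA Standalone you could set both the minimum and maximum heap in bin/setenv.sh (the values here are only illustrative; see Increasing JIRA memory for guidance on sizing):

export CATALINA_OPTS="$CATALINA_OPTS -Xms256m -Xmx512m"

or, for the Windows service:

tomcat5 //US//JIRA --JvmMs=256 --JvmMx=512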

Too much memory allocated?

When increasing Java's memory allocation with -Xmx, please ensure that your system actually has that amount of memory free. For example, if you have a server with 1Gb of RAM, most of it is probably already taken up by the operating system, database and so on. Setting -Xmx1g for a Java process on such a server would be a very bad idea: Java would claim most of that memory from swap (disk), which would dramatically slow down everything on the server, and if the system ran out of swap you would get OutOfMemoryErrors.

If the server does not have much memory free, it is better to set -Xmx conservatively (eg. -Xmx256m), and only increase -Xmx when you actually see OutOfMemoryErrors. Java's memory management will work to keep within the limit, which is better than going into swap.

Checklist

  • On Windows, open Task Manager (Ctrl-Alt-Del) and check the amount of memory marked "Available".
  • On Unix, run cat /proc/meminfo or use top to determine free memory (see the example below).
  • If JIRA is running, check that there is spare available memory.
  • If raising a support request, please let us know the total system memory and (if on Linux) the /proc/meminfo output.
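
For example, on Linux (a rough sketch; output formats vary between distributions):

free -m                        # overall used/free memory, in megabytes
ps -o pid,rss,args -C java     # resident memory (in KB) of running Java processes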

Bugs in older JIRA versions

Please make sure you are using the latest version of JIRA. Memory leaks are regularly found and fixed in newer JIRA releases. Here are some recent ones:

JIRA Issues (20 issues)
Key Summary Updated Status
JRA-19198 Classloader leak in atlassian-plugins-2.3.1 Sep 21, 2009 Open
JRA-18742 Error Sep 10, 2009 Resolved
JRA-18581 Single Level Group By Report unbound memory usage Sep 23, 2009 Open
JRA-18202 Add Google Collections to the webapp classpath to workaround FinalizableReferenceQueue memory leak Aug 06, 2009 Resolved
JRA-18129 Memory Leak in SAL 2.0.10 Jul 29, 2009 Resolved
JRA-18116 Memory Leak in Apache Shindig Aug 10, 2009 Resolved
JRA-17390 Memory Leak in Felix framework BundleProtectionDomain May 22, 2009 Resolved
JRA-16765 Re-enable bundled plugins in setenv May 11, 2009 Resolved
JRA-16750 Fix any memory leaks in JIRA mainly caused by restoring data from XML and refreshing all singleton objects May 05, 2009 Resolved
JRA-16742 SOAP search methods are unbounded - this can lead to xml-rpc generating huge xml responses causing memory problems Apr 14, 2009 Resolved
JRA-15898 too many commit Nov 05, 2008 Resolved
JRA-15489 Tomcat Manager not unloading classes leading to Permgen errors Aug 27, 2008 Resolved
JRA-15460 Cannot create index directory on reindexing jira Aug 26, 2008 Resolved
JRA-15059 One/TwoDimensionalTermHitCollectors use StatsJiraLuceneFieldCache with no cacheing Jul 15, 2008 Open
JRA-14053 MappedSortComparator needs to reduce its memory footprint Nov 28, 2007 Closed
JRA-13042 OutOfMemoryError in Events and Issue Status admin pages when lots of issue types and workflows Jul 11, 2007 Resolved
JRA-12665 CustomFields using the DocumentSortComparatorSource may cause a memory leak when sorting Apr 02, 2008 Resolved
JRA-12549 JIRA leaks instances of the VelocityEngine in several places May 14, 2007 Resolved
JRA-12411 OutOfMemoryError during reindex all (due to EagerLoadingOfbizCustomFieldPersister's caching of custom field values) Dec 12, 2007 Resolved
JRA-10828 SOAP getProjects call can blow up with an OutOfMemoryError Oct 14, 2008 Resolved

Too many webapps (out of PermGen space)

People running multiple JSP-based web applications (eg. JIRA and Confluence) in one Java server are likely to see this error:

java.lang.OutOfMemoryError: PermGen space

By default, Java reserves a fixed 64Mb area (the permanent generation) for loaded classes, and with more than one webapp this is often exceeded. You can fix this by setting the -XX:MaxPermSize=128m option. See the Increasing JIRA memory page for details.
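
For example, in JIRA Standalone this could go in bin/setenv.sh (128m is only a starting point; raise it if the error persists):

export CATALINA_OPTS="$CATALINA_OPTS -XX:MaxPermSize=128m"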

Tomcat memory leak

Tomcat's JSP engine buffers tag body content and keeps these buffers around for reuse. If JIRA is generating huge responses (eg. multi-megabyte Excel or RSS views), these retained buffers will quickly fill up memory and result in OutOfMemoryErrors.

In Tomcat 5.5.15+ there is a workaround: set the org.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true system property (see how, and the example below). For earlier Tomcat versions, including the one bundled with JIRA Standalone 3.6.x and earlier, there is no workaround; please upgrade Tomcat or switch to another app server.
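
For example, the property can be set in bin/setenv.sh (a sketch assuming JIRA Standalone; for other setups, add it wherever your app server's JVM options are defined):

export CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true"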

Checklist

  • Ensure you are using Tomcat 5.5.15 or above.
  • On Unix, run ps -ef | grep java and check that the LIMIT_BUFFER system property appears in the process arguments.

Other webapps

We strongly recommend running JIRA in its own JVM (app server instance), so that web applications cannot affect each other, and each can be restarted/upgraded separately. Usually this is achieved by running app servers behind Apache or IIS.

If you are getting OutOfMemoryErrors, separating the webapps should be your first action. It is virtually impossible to work out retroactively which webapp is consuming all the memory.

Checklist

  • Check which webapps are running (eg. look in webapps/ in Tomcat, and/or check the logs for indications of what is running).
  • If raising a support request, please attach all the log files (eg. logs/* in Tomcat).

Plugins

Plugins are a frequent cause of memory problems. If you have any third-party plugins in use, try disabling them temporarily. The same applies to Atlassian plugins such as the toolkit, charting and calendar plugins.

Checklist

  • Get a directory listing of the WEB-INF/lib directory and check for *-plugin*.jar files (see the example below).
  • Disable the plugin in the Administration page and remove the jar file from the WEB-INF/lib directory.
  • If raising a support request, please include this directory listing in the issue.
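
For example, on Unix (a sketch; the path assumes JIRA Standalone, so adjust it to wherever the JIRA webapp is deployed):

ls atlassian-jira/WEB-INF/lib/ | grep -i plugin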

Millions of notificationinstance records

In order to correctly 'thread' email notifications in mail clients, JIRA tracks the Message-Id header of mails it sends. In heavily used systems, the notificationinstance table can become huge, with millions of records. This can cause OutOfMemoryErrors in the JDBC driver when it is asked to generate an XML export of the data (see JRA-11725).

Checklist

  • Run the SQL select count(*) from notificationinstance;. If you have over (say) 500,000 records, delete the old ones with delete from notificationinstance where id < <an id roughly halfway through the range> (see the example below).
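
For example (a sketch; the id below is a placeholder, so pick one roughly halfway through your own data):

select count(*) from notificationinstance;
delete from notificationinstance where id < 1000000;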

Services (custom, CVS, etc)

Occasionally people write their own services, which can cause memory problems if (as is often the case) they iterate over large numbers of issues. If you have any custom services, please try disabling them for a while to eliminate them as a cause of problems.

The CVS service sometimes causes memory problems, if used with a huge CVS repository (in this case, simply increase the allocated memory).

A symptom of a CVS (or general services-related) problem is that JIRA will run out of memory just minutes after startup.

Checklist

  • Go to Admin -> Services.
  • Check for any services other than the usual ones (backup, mail queue).
  • If raising a support request, please cut & paste your services list into the issue.

JIRA backup service with large numbers of issues

Do you have hundreds of thousands of issues? Is JIRA's built-in backup service running frequently? If so, please switch to a native backup tool and disable the JIRA backup service, which will be taking a lot of CPU and memory to generate backups that are unreliable anyway (due to lack of locking). See the JIRA backups documentation for details.
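
For example, a native database backup can be as simple as a scheduled run of your database's dump tool (a sketch assuming MySQL, with hypothetical database and user names; use the equivalent for your database):

mysqldump -u jirauser -p jiradb > /backups/jira-backup.sql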

Checklist

  • Check the total issue count in Admin -> System Info.
  • Go to Admin -> Services.
  • Check whether a backup service is configured and note its frequency.

JIRA mail misconfiguration causing comment loops

Does a user have an e-mail address that is the same as one of the mail accounts used by your mail handler services? This can cause a comment loop: a notification is sent out, picked up by the mail handler and appended to the issue as a comment, which then triggers another notification, and so forth. If a user then views such an issue, it can consume a lot of memory. The following query shows issues with more than 50 comments. Fifty comments can be perfectly normal for an issue; what you are looking for is an irregular pattern in the comments themselves, such as repeating notifications.

SELECT count(*) as commentcount, issueid from jiraaction group by issueid having commentcount > 50 order by commentcount desc

The SOAP getProjects request

The SOAP getProjects call loads a huge object graph, particularly when there are many users in JIRA, and thus can cause OutOfMemoryErrors. Please always use getProjectsNoSchemes instead.

Checklist

  • Ensure no locally run SOAP clients use getProjects.
  • As below, enable and check access logs (see also the example that follows this list).
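
Once access logging is enabled, SOAP traffic stands out by its URL, so something like this gives a quick per-client count (a sketch; log paths vary by app server):

grep "/rpc/soap" logs/access_log.* | awk '{print $1}' | sort | uniq -c | sort -rn | head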

Eclipse Mylyn plugin

If your developers use the Eclipse Mylyn plugin, make sure they are using the latest version. The Mylyn bundled with Eclipse 3.3 (2.0.0.v20070627-1400) uses the getProjects method, causing problems as described above.

Checklist

  • As below, enable access logging and ensure the latest Mylyn plugin is used.

Huge XML/RSS or SOAP requests

This applies particularly to publicly visible JIRA instances. Sometimes a crawler can slow down JIRA by making multiple huge requests. Every now and then someone misconfigures their RSS reader to request XML for every issue in the system, and sets it running once a minute. Similarly, people sometimes write SOAP clients without considering the performance impact, and set them running automatically. JIRA might survive these (although it will be oddly slow), but then run out of memory when a legitimate user's large Excel view pushes it over the limit.

The best way to diagnose unusual requests is to enable Tomcat access logging (on by default in JIRA Standalone), and look for requests that take a long time.

In JIRA 3.10 there is a jira.search.views.max.limit property you can set in WEB-INF/classes/jira-application.properties, which places a hard limit on the number of search results returned. It is a good idea to set this for sites exposed to crawler traffic.
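
For example, in WEB-INF/classes/jira-application.properties (the limit of 1000 is only illustrative):

jira.search.views.max.limit = 1000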

Checklist

  • Turn on access logging to see if SOAP requests are being made.
  • Check your access logs for long-running or repeated requests.

Unusual JIRA usage

Every now and then someone reports memory problems, and after much investigation we discover they have 3,000 custom fields, or are parsing 100Mb emails, or have in some other way used JIRA in unexpected ways. Please be aware of where your JIRA installation deviates from typical usage.

Checklist

  • If raising a support request, cut & paste the System Info output, which includes basic usage stats.
  • Better yet, please attach a JIRA backup of your data (optionally anonymized) so we can replicate the problem.
  • Turn on access logging to see how JIRA is being used. If submitting a support request, please include this log too.

Memory diagnostics

If you have been through the list above, there are a few further diagnostics which may provide clues.

Getting memory dumps

By far the most powerful and effective way of identifying memory problems is to have JIRA dump the contents of its memory when it exits or hangs due to an OutOfMemoryError. Enabling this has no noticeable performance impact during normal operation. It can be done in one of two ways:

  • On Sun's JDK 1.5.0_07 and above, or 1.4.2_12 and above, set the -XX:+HeapDumpOnOutOfMemoryError option. If JIRA runs out of memory, it will create a java_pid*.hprof file containing the memory dump in the directory you started JIRA from.
  • On other platforms, you can use the YourKit profiler agent. YourKit can take memory snapshots when the JVM exits, when an OutOfMemoryError is imminent (eg. 95% memory used), or when manually triggered. The agent part of YourKit is freely redistributable. For more information, see Profiling Memory and CPU usage with YourKit.

Please reduce your maximum heap size (-Xmx) to 750m or so, so that the generated heap dump is of a manageable size. You can turn -Xmx back up once a heap dump has been taken.
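
For example, on a Sun JVM the option can be enabled in bin/setenv.sh (a sketch; -XX:HeapDumpPath is optional and the path shown is only illustrative):

export CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/tmp"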

Enable gc logging

Garbage collection logging looks like this:

0.000: [GC [PSYoungGen: 3072K->501K(3584K)] 3072K->609K(4992K), 0.0054580 secs]
0.785: [GC [PSYoungGen: 3573K->503K(3584K)] 3681K->883K(4992K), 0.0050140 secs]
1.211: [GC [PSYoungGen: 3575K->511K(3584K)] 3955K->1196K(4992K), 0.0043800 secs]
1.734: [GC [PSYoungGen: 3583K->496K(3584K)] 4268K->1450K(4992K), 0.0045770 secs]
2.437: [GC [PSYoungGen: 3568K->499K(3520K)] 4522K->1770K(4928K), 0.0042520 secs]
2.442: [Full GC [PSYoungGen: 499K->181K(3520K)] [PSOldGen: 1270K->1407K(4224K)]
    1770K->1589K(7744K) [PSPermGen: 6658K->6658K(16384K)], 0.0480810 secs]
3.046: [GC [PSYoungGen: 3008K->535K(3968K)] 4415K->1943K(8192K), 0.0103590 secs]
3.466: [GC [PSYoungGen: 3543K->874K(3968K)] 4951K->2282K(8192K), 0.0051330 secs]
3.856: [GC [PSYoungGen: 3882K->1011K(5248K)] 5290K->2507K(9472K), 0.0094050 secs]

This can be parsed with tools like gcviewer to get an overall picture of memory use.

To enable gc logging, start JIRA with the options -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:gc.log. Replace gc.log with an absolute path to a gc.log file.

For example, with a Windows service, run:

tomcat5 //US//JIRA ++JvmOptions="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:c:\jira\logs\gc.log"

or in bin/setenv.sh, set:

export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:${CATALINA_BASE}/logs/gc.log"

If you modify bin/setenv.sh, you will need to restart JIRA for the changes to take effect.

Access logs

It is important to know what requests are being made, so unusual usage can be identified. For instance, perhaps someone has configured their RSS reader to request a 10Mb RSS file once a minute, and this is killing JIRA.

If you are using Tomcat, access logging can be enabled by adding the following to conf/server.xml, just before the closing </Host> tag:

<Valve className="org.apache.catalina.valves.AccessLogValve"
          pattern="%h %l %u %t &quot;%r&quot; %s %b %T %S %D" resolveHosts="false" />

The %S logs the session ID, allowing requests from distinct users to be grouped. The %D logs the request time in milliseconds. Logs will appear in logs/access_log.<date>, and look like this:

127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /secure/Dashboard.jspa HTTP/1.1" 200 15287 2.835 A2CF5618100BFC43A867261F9054FCB0 2835
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/combined-printable.css HTTP/1.1" 200 111 0.030 A2CF5618100BFC43A867261F9054FCB0 30
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/combined.css HTTP/1.1" 200 38142 0.136 A2CF5618100BFC43A867261F9054FCB0 136
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/global.css HTTP/1.1" 200 548 0.046 A2CF5618100BFC43A867261F9054FCB0 46
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/combined-javascript.js HTTP/1.1" 200 65508 0.281 A2CF5618100BFC43A867261F9054FCB0 281
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/calendar.js HTTP/1.1" 200 49414 0.004 A2CF5618100BFC43A867261F9054FCB0 4
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/lang/calendar-en.js HTTP/1.1" 200 3600 0.000 A2CF5618100BFC43A867261F9054FCB0 0
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/calendar-setup.js HTTP/1.1" 200 8851 0.002 A2CF5618100BFC43A867261F9054FCB0 2
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/cookieUtil.js HTTP/1.1" 200 1506 0.001 A2CF5618100BFC43A867261F9054FCB0 1
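
The last field is the request time in milliseconds (%D), so the slowest requests can be pulled out with something like this (a sketch; adjust the log path to suit):

awk '{print $NF, $7}' logs/access_log.* | sort -rn | head -20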

Alternatively, or if you are not using Tomcat or can't modify the app server config, JIRA has built-in user access logging which can be enabled from the admin section, and produces terser logs like:

2006-09-27 10:35:50,561 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/secure/IssueNavigator.jspa 102065-4979 1266
2006-09-27 10:35:58,002 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/secure/IssueNavigator.jspa 102806-4402 1035
2006-09-27 10:36:05,774 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/browse/EAO-2 97058+3717 1730

Thread dumps

If JIRA has hung with an OutOfMemoryError, the currently running threads often point to the culprit. Please take a thread dump of the JVM, and send us the logs containing it.
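
For example, on Unix a thread dump can be triggered with kill -3 (the dump goes to JIRA's console log, typically logs/catalina.out for Tomcat), or with the JDK's jstack tool:

kill -3 <jira java pid>
jstack <jira java pid>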

References

Monitoring and Managing Java SE 6 Platform Applications

