JIRA 4.0 : Causes of OutOfMemoryErrors
To our knowledge, JIRA does not have any memory leaks. We know of various public high-usage JIRA instances (e.g. 40k issues, 100+ new issues/day, 22 pages/min in 750MB of memory) that run for months without problems. When memory problems do occur, the following checklist can help you identify the cause.

Too little memory allocated?
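If too little memory has been allocated, the fix is usually just to raise the maximum heap size. As a minimal sketch for a Tomcat-based installation (assuming JVM options are set in bin/setenv.sh; suitable values depend on your instance size and available RAM):

    # bin/setenv.sh - raise the heap; keep -Xmx well within physical memory
    export JAVA_OPTS="$JAVA_OPTS -Xms256m -Xmx768m"

See the Increasing JIRA memory page for recommended values and for the equivalent change when JIRA runs as a Windows service.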
Too much memory allocated?
Bugs in older JIRA versions

Please make sure you are using the latest version of JIRA. Memory leaks are regularly fixed in newer releases. Here are some recent ones:
Errors were reported by the JIRA trusted connection.
Too many webapps (out of PermGen space)

People running multiple JSP-based web applications (e.g. JIRA and Confluence) in one Java server are likely to see this error:

java.lang.OutOfMemoryError: PermGen space

Java reserves a fixed 64MB block for loading class files, and with more than one webapp this is often exceeded. You can fix this by setting the -XX:MaxPermSize=128m property (for a Tomcat installation, add it to JAVA_OPTS or CATALINA_OPTS in the same way as the gc logging options shown below). See the Increasing JIRA memory page for details.

Tomcat memory leak
Other webapps
Plugins
Millions of notificationinstance records
Services (custom, CVS, etc)
Unusual JIRA usage
Memory diagnostics

If you have been through the list above, there are a few further diagnostics which may provide clues.

Getting memory dumps

By far the most powerful and effective way of identifying memory problems is to have JIRA dump the contents of its memory when it exits or hangs due to an OutOfMemoryError. This has no noticeable performance impact. It can be done in one of two ways:
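For example, on a Sun JDK 1.5 or 1.6 JVM the following options (a minimal sketch; the dump directory is just an illustration) make the JVM write a heap dump automatically whenever an OutOfMemoryError is thrown:

    # bin/setenv.sh - write an .hprof heap dump on OutOfMemoryError
    # the directory must exist and be writable by the user running JIRA
    export CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/jira/heap-dumps"

The resulting .hprof file can then be opened in a heap analysis tool.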
Please reduce your maximum heap size (-Xmx) to 750m or so, so that the generated heap dump is of manageable size. You can turn -Xmx back up once a heap dump has been taken.

Enable gc logging

Garbage collection logging looks like this:

0.000: [GC [PSYoungGen: 3072K->501K(3584K)] 3072K->609K(4992K), 0.0054580 secs]
0.785: [GC [PSYoungGen: 3573K->503K(3584K)] 3681K->883K(4992K), 0.0050140 secs]
1.211: [GC [PSYoungGen: 3575K->511K(3584K)] 3955K->1196K(4992K), 0.0043800 secs]
1.734: [GC [PSYoungGen: 3583K->496K(3584K)] 4268K->1450K(4992K), 0.0045770 secs]
2.437: [GC [PSYoungGen: 3568K->499K(3520K)] 4522K->1770K(4928K), 0.0042520 secs]
2.442: [Full GC [PSYoungGen: 499K->181K(3520K)] [PSOldGen: 1270K->1407K(4224K)] 1770K->1589K(7744K) [PSPermGen: 6658K->6658K(16384K)], 0.0480810 secs]
3.046: [GC [PSYoungGen: 3008K->535K(3968K)] 4415K->1943K(8192K), 0.0103590 secs]
3.466: [GC [PSYoungGen: 3543K->874K(3968K)] 4951K->2282K(8192K), 0.0051330 secs]
3.856: [GC [PSYoungGen: 3882K->1011K(5248K)] 5290K->2507K(9472K), 0.0094050 secs]

This can be parsed with tools like gcviewer to get an overall picture of memory use.

To enable gc logging, start JIRA with the options -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:gc.log. Replace gc.log with an absolute path to a gc.log file. For example, with a Windows service, run:

tomcat5 //US//JIRA ++JvmOptions="-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:c:\jira\logs\gc.log"

or in bin/setenv.sh, set:

export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -verbose:gc -Xloggc:${CATALINA_BASE}/logs/gc.log"

If you modify bin/setenv.sh, you will need to restart JIRA for the changes to take effect.

Access logs

It is important to know what requests are being made, so that unusual usage can be identified. For instance, perhaps someone has configured their RSS reader to request a 10MB RSS file once a minute, and this is killing JIRA. If you are using Tomcat, access logging can be enabled by adding the following to conf/server.xml, below the </Host> tag:

<Valve className="org.apache.catalina.valves.AccessLogValve" pattern="%h %l %u %t &quot;%r&quot; %s %b %T %S %D" resolveHosts="false" />

The %S logs the session ID, allowing requests from distinct users to be grouped. The %D logs the request time in milliseconds.
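For context, here is a sketch of where that Valve sits in conf/server.xml (everything other than the Valve is abbreviated and illustrative; do not replace your existing file with this):

    <Service name="Catalina">
      <Connector port="8080" />
      <Engine name="Catalina" defaultHost="localhost">
        <Host name="localhost" appBase="webapps" />
        <!-- access log valve added just below the </Host> tag, inside the Engine -->
        <Valve className="org.apache.catalina.valves.AccessLogValve"
               pattern="%h %l %u %t &quot;%r&quot; %s %b %T %S %D"
               resolveHosts="false" />
      </Engine>
    </Service>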
Logs will appear in logs/access_log.<date>, and look like this:

127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /secure/Dashboard.jspa HTTP/1.1" 200 15287 2.835 A2CF5618100BFC43A867261F9054FCB0 2835
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/combined-printable.css HTTP/1.1" 200 111 0.030 A2CF5618100BFC43A867261F9054FCB0 30
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/combined.css HTTP/1.1" 200 38142 0.136 A2CF5618100BFC43A867261F9054FCB0 136
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /styles/global.css HTTP/1.1" 200 548 0.046 A2CF5618100BFC43A867261F9054FCB0 46
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/combined-javascript.js HTTP/1.1" 200 65508 0.281 A2CF5618100BFC43A867261F9054FCB0 281
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/calendar.js HTTP/1.1" 200 49414 0.004 A2CF5618100BFC43A867261F9054FCB0 4
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/lang/calendar-en.js HTTP/1.1" 200 3600 0.000 A2CF5618100BFC43A867261F9054FCB0 0
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/calendar/calendar-setup.js HTTP/1.1" 200 8851 0.002 A2CF5618100BFC43A867261F9054FCB0 2
127.0.0.1 - - [23/Nov/2006:18:37:48 +1000] "GET /includes/js/cookieUtil.js HTTP/1.1" 200 1506 0.001 A2CF5618100BFC43A867261F9054FCB0 1

Alternatively, if you are not using Tomcat or can't modify the app server config, JIRA has built-in user access logging which can be enabled from the admin section, and which produces terser logs like:

2006-09-27 10:35:50,561 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/secure/IssueNavigator.jspa 102065-4979 1266
2006-09-27 10:35:58,002 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/secure/IssueNavigator.jspa 102806-4402 1035
2006-09-27 10:36:05,774 INFO [jira.web.filters.AccessLogFilter] bob http://localhost:8080/browse/EAO-2 97058+3717 1730

Thread dumps

If JIRA has hung with an OutOfMemoryError, the currently running threads often point to the culprit. Please take a thread dump of the JVM and send us the logs containing it.

References

Monitoring and Managing Java SE 6 Platform Applications