[java] "java.lang.OutOfMemoryError : unable to create new native Thread"

We are getting "java.lang.OutOfMemoryError: unable to create new native Thread" on an 8 GB RAM VM after about 32k threads (ps -eLF | grep -c java).

However, "top" and "free -m" shows 50% free memory available. JDk is 64 bit and tried with both HotSpot and JRockit.Server has Linux 2.6.18

We also tried tweaking the OS stack size (ulimit -s) and the max processes limit (ulimit -u), and increasing the values in limits.conf, but all in vain.

We also tried almost all possible heap size combinations, keeping it low, high, etc.

The script we use to run the application is:

/opt/jrockit-jdk1.6/bin/java -Xms512m -Xmx512m -Xss128k -jar JavaNatSimulator.jar /opt/tools/jnatclients/natSimulator.properties

Thanks for the reply.

We have tried editing /etc/security/limits.conf and ulimit, but still the same result:

[root@jboss02 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 72192
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65535
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 72192
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

This question is related to Java out-of-memory errors.

The answers are below.


I had the same problem due to ghost processes that didn't show up when using top in bash. These prevented the JVM from spawning more threads.

For me, it was resolved by listing all Java processes with jps (just execute jps in your shell) and killing them separately using the kill -9 <pid> bash command for each ghost process.

This might help in some scenarios.


Your JBoss configuration has some issues. In /opt/jrockit-jdk1.6/bin/java -Xms512m -Xmx512m, the -Xms and -Xmx flags limit your JBoss memory usage to the configured value, so of the 8 GB you have, the server is only using 512 MB plus some extra for its own purposes. Increase that number, remembering to leave some memory free for the OS and other things running there, and you may get it running despite the unsavoury code. Fixing the code would be nice too, if you can.
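As a quick sanity check (a sketch, not part of the original answer), you can print the heap ceiling the running JVM actually received from -Xmx:

public class HeapCheck {
    public static void main(String[] args) {
        // Reports the maximum heap the JVM was given (reflects -Xmx).
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("max heap (MB): " + maxHeapMb);
    }
}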


To find which processes are creating threads, try:

ps huH

I normally redirect the output to a file and analyse the file offline (checking whether the thread count for each process is as expected or not).


First of all, I wouldn't blame the OS/VM that much; rather the developer who wrote the code that creates so many threads. Basically, somewhere in your code (or in a 3rd-party library) a lot of threads are created without control.

Carefully review the stack traces/code and control the number of threads that get created. Normally your app shouldn't need a large number of threads; if it does, that's a different problem.
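To support that review, here is a minimal sketch (an illustration, not from the answer) that periodically logs the live and peak thread counts from inside the JVM, which makes runaway thread creation easy to spot:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class ThreadCountLogger {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        while (true) {
            // Live and peak counts grow quickly when threads are created without control.
            System.out.println("live threads: " + threads.getThreadCount()
                    + ", peak: " + threads.getPeakThreadCount());
            Thread.sleep(5_000);
        }
    }
}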


This error can surface for the following two reasons:

  • There is no room in the memory to accommodate new threads.

  • The number of threads exceeds the Operating System limit.

I doubt that the number of threads has exceeded the limit for the Java process, so chances are the issue is memory. One point to consider:

Threads are not created within the JVM heap; they are created outside it. So if there is little room left in RAM after the JVM heap allocation, the application will run into "java.lang.OutOfMemoryError: unable to create new native thread".

A possible solution is to reduce the heap size or increase the overall RAM.
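A rough back-of-the-envelope sketch of that trade-off, using the figures from the question (the ~1 GB reservation for the OS and other processes is an assumption, not something stated in the thread):

public class ThreadBudgetEstimate {
    public static void main(String[] args) {
        long totalRam  = 8L * 1024 * 1024 * 1024;   // 8 GB VM
        long javaHeap  = 512L * 1024 * 1024;        // -Xms512m / -Xmx512m
        long osReserve = 1L * 1024 * 1024 * 1024;   // assumed: OS + other processes
        long stackSize = 128L * 1024;               // -Xss128k

        // Memory left for native thread stacks divided by the per-thread stack size.
        long roughThreadBudget = (totalRam - javaHeap - osReserve) / stackSize;
        System.out.println("rough native-thread budget: " + roughThreadBudget); // ~53,000
    }
}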


If your job is failing because of OutOfMemory errors on nodes, you can tweak the maximum number of maps and reducers and the JVM opts for each. mapred.child.java.opts (the default is -Xmx200m) usually has to be increased based on your data nodes' specific hardware.

This link might be helpful... please check.


I had this same issue, and it turned out to be improper usage of a Java API. I was initializing a builder in a batch-processing method that was not supposed to be initialized more than once.

Basically I was doing something like:

for (batch in batches) {
    process_batch(batch)
}

def process_batch(batch) {
    var client = TransportClient.builder().build()   // a new client (and its threads) is created for every batch
    client.processList(batch)
}

when I should have done this:

var client = TransportClient.builder().build()   // create the client once and reuse it

for (batch in batches) {
    process_batch(batch, client)
}

def process_batch(batch, client) {
    client.processList(batch)
}

If the JVM is started via systemd, there might be a TasksMax per-process limit (tasks here actually means threads) on some Linux distributions.

You can check this by running systemctl status <service> and looking for a Tasks limit. If there is one, you can remove it by editing /etc/systemd/system.conf and adding the setting DefaultTasksMax=infinity.


I would recommend also looking at the thread stack size and seeing whether you can get more threads created. The default thread stack size for JRockit 1.5/1.6 is 1 MB for a 64-bit VM on Linux. 32K threads will require a significant amount of physical and virtual memory to honor this requirement.

Try reducing the stack size to 512 KB as a starting point and see if it helps you create more threads for your application. I also recommend exploring horizontal scaling, e.g. splitting your application processing across more physical or virtual machines.
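The usual knob for this is the -Xss flag already present in the question's command line. As a side note (an illustration, not part of the original answer), Java also exposes a per-thread stack-size hint through the four-argument Thread constructor, which the JVM is free to ignore:

public class SmallStackThread {
    public static void main(String[] args) {
        Runnable work = () -> System.out.println("running with a suggested 512 KB stack");
        // The last argument is only a suggestion; behaviour is highly platform-dependent.
        Thread worker = new Thread(null, work, "worker-1", 512 * 1024);
        worker.start();
    }
}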

When using a 64-bit VM, the true limit will depend on the OS physical and virtual memory availability and OS tuning parameters such as ulimit. I also recommend the following article as a reference:

OutOfMemoryError: unable to create new native thread – Problem Demystified


You have a chance to face java.lang.OutOfMemoryError: Unable to create new native thread whenever the JVM asks the OS for a new thread. Whenever the underlying OS cannot allocate a new native thread, this OutOfMemoryError will be thrown. The exact limit for native threads is very platform-dependent, so it is recommended to find out those limits by running a test similar to the example in the link below. In general, the situation causing java.lang.OutOfMemoryError: Unable to create new native thread goes through the following phases:

  1. A new Java thread is requested by an application running inside the JVM.
  2. The JVM native code proxies the request to create a new native thread to the OS.
  3. The OS tries to create a new native thread, which requires memory to be allocated to the thread.
  4. The OS will refuse native memory allocation either because the 32-bit Java process size has depleted its memory address space (e.g. the 2-4 GB process size limit has been hit) or the virtual memory of the OS has been fully depleted.
  5. The java.lang.OutOfMemoryError: Unable to create new native thread error is thrown.

Reference: https://plumbr.eu/outofmemoryerror/unable-to-create-new-native-thread
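A minimal sketch of such a limit-probing test (an illustration in the spirit of the referenced article, not its exact code; run it only on a throwaway machine, since it deliberately exhausts thread resources):

public class ThreadLimitProbe {
    public static void main(String[] args) {
        int created = 0;
        try {
            while (true) {
                // Each thread just parks so it stays alive and keeps its native stack.
                new Thread(() -> {
                    try {
                        Thread.sleep(Long.MAX_VALUE);
                    } catch (InterruptedException ignored) {
                    }
                }).start();
                created++;
            }
        } catch (OutOfMemoryError e) {
            // "unable to create new native thread" surfaces here.
            System.out.println("threads created before failure: " + created);
        }
    }
}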


Even though the exception name strongly suggests a memory problem, this is an operating system resource problem: you are running out of native threads, i.e. the number of threads the operating system will allow your JVM to use.

This is an uncommon problem, because you rarely need that many. Do you have a lot of unconditional thread spawning, where the threads should finish but don't?

You might consider rewriting your code to use Callables/Runnables under the control of an Executor, if at all possible. There are plenty of standard executors with various behaviours that your code can easily control.
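A minimal sketch of that approach (an illustration, not from the answer; the pool size of 200 and the doWork method are placeholders): submit the work to a fixed-size pool instead of spawning a new thread per task.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BoundedPoolExample {
    public static void main(String[] args) throws InterruptedException {
        // The pool caps the number of native threads; extra tasks wait in its queue.
        ExecutorService pool = Executors.newFixedThreadPool(200);
        for (int i = 0; i < 100_000; i++) {
            final int task = i;
            pool.submit(() -> doWork(task));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }

    private static void doWork(int task) {
        // Placeholder for the real per-task work.
    }
}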

(There are many reasons why the number of threads is limited, but they vary from operating system to operating system)


It's likely that your OS does not allow the number of threads you're trying to create, or you're hitting some limit in the JVM. Especially if it's such a round number as 32k, a limit of one kind or another is a very likely culprit.

Are you sure you truly need 32k threads? Most modern languages have some kind of support for pools of reusable threads - I'm sure Java has something in place too (like ExecutorService, as user Jesper mentioned). Perhaps you could request threads from such a pool, instead of manually creating new ones.


I encountered the same issue during a load test; the reason is that the JVM is unable to create any further Java threads. Below is the relevant JVM source code:

if (native_thread->osthread() == NULL) {
    // No one should hold a reference to the 'native_thread'.
    delete native_thread;
    if (JvmtiExport::should_post_resource_exhausted()) {
        JvmtiExport::post_resource_exhausted(
            JVMTI_RESOURCE_EXHAUSTED_OOM_ERROR | JVMTI_RESOURCE_EXHAUSTED_THREADS,
            "unable to create new native thread");
    }
    THROW_MSG(vmSymbols::java_lang_OutOfMemoryError(), "unable to create new native thread");
}

Thread::start(native_thread);

Root cause: the JVM throws this exception when it hits JVMTI_RESOURCE_EXHAUSTED_OOM_ERROR (resources, i.e. memory, exhausted) or JVMTI_RESOURCE_EXHAUSTED_THREADS (threads exhausted).

In my case, JBoss was creating too many threads to serve the requests, but all of those threads were blocked. Because of this, the JVM exhausted both its threads and its memory (each thread holds memory that is not released, because each thread is blocked).

Analyzing the Java thread dumps, we observed that nearly 61K threads were blocked by one of our methods, which was causing this issue. Below is a portion of the thread dump:

"SimpleAsyncTaskExecutor-16562" #38070 prio=5 os_prio=0 tid=0x00007f9985440000 nid=0x2ca6 waiting for monitor entry [0x00007f9d58c2d000]
   java.lang.Thread.State: BLOCKED (on object monitor)
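
As a complement to reading raw dumps, here is a minimal sketch (an illustration, not from the answer) that counts BLOCKED threads from inside the JVM via the ThreadMXBean API:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BlockedThreadCounter {
    public static void main(String[] args) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long blocked = 0;
        // dumpAllThreads(false, false): no monitor/synchronizer details, just thread states.
        for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
            if (info != null && info.getThreadState() == Thread.State.BLOCKED) {
                blocked++;
            }
        }
        System.out.println("BLOCKED threads: " + blocked);
    }
}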