[node.js] How, in general, does Node.js handle 10,000 concurrent requests?

I understand that Node.js uses a single thread and an event loop to process requests, only processing one at a time (which is non-blocking). But still, how does that work for, let's say, 10,000 concurrent requests? Will the event loop process all the requests? Wouldn't that take too long?

I cannot understand (yet) how it can be faster than a multi-threaded web server. I understand that a multi-threaded web server will be more expensive in resources (memory, CPU), but wouldn't it still be faster? I am probably wrong; please explain how this single thread is faster with lots of requests, and what it typically does (at a high level) when servicing lots of requests, like 10,000.

Also, will that single thread scale well to such a large number of requests? Please bear in mind that I am just starting to learn Node.js.

This question is related to node.js

The answer is


Adding to slebetman's answer for more clarity on what happens while executing the code.

The internal thread pool in Node.js has just 4 threads by default. It is not that the whole request gets attached to a new thread from the thread pool; the execution of the request happens just like any normal request (without any blocking task), except that whenever a request involves a long-running or heavy operation like a DB call, a file operation or an HTTP request, the task is queued to the internal thread pool, which is provided by libuv. And as Node.js provides 4 threads in the internal thread pool by default, every 5th or subsequent concurrent operation waits until a thread is free. Once these operations are over, the callback is pushed to the callback queue, where it is picked up by the event loop, which then sends back the response.
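
A rough sketch of that thread-pool limit (the numbers logged are only illustrative; crypto.pbkdf2 is one of the operations libuv runs on its thread pool):

    const crypto = require('crypto');

    const start = Date.now();

    // Queue six heavy pbkdf2 jobs. libuv runs them on its thread pool,
    // which has 4 threads by default, so roughly four finish first and
    // the remaining two wait for a free thread.
    for (let i = 1; i <= 6; i++) {
      crypto.pbkdf2('password', 'salt', 100000, 64, 'sha512', () => {
        console.log(`job ${i} done after ${Date.now() - start} ms`);
      });
    }

    // The pool size can be changed with the UV_THREADPOOL_SIZE
    // environment variable (it must be set before the process starts).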

Now here comes another piece of information: it is not one single callback queue, there are many queues.

  1. NextTick queue
  2. Microtask queue
  3. Timers queue
  4. IO callback queue (requests, file ops, DB ops)
  5. IO poll queue
  6. Check phase queue (setImmediate)
  7. Close handlers queue

Whenever a request comes in, the code executes and its callbacks are queued and processed in this order.

It is not the case that a blocking request gets attached to a new thread; there are only 4 threads by default, so there is another level of queueing happening there.

Whenever a blocking operation like a file read occurs in the code, it calls a function that utilises a thread from the thread pool; once the operation is done, the callback is passed to the respective queue and then executed in order.

Everything gets queued based on the type of callback and processed in the order mentioned above.
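
A small sketch of that ordering from inside an IO callback, so that all of the queues above come into play in one pass of the loop (output as on current Node versions):

    const fs = require('fs');

    fs.readFile(__filename, () => {
      // We are now inside an IO callback.
      setTimeout(() => console.log('timer'), 0);
      setImmediate(() => console.log('setImmediate'));
      process.nextTick(() => console.log('nextTick'));
      Promise.resolve().then(() => console.log('microtask'));
      console.log('io callback body');
    });

    // Output:
    //   io callback body
    //   nextTick
    //   microtask
    //   setImmediate
    //   timer
    // The nextTick and microtask queues are drained first, the check phase
    // (setImmediate) follows the poll phase, and the timer fires on the
    // next turn of the loop.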


I understand that Node.js uses a single thread and an event loop to process requests, only processing one at a time (which is non-blocking).

I could be misunderstanding what you've said here, but "one at a time" sounds like you may not be fully understanding the event-based architecture.

In a "conventional" (non event-driven) application architecture, the process spends a lot of time sitting around waiting for something to happen. In an event-based architecture such as Node.js the process doesn't just wait, it can get on with other work.

For example: you get a connection from a client, you accept it, you read the request headers (in the case of http), then you start to act on the request. You might read the request body, you will generally end up sending some data back to the client (this is a deliberate simplification of the procedure, just to demonstrate the point).

At each of these stages, most of the time is spent waiting for some data to arrive from the other end - the actual time spent processing in the main JS thread is usually fairly minimal.
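
A rough sketch of those stages with Node's built-in http module (error handling omitted for brevity):

    const http = require('http');

    const server = http.createServer((req, res) => {
      // By the time this callback runs, Node has already accepted the
      // connection and parsed the request headers.
      let body = '';

      // The body arrives in chunks as data comes in off the socket;
      // between chunks the JS thread is free to service other connections.
      req.on('data', (chunk) => { body += chunk; });

      req.on('end', () => {
        // The whole body has arrived; do a little work and reply.
        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end(`received ${body.length} bytes\n`);
      });
    });

    server.listen(3000);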

When the state of an I/O object (such as a network connection) changes such that it needs processing (e.g. data is received on a socket, a socket becomes writable, etc) the main Node.js JS thread is woken with a list of items needing to be processed.

It finds the relevant data structure and emits some event on that structure which causes callbacks to be run, which process the incoming data, or write more data to a socket, etc. Once all of the I/O objects in need of processing have been processed, the main Node.js JS thread will wait again until it's told that more data is available (or some other operation has completed or timed out).

The next time that it is woken, it could well be due to a different I/O object needing to be processed - for example a different network connection. Each time, the relevant callbacks are run and then it goes back to sleep waiting for something else to happen.

The important point is that the processing of different requests is interleaved, it doesn't process one request from start to end and then move onto the next.

To my mind, the main advantage of this is that a slow request (e.g. you're trying to send 1MB of response data to a mobile phone device over a 2G data connection, or you're doing a really slow database query) won't block faster ones.
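
A sketch of that point (the 2-second timer stands in for a slow database query or a slow client; the route names are made up for the example):

    const http = require('http');

    http.createServer((req, res) => {
      if (req.url === '/slow') {
        // Stand-in for a slow query or a slow client: the timer is
        // non-blocking, so the thread is free while it is pending.
        setTimeout(() => res.end('slow done\n'), 2000);
      } else {
        // A cheap request is answered immediately, even while any
        // number of /slow requests are still outstanding.
        res.end('fast done\n');
      }
    }).listen(3000);

    // curl localhost:3000/slow &   -> replies after ~2 seconds
    // curl localhost:3000/fast     -> replies right away, not queued behind it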

In a conventional multi-threaded web server, you will typically have a thread for each request being handled, and it will process ONLY that request until it's finished. What happens if you have a lot of slow requests? You end up with a lot of your threads hanging around processing these requests, and other requests (which might be very simple requests that could be handled very quickly) get queued behind them.

There are plenty of other event-based systems apart from Node.js, and they tend to have similar advantages and disadvantages compared with the conventional model.

I wouldn't claim that event-based systems are faster in every situation or with every workload - they tend to work well for I/O-bound workloads, not so well for CPU-bound ones.


Single Threaded Event Loop Model Processing Steps:

  • Clients send requests to the Web Server.

  • The Node JS Web Server internally maintains a limited Thread Pool to provide services to the client requests.

  • The Node JS Web Server receives those requests and places them into a queue, known as the "Event Queue".

  • The Node JS Web Server internally has a component known as the "Event Loop". It got this name because it uses an indefinite loop to receive requests and process them.

  • The Event Loop uses a single thread only. It is the heart of the Node JS platform's processing model.

  • The Event Loop checks whether any client request is placed in the Event Queue. If not, it waits indefinitely for incoming requests.

  • If yes, it picks up one client request from the Event Queue and:

    1. Starts processing that client request.
    2. If that client request does not require any blocking IO operations, processes everything, prepares the response and sends it back to the client.
    3. If that client request requires some blocking IO operations, like interacting with a database, the file system or external services, then it follows a different approach (see the sketch below):
       • It checks thread availability in the internal Thread Pool.
       • It picks up one thread and assigns the client request to that thread.
       • That thread is responsible for taking the request, processing it, performing the blocking IO operations, preparing the response and sending it back to the Event Loop.

This is very nicely explained by @Rambabu Posa; for more explanation, go through this link.
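
A rough sketch of the two paths above (a file read stands in for the blocking IO operation; the file name and route are hypothetical):

    const fs = require('fs');
    const http = require('http');

    http.createServer((req, res) => {
      if (req.url === '/no-io') {
        // Step 2: no blocking IO needed - prepare the response and
        // send it straight back from the event-loop thread.
        res.end('hello\n');
      } else {
        // Step 3: blocking IO (a file read) - the read is handed off
        // (libuv uses its thread pool for file operations), the event
        // loop moves on, and the response is sent from the callback
        // once the read completes.
        fs.readFile('./data.txt', 'utf8', (err, data) => {
          if (err) return res.end('could not read file\n');
          res.end(data);
        });
      }
    }).listen(3000);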


What you seem to be thinking is that most of the processing is handled in the Node event loop. Node actually farms off the I/O work to threads. I/O operations typically take orders of magnitude longer than CPU operations, so why have the CPU wait for that? Besides, the OS can handle I/O tasks very well already. In fact, because Node does not wait around, it achieves much higher CPU utilisation.

By way of analogy, think of NodeJS as a waiter taking customers' orders while the I/O chefs prepare them in the kitchen. Other systems have multiple chefs, who take a customer's order, prepare the meal, clear the table and only then attend to the next customer.
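
To put the analogy in code (the file path is just an example): the read is handed to "the kitchen" while the main thread keeps taking orders.

    const fs = require('fs');

    // Hand the order to the kitchen (the OS / libuv performs the read)...
    fs.readFile('/etc/hosts', 'utf8', (err, data) => {
      console.log('meal is ready: file read finished');
    });

    // ...and keep waiting tables in the meantime.
    console.log('taking the next order');

    // Prints "taking the next order" first, then the read callback.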


Adding to slebetman's answer: when we say Node.js can handle 10,000 concurrent requests, they are essentially non-blocking requests, i.e. these requests mostly pertain to database queries.

Internally, the event loop of Node.js works with a thread pool, where each thread handles one such request; the event loop continues to listen for more requests after delegating the work to one of the threads of the thread pool. When one of the threads completes the work, it signals the event loop that it has finished, a.k.a. the callback. The event loop then processes this callback and sends the response back.

As you are new to NodeJS, do read more about nextTick to understand how the event loop works internally. Read the blogs on http://javascriptissexy.com; they were really helpful for me when I started with JavaScript/NodeJS.
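
As a tiny starting point for nextTick: callbacks passed to process.nextTick run before the event loop is allowed to move on to timers and I/O callbacks.

    setTimeout(() => console.log('timer callback'), 0);
    process.nextTick(() => console.log('nextTick callback'));
    console.log('synchronous code');

    // Prints:
    //   synchronous code
    //   nextTick callback
    //   timer callback
    // The nextTick queue is drained before the event loop continues, so it
    // runs before the timer even though the timer was registered first.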