There is always data that has to be consumed. When it is small, it can be consumed in one go, but as data grows we need something that lets the reader consume it at its own pace from the source, until the source is depleted or closed. The same applies to writing data to a destination.
We have all used streams in Node.js before, often without realizing it. Common cases include reading from or writing to a socket connection and reading from or writing to a file. An HTTP connection in Node gives us two objects, req and res, both of which are streams, and that is the analogy we will use in this article.
In the above image, we see a writer writing to a source and a consumer reading from it; the source could be a TCP connection, a disk, or anything similar. The key thing to understand is that on the other end of a consumer or writer there is always another writer or consumer. Writers do not write directly to the end source but to the stream, and consumers read from the stream. The stream here is essentially an abstraction that Node provides.
Why do we need streams? Sometimes the consumer cannot keep up with the speed at which data is produced from the source, or the source cannot handle writes from the writer fast enough. To solve this we need a backoff or flow-control mechanism, similar to what the TCP protocol provides, and that is exactly what streams give us: an application-level abstraction.
Node streams can be categorized into a few types: Readable, Writable, Duplex, and Transform.
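As a quick sketch of what each type looks like in practice (the file names here are placeholders, not files the article assumes you have):

const fs = require('fs');
const net = require('net');
const zlib = require('zlib');

const readable = fs.createReadStream('input.txt');      // Readable: data flows out of it
const writable = fs.createWriteStream('input.txt.gz');   // Writable: data flows into it
const duplex = new net.Socket();                          // Duplex: a TCP socket can be both read and written
const transform = zlib.createGzip();                      // Transform: rewrites data passing through it

// Streams can be chained: gzip input.txt into input.txt.gz.
readable.pipe(transform).pipe(writable);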
When you write res.write("Hello world") in your Node.js application, have you ever wondered what actually happens to that string? Where does it go, and how does it eventually reach the browser or client on the other end?
Let's trace the journey of your data, from a single JavaScript function call all the way across the network. Whether you're using res.write() for HTTP responses, socket.write() for raw TCP connections, or any other Writable stream, the underlying mechanics are surprisingly similar.
res.write("Hello world"); // or socket.write(buffer);
It all begins with you. When your code calls a write method, you hand Node.js a chunk of data (string, Buffer, or other supported type).
Node places this data into the writable stream’s internal buffer. Think of this as a small staging area: “Got it — I’ll take it from here.”
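To make that staging area concrete, here is a minimal sketch; the slowSink stream and its tiny 8-byte highWaterMark are made up purely for illustration:

const { Writable } = require('stream');

// A deliberately slow Writable with a tiny internal buffer (highWaterMark: 8 bytes).
const slowSink = new Writable({
  highWaterMark: 8,
  write(chunk, encoding, callback) {
    // Pretend the destination takes 100 ms to accept each chunk.
    setTimeout(callback, 100);
  }
});

console.log(slowSink.write('Hello'));         // true: the chunk fits in the staging area
console.log(slowSink.write(' world, again')); // false: the buffer is now past highWaterMark
console.log(slowSink.writableLength);         // bytes currently waiting in the internal buffer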
Node itself doesn't send bytes over the network. Instead, it relies on libuv, the C library that powers all of Node's low-level I/O.
Libuv makes a system call (like write() or send()) to the operating system. The OS then copies your data into its TCP send buffer inside the kernel.
At this moment, your data has moved from JavaScript-land into the OS’s domain.
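One way to observe this hand-off from JavaScript (localhost:4000 is a hypothetical server, assumed to be listening, for the sake of this sketch):

const net = require('net');

// Connect to a hypothetical TCP server on localhost:4000.
const socket = net.connect(4000, 'localhost', () => {
  socket.write('Hello world', () => {
    // This callback fires once the chunk has been flushed out of Node and
    // handed to the operating system's send buffer, not when the remote
    // peer has actually received it.
    console.log('Chunk handed off to the kernel');
  });
});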
The operating system takes over the heavy lifting: it splits the data into TCP segments, adds the necessary headers, handles acknowledgments and retransmissions, and applies flow and congestion control before handing packets to the network interface.
From Node.js’s perspective, the data has been “written.” But physically, it’s the OS and the network stack that push the bytes onto the wire and ensure they reach the recipient.
Importantly, packets begin leaving the machine as soon as you call write() — they don’t wait for you to call end().
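You can watch this happen with a small server and curl (port 3001 and the one-second delay are arbitrary choices for this sketch):

const http = require('http');

http.createServer((req, res) => {
  // Each write is pushed toward the client right away (HTTP chunked
  // transfer encoding); the client starts receiving data well before end().
  res.write('first chunk\n');
  setTimeout(() => {
    res.write('second chunk, one second later\n');
    res.end('done\n');
  }, 1000);
}).listen(3001);

// In another terminal: curl -N http://localhost:3001
// The first chunk appears immediately; the rest arrives a second later.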
Node.js doesn't just fire-and-forget when you write. Each stream.write(chunk) call returns a boolean: true means the chunk was handled and you can keep writing; false means the internal buffer has grown past highWaterMark and you should stop writing until it drains.
This mechanism is called backpressure. It prevents your application from writing data faster than the OS (and the network) can handle.
When the pressure eases — meaning space frees up in the buffer — Node.js emits a 'drain' event to signal: “Okay, you can resume writing now.”
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  // Create a readable file stream
  const fileStream = fs.createReadStream('bigfile.txt');

  // When the file emits 'data', try writing it to the response
  fileStream.on('data', chunk => {
    const ok = res.write(chunk); // try to write
    if (!ok) {
      // Pause the file stream if the res buffer is full
      fileStream.pause();
      // Wait for 'drain' before resuming
      res.once('drain', () => {
        fileStream.resume();
      });
    }
  });

  fileStream.on('end', () => {
    res.end(); // finish the HTTP response
  });
}).listen(3000, () => console.log('Server running on http://localhost:3000'));
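Incidentally, this manual pause/resume dance is what pipe() and the newer stream.pipeline() do for you. A minimal sketch of the same server using pipeline (bigfile.txt is the same placeholder file as above):

const http = require('http');
const fs = require('fs');
const { pipeline } = require('stream');

http.createServer((req, res) => {
  // pipeline() wires up backpressure (pause/resume on 'drain') automatically
  // and also tears down both streams if either side errors out.
  pipeline(fs.createReadStream('bigfile.txt'), res, err => {
    if (err) console.error('Stream failed:', err);
  });
}).listen(3000);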
When you're finished, you call stream.end(). Node then flushes any data still sitting in the internal buffer, signals the other side that no more data is coming (for a TCP socket this means sending a FIN), and emits a 'finish' event once everything has been handed off.
This graceful shutdown ensures all data is delivered and that both sides know the stream has closed.
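A small sketch of that shutdown sequence (output.txt is just a placeholder file name):

const fs = require('fs');

const out = fs.createWriteStream('output.txt');
out.write('first chunk\n');
out.write('second chunk\n');

// end() accepts an optional final chunk; after it, no more writes are allowed.
out.end('last chunk\n');

// 'finish' fires once every buffered chunk has been flushed to the file.
out.once('finish', () => console.log('All data flushed, stream closed'));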
Streams in Node.js may look simple on the surface, but under the hood they provide a powerful abstraction for handling data efficiently. By buffering data, applying backpressure, and coordinating with the operating system’s TCP layer, streams make it possible to work with anything from small files to huge network payloads without overwhelming your application.
Whether you’re reading from a request, writing a response, or piping data from one place to another, streams ensure that producers and consumers can work at their own pace. Understanding this flow — and the mechanisms like buffering, backpressure, and drain — helps you write more efficient and resilient Node.js applications.
Key takeaway: Streams let Node.js handle data piece by piece, at the speed each side can manage.