Every article about Node.js tends to repeat the same ideas.
- It is single-threaded.
- JavaScript execution is synchronous.
- It is not ideal for CPU-heavy work.
- It shines at I/O-bound workloads.
All of these are true, and together they make Node.js an excellent tool for building highly performant web application servers.
But none of that really prepares you for this question:
What happens when you are not building a web server, but a logging system?
When logging becomes a systems problem
At first, logging feels trivial.
process.stdout.write("hello world\n");
You take an input and write it to stdout. Done.
Except it is not done.
That single line ends in a syscall. Execution crosses from user space to kernel space, and that crossing is expensive. The OS helps by buffering and batching those writes, but now your system is no longer behaving the way your code suggests.
You think you are writing logs immediately. The OS is deciding when that actually happens.
That mismatch is where the problems start.
The naive approach
If you scale this up, the issue becomes obvious.
for (let i = 0; i < 10000; i++) {
  process.stdout.write(`log ${i}\n`);
}
It works, but you are now:
- issuing thousands of syscalls - we already established that this is expensive
- competing with your own application for I/O - the OS performs timesharing between answering your syscalls and handling application requests
- relying entirely on the OS to smooth things out
At some point, this stops being harmless.
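You can make the cost visible without tracing actual syscalls. The sketch below counts how many write operations reach a sink, as a rough stand-in for syscall count; the `countingSink` helper is illustrative and not part of any real API.

```typescript
import { Writable } from "node:stream";

// A sink that counts how many underlying write operations it receives.
// Each _write invocation is roughly where a syscall would happen.
function countingSink() {
  let calls = 0;
  const sink = new Writable({
    write(_chunk, _encoding, callback) {
      calls++;
      callback();
    },
  });
  return { sink, count: () => calls };
}

// naive: one write per log line
const naive = countingSink();
for (let i = 0; i < 10000; i++) {
  naive.sink.write(`log ${i}\n`);
}

// batched: accumulate lines, then write once
const batched = countingSink();
const lines: string[] = [];
for (let i = 0; i < 10000; i++) {
  lines.push(`log ${i}\n`);
}
batched.sink.write(lines.join(""));
```

Once both sinks finish, the naive one has seen one write per line while the batched one has seen a single write, which is the whole argument for batching in one picture.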
Introducing a buffer
The first instinct is to batch.
Instead of writing immediately, you accumulate logs and flush them later. You can perform the flush periodically or based on a size limit.
type LogEvent = {
  message: string;
  timestamp: number;
};

const queue: LogEvent[] = [];

function log(event: LogEvent) {
  queue.push(event);
}

function flush() {
  while (queue.length > 0) {
    const event = queue.shift();
    process.stdout.write(JSON.stringify(event) + "\n");
  }
}
This reduces the number of syscalls and gives you control.
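Wiring the two triggers together looks something like this. It is a self-contained restatement of the queue above with both a size trigger and a time trigger added; `MAX_BATCH` and the one-second interval are arbitrary illustrative values.

```typescript
type LogEvent = { message: string; timestamp: number };

const queue: LogEvent[] = [];
const MAX_BATCH = 100; // illustrative size threshold

function log(event: LogEvent) {
  queue.push(event);
  // size-based trigger: flush as soon as a batch is full
  if (queue.length >= MAX_BATCH) flush();
}

function flush() {
  if (queue.length === 0) return;
  // one write call for the whole batch instead of one per event
  const payload = queue.map((e) => JSON.stringify(e)).join("\n") + "\n";
  queue.length = 0;
  process.stdout.write(payload);
}

// time-based trigger: flush whatever accumulated every second
const timer = setInterval(flush, 1000);
timer.unref(); // do not keep the process alive just for flushing
```

The size trigger bounds latency under load; the timer bounds latency when traffic is sparse. You generally want both.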
But now you have created a new problem.
When the producer outruns the consumer
Your application can produce logs much faster than stdout can consume them.
So you introduce limits.
let queuedBytes = 0;
const MAX_QUEUE_BYTES = 4 * 1024 * 1024;

function canAccept(size: number) {
  return queuedBytes + size <= MAX_QUEUE_BYTES;
}
Now you have to decide what happens when the queue is full.
Do you drop logs? Do you remove older logs? Do you slow down the producer?
There is no perfect answer. Each choice trades reliability for latency or memory.
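To make the trade-off concrete, here is a sketch of two of those policies side by side: drop-newest (reject the incoming event) and drop-oldest (evict from the front to make room). The function names and the queue shape are illustrative, not part of the pipeline built later.

```typescript
type LogEvent = { message: string; timestamp: number };

const MAX_QUEUE_BYTES = 4 * 1024 * 1024;
let queuedBytes = 0;
const queue: Array<{ event: LogEvent; size: number }> = [];

// drop-newest: reject the incoming event when the queue is full.
// Favors old logs; under sustained pressure you lose the most recent data.
function enqueueDropNewest(event: LogEvent, size: number): boolean {
  if (queuedBytes + size > MAX_QUEUE_BYTES) return false;
  queue.push({ event, size });
  queuedBytes += size;
  return true;
}

// drop-oldest: evict from the front until the new event fits.
// Favors recent logs; under sustained pressure you lose history.
function enqueueDropOldest(event: LogEvent, size: number): boolean {
  while (queue.length > 0 && queuedBytes + size > MAX_QUEUE_BYTES) {
    queuedBytes -= queue.shift()!.size;
  }
  if (size > MAX_QUEUE_BYTES) return false; // event larger than the whole budget
  queue.push({ event, size });
  queuedBytes += size;
  return true;
}
```

A third option, slowing the producer down, is what backpressure handling gives you, and it is where this article goes next.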
At this point, you are no longer just writing logs. You are designing a system.
Streams and backpressure
Node.js streams help a bit because they maintain a configurable internal buffer, which batches writes efficiently. But they do not solve the problem for you: they signal backpressure, they do not handle it.
const ok = stream.write(data);
If you attempt to write to a stream and ok is false, the buffer is full. That is all you get.
Streams do not decide what to do next. They just signal pressure. You have to handle it.
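You can observe the signal directly with a Writable that has a deliberately tiny buffer. The highWaterMark value and the slow-consumer delay below are illustrative.

```typescript
import { Writable } from "node:stream";

// a slow sink with a tiny buffer so backpressure shows up immediately
const slowSink = new Writable({
  highWaterMark: 8, // bytes, deliberately tiny
  write(_chunk, _encoding, callback) {
    setTimeout(callback, 10); // simulate a slow consumer
  },
});

const accepted = slowSink.write("small"); // fits under the high-water mark
const rejected = slowSink.write("this chunk overflows the buffer");

// `rejected === false` does not mean the data was lost: it was buffered,
// but the stream is asking you to stop until it emits "drain"
slowSink.once("drain", () => {
  // safe to resume writing here
});
```

Note that the second write still succeeded in the sense that the data is buffered. The false return value is advisory, which is exactly why you have to act on it yourself.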
Building a real pipeline
This is where everything comes together. Instead of relying on implicit behavior, you build an explicit pipeline.
export class LogsPipeline {
  private queue: LogEvent[] = [];
  private isDraining = false;

  handle(event: LogEvent) {
    this.queue.push(event);
    this.flush();
  }

  private flush() {
    if (this.isDraining) return;

    while (this.queue.length > 0) {
      const payload = JSON.stringify(this.queue[0]) + "\n";
      const ok = process.stdout.write(payload);

      if (!ok) {
        this.isDraining = true;
        process.stdout.once("drain", () => {
          this.isDraining = false;
          this.flush();
        });
        return;
      }

      this.queue.shift();
    }
  }
}
This already changes everything.
You now:
- control when data is written
- react to backpressure explicitly
- decouple log production from log flushing
But one problem still remains.
The shutdown problem
Even with a queue and backpressure handling, logs can still be lost.
Not because your logic is wrong, but because the process can exit while:
- logs are still in your queue
- or sitting inside the OS buffer
That means a successful write does not guarantee persistence.
So the real question becomes:
how do you ensure all logs are flushed before shutdown?
The FlushWaiters pattern
This is where the FlushWaiters pattern comes in.
The problem is subtle.
Your flushing logic is synchronous in structure, but shutdown is asynchronous. You cannot just call a function and wait for everything to be done because you do not control when the system becomes idle.
So instead, you register intent.
private flushWaiters: Array<() => void> = [];

async flushAll(): Promise<void> {
  if (this.queue.length === 0 && !this.isDraining) {
    return;
  }
  return new Promise((resolve) => {
    this.flushWaiters.push(resolve);
    this.flush();
  });
}
You are not forcing completion. You are saying:
when the system becomes idle, resolve this promise
Then, inside your pipeline, you check when it is safe.
private resolveFlushWaitersIfIdle() {
  if (this.queue.length === 0 && !this.isDraining) {
    const waiters = this.flushWaiters;
    this.flushWaiters = [];
    for (const resolve of waiters) resolve();
  }
}
That is the core idea.
Register now. Resolve later.
Putting everything together
At this point, the pipeline evolves into something more complete.
export class LogsPipeline {
  private producedCount = 0;
  private flushedCount = 0;
  private isDraining = false;
  private queue: QueueItem[] = [];
  private queuedBytes = 0;
  private peakQueuedBytes = 0;
  private flushWaiters: Array<() => void> = [];
  private isShuttingDown = false;
  private hardMaxQueueBytes: number;

  constructor(private config: PipelineConfig) {
    this.hardMaxQueueBytes = config.maxQueueBytes;
  }

  handle(event: LogEvent): boolean {
    this.producedCount++;

    if (this.isShuttingDown) {
      return false;
    }

    const size = this.estimateSize(event);
    if (this.queuedBytes + size > this.hardMaxQueueBytes) {
      return false;
    }

    this.queue.push({ event, estimatedPayloadBytes: size });
    this.queuedBytes += size;
    this.peakQueuedBytes = Math.max(this.peakQueuedBytes, this.queuedBytes);
    this.flush();
    return true;
  }

  async flushAll(): Promise<void> {
    this.isShuttingDown = true;
    if (this.queue.length === 0 && !this.isDraining) {
      return;
    }
    return new Promise((resolve) => {
      this.flushWaiters.push(resolve);
      this.flush();
    });
  }

  private flush() {
    if (this.isDraining) return;

    while (this.queue.length > 0) {
      const item = this.queue[0];
      const payload = JSON.stringify(item.event) + "\n";
      const ok = this.config.writer.write(payload);

      if (!ok) {
        this.isDraining = true;
        this.config.writer.once("drain", () => {
          this.isDraining = false;
          this.flush();
          this.resolveFlushWaitersIfIdle();
        });
        return;
      }

      this.queue.shift();
      this.queuedBytes -= item.estimatedPayloadBytes;
      this.flushedCount++;
    }

    this.resolveFlushWaitersIfIdle();
  }

  private resolveFlushWaitersIfIdle() {
    if (this.queue.length === 0 && !this.isDraining) {
      const waiters = this.flushWaiters;
      this.flushWaiters = [];
      for (const resolve of waiters) resolve();
    }
  }

  private estimateSize(event: LogEvent) {
    return 64 + event.message.length + JSON.stringify(event.context ?? {}).length;
  }
}
Wiring it to process lifecycle
Now you connect it to shutdown.
process.on("SIGINT", async () => {
  await pipeline.flushAll();
  // process.exit()
});
The key detail here is restraint.
You do not call process.exit() immediately. You let the application decide when to exit. Your responsibility is to make sure that when shutdown is requested, the system has a way to safely drain.
What changed
What started as a simple logging function turned into:
- a buffered system
- a backpressure-aware pipeline
- a bounded queue with policies
- a lifecycle-aware shutdown mechanism
The most important idea in the FlushWaiters pattern is not batching or buffering.
It is this:
register intent now, resolve it when the system is truly done
That is the FlushWaiters pattern.
