Picture this.
The Product Manager just dropped another message in the Slack channel. You can almost feel the frustration through the screen.
Why wouldn’t she be frustrated?
This is probably the 50th time she has sent the same message, and it is not even Wednesday yet.
The question always follows the same pattern:
A customer just reached out and said it seems our server is down.
@John Doe, why is it always the customer telling us first?
Is there no way for us to know before they do?
John is an engineer who genuinely cares about ownership. He already knows a little about observability from a previous role, but back then he was mostly focused on shipping features, closing Jira tickets, and moving fast. Reliability was someone else’s problem.
The DevOps engineer on his old team would occasionally send messages like:
CPU and memory usage are sitting at 80%. Something looks wrong. Did anyone deploy recently?
John never really cared how those numbers were collected. To him, observability sounded like DevOps jargon. His job was to build APIs, not think about dashboards, traces, or infrastructure telemetry.
Later, while preparing for interviews, he started reading about observability and OpenTelemetry. He understood the concepts at a high level and even tried instrumenting a personal Express project.
The experience was rough.
There were too many moving parts. Different packages for metrics. Different packages for tracing. More configuration. More setup. More things to stitch together.
Like most people, he abandoned the experiment halfway through.
Now he is at a new company, watching the same production problems repeat themselves over and over again.
He knows the solution is observability, but there are problems:
- Wiring observability into an existing application takes time
- The team prioritises feature delivery over reliability
- Existing hosted tools are expensive
- Previous observability setup attempts felt painful and fragmented
- Nobody wants to pause feature work for “infrastructure improvements”
So when the Product Manager drops the same Slack message again on Monday morning, John stays quiet and hopes someone else picks up the task.
And honestly, this is how many teams operate.
Especially lean teams.
They constantly have to choose between:
- feature development
- reliability engineering
Most times, feature development wins.
Observability gets pushed into the backlog. Then it stays there for months. Sometimes years.
Meanwhile:
- downtime increases
- debugging becomes harder
- customer trust slowly erodes
- the product experience gets worse
Eventually, customers start looking for alternatives.
Why I Built Corelens
Corelens exists because I ran into this exact problem at work.
I manage several distributed microservices across both backend and infrastructure responsibilities. Most of the services are built with Node.js.
I needed visibility into:
- service health
- request duration
- inter-service communication
- runtime performance
- failures across async boundaries
So I tried using the OpenTelemetry ecosystem for Node.js.
While it worked… kind of… the setup experience felt fragmented.
To get a reasonably complete setup working, I found myself stitching together:
- metrics packages
- tracing packages
- exporters
- framework instrumentation libraries
- collector configuration
- lifecycle management logic
Eventually, I understood how everything fit together and built an internal library to standardise observability across our services.
And once it worked, I realised something:
If I struggled with this, many other Node.js engineers probably have too.
That became the foundation for Corelens.
What Is Corelens?
Corelens is an observability SDK for Node.js applications.
It gives you:
- structured logs
- metrics
- traces
- runtime monitoring
- exporters
- batching
- retry handling
- graceful shutdown behaviour
without forcing you to wire everything together manually from day one.
The goal is simple:
Make production-grade observability easier to adopt for Node.js teams.
What Corelens Gives You
Structured Logging
Structured logs with optional trace correlation.
This makes it easier to connect:
- logs
- requests
- traces
- failures
across distributed services.
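As a concept sketch, a structured log entry is just a JSON object that carries its trace context alongside the message. The field names below (traceId, spanId, service) are illustrative, not Corelens's actual log schema:

```javascript
// Sketch of a structured log entry with trace correlation.
// Field names are illustrative, not Corelens's actual schema.
function structuredLog(level, message, context = {}) {
  return JSON.stringify({
    timestamp: new Date().toISOString(),
    level,
    message,
    ...context, // e.g. { traceId, spanId, service }
  });
}

const line = structuredLog('error', 'payment failed', {
  traceId: 'abc123',
  service: 'checkout',
});
```

Because the trace ID travels with every log line, a log aggregator can join entries from different services back into a single request timeline.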
Metrics
Corelens ships with built-in support for:
- counters
- gauges
- histograms
No external metrics library required.
You can use it for:
- request rates
- latency tracking
- queue depth
- internal service metrics
- business metrics
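To make the three instrument types concrete, here is a minimal sketch of their semantics (this is the underlying idea, not Corelens's API): a counter only goes up, a gauge can move both ways, and a histogram records a distribution of observations in cumulative buckets.

```javascript
// Concept sketch of the three metric instruments, not Corelens's API.
class Counter {
  constructor() { this.value = 0; }
  inc(by = 1) { this.value += by; }
}

class Gauge {
  constructor() { this.value = 0; }
  set(v) { this.value = v; }
}

class Histogram {
  constructor(buckets) {
    this.buckets = buckets;             // upper bounds, e.g. latency in ms
    this.counts = buckets.map(() => 0); // cumulative counts per bucket
    this.sum = 0;
    this.total = 0;
  }
  observe(v) {
    this.sum += v;
    this.total += 1;
    this.buckets.forEach((bound, i) => { if (v <= bound) this.counts[i] += 1; });
  }
}

// Typical uses: request rate (counter), queue depth (gauge),
// latency distribution (histogram).
const requests = new Counter();
const queueDepth = new Gauge();
const latencyMs = new Histogram([50, 100, 250, 500]);
requests.inc();
queueDepth.set(7);
latencyMs.observe(120); // lands in the 250 and 500 buckets
```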
HTTP Metrics and Tracing
Corelens currently supports:
- Express
- Fastify
- Hono
Once you register the framework adapter, Corelens automatically records:
- total HTTP requests
- request duration
- request traces
Example configuration:
metrics: {
  http: {
    enabled: true
  }
},
traces: {
  http: {
    enabled: true
  }
}
Runtime Metrics
Monitor process-level health metrics such as:
- memory usage
- event loop delay
- uptime
- process health
Simple setup:
metrics: {
  runtime: {
    enabled: true
  }
}
Prometheus Export
Corelens supports Prometheus-compatible text rendering.
Expose it through an endpoint and let Prometheus scrape it at your preferred interval.
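The text format itself is a public standard, so the output is easy to picture. This renderer is an illustrative sketch, not Corelens's implementation:

```javascript
// Sketch of Prometheus text exposition rendering for a counter.
// The format (# HELP / # TYPE / series lines) is the standard
// Prometheus text format; the renderer itself is illustrative.
function renderCounter(name, help, value, labels = {}) {
  const labelStr = Object.entries(labels)
    .map(([k, v]) => `${k}="${v}"`)
    .join(',');
  const series = labelStr ? `${name}{${labelStr}}` : name;
  return [
    `# HELP ${name} ${help}`,
    `# TYPE ${name} counter`,
    `${series} ${value}`,
  ].join('\n');
}

const body = renderCounter('http_requests_total', 'Total HTTP requests.', 42, {
  method: 'GET',
});
```

You would serve this text from an HTTP endpoint (conventionally /metrics) and point Prometheus's scrape config at it.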
Multiple Export Destinations
Corelens currently supports:
- Console export
- File export
- OTLP HTTP export
This makes it easy to:
- debug locally
- store telemetry in files
- send telemetry to OpenTelemetry-compatible backends
Production-Aware Export Pipeline
Corelens was designed with production systems in mind.
It includes:
- bounded queues
- configurable overflow policies
- retry behaviour
- circuit breakers
- graceful shutdown flushing
This helps prevent observability infrastructure from becoming the reason your application crashes.
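To show what "bounded queue with an overflow policy" means in practice, here is a concept sketch with a drop-oldest policy and a flush hook for graceful shutdown. Names and policies are illustrative; Corelens's pipeline layers retries and circuit breaking on top of this idea.

```javascript
// Concept sketch: a bounded export queue. When full, it either drops
// the oldest item (making room for new telemetry) or the newest one,
// instead of growing without limit and exhausting memory.
class BoundedQueue {
  constructor(capacity, overflowPolicy) {
    this.capacity = capacity;
    this.overflowPolicy = overflowPolicy; // 'drop-oldest' or 'drop-newest'
    this.items = [];
    this.dropped = 0;                     // surfaced via debug stats
  }
  push(item) {
    if (this.items.length >= this.capacity) {
      this.dropped += 1;
      if (this.overflowPolicy === 'drop-newest') return;
      this.items.shift(); // drop-oldest: make room for the new item
    }
    this.items.push(item);
  }
  flush(exportFn) {
    const batch = this.items;
    this.items = [];
    return exportFn(batch); // e.g. awaited once during graceful shutdown
  }
}

const queue = new BoundedQueue(2, 'drop-oldest');
queue.push('a');
queue.push('b');
queue.push('c'); // capacity hit: 'a' is dropped
```

The key design point is that telemetry loss is explicit and counted, rather than the application stalling or crashing when an exporter falls behind.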
Built-In Debug Visibility
Observability tools should also be observable.
Corelens exposes internal debug stats for:
- dropped telemetry
- exporter failures
- queue pressure
- shutdown results
- retry behaviour
So you can actually understand what the SDK itself is doing.
When Should You Use Corelens?
Corelens fits well when:
- you run Node.js APIs, workers, or gateways
- you want logs, metrics, and traces from one SDK
- you want OTLP export without assembling a large OpenTelemetry setup
- you prefer explicit framework adapters over global auto-instrumentation
- you care about bounded queues and graceful shutdown behaviour
- you want production-focused defaults
Core Mental Model
Corelens has four major parts:
Logs
Events describing what is happening inside your application.
Metrics
Numerical measurements for monitoring application and infrastructure health.
Tracing
Visibility into execution flow across:
- async boundaries
- service calls
- request lifecycles
using spans and trace correlation.
Export
Controls where telemetry data gets sent.
Corelens currently supports:
- Console
- File
- OpenTelemetry Collector via OTLP HTTP
Supported Framework Adapters
Corelens currently ships adapters for:
- Express
- Fastify
- Hono
Adapters are optional.
Install only what your application needs.
Future Plans
Corelens will continue to be actively developed by Daniel Okoronkwo.
Some planned improvements include:
- OTLP gRPC export support
- A Nest.js adapter built around Nest’s dependency injection system
- Additional performance improvements
- More exporters and integrations
- Better distributed tracing tooling
Performance is extremely important to me.
If you use Corelens and notice:
- performance regressions
- memory issues
- queue bottlenecks
- exporter problems
please reach out or open an issue.
Final Thoughts
Corelens started as an internal solution to a real production problem.
Now it is open source.
You can explore the codebase here:
If you run into issues while using Corelens, feel free to open an issue:
And if you want to contribute, you are absolutely welcome.
I care deeply about:
- performance
- reliability
- developer experience
- maintainability
and I hope Corelens becomes genuinely useful to teams building production Node.js systems.
