Measure, monitor, and optimize Node.js performance.
Structured logging is an essential part of building observability into your Node.js applications. Today, we'll explore two popular logging libraries: Winston and Pino.
Winston is one of the most popular logging libraries for Node.js. It provides a simple and flexible way to log messages with different levels.
const winston = require('winston');

// Create a logger instance
const logger = winston.createLogger({
  level: 'info',
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'logs.log' })
  ]
});
Winston uses a standard set of log levels, each with its own severity: error, warn, info, http, verbose, debug, and silly. Messages below the configured level are discarded, so with the logger above set to 'info', the debug call below produces no output:
logger.error('Error occurred');
logger.warn('Warning message');
logger.info('Informational message');
logger.debug('Debugging information');
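Each log call can also carry structured metadata, which a JSON-format logger serializes alongside the message. A dependency-free sketch of what such a log line looks like (the field layout mirrors Winston's JSON defaults; the helper name is illustrative):

```javascript
// Build a structured log entry the way a JSON logger would
function toLogLine(level, message, meta = {}) {
  return JSON.stringify({ level, message, ...meta });
}

const line = toLogLine('info', 'User logged in', { userId: 42 });
console.log(line); // {"level":"info","message":"User logged in","userId":42}
```

Because each field is a separate JSON key, log aggregators can filter on userId without parsing free-form message text.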
Keeping your logs manageable is crucial. Winston's file transport supports size-based rotation out of the box:

const { transports } = require('winston');

new transports.File({
  filename: 'logs.log',
  maxsize: 5000000, // rotate once the file reaches ~5 MB
  maxFiles: 3,      // keep at most 3 rotated files
  tailable: true    // keep writing to logs.log, shifting older files down
});
Pino is another excellent logging library known for its performance and simplicity. It's particularly popular in production environments.
const pino = require('pino');

// Create a logger instance that writes to stdout and a file
// (pino.transport and the built-in pino/file target require Pino v7+)
const transport = pino.transport({
  targets: [
    { target: 'pino/file', options: { destination: 1 } },         // fd 1 = stdout
    { target: 'pino/file', options: { destination: 'logs.log' } }
  ]
});
const logger = pino(transport);
Pino offers similar functionality to Winston but with a simpler API:
logger.error('Error occurred');
logger.warn('Warning message');
logger.info('Informational message');
logger.debug('Debugging information');
When choosing between Winston and Pino, consider your use case:

- Use Winston for its flexibility and ecosystem of plugins.
- Use Pino for high-performance logging needs.
Welcome to Metrics & Monitoring! In this chapter, you'll learn how to use Prometheus, Grafana, and OpenTelemetry to monitor your Node.js applications effectively. You'll discover how to expose metrics, trace requests, and visualize performance data.
Monitoring involves observing your application's behavior and performance in real-time. It helps you detect issues early, optimize performance, and ensure availability.
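Prometheus scrapes metrics over HTTP in a simple plain-text exposition format, so it helps to see what a /metrics endpoint actually returns. A dependency-free sketch of rendering one counter in that format (the helper name is illustrative):

```javascript
// Render a counter in the Prometheus text exposition format
function renderCounter(name, help, value) {
  return [
    `# HELP ${name} ${help}`,
    `# TYPE ${name} counter`,
    `${name} ${value}`
  ].join('\n');
}

const body = renderCounter('http_requests_total', 'Total HTTP requests processed', 7);
console.log(body);
// # HELP http_requests_total Total HTTP requests processed
// # TYPE http_requests_total counter
// http_requests_total 7
```

In practice you never hand-roll this; a client library keeps the counters and renders the format for you, as shown next.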
The de facto standard Prometheus client library for Node.js is prom-client:

const express = require('express');
const client = require('prom-client');

// Create a new counter metric
const requestsTotal = new client.Counter({
  name: 'http_requests_total',
  help: 'Total HTTP requests processed'
});

const app = express();

app.get('/', (req, res) => {
  requestsTotal.inc(); // Increment the counter
  res.send('Hello World!');
});

// Expose a /metrics endpoint for Prometheus to scrape
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(3000);
Grafana is a powerful visualization tool that allows you to create dashboards for your metrics. You can connect it to Prometheus as a data source.
grafana.ini configures the server itself; data sources are typically provisioned from a separate YAML file:

# grafana.ini
[server]
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana

# provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://localhost:9090
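Dashboard panels then query Prometheus with PromQL. For instance, the per-second request rate over the last five minutes for the counter exposed earlier:

```
rate(http_requests_total[5m])
```

Rates over a window are usually more useful than raw counter values, since counters only ever grow (and reset on process restart).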
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { SimpleSpanProcessor, ConsoleSpanExporter } = require('@opentelemetry/sdk-trace-base');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');
const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');

// Initialize the tracer provider and print finished spans to the console
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();

// Automatically instrument HTTP and Express
registerInstrumentations({
  instrumentations: [
    new HttpInstrumentation(),
    new ExpressInstrumentation()
  ]
});
In production environments, monitoring is essential for maintaining performance and availability. For example:

- Track API request latencies
- Monitor database query times
- Observe memory usage patterns
- Alert on unexpected error spikes
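Request latencies are rarely useful as averages; percentiles such as p95 describe what slow requests actually experience. A minimal, dependency-free sketch of percentile computation over recorded durations (nearest-rank method; real systems use histograms to avoid storing every sample):

```javascript
// Compute a percentile of recorded request durations (nearest-rank method)
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// Illustrative latencies in milliseconds: one slow outlier
const latenciesMs = [12, 15, 11, 240, 14, 13, 16, 18, 12, 17];
console.log(percentile(latenciesMs, 50)); // 14
console.log(percentile(latenciesMs, 95)); // 240
```

Note how the median stays low while p95 exposes the outlier, which is exactly why latency alerts are usually set on high percentiles.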
Performance tuning is a crucial skill for optimizing Node.js applications. In this chapter, we'll explore techniques to profile CPU/memory usage, analyze flame graphs, and implement effective Redis-based caching strategies.
Node.js has built-in profiling support: --inspect attaches the Chrome DevTools profiler, while --prof and --cpu-prof record CPU profiles from the command line.

# Generate a CPU profile (--cpu-prof writes a CPU.*.cpuprofile file)
node --cpu-prof my-app.js
Flame graphs are visual representations of performance data that show the relationship between different functions in your code. They help identify bottlenecks and optimize hotspots.
# Install speedscope and open the generated profile
npm install -g speedscope
speedscope CPU.*.cpuprofile
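Under the hood, a flame graph is built from stack samples: identical call stacks are collapsed and counted (the "folded" format popularized by Brendan Gregg's flamegraph tools), and each frame's width reflects its count. A dependency-free sketch of that aggregation step (the sample stacks are illustrative):

```javascript
// Collapse raw stack samples into folded-stack counts
function foldStacks(samples) {
  const counts = new Map();
  for (const stack of samples) {
    const key = stack.join(';'); // frames joined root-to-leaf
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  return counts;
}

const samples = [
  ['main', 'handleRequest', 'parseBody'],
  ['main', 'handleRequest', 'parseBody'],
  ['main', 'handleRequest', 'queryDb']
];
const folded = foldStacks(samples);
console.log(folded.get('main;handleRequest;parseBody')); // 2
console.log(folded.get('main;handleRequest;queryDb'));   // 1
```

A stack that appears in many samples was running often, so it shows up as a wide bar, and that is where optimization effort pays off.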
Effective caching can drastically improve application performance. Here are common strategies:
// In-memory cache example (fetchFromDatabase is assumed to be synchronous here)
const cache = new Map();

function getCachedData(key) {
  if (cache.has(key)) {
    return cache.get(key); // cache hit
  }
  const data = fetchFromDatabase(key); // cache miss: load and store
  cache.set(key, data);
  return data;
}
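The Map above never evicts anything, so it grows without bound and can serve stale data forever. A common refinement is to store an expiry timestamp with each entry. A simple TTL sketch (the helper names are illustrative; production code would also cap the cache size):

```javascript
// In-memory cache with a per-entry time-to-live
const cache = new Map();

function setWithTtl(key, value, ttlMs) {
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
}

function getIfFresh(key) {
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expiresAt) {
    cache.delete(key); // expired: evict lazily on read
    return undefined;
  }
  return entry.value;
}

setWithTtl('user:1', { name: 'Ada' }, 60000);
console.log(getIfFresh('user:1')); // { name: 'Ada' }
```

Lazy eviction on read keeps the code simple; the trade-off is that entries nobody reads again linger until the process restarts.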
npm install redis

// Redis-based cache example (node-redis v4: the client must connect before use)
const { createClient } = require('redis');

const client = createClient();
client.connect(); // establishes the connection; await this during app startup

async function getCachedData(key) {
  try {
    const value = await client.get(key);
    if (value !== null) {
      return JSON.parse(value); // cache hit
    }
    const data = await fetchFromDatabase(key);
    await client.setEx(key, 3600, JSON.stringify(data)); // store with a 1-hour TTL
    return data;
  } catch (error) {
    console.error('Redis error:', error);
    throw error;
  }
}
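When a hot key expires, many concurrent requests can miss the cache at once and hammer the database (a cache stampede). One mitigation is to deduplicate in-flight loads so only the first miss triggers the fetch. A dependency-free sketch of the pattern (dedupedLoad and loadData are illustrative names; loadData stands in for the database call):

```javascript
// Deduplicate concurrent loads for the same key
const inFlight = new Map();

async function dedupedLoad(key, loadData) {
  if (inFlight.has(key)) {
    return inFlight.get(key); // reuse the pending promise
  }
  const promise = loadData(key).finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}

// Usage: two concurrent calls share one underlying fetch
let calls = 0;
const load = async (key) => { calls++; return `value-for-${key}`; };
Promise.all([dedupedLoad('k', load), dedupedLoad('k', load)])
  .then(([a, b]) => console.log(a === b, calls)); // true 1
```

The map holds promises rather than values, so callers that arrive while a fetch is in progress simply await the same result instead of starting their own.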