📊 Observability & Performance

Measure, monitor, and optimize Node.js performance.

📉 Logging with Winston & Pino

Structured logging is an essential part of building observability into your Node.js applications. Today, we'll explore two popular logging libraries: Winston and Pino.

💡 Getting Started with Winston

Winston is one of the most popular logging libraries for Node.js. It provides a simple and flexible way to log messages with different levels.

const winston = require('winston');

// Create a logger instance
const logger = winston.createLogger({
  level: 'info',
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'logs.log' })
  ]
});

💡 Key Features of Winston

  • Multiple transport support (Console, File, HTTP, etc.)
  • Different log levels (error, warn, info, debug)
  • Custom formatters and transports
  • Easy to configure and extend

💡 Logging Levels in Winston

Winston uses a standard set of log levels, each with its own severity. Here's how they work:

logger.error('Error occurred');
logger.warn('Warning message');
logger.info('Informational message');
logger.debug('Debugging information');

💡 Log Rotation with Winston

Keeping your logs manageable is crucial. Here's how to set up log rotation:

const { transports } = require('winston');

new transports.File({
  filename: 'logs.log',
  maxsize: 5000000, // rotate once the file reaches ~5 MB
  maxFiles: 3,      // keep at most 3 rotated files
  tailable: true    // always write to logs.log; older logs shift to numbered files
})

💡 Getting Started with Pino

Pino is another excellent logging library known for its performance and simplicity. It's particularly popular in production environments.

const pino = require('pino');

// Pino v7+ routes logs through transports rather than transport classes
const transport = pino.transport({
  targets: [
    { target: 'pino/file', options: { destination: 1 } },         // stdout
    { target: 'pino/file', options: { destination: 'logs.log' } } // file
  ]
});

const logger = pino(transport);

💡 Why Choose Pino?

  • High performance with low overhead
  • Structured logging by default
  • Built-in support for JSON formatting
  • Easy integration with monitoring tools
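To make "structured logging by default" concrete: Pino emits one JSON object per line. Here is a stdlib-only sketch of that shape; `formatRecord` is an illustrative helper, and real Pino records also include fields like `pid` and `hostname`.

```javascript
// Sketch of structured (JSON-lines) logging, the format Pino emits by default.
// Uses only the Node standard library; formatRecord is an illustrative helper.
function formatRecord(msg, extra = {}) {
  return JSON.stringify({
    level: 30,        // Pino's numeric code for "info"
    time: Date.now(), // epoch milliseconds
    msg,
    ...extra
  });
}

// One JSON object per line makes logs easy for aggregators to parse
process.stdout.write(formatRecord('request completed', { route: '/users', ms: 12 }) + '\n');
```

Because every record is machine-parseable JSON, downstream tools can filter and aggregate on fields instead of grepping free-form text.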

💡 Pino Logging Levels and Transport Options

Pino offers similar functionality to Winston but with a simpler API:

logger.error('Error occurred');
logger.warn('Warning message');
logger.info('Informational message');
logger.debug('Debugging information');

💡 Comparison: Winston vs. Pino

When choosing between Winston and Pino, consider your use case:

  • Use Winston for its flexibility and ecosystem of plugins.
  • Use Pino for high-performance logging needs.

📈 Metrics & Monitoring

Welcome to Metrics & Monitoring! In this chapter, you'll learn how to use Prometheus, Grafana, and OpenTelemetry to monitor your Node.js applications effectively. You'll discover how to expose metrics, trace requests, and visualize performance data.

💡 What is Monitoring?

Monitoring involves observing your application's behavior and performance in real-time. It helps you detect issues early, optimize performance, and ensure availability.

Key Concepts

  • Metrics: Numerical values that describe your system's state (e.g., CPU usage, request latency)
  • Logging: Detailed records of events and errors for post-mortem analysis
  • Tracing: Tracking requests as they flow through your system to identify bottlenecks
  • Monitoring Tools: Software used to collect, analyze, and visualize data
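To illustrate the first concept, Prometheus-style metrics are exposed as plain text in a well-known exposition format. The sketch below uses only the standard library; `renderCounter` is an illustrative helper, not a library API.

```javascript
// Sketch: rendering a counter in the Prometheus text exposition format.
// Stdlib only; renderCounter is an illustrative helper, not a library API.
function renderCounter(name, help, value) {
  return [
    `# HELP ${name} ${help}`,
    `# TYPE ${name} counter`,
    `${name} ${value}`
  ].join('\n');
}

let requestsTotal = 0;
requestsTotal += 1; // incremented once per handled request

console.log(renderCounter('http_requests_total', 'Total HTTP requests processed', requestsTotal));
```

In practice a library such as prom-client generates this text for you; seeing the raw format clarifies what a scrape endpoint actually serves.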

💡 Setting Up Prometheus

const express = require('express');
const client = require('prom-client');

// Create a new counter metric (auto-registered in the default registry)
const requestsTotal = new client.Counter({
  name: 'http_requests_total',
  help: 'Total HTTP requests processed'
});

const app = express();

app.get('/', (req, res) => {
  requestsTotal.inc(); // Increment the counter
  res.send('Hello World!');
});

// Expose a Prometheus scrape endpoint
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});

Visualizing with Grafana

Grafana is a powerful visualization tool that allows you to create dashboards for your metrics. You can connect it to Prometheus as a data source.

 grafana.ini
[server]
root_url = %(protocol)s://%(domain)s:%(http_port)s/grafana

Data sources are added through the Grafana UI or a provisioning file, not grafana.ini:

 provisioning/datasources/prometheus.yaml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://localhost:9090

💡 Getting Started with OpenTelemetry

const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { ConsoleSpanExporter, SimpleSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');
const { ExpressInstrumentation } = require('@opentelemetry/instrumentation-express');

// Initialize the tracer provider and print spans to the console
const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();

registerInstrumentations({
  instrumentations: [
    new HttpInstrumentation(),
    new ExpressInstrumentation()
  ]
});

Best Practices for Monitoring

  • Always monitor response times, error rates, and throughput
  • Set up alerting thresholds to notify you of critical issues
  • Use sampling strategies for large-scale applications
  • Keep your monitoring tools performant by optimizing queries

Common Mistakes to Avoid

  • Don't ignore logging best practices alongside monitoring
  • Avoid collecting too much data (keep it relevant)
  • Don't forget to monitor your monitoring tools
  • Avoid using default thresholds without understanding your application's behavior

💡 Real-World Applications

In production environments, monitoring is essential for maintaining performance and availability. For example:

  • Track API request latencies
  • Monitor database query times
  • Observe memory usage patterns
  • Alert on unexpected error spikes

🚦 Performance Tuning & Caching

Performance tuning is a crucial skill for optimizing Node.js applications. In this chapter, we'll explore techniques to profile CPU/memory usage, analyze flame graphs, and implement effective Redis-based caching strategies.

💡 Why Performance Matters

  • Better user experience through faster response times.
  • Reduced infrastructure costs by optimizing resource usage.
  • Improved scalability to handle higher workloads.

Profiling Tools in Node.js

  • Use --inspect, --prof, or --cpu-prof for CPU profiling.
  • Generate flame graphs with tools like speedscope.
# Generate a CPU profile (Node 12+)
node --cpu-prof --cpu-prof-name profile.cpuprofile my-app.js

# Open the profile in speedscope
speedscope profile.cpuprofile

💡 Flame Graphs: Your New Best Friend

Flame graphs are visual representations of performance data that show the relationship between different functions in your code. They help identify bottlenecks and optimize hotspots.

# Install speedscope and open a profile
npm install -g speedscope
speedscope profile.cpuprofile

Identifying Bottlenecks

  • Look for hot paths in your application.
  • Analyze memory allocation patterns with the heap profiler.
  • Check for event loop delays using perf_hooks.monitorEventLoopDelay() or trace viewers like chrome://tracing.

💡 Caching Strategies

Effective caching can drastically improve application performance. Here are common strategies:

  • In-memory cache: Fastest option but limited by memory constraints.
  • Redis-based cache: Scalable and widely used in production environments.
  • HTTP caching: Offload work from your application using browser and proxy caches.
// In-memory cache example
const cache = new Map();

function getCachedData(key) {
  if (cache.has(key)) {
    return cache.get(key);
  }
  const data = fetchFromDatabase(key);
  cache.set(key, data);
  return data;
}

Implementing Redis-Based Caching

  • Install Redis client: npm install redis.
  • Configure caching for frequently accessed data.
  • Use appropriate Redis data structures like Strings, Hashes, and Sorted Sets.
// Redis-based cache example (node-redis v4 API)
const { createClient } = require('redis');
const client = createClient();

async function getCachedData(key) {
  try {
    // v4 clients must be connected before issuing commands
    if (!client.isOpen) await client.connect();

    const value = await client.get(key);
    if (value !== null) {
      return JSON.parse(value);
    }
    const data = await fetchFromDatabase(key);
    // Cache for one hour
    await client.setEx(key, 3600, JSON.stringify(data));
    return data;
  } catch (error) {
    console.error('Redis error:', error);
    throw error;
  }
}

Best Practices for Caching

  • Always cache frequently accessed data.
  • Set appropriate cache expiration times.
  • Implement cache invalidation strategies.
  • Monitor cache performance and hit rates.
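The last point, monitoring hit rates, can be sketched with plain counters; `getWithStats` and `hitRate` are illustrative names, not a library API.

```javascript
// Sketch: tracking cache hit rate with plain counters.
// Stdlib only; getWithStats and hitRate are illustrative names.
const cache = new Map();
let hits = 0;
let misses = 0;

function getWithStats(key, loader) {
  if (cache.has(key)) {
    hits += 1;
    return cache.get(key);
  }
  misses += 1;
  const value = loader(key);
  cache.set(key, value);
  return value;
}

function hitRate() {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total;
}

getWithStats('user:1', () => ({ name: 'Ada' })); // miss: loads and caches
getWithStats('user:1', () => ({ name: 'Ada' })); // hit: served from cache
console.log(hitRate()); // prints 0.5
```

A persistently low hit rate suggests the wrong data is being cached, or expiration is too aggressive for the access pattern.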

Common Pitfalls to Avoid

  • Don't cache everything - analyze access patterns first.
  • Avoid over-relying on in-memory caches without eviction policies.
  • Don't ignore cache expiration - stale data can cause issues.
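The eviction pitfall can be addressed with even a minimal time-to-live policy. Here is a stdlib-only sketch; `TTLCache` is an illustrative name, not a published library.

```javascript
// Sketch: a tiny in-memory cache with TTL-based expiration, addressing the
// "no eviction policy" pitfall. TTLCache is an illustrative name.
class TTLCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map(); // key -> { value, expiresAt }
  }

  set(key, value) {
    this.store.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // evict stale entries lazily, on read
      return undefined;
    }
    return entry.value;
  }
}

const sessions = new TTLCache(60 * 1000); // entries live for one minute
sessions.set('user:1', { name: 'Ada' });
console.log(sessions.get('user:1')); // { name: 'Ada' } while still fresh
```

Lazy eviction on read keeps the sketch simple; a production cache would also bound total size (for example with an LRU policy), as noted in the pitfalls above.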

Quiz

Question 1 of 15

Which logging library is known for its high performance and simplicity?

  • Winston
  • Pino
  • Console
  • File