
Enhance Debugging and Profiling: Node.js Logging Best Practices with Winston

Updated: Aug 28, 2024



Good logging practices are crucial for monitoring and troubleshooting your Node.js servers. They help you track errors in the application, discover performance optimisation opportunities, and carry out different kinds of analysis on the system (such as in the case of outages or security issues) to make critical product decisions.


Even though logging is an essential aspect of building robust web applications, it's often ignored or glossed over in discussions about development best practices. Knowing what and how to log is tricky because it's often difficult to understand what information you’ll need during troubleshooting.


Since we understand how essential logging is, we need to practice good logging habits. We should accumulate logs that make it easier to diagnose the root cause of problems and solve potential issues before they impact end-users. This article will outline some best practices to follow when writing logs in a Node.js application.



Use a Node.js Logging Library

Node.js developers tend to rely on the runtime's built-in console methods (such as console.log()) to log events, since they offer a familiar API similar to the JavaScript console mechanism provided by web browsers. While console.log() has its uses, it is not an adequate solution for implementing logging in a production application: it lacks support for features and configuration options that are considered essential to a good logging setup. For example, console methods do not support configurable log levels. Although methods like console.warn(), console.error(), and console.debug() exist, they are simply functions that print to the standard output or standard error without recording the severity of the message.


A good logging library provides a robust feature set that makes it easier to centralise, format, and distribute the logs to fit your needs. For example, a typical logging framework will provide various options for where to output log data (such as the terminal, filesystem, or a database), while also supporting the ability to send the logs over HTTP if you want to transport your log entries to a log management service.


There are three major concerns for choosing a suitable logging library: recording, formatting, and storing messages. You need to make sure that your library of choice addresses all three concerns in a satisfactory manner. Another critical consideration for selecting a logging library is performance. Since the logger will be used a lot throughout the codebase, it can harm your application's runtime performance. Therefore, you should also investigate the performance characteristics of a library, and see how it compares to alternatives.


In the Node.js ecosystem, there are several popular options to consider. Most of them offer similar features, but they also have their differences — so it's vital that you try them out yourself and see which one caters to your use case the best:


 
  • Winston — The most popular logging library, with support for multiple transports. This allows you to easily configure your preferred storage location for your logs.

  • Pino — Pino's biggest attraction is its speed. It claims to be up to five times faster than alternatives, in many cases.

  • Bunyan — Another feature-rich logging framework that outputs in JSON by default and provides a CLI tool for viewing your logs.

  • Roarr — A different kind of logger that works in both Node.js and the browser.

For the rest of the article, we'll use Winston to demonstrate the different concepts.
 

This is not necessarily an endorsement of Winston; it's only being used here because it's the most popular logging framework for Node.js.



Implement the Appropriate Log Levels


As an experienced software developer, especially one who's been involved in production application support, you're likely familiar with log levels. These provide a means to differentiate event types in a system, highlighting their importance and context. Proper utilisation of log levels in your application ensures a clear distinction between critical, immediate-action-required events and merely informative ones.

Though different logging systems may name severity levels differently, the underlying concepts remain the same. Here are the most common log levels you're likely to encounter across logging frameworks, listed in descending order of severity:


 
  • FATAL: Denotes a disastrous event that your application can't recover from. Logging at this level often marks the program's end.

  • ERROR: Represents a system error that disrupts a specific operation but not the entire system. Log at this level if a third-party API returns errors, for example.

  • WARN: Signals unusual or undesirable runtime conditions, not necessarily errors. For instance, when a backup data source is used because the primary source is unavailable.

  • INFO: Informational messages only. Log at this level for user-driven or application-specific events, such as the startup or shutdown of a service.

  • DEBUG: Captures diagnostic information useful for troubleshooting.

  • TRACE: Records every possible detail about the application's behaviour during development.

 

By default, the Winston library uses the following log levels, with "error" being the most severe and "silly" the least:

{
  "error": 0,
  "warn": 1,
  "info": 2,
  "http": 3,
  "verbose": 4,
  "debug": 5,
  "silly": 6
}


Should these defaults not meet your needs, you can alter them when initializing a custom logger. For instance, you may use the log levels discussed above.


const { createLogger, format, transports } = require("winston");
 
const logLevels = {
  fatal: 0,
  error: 1,
  warn: 2,
  info: 3,
  debug: 4,
  trace: 5,
};
 
const logger = createLogger({
  levels: logLevels,
  transports: [new transports.Console()],
});

To log a message, simply reference the desired level directly on the custom logger:

logger.info("System launch"); 
// {"message":"System launch","level":"info"}
logger.fatal("A critical failure!"); 
// {"message":"A critical failure!","level":"fatal"}

Winston also allows you to define a level property on the logger and on each transport, which specifies the least severe level that will be logged: messages at that level and more severe ones are recorded, while everything below it is discarded. For instance, you could run your programs at the info level by default, then switch to the debug or trace level when troubleshooting an issue or deploying to a test environment. This setting should ideally be controlled through an environment variable.


const logger = createLogger({
  level: "warn",
  levels: logLevels,
  transports: [new transports.Console({ level: "info" })],
});
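As noted, the level is best controlled through an environment variable rather than hard-coded. A minimal sketch, assuming a LOG_LEVEL variable (the variable name is an arbitrary choice, not a Winston convention):

```javascript
const { createLogger, transports } = require("winston");

const logger = createLogger({
  // Read the minimum level from the environment; fall back to "info"
  // when LOG_LEVEL is unset.
  level: process.env.LOG_LEVEL || "info",
  transports: [new transports.Console()],
});
```

You can then run something like `LOG_LEVEL=debug node app.js` while troubleshooting, without touching the code.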


Adopt Structured Logging


When formulating your log messages, priority should be given to ensuring that your log entries are easily comprehensible by both humans and machines.


One of the fundamental objectives of logging is to facilitate post-mortem debugging, which often involves deciphering log entries to recreate the sequence of events leading to a system event. Logs that are human-readable simplify this arduous task for developers and system administrators. It's equally critical to use a structured format that machines can easily parse, allowing automated log processing (such as for alerts or audits).


JSON is a widely favored format for structured log entries due to its ubiquity and human-readability. It's highly machine-readable and can be effortlessly converted into other formats, even across different programming languages. When employing JSON for logging, it's essential to adopt a standard schema to ensure the semantics of each field are well-defined, thus streamlining the log analysis process.


By default, Winston outputs a JSON string with two fields: "message" and "level". The "message" field contains the text being logged, and the "level" field denotes the log level. Customizing the output is easily achieved using winston.format, which employs logform to implement various formats. For instance, to add a timestamp to each log entry, you can combine the timestamp and json formats as follows:


const { createLogger, format, transports } = require("winston");
 
const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  transports: [new transports.Console({})],
});

This produces log entries in the following format:



{"message":"Connected to DB!","level":"info","timestamp":"2021-07-28T22:35:27.758Z"}
{"message":"Payment received","level":"info","timestamp":"2021-07-28T22:45:27.758Z"}

Be sure to review the logform documentation to understand the predefined formats and learn how to create your own custom formats.
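For example, JSON output can be hard to scan in a local terminal, so during development you might switch to a human-readable layout. A sketch using logform's colorize, timestamp, and printf formats (the exact layout below is just an illustration):

```javascript
const { createLogger, format, transports } = require("winston");

const logger = createLogger({
  format: format.combine(
    format.colorize(),
    format.timestamp(),
    // Render each entry as "<timestamp> [<level>]: <message>"
    format.printf(({ timestamp, level, message }) => {
      return `${timestamp} [${level}]: ${message}`;
    })
  ),
  transports: [new transports.Console()],
});
```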



Create Descriptive Messages


Logs should accurately reflect the events they represent. Each message should be situation-specific and clearly detail the event that took place at that moment. In case of a crisis, your log entries might be the only source of information to help you understand what occurred, making it crucial to perfect this aspect of logging.


Consider the following insufficient log entry indicating a request failure:

Request failed, will retry.

The above message fails to disclose:

  • The specific request that failed

  • The reason behind the failure

  • The interval before retrying the request

While some answers may be inferred from other sources such as other log entries or the code itself, it's more beneficial to enhance the standalone value of the log entry with a more descriptive message:


"POST" request to "https://example.com/api" failed. Response code: "429", response message: "too many requests". Retrying after "60" seconds.

The second message is superior because it provides sufficient information about the failed request, including the status code and response message, and also indicates that the request will be retried after 60 seconds. With equally descriptive messages, deciphering your logs will be much simpler. Other examples of useful log messages include:



Status of task id "1234" changed from "IN_PROGRESS" to "COMPLETED".
SomeMethod() processed "100" records in "35ms".
User registration failed: field "email" is not a valid email address; field "password" is below the minimum of 8 characters.

When composing log messages, ensure you include all pertinent details related to the event without becoming excessively verbose. This will prevent other log readers (possibly your future self) from being overwhelmed with information. Your log messages should also be self-sufficient; don't rely on the content of a previous message to provide context for a subsequent entry.
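One way to keep such messages consistent and self-sufficient is to assemble them from the data already at hand when the event occurs. A small sketch (the helper name and its fields are hypothetical, not part of any library):

```javascript
// Hypothetical helper that assembles a self-contained log message for a
// failed HTTP request out of the details relevant to the event.
function describeFailedRequest({ method, url, status, statusText, retrySeconds }) {
  return (
    `"${method}" request to "${url}" failed. ` +
    `Response code: "${status}", response message: "${statusText}". ` +
    `Retrying after "${retrySeconds}" seconds.`
  );
}

const message = describeFailedRequest({
  method: "POST",
  url: "https://example.com/api",
  status: 429,
  statusText: "too many requests",
  retrySeconds: 60,
});
```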




Include Adequate Context in Logs


Besides providing a descriptive log message, you should also incorporate a suitable amount of context in the log entry. Context enables a quick reconstruction of the actions leading to an event. Include basic information in the log, such as the event's timestamp and the location it happened (or a stack trace for errors). Also, add data points that are relevant to the flow of the operation triggering the event. These data points can be generated at different stages of the operation flow and compiled at the point of logging.


In a billing service scenario, log entries generated by the system could include several data points:

  • Session identifiers

  • Usernames and IDs

  • Product or transaction identifiers

  • The current page the user is on

Each data point allows you to trace a user's journey through the entire checkout process. If a significant event occurs, the available data will be automatically appended to the log output, facilitating identification of:

  • The circumstances leading to the event

  • The location of the event

  • The transaction and product IDs that triggered the event

These data points also allow you to filter log entries based on a common identifier, such as the user ID or product ID.


Winston enables the addition of global metadata (like the component or service where an event occurred) to every log entry generated. This information is valuable for troubleshooting issues in complex applications as it guides you directly to the point of failure.


You can configure this when creating the logger for the component or service:

const logger = createLogger({
  format: format.combine(format.timestamp(), format.json()),
  defaultMeta: {
    service: "billing-service",
  },
  transports: [new transports.Console({})],
});

A "service" field will be included in all logs created by the logger object:

{"message":"Order \"1234\" was processed successfully","level":"info","service":"billing-service","timestamp":"2021-07-29T10:56:14.651Z"}

To add metadata to individual entries, you need to create a context or metadata object that can be passed around throughout the operation's flow, so that the data is accessible at logging points. Child loggers can be used to add metadata at logging points:

const ctx = {
  userId: "090121",
  productId: "creme-de-la-creme",
};
 
logger.child({ context: ctx }).info('Order "1234" was processed successfully');
// {"context":{"userId":"090121","productId":"creme-de-la-creme"},"message":"Order \"1234\" was processed successfully","level":"info","service":"billing-service","timestamp":"2021-07-29T12:20:13.249Z"}


Avoid Logging Sensitive Information


Whether you're in a heavily regulated industry (like healthcare or finance) or not, it's critical to avoid incorporating sensitive information in your logs.


Sensitive information comprises social security numbers, addresses, passwords, credit card details, access tokens, etc. As log messages are often stored as plain text, such data would be exposed if the logs are accessed by unauthorized individuals. Ensure you're not violating regulations like GDPR by recording specific pieces of information.


To avoid inadvertently exposing sensitive data in your logs, limit the system components that handle this data. For example, credit card details should be exclusive to the billing component of your system, and sensitive data should be removed from URLs where possible.


Although not entirely fail-safe, you can also use a blocklist to prevent certain fields from appearing in the logs.
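A sketch of such a blocklist, written as a plain function (the field names are illustrative and should be adapted to your own data model); with Winston, you could wrap a function like this with winston.format() so it runs on every entry before output:

```javascript
// Fields that must never reach the logs; adjust to your data model.
const BLOCKED_FIELDS = ["password", "creditCardNumber", "ssn", "accessToken"];

// Returns a copy of the log entry with any blocked fields masked.
function redact(info) {
  const clean = { ...info };
  for (const field of BLOCKED_FIELDS) {
    if (field in clean) {
      clean[field] = "[REDACTED]";
    }
  }
  return clean;
}

const entry = redact({
  message: "User signed in",
  level: "info",
  userId: "090121",
  password: "hunter2",
});
// entry.password is masked; the other fields are untouched
```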




Log for Auditing and Profiling Reasons


Although logs are primarily used to diagnose issues and identify the root cause of bugs, they can also be invaluable for auditing or profiling a system or even generating interesting statistics about system behaviour.


For instance, you can log details about user activities on the system (such as user sign-ins, the resources they created or accessed, etc.). In the absence of specialised tools, you can use logging to profile your code by reporting how long an operation took or how many times a function was executed. The insights derived from this data can assist you in improving the performance of the hotspots in your application.


Winston offers a simple profiling mechanism you can use:

// Calling profile() with the same id a second time ends the timer and
// logs the duration (at the "info" level by default):
logger.profile("test");
setTimeout(() => {
  logger.profile("test");
}, 1000);
 
// Alternatively, startTimer() returns an object corresponding to a
// specific timing; calling done() ends the timer and logs the duration:
const profiler = logger.startTimer();
setTimeout(() => {
  profiler.done({ message: "Logging message" });
}, 1000);


A durationMs field will be included in the output:

{"message":"Logging message","level":"info","durationMs":1001,"timestamp":"2021-07-29T14:17:55.097Z"}


Automatically Log Uncaught Exceptions and Unhandled Promise Rejections


In the event of an uncaught exception or unhandled promise rejection, the standard practice is to crash the program. A process manager like PM2 can be used to automatically restart the process and restore the program to a clean state.


To understand why such an event occurred, it's crucial to log the details of the exception or promise rejection before exiting. Winston provides handlers for both situations, which can be configured on a logger instance:


const logger = createLogger({
  transports: [new transports.File({ filename: "file.log" })],
  exceptionHandlers: [new transports.File({ filename: "exceptions.log" })],
  rejectionHandlers: [new transports.File({ filename: "rejections.log" })],
});

In the above example, uncaught exceptions are logged into an exceptions.log file, while unhandled rejections are logged into a rejections.log file. Entries will automatically include the full stack trace as well as information about process arguments and memory usage associated with the exception, providing you with all the details you need to identify the root of the problem. For more details on how to configure Winston's unhandled exception handler, you can refer to their official documentation.
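Note that, by default, Winston exits the process after logging an uncaught exception, which matches the crash-and-restart practice described above. If you need different behaviour, the documented exitOnError option controls this:

```javascript
const { createLogger, transports } = require("winston");

const logger = createLogger({
  transports: [new transports.File({ filename: "file.log" })],
  exceptionHandlers: [new transports.File({ filename: "exceptions.log" })],
  // Set to false to keep the process alive after an uncaught exception
  // has been logged (the default is true).
  exitOnError: false,
});
```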


In the dynamic landscape of technology, it's common to face unique challenges. Recently, one of our esteemed clients approached us with concerns about enhancing their logging capabilities for improved debugging and profiling in their Node.js applications. They sought to utilise logs not only for issue diagnosis but also to extract meaningful insights about system behaviour.


At Atsky, we believe that every challenge is an opportunity to learn and grow. As we worked closely with the client, we embarked on an enriching journey to elevate their logging practices using the powerful capabilities of the Winston library in their Node.js environment.


This blog post is a distilled essence of our recent experience. It enumerates the best practices we applied, the structured approach we adopted, and the lessons we learned, all aimed at maximising the potential of logging with Winston. We're sharing these insights in the hope that they will aid others in similar endeavors, helping to make debugging, profiling, and auditing processes smoother and more efficient.


We hope you find these tips useful and invite you to share your thoughts and experiences with us.
