Logging in Python: What Every Developer Should Know
Python logging is a foundational concept in building reliable and maintainable software. When developing applications, it is essential to keep track of events, errors, and operational data. Python provides a built-in module called logging, which is designed to offer flexible, granular control over the logging process. This tutorial introduces the essentials of Python logging, exploring its features, configuration, levels, and usage in software development.
Logging allows developers to capture program behavior, detect issues, and monitor the application in both development and production environments. With appropriate configuration, logs can be directed to the console, files, or even remote servers. Python’s logging module simplifies this process, providing methods to create and configure loggers, set levels, define formats, and more.
Logging in Python refers to the process of recording information about a program’s execution. This includes general events, error messages, warning notifications, and more. The logging module enables this by offering various logging functions and configuration tools.
Logging is not only helpful during the development phase but also crucial for debugging and monitoring deployed applications. It allows developers to analyze the flow of execution and investigate issues post-deployment without interrupting the application.
To start using logging in Python, the module must first be imported. Once imported, a logger object can be created. This object is responsible for managing the creation and dispatching of log messages.
There are three basic steps involved in using Python logging effectively:
1. Create a logger. The logger is the core component that handles the logging process; it acts as the interface through which log messages are created and managed.
2. Configure the logger. Loggers can be configured to determine which messages to log, how to format them, and where to output them. This configuration can be done in Python code or through external configuration files.
3. Log messages. Once configured, the logger object can be used to log messages at different severity levels. These messages are then processed and routed according to the configuration settings.
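The three steps above can be sketched as follows (the logger name and format string are illustrative):

```python
import logging

# Step 1: create (or retrieve) a logger object
logger = logging.getLogger("demo")

# Step 2: configure it -- level, handler, and format
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Step 3: log messages at the appropriate severity
logger.info("Application started")
```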
Logging levels in Python are predefined categories that indicate the severity of a message. These levels help in filtering log messages based on importance.
DEBUG: Provides detailed information, typically of interest only when diagnosing problems.
INFO: Confirms that things are working as expected.
WARNING: Indicates that something unexpected happened, or that some problem may occur in the near future, while the software is still working as expected.
ERROR: Used when a more serious problem occurs and the program may be unable to perform some function.
CRITICAL: Represents a very serious error that may prevent the program from continuing to run.
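Each level maps to an increasing numeric value, which is what the filtering machinery actually compares. A quick way to see them:

```python
import logging

# The numeric values determine filtering: a logger set to WARNING (30)
# discards DEBUG (10) and INFO (20) records.
for name in ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"):
    print(name, getattr(logging, name))
```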
The basicConfig() function is a simple way to configure the logging system. By default, it logs to the console and uses the WARNING level. This function can be customized to change output destination, logging level, format, and more.
import logging
logging.basicConfig(filename='example.log', level=logging.DEBUG)
This example configures logging to write messages to a file named ‘example.log’ and sets the logging level to DEBUG, meaning all messages at this level and higher will be logged.
filename: Specifies the file to which logs will be written.
filemode: Defines the mode in which the log file is opened. The default is append ('a'), but it can be changed to write ('w').
format: Determines the layout of log messages. The default format includes the severity level and the message.
datefmt: Specifies the format for timestamps in log messages.
level: Sets the minimum severity level of messages to be logged. Messages below this level are ignored.
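Putting those parameters together in a single call (the filename and date format here are illustrative):

```python
import logging

logging.basicConfig(
    filename="example.log",   # write records to this file
    filemode="w",             # overwrite on each run instead of appending
    format="%(asctime)s - %(levelname)s - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    level=logging.DEBUG,      # log everything at DEBUG and above
)
logging.debug("This message goes to example.log")
```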
The logging module in Python offers extensive capabilities for application monitoring. It allows messages to be logged to different destinations and in various formats.
This module supports multiple logging levels, enabling you to manage the verbosity of log output. Whether it’s for quick debugging during development or for extensive logging in production, the module is adaptable.
Logs can be directed to several outputs: the console (standard output or standard error), local files, system logs, or remote destinations such as email servers and HTTP endpoints.
Formatting enhances readability and utility. You can include timestamps, log levels, message contents, and other relevant metadata in each log entry.
Custom handlers enable further flexibility, such as logging to databases or integrating with external monitoring systems. Python allows the creation of these handlers using built-in or custom classes.
Basic configuration is ideal for simple use cases. It typically involves setting the log level and output destination using basicConfig().
Advanced configuration provides more control. It supports multiple handlers, customized formats, and dynamic configuration from external files like JSON, YAML, or INI.
In advanced setups, developers often define different loggers for different modules or components, assign specific handlers, and apply filters for more refined control.
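A common pattern in such setups is one named logger per module, with names forming a dotted hierarchy (the names here are illustrative):

```python
import logging

# Each module gets its own named logger; names form a dotted hierarchy,
# so "myapp.db" is a child of "myapp" and inherits its configuration.
app_logger = logging.getLogger("myapp")
db_logger = logging.getLogger("myapp.db")

app_logger.setLevel(logging.INFO)
# Child loggers delegate to their parent unless configured otherwise.
print(db_logger.getEffectiveLevel() == logging.INFO)
```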
To achieve consistent and readable logs, formatting is essential. Although external libraries such as log4j are common in other languages, Python's logging module includes powerful formatting capabilities natively.
You can specify formats using format strings. These can include attributes such as %(asctime)s (timestamp), %(name)s (logger name), %(levelname)s (severity), and %(message)s (the log text):
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(message)s')
Logging variable data is a practical way to monitor dynamic behavior in applications. Variables can be included in log messages using string formatting.
value = 42
logging.debug('The current value is %d', value)
This method is preferred because it defers string formatting until it is confirmed that the message will be logged.
Variable data logging is especially useful for performance monitoring, debugging complex logic, and tracking state changes over time.
Stack traces provide a snapshot of the program’s call stack at a given moment, particularly useful during exceptions.
Python allows capturing stack traces directly within the logging system using the exc_info parameter.
try:
    result = 10 / 0
except ZeroDivisionError:
    logging.error("An error occurred", exc_info=True)
This will include the full stack trace in the log output, aiding in diagnosing and resolving issues.
Logging can be implemented using classes or functions, depending on the application’s complexity and modularity.
In class-based logging, a logger is instantiated as part of a class. This is ideal for larger applications where modularity is key.
For simpler scripts or smaller programs, logging directly within functions can be more straightforward.
Both methods utilize the same underlying logging principles and can be mixed as needed.
Creates and manages loggers, which are entry points for logging.
Sends log records to appropriate destinations like files, streams, or remote servers.
Adds additional filtering logic to determine whether a log record should be processed.
Defines the layout of the log message output.
These classes can be extended to build highly customized and robust logging systems.
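A minimal sketch wiring all four components together explicitly (the filter class and logger name are illustrative):

```python
import logging

class SecretFilter(logging.Filter):
    """Drop any record whose message mentions 'secret'."""
    def filter(self, record):
        return "secret" not in record.getMessage()

logger = logging.getLogger("wired")
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()          # destination: stderr by default
handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
handler.addFilter(SecretFilter())          # filter applied at the handler
logger.addHandler(handler)

logger.info("visible message")
logger.info("this secret line is suppressed")
```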
Python’s logging module is a comprehensive and adaptable tool for managing application logs. From basic setups using basicConfig() to complex, multi-handler configurations, it supports a wide range of use cases. Understanding how to effectively use logging can greatly improve your ability to develop, debug, and maintain high-quality software. In the next part, we will explore advanced configurations, handlers, filters, and custom logging techniques.
Advanced logging configuration allows developers to implement more refined and flexible logging systems. While basic configuration is useful for small projects, larger applications often require multiple loggers, handlers, and formatters working together to provide comprehensive logging capabilities. Python’s logging module supports all of these through configuration dictionaries or files.
One of the most powerful features of Python logging is the ability to define configuration externally. This can be done using dictionary-based configuration or configuration files like INI, JSON, or YAML.
Python supports configuration through a standard INI-style file. The fileConfig() function reads the configuration from the file.
[loggers]
keys=root
[handlers]
keys=consoleHandler
[formatters]
keys=simpleFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler
[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)
[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
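Assuming the configuration above is saved as logging.ini, it can be loaded with fileConfig(). This sketch writes the file inline so it is self-contained:

```python
import logging.config

INI = """\
[loggers]
keys=root

[handlers]
keys=consoleHandler

[formatters]
keys=simpleFormatter

[logger_root]
level=DEBUG
handlers=consoleHandler

[handler_consoleHandler]
class=StreamHandler
level=DEBUG
formatter=simpleFormatter
args=(sys.stdout,)

[formatter_simpleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
"""

# Write the configuration shown above to a file, then load it.
with open("logging.ini", "w") as f:
    f.write(INI)

logging.config.fileConfig("logging.ini", disable_existing_loggers=False)
logging.getLogger(__name__).debug("Configured from logging.ini")
```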
Using a JSON file allows for more structured configuration. The dictConfig() function from the logging.config module reads the configuration.
{
  "version": 1,
  "formatters": {
    "detailed": {
      "format": "%(asctime)s %(levelname)s %(message)s"
    }
  },
  "handlers": {
    "console": {
      "class": "logging.StreamHandler",
      "formatter": "detailed",
      "level": "DEBUG",
      "stream": "ext://sys.stdout"
    }
  },
  "root": {
    "level": "DEBUG",
    "handlers": ["console"]
  }
}
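A sketch of loading that structure from a JSON file with dictConfig(); the filename logging.json is an assumption, and the config is written inline so the example runs standalone:

```python
import json
import logging.config

CONFIG = {
    "version": 1,
    "formatters": {"detailed": {"format": "%(asctime)s %(levelname)s %(message)s"}},
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "detailed",
            "level": "DEBUG",
            "stream": "ext://sys.stdout",
        }
    },
    "root": {"level": "DEBUG", "handlers": ["console"]},
}

# Round-trip through a file to mimic loading an external logging.json.
with open("logging.json", "w") as f:
    json.dump(CONFIG, f)

with open("logging.json") as f:
    logging.config.dictConfig(json.load(f))

logging.getLogger(__name__).debug("Configured from logging.json")
```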
Instead of using external files, dictionary configuration can be embedded directly in the code.
import logging.config

LOGGING_CONFIG = {
    'version': 1,
    'formatters': {
        'detailed': {
            'format': '%(asctime)s %(levelname)s %(message)s'
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'detailed',
            'level': 'DEBUG',
            'stream': 'ext://sys.stdout'
        },
    },
    'root': {
        'level': 'DEBUG',
        'handlers': ['console']
    },
}

logging.config.dictConfig(LOGGING_CONFIG)
Handlers are responsible for dispatching the log messages to the appropriate destination. Each handler instance can be associated with a different output target, such as files, streams, or remote destinations.
StreamHandler: Sends log output to streams like sys.stdout or sys.stderr. Useful for console logging.
FileHandler: Sends log messages to a file. Ideal for recording long-term logs.
RotatingFileHandler: Manages log files by rotating them once they reach a certain size.
TimedRotatingFileHandler: Rotates logs based on time intervals such as daily or weekly.
SMTPHandler: Sends log messages via email. Useful for alerting on critical errors.
HTTPHandler: Sends logs to a web server using HTTP.
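For example, a RotatingFileHandler capping each file at roughly 1 MB and keeping three backups (the size, backup count, and filename are illustrative):

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("rotating_demo")
logger.setLevel(logging.INFO)

# Roll over to app.log.1, app.log.2, ... once app.log exceeds maxBytes.
handler = RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=3)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("This record is written to app.log")
```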
Handlers must be added to a logger object after being configured.
logger = logging.getLogger(__name__)
handler = logging.FileHandler('app.log')
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
Filters allow you to control which log records are passed from the logger to the handler. They provide a way to filter log messages based on custom rules.
Filters are created by extending the logging.Filter class and implementing a filter() method.
class InfoOnlyFilter(logging.Filter):
    def filter(self, record):
        return record.levelno == logging.INFO
This filter only allows INFO-level messages to pass.
handler.addFilter(InfoOnlyFilter())
Python allows the creation of multiple loggers, each with its own configuration. This is useful in larger applications with different modules.
app_logger = logging.getLogger(‘app’)
db_logger = logging.getLogger(‘database’)
Each logger can have unique handlers and formatters, allowing better separation of concerns.
Python logging is thread-safe, but it is still essential to configure loggers properly in a multi-threaded context. Using a shared logger object across threads is safe and ensures that messages are logged in the correct order.
Use the threadName attribute in the formatter to include thread information in the logs.
formatter = logging.Formatter('%(asctime)s - %(threadName)s - %(levelname)s - %(message)s')
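A sketch of logging from several threads with that formatter; the thread names and worker function are illustrative:

```python
import logging
import threading

logging.basicConfig(
    format="%(asctime)s - %(threadName)s - %(levelname)s - %(message)s",
    level=logging.INFO,
)
logger = logging.getLogger("threaded")

def worker(n):
    # Each record automatically carries the current thread's name.
    logger.info("worker %d running", n)

threads = [
    threading.Thread(target=worker, args=(i,), name=f"worker-{i}")
    for i in range(3)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```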
In high-performance environments, logging can become a bottleneck. Consider the following: log at appropriate levels so verbose records are filtered out early; defer string interpolation to the logging call (logger.debug('%s', value)); offload I/O to a background thread with QueueHandler and QueueListener; and rotate or compress log files to control disk usage.
Advanced logging configuration in Python provides powerful tools to manage and analyze logs across complex applications. By using handlers, filters, and custom formatters, developers can build robust logging systems tailored to their needs. The next part will focus on formatting options, integrating with third-party tools, and using logging in web applications and microservices.
Formatting log messages is a key aspect of building a useful and readable logging system. Python provides extensive capabilities for customizing how log messages appear. Proper formatting ensures that logs are easy to read, useful for debugging, and consistent across different environments.
By default, the logging module outputs messages with a simple format, usually including just the log level and the message. However, this can be extended to include timestamps, module names, function names, and other contextual information.
import logging
logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)
logging.debug('This is a debug message')
The format string passed to the logging configuration controls how each log message is displayed. Python's logging module supports several predefined attributes that can be used in the format string, including %(asctime)s, %(name)s, %(levelname)s, %(module)s, %(funcName)s, %(lineno)d, and %(message)s:
logging.basicConfig(format='%(asctime)s - %(name)s - %(levelname)s - %(message)s', level=logging.INFO)
logger = logging.getLogger('example')
logger.info('Custom format example')
When using handlers, you can set formatters explicitly rather than using basicConfig. This provides more flexibility.
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
console_handler = logging.StreamHandler()
console_handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(console_handler)
logger.debug('Formatted message using handler')
Often, it is necessary to include variable data in log messages. The recommended way is to use the formatting syntax provided by logging.
user_id = 42
logger.info('Processing request for user ID: %s', user_id)
This approach is preferred over traditional string formatting because it defers the interpolation until it’s confirmed that the message will be logged.
For specialized needs, you can create custom formatter classes by extending logging.Formatter.
class CustomFormatter(logging.Formatter):
    def format(self, record):
        record.custom_attribute = 'ExtraInfo'
        return super().format(record)

formatter = CustomFormatter('%(asctime)s - %(custom_attribute)s - %(message)s')
JSON formatting is commonly used in web applications and microservices to integrate easily with logging infrastructure.
import json

class JsonFormatter(logging.Formatter):
    def format(self, record):
        log_record = {
            'timestamp': self.formatTime(record, self.datefmt),
            'level': record.levelname,
            'name': record.name,
            'message': record.getMessage()
        }
        return json.dumps(log_record)
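A JSON formatter like this is attached to a handler in the usual way; this sketch repeats the class so it runs standalone:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    # Same idea as the formatter above, repeated so this snippet is self-contained.
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record, self.datefmt),
            "level": record.levelname,
            "name": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())

logger = logging.getLogger("json_demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("user logged in")   # emitted as a single JSON object
```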
The logging module can automatically capture and log stack traces.
try:
    1 / 0
except ZeroDivisionError:
    logger.exception('Exception occurred')
Using logger.exception() automatically includes traceback information in the log.
Sometimes, logs need to span multiple lines. This is useful for large data dumps or complex error messages.
multi_line_message = '''
Line 1
Line 2
Line 3
'''
logger.info(multi_line_message)
While functional, multi-line logs should be used judiciously as they can reduce readability in streaming systems.
The datefmt argument allows you to control the format of timestamps in logs.
logging.basicConfig(format='%(asctime)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
For multilingual applications, you may need to log messages in different languages. This can be accomplished using Python’s gettext module in conjunction with logging.
import gettext
_ = gettext.gettext
logger.info(_('Application started'))
Additional context can be injected into log messages using the LoggerAdapter class or, as shown below, a custom filter that attaches extra attributes to each record.
class ContextFilter(logging.Filter):
    def filter(self, record):
        record.ip = '127.0.0.1'
        record.user = 'john.doe'
        return True

logger.addFilter(ContextFilter())
formatter = logging.Formatter('%(asctime)s - %(user)s - %(ip)s - %(message)s')
Structured logging involves outputting logs in a consistent and machine-readable format, making it easier to analyze and index.
import logging
import json

class StructuredFormatter(logging.Formatter):
    def format(self, record):
        log_record = {
            'time': self.formatTime(record, self.datefmt),
            'level': record.levelname,
            'module': record.module,
            'message': record.getMessage()
        }
        return json.dumps(log_record)
For cloud-based or distributed applications, logs are often sent to centralized systems such as log aggregators or monitoring tools. These integrations typically involve network-aware handlers such as SysLogHandler or HTTPHandler, or agents that ship log files to the aggregator:
from logging.handlers import SysLogHandler

handler = SysLogHandler(address=('localhost', 514))
logger.addHandler(handler)
Always be cautious not to log sensitive information. Avoid including passwords, API keys, or personal data in formatted logs.
Formatting in Python logging provides powerful mechanisms for improving readability, usability, and integration of logs. Whether you’re building a simple script or a distributed application, understanding how to format log messages effectively is critical. In the next section, we will explore capturing stack traces, using class-based loggers, and implementing logging in real-world scenarios such as web frameworks and microservices.
Capturing stack traces is an essential part of debugging and diagnosing issues in software development. A stack trace provides a snapshot of the call stack at a specific point in time, typically when an exception occurs. It allows developers to see the path the code took before reaching the error, making it easier to pinpoint and fix issues.
A stack trace is a report that shows the sequence of function calls made in a program leading up to a certain point, often an error. This report includes function names, file names, and line numbers, giving a clear view of how the application arrived at a specific state.
Python’s logging module can automatically capture and log stack traces when exceptions occur. This is particularly useful during error handling.
try:
    1 / 0
except ZeroDivisionError:
    logger.exception('An error occurred')
Using logger.exception() is the most straightforward way to capture and log the traceback. This method should only be called from within an except block.
Alternatively, you can use the exc_info parameter to include exception information.
import logging

logger = logging.getLogger(__name__)

try:
    open('non_existent_file.txt')
except FileNotFoundError:
    logger.error('Failed to open file', exc_info=True)
This approach is useful when using other logging methods like logger.error() or logger.warning().
For more control over how stack traces are displayed, you can override formatting behavior using custom formatters. These formatters can enhance readability or adapt logs for specific monitoring tools.
class CustomTraceFormatter(logging.Formatter):
    def formatException(self, ei):
        return 'CUSTOM TRACE: ' + super().formatException(ei)
Class-based logging is a structured way to integrate logging functionality directly into classes. It allows each class to maintain its logger instance, making logs more granular and traceable.
The logger can be initialized in the class constructor and used throughout the class.
class Calculator:
    def __init__(self):
        self.logger = logging.getLogger(self.__class__.__name__)
        self.logger.setLevel(logging.DEBUG)

    def divide(self, x, y):
        try:
            result = x / y
            self.logger.info(f'Result: {result}')
            return result
        except ZeroDivisionError:
            self.logger.exception('Division by zero error')
Loggers follow a hierarchy based on their name. This allows fine-grained control over logging behavior across different components.
logger = logging.getLogger('project.module.ClassName')
Setting levels at parent loggers affects all child loggers unless they are explicitly overridden.
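A small sketch of that inheritance (the logger names are illustrative):

```python
import logging

# Configure only the parent; children inherit its effective level.
parent = logging.getLogger("project")
parent.setLevel(logging.WARNING)

child = logging.getLogger("project.module.ClassName")
print(child.getEffectiveLevel() == logging.WARNING)

# An explicit level on the child overrides the inherited one.
child.setLevel(logging.DEBUG)
print(child.getEffectiveLevel() == logging.DEBUG)
```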
Function-based logging provides a simple way to add logging to small scripts or functions. While it may lack the structure of class-based logging, it is useful for straightforward use cases.
def process_data(data):
    logger = logging.getLogger('dataProcessor')
    logger.info('Processing started')
    # process logic
    logger.info('Processing finished')
Function decorators can be used to automatically log function entry, exit, and exceptions.
def log_function_call(func):
    def wrapper(*args, **kwargs):
        logger.info(f'Calling {func.__name__}')
        try:
            result = func(*args, **kwargs)
            logger.info(f'{func.__name__} returned successfully')
            return result
        except Exception:
            logger.exception(f'Error in {func.__name__}')
            raise
    return wrapper
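Applying such a decorator looks like this; the decorator body is repeated so the sketch runs on its own, and the add function is illustrative:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def log_function_call(func):
    # Logs entry, successful return, and any exception, then re-raises.
    def wrapper(*args, **kwargs):
        logger.info("Calling %s", func.__name__)
        try:
            result = func(*args, **kwargs)
            logger.info("%s returned successfully", func.__name__)
            return result
        except Exception:
            logger.exception("Error in %s", func.__name__)
            raise
    return wrapper

@log_function_call
def add(a, b):
    return a + b

print(add(2, 3))
```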
Handlers are responsible for sending log messages to their final destination. This can be the console, a file, a remote server, or other outputs. Handlers allow flexible and extensible log management.
file_handler = logging.FileHandler('app.log')
file_handler.setLevel(logging.WARNING)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
file_handler.setFormatter(formatter)
logger.addHandler(file_handler)
You can add multiple handlers to a logger to send the same log message to different destinations.
console_handler = logging.StreamHandler()
file_handler = logging.FileHandler('debug.log')
logger.addHandler(console_handler)
logger.addHandler(file_handler)
You can create custom handlers by subclassing logging.Handler.
class CustomHandler(logging.Handler):
    def emit(self, record):
        log_entry = self.format(record)
        # Custom logic, e.g., send to external dashboard
        print(f'Custom Handler: {log_entry}')
Logging in real-world applications involves combining multiple features and best practices to create a robust and maintainable logging system.
In web applications, logging helps monitor user activity, API calls, and backend operations.
from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def home():
    logger.info(f'Home page accessed by {request.remote_addr}')
    return 'Welcome!'
Each microservice should maintain its own log files and possibly forward logs to a centralized service.
Background tasks and schedulers must also log their activities.
import schedule

def job():
    logger.info('Scheduled job executed')

schedule.every(10).minutes.do(job)
Capture and redirect logs from external libraries by configuring their loggers.
logging.getLogger('requests').setLevel(logging.WARNING)
Capturing stack traces, implementing logging using classes and functions, and properly configuring handlers form the backbone of a professional logging strategy in Python. These techniques ensure your logs are informative, consistent, and useful across different environments and applications. The next step is understanding how to secure and optimize logs for performance, which we will explore in the final section.
Python logging is more than just a tool for writing messages to a file or console—it is a foundational capability that influences the observability, maintainability, and reliability of your applications. From development to production, a solid logging strategy is key to understanding your system’s behavior, identifying root causes of issues, and gaining insights for performance tuning.
A well-structured logging system begins with intention. It’s not just about adding log statements but knowing what, where, and how to log. Logging should complement your application’s architecture, be cohesive with your monitoring strategy, and serve the specific needs of your development and operations teams.
Thoughtful logging practices ensure that logs are not just noise. Too many irrelevant logs can clutter your output, while too few logs can obscure essential details. The goal is to achieve a balance where logs provide clarity without overwhelming the reader.
Key considerations include: which events deserve a log entry, the appropriate severity level for each, what contextual data (request IDs, user IDs, timestamps) to attach, and where logs should ultimately be stored and reviewed.
When implemented properly, logs become a vital form of documentation that tells the story of what happened, when, and why.
Logging plays a central role in application diagnostics. During development, it aids in real-time debugging. In production, logs help engineers identify trends, analyze unexpected behavior, and detect system anomalies.
For example, when an error occurs in a microservice, a detailed log can provide the complete trace of a request, the specific inputs that caused the issue, and any underlying exceptions. Without logging, diagnosing such problems would involve guesswork and could result in longer downtimes or data inconsistencies.
Furthermore, integrating logs with monitoring platforms enhances visibility. Centralized logging systems, combined with real-time dashboards and alerting, can help detect early warning signs of failures, like memory leaks or slow responses.
Tools like ELK stack, Prometheus, and Grafana complement Python logging by allowing aggregation, filtering, and visualization. With such integrations, logs transition from static records to actionable insights.
While logs are useful, they also pose security risks if not handled carefully. Logs may unintentionally expose sensitive data, including authentication tokens, database credentials, or internal error structures that could be exploited by attackers.
To secure your logging system: never log passwords, tokens, or other credentials; redact or mask personal data before it reaches a handler; restrict file permissions on log files; and validate or sanitize any user-supplied values that end up in log messages.
Also, be cautious with stack traces in production logs. While helpful for developers, detailed tracebacks can expose internal logic. Consider using error aggregation tools to manage this trade-off.
Excessive logging can degrade application performance. Writing logs, especially to disk or over a network, consumes resources. As a result, it is essential to optimize the volume and frequency of logging activities.
Strategies for managing performance impact include: raising the log level in production, deferring string interpolation to the logging call, buffering or batching writes with QueueHandler or MemoryHandler, and sampling or rate-limiting high-volume events.
In addition, dynamic log level adjustment allows systems to run with minimal logs by default and increase verbosity temporarily when needed for diagnostics.
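Dynamic adjustment can be as simple as calling setLevel() at runtime; the logger name here is illustrative:

```python
import logging

logger = logging.getLogger("service")
logger.setLevel(logging.WARNING)   # quiet by default in production

# Temporarily raise verbosity while diagnosing an issue...
logger.setLevel(logging.DEBUG)
logger.debug("verbose diagnostics enabled")

# ...then restore the quieter default.
logger.setLevel(logging.WARNING)
```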
Over time, as applications evolve, logging needs to adapt. Long-term best practices for sustainable logging systems include: defining a shared log format and naming convention across services, centralizing configuration rather than scattering it through the codebase, rotating and archiving logs on a fixed policy, and periodically reviewing whether existing log statements are still useful.
Consistency in logging design ensures that as your team grows or changes, logs remain a reliable source of truth.
With the rise of DevOps, microservices, and distributed systems, logging is no longer just a developer concern. It intersects with operations, security, and business intelligence. Logs power audit trails, performance benchmarking, usage analytics, and more.
In serverless architectures or containerized environments like Kubernetes, ephemeral instances make centralized and reliable logging even more critical. Distributed tracing, which builds on logs and metrics, further elevates the role of logging by offering a complete picture of request flows.
As artificial intelligence and machine learning are increasingly applied to system observability, logs serve as critical input for anomaly detection and predictive maintenance.
Mastering logging in Python is a journey that grows with your software. From writing your first logger.info() to configuring multi-tier handlers and integrating with observability platforms, logging empowers developers to build resilient, transparent, and well-understood applications.
Remember, good logs don’t just answer questions—they help you ask the right ones. Whether you’re debugging a bug, monitoring system health, or auditing user activity, a solid logging system is your first and best line of insight.
By embracing logging not just as a technical requirement but as a strategic asset, you can significantly elevate the quality, performance, and security of your Python applications.
Let your logs be your allies, your audit trail, and your continuous feedback loop. Invest in them with the same care and intention you put into your application logic.