Learn structured logging fundamentals in Go, advanced patterns, performance tradeoffs, and how to avoid critical production pitfalls.
Logging in Go has come a long way. For years, the community relied on the simple standard log
package or turned to powerful third-party libraries like zap and zerolog.
With the introduction of log/slog in Go 1.21, the language now has a native, high-performance, structured logging solution designed to be the new standard.
slog isn't just another logger; it's a new foundation that provides a common API (the frontend) that separates logging logic from the final output, which is controlled by various logging implementations (the backend).

This guide will take you through slog from its fundamentals to advanced patterns, showing you how to make logging a useful signal for observing your applications.
The log/slog package is built around three core types: the Logger, the Handler, and the Record. The Logger is the frontend you'll interact with, the Handler is the backend that does the actual logging work, and the Record is the data passed between them.
A Record represents a single log event. It contains all the necessary information about the event, including:

- The time at which the event occurred.
- The severity level (INFO, WARN, etc.).
- The log message.
- Any contextual attributes (key-value pairs).

Essentially, a Record is the raw data for each log entry before it's formatted.
A Handler is an interface that's responsible for processing Records. It's the engine that determines how and where logs are written. It's responsible for:

- Deciding whether a record at a given level should be handled at all.
- Formatting a Record into a specific output, like JSON or plain text.
- Writing the result to a destination, such as stdout or a file.

The log/slog package includes built-in concrete TextHandler and JSONHandler implementations, but you can create custom handlers to meet any requirement. This interface is what makes slog so flexible.
The Logger is the entry point for creating logs, and it provides the user-facing API with methods like Info(), Debug(), and Error().

When you call one of these methods, the Logger creates a Record with the message, level, and attributes you provided. It then passes that Record to its configured Handler for processing.

Here's how the entire process works:
```go
logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
logger.Info("user logged in", "user_id", 123)
```

Since the JSONHandler is configured to log to stdout, this yields:

```json
{"time":"...","level":"INFO","msg":"user logged in","user_id":123}
```
The slog.Logger type offers a flexible API that's designed to handle various logging scenarios, from simple messages to complex, context-aware events. Let's explore its key methods below.

The most common way to log is through the four level-based methods, Debug(), Info(), Warn(), and Error(), each of which corresponds to a specific severity level:

```go
logger.Info("an info message")
```

Output:

```json
{"time":"...","level":"INFO","msg":"an info message"}
```
slog also provides a context-aware version of each level method, such as InfoContext(). These variants accept a context.Context as their first argument, allowing context-aware handlers (if configured) to extract and log values carried within the context:

```go
logger.InfoContext(context.Background(), "an info message")
```

Note that slog's context-aware methods will not automatically pull values from the provided context when using the built-in handlers. You must use a context-aware handler for this pattern to work.
For more programmatic control, or when using custom levels, you can use the generic Log() and LogAttrs() methods, which require you to specify the level explicitly:

```go
logger.Log(context.Background(), slog.LevelInfo, "an info message")
logger.LogAttrs(context.Background(), slog.LevelInfo, "an info message")
```
After choosing a level and a log message for an event, the next step is to add contextual attributes, which let you enrich your log entries with structured, queryable data.

slog provides a few ways to do this. The most convenient is to pass a sequence of alternating keys and values after the log message:

```go
logger.Info("incoming request", "method", "GET", "status", 200)
```

Output:

```json
{"time":"...","level":"INFO","msg":"incoming request","method":"GET","status":200}
```

This convenience comes with a significant drawback. If you provide an odd number of arguments (e.g., a key without a value), slog doesn't panic or return an error. Instead, it silently creates a broken log entry by pairing the value-less argument with a special !BADKEY key:
```go
logger.Warn("permission denied", "user_id", 12345, "resource")
```

Output:

```json
{"time":"...","level":"WARN","msg":"permission denied","user_id":12345,"!BADKEY":"resource"}
```

This silent failure is an API footgun that can corrupt your logging data, and you might only discover the problem during a critical incident when your observability tools fail you.
To guarantee correctness, use the strongly-typed slog.Attr helpers. They make it impossible to create an unbalanced pair, because each attribute bundles its key and value together:

```go
logger.Warn("permission denied",
	slog.Int("user_id", 12345),
	slog.String("resource", "/api/admin"),
)
```

While slightly more verbose, using slog.Attr ensures your logs are always well-formed, reliable, and safe from runtime surprises.
While using slog.Attr is the safer approach, there's nothing stopping anyone from using the simpler key/value style in a different part of the codebase.

The solution is to turn this best practice into an automated, enforceable rule using a linter. For slog, the best tool for this is sloglint, which you'll typically integrate into your development environment and CI/CD pipeline through golangci-lint:

```yaml
# .golangci.yml
linters:
  enable:
    - sloglint

linters-settings:
  sloglint:
    # enforce the strongly-typed slog.Attr style everywhere
    attr-only: true
```

By adding this simple check, you guarantee that every log statement in your project adheres to the safest and most consistent style, preventing !BADKEY occurrences across the entire project.
slog operates with four severity levels. Internally, each level is just an int, and the gaps between them are intentional, leaving room for custom levels:

- slog.LevelDebug (-4)
- slog.LevelInfo (0)
- slog.LevelWarn (4)
- slog.LevelError (8)

All loggers log at slog.LevelInfo by default, meaning that DEBUG messages are suppressed:
```go
logger.Debug("a debug message")
logger.Info("an info message")
logger.Warn("a warning message")
logger.Error("an error message")
```

Output:

```json
{"time":"2025-07-17T10:32:26.364917642+01:00","level":"INFO","msg":"an info message"}
{"time":"2025-07-17T10:32:26.364966625+01:00","level":"WARN","msg":"a warning message"}
{"time":"2025-07-17T10:32:26.36496905+01:00","level":"ERROR","msg":"an error message"}
```
If you need to run expensive operations to prepare data for a log entry, check logger.Enabled() first to confirm that the desired log level is active before doing the work:

```go
if logger.Enabled(context.Background(), slog.LevelDebug) {
	logger.Debug("operation complete", "data", getExpensiveDebugData())
}
```

This simple check ensures that expensive operations only run when their output is guaranteed to be logged, preventing an unnecessary performance hit.
You can control the minimum level that will be processed through slog.HandlerOptions:

```go
handler := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
	Level: slog.LevelDebug,
})
logger := slog.New(handler)
```
To set the level based on an environment variable, you can use a pattern like this:

```go
func getLogLevelFromEnv() slog.Level {
	levelStr := os.Getenv("LOG_LEVEL")

	switch strings.ToLower(levelStr) {
	case "debug":
		return slog.LevelDebug
	case "warn":
		return slog.LevelWarn
	case "error":
		return slog.LevelError
	default:
		return slog.LevelInfo
	}
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		Level: getLogLevelFromEnv(),
	}))

	logger.Debug("a debug message")
}
```
For production services where you might need to change log verbosity without a restart, slog provides the slog.LevelVar type. It's a dynamic container for the log level that allows you to change the level safely and concurrently at any time with Set():

```go
var logLevel slog.LevelVar
logLevel.Set(getLogLevelFromEnv())

logger := slog.New(slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
	Level: &logLevel,
}))
```
For even greater control of severity levels on a per-package basis, you can use the slog-env package, which provides a handler that allows setting the log level via the GO_LOG environment variable:

```go
logger := slog.New(slogenv.NewHandler(slog.NewJSONHandler(os.Stderr, nil)))
```
Let's say your program defaults to the INFO level and you're seeing the following logs:

```json
{"time":"...","level":"INFO","msg":"main: an info message"}
{"time":"...","level":"WARN","msg":"main: a warning message"}
{"time":"...","level":"ERROR","msg":"main: an error message"}
```

You can enable DEBUG messages with:

```shell
GO_LOG=debug go run main.go
```

```json
{"time":"...","level":"DEBUG","msg":"app: a debug message"}
{"time":"...","level":"DEBUG","msg":"main: a debug message"}
{"time":"...","level":"INFO","msg":"main: an info message"}
{"time":"...","level":"WARN","msg":"main: a warning message"}
{"time":"...","level":"ERROR","msg":"main: an error message"}
```
You can then raise the minimum level for the main package alone with:

```shell
GO_LOG=debug,main=error go run main.go
```

The DEBUG logs still show up for other packages, but package main is now raised to the ERROR level:

```json
{"time":"...","level":"DEBUG","msg":"app: a debug message"}
{"time":"...","level":"ERROR","msg":"main: an error message"}
```
If you're missing a log level like TRACE or FATAL, you can easily create it by defining new constants:

```go
const (
	LevelTrace = slog.Level(-8)
	LevelFatal = slog.Level(12)
)
```

To use these custom levels, you must use the generic logger.Log() method:

```go
logger.Log(context.Background(), LevelFatal, "database connection lost")
```
However, their default output names aren't ideal (DEBUG-4, ERROR+4):

```json
{"time":"...","level":"ERROR+4","msg":"database connection lost"}
```

You can fix this by providing a ReplaceAttr() function in your HandlerOptions to map the level's integer value to a custom string:

```go
opts := &slog.HandlerOptions{
	ReplaceAttr: func(groups []string, a slog.Attr) slog.Attr {
		if a.Key == slog.LevelKey {
			level := a.Value.Any().(slog.Level)
			switch level {
			case LevelTrace:
				a.Value = slog.StringValue("TRACE")
			case LevelFatal:
				a.Value = slog.StringValue("FATAL")
			}
		}
		return a
	},
}
```
You'll see a more conventional output now:

```json
{"time":"...","level":"FATAL","msg":"database connection lost"}
```

Note that ReplaceAttr() is called once for every attribute on every log record, so keep its logic as fast as possible to avoid degrading performance.
The Handler is the backend of the logging system: it's responsible for taking a Record, formatting it, and writing it to a destination.

A key feature of slog handlers is their composability. Since handlers are just interfaces, it's easy to create "middleware" handlers that wrap other handlers. This allows you to build a processing pipeline that enriches, filters, or modifies log records before they are finally written. You'll see some examples of this pattern as we go along.
The log/slog package ships with two built-in handlers:

- JSONHandler, which formats logs as JSON.
- TextHandler, which formats logs as key=value pairs.

```go
jsonLogger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
textLogger := slog.New(slog.NewTextHandler(os.Stdout, nil))

jsonLogger.Info("database connected", "db_host", "localhost", "port", 5432)
textLogger.Info("database connected", "db_host", "localhost", "port", 5432)
```

Output:

```text
{"time":"...","level":"INFO","msg":"database connected","db_host":"localhost","port":5432}
time=... level=INFO msg="database connected" db_host=localhost port=5432
```
This article will focus primarily on JSON logging, since it's the de facto standard for production logging.
You can configure the behavior of the built-in handlers using slog.HandlerOptions; you've already seen this approach for setting the Level and using ReplaceAttr to provide custom level names.
The final option is AddSource, which automatically includes the source file, function, and line number in the log output:

```go
opts := &slog.HandlerOptions{
	AddSource: true,
}

logger := slog.New(slog.NewJSONHandler(os.Stdout, opts))
logger.Warn("storage space is low")
```

Output:

```json
{
  "time": "...",
  "level": "WARN",
  "source": {
    "function": "main.main",
    "file": "/path/to/your/project/main.go",
    "line": 11
  },
  "msg": "storage space is low"
}
```

While source information is handy to have, it comes with a performance penalty: slog must call runtime.Caller() to capture the source location, so keep that in mind.
That's pretty much all you can do to customize the built-in handlers. To go further, you'll need to use third-party handlers created by the community, or create a custom one by implementing the Handler interface. A number of community-maintained handlers are available for popular formats and backends.
One notable behavior of the built-in handlers is that they do not de-duplicate keys, which can cause unpredictable or undefined behavior in telemetry pipelines and observability tools:

```go
jsonLogger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
childLogger := jsonLogger.With("app", "my-service")
childLogger.Info("User logged in", slog.String("app", "auth-module"))
```

```json
{"time":"...","level":"INFO","msg":"User logged in","app":"my-service","app":"auth-module"}
```

There's currently no consensus on the "correct" behavior, though the relevant GitHub issue remains open and could still evolve.

For now, if de-duplication is needed, you must use a third-party "middleware" handler, like slog-dedup, to fix the keys before they are written. It supports various strategies, including overwriting, ignoring, appending, and incrementing the duplicate keys. For example, you could overwrite duplicate keys as follows:

```go
jsonLogger := slog.New(slogdedup.NewOverwriteHandler(slog.NewJSONHandler(os.Stdout, nil), nil))
```

Output:

```json
{"time":"...","level":"INFO","msg":"User logged in","app":"auth-module"}
```
The best practice for modern applications is often to log to stdout or stderr and let the runtime environment manage the log stream. However, if your application needs to write directly to a file, you can simply pass an *os.File instance to the slog handler:

```go
logFile, err := os.OpenFile("app.log", os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0666)
if err != nil {
	panic(err)
}
defer logFile.Close()

logger := slog.New(slog.NewJSONHandler(logFile, nil))

logger.Info("Starting server...", "port", 8080)
logger.Warn("Storage space is low", "remaining_gb", 15)
logger.Error("Database connection failed", "db_host", "10.0.0.5")
```

For managing the rotation of log files, you can use the standard logrotate utility or the lumberjack package.
Choosing how to make a logger available across your application is a key architectural decision. This involves trade-offs between convenience, testability, and explicitness. While there’s no single “right” answer, understanding the common patterns will help you select the best approach for your project.
This guide explores the three most common patterns for contextual logging in Go: using a global logger, embedding the logger in the context, and passing the logger explicitly as a dependency.
Using the global logger via slog.Info() is a convenient approach, as it avoids the need to pass a logger instance through every function call. You only need to configure the default logger once at the entry point of the program, and then you're free to use it anywhere in your application:
```go
func main() {
	slog.SetDefault(slog.New(slog.NewJSONHandler(os.Stdout, nil)))

	doSomething()
}

// anywhere else in the application:
func doSomething() {
	slog.Info("doing something")
}
```
When you want to log contextual attributes across scopes, you only need to use the context.Context type to carry the attributes, and then use the Context variants of the level methods accordingly.

This requires a context-aware handler, and a few of these have already been created by the community. One example is slog-context, which allows you to place slog attributes into the context and have them show up anywhere that context is used.

Here's a detailed example showing this pattern:
```go
package main

import (
	"log/slog"
	"net/http"
	"os"

	"github.com/google/uuid"
	slogctx "github.com/veqryn/slog-context"
)

const correlationHeader = "X-Correlation-ID"

// requestID attaches a correlation_id attribute to the request's context.
func requestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		correlationID := r.Header.Get(correlationHeader)
		if correlationID == "" {
			correlationID = uuid.New().String()
		}

		ctx := slogctx.Prepend(r.Context(), slog.String("correlation_id", correlationID))
		w.Header().Set(correlationHeader, correlationID)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// requestLogger logs the details of each incoming request.
func requestLogger(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		slog.InfoContext(r.Context(), "incoming request",
			slog.String("method", r.Method),
			slog.String("path", r.RequestURI),
			slog.String("referrer", r.Referer()),
			slog.String("user_agent", r.UserAgent()),
		)
		next.ServeHTTP(w, r)
	})
}

func hello(w http.ResponseWriter, r *http.Request) {
	slog.InfoContext(r.Context(), "hello world!")
	w.Write([]byte("Hello, World!"))
}

func main() {
	h := slogctx.NewHandler(slog.NewJSONHandler(os.Stdout, nil), nil)
	slog.SetDefault(slog.New(h))

	mux := http.NewServeMux()
	mux.HandleFunc("/", hello)

	wrappedMux := requestID(requestLogger(mux))
	http.ListenAndServe(":3000", wrappedMux)
}
```
The requestID() middleware intercepts every incoming request, generates a unique correlation_id, and uses slogctx.Prepend() to attach this ID as a logging attribute to the request's context.

The requestLogger() middleware and the final hello() handler both use slog.InfoContext(). They don't need to know about the correlation_id explicitly; they just pass the request's context to the global logger.

When slog.InfoContext() is called, the configured slogctx.Handler intercepts the call, inspects the provided context, finds the correlation_id attribute, and automatically adds it to the log record before it's written out by the JSONHandler:

```json
{"time":"...","level":"INFO","msg":"incoming request","correlation_id":"59230d79-a206-44e3-a02c-e7acf5bad28d","method":"GET","path":"/","referrer":"","user_agent":"curl/8.5.0"}
{"time":"...","level":"INFO","msg":"hello world!","correlation_id":"59230d79-a206-44e3-a02c-e7acf5bad28d"}
```

This pattern ensures that every log statement related to a single HTTP request is tagged with the same correlation_id, making it possible to connect a set of logs to a single request.
Another common pattern is placing the logger itself in a context.Context instance. You can also use the slog-context package to implement this pattern:

```go
package main

import (
	"log/slog"
	"net/http"
	"os"

	"github.com/google/uuid"
	slogctx "github.com/veqryn/slog-context"
)

const correlationHeader = "X-Correlation-ID"

// ctxLogger places the application's base logger into the request context.
func ctxLogger(logger *slog.Logger, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ctx := slogctx.NewCtx(r.Context(), logger)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

// requestID swaps the logger in the context for a child logger that
// carries the correlation_id attribute.
func requestID(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		correlationID := r.Header.Get(correlationHeader)
		if correlationID == "" {
			correlationID = uuid.New().String()
		}

		ctx := slogctx.With(r.Context(), slog.String("correlation_id", correlationID))
		w.Header().Set(correlationHeader, correlationID)
		next.ServeHTTP(w, r.WithContext(ctx))
	})
}

func requestLogger(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		logger := slogctx.FromCtx(r.Context())
		logger.Info("incoming request",
			slog.String("method", r.Method),
			slog.String("path", r.RequestURI),
			slog.String("referrer", r.Referer()),
			slog.String("user_agent", r.UserAgent()),
		)
		next.ServeHTTP(w, r)
	})
}

func hello(w http.ResponseWriter, r *http.Request) {
	logger := slogctx.FromCtx(r.Context())
	logger.Info("hello world!")
	w.Write([]byte("Hello, World!"))
}

func main() {
	h := slogctx.NewHandler(slog.NewJSONHandler(os.Stdout, nil), nil)
	logger := slog.New(h)

	mux := http.NewServeMux()
	mux.HandleFunc("/", hello)

	wrappedMux := ctxLogger(logger, requestID(requestLogger(mux)))
	http.ListenAndServe(":3000", wrappedMux)
}
```
Here, the outermost middleware, ctxLogger(), takes the application's base logger and uses slogctx.NewCtx() to place it into the request's context. This makes the logger available to all subsequent handlers.

Next, the requestID() middleware retrieves the logger from the context. It then uses slogctx.With() to create a new child logger that includes the correlation_id. This new, more contextual logger is then placed back into the context, replacing the base logger.

Any subsequent middleware or handler, like requestLogger() and hello(), can now retrieve the fully contextualized child logger using slogctx.FromCtx(). They can log messages without needing to know anything about the correlation_id; it's automatically included because it's part of the logger instance that was retrieved.

The result is exactly the same as before:

```json
{"time":"...","level":"INFO","msg":"incoming request","correlation_id":"59230d79-a206-44e3-a02c-e7acf5bad28d","method":"GET","path":"/","referrer":"","user_agent":"curl/8.5.0"}
{"time":"...","level":"INFO","msg":"hello world!","correlation_id":"59230d79-a206-44e3-a02c-e7acf5bad28d"}
```

What happens if you use slogctx.FromCtx() but there's no logger in the context? The default logger (slog.Default()) is returned.
This approach treats the logger as a formal dependency, provided to components either through function parameters or as a field in a struct. The logger is supplied once when the struct is created, and all its methods can then access it via the receiver:

```go
type UserService struct {
	logger *slog.Logger
	db     *sql.DB
}

func NewUserService(logger *slog.Logger, db *sql.DB) *UserService {
	return &UserService{
		// tag every log from this component
		logger: logger.With(slog.String("component", "UserService")),
		db:     db,
	}
}

func (s *UserService) CreateUser(ctx context.Context, user *User) {
	l := s.logger.With(slog.Any("user", user))

	l.InfoContext(ctx, "creating new user")
	// ... insert the user into the database ...
	l.InfoContext(ctx, "user created successfully")
}
```

For context-aware logging, you would then rely on adding attributes to the context with slogctx.Prepend() as shown earlier.
slog's design encourages handlers to read contextual values from a context.Context. This makes putting the Logger instance itself in the context unnecessary, and thus not recommended.

The initial slog proposal included helper functions like slog.NewContext() and slog.FromContext() for adding and retrieving the logger from a context, but they were removed from the final version after strong community opposition from the "anti-pattern" camp.

The key decision is thus between two patterns: using a global logger or using dependency injection. The former is extremely convenient but adds a hidden dependency that's hard to test, while the latter is more verbose but makes dependencies explicit, resulting in highly testable and flexible code.

You can use sloglint to enforce whichever style you choose throughout your codebase, so do check out the full list of options that it provides.
The LogValuer interface provides a powerful mechanism for controlling how your custom types appear in log output. This becomes particularly important when dealing with sensitive data or complex structures, or when you want a consistent representation of domain objects across your logging.

The interface is elegantly simple:

```go
type LogValuer interface {
	LogValue() slog.Value
}
```

When slog encounters a value that implements LogValuer, it calls the LogValue() method instead of using the default representation. This gives you complete control over what information appears in your logs.
Consider an application where you frequently log user information. Without implementing LogValuer, logging a User struct directly might expose more information than intended:

```go
type User struct {
	ID           int
	Email        string
	PasswordHash string
	CreatedAt    time.Time
	LastLogin    time.Time
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	user := &User{
		ID:           1,
		Email:        "john@example.com",
		PasswordHash: "encrypted-password-hash",
		CreatedAt:    time.Now(),
		LastLogin:    time.Now().Add(-24 * time.Hour),
	}

	logger.Info("user operation", slog.Any("user", user))
}
```

Output:

```json
{
  "time": "2025-07-17T17:18:22.090974193+01:00",
  "level": "INFO",
  "msg": "user operation",
  "user": {
    "ID": 1,
    "Email": "john@example.com",
    "PasswordHash": "encrypted-password-hash",
    "CreatedAt": "2025-07-17T17:18:22.090965054+01:00",
    "LastLogin": "2025-07-16T17:18:22.090965107+01:00"
  }
}
```
By implementing LogValuer, you can control exactly what information appears. For example, you can limit the output to just the id:

```go
func (u *User) LogValue() slog.Value {
	return slog.GroupValue(
		slog.Int("id", u.ID),
	)
}
```

This now produces clean, controlled output that hides all sensitive or unnecessary fields:

```json
{
  "time": "2024-01-15T10:30:45.123Z",
  "level": "INFO",
  "msg": "user operation",
  "user": {
    "id": 1
  }
}
```

If you add a new field to the struct later on, it won't be logged until you specifically add it to the LogValue() method. While this adds some extra work, it guarantees that sensitive data won't be accidentally logged.
Error logging in slog requires thoughtful consideration of what information will be most valuable during debugging. Unlike simple string-based logging, structured error logging lets you capture rich context alongside the error itself.

The most straightforward approach uses slog.Any() to log error values:

```go
err := errors.New("payment gateway unreachable")
logger.Error("Payment processing failed", slog.Any("error", err))
```

You'll see the error message in the output accordingly:

```json
{
  "time": "2025-07-17T17:25:05.356666995+01:00",
  "level": "ERROR",
  "msg": "Payment processing failed",
  "error": "payment gateway unreachable"
}
```
If you're using a custom error type, you can implement the LogValuer interface to enrich your error logs:

```go
type PaymentError struct {
	Code    string
	Message string
	Cause   error
}

func (pe PaymentError) Error() string {
	return fmt.Sprintf("%s: %s", pe.Code, pe.Message)
}

func (pe PaymentError) LogValue() slog.Value {
	return slog.GroupValue(
		slog.String("code", pe.Code),
		slog.String("message", pe.Message),
		slog.String("cause", pe.Cause.Error()),
	)
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	causeErr := errors.New("network timeout")
	err := PaymentError{
		Code:    "GATEWAY_UNREACHABLE",
		Message: "Failed to reach payment gateway",
		Cause:   causeErr,
	}

	logger.Error("Payment operation failed", slog.Any("error", err))
}
```

Output:

```json
{
  "time": "2025-07-17T17:25:05.356666995+01:00",
  "level": "ERROR",
  "msg": "Payment operation failed",
  "error": {
    "code": "GATEWAY_UNREACHABLE",
    "message": "Failed to reach payment gateway",
    "cause": "network timeout"
  }
}
```

This approach provides structured error information that's much more valuable than plain error strings when analyzing failures in production systems.
You can go even further by capturing a structured stack trace in your error logs. You'll need to integrate with a third-party package like go-errors or go-xerrors to achieve this:

```go
package main

import (
	"context"
	"log/slog"
	"os"

	xerrors "github.com/mdobak/go-xerrors"
)

func replaceAttr(_ []string, a slog.Attr) slog.Attr {
	if err, ok := a.Value.Any().(error); ok {
		if trace := xerrors.StackTrace(err); len(trace) > 0 {
			errGroup := slog.GroupValue(
				slog.String("msg", err.Error()),
				slog.Any("trace", formatStackTrace(trace)),
			)
			a.Value = errGroup
		}
	}
	return a
}

func formatStackTrace(trace xerrors.Callers) []map[string]any {
	frames := trace.Frames()
	s := make([]map[string]any, len(frames))
	for i, v := range frames {
		s[i] = map[string]any{
			"func":   v.Function,
			"line":   v.Line,
			"source": v.File,
		}
	}
	return s
}

func main() {
	h := slog.NewJSONHandler(os.Stdout, &slog.HandlerOptions{
		ReplaceAttr: replaceAttr,
	})
	logger := slog.New(h)

	ctx := context.Background()
	err := xerrors.New("something happened")
	logger.ErrorContext(ctx, "image upload failed", slog.Any("error", err))
}
```

Output:

```json
{
  "time": "2025-07-18T09:16:14.870855023+01:00",
  "level": "ERROR",
  "msg": "image upload failed",
  "error": {
    "msg": "something happened",
    "trace": [
      {
        "func": "main.main",
        "line": "...",
        "source": "/home/ayo/dev/dash0/demo/golang-slog/main.go"
      },
      {
        "func": "runtime.main",
        "line": "...",
        "source": "/home/ayo/.local/share/mise/installs/go/1.24.2/src/runtime/proc.go"
      },
      {
        "func": "runtime.goexit",
        "line": "...",
        "source": "/home/ayo/.local/share/mise/installs/go/1.24.2/src/runtime/asm_amd64.s"
      }
    ]
  }
}
```
While slog was designed with performance in mind, it consistently benchmarks as slower than highly optimized third-party libraries such as zerolog and zap. While absolute numbers vary with benchmark conditions, the relative rankings have been shown to be consistent:

| Package | Time | % Slower | Objects allocated |
|---|---|---|---|
| zerolog | 380 ns/op | +0% | 1 allocs/op |
| zap | 656 ns/op | +73% | 5 allocs/op |
| zap (sugared) | 935 ns/op | +146% | 10 allocs/op |
| slog (LogAttrs) | 2479 ns/op | +552% | 40 allocs/op |
| slog | 2481 ns/op | +553% | 42 allocs/op |
| logrus | 11654 ns/op | +2967% | 79 allocs/op |

This performance profile is not an accident but the result of deliberate design choices. The Go team's own analysis revealed that their optimization efforts focused on the most common logging patterns observed in open-source projects, where calls with five or fewer attributes accounted for over 95% of use cases.
Only you can decide whether this performance gap matters for your use case. If you need to bridge it for a high-throughput or latency-sensitive workload, you have two practical options:

1. Keep slog as the frontend API and wire it to a high-performance third-party logging handler for modest gains.
2. Bypass slog entirely and log directly with zerolog or zap to squeeze out every last nanosecond.

As always, be sure to run your own benchmarks before committing either way.
Once your Go application is producing high-quality, structured logs with slog, the next step is to get them off individual servers and into a centralized observability pipeline. Centralizing your logs transforms them from simple diagnostic records into a powerful, queryable dataset. More importantly, it allows you to correlate slog entries with other critical telemetry signals, like distributed traces and metrics, to get a complete picture of your system's health.

Modern observability platforms can ingest the structured JSON output from slog's JSONHandler, and they provide powerful tools for searching, creating dashboards, and alerting on your log data.

To unlock true correlation, however, your logs must share a common context (like a TraceID) with your traces. The standard way to achieve this is by integrating slog with OpenTelemetry using the otelslog bridge. A full demonstration is beyond the scope of this guide, but you can consult the official OpenTelemetry documentation to learn how to configure the log bridge.

Once your OpenTelemetry-enriched log data is fed into an OpenTelemetry-native platform like Dash0, your slog entries will appear alongside traces and metrics in a unified view, giving you end-to-end visibility into every request across your distributed system.
The introduction of log/slog was a pivotal moment for the Go ecosystem that finally acknowledged the need for robust tooling to support building highly observable systems right out of the box.

Throughout this guide, we've journeyed from the core concepts of Logger, Handler, and Record to patterns for contextual and error logging. While the API has a few rough edges and isn't the most elegant, its establishment reduces the fragmentation of past approaches and gives the Go community a consistent, shared language for structured logging.
By treating logging not as an afterthought but as a fundamental signal for observability, you’ll transform your services from opaque black boxes into systems that are transparent, diagnosable, and easier to troubleshoot.
Thanks for reading!
It's fine for application logging but I have two gripes with slog:
1) If you're writing a library that can be used by many different applications and want to emit logs, you'll still need to write a generic log interface with adapters for slog, zap, charmlog, etc. That the golang team refuses to bless a single interface for everyone to settle on both makes sense given their ideological standpoint on shipping interfaces and also causes endless mild annoyance and code duplication.
2) I believe it's still impossible to see the correct callsite in test logs when using slog as the logger. For more information, see https://github.com/neilotoole/slogt?tab=readme-ov-file#defic.... It's possible I'm out of date here — please correct me if this is wrong, it's actually a much larger annoyance for me and one of the reasons I still use uber/zap or charmbracelet/log.
Overall, especially given that it performs worse than uber/zap and everyone has basically standardized on that and it provides essentially the same interface, I recommend using uber/zap instead.
EDIT: just to expand further, take a look at the recommended method of wrapping helper methods that call logs. Compare to the `t.Helper()` approach. And some previous discussion. Frustrating!
- https://pkg.go.dev/log/slog#example-package-Wrapping
- https://github.com/golang/go/issues/59145#issuecomment-14770...
The blessed interface for libraries is to accept a slog.Logger.
The blessed interface for logging backends is slog.Handler.
Applications can then wire that up with a handler they like, for example
zap: https://pkg.go.dev/go.uber.org/zap/exp/zapslog#Handler
charm https://github.com/charmbracelet/log?tab=readme-ov-file#slog...
I used slog.Logger for an OSS project and I will not do it again. The interface is terrible, and far more verbose and less expressive than something like Zap or zerolog. e.g. there's not really anything as good as zerolog's log.Dict() for dealing with complex structures.
1) The idea is that your library should accept the slog logger and use it. The caller would create a logger with a handler that defines how log messages are handled. But there are problems with supported types; see my other comments.
2) It is improved in 1.25. See https://github.com/golang/go/issues/59928 and https://pkg.go.dev/testing#T.Output. Now it is possible to update slogt to provide correct callsite – the stack depth should be the same.
1) Right, but this is complicated and annoying. Imagine a world where you could just pass your existing logger in, because my library references an interface like `stdlib/logging.GenericLoggerInterface` and slog, zap, zerolog, etc. all implement that! Would be nice!
2) TIL about `T.Output`, thank you, that's great to know about. Still annoying and would be nice if the slog package showed an example of logging from tests with correct callsites. Golang gets so many things right about testing, so the fact that logging in tests is difficult really stands out and bothers me.
But that is exactly what slog provides? A unified interface that can be implemented by other logger libraries. Yes, the Logger itself is not the interface, but the Handler it is backed by is.
We switched to zerolog a while back and didn't look back.
> That the golang team refuses to bless a single interface for everyone to settle on
Uh... https://pkg.go.dev/golang.org/x/exp/slog#Handler
If zap, charmlog, etc. don't provide conformance to the interface, that's not really on the Go team. It wouldn't be that hard to write your own adapter around your unidiomatic logger of choice if you're really stuck, though. This isn't an actual problem unless you think someone else owes you free labor for some reason.
That's close, but not what I meant — that's specific to this package, and is the interface for processing log records produced by a slog.Logger. What I mean is that there should be a single interface for Logging that is implemented by slog.Logger, uber/zap.Logger, etc. that library authors can use without needing to reinvent the wheel every time.
For an example from one of my own libraries, see
https://github.com/peterldowns/pgmigrate/blob/d3ecf8e4e8af87...
> What I mean is that there should be a single interface for Logging that is implemented by slog.Logger, uber/zap.Logger, etc.
There is: https://pkg.go.dev/golang.org/x/exp/slog#Handler
If, say, zap was conformant, you'd slog.New(zap.NewHandler()) or whatever and away you go. It seems the only problem here is that the logging packages you want to use are not following the blessed, idiomatic path.
> For an example from one of my own libraries
There are a lot of problems with that approach at scale. That might not matter for your pet projects, but slog also has to serve those who are pushing computers to their limits. Your idea didn't escape anyone.
I know it didn't escape anyone; I'm explaining the downside to the choices made by the stdlib authors, from my perspective. When performance is a concern, people pick uber/zap.Logger or zerolog. When performance isn't a huge concern, slog is overly complicated and annoying. I believe you understand my complaint.
> I believe you understand my complaint.
I don't, really. If performance is of utmost concern, you're not going to accept the overhead of passing the logger through an interface anyway, so that's moot. A library concerned about performance as a top priority has to pick one and only one.
But if a library has decided that flexibility is more important than raw efficiency, then the interface is already defined.
zaplogger.Info("Calling third-party library")
thirdparty.Call(slog.New(zaplogger)) // Logs whatever the package logs to zaplogger
zaplogger.Info("Called third-party library")
The only 'problem' I can see is if `zaplogger` hasn't implemented the interface. But there isn't much the Go team can do about implementations not playing nicely.

Your library should just take a *slog.Logger, and using *slog.Logger is an orthogonal choice to zap/zerolog/whatever. Those compete with slog.TextHandler or slog.JSONHandler, and sure, if you're performance sensitive don't pick them. In my newer projects I use zap under the hood with an application-facing *slog.Logger through go.uber.org/zap/exp/zapslog just fine (actually further locked down with my own interface so that coworkers can't go crazy, but that's beside the point). Your bespoke interface, or the standard interface you want, isn't going to be any more performant than going through the slog.Handler interface anyway.
The thing that gets me about slog is that the output key for the slog JSON handler is msg, but that's not compatible with Google's own GCP Stackdriver logging. Since that key is a constant, I now need to use an attribute replacer to change it from msg to message (or whatever it is Stackdriver wants). Good work, Google.
We had the same annoyance, and wrote https://pkg.go.dev/github.com/chainguard-dev/clog/gcp to bridge the gap.
It's a slog handler that formats everything the way GCP wants, including with trace contexts, etc.
We've had this in production for months, and it's been pretty great.
You can add this at your main.go
import _ "github.com/chainguard-dev/clog/gcp/init"
(the rest of the library is about attaching a logger to a context.Context, but you don't need to use that to use the GCP logger)

My biggest gripe with slog is that there is no clear guidance on supported types of attributes.
One could argue that supported types are the ones provided by Attr "construct" functions (like slog.String, slog.Duration, etc), but it is not enough. For example, there is no function for int32 – does it mean it is not supported? Then there is slog.Any and some support in some handlers for error and fmt.Stringer interfaces. The end result is a bit of a mess.
All values are supported.
Well, is fmt.Stringer supported? The result might surprise you:
req := expvar.NewInt("requests")
req.Add(1)
attr := slog.Any("requests", req)
slog.New(slog.NewTextHandler(os.Stderr, nil)).Info("text", attr)
slog.New(slog.NewJSONHandler(os.Stderr, nil)).Info("json", attr)
This code produces time=2025-09-12T13:15:42.125+02:00 level=INFO msg=text requests=1
{"time":"2025-09-12T13:15:42.125555+02:00","level":"INFO","msg":"json","requests":{}}
So the code that uses slog but does not know what handler will be used can't rely on it lazily calling the `String() string` method: half of the standard handlers do that, half don't.

If you need more control, you can create a wrapper type that implements `slog.LogValuer`
type StringerValue struct {
fmt.Stringer
}
func (v StringerValue) LogValue() slog.Value {
return slog.StringValue(v.String())
}
Usage example: slog.Any("requests", StringerValue{req})
There might be a case for making the expvar types implement `slog.LogValuer` directly.

So clearly not all values are supported.
And I know that I can create a wrapper for unsupported types. My problem is exactly that – I don't know what types are supported. Is error supported, for example? Should I create a wrapper for it? And, as a handler author, should I support it directly or not?
Not sure what your definition of "supported" is, but I'm afraid you're going to have to bite the bullet and ... gasp ... read the documentation https://pkg.go.dev/log/slog
Not sure I understand your sarcasm. I have read the documentation, the source code, the handler writing guide, and the issues in the Go repository multiple times over two years, and I use slog extensively. Go has been my primary language since r60. I think I know how to read Go docs.
Now, please point me to the place in the documentation that says whether I can or can't use a value implementing the error interface as an attribute value, and whether the handler or something else will call the `Error() string` method.
My definition of "supported" is simple – I could pass a supported value to the logger and get a reasonable representation from any handler. In my example, the JSON handler does not provide it for the fmt.Stringer.
https://pkg.go.dev/log/slog#JSONHandler.Handle
> Values are formatted as with an encoding/json.Encoder with SetEscapeHTML(false), with two exceptions.
> First, an Attr whose Value is of type error is formatted as a string, by calling its Error method. Only errors in Attrs receive this special treatment, not errors embedded in structs, slices, maps or other data structures that are processed by the encoding/json package.
So the json handler more or less works as if you called json.Marshal, which sounds pretty reasonable.
I think you missed the “any handler” part. Currently, the types that my library package could use depend on the handler used by the caller. This limits types to an unspecified subset, making things quite impractical.
"Any handler" is too broad; maybe my custom handler only logs strings and ignores ints.
For a reasonable substitute subset, use the core language types, and implement LogValuer for anything complex.
That seems to work as expected?
The output of data is handled by the handler. Such behaviour is clearly outlined in the documentation by the JSONHandler. I wouldn't expect a JSONHandler to use Stringer. I'd expect it to use the existing JSON interfaces, which it does.
I'd expect the Text handler to use TextMarshaler. Which it does. Or Stringer, which it does implicitly via fmt.Sprintf.
My problem with that is that it makes it impossible to use slog logger safely without knowing what handler is being used. Which kind of defeats the purpose of defining the common structured logging interface.
> Which kind of defeats the purpose of defining the common structured logging interface.
Does it, though? Why would the log producer care about how the log entries are formatted? Only the log consumer cares about that.
As a producer of the response, if I didn't care about being understood, I would use a made-up language. As a consumer, you may care about understanding my response, but you cannot do anything about it.
Hence the design. The producer coming up with a made up language that makes sense to the producer, but probably doesn't make sense to the consumer — especially when you have many different consumers with very different needs — is far more problematic than the producer providing an abstract representation and allowing the consumer to dig into the specific details it wants.
As with everything in life, there are tradeoffs to that approach, of course, and it might be hard to grasp if you come from languages with different idioms that prioritize the producer over the consumer. But if you look closely, everything about Go is designed to prioritize the needs of the consumer over the needs of the producer. That does seem to confuse a lot of people, interestingly. I expect it's because prioritizing the consumer isn't idiomatic in a lot of other languages, and people get caught up in trying to write code in those other languages using Go syntax instead of actually learning Go.
A good, if a bit strange, example. A CPI logger wouldn't need to log the same thing as an access logger, but the producer need not care about who is consuming the logs. Consumers might even want to see both and, given the design, can have both.
Certainly logs lose their value if they are wrong. And either approach is ripe for getting things wrong. But the idea is that the consumer is more in tune with getting what the consumer needs right. The producer's assumptions are most likely to be wrong, fundamentally, not having the full picture of what is needed. What is the counter suggestion?
We've banned this account for continually posting unsubstantive comments and ignoring our previous request to observe the guidelines.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.