Prompt of the Day: Set Up Structured Logging for Production
Part 15 of 30 — Prompt of the Day Series
Let me tell you about a checkout bug that only showed up in production — and only for Safari users.
A developer documented the incident in detail on AppSignal's blog in July 2025: the checkout button silently failed for some users. Not a crash. Not a console error in dev. A silent failure in the wild. The team spent hours trying to reproduce it locally, adding console.log statements, wrapping things in try/catch, staring at the network tab. Nothing. It was only after they added structured logging with correlation IDs that the picture finally came into focus — the specific request path, the user agent, the exact failure point, all searchable in one query.
This is the story of every production debugging session without structured logging. You have text. What you need is data.
According to Grafana's 2026 Observability Survey — the largest community survey on the state of observability, with over 1,300 respondents across 76 countries — 77% of teams report saving time and money through centralized observability. Among those who called observability "essential," that number jumps to 84%. The tooling has matured. The practices have matured. What hasn't changed is that too many apps still ship with print() statements and console.log as the primary debugging interface.
OpenObserve's March 2026 field guide puts the problem concretely: "You cannot GROUP BY a sentence. You cannot join a free-text string to a distributed trace. You cannot build an alert on a substring match at scale without burning money on regex filters that break the moment a developer changes their log message wording." Plain text logs are a write-only medium. Structured logs are a database.
This prompt gives you the structured logging setup you should have had from day one.
The Prompt
Set up production-grade structured logging for my [Python / Node.js / TypeScript]
application. The implementation must:
1. JSON FORMAT: Every log line must emit valid JSON with consistent fields:
timestamp (ISO 8601), level, message, service name, environment,
and a request_id / trace_id on every log within a request context.
Use structlog (Python) or pino (Node.js/TypeScript) — not the bare
standard library or console.log.
2. LOG LEVELS: Configure DEBUG, INFO, WARN, ERROR, and FATAL levels.
Set production log level to INFO. DEBUG must be disabled in production
by default and enabled only via environment variable LOG_LEVEL=debug.
Never hardcode log levels.
3. REQUEST CONTEXT PROPAGATION: Use async context (contextvars in Python,
AsyncLocalStorage in Node.js) to automatically inject a trace_id into
every log line within a request lifecycle — without passing it manually
to every function.
4. SENSITIVE DATA GUARDS: Add a processor/middleware that redacts or
omits the following fields before emission: password, token, secret,
authorization, credit_card, ssn. Raise an error in tests if any of
these keys appear unredacted in log output.
5. TRANSPORT CONFIGURATION: Write to stdout in production (not to files —
let the container orchestrator handle forwarding). In development,
emit pretty-printed console output. Switch behavior via NODE_ENV /
APP_ENV environment variable.
6. ERROR LOGGING: On exceptions, include: error message, error type,
full stack trace as a structured field (not a multiline string), and
the originating request_id. Never swallow exceptions silently.
Provide:
- The full logger setup module (logger.py or logger.ts)
- Middleware/decorator to inject trace_id per request (FastAPI / Express)
- Three concrete usage examples: an INFO log on successful payment,
a WARN log on a retried external API call, and an ERROR log on a
failed database write
- A test that verifies sensitive fields are redacted
Why It Works
This prompt forces four behaviors that separate production logging from debug logging:
Structured output by default. By specifying structlog or pino explicitly, you get JSON from line one. Both libraries are battle-tested for production throughput — pino in particular is designed for high-volume Node.js APIs where logging overhead matters. You're not asking the AI to roll a custom formatter.
Context propagation without plumbing. The biggest pain point in microservices debugging isn't the log format — it's the missing trace_id that would let you follow a request across five services. By naming contextvars (Python) and AsyncLocalStorage (Node.js) specifically, you get automatic injection without threading the context through every function signature.
Sensitive data as a first-class concern. Most logging setups add redaction as an afterthought, after a security review finds a plaintext password in the logs. This prompt builds it in from the start, and adds a test to enforce it. The test is the key — it makes redaction part of the CI contract, not a post-incident remediation.
Stdout, not files. Writing logs to disk inside a container is a trap. It fills the disk, requires log rotation config, and doesn't forward to your aggregation layer automatically. Writing to stdout lets Docker, Kubernetes, or your cloud platform capture and ship logs without any additional tooling.
Here's what the Python output looks like:
```python
# logger.py (Python / structlog)
import logging
import os
import uuid

import structlog
import structlog.contextvars

SENSITIVE_KEYS = {"password", "token", "secret", "authorization", "credit_card", "ssn"}

def _redact_sensitive_fields(logger, method, event_dict):
    for key in SENSITIVE_KEYS:
        if key in event_dict:
            event_dict[key] = "[REDACTED]"
    return event_dict

def configure_logging():
    env = os.getenv("APP_ENV", "development")
    log_level = os.getenv("LOG_LEVEL", "INFO").upper()

    shared_processors = [
        structlog.contextvars.merge_contextvars,
        structlog.processors.add_log_level,
        structlog.processors.TimeStamper(fmt="iso"),
        _redact_sensitive_fields,
    ]

    if env == "production":
        structlog.configure(
            processors=shared_processors + [structlog.processors.JSONRenderer()],
            wrapper_class=structlog.make_filtering_bound_logger(
                getattr(logging, log_level)
            ),
        )
    else:
        structlog.configure(
            processors=shared_processors + [structlog.dev.ConsoleRenderer()],
            wrapper_class=structlog.make_filtering_bound_logger(logging.DEBUG),
        )

configure_logging()
log = structlog.get_logger()

# FastAPI middleware for trace_id injection
from fastapi import Request

async def logging_middleware(request: Request, call_next):
    trace_id = request.headers.get("X-Trace-Id", str(uuid.uuid4()))
    structlog.contextvars.bind_contextvars(trace_id=trace_id)
    try:
        return await call_next(request)
    finally:
        # Clear even when the handler raises, so bound context never
        # leaks into the next request served by the same worker.
        structlog.contextvars.clear_contextvars()
```
And the TypeScript equivalent with pino:
```typescript
// logger.ts (Node.js / pino)
import { AsyncLocalStorage } from 'async_hooks';
import pino from 'pino';
import { v4 as uuidv4 } from 'uuid';

const SENSITIVE_KEYS = ['password', 'token', 'secret', 'authorization', 'credit_card', 'ssn'];

export const logger = pino({
  level: process.env.LOG_LEVEL || (process.env.NODE_ENV === 'production' ? 'info' : 'debug'),
  redact: {
    paths: SENSITIVE_KEYS,
    censor: '[REDACTED]',
  },
  ...(process.env.NODE_ENV !== 'production' && {
    transport: { target: 'pino-pretty' },
  }),
  base: {
    service: process.env.SERVICE_NAME || 'api',
    env: process.env.NODE_ENV || 'development',
  },
});

// Express middleware for trace_id injection
export const requestContext = new AsyncLocalStorage<{ traceId: string }>();

export function loggingMiddleware(req: any, res: any, next: any) {
  const traceId = req.headers['x-trace-id'] || uuidv4();
  requestContext.run({ traceId }, () => {
    req.traceId = traceId;
    next();
  });
}

// Usage — trace_id available anywhere in the call stack:
export function getLogger() {
  const ctx = requestContext.getStore();
  return logger.child({ trace_id: ctx?.traceId });
}
```
In production, every log line looks like this:
```json
{
  "timestamp": "2026-03-30T06:11:00.000Z",
  "level": "error",
  "service": "checkout-api",
  "env": "production",
  "trace_id": "a3f7c2e1-9b44-4d2a-8c01-ff3b2e110abc",
  "message": "Database write failed",
  "error_type": "OperationalError",
  "user_id": "usr_8821",
  "order_id": "ord_44192"
}
```
That single JSON object tells you what failed, who was affected, and where to look in your distributed trace — in one query, without grepping through gigabytes of text.
The Anti-Prompt
Add some logging to my app so I can debug issues in production.
Why it fails: This is the prompt that produces console.log('here') and print(f"Error: {e}"). The AI has no guidance on format, level discipline, context propagation, sensitive data handling, or transport strategy. You'll get something that works in a terminal during development and fails silently — or exposes passwords — in production.
Every one of the five most common structured logging mistakes documented in 2026 — DEBUG logs in production burning your log budget, PII in plaintext, inconsistent field names breaking cross-service queries, contextless errors with no trace ID, and logs-as-metrics anti-patterns — can be traced back to vague prompts that produce vague implementations.
Vague prompt → vague code → 2 a.m. debugging session with no useful information.
Variations
For FastAPI + structlog (Python):
[Use the base prompt above with] FastAPI application using structlog.
Include a background task variant that preserves trace_id across
asyncio task boundaries using the standard library's
contextvars.copy_context().
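The mechanism behind that variation is visible with the standard library alone: asyncio.create_task runs the new task in a copy of the creating coroutine's context, which is what lets context-bound values such as a trace_id survive the boundary. A minimal stdlib-only sketch (structlog's contextvars helpers build on this same machinery):

```python
import asyncio
import contextvars

# Stand-in for the value structlog.contextvars would bind per request.
trace_id_var = contextvars.ContextVar("trace_id", default=None)

async def background_job():
    # The task runs in a copy of the context that created it, so the
    # trace_id set in the request handler is still visible here.
    return trace_id_var.get()

async def handler():
    trace_id_var.set("abc-123")
    task = asyncio.create_task(background_job())
    return await task

result = asyncio.run(handler())
print(result)  # abc-123
```

The copy happens at task creation time, so values bound after create_task is called do not leak into the already-running task.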
For Express + pino (TypeScript):
[Use the base prompt above with] Express.js API using pino and
pino-http for automatic request/response logging. Include child
loggers scoped to individual route handlers.
For adding to an existing codebase:
I have an existing [Python/Node.js] app using [print/console.log].
Migrate it to structured logging using [structlog/pino] without
breaking existing log output. Add a compatibility shim so existing
string-format log calls still work during the migration period.
Provide a migration checklist and a grep pattern to find remaining
unstructured log calls.
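A shim for that migration could be as small as this sketch (every name here is hypothetical; the capture function stands in for your real structured logger):

```python
# Hypothetical compatibility shim: legacy printf-style calls keep working
# while new call sites pass structured key-value fields.
def make_compat_logger(structured_log):
    def info(msg, *args, **fields):
        # Old style: info("User %s logged in", name) -> render, then emit
        # as the event string so nothing breaks during migration.
        if args:
            msg = msg % args
        structured_log("info", msg, **fields)
    return info

# Stand-in backend that records emitted events for demonstration.
captured = []
def fake_structured_log(level, event, **fields):
    captured.append({"level": level, "event": event, **fields})

info = make_compat_logger(fake_structured_log)
info("User %s logged in", "alice")          # legacy call site
info("cache_miss", key="user:42")           # new structured call site
```

New code calls the structured form directly; the shim just buys time to migrate old call sites without a flag day.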
For OpenTelemetry integration:
[Use the base prompt above, then add:] Also integrate with
OpenTelemetry to automatically inject the active span's trace_id
and span_id into every log line using the OTel logging bridge.
Export logs via OTLP to [Grafana Loki / Datadog / Honeycomb].
(OpenTelemetry log adoption reached 48% in production workloads as of the 2026 Grafana Observability Survey — up significantly year over year, and the fastest path to correlating logs with distributed traces.)
Your Action Checklist
- Replace all `console.log`/`print` calls in production code with a structured logger
- Set `LOG_LEVEL=info` in your production environment config and verify DEBUG logs are suppressed
- Add a `trace_id` to every log line via middleware/context propagation — not manually
- Run `grep -r 'password\|token\|secret' logs/` — if you get hits, add redaction now
- Verify your logger writes to stdout, not to a file path inside the container
- Add one test that asserts sensitive keys are redacted before any log is emitted
- Confirm logs land in your centralized aggregation platform (Datadog, Loki, CloudWatch, etc.) and are queryable by `trace_id`
Ask The Guild
What's the worst debugging session you've survived because of missing or unstructured logs? Drop your war story in the comments — the context you wished you'd had, the field that would have saved you three hours, the console.log('here2') trail that led nowhere. Bonus points if you share the before/after of your logging setup.