Observability
Know what's happening in your DynamoDB operations. pydynox gives you metrics on every call - duration, capacity consumed, items returned. No extra code needed.
Why observability matters
DynamoDB bills by capacity consumed. Without metrics, you're flying blind:
- Is that query using 1 RCU or 100?
- Why is this Lambda timing out?
- Which operation is eating all my capacity?
pydynox answers these questions automatically. Every operation returns metrics, and logs are built-in.
Key features
- Metrics on every operation (duration, RCU/WCU, item counts)
- Automatic logging at INFO level
- Custom logger support (Powertools, structlog)
- Correlation ID for request tracing
- AWS SDK debug logs when you need them
Getting started
Metrics on every operation
Every pydynox operation returns metrics. You don't need to enable anything.
For get_item, the returned dict has a .metrics attribute:
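```python
from pydynox import DynamoDBClient

client = DynamoDBClient()

# get_item returns the item as a dict with a .metrics attribute attached
item = client.get_item("users", {"pk": "USER#1"})
print(item["name"])               # John
print(item.metrics.duration_ms)   # 12.1
print(item.metrics.consumed_rcu)  # 0.5
```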
For write operations, metrics are returned directly:
```python
from pydynox import DynamoDBClient

client = DynamoDBClient()

# put_item returns OperationMetrics directly
metrics = client.put_item("users", {"pk": "USER#1", "name": "John"})
print(metrics.duration_ms)   # 8.2
print(metrics.consumed_wcu)  # 1.0

# Same for delete_item and update_item
metrics = client.delete_item("users", {"pk": "USER#1"})
print(metrics.duration_ms)
```
For queries, access metrics after iteration:
```python
from pydynox import DynamoDBClient

client = DynamoDBClient()

# Query returns a result object with .metrics
result = client.query(
    "users",
    key_condition_expression="#pk = :pk",
    expression_attribute_names={"#pk": "pk"},
    expression_attribute_values={":pk": "ORG#123"},
)

# Iterate over results
for item in result:
    print(item["name"])

# Access metrics after iteration
print(result.metrics.duration_ms)   # 45.2
print(result.metrics.consumed_rcu)  # 2.5
print(result.metrics.items_count)   # 10
```
What's in metrics
| Field | Type | Description |
|---|---|---|
| duration_ms | float | How long the operation took |
| consumed_rcu | float or None | Read capacity units used |
| consumed_wcu | float or None | Write capacity units used |
| items_count | int or None | Items returned (query/scan) |
| scanned_count | int or None | Items scanned before filtering |
| request_id | str or None | AWS request ID for support tickets |
Automatic logging
pydynox logs every operation at INFO level. Just configure Python logging:
```python
import logging

from pydynox import DynamoDBClient

# Enable INFO level logs for pydynox
logging.basicConfig(level=logging.INFO)

client = DynamoDBClient()

# All operations are logged automatically
client.put_item("users", {"pk": "USER#1", "name": "John"})
# INFO:pydynox:put_item table=users duration_ms=8.2 wcu=1.0

client.get_item("users", {"pk": "USER#1"})
# INFO:pydynox:get_item table=users duration_ms=12.1 rcu=0.5
```
Output:

```
INFO:pydynox:put_item table=users duration_ms=8.2 wcu=1.0
INFO:pydynox:get_item table=users duration_ms=12.1 rcu=0.5
INFO:pydynox:query table=users duration_ms=45.2 rcu=2.5 items=10
```
Slow operations (>100ms) get a warning:
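For example, a slow query might be logged like this (illustrative; the format mirrors the INFO lines above):

```
WARNING:pydynox:query table=users duration_ms=152.3 rcu=12.5 items=50
```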
Disable logging
If you don't want pydynox logs at all:
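pydynox emits its default logs through the standard `pydynox` logger (the same name used in the SDK debug example below), so the usual stdlib controls apply. A minimal sketch:

```python
import logging

# Drop everything below CRITICAL (pydynox logs at DEBUG through ERROR)
logging.getLogger("pydynox").setLevel(logging.CRITICAL)

# Or stop pydynox records from reaching your root handlers
logging.getLogger("pydynox").propagate = False
```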
Advanced
Custom logger
Send pydynox logs to your own logger with set_logger():
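For example, with AWS Lambda Powertools (a minimal sketch; the service name is just an illustration):

```python
from aws_lambda_powertools import Logger

from pydynox import DynamoDBClient, set_logger

# The Powertools Logger exposes debug/info/warning/error, so it fits the interface
logger = Logger(service="users-api")
set_logger(logger)

client = DynamoDBClient()
client.put_item("users", {"pk": "USER#1", "name": "John"})
# The operation log above is now emitted as Powertools structured JSON
```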
Works with any logger that has debug, info, warning, and error methods. Great for AWS Lambda Powertools or structlog.
Correlation ID
Track requests across your logs with set_correlation_id():
```python
from pydynox import DynamoDBClient, set_correlation_id

client = DynamoDBClient()

def handler(event, context):
    # Set correlation ID from Lambda context
    set_correlation_id(context.aws_request_id)

    # All pydynox logs will include this ID
    client.put_item("users", {"pk": "USER#1", "name": "John"})
    # INFO:pydynox:put_item table=users duration_ms=8.2 wcu=1.0 correlation_id=abc-123

    return {"statusCode": 200}
```
All pydynox logs will include the correlation ID. Useful in Lambda where you want to trace a request through multiple DynamoDB calls.
SDK debug logs
For deep debugging, enable AWS SDK logs:
```python
import logging

from pydynox import DynamoDBClient, set_logger

# Create a logger for pydynox
logger = logging.getLogger("pydynox")
logger.setLevel(logging.DEBUG)

# Enable SDK debug logs
set_logger(logger, sdk_debug=True)

# Now you'll see detailed AWS SDK logs
client = DynamoDBClient()
client.get_item("users", {"pk": "USER#1"})
```
Or via environment variable:
```bash
# Basic SDK logs
RUST_LOG=aws_sdk_dynamodb=debug python app.py

# Full detail (HTTP bodies, retries, credentials)
RUST_LOG=aws_sdk_dynamodb=trace,aws_smithy_runtime=trace python app.py
```
Warning
SDK debug logs are verbose. Only enable when debugging specific issues.
Log levels
| Level | What's logged |
|---|---|
| ERROR | Exceptions, failed operations |
| WARNING | Slow operations (>100ms) |
| INFO | Operation summary (table, duration, rcu/wcu) |
| DEBUG | Detailed request/response info |
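To change the level for pydynox without touching the rest of your app, set it on the pydynox logger with the standard library:

```python
import logging

# See detailed request/response info from pydynox only
logging.getLogger("pydynox").setLevel(logging.DEBUG)
```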
Use cases
Cost monitoring
Track capacity consumption per operation:
```python
result = client.get_item("users", {"pk": "USER#123"})
print(f"This read cost {result.metrics.consumed_rcu} RCU")

# Over time, aggregate these to understand costs
```
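For example, a rough per-request tally (a sketch; consumed_rcu can be None, so default it to 0):

```python
total_rcu = 0.0

for pk in ["USER#1", "USER#2", "USER#3"]:
    item = client.get_item("users", {"pk": pk})
    total_rcu += item.metrics.consumed_rcu or 0.0

print(f"Total for this request: {total_rcu} RCU")
```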
Performance debugging
Find slow operations:
```python
result = client.query(...)

if result.metrics.duration_ms > 100:
    logger.warning(f"Slow query: {result.metrics.duration_ms}ms")
```
Lambda optimization
In Lambda, every millisecond counts.
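One way to use the metrics here is to sum duration_ms across calls, so you can see how much of the handler's billed time is spent in DynamoDB (a sketch; the event fields are hypothetical):

```python
from pydynox import DynamoDBClient

client = DynamoDBClient()

def handler(event, context):
    dynamo_ms = 0.0

    # event["user_id"] is a hypothetical input field
    item = client.get_item("users", {"pk": event["user_id"]})
    dynamo_ms += item.metrics.duration_ms

    metrics = client.put_item("users", {"pk": event["user_id"], "seen": True})
    dynamo_ms += metrics.duration_ms

    # If dynamo_ms dominates your billed duration, tune the queries first
    print(f"DynamoDB time: {dynamo_ms}ms")
    return {"statusCode": 200}
```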