Monitoring: Why Logs Matter More Than You Think 

When it comes to observability in distributed systems, metrics often get all the attention. Dashboards, alerts, percentiles — these are the tools we reach for first. But when something goes wrong, or when you need to understand why a system behaves the way it does, you quickly realize something else:

Logs tell the real story. 

This is especially true in time-series databases, where workloads are bursty, latency-sensitive, and often driven by external pipelines or real-time applications. While metrics provide the big picture, logs give you the crucial details that can’t be aggregated away: individual events, timing, client origin, and, most importantly, context. 

In this post, we’ll explore how Quasar approaches logging as a first-class citizen of observability — and how our new user properties feature allows you to connect your application stack with the database layer to make debugging and root cause analysis not just possible but fast. 

Metrics Give You the “What.” Logs Give You the “Why.” 

A metrics dashboard might show you: 

  • An unexpected spike in active sessions 
  • A rise in query latency 
  • A drop in ingestion throughput

All of these are important. But none of them answer questions like: 

  • Which IP addresses were responsible? 
  • Was it a specific user, application, or workload? 
  • What query was running at that exact moment? 

This is where logs come in — particularly Quasar’s structured logs, which are machine-readable, timestamped to the millisecond, and rich in contextual information. 

With the introduction of user properties in Quasar 3.14.2, your application can now tag every request it makes to the database with arbitrary metadata — and this metadata automatically appears in relevant logs. 

Introducing User Properties: Context Across the Wire 

User properties are arbitrary key/value pairs attached to a client session. They are set once per connection and remain in effect throughout the entire session. When the database writes log entries in the context of that session (e.g., slow operations, anomalous insert patterns, etc.), these properties are embedded directly in the log record. 

This means: 

  • You can trace log entries back to specific workloads 
  • You can filter logs by application, service, or user 
  • You can correlate logs across layers of your infrastructure 

Example: Tagging Sessions in Python 

with Quasar.Cluster("qdb://127.0.0.1:2836") as conn:
    conn.properties().put("application_id", "infra_ingest")

Every log generated by this session (e.g., a slow query or inefficient write) will now include the tag application_id=infra_ingest. 

This small addition can drastically improve traceability and observability in environments with shared database clusters or multiple ingest pipelines. 

Real-World Examples: Using Logs to Pinpoint Problems 

Let’s walk through two real examples where logs provide insight that metrics alone can’t offer. 

  1. Slow Operations

You enable this configuration on your cluster: 

"log_slow_operation_ms": 250

Now, every operation that takes longer than 250 ms is logged. These logs include: 

  • The exact query text (in case of a query) 
  • The user ID (if authentication is enabled) 
  • The user properties attached to the session 

If your ingestion dashboard shows a latency spike, these logs can immediately show you: 

  • Which client caused the delay 
  • What operation was being executed 
  • Whether it’s systemic or isolated 
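Because these logs are structured and machine-readable, filtering them by a user property takes only a few lines. The sketch below assumes a JSON log layout with hypothetical field names (`event`, `duration_ms`, `properties`) purely for illustration — check your cluster's actual log schema before relying on them.

```python
import json

# Hypothetical structured slow-operation log lines; the field names here
# are an assumption for illustration, not Quasar's documented schema.
raw_logs = [
    '{"ts": "2024-05-01T11:00:02.417", "level": "warn", "event": "slow_operation", '
    '"duration_ms": 812, "query": "SELECT * FROM sensors", '
    '"properties": {"application_id": "infra_ingest"}}',
    '{"ts": "2024-05-01T11:00:03.006", "level": "warn", "event": "slow_operation", '
    '"duration_ms": 311, "query": "INSERT ...", '
    '"properties": {"application_id": "billing_api"}}',
]

def slow_ops_by_app(lines, app_id):
    """Return slow-operation entries tagged with the given application_id."""
    entries = (json.loads(line) for line in lines)
    return [e for e in entries
            if e.get("event") == "slow_operation"
            and e.get("properties", {}).get("application_id") == app_id]

matches = slow_ops_by_app(raw_logs, "infra_ingest")
print(len(matches), matches[0]["duration_ms"])  # 1 812
```

With the session tagged as shown earlier, a latency spike on the dashboard turns into a one-line query against your log store.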
  2. Small Append Patterns

Quasar’s storage engine uses a transactional, copy-on-write model. This is optimized for large, batched writes — but penalizes small, repeated inserts. To detect these, you can enable: 

"log_small_append_percentage": 5

When a client inserts data in batches that are too small relative to the table’s shard size (here, 5% of the shard or less), Quasar emits a warning log. If the application is misusing the synchronous insertion API, or if the table’s shard size is misconfigured, these logs will tell you exactly: 

  • Which table is affected 
  • Which client is responsible 
  • What user metadata is associated 

With user properties enabled, this becomes incredibly actionable. You no longer have to guess which ingest service or which deployment sends inefficient writes — the logs will tell you. 
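Once the logs identify the offending client, the usual client-side fix is to buffer rows and flush them in larger batches. Here is a minimal, library-agnostic sketch of that pattern — the `BatchingWriter` class and `flush_fn` callback are our own names for illustration, not part of any Quasar API:

```python
class BatchingWriter:
    """Accumulate rows client-side and flush them in large batches,
    avoiding the small-append pattern the warning log flags.
    flush_fn stands in for whatever bulk-insert call your client uses."""

    def __init__(self, flush_fn, batch_size=10_000):
        self.flush_fn = flush_fn
        self.batch_size = batch_size
        self.buffer = []

    def append(self, row):
        self.buffer.append(row)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(self.buffer)
            self.buffer = []

# Usage: record flushed batch sizes instead of writing to a real cluster.
flushed = []
writer = BatchingWriter(lambda rows: flushed.append(len(rows)), batch_size=3)
for i in range(7):
    writer.append(i)
writer.flush()  # flush the remainder
print(flushed)  # [3, 3, 1]
```

Tune the batch size against your table’s shard size so each write stays comfortably above the warning threshold.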

Correlating Metrics and Logs: The Missing Link 

Logs and metrics are not mutually exclusive — they complement each other: 

  • Metrics help you detect that something is off 
  • Logs help you explain why it happened 

For example, suppose your dashboard shows session usage spiking every day at 11:00. The metrics can confirm the what, but the logs — enriched with IP addresses, queries, and user properties — reveal the who and how. 

With millisecond-level timestamps, you can correlate these with your stack’s external application logs, ingestion events, or other systems. It’s real end-to-end observability. 
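As a concrete illustration, two already-sorted log streams can be interleaved into a single timeline keyed on their millisecond timestamps. The entries below are made up; only the ISO-8601 millisecond format reflects what the post describes:

```python
import heapq
from datetime import datetime

# Hypothetical entries from two layers of the stack: (timestamp, source, event).
db_logs = [("2024-05-01T11:00:00.120", "db", "slow_operation"),
           ("2024-05-01T11:00:00.480", "db", "session_open")]
app_logs = [("2024-05-01T11:00:00.095", "app", "request_start"),
            ("2024-05-01T11:00:00.510", "app", "request_end")]

def merge_by_time(*streams):
    """Interleave already-sorted log streams into one chronological timeline."""
    key = lambda entry: datetime.fromisoformat(entry[0])
    return list(heapq.merge(*streams, key=key))

timeline = merge_by_time(db_logs, app_logs)
print([e[1] for e in timeline])  # ['app', 'db', 'db', 'app']
```

Reading database and application events side by side like this is often all it takes to see which request triggered which slow operation.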

A Feature That’s Simple Yet Transformative 

The power of user properties is in their simplicity. No special configuration is needed — the feature is enabled by default in all supported APIs. You attach the metadata that matters to you: workload name, request source, tenant ID, service version — whatever helps you make sense of your system. 

This bridges the gap between client-side instrumentation and server-side insight — something that’s historically difficult in distributed systems. 

Conclusion 

If you’re running Quasar in production, don’t overlook logs. They’re not just for debugging — they’re a critical part of understanding performance, catching inefficiencies, and making your systems observable end-to-end. 

By combining metrics for high-level visibility and logs for ground-level detail — and by enriching those logs with user properties — you gain the tools to go from “something’s wrong” to “here’s exactly what happened” in seconds, not hours. 

Observability isn’t just about watching your systems; it’s about understanding them. And with Quasar 3.14.2, you’ve got one more powerful tool to make that possible. 

 
