Tutorial · Observability

    Monitoring strategies with accurate timestamps

    Quick answer: Separate event time (when something happened) from ingest time (when your collector saw it). Prometheus stores sample timestamps as milliseconds since the epoch; alert rules evaluate over range vectors with explicit windows such as [5m]. For traces, propagate trace/span ids with nanosecond-resolution start times, but never index raw span timestamps as metric labels: the cardinality explodes.
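
    A minimal sketch of that split, with illustrative field names: the producer stamps event_time_ms, the collector fills ingest_time_ms on receipt, and the trace id rides along without ever becoming a metric label.

    // Hedged sketch: one event record carrying both clocks and a propagated trace id.
    const event = {
      trace_id: "4bf92f3577b34da6a3ce929d0e0e4736", // propagate it; do not use it as a metric label
      event_time_ms: Date.now(),                    // producer clock: when it happened
      ingest_time_ms: null,                         // collector stamps this when it receives the event
      ok: true,
    };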

    Prometheus conventions

    Scrapers attach a scrape timestamp to each target pull. Counter resets show up as sudden drops in the raw series and missed scrapes as gaps; rate() and increase() handle both, provided the range window spans at least four times the scrape interval, which also absorbs jitter. Recording rules should align to minute boundaries only if product managers truly need calendar buckets; otherwise let evaluation times float, which reduces flapping when scrapes slip.
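
    When templating queries, a small helper can enforce that four-interval floor. A minimal sketch; the function name and minute rounding are illustrative, not part of any Prometheus API.

    // Hedged sketch: derive a PromQL range window of at least four scrape intervals.
    function rateWindow(scrapeIntervalSeconds, minIntervals = 4) {
      const seconds = scrapeIntervalSeconds * minIntervals;
      // Round up to whole minutes so the selector stays readable, e.g. "[1m]", "[4m]".
      const minutes = Math.max(1, Math.ceil(seconds / 60));
      return `[${minutes}m]`;
    }
    rateWindow(15); // => "[1m]" for a 15s scrape interval
    rateWindow(60); // => "[4m]" for a 60s scrape interval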

    - alert: HighErrorBudgetBurn
      expr: |
        sum(rate(http_requests_total{status=~"5.."}[5m]))
        /
        sum(rate(http_requests_total[5m]))
        > 0.05
      for: 10m

    Alerting windows

    Duration math in alert DSLs mirrors Unix subtraction: both ends are instants, and the range vector slides with evaluation time. Document whether alerts evaluate data that lags real time because of remote-write delays. For SLO burn alerts, translate error budgets into error rates over explicit windows (hourly, daily) tied to business calendars, but still store raw events as epoch milliseconds for replay.
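
    For example, a 99.9% SLO leaves a 0.1% error budget, and a burn-rate alert fires when the observed error ratio over a window exceeds a multiple of that budget. A minimal sketch of the arithmetic; the window and burn-factor pairs are illustrative, not a recommendation.

    // Hedged sketch: turn an SLO target into burn-rate alert thresholds.
    const sloTarget = 0.999;             // 99.9% over the SLO period
    const errorBudget = 1 - sloTarget;   // 0.001
    const windows = [
      { range: "1h", burnFactor: 14.4 }, // fast burn
      { range: "6h", burnFactor: 6 },    // slow burn
    ];
    const thresholds = windows.map((w) => ({
      range: w.range,
      maxErrorRatio: w.burnFactor * errorBudget, // alert when the error ratio exceeds this
    }));
    // => roughly 0.0144 for the 1h window and 0.006 for the 6h window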

    SLA calculation sketch

    // Good: events carry event_time_ms stamped by producers, plus an ok flag.
    // Assumed shape: events is an Array<{ event_time_ms: number, ok: boolean }> from the event store.
    const windowStart = Date.UTC(2026, 3, 22, 0, 0, 0); // 2026-04-22T00:00:00Z
    const windowEnd = windowStart + 86400000;           // one day in milliseconds
    const inWindow = events.filter(
      (e) => e.event_time_ms >= windowStart && e.event_time_ms < windowEnd
    );
    const bad = inWindow.filter((e) => !e.ok);
    const attained = 1 - bad.length / inWindow.length;  // compare against the SLO target

    Grafana time ranges

    Dashboard variables like now-6h translate to absolute bounds server-side. When comparing deploy markers, pass epoch milliseconds in annotations so teams across timezones agree on vertical slices.
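
    A minimal sketch of posting a deploy marker, assuming a Grafana instance reachable at GRAFANA_URL with a service-account token in GRAFANA_TOKEN; the annotations HTTP API takes its time field as epoch milliseconds.

    // Hedged sketch: post a deploy marker annotation with an epoch-millisecond time.
    // GRAFANA_URL and GRAFANA_TOKEN are placeholders for your instance and token.
    async function postDeployMarker(text, tags) {
      const res = await fetch(`${GRAFANA_URL}/api/annotations`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${GRAFANA_TOKEN}`,
        },
        body: JSON.stringify({
          time: Date.now(), // epoch milliseconds: unambiguous across timezones
          tags,             // e.g. ["deploy", "checkout-service"]
          text,             // e.g. "deploy abc123"
        }),
      });
      if (!res.ok) throw new Error(`annotation failed: ${res.status}`);
    }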

    Distributed tracing correlation

    OpenTelemetry spans include start/end times with nanosecond fields, but backend exporters quantize for storage. When joining spans to logs, inject trace id into structured log fields and compare using ingest-time skew budgets — do not expect microsecond equality across hosts.
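
    A minimal sketch of that join, assuming spans expose BigInt nanosecond start/end fields and logs carry trace_id plus event_time_ms; the field names and the skew budget are illustrative.

    // Hedged sketch: attach structured logs to a span by trace id, tolerating clock skew.
    const SKEW_BUDGET_MS = 250; // illustrative bound; tune to your fleet's clock discipline
    function logsForSpan(span, logs) {
      const startMs = Number(span.start_time_unix_nano / 1_000_000n); // ns -> ms
      const endMs = Number(span.end_time_unix_nano / 1_000_000n);
      return logs.filter(
        (l) =>
          l.trace_id === span.trace_id &&
          l.event_time_ms >= startMs - SKEW_BUDGET_MS &&
          l.event_time_ms <= endMs + SKEW_BUDGET_MS
      );
    }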

    Signal    Timestamp field              Pitfall
    Metrics   scrape + sample ts           Clock skew across HA pairs
    Logs      @timestamp vs ingested_at    Parser timezone missing
    Traces    span start ns                Labeling spans as metrics

    Key takeaways

    • Define event vs ingest columns before schema locking.
    • Use histograms with sane buckets — literal timestamps make bad buckets.
    • Use for: durations on alert rules to absorb transient spikes.
    • Trace timestamps complement logs; they do not replace canonical event clocks.
    • Test the daylight saving transition week in staging with synthetic data; see the sketch below.
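
    A minimal sketch for that last point, assuming a staging loader that accepts synthetic events with epoch-millisecond timestamps: generate hourly events across a week that spans the US spring-forward change and label each with its America/New_York wall-clock time so dashboards and alerts can be checked around the jump.

    // Hedged sketch: hourly synthetic events across a week containing the US spring-forward change.
    const start = Date.UTC(2026, 2, 6, 0, 0, 0); // 2026-03-06T00:00:00Z
    const fmt = new Intl.DateTimeFormat("en-US", {
      timeZone: "America/New_York",
      dateStyle: "short",
      timeStyle: "long", // includes EST/EDT, so the transition is visible
    });
    const synthetic = Array.from({ length: 7 * 24 }, (_, i) => {
      const event_time_ms = start + i * 3600000; // one event per hour, in epoch ms
      return { event_time_ms, local: fmt.format(event_time_ms), ok: Math.random() > 0.01 };
    });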

    Written by Unix Calculator Editorial Team — Last verified May 2026.
