Unix Calculator Case Studies
Detailed, anonymized engineering narratives from teams that fixed calendar chaos. Each story includes concrete SQL or infrastructure patterns — not marketing fluff — so your architects can cite them in RFCs.
Case study 1
Eliminating Timestamp Bugs in a Payment Processing System
A Series B fintech startup · Cards & ACH · Multi-region
Challenge
The ledger stored payment timestamps as VARCHAR in MySQL using the literal pattern MM/DD/YYYY HH:MM:SS in America/New_York local time. Reconciliation jobs compared those strings to European acquiring bank files that were normalized to CET without documenting the assumption. Overnight batch diffing flagged transactions as occurring "in the future" relative to statement windows, forcing 847 manual reconciliation reversals in a single quarter. Query plans degraded because range filters could not use indexes on textual dates, and engineers burned weeks arguing whether a payment "happened" on the credit date or posting date — both represented as ambiguous strings.
Mobile clients occasionally sent ISO strings while gateways still emitted legacy varchar columns, so API middleware double-parsed and produced silent duplicates. Fraud alerts keyed on string equality missed velocity patterns that numeric epoch windows would have caught. Leadership demanded a migration before opening a Paris entity; the risk assessment explicitly listed regulatory mis-reporting if timestamps drifted across jurisdictions.
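The ambiguity above can be made concrete with a short sketch: one instant renders as two different offset-free local strings, so any string-level comparison between the New York ledger and the CET bank files misfires. The epoch value and helper name below are illustrative, not taken from the production system.

```javascript
// One instant, two offset-free local renderings. A string comparison sees
// "different" times for the same payment. The epoch value is arbitrary.
const epochMs = 1700000000 * 1000; // 2023-11-14T22:13:20Z

function localString(ms, timeZone) {
  return new Intl.DateTimeFormat('en-US', {
    timeZone,
    dateStyle: 'short',
    timeStyle: 'medium',
  }).format(new Date(ms));
}

const newYork = localString(epochMs, 'America/New_York');
const paris = localString(epochMs, 'Europe/Paris');
// newYork and paris are different strings for the same instant.
```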
Solution
The platform adopted BIGINT Unix epoch seconds in UTC for all immutable payment events. Application servers wrote through a single conversion module tested against NIST reference vectors; UI continued rendering localized strings, but persistence never saw raw offset-free local literals again. Dual-write validation compared old varchar-derived epochs to new BIGINT columns before cutover.
-- Migration sketch: VARCHAR local → Unix BIGINT (run per shard)
-- UNIX_TIMESTAMP() interprets its argument in the session time zone,
-- so pin the session to UTC before converting.
SET time_zone = '+00:00';

ALTER TABLE payments ADD COLUMN created_unix BIGINT NULL;

UPDATE payments
SET created_unix = UNIX_TIMESTAMP(
    CONVERT_TZ(
        STR_TO_DATE(created_at_str, '%m/%d/%Y %H:%i:%s'),
        'America/New_York',
        'UTC'
    )
);
SELECT COUNT(*) AS bad FROM payments WHERE created_unix IS NULL;
-- Verify histograms match before dropping the legacy column

Outcome
- Zero reconciliation failures attributed to timestamp skew in the six months post-migration.
- Hot path lookups improved 3.2× at p95 latency (indexed integer range filters instead of per-row varchar casts).
- Timezone expansion to twelve countries required no schema change — presentation layer only.
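The pre-cutover histogram check mentioned in the migration sketch can be reduced to a small routine: bucket epoch seconds by UTC day and diff the legacy-derived and new-column distributions. The function names are illustrative, not from the production codebase.

```javascript
// Bucket epoch seconds into counts per UTC day.
function dayHistogram(epochs) {
  const buckets = new Map();
  for (const sec of epochs) {
    const day = Math.floor(sec / 86400); // days since 1970-01-01 UTC
    buckets.set(day, (buckets.get(day) ?? 0) + 1);
  }
  return buckets;
}

// Compare the varchar-derived epochs against the new BIGINT column.
function histogramsMatch(legacyEpochs, newEpochs) {
  const ha = dayHistogram(legacyEpochs);
  const hb = dayHistogram(newEpochs);
  if (ha.size !== hb.size) return false;
  for (const [day, count] of ha) {
    if (hb.get(day) !== count) return false;
  }
  return true;
}
```

A daily-count match will not catch a constant offset smaller than a day, so the dual-write comparison still needs row-level spot checks before the legacy column is dropped.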
Case study 2
Migrating 50M Rows from INTEGER Epoch to TIMESTAMPTZ in PostgreSQL
Enterprise billing SaaS · PostgreSQL 15 · Multi-tenant
Challenge
Invoice events used 32-bit integer seconds stored in application-defined "UTC" without database enforcement. Analysts joining to vendor datasets needed microsecond fidelity for interest accrual, but ORM models cast everything through JavaScript numbers, occasionally losing precision on bulk imports. As contract templates introduced forward-dated adjustments, risk flagged proximity to Y2038 overflow for signed 32-bit fields used in derivative schedules. Read replicas saw hot bloat from partial indexes that could not prune on true temporal types.
Compliance required audit trails listing civil timestamps with provenance; storing opaque integers forced every report to re-implement leap-second policy. The DBA team resisted another integer width bump without native TIMESTAMPTZ semantics tying IANA zones to session settings.
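The signed 32-bit boundary the risk team flagged is easy to demonstrate:

```javascript
// Largest value a signed 32-bit epoch-seconds field can hold.
const INT32_MAX = 2 ** 31 - 1; // 2147483647

const overflowInstant = new Date(INT32_MAX * 1000);
console.log(overflowInstant.toISOString()); // 2038-01-19T03:14:07.000Z
// One second later, a signed 32-bit counter wraps negative.
```

Forward-dated derivative schedules reach past that instant long before 2038 arrives, which is why the overflow was a present-day concern.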
Solution
They added generated columns and backfilled during rolling maintenance windows using to_timestamp(epoch). Verification batches hashed ordered (id, epoch) pairs against golden extracts. A read-only cutover flag routed ORMs to the new column family while keeping the legacy BIGINT online for rollback.
-- Note: the generation expression must be immutable, so it uses the bare
-- to_timestamp() call (already UTC-anchored); AT TIME ZONE is only stable.
BEGIN;
ALTER TABLE invoice_events
    ADD COLUMN occurred_at timestamptz GENERATED ALWAYS AS
        (to_timestamp(occurred_epoch)) STORED;
COMMIT;

-- CREATE INDEX CONCURRENTLY cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_invoice_events_occurred_at
    ON invoice_events (occurred_at DESC, tenant_id);

-- Backfill-only phase if not using a generated column:
-- UPDATE invoice_events SET occurred_at = to_timestamp(occurred_epoch)
-- WHERE occurred_at IS NULL;

Outcome
- 50.4M rows migrated with zero checksum mismatches across dual-write validation.
- Production cutover duration: 38 minutes under controlled failover (announced window).
- Accrual jobs dropped the out-of-order guard rails that had previously compensated for weak typing.
Case study 3
Building a Distributed Rate Limiter for a Public API
Edge-heavy API platform · Redis Cluster · Twelve PoPs
Challenge
The company needed sub-millisecond rate limit decisions synchronized across twelve geographic regions. Local LRU caches caused fairness violations — power users discovered which PoP allowed bursts — while centralizing entirely on one Redis region added 40ms RTT for APAC clients. Abuse desks demanded proof of consistent epoch windows for regulatory evidence packs. Prior token-bucket prototypes drifted because each edge subtracted refill amounts using unsynchronized wall clocks.
Solution
They deployed a hybrid: regional Redis primaries with last-write-wins epoch metadata and vector clocks for debugging, while limiter math used a sliding window implemented as a sorted set of integer-second marks. To respect trust boundaries, every clock comparison used the TIME reply from the coordinator shard rather than edge host wall clocks.
const { randomUUID } = require('node:crypto');

const windowSec = 60;
const LIMIT = 100; // per-tenant budget per window

// nowSec must come from the coordinator's TIME reply, not the edge clock.
async function allow(redis, tenantId, nowSec, cost = 1) {
  const key = `rl:${tenantId}`;
  const oldest = nowSec - windowSec;
  await redis.zRemRangeByScore(key, 0, oldest); // evict marks outside the window
  const recent = await redis.zCard(key);
  if (recent + cost > LIMIT) return { allow: false, reset: nowSec + windowSec };
  // zCard/zAdd are not atomic together; use MULTI or a Lua script when
  // concurrent checks on the same key matter.
  await redis.zAdd(key, { score: nowSec, value: `${nowSec}-${randomUUID()}` });
  await redis.expire(key, windowSec * 2); // let idle keys expire
  return { allow: true, remaining: LIMIT - recent - cost };
}

Outcome
- 99.97% SLO on limiter availability measured per month (weighted by request volume).
- p99 check latency under 2ms within region; cross-region fallbacks documented.
- Audit exports included Redis shard clock proofs for SOC2 reviewers.
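The coordinator-clock rule from the solution reduces to a small helper. Redis's TIME command replies with a pair of strings, [unixSeconds, microseconds]; the helper name is illustrative, and some client libraries decode TIME into a richer type, so this assumes the raw string pair.

```javascript
// Derive the limiter's integer-second mark from the coordinator shard's
// TIME reply rather than the edge host wall clock.
function coordinatorNowSec(timeReply) {
  const [seconds] = timeReply; // e.g. ['1700000000', '123456']
  return Number(seconds);      // integer-second mark for the sorted set
}
```

Using one shard's clock for every comparison is what makes the epoch windows in the audit exports provable: all twelve PoPs agree on which second a request landed in, regardless of local drift.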