Quick Answer: EXPIRE sets a relative TTL from now (seconds), while EXPIREAT sets an absolute Unix timestamp for expiration. EXPIREAT survives clock changes and restarts but requires careful timezone handling; EXPIRE is simpler but vulnerable to clock skew in distributed systems. Use EXPIREAT for fixed deadlines (promotions ending 2026-07-01), EXPIRE for relative timeouts (session valid 30 minutes).
Your production cache just took a 40% hit because Redis keys expired 5 minutes early across 3 nodes: not because of a code change, but because one server's NTP daemon jumped the clock forward. This is the hidden cost of ignoring clock synchronization when choosing between EXPIRE and EXPIREAT in distributed systems. While relative TTLs feel intuitive, absolute timestamps expose deeper questions about clock reliability that most engineers discover only after the pager fires.
Understanding EXPIRE vs EXPIREAT: The Core Trade-Off
Redis offers two fundamentally different approaches to key expiration. EXPIRE and PEXPIRE set relative timeouts from the command execution moment. EXPIREAT and PEXPIREAT set absolute Unix timestamps. The choice determines how your cache handles clock drift, system restarts, and distributed node disagreements.
# EXPIRE: Relative TTL from NOW (seconds)
redis> SET session:user123 "auth-token" EX 3600
# → Key expires in 3600 seconds from NOW
# Vulnerable to: clock jumping forward (key expires early), clock drift
# EXPIREAT: Absolute timestamp (Unix seconds)
redis> SET promo:summer2026 "50%-off"
redis> EXPIREAT promo:summer2026 1782864000
# → Key expires exactly when Unix time reaches 1782864000
# → 1782864000 = July 1, 2026 00:00:00 UTC ✓
# Deadline is fixed in absolute time: survives restarts (see clock-skew caveats below)
# PEXPIRE: Relative TTL (milliseconds, high precision)
redis> PEXPIRE ratelimit:api 60000
# → Expires in 60000ms from NOW
# PEXPIREAT: Absolute timestamp (milliseconds)
redis> PEXPIREAT billing:cycle 1782864000000
# → Expires at exactly 1782864000000ms (same moment as above, higher precision)
The semantic difference is profound: EXPIRE measures duration; EXPIREAT measures absolute time. In a single-node development cache, this doesn't matter. Across 5 Redis nodes with clock skew of ±2 seconds, it determines whether your promotion ends simultaneously or creates a 4-second window where different nodes have different answers about key validity.
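To make that window concrete, here is a minimal Python sketch (the per-node clock offsets are hypothetical, chosen to match the ±2-second example) that computes how long nodes disagree about a key set with EXPIREAT:
# Sketch: disagreement window under a fixed EXPIREAT deadline
deadline = 1782864000  # EXPIREAT deadline (Unix seconds)
node_offsets = {'node1': -2, 'node2': 0, 'node3': +2}  # hypothetical skew, seconds
# A node whose clock runs `off` seconds fast reaches the deadline `off`
# seconds early in true time, so it drops the key at (deadline - off)
expiry_moments = {n: deadline - off for n, off in node_offsets.items()}
window = max(expiry_moments.values()) - min(expiry_moments.values())
print(f"Nodes disagree about key validity for {window}s")
# → Nodes disagree about key validity for 4s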
Implementing EXPIREAT with Unix Timestamps in Production
Converting calendar dates to Unix timestamps is straightforward, but timezone errors are the most common production failure. Always work in UTC; never rely on the server's local timezone.
import redis
from datetime import datetime, timezone, timedelta
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
# Correct: Always use UTC timezone
expiry_utc = datetime(2026, 7, 1, 0, 0, 0, tzinfo=timezone.utc)
timestamp_seconds = int(expiry_utc.timestamp())
# → 1782864000 ✓ (verified: date -d @1782864000 → Wed Jul 1 00:00:00 UTC 2026)
# Set key to expire at exact timestamp
r.set('promo:summer2026', 'discount-code-ABC123')
r.expireat('promo:summer2026', timestamp_seconds)
# → True
# Verify expiration time
ttl_remaining = r.ttl('promo:summer2026')
expiration_time = r.expiretime('promo:summer2026') # Redis 7.0+
print(f"Key expires at Unix timestamp: {expiration_time}")
# → Key expires at Unix timestamp: 1782864000
# ✗ WRONG: Using local timezone without conversion
# ✗ expiry_local = datetime(2026, 7, 1, 0, 0, 0) # No tzinfo!
# ✗ timestamp = int(expiry_local.timestamp())
# → If server is in IST (UTC+5:30), timestamp would be 1782844200: wrong by 5.5 hours!
// Node.js: Setting EXPIREAT (seconds) and PEXPIREAT (milliseconds)
const Redis = require('ioredis');
const client = new Redis({ host: 'localhost', port: 6379 });
// Correct: UTC timestamp in seconds
const expiryDate = new Date('2026-07-01T00:00:00Z');
const timestampSeconds = Math.floor(expiryDate.getTime() / 1000);
// → 1782864000 ✓
// Method 1: EXPIREAT (seconds)
await client.set('promo:summer2026', 'discount-code-XYZ789');
await client.expireat('promo:summer2026', timestampSeconds);
// → 1 (success)
// Method 2: PEXPIREAT (milliseconds, for subsecond precision)
const timestampMs = expiryDate.getTime();
// → 1782864000000
await client.pexpireat('event:deadline', timestampMs);
// Verify using EXPIRETIME (Redis 7.0+)
const expiryTimestamp = await client.expiretime('promo:summer2026');
console.log(`Expires at: ${expiryTimestamp}`);
// → Expires at: 1782864000
// ✗ WRONG: Passing current time + offset as an absolute timestamp
// const wrongTimestamp = Math.floor(Date.now() / 1000) + 86400;
// → This recreates EXPIRE semantics (relative to now) and defeats the point of a fixed deadline
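A cheap guard against conversion mistakes in either language is to round-trip the computed timestamp back into a UTC datetime before using it; a minimal Python sketch (the helper name is illustrative):
from datetime import datetime, timezone
def verified_utc_timestamp(year, month, day):
    # Convert back and assert we land on the intended UTC midnight;
    # this catches naive-datetime timezone mistakes before deployment
    ts = int(datetime(year, month, day, tzinfo=timezone.utc).timestamp())
    roundtrip = datetime.fromtimestamp(ts, tz=timezone.utc)
    assert (roundtrip.year, roundtrip.month, roundtrip.day) == (year, month, day)
    assert (roundtrip.hour, roundtrip.minute) == (0, 0)
    return ts
print(verified_utc_timestamp(2026, 7, 1))
# → 1782864000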
Clock Skew Risks in Distributed Systems
EXPIREAT's absolute timestamps create a false sense of safety. When nodes disagree about current time, expiration semantics collapse.
# Scenario: Distributed lock with EXPIREAT during NTP correction
# Setup: 5 Redis nodes, TTL = 30 seconds, all keys expire at Unix 1778248800
from datetime import datetime, timezone, timedelta
# Client acquires lock at T=0
lock_deadline = int((datetime.now(timezone.utc) + timedelta(seconds=30)).timestamp())
# → Suppose this is 1778248800 (May 8, 2026, 14:00:00 UTC)
# Scenario: NTP correction on Node-3
# Node-1, 2, 4, 5 believe time is 14:00:00 UTC (1778248800)
# Node-3's clock jumps backward 5 seconds (NTP correction)
# Node-3 now believes time is 13:59:55 UTC (1778248795)
# From Node-3's perspective:
# "Key expires at 1778248800, current time is 1778248795"
# "5 seconds remain, key is VALID"
# From other nodes' perspective:
# "Deadline reached, key expired" (current time >= 1778248800)
# "Key is INVALID"
# RESULT: Dual acquisition on same resource ✗ (safety violation)
# Mitigation: subtract a safety margin from the deadline
def safe_expireat_with_margin(client, key, deadline_seconds, safety_margin_seconds=5):
    """
    Reduce the deadline by safety_margin so a node whose clock runs
    behind by up to that margin still drops the key by the true deadline.
    deadline_seconds: target Unix timestamp
    safety_margin_seconds: buffer to absorb clock drift (typically 5-10s for NTP)
    """
    adjusted_deadline = deadline_seconds - safety_margin_seconds
    client.expireat(key, adjusted_deadline)
    return adjusted_deadline
# Instead of: EXPIREAT lock:resource 1778248800 (30s lock)
# Use: EXPIREAT lock:resource 1778248795 (25s lock, 5s margin for clock skew)
adjusted = safe_expireat_with_margin(r, 'lock:resource', 1778248800, safety_margin_seconds=5)
# → 1778248795 (5 seconds earlier, absorbing a backward clock jump of up to 5s)
Notice how EXPIREAT's "absolute" safety dissolves when distributed clocks diverge. A node's clock jumping forward 3 seconds makes all its locks expire 3 seconds early. Martin Kleppmann's 2016 analysis "How to do distributed locking" exposed this exact failure mode: Redlock (Redis-based distributed locking) depends on wall-clock timing assumptions that NTP corrections violate.
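One client-side mitigation is to measure how long you have held a lock with a monotonic clock, which NTP corrections cannot step; a minimal sketch, assuming the 25-second validity window from the margin example above:
import time
LOCK_VALIDITY_SECONDS = 25  # deadline minus safety margin, as above
start = time.monotonic()  # monotonic clock is immune to NTP steps, unlike time.time()
# ... perform work while holding the lock ...
elapsed = time.monotonic() - start
if elapsed >= LOCK_VALIDITY_SECONDS:
    # Some node may already consider the lock expired; abandon the result
    raise RuntimeError('lock validity window exceeded')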
EXPIRE with Jitter: The Practical Alternative for Cache Stampedes
For cache expiration (not safety-critical locks), EXPIRE with jitter provides better real-world behavior. It spreads expiration across time, preventing thundering herd when thousands of keys expire simultaneously.
import redis
import random
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
def set_with_jitter(key, value, base_ttl_seconds=3600, jitter_percentage=0.15):
    """
    Set key with EXPIRE + random jitter to prevent cache stampede
    base_ttl_seconds: base TTL (e.g., 3600 for 1 hour)
    jitter_percentage: randomness as % of base TTL (0.15 = ±15%)
    Example: base=3600, jitter=0.15
    → Actual TTL ranges from 3060 to 4140 seconds (3600 ± 540 seconds)
    """
    # Jitter: uniform random value between -jitter% and +jitter% of base_ttl
    jitter_seconds = int(base_ttl_seconds * jitter_percentage * random.uniform(-1, 1))
    actual_ttl = base_ttl_seconds + jitter_seconds
    # Set with EXPIRE (relative TTL)
    r.set(key, value)
    r.expire(key, actual_ttl)
    # Log for observability
    print(f"Key '{key}' set to expire in {actual_ttl}s (base {base_ttl_seconds}s + jitter {jitter_seconds}s)")
    return actual_ttl
# Example: Cache product inventory (changes frequently)
# Without jitter: 10,000 products all expire at 14:05:00, hammering DB
# With jitter (±20% of 10 min): expiration spreads from ~14:03:00 to ~14:07:00 (smooth load)
base_ttl = 600 # 10 minutes
for product_id in range(1, 100):
set_with_jitter(
key=f'product:inventory:{product_id}',
value=f'{random.randint(50, 1000)} units',
base_ttl_seconds=base_ttl,
jitter_percentage=0.20 # ±20% jitter = 480-720 second range
)
# Verify: Check TTLs to confirm spread
print("\nVerifying jitter spread:")
for i in range(1, 6):
ttl = r.ttl(f'product:inventory:{i}')
print(f"product:inventory:{i} → TTL: {ttl}s")
# → product:inventory:1 → TTL: 589s
# → product:inventory:2 → TTL: 647s
# → product:inventory:3 → TTL: 512s
# → product:inventory:4 → TTL: 701s
# → product:inventory:5 → TTL: 634s
# (All different — cache misses spread over a ~4-minute window)
Sliding Expiration: Keeping Sessions Alive on Activity
Use EXPIRE on every access to implement session sliding windows. This pattern resets the TTL when a user is active, keeping the session alive as long as activity continues.
import redis
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
SESSION_TTL_SECONDS = 1800 # 30-minute session timeout
def get_session_with_sliding_expiration(session_id):
    """
    Retrieve session data and reset TTL on each access.
    This keeps the session alive as long as the user is active.
    """
    # GET then EXPIRE is two round trips, not atomic; on Redis 6.2+
    # GETEX does both in a single command (see the sketch below)
    session_data = r.get(f'session:{session_id}')
    if session_data is None:
        print(f"Session {session_id} not found or expired")
        return None
    # Reset TTL: user is active, extend session by another 30 min
    r.expire(f'session:{session_id}', SESSION_TTL_SECONDS)
    print(f"Session {session_id} refreshed, TTL reset to {SESSION_TTL_SECONDS}s")
    return session_data
def create_session(session_id, user_data):
    """Create new session with initial TTL"""
    r.setex(f'session:{session_id}', SESSION_TTL_SECONDS, user_data)
    print(f"Session {session_id} created with {SESSION_TTL_SECONDS}s TTL")
# Usage in web request handler
user_session_id = 'sess_user123_abc'
# Request 1: Create session
create_session(user_session_id, 'user_id=123,role=admin')
print(f"After creation, TTL: {r.ttl(f'session:{user_session_id}')}s")
# → After creation, TTL: 1800s
# Simulate a brief idle period (5 seconds here; minutes in production)
import time
time.sleep(5)
# Request 2: user makes another request after the idle period
session = get_session_with_sliding_expiration(user_session_id)
print(f"After first access, TTL: {r.ttl(f'session:{user_session_id}')}s")
# → Session sess_user123_abc refreshed, TTL reset to 1800s
# → After first access, TTL: 1800s (reset to full 30 min again)
# Request 3: another request after a further idle period
time.sleep(5)
session = get_session_with_sliding_expiration(user_session_id)
print(f"After second access, TTL: {r.ttl(f'session:{user_session_id}')}s")
# → Session sess_user123_abc refreshed, TTL reset to 1800s
# → After second access, TTL: 1800s
# If user goes idle 30+ minutes, next access finds expired session
print(f"\nVerify: session will expire after {r.ttl(f'session:{user_session_id}')}s of inactivity")
# → Verify: session will expire after 1800s of inactivity
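On Redis 6.2+, the GET-then-EXPIRE pair above can be collapsed into a single atomic command with GETEX; a minimal sketch, assuming a redis-py version that exposes getex:
import redis
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
SESSION_TTL_SECONDS = 1800
def get_session_atomic(session_id):
    # GETEX fetches the value and resets the TTL in one command, closing
    # the gap where the key could expire between the GET and the EXPIRE
    return r.getex(f'session:{session_id}', ex=SESSION_TTL_SECONDS)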
Comparing EXPIRE, EXPIREAT, and Tag-Based Invalidation
Different expiration strategies serve different purposes. Use this comparison table to choose the right pattern for your use case.
| Pattern | Command(s) | Use Case | Clock Skew Risk | Operational Complexity |
|---|---|---|---|---|
| Relative TTL (EXPIRE) | EXPIRE key 3600 | Session tokens, API rate limits, short-lived caches | High (clock jump expires early) | Low (simple) |
| Jittered EXPIRE | EXPIRE key (3600 ± 540) | Product inventory, user profiles, cache stampede prevention | Medium (drift affects all keys uniformly) | Low (random ± offset) |
| Absolute TTL (EXPIREAT) | EXPIREAT key 1782864000 | Promotions ending at fixed time, billing cycles, scheduled events | Critical (nodes must sync; margin needed) | Medium (timezone conversions required) |
| Sliding Expiration | EXPIRE key 1800 on each access | Sessions, user connections, activity-based timeouts | Low (timeout resets on activity) | Medium (requires app logic in request path) |
| Tag-Based Invalidation | SADD tag:promo key; DEL tag:promo members | Bulk invalidation (promotions, product updates, cache coherency) | None (explicit, not time-based) | High (maintain tag sets, careful lifecycle) |
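Tag-based invalidation is the only row without an example elsewhere in this guide, so here is a minimal sketch (key and tag names are illustrative):
import redis
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
def cache_with_tag(key, value, tag, ttl_seconds=3600):
    # Register the key under its tag, then cache it with a normal TTL
    pipe = r.pipeline()
    pipe.sadd(f'tag:{tag}', key)
    pipe.set(key, value, ex=ttl_seconds)
    pipe.execute()
def invalidate_tag(tag):
    # Delete every member of the tag set, then the set itself
    members = r.smembers(f'tag:{tag}')
    if members:
        r.delete(*members)
    r.delete(f'tag:{tag}')
cache_with_tag('product:42:price', '19.99', 'promo:summer2026')
invalidate_tag('promo:summer2026')  # bulk invalidation, independent of clocks
The trade-off is bookkeeping: every write must register its keys under the right tags, and expired members accumulate in the tag set unless pruned periodically.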
Common Mistakes and How to Fix Them
Mistake: Assuming EXPIREAT Uses Server's Local Timezone
# ✗ WRONG: Assuming server timezone
from datetime import datetime
expiry_wrong = datetime(2026, 7, 1, 0, 0, 0) # No timezone!
# If server is in EST (UTC-5), this becomes 1782882000 (5 hours late)
# If server is in JST (UTC+9), this becomes 1782831600 (9 hours early)
# Same code, different results → BUG
r.expireat('promo:key', int(expiry_wrong.timestamp()))
# → Expires at different times depending on server timezone!
# ✓ RIGHT: Always use UTC
from datetime import datetime, timezone
expiry_right = datetime(2026, 7, 1, 0, 0, 0, tzinfo=timezone.utc)
# → Always 1782864000, regardless of server timezone ✓
r.expireat('promo:key', int(expiry_right.timestamp()))
# → Expires consistently across all servers
Timezone bugs are insidious because they work correctly on your laptop (matching your local tz), then fail in production (different tz). Always construct timestamps in UTC, then convert. Verify critical dates with an online timestamp converter during deployment.
Mistake: Setting EXPIRE Without Considering Clock Drift
// ✗ WRONG: Assuming the wall clock doesn't jump
async function acquireLock(resource, ttlSeconds = 30) {
  const lockKey = `lock:${resource}`;
  const acquired = await client.set(lockKey, 'holder1', 'NX', 'EX', ttlSeconds);
  if (!acquired) return false;
  // If NTP jumps the server clock forward 5 seconds here,
  // the lock expires 5s early while this client still thinks it's valid
  // Perform critical work...
  await criticalOperation();
  // Check lock still held
  const stillHeld = await client.get(lockKey);
  if (!stillHeld) {
    console.log('PANIC: Lost lock during operation!');
    // → Dual writers possible if another client acquired it
  }
}
// ✓ RIGHT: Keep the full TTL on the server, budget client work with a margin
async function acquireLockSafe(resource, ttlSeconds = 30, safetyMargin = 5) {
  const lockKey = `lock:${resource}`;
  const acquired = await client.set(lockKey, 'holder1', 'NX', 'EX', ttlSeconds);
  if (!acquired) return false;
  // Even if NTP jumps the clock 5s forward, the lock still lives for at
  // least (ttlSeconds - safetyMargin) seconds; stop work before that bound
  const maxWorkTime = ttlSeconds - safetyMargin - 2; // 23s, with 2s slack to finish up
  return { acquired: true, maxWorkTime };
}
acquireLockSafe('critical-section', 30, 5);
// → 30s server TTL; the client treats only the first 23s as safe work time
Distributed locks require safety margins because clock skew is probabilistic, not deterministic. NTP step corrections of a few seconds do occur in production fleets, so a 5-second margin absorbs the typical jump while sacrificing only a small slice of lock lifetime.
Mistake: Comparing EXPIREAT Timestamps Across Nodes Without Sync Check
# ✗ WRONG: Assuming all nodes see the same expiration
def check_promotion_active(promo_key):
    # Call Node-A
    expiry_nodeA = r_nodeA.expiretime(promo_key)  # 1782864000
    # Call Node-B (drifted clock already expired the key)
    expiry_nodeB = r_nodeB.expiretime(promo_key)  # Returns -2 (key missing!)
    # Race condition: Node-A says active, Node-B says expired
    # Customers see different discounts depending on which replica answered
    if expiry_nodeA > 0:
        return True  # Assume active
    return False
# ✓ RIGHT: Check clock sync before trusting EXPIREAT
import logging
def check_promotion_safe(promo_key, time_tolerance=5):
    # Verify time sync across replicas
    now_nodeA = r_nodeA.time()[0]  # Redis TIME command returns [seconds, microseconds]
    now_nodeB = r_nodeB.time()[0]
    clock_skew = abs(now_nodeA - now_nodeB)
    if clock_skew > time_tolerance:
        logging.warning(f'Clock skew {clock_skew}s exceeds tolerance, using local source')
        # Fall back to a single node or a local cache
        return check_from_authoritative_source()
    # Now safe to trust EXPIREAT across nodes
    expiry = r_nodeA.expiretime(promo_key)
    return expiry > now_nodeA
EXPIREAT semantics assume synchronized clocks. Verify synchronization (typically via NTP) before relying on absolute timestamps for critical decisions; the Redis TIME command, used in the monitoring script below, lets you inspect clock values across your infrastructure.
Mistake: Using PEXPIRE for Millisecond Precision Without Understanding Overhead
# ✗ WRONG: Assuming millisecond precision buys exact expiry
# SET key value PX 1 # Expires in 1 millisecond!
# Redis stores every expiration internally as an absolute millisecond
# Unix timestamp, whether set via EX, PX, EXPIREAT, or PEXPIREAT:
# - Standard key: ~80 bytes
# - Key with TTL: ~88 bytes (8-byte expiration entry)
# - 1ms-precision TTL: still ~88 bytes; PX costs no extra memory
# Real problem: background expiration granularity is ~100ms (the active
# expire cycle runs ~10 times per second), so a key set with PX 1 may
# linger up to ~100ms before the sweep removes it; reads stay correct
# because lookups check the deadline lazily on access
# ✓ RIGHT: Use millisecond precision only when needed
# Rate limiting: PX 60000 (60 second window)
# → 60000ms precision justified, common pattern
SET ratelimit:api:user123 10 PX 60000
# But don't over-engineer:
# ✗ PEXPIRE key 50 # Overkill, just use 100+
# ✓ PEXPIRE key 100 # Sane minimum
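To observe the expiration granularity yourself, here is a minimal sketch that sets a 100ms TTL and polls until the key disappears (measured lag varies with server load and the hz setting):
import time
import redis
r = redis.Redis(host='localhost', port=6379)
r.set('probe:ttl', '1', px=100)  # 100ms TTL
start = time.monotonic()
while r.exists('probe:ttl'):  # lookups check the deadline lazily
    time.sleep(0.005)
elapsed_ms = (time.monotonic() - start) * 1000
print(f"Key gone after ~{elapsed_ms:.0f} ms")
# → typically slightly over 100 ms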
Frequently Asked Questions
What is the difference between Redis EXPIRE and EXPIREAT?
EXPIRE sets a relative time-to-live (TTL) from the command execution moment. EXPIREAT sets an absolute Unix timestamp. EXPIRE mykey 3600 expires the key in 3600 seconds from now; EXPIREAT mykey 1782864000 expires it exactly when system time reaches 1782864000 (July 1, 2026 00:00:00 UTC). EXPIRE is vulnerable to the clock jumping forward (the key expires early); EXPIREAT requires synchronized clocks across nodes but is safer for fixed deadlines.
How to set Redis TTL with a Unix timestamp?
Use EXPIREAT key timestamp_seconds or PEXPIREAT key timestamp_milliseconds. Always convert to UTC first. Example: from datetime import datetime, timezone; ts = int(datetime(2026, 7, 1, tzinfo=timezone.utc).timestamp()); r.expireat('key', ts). Verify the calculated timestamp with an online converter before applying it to production keys.
Why does Redis EXPIREAT fail with wrong timezone?
EXPIREAT interprets the timestamp argument as Unix time (always UTC-based), but if you calculate the timestamp using local timezone without explicit conversion, you'll get the wrong value. For example, `datetime(2026, 7, 1)` without `tzinfo=timezone.utc` uses the server's local timezone, causing the timestamp to shift by the timezone offset. Always use `datetime(..., tzinfo=timezone.utc).timestamp()` to ensure correct conversion.
How to implement sliding expiration in Redis?
Call EXPIRE on every access to reset the TTL. Example: `def get_session(id): session = r.get(f'session:{id}'); r.expire(f'session:{id}', 1800); return session`. This keeps the session alive as long as activity continues—if user goes idle for 30+ minutes, the next request finds the session expired. Use this for authentication tokens, user connections, and activity-based timeouts.
Key Takeaways
- EXPIRE uses relative TTL (duration), while EXPIREAT uses absolute Unix timestamps. Choose based on whether expiration is time-based (fixed deadline) or duration-based (relative timeout).
- Clock skew in distributed systems makes EXPIREAT unsafe without synchronization checks and safety margins. NTP corrections of a few seconds are common; reduce your EXPIREAT deadline by 5 seconds to absorb drift.
- Add jitter to EXPIRE (±10-20% random offset) to prevent cache stampedes when thousands of keys expire simultaneously, spreading expiration across time instead of on exact boundaries.
- Always convert calendar dates to Unix timestamps in UTC, never local timezone. Use datetime(..., tzinfo=timezone.utc).timestamp() to avoid timezone-based bugs that work locally but fail in production.
- Implement sliding expiration by calling EXPIRE on every request to extend session lifetimes based on activity, keeping users logged in as long as they're active.
- Tag-based invalidation (SADD tag:promo key; DEL tag:promo members) enables bulk cache coherency without relying on time, useful for promotions and product updates.
Optimization: Monitoring TTL Health in Production
Track keys without TTL expiration to prevent unbounded memory growth. Periodically scan for keys with TTL -1 (no expiration set).
# Scan for keys without TTL (requires Redis 2.8+)
redis-cli --scan --pattern "*" | while read key; do
ttl_val=$(redis-cli TTL "$key")
if [ "$ttl_val" -eq -1 ]; then
echo "No expiration: $key"
fi
done | head -20
# Or use INFO memory to catch memory leaks early
redis-cli INFO memory | grep used_memory_human
# → used_memory_human:2.5M
# If growing unbounded without adding keys, likely missing TTL on something
# Set up alerts: if used_memory grows >100MB/day without proportional keycount growth
# → Investigate keys without TTL
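The shell loop above issues one TTL round trip per key; for larger keyspaces, here is a Python sketch (function names are illustrative) that batches the checks with SCAN and a pipeline:
import redis
r = redis.Redis(host='localhost', port=6379, decode_responses=True)
def find_keys_without_ttl(batch_size=500):
    """Incrementally SCAN the keyspace, batching TTL checks in a pipeline."""
    batch = []
    for key in r.scan_iter(count=batch_size):
        batch.append(key)
        if len(batch) >= batch_size:
            yield from _no_ttl(batch)
            batch = []
    yield from _no_ttl(batch)
def _no_ttl(keys):
    pipe = r.pipeline()
    for k in keys:
        pipe.ttl(k)
    # TTL returns -1 for keys that exist but have no expiration
    return (k for k, t in zip(keys, pipe.execute()) if t == -1)
for key in find_keys_without_ttl():
    print(f"No expiration: {key}")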
For distributed systems, monitor clock skew across nodes:
import redis
def check_clock_sync(redis_nodes):
    """
    Check clock synchronization across Redis nodes.
    Returns max skew in seconds.
    """
    times = []
    for node in redis_nodes:
        # Redis TIME returns [seconds, microseconds] since 1970
        server_time = node.time()[0]
        times.append(server_time)
    max_skew = max(times) - min(times)
    if max_skew > 5:
        print(f'WARNING: Clock skew {max_skew}s exceeds safe threshold')
        print(f'Times: {times}')
        # Alert ops team to investigate NTP
    return max_skew
# Run hourly
nodes = [redis.Redis(host=h, port=6379) for h in ['node1', 'node2', 'node3']]
skew = check_clock_sync(nodes)
# → Clock skew 2s — within safe margin