⚠️ PRELIMINARY DOCUMENTATION - NEEDS REVIEW
Note: This documentation was created with assumptions about the project architecture and technology stack. It requires review and correction based on the actual implementation.
This document will be reworked once the project is closer to final release.
If you notice inaccuracies (e.g., assumed AWS CloudFront CDN), please flag them so this can be updated with actual project details.
Comprehensive guide to caching in Flash Turbo CMS.
```mermaid
graph TB
    Client["🖥️ Browser"]
    CDN["🌐 CDN<br/>CloudFront"]
    AppCache["📦 App Cache<br/>Redis"]
    Database["💾 Database<br/>MongoDB"]
    Client -->|HTTP Cache| CDN
    CDN -->|Cache Miss| AppCache
    AppCache -->|Cache Miss| Database
    Client -->|Instant| CDN
    CDN -->|Milliseconds| AppCache
    AppCache -->|Milliseconds| Database
    style Client fill:#e1f5ff
    style CDN fill:#fff3e0
    style AppCache fill:#f3e5f5
    style Database fill:#e8f5e9
```
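The browser/CDN layer in the diagram above is driven by HTTP caching headers. A minimal sketch of how a handler might choose `Cache-Control` values per route type (the route kinds and TTL values here are illustrative assumptions, not actual Flash Turbo CMS configuration):

```typescript
// Hypothetical route categories; adjust to the real routing scheme.
type RouteKind = 'static-page' | 'product-api' | 'checkout';

const HEADERS: Record<RouteKind, string> = {
  // Cacheable by browser and CDN for 1 hour (matches the L1 TTL above)
  'static-page': 'public, max-age=3600',
  // CDN may cache briefly; browsers must revalidate on each use
  'product-api': 'public, s-maxage=300, max-age=0, must-revalidate',
  // Never cache personalized or transactional pages
  'checkout': 'private, no-store',
};

function cacheControlFor(kind: RouteKind): string {
  return HEADERS[kind];
}
```

A response handler would then set this value on the `Cache-Control` header so the CDN and browser tiers behave as drawn above.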
```mermaid
graph TB
    Request["Product Request"]
    Request -->|Check| L1["Level 1: CDN<br/>(Static pages)<br/>TTL: 1 hour"]
    L1 -->|Miss| L2["Level 2: Redis<br/>(Product data)<br/>TTL: 30 min"]
    L2 -->|Miss| L3["Level 3: Database<br/>(MongoDB)"]
    L1 -->|Invalidate on| Update["Admin Updates<br/>Product"]
    L2 -->|Invalidate on| Update
    Update -->|Clear| Pattern["tenant_abc_products_*"]
```
## Implementation

```typescript
const getCachedProduct = async (tenantId: string, productId: string) => {
  const cacheKey = `tenant_${tenantId}_products_${productId}`;

  // 1. Check Redis first
  const cached = await redis.get(cacheKey);
  if (cached) return JSON.parse(cached);

  // 2. Cache miss: query the database
  const product = await db.products.findById(productId);

  // 3. Store in cache for subsequent requests
  await redis.setex(cacheKey, 1800, JSON.stringify(product)); // 30 min TTL
  return product;
};
```
- **Products**: 30 minutes (frequently accessed)
- **Orders**: 15 minutes (less frequent)
- **Customers**: 1 hour (stable data)
- **Settings**: 1 hour (mostly static)
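The TTLs above can be kept in one place as a config object. A small sketch (the constant and helper names are illustrative, not an existing Flash Turbo CMS utility):

```typescript
// Per-entity TTLs, in seconds, matching the list above
const CACHE_TTL_SECONDS = {
  products: 30 * 60,  // 30 minutes: frequently accessed
  orders: 15 * 60,    // 15 minutes: less frequent
  customers: 60 * 60, // 1 hour: stable data
  settings: 60 * 60,  // 1 hour: mostly static
} as const;

type CacheEntity = keyof typeof CACHE_TTL_SECONDS;

function ttlFor(entity: CacheEntity): number {
  return CACHE_TTL_SECONDS[entity];
}
```

Centralizing TTLs this way means changing a cache policy is a one-line edit instead of a hunt through call sites.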
## Why Different TTLs?
```mermaid
graph TB
    Inventory["🏪 Inventory"]
    Inventory -->|High Priority| Cache["Cache for<br/>5 minutes"]
    Inventory -->|When Updated| Clear["Clear cache<br/>immediately"]
    Inventory -->|Zero Stock| NoCache["Don't cache<br/>out of stock"]
    Cache -->|Quick Lookup| Frontend["Product Page"]
    Clear -->|Fresh Data| Customer["Customer Sees<br/>Current Stock"]
    NoCache -->|Always Fresh| Customer
```
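The inventory rules in the diagram can be sketched as a lookup function. This is a hedged illustration against a minimal cache interface (`CacheClient` is a stand-in for the Redis client, and `getInventory` is not an existing Flash Turbo CMS function):

```typescript
// Minimal cache surface, mirroring the Redis calls used elsewhere in this guide
interface CacheClient {
  get(key: string): Promise<string | null>;
  setex(key: string, ttlSeconds: number, value: string): Promise<void>;
}

interface Inventory {
  productId: string;
  stock: number;
}

async function getInventory(
  cache: CacheClient,
  loadFromDb: (id: string) => Promise<Inventory>,
  tenantId: string,
  productId: string,
): Promise<Inventory> {
  const key = `tenant_${tenantId}_inventory_${productId}`;
  const cached = await cache.get(key);
  if (cached) return JSON.parse(cached);

  const inventory = await loadFromDb(productId);
  // Rule from the diagram: never cache out-of-stock items,
  // so a restock is visible on the very next request
  if (inventory.stock > 0) {
    await cache.setex(key, 5 * 60, JSON.stringify(inventory)); // 5-minute TTL
  }
  return inventory;
}
```

The "clear cache immediately" rule would be handled by the invalidation path shown in the next section.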
```typescript
// When a product is updated
async function invalidateProductCache(productId: string, tenantId: string) {
  // Clear the specific product entry
  await redis.del(`tenant_${tenantId}_products_${productId}`);

  // Clear cached product lists (pattern match)
  const pattern = `tenant_${tenantId}_products_list_*`;
  const keys = await redis.keys(pattern);
  if (keys.length) await redis.del(...keys);
}
```
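One caveat: `KEYS` scans the entire keyspace and blocks Redis while it runs, which can hurt under load. A non-blocking variant uses `SCAN` with a `MATCH` pattern. The sketch below assumes an ioredis-style client signature; adapt it to whatever Redis library the project actually uses:

```typescript
// Delete all keys matching a pattern without blocking Redis.
// `redis` is assumed to expose ioredis-style scan/del methods.
async function invalidateByPattern(redis: any, pattern: string): Promise<number> {
  let cursor = '0';
  let deleted = 0;
  do {
    // SCAN returns [nextCursor, batchOfKeys]; COUNT is only a batching hint
    const [next, keys] = await redis.scan(cursor, 'MATCH', pattern, 'COUNT', 100);
    cursor = next;
    if (keys.length) deleted += await redis.del(...keys);
  } while (cursor !== '0'); // cursor '0' means the scan completed a full cycle
  return deleted;
}
```

For very hot invalidation paths, maintaining an explicit set of list-cache keys per tenant avoids pattern scans entirely.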
```mermaid
sequenceDiagram
    Admin->>API: Update product
    API->>DB: Save changes
    DB->>Event: Emit "product.updated"
    Event->>Cache: Subscribed handler fires
    Cache->>Redis: Delete cache keys
    Redis->>Event: Cache cleared
    Event->>Revalidation: Trigger page regeneration
    Revalidation->>CDN: Purge old content
    note over CDN: Next request gets fresh content
```
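The subscriber side of this flow can be sketched as a small handler factory. The event shape, `deleteKeys`, and `purgeCdnPath` are assumed names for illustration; the real event bus and CDN purge API may differ:

```typescript
// Hypothetical payload of the "product.updated" event
interface ProductUpdatedEvent {
  tenantId: string;
  productId: string;
}

// Dependencies are injected so the handler stays testable
function buildInvalidationHandler(deps: {
  deleteKeys: (keys: string[]) => Promise<void>;
  purgeCdnPath: (path: string) => Promise<void>;
}) {
  return async ({ tenantId, productId }: ProductUpdatedEvent) => {
    // 1. Drop the Redis entries for this product
    //    (list keys would be cleared too, via the pattern invalidation shown earlier)
    await deps.deleteKeys([`tenant_${tenantId}_products_${productId}`]);

    // 2. Ask the CDN to purge the rendered page,
    //    so the next request regenerates fresh content
    await deps.purgeCdnPath(`/products/${productId}`);
  };
}
```

Wiring this to the event bus would be a single subscription, e.g. something like `events.on('product.updated', handler)` in whatever bus the project uses.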
```typescript
// Automatic expiration: Redis deletes the key itself once the TTL elapses
await redis.setex(
  key,
  30 * 60, // 30 minutes in seconds
  JSON.stringify(data)
);
```
Pre-populate the cache before it's needed:

```typescript
// Note: `tenantId`, `db`, and `redis` are assumed to be in scope
async function warmCache() {
  // Load the 100 most-viewed published products
  const popular = await db.products
    .find({ tenantId, status: "published" })
    .sort({ viewCount: -1 })
    .limit(100);

  // Cache each one for 1 hour
  for (const product of popular) {
    const key = `tenant_${tenantId}_products_${product._id}`;
    await redis.setex(key, 3600, JSON.stringify(product));
  }
  console.log(`Warmed ${popular.length} products`);
}

// Run on app startup
app.on('startup', warmCache);
```
```
// Product caching
tenant_{tenantId}_products_{productId}
tenant_{tenantId}_products_list_{page}
tenant_{tenantId}_products_category_{categoryId}

// Order caching
tenant_{tenantId}_orders_{orderId}
tenant_{tenantId}_orders_customer_{customerId}

// Settings caching
tenant_{tenantId}_settings_general
tenant_{tenantId}_settings_payment
```
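A tiny helper keeps these key patterns consistent across the codebase (the function is illustrative, not an existing Flash Turbo CMS utility):

```typescript
// Build a cache key following the tenant_{tenantId}_... convention above
function cacheKey(tenantId: string, ...parts: Array<string | number>): string {
  return ['tenant', tenantId, ...parts].join('_');
}

cacheKey('abc', 'products', '42');   // "tenant_abc_products_42"
cacheKey('abc', 'products_list', 3); // "tenant_abc_products_list_3"
cacheKey('abc', 'settings_general'); // "tenant_abc_settings_general"
```

Routing every `get`/`setex`/`del` through one builder makes it much harder for a typo in a key to silently create a cache entry that is never invalidated.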
```mermaid
graph TB
    Metrics["Cache Metrics"]
    Metrics -->|Hit Rate| HitRate["✅ 85% hit rate<br/>(ideal: > 80%)"]
    Metrics -->|Memory| Memory["💾 512 MB used<br/>(of 1 GB)"]
    Metrics -->|Evictions| Evictions["⚠️ 100/hour<br/>(monitor)"]
    Metrics -->|Keys| Keys["🔑 45,000 keys<br/>(1 per second)"]
    HitRate -->|Good| Action["No action needed"]
    Evictions -->|High| Action2["Consider:<br/>More memory<br/>Shorter TTL<br/>Less caching"]
```
```typescript
// Log cache statistics hourly
setInterval(async () => {
  // INFO returns a raw text blob of "key:value" lines, not an object,
  // so parse it before reading individual fields
  const raw = await redis.info();
  const stats: Record<string, string> = {};
  for (const line of raw.split('\r\n')) {
    const i = line.indexOf(':');
    if (i > 0) stats[line.slice(0, i)] = line.slice(i + 1);
  }

  const hits = Number(stats.keyspace_hits);
  const misses = Number(stats.keyspace_misses);
  const hitRate = (hits / (hits + misses)) * 100;

  logger.info('Cache Stats', {
    hitRate: hitRate.toFixed(2) + '%',
    evictedKeys: stats.evicted_keys,
    usedMemory: stats.used_memory_human,
  });
}, 60 * 60 * 1000); // Every hour
```
```mermaid
graph TB
    Requests["📥 Requests"]
    LB["⚖️ Load Balancer"]
    Requests --> LB
    LB -->|Route| App1["App 1"]
    LB -->|Route| App2["App 2"]
    LB -->|Route| App3["App 3"]
    App1 -->|Shared| Redis["Redis<br/>Shared Cache"]
    App2 -->|Shared| Redis
    App3 -->|Shared| Redis
    Redis -->|Cache all| Instances["All instances<br/>see same cache"]
```
```typescript
// All app instances share a single Redis, so the cache is always consistent

// App 1
await redis.set('key', 'value');

// App 2 (sees the same cache)
const value = await redis.get('key'); // ✅ Returns 'value'

// No cache sync needed: Redis is the single source of truth
```
- Cause: Cache not invalidated properly. Solution: clear the tenant's cache database with `redis.flushdb()`.
- Cause: Not caching enough, or caching the wrong data. Solution: review which queries are cached and their TTLs.
- Cause: Redis memory limit exceeded. Solution: add memory, shorten TTLs, or cache less (see the eviction metrics above).
- Cause: Cache not synchronized. Solution: as a last resort, clear everything with `redis.flushall()`.

Without Cache:

```
GET /products → MongoDB (200ms) = 200ms response
```

With Cache:

```
GET /products → Redis hit (5ms) = 5ms response
```

40x faster! ⚡
Before Caching:
- Page load: 3.2 seconds
- 150 database hits per page view
- CPU: 45%
- Throughput: 50 req/sec
After Caching:
- Page load: 0.8 seconds
- 3 database hits per page view
- CPU: 12%
- Throughput: 500 req/sec
Last Updated: October 27, 2025 | Version: 1.0 | Status: ✅ Production Ready