# Caching

Redis cache keys, TTLs, and invalidation scopes for portfolio reads.
Two read endpoints are Redis-backed; everything else hits Postgres directly. Cache invalidation is structurally enforced via shared.TxRunner.Run — every write declares a CacheScope at the call site, and the post-commit Invalidator.Apply busts exactly the right keys.
## Cached endpoints
| Endpoint | Key pattern | TTL | Invalidator scope |
|---|---|---|---|
| GET /portfolios/{id} | `portfolio:detail:<portfolioID>` | 15s | Detail |
| GET /portfolios/{id}/summary | (same key as Get; derived from FindDetail) | 15s | Detail |
| GET /portfolios/{id}/valuation?range=R | `portfolio:valuation:<portfolioID>:<R>` | 10min | Valuation (busts all 6 ranges) |
R ∈ {1M, 3M, 6M, 1Y, YTD, ALL}. There are 6 separate cache keys per portfolio for valuation — one per range — and they are always invalidated as a group.
## Cache scopes
Defined in `internal/modules/portfolio/shared/cache.go`:

```go
type CacheScope int

const (
	CacheScopeNone               CacheScope = iota // no DEL
	CacheScopeDetail                               // DEL portfolio:detail:<id>
	CacheScopeDetailAndValuation                   // DEL detail plus all 6 valuation keys
)
```

There is no valuation-only CacheScopeValuation. In practice every operation that affects valuation also affects detail, so a finer-grained scope buys nothing.
## Scope per write op

Every Add / Update / Delete declares its scope when calling `tx.Run`:
| Method | Scope | Reason |
|---|---|---|
| core.Create | CacheScopeNone | New portfolio; no cache entries to invalidate. |
| core.Update | CacheScopeDetail | Name/description change is visible in Get only. |
| core.Delete | CacheScopeDetailAndValuation | Cascade clears the projection; both reads must miss. |
| trades.AddTrade | CacheScopeDetailAndValuation | New trade changes lots → cost basis → value series. |
| trades.UpdateTrade | CacheScopeDetailAndValuation | Edit re-derives the lot tree. |
| trades.DeleteTrade | CacheScopeDetailAndValuation | Soft-delete re-derives without this trade. |
| dividends.AddDividend | CacheScopeDetail | Dividends change summary.div and summary.totalPnl but not the value time-series. |
| dividends.UpdateDividend | CacheScopeDetail | Same as above. |
| dividends.DeleteDividend | CacheScopeDetail | Same as above. |
| actions.AddCorporateAction | CacheScopeDetailAndValuation | Action moves units and cost. |
| actions.UpdateCorporateAction | CacheScopeDetailAndValuation | Same as above. |
| actions.DeleteCorporateAction | CacheScopeDetailAndValuation | Same as above. |
12 write paths total; 7 bust both keys, 4 bust detail only, and 1 (core.Create) busts nothing.
## Read flow with cache
Failure isolation:
- A Redis GET error is non-fatal: log warn, fall through to Postgres.
- A Redis SET error is non-fatal: log warn, return the freshly-computed response.
- A 404 from `shared.FindPortfolio` short-circuits before the cache is consulted. The cache key is per-portfolio; a cache hit on someone else's data is impossible because a customer cannot reach a portfolio they don't own, and that guard runs first.
## Write flow with invalidation
Cache bust runs after Commit. If the commit succeeds but the DEL fails (Redis hiccup), the bust is silently skipped. The next read will return stale data until TTL. With detail TTL = 15s the wedge is bounded; the valuation 10-min wedge is more painful but rare.
## Cache key reference

Defined in `internal/platform/cache/keys.go`:

```go
const (
	PrefixPortfolioValuation = "portfolio:valuation:"
	PrefixPortfolioDetail    = "portfolio:detail:"

	ttlPortfolioValuation = 10 * time.Minute
	ttlPortfolioDetail    = 15 * time.Second
)

func PortfolioDetailKey(portfolioID string) string {
	return PrefixPortfolioDetail + portfolioID
}

func PortfolioValuationKey(portfolioID, rangeStr string) string {
	return PrefixPortfolioValuation + portfolioID + ":" + rangeStr
}

func TTLPortfolioDetail() time.Duration    { return ttlPortfolioDetail }
func TTLPortfolioValuation() time.Duration { return ttlPortfolioValuation }
```

Examples:

```
portfolio:detail:0192d4e5-6f7a-7b8c-9d0e-1f2a3b4c5d6e
portfolio:valuation:0192d4e5-6f7a-7b8c-9d0e-1f2a3b4c5d6e:3M
```
## Why these TTLs
Trades, summary numbers, and live LTPs change second-by-second during market hours. 15s is short enough to feel "live" without making the dashboard hammer Postgres on every refresh. After a write, the cache busts immediately — readers see fresh data without the 15s delay.
The valuation series materially changes only when:
- A trade or corporate action lands (busted explicitly).
- A new daily close is published by the EOD cron.
Within a market session, today's bar updates as new daily history rolls forward, but a 10-minute lag is fine for a chart. Six range keys per portfolio come to roughly 60 KB at peak, trivially small.
## Cache stampede consideration

Both endpoints are per-portfolio, so cache stampedes (many concurrent misses for the same key) can only happen when one user hits refresh rapidly. Single-flight protection isn't worth the complexity at this scale. If you measure pain later, wrap FindDetail and FindValuation in singleflight.Group.Do.
## What is not cached

- GET /portfolios (list): small per-customer cardinality.
- GET /portfolios/{id}/holdings (paginated): page/size variability.
- GET /portfolios/{id}/distribution: millisecond compute.
- GET /portfolios/{id}/companies/{symbol}: high cardinality (per portfolio × per stock).
- All *list endpoints (/trades, /dividends, /companies/{symbol}/actions, etc.): audit-trail reads.
These are all served straight from Postgres. The detail and valuation caches, together with TouchPortfolioFetchedAt, already absorb the "user opens dashboard, then drills" pattern.
## References

- Invalidator + TxRunner: `internal/modules/portfolio/shared/cache.go`
- Cache keys + TTLs: `internal/platform/cache/keys.go`
- Detail cache integration: `internal/modules/portfolio/core/service.go` → `FindDetail`
- Valuation cache integration: `internal/modules/portfolio/core/service.go` → `FindValuation`