Caching
Three-layer caching — in-process config bytes, Redis symbol cache, Nginx proxy cache
Three independent caching layers sit in front of the TV endpoints. Each has a different TTL tuned to how fast its underlying data changes.
Layer 1 — in-process /config bytes cache
/config returns a compile-time constant struct. Marshaling the same JSON per request is pure waste, so the handler marshals once per process and serves the cached []byte thereafter.
```go
// internal/modules/nepse/tradingview/cache.go
type configBytesCache struct {
    once  sync.Once
    bytes []byte
    err   error
}

func (c *configBytesCache) get(cfg *ConfigResponse) ([]byte, error) {
    c.once.Do(func() {
        c.bytes, c.err = json.Marshal(cfg)
    })
    return c.bytes, c.err
}
```

Handler usage:
```go
func (h *Handler) Config(w http.ResponseWriter, r *http.Request) {
    b, err := h.configCache.get(h.svc.GetConfig())
    if err != nil {
        writeUDFError(w, http.StatusInternalServerError, "internal_error")
        return
    }
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusOK)
    _, _ = w.Write(b)
}
```

Properties:
| Property | Value |
|---|---|
| Scope | Single process (`Handler` instance) |
| TTL | Process lifetime |
| Invalidation | Restart on deploy |
| Concurrency safety | `sync.Once` |
| Per-request cost | Zero allocation, single `w.Write` |
Verify:
```bash
curl -s $BASE/config | md5sum
curl -s $BASE/config | md5sum
curl -s $BASE/config | md5sum
# All three MD5s identical.
```

Layer 2 — Redis /symbols cache
Resolving a symbol requires a DB query. Since symbol metadata changes rarely, we cache resolved SymbolResponse payloads in Redis for 5 minutes.
Key + TTL
```go
// internal/platform/cache/keys.go
const PrefixNepseTVSymbol = "nepse:tv:symbol:"

func NepseTVSymbolKey(symbol string) string {
    return PrefixNepseTVSymbol + symbol
}

func TTLNepseTVSymbol() time.Duration { return 5 * time.Minute }
```

Why 404 is never cached
A delisted or not-yet-added company might be re-added. Caching a 404 for 5 minutes would delay recovery when the `nepse_companies` row finally lands. The cost of always re-querying for a 404 is one DB roundtrip — acceptable.
Failure modes
Redis errors are never fatal. The service logs a warning and falls through to the DB, then attempts a write-through; either step may fail independently.
| Failure | Service behavior |
|---|---|
| `GetJSON` returns error (Redis down) | Log warning, skip cache, hit DB, attempt cache write |
| `SetJSON` returns error (Redis down) | Log warning, still return the resolved symbol |
| Redis is nil (not wired) | All cache paths are skipped; DB is queried every time |
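Put together, the read path looks roughly like the sketch below. This is illustrative only: `findSymbol` and the `slog` calls are stand-ins for whatever the real `ResolveSymbol` in `service.go` does; the point is the cache-aside shape and the "never cache a miss" rule.

```go
func (s *Service) ResolveSymbol(ctx context.Context, symbol string) (*SymbolResponse, error) {
    key := cache.NepseTVSymbolKey(symbol)

    // Cache read: Redis errors are logged and ignored, never fatal.
    if s.cache != nil {
        var cached SymbolResponse
        if hit, err := s.cache.GetJSON(ctx, key, &cached); err != nil {
            slog.Warn("tv symbol cache read failed", "err", err)
        } else if hit {
            return &cached, nil
        }
    }

    // DB lookup. A not-found error propagates (the handler turns it into a
    // 404) and is deliberately never written to Redis, so a re-added
    // company becomes visible on the very next request.
    resp, err := s.findSymbol(ctx, symbol) // hypothetical DB helper
    if err != nil {
        return nil, err
    }

    // Write-through: best-effort only.
    if s.cache != nil {
        if err := s.cache.SetJSON(ctx, key, resp, cache.TTLNepseTVSymbol()); err != nil {
            slog.Warn("tv symbol cache write failed", "err", err)
        }
    }
    return resp, nil
}
```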
Injection
```go
// internal/modules/nepse/tradingview/service.go
type symbolCache interface {
    GetJSON(ctx context.Context, key string, dest any) (bool, error)
    SetJSON(ctx context.Context, key string, value any, ttl time.Duration) error
}

func NewService(queries tradingviewStore, tradingDays []int, c *cache.Cache) *Service {
    // Assign only when non-nil, so a nil *cache.Cache never ends up as a
    // non-nil interface wrapping a typed nil.
    var cc symbolCache
    if c != nil {
        cc = c
    }
    return &Service{queries: queries, tradingDays: tradingDays, cache: cc}
}
```

The symbolCache interface is narrow so tests can fake Redis without spinning up the real thing — see `cache_test.go` → `fakeSymbolCache`.
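A fake satisfying the interface is a few lines of map-backed Go. This sketch assumes the same shape as `fakeSymbolCache` in `cache_test.go`; the real test double may differ in detail (this one ignores TTLs, for instance):

```go
type fakeSymbolCache struct {
    store map[string][]byte
}

func (f *fakeSymbolCache) GetJSON(ctx context.Context, key string, dest any) (bool, error) {
    b, ok := f.store[key]
    if !ok {
        return false, nil
    }
    return true, json.Unmarshal(b, dest)
}

func (f *fakeSymbolCache) SetJSON(ctx context.Context, key string, value any, ttl time.Duration) error {
    b, err := json.Marshal(value)
    if err != nil {
        return err
    }
    if f.store == nil {
        f.store = map[string][]byte{}
    }
    f.store[key] = b
    return nil
}
```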
Manual cache operations
```bash
# Inspect
docker exec nepse_redis_dev redis-cli GET nepse:tv:symbol:NABIL
docker exec nepse_redis_dev redis-cli TTL nepse:tv:symbol:NABIL

# Invalidate one
docker exec nepse_redis_dev redis-cli DEL nepse:tv:symbol:NABIL

# Bulk invalidate (all resolved symbols)
docker exec nepse_redis_dev redis-cli --scan --pattern 'nepse:tv:symbol:*' | \
  xargs -r docker exec -i nepse_redis_dev redis-cli DEL
```

Layer 3 — Nginx proxy_cache
Edge caching at Nginx reduces upstream load further. Configuration sits in docs/DEPLOYMENT.md:
```nginx
# In http {} scope
proxy_cache_path /var/cache/nginx/nepse-tv levels=1:2 keys_zone=nepse_tv:10m
                 max_size=256m inactive=1h use_temp_path=off;

# Inside the server {} block for the API hostname:
location /api/nepse/tradingview/config {
    proxy_pass http://nepse_go;
    proxy_cache nepse_tv;
    proxy_cache_valid 200 1h;
    add_header X-Cache-Status $upstream_cache_status always;
}

location /api/nepse/tradingview/symbols {
    proxy_pass http://nepse_go;
    proxy_cache nepse_tv;
    proxy_cache_key "$scheme$request_method$host$request_uri"; # includes ?symbol=
    proxy_cache_valid 200 5m;
    proxy_cache_valid 404 30s;
    add_header X-Cache-Status $upstream_cache_status always;
}

location /api/nepse/tradingview/search {
    proxy_pass http://nepse_go;
    proxy_cache nepse_tv;
    proxy_cache_key "$scheme$request_method$host$request_uri";
    proxy_cache_valid 200 1m;
    add_header X-Cache-Status $upstream_cache_status always;
}

location /api/nepse/tradingview/history {
    proxy_pass http://nepse_go;
    proxy_cache nepse_tv;
    proxy_cache_key "$scheme$request_method$host$request_uri";
    proxy_cache_valid 200 5m;
    add_header X-Cache-Status $upstream_cache_status always;
}

# Leave /api/nepse/tradingview/time uncached.
```

Recommended TTLs
| Path | Nginx TTL | Rationale |
|---|---|---|
| `/config` | 1h | Changes only on deploy. |
| `/symbols?symbol=X` | 5m | Matches Redis TTL. Vary on query string. |
| `/search?query=X` | 1m | Low churn but stay fresh. Vary on query string. |
| `/history?…` (general) | 5m | Historical bars are immutable. Vary on full query string. |
| `/history?…` (intraday live) | 10s | Optional: a separate `location` matched on `to` ≈ now. |
| `/time` | never | Real-time clock sync. |
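With `add_header X-Cache-Status` in place, the edge cache is easy to check by hand, reusing `$BASE` from the earlier verify snippets:

```bash
# Hit the same URL twice; within the TTL the second response is served by Nginx.
curl -s -D - "$BASE/tradingview/symbols?symbol=NABIL" -o /dev/null | grep -i x-cache-status
# X-Cache-Status: MISS
curl -s -D - "$BASE/tradingview/symbols?symbol=NABIL" -o /dev/null | grep -i x-cache-status
# X-Cache-Status: HIT
```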
gzip
Gzip is handled inside the Go app for the TV route group via `chi/middleware.Compress(5)`. Nginx doesn't need to re-compress — it just passes the already-gzipped body through. The `Accept-Encoding` header is passed as-is to upstream.
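The wiring is a one-liner on the route group. A sketch, assuming chi v5; apart from `Config`, the handler names here are placeholders for the real ones:

```go
r := chi.NewRouter()
r.Route("/api/nepse/tradingview", func(tv chi.Router) {
    // Compress at gzip level 5; chi compresses only when the client sends
    // Accept-Encoding: gzip and the Content-Type is compressible
    // (application/json is in chi's default list).
    tv.Use(middleware.Compress(5))
    tv.Get("/config", h.Config)
    tv.Get("/symbols", h.Symbols)
    tv.Get("/search", h.Search)
    tv.Get("/history", h.History)
    tv.Get("/time", h.Time)
})
```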
A typical 225-bar daily history response compresses from ~8.4KB to ~3KB (2.75×). Empty weeks compress even more dramatically.
Scope check
Gzip and proxy_cache are scoped to the TV routes only. Other endpoints in /api/nepse/* (like /companies) do not receive gzip or proxy caching by default. Verify:
```bash
curl -s -H 'Accept-Encoding: gzip' -D - "$BASE/companies?size=1" -o /dev/null | grep -i content-encoding
# (no output — not gzipped)
curl -s -H 'Accept-Encoding: gzip' -D - "$BASE/tradingview/config" -o /dev/null | grep -i content-encoding
# Content-Encoding: gzip
```

Putting the layers together
A single client request for /symbols?symbol=NABIL walks up to three layers:
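1. Nginx proxy_cache (5m): a hit is served at the edge and never reaches the Go process.
2. Redis symbol cache (`nepse:tv:symbol:NABIL`, 5m): on an Nginx miss, a Redis hit skips the DB query.
3. Postgres: on a Redis miss, one DB roundtrip resolves the symbol, and the result is written back to Redis.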
Typical production steady-state for a chart with 10 symbols and moderate user activity:
- Nginx hit rate: high (single-digit number of upstream hits per symbol per 5m).
- Redis hit rate: very high when Nginx is not configured or in cold-start.
- DB query rate for /symbols: roughly `unique_symbols / 5min` at steady state (e.g. 10 unique symbols cost at most ~2 DB queries per minute).
References
- `internal/modules/nepse/tradingview/cache.go` — in-process `/config` bytes cache
- `internal/modules/nepse/tradingview/service.go` — Redis cache wiring in `ResolveSymbol`
- `internal/platform/cache/keys.go` — key helpers + TTL
- `internal/platform/cache/cache.go` — `*cache.Cache` (`GetJSON`/`SetJSON`)
- `docs/DEPLOYMENT.md` — Nginx configuration section