
Architecture

System topology, request lifecycle, and middleware stack for the TradingView UDF module

System topology

  • NS1 (compute) runs the Go app and Nginx. Nginx terminates TLS, applies proxy cache rules (see Caching), and forwards TV routes to 127.0.0.1:5001.
  • NS2 (data) runs Postgres and Redis. The ingester writes OHLCV into Postgres; the TV module reads it. Redis holds the /symbols cache (5-minute TTL; see the read-through sketch after this list) plus the rate-limit counters.
  • The ingester is a separate module — see Data flow.
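
A minimal read-through sketch of that /symbols cache, assuming go-redis v9; the key name, helper signature, and loader are illustrative, not the actual implementation:

package tradingview

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

const symbolsTTL = 5 * time.Minute // the 5-minute TTL described above

// cachedSymbols tries Redis first and falls back to the provided
// Postgres loader on a miss, repopulating the key with the TTL.
// Any Redis error (including a miss) degrades to a DB read.
func cachedSymbols(ctx context.Context, rdb *redis.Client,
    load func(context.Context) ([]byte, error)) ([]byte, error) {
    if raw, err := rdb.Get(ctx, "tv:symbols").Bytes(); err == nil {
        return raw, nil // cache hit: serve the stored JSON as-is
    }
    raw, err := load(ctx)
    if err != nil {
        return nil, err
    }
    rdb.Set(ctx, "tv:symbols", raw, symbolsTTL)
    return raw, nil
}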

Request lifecycle

Every TV endpoint walks the same fixed middleware chain before reaching its handler. The TV router group is declared in internal/http/router/router.go:

r.Group(func(r chi.Router) {
    r.Use(rl60.Middleware)                 // 60 req/min per IP
    r.Use(tradingview.Recoverer())         // JSON error on panic
    r.Use(chimw.Compress(5))               // gzip (array-heavy /history benefits 3×)
    r.Get("/tradingview/config", deps.TradingView.Config)
    r.Get("/tradingview/time", deps.TradingView.Time)
    r.Get("/tradingview/symbols", deps.TradingView.Symbols)
    r.Get("/tradingview/search", deps.TradingView.Search)
    r.Get("/tradingview/history", deps.TradingView.History)
})

The global chi.Recoverer remains in the outer chain, so any panic that escapes the TV-scoped recoverer is still caught; the TV recoverer exists only to guarantee UDF-shaped error output for this group.
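
A sketch of what that group-scoped recoverer can look like, producing the UDF error shape listed in the middleware table below; this is illustrative rather than the exact code in the package:

package tradingview

import "net/http"

// Recoverer turns a panic inside the TV group into the UDF error
// shape instead of chi's default plain-text 500.
func Recoverer() func(http.Handler) http.Handler {
    return func(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            defer func() {
                if rec := recover(); rec != nil {
                    w.Header().Set("Content-Type", "application/json")
                    w.WriteHeader(http.StatusInternalServerError)
                    w.Write([]byte(`{"s":"error","errmsg":"internal_error"}`))
                }
            }()
            next.ServeHTTP(w, r)
        })
    }
}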

Envelope bypass

Every other module in the codebase wraps responses in a {"message": "...", "data": {...}} envelope via internal/http/response.JSON. TradingView's UDFCompatibleDatafeed parser reads fields from the root, so the TV module bypasses that envelope entirely.

The bypass is local to the TV package (internal/modules/nepse/tradingview/write.go), not a global middleware toggle. This is deliberate: any future endpoint that needs UDF shape must opt in explicitly via writeJSON/writeUDFError, and the envelope structure cannot accidentally leak into UDF responses.
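
A sketch of those helpers under that constraint; the bodies below are assumptions inferred from the description, not a copy of write.go:

package tradingview

import (
    "encoding/json"
    "net/http"
)

// writeJSON encodes v at the response root, with no
// {"message", "data"} envelope, so the UDF parser can read
// fields such as "s" and "errmsg" directly.
func writeJSON(w http.ResponseWriter, status int, v any) {
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(status)
    _ = json.NewEncoder(w).Encode(v)
}

// writeUDFError emits the UDF error shape {"s":"error","errmsg":"..."}.
func writeUDFError(w http.ResponseWriter, status int, msg string) {
    writeJSON(w, status, map[string]string{"s": "error", "errmsg": msg})
}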

Data boundary

  • Intraday (1, 5, 15, 30, 60) reads nepse_intraday_prices, aggregating on the fly via GetIntradayStockCandles (the bucket width in seconds is passed in); see the dispatch sketch after this list.
  • Daily (1D) reads nepse_price_history.
  • Weekly (1W) and Monthly (1M) aggregate from the same nepse_price_history using Postgres date_trunc('week', …) / date_trunc('month', …). One caveat: NEPSE trades Sunday–Thursday, and date_trunc('week') starts weeks on Monday (ISO), so a Sunday session is bucketed with the preceding week's Monday–Thursday run; if weekly candles should open on Sunday, the boundary needs manual adjustment (e.g. shifting timestamps forward one day before truncating).
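
A sketch of the resolution dispatch this boundary implies; GetIntradayStockCandles is named above, while the mapping helper here is a hypothetical illustration:

package tradingview

import "strconv"

// bucketSeconds maps a UDF resolution string to the bucket width (in
// seconds) handed to GetIntradayStockCandles. Non-intraday resolutions
// ("1D", "1W", "1M") return ok=false and route to the
// nepse_price_history queries instead.
func bucketSeconds(resolution string) (secs int, ok bool) {
    switch resolution {
    case "1", "5", "15", "30", "60":
        n, _ := strconv.Atoi(resolution) // digits only, cannot fail here
        return n * 60, true              // minutes to seconds
    default:
        return 0, false
    }
}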

Middleware responsibilities

Middleware                     Scope      Purpose
chimw.RequestID                global     correlation id header
httpmiddleware.RealIP          global     trust X-Forwarded-For from allowed proxies
chimw.Recoverer                global     last-resort panic catch
httpmiddleware.RequestLogger   global     structured access log
cors.Handler                   global     CORS policy
httpmiddleware.BodyLimit       global     max body bytes
rl60                           TV group   60 req/min per IP
tradingview.Recoverer()        TV group   panic → {"s":"error","errmsg":"internal_error"} JSON
chimw.Compress(5)              TV group   gzip response bodies

Timeouts

Each data-fetching handler wraps the request context in a 5-second timeout:

const tvRequestTimeout = 5 * time.Second

func tvContext(r *http.Request) (context.Context, context.CancelFunc) {
    return context.WithTimeout(r.Context(), tvRequestTimeout)
}

A slow DB query producing context.DeadlineExceeded is mapped by the handler to:

  • 504 Gateway Timeout with {"s":"error","errmsg":"timeout"} for /history, /symbols.
  • 200 OK with [] for /search; autocomplete must stay usable even if the DB is momentarily slow (both mappings are sketched after this list).
  • /config never touches the DB; /time is instant.
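
One way to express that mapping, reusing the hypothetical writeJSON/writeUDFError helpers sketched earlier; mapTVError and softFail are illustrative names, not the handler's actual code:

package tradingview

import (
    "context"
    "errors"
    "net/http"
)

// mapTVError converts a non-nil handler error into the responses
// listed above. /search passes softFail=true so a timeout degrades to
// 200 OK with an empty array; /history and /symbols pass false and
// get the UDF-shaped 504.
func mapTVError(w http.ResponseWriter, err error, softFail bool) {
    switch {
    case errors.Is(err, context.DeadlineExceeded) && softFail:
        writeJSON(w, http.StatusOK, []any{}) // autocomplete stays usable
    case errors.Is(err, context.DeadlineExceeded):
        writeUDFError(w, http.StatusGatewayTimeout, "timeout")
    default:
        writeUDFError(w, http.StatusInternalServerError, "internal_error")
    }
}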

Rate limit scope

rl60 uses a Redis key per IP with a sliding 60-second window. The TV group shares one bucket across all endpoints — a client hammering /history will have /search refused if the bucket fills.

If you serve the chart from a shared proxy or single corporate egress, 60 rpm may be too tight. Adjust the rl60 limit in router.go for production sizing.
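
A minimal sliding-window sketch of the per-IP check, assuming go-redis v9 and a sorted set of request timestamps; the real rl60 lives in internal/http/middleware/ratelimit.go and may differ:

package middleware

import (
    "context"
    "strconv"
    "time"

    "github.com/redis/go-redis/v9"
)

// allow reports whether ip may proceed under a sliding 60-second
// window of at most limit requests. One sorted set per IP; member
// scores are request timestamps in nanoseconds.
func allow(ctx context.Context, rdb *redis.Client, ip string, limit int64) (bool, error) {
    key := "rl:tv:" + ip
    now := time.Now().UnixNano()
    windowStart := now - int64(60*time.Second)

    // Evict entries older than the window, then count the remainder.
    if err := rdb.ZRemRangeByScore(ctx, key, "0",
        strconv.FormatInt(windowStart, 10)).Err(); err != nil {
        return false, err
    }
    n, err := rdb.ZCard(ctx, key).Result()
    if err != nil {
        return false, err
    }
    if n >= limit {
        return false, nil // bucket full: all TV endpoints are refused
    }
    rdb.ZAdd(ctx, key, redis.Z{Score: float64(now), Member: now})
    rdb.Expire(ctx, key, time.Minute) // let idle keys age out
    return true, nil
}

Note that this check-then-add sequence is not atomic under concurrency; a production limiter would typically wrap it in a Redis pipeline or Lua script.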

References

  • Handler: internal/modules/nepse/tradingview/handler.go
  • Service: internal/modules/nepse/tradingview/service.go
  • Router: internal/http/router/router.go
  • Rate limiter: internal/http/middleware/ratelimit.go
  • sqlc queries: internal/platform/database/queries/{companies,charts}.sql