Architecture
Sub-package layout, Module aggregation, request lifecycle, and middleware stack for the Portfolio module
Sub-package decomposition
The portfolio module is organised by Swagger tag (vertical slices) plus two cross-cutting infrastructure packages. No file in any sub-package exceeds 600 lines; each slice owns its Store interface, DTOs, mappers, service, and handler.
Invariants:
- holdings/ imports only shared/.
- Tag packages (core, companies, trades, dividends, actions) import holdings/ and shared/ only.
- No tag package imports another tag package. They are siblings, not collaborators.
- No cycles.
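The rule, expressed as imports (a sketch; "example.com/app" is a placeholder for the real module path):

```go
// trades/doc.go (sketch): what a tag package may and may not import.
package trades

import (
	_ "example.com/app/internal/modules/portfolio/holdings" // allowed: the FIFO engine
	_ "example.com/app/internal/modules/portfolio/shared"   // allowed: guard, TxRunner, conversions
	// forbidden: any sibling tag package, e.g.
	// "example.com/app/internal/modules/portfolio/dividends"
)
```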
The Module
portfolio.go is the only file at the package root. It aggregates the 5 tag-package handlers and exposes a single RegisterRoutes entry point.
```go
type Module struct {
	core      *core.Handler
	companies *companies.Handler
	trades    *trades.Handler
	dividends *dividends.Handler
	actions   *actions.Handler
}

func NewModule(q *sqlc.Queries, pool *pgxpool.Pool, cacheLayer *cache.Cache) *Module {
	inv := shared.NewInvalidator(cacheLayer)
	tx := shared.NewTxRunner(pool, q, inv)
	return &Module{
		core:      core.NewHandler(core.NewService(q, tx, cacheLayer)),
		companies: companies.NewHandler(companies.NewService(q)),
		trades:    trades.NewHandler(trades.NewService(q, tx)),
		dividends: dividends.NewHandler(dividends.NewService(q, tx)),
		actions:   actions.NewHandler(actions.NewService(q, tx)),
	}
}

func (m *Module) RegisterRoutes(r chi.Router) {
	// 23 routes — see the Reference page
}
```

cmd/server/main.go only ever calls portfolio.NewModule(...). There are no per-sub-package constructors used outside the package.
companies/ is read-only and doesn't take a *shared.TxRunner. The four other tag packages need it for write paths.
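Sketched, that single call site looks roughly like this; apart from portfolio.NewModule, deps.Portfolio, and RegisterRoutes, the names are placeholders, and the cache constructor is illustrative:

```go
// cmd/server/main.go (sketch): the one portfolio constructor call at startup.
queries := sqlc.New(pool)           // generated sqlc queries over the pgx pool
cacheLayer := cache.New(redisAddr)  // illustrative; the real cache constructor isn't shown here
deps.Portfolio = portfolio.NewModule(queries, pool, cacheLayer)
// Sub-package constructors (core.NewService, trades.NewHandler, ...) are never
// referenced outside internal/modules/portfolio.
```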
Request lifecycle
Every portfolio endpoint walks this chain before reaching a handler:
Router wiring
r.Route("/api/v1", func(r chi.Router) {
r.Group(func(r chi.Router) {
r.Use(deps.JWTAuth.Middleware)
deps.Portfolio.RegisterRoutes(r)
})
})All 23 portfolio routes are JWT-gated. There is no public sub-surface — even reads require a valid token. The customer ID is taken from the token claim, not from the body.
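A hypothetical sketch of the claim handoff that last sentence implies: the middleware stores the customer ID in the request context, and handlers read it from there. None of these names are taken from the codebase.

```go
// Hypothetical auth package; names are illustrative, not the real ones.
package auth

import (
	"context"

	"github.com/google/uuid"
)

// ctxKey is unexported so no other package can collide with this context key.
type ctxKey struct{}

// WithCustomerID is what the JWT middleware would call after validating the token.
func WithCustomerID(ctx context.Context, customerID uuid.UUID) context.Context {
	return context.WithValue(ctx, ctxKey{}, customerID)
}

// CustomerIDFromContext is what a portfolio handler would call. Nothing reads the
// customer ID from the request body or query string.
func CustomerIDFromContext(ctx context.Context) (uuid.UUID, bool) {
	customerID, ok := ctx.Value(ctxKey{}).(uuid.UUID)
	return customerID, ok
}
```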
Ownership guard
Every read and every write starts with the same ownership check:
```go
shared.FindPortfolio(ctx, q, customerID, portfolioID)
```

Implementation: qtx.FindPortfolioByID(ctx, {ID, CustomerID}). The CustomerID is part of the WHERE clause — a customer asking for another customer's portfolio gets pgx.ErrNoRows → 404 (not 403). The server never reveals whether a UUID exists for someone else.
This guard runs inside a transaction for write paths and outside the cache lookup for read paths, so:
- Writes can never land on a stranger's portfolio even with a stale cache.
- Reads still serve the cached payload (which itself is keyed by portfolio ID, not customer ID), so the cache must be populated by an authenticated request to begin with.
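A minimal sketch of the guard described above, assuming PortfolioLookup declares the generated FindPortfolioByID query; the return type, parameter struct name, and the not-found sentinel are assumptions for illustration:

```go
// shared/guard.go (sketch). ErrPortfolioNotFound is an assumed sentinel that the
// HTTP layer maps to 404; existence is never leaked across customers.
func FindPortfolio(
	ctx context.Context,
	q PortfolioLookup,
	customerID, portfolioID uuid.UUID,
) (sqlc.FindPortfolioByIDRow, error) {
	row, err := q.FindPortfolioByID(ctx, sqlc.FindPortfolioByIDParams{
		ID:         portfolioID,
		CustomerID: customerID, // ownership enforced in the WHERE clause
	})
	if errors.Is(err, pgx.ErrNoRows) {
		return sqlc.FindPortfolioByIDRow{}, ErrPortfolioNotFound // surfaces as 404, never 403
	}
	return row, err
}
```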
TxRunner pattern
shared.TxRunner.Run is the only sanctioned way to mutate a portfolio.
```go
type TxRunner struct {
	pool *pgxpool.Pool
	q    *sqlc.Queries
	inv  *Invalidator
}

func (tx *TxRunner) Run(
	ctx context.Context,
	portfolioID uuid.UUID,
	scope CacheScope,
	fn func(qtx *sqlc.Queries) error,
) error {
	dbTx, err := tx.pool.Begin(ctx)
	if err != nil { return ... }
	qtx := tx.q.WithTx(dbTx)
	if err := fn(qtx); err != nil {
		_ = dbTx.Rollback(ctx)
		return err
	}
	if err := dbTx.Commit(ctx); err != nil { return ... }
	tx.inv.Apply(ctx, portfolioID, scope) // post-commit cache bust
	return nil
}
```

Three guarantees:
- No cache mutation on rollback. If fn fails, no DEL runs. The previous cache state stays.
- Cache bust is post-commit. A reader reading the cached payload between commit and bust still gets a stale-but-consistent snapshot — not a torn read.
- Opting out of cache invalidation requires an explicit CacheScopeNone. The scope argument is mandatory at compile time; behaviorally, cache busting is structurally enforced.
CacheScope values: CacheScopeNone, CacheScopeDetail, CacheScopeDetailAndValuation. Mapping per write op is enumerated in Caching.
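A minimal sketch of a write path going through the runner. The Run and FindPortfolio calls follow the contracts described on this page; the service shape, the query name, and the scope chosen here are illustrative assumptions:

```go
// dividends/service.go (sketch). The mutation and the ownership guard run inside
// the transaction; the cache is busted only after the commit succeeds.
func (s *Service) DeleteDividend(ctx context.Context, customerID, portfolioID, dividendID uuid.UUID) error {
	return s.tx.Run(ctx, portfolioID, shared.CacheScopeDetail, func(qtx *sqlc.Queries) error {
		// Ownership guard inside the same transaction as the write.
		if _, err := shared.FindPortfolio(ctx, qtx, customerID, portfolioID); err != nil {
			return err // rollback; the cache is left untouched
		}
		// SoftDeleteDividend is an assumed query name for illustration.
		return qtx.SoftDeleteDividend(ctx, sqlc.SoftDeleteDividendParams{
			ID:          dividendID,
			PortfolioID: portfolioID,
		})
	})
}
```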
Per-sub-package Store interfaces
Each tag package declares its own Store interface with only the queries it actually uses. *sqlc.Queries satisfies all of them via structural typing.
```go
// trades/store.go (excerpt)
type Store interface {
	shared.PortfolioLookup
	AddTrade(ctx context.Context, arg sqlc.AddTradeParams) (sqlc.AddTradeRow, error)
	FindTradesByPortfolio(ctx context.Context, arg sqlc.FindTradesByPortfolioParams) ([]sqlc.FindTradesByPortfolioRow, error)
	CountTradesByPortfolio(ctx context.Context, arg sqlc.CountTradesByPortfolioParams) (int64, error)
	FindTradeForUpdate(ctx context.Context, arg sqlc.FindTradeForUpdateParams) (sqlc.FindTradeForUpdateRow, error)
	UpdateTrade(ctx context.Context, arg sqlc.UpdateTradeParams) (sqlc.UpdateTradeRow, error)
	SoftDeleteTrade(ctx context.Context, arg sqlc.SoftDeleteTradeParams) error
	FindSettlementDate(ctx context.Context, tradeDate pgtype.Date) (pgtype.Date, error)
	IsTradingDay(ctx context.Context, tradeDate pgtype.Date) (bool, error)
	FindCurrentHoldingForUpdate(...) (...)
	WithTx(tx pgx.Tx) *sqlc.Queries
}
```

Methods called outside a transaction are listed on the Store. Methods called inside tx.Run(... fn(qtx)) go through the concrete *sqlc.Queries (qtx) and don't appear on the interface — there's no point widening the interface for tx-internal queries.
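The practical payoff is a narrow seam: each service depends on its own Store, and *sqlc.Queries plugs in unchanged. A sketch under that assumption (constructor and field shapes are illustrative):

```go
// trades/service.go (sketch). The compile-time assertion documents that the
// generated *sqlc.Queries satisfies the per-package interface.
var _ Store = (*sqlc.Queries)(nil)

type Service struct {
	store Store
	tx    *shared.TxRunner
}

func NewService(store Store, tx *shared.TxRunner) *Service {
	return &Service{store: store, tx: tx}
}
```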
Middleware responsibilities
| Middleware | Scope | Purpose |
|---|---|---|
| chimw.RequestID | global | correlation id header |
| httpmiddleware.RealIP | global | trust X-Forwarded-For from allowed proxies |
| chimw.Recoverer | global | last-resort panic catch |
| httpmiddleware.RequestLogger | global | structured access log |
| cors.Handler | global | CORS policy |
| httpmiddleware.BodyLimit | global | max body bytes |
| JWTAuth.Middleware | portfolio group | extracts customerID claim into ctx |
There is no portfolio-specific rate limiter. The global rate limit on /api/v1 applies. If you need finer-grained throttling (e.g. on AddTrade to prevent spam), add it at the route level.
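If that finer-grained throttle is ever needed, one option (not currently in the codebase) is github.com/go-chi/httprate applied to the write route only; the path, handler name, and limit values below are placeholders:

```go
// Hypothetical: throttle only the trade-creation route.
r.With(httprate.LimitByIP(30, time.Minute)).
	Post("/portfolios/{portfolioID}/trades", m.trades.AddTrade)
```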
Tx vs cache TTL race
The cache bust runs after Commit returns. A concurrent reader can populate the cache from a snapshot taken before commit, after which the bust DEL fires harmlessly. The window is bounded by:
max_window ≈ rtt(Begin..Commit) + rtt(Commit_ack..Apply)

In practice this is tens of milliseconds. The next read after the bust DEL will hit the DB and re-cache the post-commit snapshot. Detail TTL is 15s, so even a missed bust self-heals quickly. Valuation TTL is 10 minutes — write ops always bust both keys explicitly so a stale valuation can never wedge for that long.
This matches the prior monolith's behavior exactly. If you need stronger guarantees later, add a Redis pub/sub invalidation layer; don't change TxRunner.Run.
File layout
```
internal/modules/portfolio/
├── portfolio.go # 65 LOC — Module + NewModule + RegisterRoutes
├── smoke_db_test.go # integration test
│
├── shared/
│ ├── cache.go # Invalidator + TxRunner
│ ├── conversions.go # ParseTradeTime, ParseDate, DateFromStringPtr,
│ │ # DateToStringPtr, StringPtrNonEmpty
│ ├── guard.go # FindPortfolio
│ └── store.go # PortfolioLookup interface
│
├── holdings/
│ ├── replay.go # Rebuild — the FIFO engine
│ ├── replay_types.go # replayLot, replayConsumption, replayEvent
│ ├── daypnl.go # DayPnL
│ ├── view.go # HoldingResponse + HoldingFromRow
│ ├── store.go # Store interface (11 methods)
│ └── holdings_test.go
│
├── core/
│ ├── service.go # Create, FindAll, FindByID, FindDetail,
│ │ # FindSummary, FindValuation, Update, Delete
│ ├── handler.go # 7 handlers
│ ├── types.go # CreatePortfolioRequest, UpdatePortfolioRequest,
│ │ # Response, DetailResponse, SummaryResponse,
│ │ # SuspendedSummary, ValuationPoint, ValuationResponse
│ ├── valuation.go # buildValuation, valuationRangeStart,
│ │ # isValidValuationRange, uniqueCompanyIDsForValuation
│ ├── mappers.go # portfolioToDTO, detailToDTO
│ ├── store.go
│ ├── handler_test.go
│ └── valuation_test.go
│
├── companies/
│ ├── service.go # FindHoldings, FindDistribution, FindCompanyDetail
│ ├── handler.go # 3 handlers
│ ├── types.go # DistributionItem, DistributionResponse,
│ │ # CompanyPortfolioDetailResponse, ...
│ ├── distribution.go # buildDistribution
│ ├── mappers.go # companyDetailToDTO, holdingsFromRows, ...
│ ├── store.go
│ ├── handler_test.go
│ └── distribution_test.go
│
├── trades/
│ ├── service.go # AddTrade, FindTrades, UpdateTrade, DeleteTrade
│ ├── handler.go # 4 handlers + parseTradeFilter
│ ├── types.go # AddTradeRequest, UpdateTradeRequest, TradeResponse, TradeFilter
│ ├── calc.go # brokerRate, completeTradeFees, computedTradeTotal,
│ │ # mergeTradeUpdate, computeCgtMismatch, tradedAtWarnings
│ ├── mappers.go # 3 row→DTO mappers
│ ├── store.go
│ ├── handler_test.go
│ └── calc_test.go
│
├── dividends/
│ ├── service.go # 5 methods
│ ├── handler.go # 5 handlers
│ ├── types.go # CreateDividendRequest, UpdateDividendRequest, DividendResponse
│ ├── mappers.go
│ ├── store.go
│ └── handler_test.go
│
└── actions/
├── service.go # 4 methods (BONUS, RIGHT_ISSUE, IPO, FPO, AUCTION)
├── handler.go # 4 handlers
├── types.go # CreateCorporateActionRequest, ..., CorporateActionResponse
├── validation.go # corporateActionCreateParams, validateCorporateAction
├── mappers.go
├── store.go
├── handler_test.go
└── service_test.go
```

References
- Module: internal/modules/portfolio/portfolio.go
- Plumbing: internal/modules/portfolio/shared/{cache,guard,store,conversions}.go
- Domain: internal/modules/portfolio/holdings/{replay,daypnl,view,store}.go
- Refactor plan: docs/portfolio/PORTFOLIO_REFACTOR_PLAN.md
- Audit notes: docs/portfolio/PORTFOLIO_BRANCH_AUDIT.md