ADR-002: API Database Strategy — Schema-per-Module with pgx and sqlc

Date: 2026-02-19

Status: Accepted

Context

The Go API is a modular monolith where each module represents a bounded context (e.g., recipe, ingredient). As we add database persistence, we need to make some interconnected decisions:

  1. Module isolation — How to keep each module's data separate within PostgreSQL
  2. Database driver — How Go code talks to PostgreSQL
  3. Query layer — How we write and maintain SQL queries
  4. Migrations — How schema changes are applied and tracked

Key forces:

  • Modules are bounded contexts that should not share tables or have cross-module foreign keys
  • The team values explicit, readable SQL over abstracted query builders
  • The project is early-stage with one module (recipe); the strategy must scale to many modules without coordination overhead
  • Production deployments run multiple API instances simultaneously, so migrations must be safe to run independently

Decision

Schema-per-module isolation

Each module owns a dedicated PostgreSQL schema within the shared api database. All SQL uses schema-qualified table names (e.g., recipe.recipes). Each module's first migration creates its schema with CREATE SCHEMA IF NOT EXISTS.
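A module's first migration might look like the following (goose annotation format; the table and column names are illustrative, not the actual recipe schema):

```sql
-- +goose Up
CREATE SCHEMA IF NOT EXISTS recipe;

-- Illustrative table; real columns are module-specific.
-- Note the schema-qualified name: recipe.recipes, never bare recipes.
CREATE TABLE recipe.recipes (
    id          uuid PRIMARY KEY,
    title       text NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
);

-- +goose Down
DROP TABLE recipe.recipes;
DROP SCHEMA IF EXISTS recipe;
```

Because the schema is created in the module's own first migration with `IF NOT EXISTS`, modules can be added or migrated in any order.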

pgx v5 (native interface) as the database driver

We use jackc/pgx/v5 with pgxpool for connection pooling. A single pool is created at startup and injected into each module.
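The startup wiring can be sketched as follows. The `DATABASE_URL` environment variable and the `recipe.NewModule` constructor are assumptions for illustration, not part of this ADR:

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()

	// One pool for the whole process; pgxpool is safe for concurrent use.
	pool, err := pgxpool.New(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatalf("create pool: %v", err)
	}
	defer pool.Close()

	// Each module receives the shared pool via constructor injection,
	// e.g. (hypothetical constructor):
	// recipeModule := recipe.NewModule(pool)
}
```

Injecting one shared pool keeps connection counts predictable regardless of how many modules exist.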

sqlc for type-safe query generation

We use sqlc to generate Go code from annotated SQL queries. Each module has its own sqlc config at infra/sqlc/sqlc.yaml, generating code into infra/db/. This keeps the annotated SQL files as the single source of truth; the Go query layer is always generated, never hand-edited.
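A per-module config might look like the following sketch. The relative paths and options shown are assumptions about the layout, not verbatim project config:

```yaml
# infra/sqlc/sqlc.yaml -- illustrative contents
version: "2"
sql:
  - engine: "postgresql"
    schema: "../migrations"   # sqlc derives table shapes from the goose migration files
    queries: "../queries"     # hand-written SQL with `-- name: GetRecipe :one` annotations
    gen:
      go:
        package: "db"
        out: "../db"
        sql_package: "pgx/v5" # generate against pgx's native interface, not database/sql
```

Pointing `schema` at the migration directory is what makes the migrations, rather than any Go struct, the authoritative schema definition.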

goose v3 for migrations with per-module version tables

Each module has its own embedded SQL migration files and its own goose version table (goose_<module>_version). Migrations use the goose.NewProvider API (no global state). A dedicated cmd/migrate binary runs migrations in production; an optional DB_AUTO_MIGRATE env var enables startup migrations for local development.
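The per-module setup can be sketched like this, using goose's Provider API with a custom store for the module-specific version table. The package name, directory layout, and function are illustrative:

```go
package recipemigrate

import (
	"context"
	"io/fs"

	"embed"

	"github.com/jackc/pgx/v5/pgxpool"
	"github.com/jackc/pgx/v5/stdlib"
	"github.com/pressly/goose/v3"
	"github.com/pressly/goose/v3/database"
)

//go:embed migrations/*.sql
var migrations embed.FS

// Up applies the recipe module's migrations, tracked in its own
// goose_recipe_version table so modules never share migration state.
func Up(ctx context.Context, pool *pgxpool.Pool) error {
	sub, err := fs.Sub(migrations, "migrations")
	if err != nil {
		return err
	}
	// Module-specific version table via a custom store.
	store, err := database.NewStore(database.DialectPostgres, "goose_recipe_version")
	if err != nil {
		return err
	}
	// goose operates on *sql.DB; stdlib adapts the pgx pool.
	db := stdlib.OpenDBFromPool(pool)
	provider, err := goose.NewProvider("", db, sub, goose.WithStore(store))
	if err != nil {
		return err
	}
	_, err = provider.Up(ctx)
	return err
}
```

Because each provider is constructed locally (no package-level goose state), modules can migrate independently, and the cmd/migrate binary simply calls each module's Up in turn.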

Consequences

Positive

  • Strong module boundaries — Schemas enforce data isolation at the database level, making it impossible to accidentally join across modules
  • Independent versioning — Each module tracks migrations separately; adding a module requires no coordination with existing ones
  • Type safety from SQL — sqlc catches query errors at build time (wrong column names, type mismatches) rather than at runtime
  • Readable queries — SQL is written directly, not hidden behind an ORM's abstraction; any developer who knows SQL can read and write queries
  • High-performance driver — pgx's native interface provides full PostgreSQL type support, COPY protocol, and connection pooling without the overhead of database/sql abstraction
  • Safe production migrations — The dedicated migrate binary avoids race conditions when multiple API instances start simultaneously

Negative

  • Schema-qualified names are verbose — Every query must reference recipe.recipes instead of just recipes; this is intentional but adds visual noise
  • No cross-module joins — Reporting queries that span modules require application-level composition or a separate read model; this is a deliberate trade-off for module independence
  • sqlc requires regeneration — Changing a query or migration requires running make sqlc; forgetting this produces stale code (mitigated by CI checks and the generate pipeline)
  • pgx types in generated code — sqlc generates pgtype.UUID and pgtype.Timestamptz instead of stdlib types; repositories need a toDomain mapping layer to keep domain types clean
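The toDomain mapping layer mentioned above can be sketched as follows; the domain type, row type, and field names are illustrative stand-ins for sqlc's generated code:

```go
package recipe

import (
	"time"

	"github.com/google/uuid"
	"github.com/jackc/pgx/v5/pgtype"
)

// Recipe is the domain type: stdlib and uuid types only, so pgx never
// leaks past the repository boundary.
type Recipe struct {
	ID        uuid.UUID
	Title     string
	CreatedAt time.Time
}

// dbRecipe stands in for a sqlc-generated row type (illustrative).
type dbRecipe struct {
	ID        pgtype.UUID
	Title     string
	CreatedAt pgtype.Timestamptz
}

// toDomain converts a sqlc row into the clean domain type.
func toDomain(r dbRecipe) Recipe {
	return Recipe{
		ID:        uuid.UUID(r.ID.Bytes), // pgtype.UUID wraps a [16]byte
		Title:     r.Title,
		CreatedAt: r.CreatedAt.Time,
	}
}
```

The cost is one small, mechanical function per entity; the benefit is that domain code compiles without importing pgx.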

Alternatives Considered

1. Separate logical database per module

Each module would get its own PostgreSQL database (e.g., recipe, ingredient) instead of a schema within a shared database.

Rejected because:

  • Requires multiple connection pools (one per database) since PostgreSQL connections are scoped to a single database, wasting resources
  • Cross-module queries (even for operational/debugging purposes) require dblink or postgres_fdw
  • The Docker init script already creates a single api database; schemas provide equivalent isolation with simpler operational characteristics
  • If we outgrow schemas, extracting a module to its own database is a straightforward migration path

2. GORM (ORM)

Full-featured Go ORM with struct-tag-based schema definition, auto-migrations, and a fluent query builder.

Rejected because:

  • Hides SQL behind an abstraction layer, making it harder to reason about what queries actually run
  • Auto-migration is not safe for production use; we need explicit, versioned migrations regardless
  • Struct tags for schema definition conflict with our SQL-first philosophy
  • Runtime reflection for query building adds overhead and makes errors appear at runtime instead of build time

3. sqlx (+ raw SQL)

Extension of database/sql that adds struct scanning, named parameters, and Select/Get helpers. Often paired with hand-written SQL.

Rejected because:

  • Queries are embedded as strings with no compile-time validation — typos in column names or type mismatches only surface at runtime
  • Requires database/sql which adds a layer of abstraction over pgx's native interface, losing features like pgtype support and COPY
  • sqlc provides the same "write SQL, get Go code" workflow but with build-time type safety

4. squirrel (query builder)

Fluent SQL query builder that generates parameterized SQL strings.

Rejected because:

  • Queries are composed in Go code rather than written as SQL — harder to read, review, and paste into psql for debugging
  • No type safety for the result set; still need manual scanning
  • Adds a runtime dependency for something that can be done at build time with sqlc

5. scany (struct scanner)

Generic struct scanning library for database/sql or pgx rows.

Rejected because:

  • Only solves the scanning problem, not query validation — still need to write and maintain raw SQL strings
  • sqlc generates both the queries and the scanning code, making scany redundant
  • Runtime reflection for struct mapping vs sqlc's compile-time generation

6. Ent (entity framework)

Code-generation-based ORM by Facebook/Meta with a schema-as-Go-code approach.

Rejected because:

  • Schema is defined in Go code rather than SQL — the generated migrations are harder to review and customize
  • Heavy framework with its own graph-based query API that diverges from standard SQL patterns
  • Overkill for our current needs; adds significant dependency weight
  • The code generation approach is interesting but schema-as-SQL (via sqlc reading migration files) is more transparent