Research Notes
Query: NestJS TypeScript REST API WebSocket architecture best practices 2026
For a modern NestJS backend that serves both a REST API and WebSockets, treat HTTP and WS as first‑class but separate concerns, built on a modular, layered architecture, with clear domain boundaries, shared application services, and infrastructure focused on scaling (Redis, microservices) and robustness.
Below is an architecture and best‑practices checklist you can use as a reference design for 2025–2026 projects.
1. High‑level architecture
Core principles
- Hexagonal / clean architecture:
  - Domain layer: entities, value objects, domain services (framework‑agnostic).
  - Application layer: use cases/services orchestrating domain logic.
  - Interface layer: REST controllers, WebSocket gateways, GraphQL resolvers, etc.
  - Infrastructure layer: database adapters, message brokers, cache, 3rd‑party APIs.
- Nest modules for vertical slicing
  - One module per domain (e.g. `UsersModule`, `ChatModule`, `OrdersModule`), each exposing:
    - REST controllers for HTTP
    - gateways for WebSocket events
    - shared providers (services, repositories).
  - This modular approach is explicitly recommended for scalable NestJS apps.[7][8]
- Transport separation, logic reuse
  - REST controllers and WebSocket gateways should call the same application services, not duplicate business logic.
  - Gateways are just another interface adapter, as described in the Nest docs on WebSocket gateways.[6]
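That reuse can be sketched without any framework code. The `ChatService`, `ChatController`, and `ChatGateway` names below are illustrative; in a real app the two adapters would carry Nest's `@Controller()` and `@WebSocketGateway()` decorators, but the point is that both delegate to the same service:

```typescript
// Storage abstraction so the service stays framework- and DB-agnostic.
interface MessageStore {
  save(text: string): void;
  all(): string[];
}

class InMemoryMessageStore implements MessageStore {
  private messages: string[] = [];
  save(text: string) { this.messages.push(text); }
  all() { return [...this.messages]; }
}

// Application service: the ONLY place the "post a message" rule lives.
class ChatService {
  constructor(private readonly store: MessageStore) {}
  postMessage(text: string): string {
    const trimmed = text.trim();
    if (trimmed.length === 0) throw new Error("empty message");
    this.store.save(trimmed);
    return trimmed;
  }
}

// REST adapter: maps an HTTP body to the service call.
class ChatController {
  constructor(private readonly chat: ChatService) {}
  create(body: { text: string }) {
    return { message: this.chat.postMessage(body.text) };
  }
}

// WS adapter: maps a socket event payload to the SAME service call.
class ChatGateway {
  constructor(private readonly chat: ChatService) {}
  handleMessage(payload: string) {
    return this.chat.postMessage(payload);
  }
}
```

Because the validation/trimming rule lives in `ChatService`, neither transport can drift out of sync with the other.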
2. REST API best practices (NestJS + TypeScript)
- Controllers
  - Thin controllers that:
    - Map DTOs to method arguments.
    - Delegate to service/use‑case classes.
    - Map results to HTTP responses.
  - Use route versioning (`/v1`, `/v2`) and Nest’s versioning support for breaking changes.
- DTOs, validation, transformation
  - Use `class-validator` + `class-transformer` and a global `ValidationPipe` for all REST inputs.
  - Keep REST DTOs separate from WebSocket payload DTOs when they differ.
- Error handling
  - Use a global exception filter (e.g. `HttpExceptionFilter`) to normalize the API error shape.
  - Map domain errors to HTTP status codes consistently.
- Security
  - JWT or OAuth2 for auth, Nest Guards for authorization.
  - Rate‑limit sensitive endpoints with Nest interceptors or a gateway like API Gateway / NGINX.
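To make the rate‑limiting idea concrete, here is a hand-rolled fixed-window limiter. This is a sketch only, with invented names; in a real NestJS app `@nestjs/throttler` or the reverse proxy would normally do this:

```typescript
// Fixed-window rate limiter keyed by client id (sketch; prefer
// @nestjs/throttler or proxy-level limiting in production).
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(
    private readonly limit: number,
    private readonly windowMs: number,
    // Injectable time source so tests can control "now".
    private readonly now: () => number = Date.now,
  ) {}

  // Returns true if the request is allowed in the current window.
  allow(clientId: string): boolean {
    const t = this.now();
    const entry = this.hits.get(clientId);
    if (!entry || t - entry.windowStart >= this.windowMs) {
      this.hits.set(clientId, { count: 1, windowStart: t });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

Wrapped in a Nest guard, `allow()` would run per request and return a 429 when it yields `false`.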
3. WebSocket architecture in NestJS
3.1. Gateways as interface layer
- Use Nest WebSocket gateways
  - A gateway class annotated with `@WebSocketGateway()` is the entry point for WS connections.[6]
  - Example (Socket.IO):

    ```typescript
    @WebSocketGateway({ namespace: '/chat', cors: true })
    export class ChatGateway {
      @WebSocketServer()
      server: Server;

      @SubscribeMessage('message')
      handleMessage(
        @MessageBody() data: string,
        @ConnectedSocket() client: Socket,
      ) {
        client.broadcast.emit('message', data);
      }
    }
    ```

    This pattern is standard in 2024–2025 NestJS WebSocket guides.[2][3][6]
- Lifecycle hooks
  - Implement `afterInit`, `handleConnection`, `handleDisconnect` to manage:
    - Connection registry (online presence).
    - Resource allocation / cleanup.[1][2][3][6]
- Namespaces and rooms
  - Use namespaces per domain (`/chat`, `/notifications`, `/trading`) and rooms per context (user, group, document) to keep broadcasting efficient.[2]
3.2. WebSocket best practices (2025–2026)
Security[1][2][3][4]
- Authenticate during the handshake:
- Pass JWT in query, header, or cookie; validate in a guard or middleware.
- Authorize events:
- Use per‑event guards or authorization services.
- Prevent clients from subscribing to rooms they are not allowed to see.
- Validate all payloads:
- Use pipes or schema validation (e.g. Zod) for WS events, similar to HTTP.[4]
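A dependency-free sketch of per-event payload validation. The `joinRoom` event and its payload shape are invented for illustration; Zod or class-validator with a Nest pipe would replace the hand-written guard:

```typescript
// Hypothetical payload for a "joinRoom" WS event.
interface JoinRoomPayload {
  roomId: string;
  since?: number; // optional cursor, e.g. last-seen timestamp
}

// Type guard: narrows unknown socket data to the expected shape.
function isJoinRoomPayload(value: unknown): value is JoinRoomPayload {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  if (typeof v.roomId !== "string" || v.roomId.length === 0) return false;
  if (v.since !== undefined && typeof v.since !== "number") return false;
  return true;
}

// Handler rejects malformed payloads instead of trusting the socket.
function handleJoinRoom(raw: unknown): string {
  if (!isJoinRoomPayload(raw)) throw new Error("invalid joinRoom payload");
  return `joined ${raw.roomId}`;
}
```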
Connection handling[1][2][3]
- Track connections per user (userId → socketIds).
- Clean up on `handleDisconnect` (presence, locks, subscriptions).
- Avoid long‑running work in WS handlers; delegate to async services/queues.
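The userId → socketIds tracking can be as simple as a map of sets. A minimal sketch (the class name is illustrative):

```typescript
// Presence registry: userId -> set of live socket ids.
class ConnectionRegistry {
  private sockets = new Map<string, Set<string>>();

  // Called from handleConnection after the handshake is authenticated.
  add(userId: string, socketId: string) {
    let set = this.sockets.get(userId);
    if (!set) {
      set = new Set();
      this.sockets.set(userId, set);
    }
    set.add(socketId);
  }

  // Called from handleDisconnect; returns true if the user went offline
  // (last socket closed), which is the moment to broadcast presence.
  remove(userId: string, socketId: string): boolean {
    const set = this.sockets.get(userId);
    if (!set) return false;
    set.delete(socketId);
    if (set.size === 0) {
      this.sockets.delete(userId);
      return true;
    }
    return false;
  }

  isOnline(userId: string): boolean {
    return this.sockets.has(userId);
  }
}
```

For multi-instance deployments the same structure would live in Redis rather than in process memory.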
Performance & scalability[1][2][3][7]
- Offload heavy work to:
- Background jobs (Bull/BullMQ), microservices, or message queues.
- Use Redis pub/sub or similar for horizontal scaling:
- Shared adapter for Socket.IO so messages propagate across instances.[1][2]
- Monitor:
- Event loop lag, memory, open sockets, message throughput.[2][7]
Resilience[1][2]
- Configure client‑side reconnect and exponential backoff.
- Handle network partitions, stale sockets, and replay/duplicate events at the application layer.
4. Integrating REST and WebSockets cleanly
- Pattern: REST for CRUD, WS for realtime
  - REST:
    - Resource creation, updates, queries, pagination.
  - WebSocket:
    - Push updates (notifications, status changes, live data).
    - Collaborative operations (presence, typing indicators, etc.).
- Event flow (typical pattern)
  1. Client calls the REST API to change state (e.g. create a message).
  2. The application service persists the change and publishes a domain event.
  3. The WS gateway (or a microservice) listens to events and broadcasts to interested clients (rooms/users).
- Avoid duplicating write paths
  - Prefer one canonical mutation path (often REST or a command bus) and use WS mostly for propagating reads/updates.
  - If you accept writes over WS (e.g. chat messages), a single service should handle HTTP and WS commands identically.
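A minimal sketch of that event flow, using Node's built-in `EventEmitter` as a stand-in for a real event bus (Redis pub/sub, a broker, etc.); all names are illustrative:

```typescript
import { EventEmitter } from "node:events";

// Domain event published by the canonical write path (REST),
// consumed by the WS side for broadcasting.
interface MessageCreated {
  roomId: string;
  text: string;
}

const bus = new EventEmitter();

// Application service: the single canonical mutation path.
class MessageService {
  private messages: MessageCreated[] = [];

  create(roomId: string, text: string): MessageCreated {
    const event: MessageCreated = { roomId, text };
    this.messages.push(event);          // persist (in-memory here)
    bus.emit("message.created", event); // publish domain event
    return event;
  }
}

// WS side: subscribes and would broadcast to the room's sockets.
const broadcasts: string[] = [];
bus.on("message.created", (e: MessageCreated) => {
  broadcasts.push(`room ${e.roomId}: ${e.text}`);
});
```

The gateway never mutates state itself; it only reacts to events, so HTTP and WS clients always observe the same writes.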
5. Scaling and microservices
- Nest microservices
  - Nest supports microservices and multiple transports (Redis, NATS, Kafka, MQTT, etc.).[5]
  - Use microservices for:
    - High‑traffic domains (chat, notifications, analytics).
    - Decoupling heavy real‑time loads from the main REST API.
- Pattern
  - An API gateway / BFF exposes REST + WebSocket.
  - Internal microservices handle business logic and communicate via a message broker.
  - The gateway uses Redis pub/sub or the broker to broadcast results to WS clients.
6. Cross‑cutting concerns (2026‑ready)
- Type safety
  - Use TypeScript generics and shared contract types for REST DTOs and WS payloads.
  - Consider a shared package (`@app/contracts`) for request/response/event types.
- Logging and observability[1][2][3][7]
  - Structured logging from controllers and gateways (requestId / correlationId).
  - Metrics: number of connections, messages per second, failed events, response times.
  - Tracing: OpenTelemetry across REST and WS flows.
- Configuration & environment
  - Use Nest’s `ConfigModule` for environment‑based configuration (DB/Redis URLs, CORS, allowed origins).
  - Keep WS CORS rules aligned with HTTP CORS.
- Testing
  - Unit test:
    - Services and domain logic in isolation.
  - Integration test:
    - REST endpoints via `@nestjs/testing`.
    - WebSockets using a Socket.IO client in tests to assert message flows.
7. Practical checklist
When designing a NestJS REST + WebSocket service for 2026, ensure:
- Architecture
- Domain‑driven modules, clean layering, shared services.
- REST
- Versioned, validated, documented (Swagger), with unified error format.
- WebSockets
- Gateway per domain, namespaces/rooms, lifecycle hooks, validated payloads, authenticated handshakes.[1][2][3][6]
- Scaling
- Redis adapter for WS, microservices or queues for heavy tasks.[1][2][5][7]
- Security & robustness
- Guards, rate‑limit on HTTP, message validation, connection management, observability.[1][2][3][4][7][8]
If you share your specific use case (e.g., trading app, collaborative editor, chat, IoT), I can propose a concrete module layout and message/REST design aligned with these practices.
Query: React Vite calendar drag-drop FullCalendar react-big-calendar performance optimization 2026
For heavy drag-and-drop calendar UIs in React/Vite (e.g., FullCalendar or react-big-calendar) with many events, you get the best performance by combining library-level optimizations (virtualization, progressive/partial rendering) with React-level memoization and minimal rerenders.
Below is a focused checklist, split by library and then general React/Vite strategies.
1. When to use FullCalendar vs react-big-calendar vs alternatives
- FullCalendar
- Strong feature set (drag/drop, resource views, Scheduler, etc.).
- Struggles when rendering many events because it renders all events in the DOM and historically re-rendered everything on drag/drop.[2]
- From v6.1.18, event rerendering was optimized so that only modified events are rerendered, not all.[2]
- Roadmap: v7 adds optimizations; v7.1+ mentions virtual rendering as a goal.[2] v8/v9 roadmap continues performance work.[4]
- react-big-calendar
- React-friendly; uses Flexbox instead of table layout, which was originally cited as a possible performance improvement over FullCalendar’s table layout.[2]
- No built‑in virtualization; performance drops with hundreds+ of events similar to FullCalendar.[5]
- High-performance alternatives for 2025–2026
- Bryntum Calendar / Scheduler: virtual rendering, minimal DOM footprint, advanced performance features.[3][5]
- DayPilot: progressive rendering, on-demand loading, partial updates, optimized for heavy workloads.[3]
- Planby: React timeline/calendar component with virtual rendering, reported ~3× faster than FullCalendar for 500+ events.[1][3]
If you need thousands of events with smooth drag/drop, consider Bryntum, DayPilot, or Planby before trying to push FullCalendar/react-big-calendar to their limits.[1][3][5]
2. FullCalendar + React/Vite performance strategies
2.1 Use the latest FullCalendar with optimized rerenders
- Use FullCalendar v6.1.18+ or v7 when available:
- Event updates now rerender only modified events, fixing the “all events rerender on drag/drop or update” issue.[2]
- This greatly cuts CPU time when dragging or updating single events in large views.
- For frequent serial updates (e.g., rapidly mutating events), set a rerender delay:

  ```typescript
  const options = {
    rerenderDelay: 100, // ms
  };
  ```

  This batches rerenders and significantly reduces main-thread work.[2]
2.2 Reduce DOM and event complexity
- Filter data before passing it to FullCalendar.
- Only pass events in (or near) the visible date range instead of your whole dataset.
- Use backend pagination or API parameters to fetch only what is needed.
- Avoid unnecessary custom DOM in `eventContent`:
  - Keep event render content minimal; heavy React trees inside each event will dominate render cost.
  - Prefer simple markup and minimal React state inside event content.
2.3 Avoid re-creating props on every render
In your React wrapper around FullCalendar:
- Memoize events and other large props:

  ```typescript
  const events = useMemo(() => transformRawEvents(rawEvents), [rawEvents]);
  ```

- Memoize callbacks passed to FullCalendar (e.g., `eventDrop`, `eventClick`) using `useCallback` so React doesn’t think props changed on every render.
2.4 Defer heavy work off the main thread
- For large transforms (e.g., normalizing thousands of events), use:
- Web Workers
- Debounced/batched updates (e.g., only recompute after user stops dragging for X ms).
- Precompute layout data on the server if possible (e.g., start/end times and conflicts) so the client only renders.
3. react-big-calendar performance strategies
react-big-calendar lacks built-in virtualization, so the focus is on minimizing React work:
- Use `React.memo` for all custom components (event renderer, toolbar, custom headers).
- Memoize the `events` array and do not recreate it on every render:

  ```typescript
  const events = useMemo(() => toBigCalendarEvents(rawEvents), [rawEvents]);
  ```

- Avoid storing large event lists in multiple layers of state; one source of truth is enough.
- When drag/drop is enabled:
- Update the single affected event in place and reuse the same array reference when possible, or use a keyed immutable update that doesn’t require rebuilding the whole list.
- Keep custom event and slot renderers simple; avoid heavy trees inside each cell.
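A keyed immutable update of a single event can look like this (a sketch; the event shape is illustrative). Untouched elements keep their object identity, so memoized renderers comparing by reference can skip re-rendering them:

```typescript
interface CalendarEvent {
  id: string;
  title: string;
  start: Date;
  end: Date;
}

// Replace only the moved event; every other element is returned as-is,
// preserving referential equality for React.memo'd event components.
function moveEvent(
  events: readonly CalendarEvent[],
  id: string,
  start: Date,
  end: Date,
): CalendarEvent[] {
  return events.map((e) => (e.id === id ? { ...e, start, end } : e));
}
```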
If you still experience lag with 1000+ events, consider switching to a virtualized scheduler (Bryntum/DayPilot/Planby).[1][3][5]
4. General React performance patterns (for any calendar)
These apply for FullCalendar, react-big-calendar, or alternatives:
- Avoid global rerenders
- Use state libraries that support fine-grained updates (e.g., Zustand, Jotai, Redux with careful selectors) so updating one event doesn’t rerender the entire app.
- Memoize everything passed into the calendar: `events`, `resources`, `views`, and handlers like `onEventDrop`, `onSelectSlot`, etc.
- Virtualization where possible
- If your calendar library exposes a way to control rendering of rows or resources, implement your own virtualization or use a library that already does this (Bryntum, DayPilot, Planby).[1][3][5]
- Throttle drag/drop-driven updates
- Do not persist to server or update global state on every drag movement.
- Use `onEventDrop` (or equivalent) for final commits; only show local/optimistic feedback during drag.
- Keep React DevTools and logs off in production; they can distort performance tests.
5. Vite-specific considerations
Vite itself is very fast; issues are nearly always in runtime React, not bundling. Still:
- Use production builds (`vite build` + serve) when testing performance; dev mode adds overhead.
- Configure code splitting to keep the calendar and its heavy dependencies in separate chunks so the initial load is smaller.
- Avoid bundling multiple calendar libraries simultaneously unless needed.
6. If you’re starting a new project in 2025–2026
For a React + Vite calendar with drag/drop and many events:
- If you need enterprise-level scale (thousands of events, many resources, smooth interaction):
- Consider Bryntum Calendar/Scheduler or DayPilot React Scheduler for built-in virtual rendering and progressive loading.[3][5]
- If you need a timeline-style UI (TV guide, bookings, shifts) with virtual scroll:
- Consider Planby, which was specifically built for large event sets and is reported ~3× faster than FullCalendar with 500+ events.[1]
- If you are already invested in FullCalendar:
  - Upgrade to the latest v6.1.18+ and adopt `rerenderDelay`, data filtering, and memoization as above.[2][3]
If you share your current stack (FullCalendar or react-big-calendar), approximate event count, and whether you use resources/timeline views, I can propose a more concrete configuration and small code snippets tailored to your case.
Query: NestJS PostgreSQL Docker Compose self-hosted deployment security best practices
For a self‑hosted NestJS + PostgreSQL stack on Docker Compose, focus on hardening containers, networking, secrets, and database configuration, plus general app security.
Below is a concise checklist you can adapt.
1. Docker & Compose hardening
- Use multi‑stage builds & small base images
  - Build in a `builder` stage and copy only the compiled `dist` + production deps into a minimal runtime image (e.g. `node:alpine`).[3][5]
  - Avoid shipping dev tools, build tools, tests, and `.git` into the final image.
- Run as non‑root
  - Set `USER node` (or another unprivileged user) in the final stage, not root.[3][4]
  - Ensure mounted volumes and files are readable by that user, not world‑writable.
- Set production environment
  - Set `NODE_ENV=production` in the final image so that frameworks and libraries use hardened, production defaults.[3]
- Read‑only filesystem where possible
  - For the API container, keep the filesystem mostly read‑only and write only to explicit volumes (logs, temp, etc.).
- Limit container capabilities
  - In `docker-compose.yml` add:

    ```yaml
    cap_drop:
      - ALL
    read_only: true
    ```

    and selectively add back what you truly need.
2. Network isolation & exposure
- Use a private Docker network
  - Attach NestJS and PostgreSQL to a dedicated user‑defined network so only those services can talk to each other.[2]
  - Do not publish the DB port on the host unless truly required:

    ```yaml
    services:
      api:
        networks:
          - app_net
      db:
        networks:
          - app_net
        # avoid: ports: ["5432:5432"]

    networks:
      app_net:
        driver: bridge
    ```
- Restrict PostgreSQL listen addresses
  - In `postgresql.conf`, set:

    ```
    listen_addresses = '0.0.0.0'  # inside Docker; reachable only via app_net
    ```

    or even the explicit container IP if you manage it carefully.[2]
- Single public entrypoint
  - Only expose the NestJS container (or, better, a reverse proxy like Nginx/Traefik) to the internet.
  - PostgreSQL must never be directly reachable from the public network.
3. Secrets & configuration (NestJS + Postgres)
- Avoid hard‑coded secrets and plain env files
  - Do not check `.env` into version control.
  - For production, use Docker secrets or an external secret manager:[2]

    ```yaml
    services:
      db:
        environment:
          POSTGRES_PASSWORD_FILE: /run/secrets/pg_passwd
        secrets:
          - pg_passwd

    secrets:
      pg_passwd:
        external: true
    ```
- Separate config per environment
  - Use different `.env`/secret sets for dev, staging, prod.
  - Ensure DB name/user/password differ per environment.
- NestJS config management
  - Use `@nestjs/config` or a similar centralized config module; never commit secrets into code.
  - Validate config (e.g. via `Joi`) on startup to avoid misconfigurations.
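A minimal sketch of startup-time config validation, with hand-written checks standing in for a `Joi` or `zod` schema; all names and keys are invented for illustration:

```typescript
// Validated application config (illustrative shape).
interface AppConfig {
  dbHost: string;
  dbPort: number;
  nodeEnv: "development" | "production" | "test";
}

// Fail fast at startup instead of at the first DB query.
function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const missing = ["DB_HOST", "DB_PORT", "NODE_ENV"].filter((k) => !env[k]);
  if (missing.length > 0) {
    throw new Error(`missing config: ${missing.join(", ")}`);
  }
  const port = Number(env.DB_PORT);
  if (!Number.isInteger(port) || port <= 0) {
    throw new Error(`invalid DB_PORT: ${env.DB_PORT}`);
  }
  const nodeEnv = env.NODE_ENV as AppConfig["nodeEnv"];
  if (!["development", "production", "test"].includes(nodeEnv)) {
    throw new Error(`invalid NODE_ENV: ${env.NODE_ENV}`);
  }
  return { dbHost: env.DB_HOST!, dbPort: port, nodeEnv };
}
```

With `@nestjs/config`, the same logic would live in the module's `validate` option so boot aborts on a bad environment.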
4. PostgreSQL security in Docker
- Use a maintained image & pinned versions
  - Use official/maintained images (`postgres:X.Y` or `bitnami/postgresql`) and pin a major/minor version to avoid surprises.[2][3]
- Strong credentials and least privilege
  - Use strong passwords for the `POSTGRES_USER` and for the application DB user.
  - Create a dedicated DB user for the NestJS app with only the required privileges (no `SUPERUSER`).
- Persistent volumes with proper permissions
  - Mount a volume for data:

    ```yaml
    volumes:
      - pgdata:/var/lib/postgresql/data
    ```

  - Ensure the volume is only accessible to the postgres user inside the container.
- TLS/SSL for DB connections
  - Where network hops extend beyond a single host (or you don’t fully trust the network), enable SSL/TLS in PostgreSQL and require SSL in the NestJS connection string.[2]
- Connection limits & resource tuning
  - Set reasonable limits (e.g. `max_connections`, `work_mem`) to mitigate DoS from misbehaving app instances.
5. NestJS application security
- Framework‑level security
  - Disable detailed error messages and stack traces in production.
  - Use security middleware: Helmet for HTTP headers, rate‑limiting, and CORS with strict allowed origins.[1]
- Authentication & session security
  - Use a vetted auth library (JWT, session management, or a solution like SuperTokens) and store tokens securely (e.g. HTTP‑only cookies).[1]
  - Implement CSRF protection if you use cookie‑based auth.
- Input validation & sanitization
  - Use `class-validator` and `class-transformer` globally to validate DTOs.
  - Sanitize inputs to mitigate injection; never concatenate untrusted data into SQL: use an ORM or parameterized queries.
- Logging & monitoring
  - Centralize logs per container and avoid logging secrets.
  - Enable basic health checks in Docker (e.g. `pg_isready` for Postgres, a `/health` endpoint for NestJS).[1][3]
6. Compose example (security‑aware skeleton)
```yaml
version: "3.9"

services:
  api:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      NODE_ENV: production
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: app_user
      DB_NAME: app_db
      # DB_PASSWORD via secret or env in production
    depends_on:
      - db
    networks:
      - app_net
    ports:
      - "80:3000"
    restart: unless-stopped
    user: "1000:1000" # non-root
    read_only: true

  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_DB: app_db
      POSTGRES_USER: app_user
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - app_net
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U app_user"]
      interval: 5s
      timeout: 5s
      retries: 5
    secrets:
      - db_password

networks:
  app_net:
    driver: bridge

volumes:
  pgdata:

secrets:
  db_password:
    external: true
```
7. Operational practices
- Keep images up to date with security patches; automate rebuilds and deploys.
- Regularly back up PostgreSQL (physical or logical backups) and test restores.
- Limit SSH and Docker daemon access; treat it as production infrastructure.
- Use a firewall / security groups to only allow app and admin traffic where needed.
If you share your current Dockerfile and docker-compose.yml, I can point out specific changes to harden your setup.
Query: TypeScript monorepo project structure NestJS React common mistakes pitfalls
A solid TypeScript monorepo with NestJS (backend) and React (frontend) usually fails not because of tools, but because of structure and boundaries. Below are the most common mistakes and how to avoid them, specifically for a Nest + React monorepo.
1. Treating the monorepo as one big TS project
Mistake: Single tsconfig.json at the root, all code in one program, no project references.
Problems:
- Slow `tsc` and IDE responsiveness as the repo grows.[3][5]
- No clear build order between backend, frontend, and shared libs.
- Harder to run isolated builds in CI.
Better:
- Root `tsconfig.base.json` with shared compiler options.[1][3]
- Each app/lib has its own `tsconfig.json` and uses `references` to shared packages.[3][5]
- Build with `tsc --build` (or via Nx/Turbo) from the root so TS respects the dependency graph.[3][5]
2. No clear separation between apps and shared libraries
Mistake: Nest and React importing each other’s code directly via relative paths instead of through shared packages.
Problems:
- Circular dependencies (e.g., React importing from `apps/api/src/...` and Nest importing from `apps/web/src/...`).
- Accidental leaking of backend-only code into the frontend bundle (e.g., Node APIs in the browser).
Better structure:
- `apps/api` – NestJS app
- `apps/web` – React app
- `packages/shared-domain` – pure domain logic, types, DTOs (no Nest/React-specific code)
- `packages/shared-config` – environment/config types, config helpers (no framework globals)
Use package boundaries:
- Frontend imports only from `packages/*`.
- Backend imports from `packages/*` plus its own `apps/api/*`.
3. Sharing “too much” code between Nest and React
Mistake: Putting everything common (including Nest decorators, pipes, React hooks) into one “shared” package.
Problems:
- Shared package becomes framework-dependent and unusable on the other side.
- React app may accidentally import Nest-only code, causing bundling/runtime failures.
Better:
- Keep shared packages framework-agnostic: domain models, validation schemas, DTOs, API types.
- Have framework-specific adapters:
  - `packages/nest-adapters` (uses shared DTOs but also Nest decorators).
  - `packages/react-hooks` (uses shared types/DTOs but React-specific logic).
4. Ignoring module boundaries and coupling Nest modules with React routes
Mistake: Nest modules knowing about React routing or component structure, or React directly calling Nest internal modules instead of HTTP APIs.
Problems:
- Tight coupling across layers; refactoring either side becomes expensive.
- Impossible to test backend separately without frontend.
Better:
- The boundary between Nest and React is always a protocol:
- REST/GraphQL schema, or
- shared API type definitions in a common package.
- React talks only to HTTP endpoints; Nest exposes controllers/services internally, not React-specific abstractions.
5. Bad path alias and import strategy
Mistake:
- Using long relative paths (`../../../`) everywhere.
- Path aliases defined differently in TypeScript vs the bundler (e.g., Vite/Webpack) vs the Node runtime.
- Using `tsconfig` paths without aligning them with your workspace tool.[1][3][7]
Problems:
- Code compiles in editor but fails at runtime.
- Confusing circular imports and build errors.
Better:
- Define a root-level `tsconfig.base.json` with `baseUrl` and `paths` and extend it from app/lib `tsconfig`s.[1][3][7]
- Make sure the bundler and test runner resolve aliases the same way (e.g., Jest `moduleNameMapper`, Vite/Webpack `alias`).
- Use package imports (`@project/shared-domain`) instead of deep internal paths where possible.
6. Missing or misusing workspace tooling (Yarn/NPM/pnpm + Nx/Turbo)
Mistake:
- Manual `cd apps/api && npm run build` everywhere.
- No topological build order or caching.[3][4]
Problems:
- Rebuilding everything on every CI run.
- Subtle build-order bugs: React built before the shared package it uses, etc.
Better:
- Use workspaces (Yarn/pnpm/npm) for package linking and dependency management.[2][3][4]
- Use a monorepo tool like Nx or Turborepo to:
  - infer the dependency graph,
  - run `build`/`test` in topological order with caching.[3][4]
- Expose a single root command: e.g. `nx run-many --target=build` or `yarn workspaces foreach --topological-dev run build`.[3][4]
7. Inconsistent tooling config (ESLint, Prettier, Jest/Vitest)
Mistake:
- Each app has its own slightly-different ESLint/Prettier/Jest config.
- Some packages use strict TS rules, others don’t.
Problems:
- Inconsistent code quality and formatting.
- Harder onboarding and surprise build failures.
Better:
- Root shared config files:
  - `eslint.base.js` and small app-level extensions.
  - A Prettier config at the root.[5]
  - A shared Jest/Vitest base config; each app adds its own transforms.
- Ensure test runners understand TS project references and path aliases.
8. Wrong granularity of packages
Mistake:
- Either: one giant `shared` package with everything.
- Or: dozens of tiny packages for every small utility function.
Problems:
- Giant shared package: no clear boundaries, difficult to version.
- Tiny packages: dependency graph and tooling overhead become unmanageable.
Better:
- Package around cohesive domains, not individual functions:
  - `shared-domain`, `shared-api-types`, `shared-ui` (if you truly have cross-app UI), etc.
- Keep packages independent and acyclic: avoid cycles in dependencies.[5]
9. Not using TypeScript project references correctly
Mistake:
- Setting `references` in `tsconfig` but still running plain `tsc` or `tsc -p` without `--build`.[3]
Problems:
- You get none of the incremental build benefits.
- Editors and CI may behave differently.[3][5]
Better:
- Use `tsc --build` (or `tsc -b`) from the root to respect project references and incremental builds.[3][5]
- Ensure each referenced project has:
  - `"composite": true`
  - `"declaration": true`
- Use watch mode (`tsc -b --watch`) during development where appropriate.[5]
10. Environment and config confusion between Nest and React
Mistake:
- Using the same `.env` or config-loading code in both server and client without differentiating secrets vs public values.
- Directly importing server-only config from React.
Problems:
- Secrets leaked to frontend bundles.
- Hard-to-debug environment mismatch between apps.
Better:
- Shared config types in a package (`Config`, `PublicConfig`).
- Implementation separated:
  - Nest reads from process env, files, secrets managers.
  - React uses build-time env injection (e.g., the Vite/CRA/Next pattern).
  - Only the public subset goes to the client.
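One way to make the public subset explicit is a typed projection; a sketch with invented config keys:

```typescript
// Full server config vs. the public subset that may reach the browser.
interface Config {
  apiUrl: string;
  sentryDsn: string;
  dbPassword: string; // server-only secret
  jwtSecret: string;  // server-only secret
}

type PublicConfig = Pick<Config, "apiUrl" | "sentryDsn">;

// Explicit projection: adding a new secret to Config can never leak it,
// because only the listed keys are copied into the public object.
function toPublicConfig(config: Config): PublicConfig {
  return { apiUrl: config.apiUrl, sentryDsn: config.sentryDsn };
}
```

Note that spreading (`{ ...config }`) and then deleting keys would invert this guarantee: a forgotten delete leaks a secret, whereas a forgotten copy here only omits a public value.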
11. Sharing runtime-only Nest/Node constructs with React
Mistake:
- Putting Node-only things (e.g., `fs`, Nest interceptors) in “shared” code imported by React.
Problems:
- Bundle fails at build or runtime in browser.
- Tree-shaking doesn’t always save you.
Better:
- Shared packages should be runtime-agnostic or clearly server-only/client-only.
- Use separate entry points:
  - `main.ts`/`server.ts` in Nest.
  - `index.tsx`/`client.tsx` in React.
- If you must share code that is server-only, keep it in a `packages/server-*` namespace, never used by React.
12. CI/CD not aligned with monorepo structure
Mistake:
- CI builds and tests every app and package on every change.
- Or worse, CI doesn't understand cross-package dependencies.
Problems:
- Slow pipelines, flaky deployments.
- Backend deployed without updated shared package, etc.
Better:
- Use your monorepo tool’s affected/changed commands:
- Nx “affected” commands, or Turborepo’s hashing/caching.
- Ensure CI:
  - Restores cache for `node_modules` and build outputs.
  - Runs builds in dependency order.
- Optionally: one Docker image at root containing all services instead of one per service if that fits your deployment model.[4]
If you share your current structure (folders + tsconfigs) I can point out concrete, repo-specific pitfalls and give an adjusted layout for Nest + React + shared packages.
Query: NestJS scheduled jobs CalDAV Microsoft Graph API integration testing approaches
You’re essentially looking for ways to test a NestJS app that has scheduled jobs which talk to CalDAV and Microsoft Graph (calendar sync-type scenario). Below is a concise strategy focusing on testing approaches, not full implementation.
1. Architectural testability pre‑conditions
To make testing possible, structure your code so that:
- A scheduler layer only triggers methods on a domain/service layer.
- The domain/service layer depends on interfaces like:
  - `CalDavClient` (e.g. `ICalendarProvider` / `ICalDavClient`)
  - `GraphClient` (Microsoft Graph)
- Actual HTTP calls live only in those client classes; they are injected via Nest DI (`@Injectable` providers with `useClass`/`useFactory`).[2]
This lets you:
- Unit test the service by mocking clients.
- Integration test by swapping real vs fake HTTP implementations.
2. Testing scheduled jobs (NestJS Cron / Scheduler)
Assuming you use @nestjs/schedule:
- Put schedule decorators on a thin job class:

  ```typescript
  @Injectable()
  export class CalendarSyncJob {
    constructor(private readonly syncService: CalendarSyncService) {}

    @Cron('0 * * * *') // every hour, for example
    async handleCron() {
      await this.syncService.syncAllAccounts();
    }
  }
  ```
Unit test of the job:
- Use `@nestjs/testing` and mock `CalendarSyncService`:

  ```typescript
  const module = await Test.createTestingModule({
    providers: [
      CalendarSyncJob,
      {
        provide: CalendarSyncService,
        useValue: { syncAllAccounts: jest.fn().mockResolvedValue(undefined) },
      },
    ],
  }).compile();

  const job = module.get(CalendarSyncJob);
  const syncService = module.get(CalendarSyncService);

  await job.handleCron();
  expect(syncService.syncAllAccounts).toHaveBeenCalledTimes(1);
  ```
No real time passes and no network is involved: you call `handleCron()` directly.
3. Unit testing CalDAV / Graph integration services
Example service:

```typescript
@Injectable()
export class CalendarSyncService {
  constructor(
    private readonly caldavClient: CalDavClient,
    private readonly graphClient: GraphClient,
  ) {}

  async syncUser(userId: string) {
    const caldavEvents = await this.caldavClient.getEvents(userId);
    const msEvents = await this.graphClient.getEvents(userId);
    // diff + write changes
  }
}
```
Unit test approach:
- Replace `CalDavClient` and `GraphClient` with Jest mocks or fake implementations.
- Cover:
  - Happy path (events synced correctly).
  - Conflicts / duplicates.
  - Error handling (CalDAV fails, Graph fails, partial sync).
- Assert:
  - Correct calls to `.createEvent`, `.updateEvent`, `.deleteEvent`, etc.
  - Correct transformation between CalDAV and Graph schemas.
No Nest specifics required beyond DI; this is plain unit testing.
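A minimal illustration of that style using a hand-written fake that records calls instead of Jest mocks; all names are invented:

```typescript
// Interface the service depends on; real implementation would do HTTP.
interface CalDavClient {
  getEvents(userId: string): Promise<string[]>;
}

// Fake CalDAV client: canned data plus a call log for assertions.
class FakeCalDavClient implements CalDavClient {
  calls: string[] = [];
  constructor(private readonly events: string[]) {}

  async getEvents(userId: string): Promise<string[]> {
    this.calls.push(userId);
    return this.events;
  }
}

// Service under test depends only on the interface, so the fake
// slots in with no Nest module at all.
class SyncService {
  constructor(private readonly caldav: CalDavClient) {}

  async countEvents(userId: string): Promise<number> {
    const events = await this.caldav.getEvents(userId);
    return events.length;
  }
}
```

The same fakes can later be registered via `overrideProvider(...)` in a Nest testing module, so they pay off twice.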
4. Integration testing with Nest’s testing module
Here you want to test the real Nest module wiring but still avoid hitting external systems.
Strategy
- Use `Test.createTestingModule` with your real modules.
- Override external clients with HTTP mocking or fake in-memory servers.

Examples:
- For HTTP clients (Axios, `@nestjs/axios`):
  - Use `nock` or similar to mock CalDAV/Graph endpoints.
- Alternatively:
  - Provide fake `CalDavClient`/`GraphClient` implementations that behave like a small in-memory server.
```typescript
const module = await Test.createTestingModule({
  imports: [AppModule],
})
  .overrideProvider(CalDavClient)
  .useClass(FakeCalDavClient)
  .overrideProvider(GraphClient)
  .useClass(FakeGraphClient)
  .compile();
```
Then:
- Resolve your `CalendarSyncService` or job and call it.
- Assert on side effects (DB state, logs, events).
This validates Nest DI wiring and internal logic while avoiding real network calls.
5. End‑to‑end tests (E2E) with “real” external systems
If you need high‑confidence tests:
- Spin up:
- Test CalDAV server (e.g. Radicale or DAViCal in Docker).
- Microsoft Graph test tenant (with dedicated test accounts).
- Use Nest E2E tests (`@nestjs/testing` + `supertest`) to:
  - Call REST endpoints that trigger sync, or
  - Call job handlers directly while the Nest app is bootstrapped as in production.
Key ideas:
- Use separate env/config for E2E: test credentials, test URLs.
- Clean up test data (delete created events) at the end of each test suite.
These tests are slower and more brittle; run them only in CI, not on every quick dev iteration.
6. Dealing with time / schedule semantics
Scheduled jobs are time‑based by nature; tests should not depend on real time:
- Never wait for cron triggers in tests.
- Expose job handlers (e.g. `handleCron()`) and call them directly.
- If your code uses `Date.now()` / `new Date()` for “now”:
  - Inject a `Clock` or use Jest’s fake timers to control time.
  - This is important for tests like “events starting in the next 10 minutes”.
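The injectable `Clock` idea can be sketched in a few lines; `FixedClock` and `startingSoon` are illustrative names, not anything from the codebase under discussion.

```typescript
// A tiny injectable Clock abstraction so "now" is controllable in tests.
interface Clock {
  now(): Date;
}

class FixedClock implements Clock {
  constructor(private readonly at: Date) {}
  now(): Date {
    return this.at;
  }
}

// Example query logic: events starting within the next 10 minutes of "now".
function startingSoon(events: { start: Date }[], clock: Clock): { start: Date }[] {
  const now = clock.now().getTime();
  const tenMinutes = 10 * 60 * 1000;
  return events.filter((e) => {
    const delta = e.start.getTime() - now;
    return delta >= 0 && delta <= tenMinutes;
  });
}

const clock = new FixedClock(new Date('2026-01-01T12:00:00Z'));
const events = [
  { start: new Date('2026-01-01T12:05:00Z') }, // in 5 min: included
  { start: new Date('2026-01-01T13:00:00Z') }, // in 1 h: excluded
];
console.log(startingSoon(events, clock).length); // 1
```

In production the provider returns `new Date()`; tests bind `FixedClock` (or use Jest fake timers) and the time‑window logic becomes deterministic.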
7. Authentication / tokens for Graph & CalDAV
For unit/integration tests with mocks:
- Don’t generate real tokens; mock the token provider / auth client.
- If you must hit the real Graph:
  - Use app‑only auth with client credentials (a service principal) for tests.
  - Store secrets in CI’s secret store; load them via the Nest config module.[2]
For CalDAV:
- Use dedicated test users with credentials stored in env variables for E2E; mock them in unit/integration tests.
8. Testing concurrency and request context (optional)
If your scheduled jobs need per‑tenant context or something similar, you might use AsyncLocalStorage or nestjs-cls to carry contextual data through async calls.[1]
For tests:
- When calling job handlers directly, explicitly set the context:
  - Either via your own context service’s `runWithContext(...)`,
  - Or by injecting `ClsService` and manually `set`ting the needed values before calling the service.[1]
This ensures the same context behavior as in HTTP requests but within scheduled jobs.
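A minimal version of that context plumbing, built directly on Node's `AsyncLocalStorage` (the primitive that nestjs-cls wraps); `runWithContext` and `handleCron` are illustrative names:

```typescript
import { AsyncLocalStorage } from 'node:async_hooks';

// Minimal context carrier; nestjs-cls's ClsService is built on the same primitive.
const als = new AsyncLocalStorage<Map<string, unknown>>();

function runWithContext<T>(values: Record<string, unknown>, fn: () => T): T {
  return als.run(new Map(Object.entries(values)), fn);
}

function getContext(key: string): unknown {
  return als.getStore()?.get(key);
}

// A job handler that reads tenant context exactly as a request handler would.
async function handleCron(): Promise<string> {
  return `syncing tenant ${String(getContext('tenantId'))}`;
}

// In a test, wrap the direct handler call in an explicit context:
runWithContext({ tenantId: 'acme' }, async () => {
  console.log(await handleCron()); // syncing tenant acme
});
```

The store propagates across `await`, so everything the handler calls sees the same context, whether it was entered from an HTTP middleware or from a test wrapper.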
9. Recommended layering for testability
A practical layering that makes all of the above straightforward:
- Job layer
  - `CalendarSyncJob` (only the schedule + a call into the service).
- Domain/service layer
  - `CalendarSyncService` (contains the sync logic).
  - Depends only on interfaces + repository (DB) + clock.
- Integration layer
  - `CalDavClient` (wraps CalDAV HTTP).
  - `GraphClient` (wraps the Microsoft Graph SDK/HTTP).
- Infrastructure
  - Config, auth/token provider, HTTP module, repositories.
Then:
- Unit tests target domain/service.
- Integration tests target domain + fake clients/DB.
- E2E tests target whole app + real external systems.
If you share your current Nest module layout (which modules/services you have), I can outline very concrete test module setups and example Jest test files for each level (unit, integration, E2E) tailored to CalDAV + Graph.
Query: PostgreSQL real-time updates WebSocket NestJS concurrency performance optimization 2026
You can build high-performance real‑time updates on PostgreSQL + NestJS + WebSockets, but you must be deliberate about where you get change events from and how you fan them out to clients.
Below is a concise, production‑oriented approach for 2025/2026‑era stacks.
1. Choose your Postgres → “events” mechanism
Avoid polling for anything beyond trivial loads. Use one of:
A. LISTEN/NOTIFY + WebSocket bridge (simple, good for moderate scale)
- Postgres emits events via `pg_notify` from triggers.
- A small service listens via `LISTEN` and pushes the events over WebSockets to clients.[3][6]
- Example of a bridge: pg_eventserv, which converts LISTEN/NOTIFY into standard WebSocket messages to web clients.[3]
Pros:
- Simple mental model; small infra footprint.[3]
- Great latency (single round‑trip DB → app → WS).
Cons & constraints:
- `NOTIFY` payloads are limited (~8 KB).
- `LISTEN/NOTIFY` is not designed for very high fan‑out or tens of thousands of topics.
- No built‑in message durability; if consumers disconnect, they miss events.[3]
When to use:
Dashboards, admin panels, low/medium‑traffic SaaS, 100s–low 1000s of concurrent WebSocket clients.
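The bridge itself is small. Below is a transport‑agnostic sketch: `PgConnection` stands in for a real `pg` Client (which exposes `query('LISTEN channel')` and a `'notification'` event), `broadcast` stands in for a gateway room emit, and `InMemoryPg` exists only so the sketch runs without a database.

```typescript
// DB notification -> WebSocket broadcast, with the transports stubbed out.
type NotificationHandler = (channel: string, payload: string) => void;

interface PgConnection {
  listen(channel: string, handler: NotificationHandler): void;
  notify(channel: string, payload: string): void; // test stand-in for pg_notify
}

// In-memory stand-in for a LISTENing Postgres connection.
class InMemoryPg implements PgConnection {
  private readonly handlers = new Map<string, NotificationHandler[]>();

  listen(ch: string, h: NotificationHandler): void {
    this.handlers.set(ch, [...(this.handlers.get(ch) ?? []), h]);
  }

  notify(ch: string, payload: string): void {
    for (const h of this.handlers.get(ch) ?? []) h(ch, payload);
  }
}

// The bridge: subscribe once, forward every payload to the broadcast function.
function bridge(pg: PgConnection, broadcast: (msg: string) => void): void {
  pg.listen('item_changes', (_ch, payload) => broadcast(payload));
}

const pg = new InMemoryPg();
const sent: string[] = [];
bridge(pg, (msg) => sent.push(msg));
pg.notify('item_changes', '{"id":1,"op":"update"}');
console.log(sent.length); // 1
```

In a real NestJS app, `broadcast` would be the gateway's `server.to(room).emit(...)` and the connection would be a dedicated `pg` client kept open for `LISTEN`.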
B. Logical replication / WAL streaming (scales much better)
Use logical replication slots (or a library built on them) to stream changes, then fan them out.
- Trigger.dev describes using Postgres replication slots + ElectricSQL as their real‑time backbone.[4]
- Flow: Postgres writes to WAL → replication slot captures changes → ElectricSQL processes and pushes to clients via long‑poll/WS.[4]
Performance numbers from Trigger.dev:
- ~20,000 updates/second processed.
- 500GB+ of Postgres inserts daily.
- Sub‑100ms latency to browsers.[4]
Pros:
- Much higher throughput and lower DB overhead than triggers + NOTIFY.[4]
- Can support historical subscriptions (subscribe to objects created before opening the page).[4]
- Strong consistency guarantees; Postgres remains the single source of truth.[4]
Cons:
- More infra and operational complexity (replication slots, separate service like ElectricSQL or your own change‑consumer)[4].
- Need to monitor replication slots: an abandoned or lagging slot prevents WAL from being recycled and can fill the disk.
When to use:
High‑throughput real‑time feeds, large multi‑tenant apps, “activity feed” or “runs/jobs” style products at scale.
C. External real‑time services on top of Postgres
If you do not want to manage the event bridge:
- Supabase Realtime
  - Elixir/Phoenix service that can listen to Postgres changes and send them over WebSockets, plus broadcast and presence features.[2]
  - Works via logical replication or CDC extensions (`postgres_cdc_rls`).[2]
- Ably LiveSync + Postgres
  - Neon’s guide shows using serverless Postgres with an outbox table + a trigger that calls `pg_notify`, with Ably handling WS fan‑out.[5]
When to use:
You want real‑time updates, presence, and fan‑out without writing the whole infra yourself.
2. NestJS architecture for WebSockets + concurrency
A. NestJS WebSocket gateway
Use `@WebSocketGateway()` and channels/rooms per logical subscription:
```typescript
import {
  ConnectedSocket,
  MessageBody,
  SubscribeMessage,
  WebSocketGateway,
  WebSocketServer,
} from '@nestjs/websockets';
import { Server, Socket } from 'socket.io';

@WebSocketGateway({ cors: { origin: '*' } })
export class RealtimeGateway {
  @WebSocketServer()
  server: Server; // socket.io server instance

  @SubscribeMessage('subscribeToItem')
  handleSubscribe(
    @MessageBody() data: { itemId: string },
    @ConnectedSocket() client: Socket,
  ) {
    client.join(`item:${data.itemId}`);
  }

  publishUpdate(itemId: string, payload: any) {
    this.server.to(`item:${itemId}`).emit('item:update', payload);
  }
}
```
Your Postgres‑event consumer (LISTEN/NOTIFY or WAL) injects the gateway and calls `publishUpdate`.
B. Concurrency and scaling
To scale beyond a single NestJS instance:
- Run NestJS behind a load balancer.
- Use sticky sessions for WS if you rely on in‑memory rooms, or move to the Redis adapter for socket.io so rooms and broadcasts work across nodes.
- Offload expensive work (e.g. heavy projections) to background workers; gateway process should be light and mostly I/O.
For write contention in Postgres:
- Keep transactions short and indexes lean.
- Where possible, use append‑only tables (events, logs, runs) instead of frequent UPDATEs; this plays well with WAL‑based approaches.[4]
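The append‑only point can be illustrated with a toy in‑memory "runs" log: each status change is a new immutable row rather than an UPDATE, and the current state is a fold over the log (all names here are illustrative).

```typescript
// Append-only event log: one immutable row per status change.
type RunEvent = { runId: string; status: 'queued' | 'running' | 'done'; at: number };

const log: RunEvent[] = [];

function append(e: RunEvent): void {
  log.push(e); // in Postgres: plain INSERT, no row rewrites, WAL-friendly
}

// Current state is derived: the latest event for a given run.
function currentStatus(runId: string): string | undefined {
  let latest: RunEvent | undefined;
  for (const e of log) {
    if (e.runId === runId && (!latest || e.at >= latest.at)) latest = e;
  }
  return latest?.status;
}

append({ runId: 'r1', status: 'queued', at: 1 });
append({ runId: 'r1', status: 'running', at: 2 });
append({ runId: 'r1', status: 'done', at: 3 });
console.log(currentStatus('r1')); // done
```

Inserts avoid row‑level write contention and map naturally onto WAL‑based change capture, since every state transition is itself a row the consumer can stream.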
3. Reducing load on Postgres
Across the real‑time designs above, common optimizations:
- Initial state vs. live updates
  - The client fetches initial state via a regular HTTP/REST/GraphQL request (can be cached).
  - The WebSocket is only for incremental updates, not full re‑queries.[4]
- Avoid “per‑client” DB connections
  - Trigger.dev notes that each WebSocket needing its own query and connection is expensive at scale.[4]
  - Instead, centralize DB access in 1–N backend services that multiplex changes to many clients.
- Outbox pattern
  - The app writes to the main tables and to an outbox table in the same transaction.
  - A trigger publishes `pg_notify` for new outbox rows.[5]
  - A separate process reads/deletes from the outbox and broadcasts.
  - This keeps writes coherent and avoids doing heavy work inside triggers.[5]
4. Recommended concrete patterns in 2026‑ish stack
If you are:
- Small/medium NestJS app (≤ 10k concurrent sockets, modest write volume):
  - Postgres triggers + `LISTEN/NOTIFY`.
  - A NestJS background service (or pg_eventserv) listens and calls a `RealtimeGateway` to broadcast.[3][6]
  - Optional: an outbox table to decouple triggers from business events.[5]
- Growing app or high‑throughput:
  - Move to logical replication / CDC (ElectricSQL, Supabase Realtime, or a custom consumer).
  - Keep NestJS as the API + auth + WS edge, but let the change‑capture logic live in a separate service.[2][4]
- You want to outsource real‑time infra:
  - Supabase Realtime as the WS/event layer on top of Postgres.[2]
  - Or Ably LiveSync with an outbox + NOTIFY bridge from Postgres.[5]
If you share your current scale (writes/sec, expected concurrent clients) and whether you can add extra infra (Redis, separate change‑consumer), I can outline an exact NestJS + Postgres + WS architecture and minimal code skeleton tailored to that.