commit ce0e5f1769ad3eebde178190c4f30a4a118ce185 Author: Debian Date: Sun Jan 11 07:51:30 2026 +0000 Initial Ralph scaffold diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..f9e115f --- /dev/null +++ b/.gitignore @@ -0,0 +1,37 @@ +# Dependencies +node_modules/ +.pnpm-store/ + +# Build outputs +dist/ +build/ +*.js.map + +# Environment +.env +.env.local +.env.*.local + +# IDE +.idea/ +.vscode/ +*.swp +*.swo +*~ + +# OS +.DS_Store +Thumbs.db + +# Logs +*.log +npm-debug.log* +yarn-debug.log* +yarn-error.log* + +# Test +coverage/ +.nyc_output/ + +# TypeScript +*.tsbuildinfo diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 0000000..ab3a53f --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,71 @@ +# Claude Code Configuration + +## Project Context + +nick-tracker - Read PROMPT.md for full requirements. + +Read prd.json for feature tracking. +Append progress to progress.txt after each significant change. + +## Tech Stack + +- TypeScript with Node.js 20 +- NestJS 10 +- TypeORM +- PostgreSQL +- @nestjs/jwt +- @nestjs/passport +- @nestjs/websockets +- @nestjs/platform-socket.io +- @nestjs/schedule +- @nestjs/config +- class-validator +- class-transformer +- bcrypt +- date-fns +- ical.js +- node-caldav +- @microsoft/microsoft-graph-client +- imap +- axios +- bull +- ioredis +- helmet +- express-rate-limit +- React 18 +- Vite 5 +- React Router +- TanStack Query (React Query) +- Zustand +- FullCalendar +- react-beautiful-dnd +- socket.io-client +- date-fns +- React Hook Form +- Zod +- TailwindCSS +- Radix UI +- Jest (backend), Vitest (frontend) for testing +- Docker Compose, pnpm workspaces, tsup for building + +## Working Rules + +1. Always run tests before committing +2. Never commit failing code +3. Update prd.json when features complete by setting passes to true +4. Use conventional commit messages +5. Make reasonable decisions - do not ask questions + +## Commands + +- Build: npm run build +- Test: npm run test +- Lint: npm run lint +- All: npm run build && npm run test && npm run lint + +## Key Files + +- PROMPT.md: Full specification +- prd.json: Feature tracking +- progress.txt: Append-only progress log +- GUIDE.md: Step-by-step guide diff --git a/GUIDE.md b/GUIDE.md new file mode 100644 index 0000000..10a5b5d --- /dev/null +++ b/GUIDE.md @@ -0,0 +1,187 @@ +# nick-tracker - Implementation Guide + +## Overview + +This guide walks you through implementing nick-tracker using the Ralph Method with phase-by-phase execution. + +**Tech Stack**: TypeScript / NestJS 10 +**Features**: 10 +**Estimated Cost**: ~$5.00 (at ~$0.50/feature) + +## Prerequisites + +1. Read `PROMPT.md` for full project requirements +2. Review `prd.json` for feature tracking +3. Ensure your environment is set up for TypeScript + +## Execution + +Execute each phase in order. Do not proceed to the next phase until the current phase is complete. + +### Phase 1: Foundation + +```bash +/ralph-wiggum:ralph-loop "$(cat prompts/phase1-prompt.txt)" --max-iterations 30 --completion-promise "PHASE_1_COMPLETE" +``` + +**Verify**: +```bash +npm run build && npm run test && npm run lint +``` + +**Bash loop alternative** (if plugin unavailable): +```bash +PROMPT=$(cat prompts/phase1-prompt.txt) +MAX_ITERATIONS=30 +COMPLETION_PROMISE="PHASE_1_COMPLETE" + +for i in $(seq 1 $MAX_ITERATIONS); do + echo "=== Iteration $i of $MAX_ITERATIONS ===" + RESPONSE=$(claude -p "$PROMPT") + echo "$RESPONSE" + if echo "$RESPONSE" | grep -q "$COMPLETION_PROMISE"; then + echo "Phase 1 complete!" 
+ break + fi + if echo "$RESPONSE" | grep -q "ABORT_BLOCKED"; then + echo "Blocked - manual intervention required" + break + fi +done +``` + +--- + +### Phase 2: Core + +```bash +/ralph-wiggum:ralph-loop "$(cat prompts/phase2-prompt.txt)" --max-iterations 40 --completion-promise "PHASE_2_COMPLETE" +``` + +**Verify**: +```bash +npm run build && npm run test && npm run lint +``` + +**Bash loop alternative**: +```bash +PROMPT=$(cat prompts/phase2-prompt.txt) +MAX_ITERATIONS=40 +COMPLETION_PROMISE="PHASE_2_COMPLETE" + +for i in $(seq 1 $MAX_ITERATIONS); do + echo "=== Iteration $i of $MAX_ITERATIONS ===" + RESPONSE=$(claude -p "$PROMPT") + echo "$RESPONSE" + if echo "$RESPONSE" | grep -q "$COMPLETION_PROMISE"; then + echo "Phase 2 complete!" + break + fi + if echo "$RESPONSE" | grep -q "ABORT_BLOCKED"; then + echo "Blocked - manual intervention required" + break + fi +done +``` + +--- + +### Phase 3: Integration + +```bash +/ralph-wiggum:ralph-loop "$(cat prompts/phase3-prompt.txt)" --max-iterations 40 --completion-promise "PHASE_3_COMPLETE" +``` + +**Verify**: +```bash +npm run build && npm run test && npm run lint +``` + +**Bash loop alternative**: +```bash +PROMPT=$(cat prompts/phase3-prompt.txt) +MAX_ITERATIONS=40 +COMPLETION_PROMISE="PHASE_3_COMPLETE" + +for i in $(seq 1 $MAX_ITERATIONS); do + echo "=== Iteration $i of $MAX_ITERATIONS ===" + RESPONSE=$(claude -p "$PROMPT") + echo "$RESPONSE" + if echo "$RESPONSE" | grep -q "$COMPLETION_PROMISE"; then + echo "Phase 3 complete!" + break + fi + if echo "$RESPONSE" | grep -q "ABORT_BLOCKED"; then + echo "Blocked - manual intervention required" + break + fi +done +``` + +--- + +### Phase 4: Polish + +```bash +/ralph-wiggum:ralph-loop "$(cat prompts/phase4-prompt.txt)" --max-iterations 30 --completion-promise "PHASE_4_COMPLETE" +``` + +**Verify**: +```bash +npm run build && npm run test && npm run lint +``` + +**Bash loop alternative**: +```bash +PROMPT=$(cat prompts/phase4-prompt.txt) +MAX_ITERATIONS=30 +COMPLETION_PROMISE="PHASE_4_COMPLETE" + +for i in $(seq 1 $MAX_ITERATIONS); do + echo "=== Iteration $i of $MAX_ITERATIONS ===" + RESPONSE=$(claude -p "$PROMPT") + echo "$RESPONSE" + if echo "$RESPONSE" | grep -q "$COMPLETION_PROMISE"; then + echo "Phase 4 complete!" + break + fi + if echo "$RESPONSE" | grep -q "ABORT_BLOCKED"; then + echo "Blocked - manual intervention required" + break + fi +done +``` + +--- + +## Full Project Execution (Alternative) + +If you prefer to run the entire project in one loop: + +```bash +/ralph-wiggum:ralph-loop "$(cat PROMPT.md)" --max-iterations 100 --completion-promise "PROJECT_COMPLETE" +``` + +**Note**: Phase-by-phase execution is recommended for complex projects as it provides better control and verification checkpoints. + +## Tracking Progress + +- Check `prd.json` to see which features have `passes: true` +- Review `progress.txt` for the implementation log +- All phases complete when all features pass + +## Troubleshooting + +If the agent outputs `ABORT_BLOCKED`: +1. Review the error message +2. Fix the blocking issue manually +3. Re-run the current phase + +If iterations exhaust without completion: +1. Check `progress.txt` for what was accomplished +2. Review `prd.json` for remaining features +3. 
Increase `--max-iterations` and re-run + +--- + +*Generated with [Ralph PRD Generator](https://github.com/your-username/ralph-vibe)* diff --git a/PROMPT.md b/PROMPT.md new file mode 100644 index 0000000..98fc646 --- /dev/null +++ b/PROMPT.md @@ -0,0 +1,226 @@ +# PROMPT.md – nick-tracker (AutoScheduler GTD System) + +## Objective + +Build **AutoScheduler**, a self-hosted web application implementing the Getting Things Done (GTD) methodology with automatic calendar scheduling. The system ingests tasks from multiple sources (manual capture, email via IMAP/Microsoft Graph, read-only ConnectWise Manage sync), processes them through GTD workflows (Inbox → Next Actions/Projects/Waiting For/Someday-Maybe/Tickler), and automatically schedules actionable tasks into available calendar slots from CalDAV-compatible sources (Nextcloud, Google Calendar, Outlook). The scheduling engine respects user-defined working hours, context constraints (@desk, @phone, @errand, @homelab, @anywhere), deadlines, and manual priorities, batching similar tasks when possible and automatically rescheduling displaced tasks when calendar conflicts arise. A React SPA with interactive week-view calendar supports drag-and-drop manual overrides and task locking. Weekly Review interface auto-schedules and presents comprehensive system review. Real-time notifications via WebSocket, email, and optional webhook inform users of schedule changes and follow-up reminders. + +**Core Value Proposition:** Unified GTD task management across work (ConnectWise) and personal domains (Homelab, Daily Routines, House, Professional Development) with intelligent automatic scheduling that respects context, time constraints, and priorities, eliminating manual time-blocking while maintaining GTD methodology integrity. + +--- + +## Application Type + +**web** – Self-hosted web application with React SPA frontend and NestJS backend API, accessed via browser. Containerized deployment via Docker Compose for user-managed infrastructure. No native desktop or mobile apps required; responsive web UI serves all devices. + +--- + +## Architecture + +### Core Components + +1. **Frontend (React SPA)** + - Interactive week-view calendar with drag-and-drop task scheduling + - GTD workflow interfaces: Inbox processing, Next Actions, Projects, Weekly Review + - Real-time updates via WebSocket for automatic rescheduling notifications + - Task capture quick-add form and external REST API endpoint + - Settings UI for calendar connections, email/ConnectWise integration, working hours + +2. **Backend (NestJS API)** + - REST API for CRUD operations on tasks, projects, inbox items, connections + - WebSocket gateway for real-time notifications and schedule updates + - Scheduled jobs (cron) for: email polling, ConnectWise sync, calendar sync, Tickler activation, Weekly Review scheduling + - Task scheduling engine with constraint satisfaction and conflict resolution + - Integration services: CalDAV client, Microsoft Graph client, Google Calendar API, ConnectWise Manage API, IMAP/SMTP + +3. **Database (PostgreSQL)** + - Relational schema: Users, InboxItems, Tasks, Projects, CalendarEvents, ReschedulingEvents, Notifications + - Foreign keys enforce referential integrity; indexes optimize scheduler queries + - JSON columns for flexible metadata (workingHours, notificationPreferences, sourceMetadata) + +4. 
**Message Queue (Redis + Bull)** + - Asynchronous job processing for long-running syncs (ConnectWise, email, calendar) + - Rate limiting and retry logic for external API calls + - Queue-based scheduling engine to prevent concurrent conflicts + +5. **Container Orchestration (Docker Compose)** + - Services: backend (NestJS), database (PostgreSQL), cache (Redis), frontend (nginx serving static build) + - Volumes for PostgreSQL persistence, file uploads (reference materials) + - Health checks, restart policies, environment-based configuration + +### Interfaces + +- **REST API:** Full CRUD for all entities, connection management, manual scheduling overrides, Weekly Review operations +- **WebSocket:** Real-time push notifications for rescheduling events, Waiting For follow-ups, Tickler activations +- **External Integrations:** CalDAV (Nextcloud/generic), Microsoft Graph (Outlook/Google Calendar via OAuth), ConnectWise Manage REST API, IMAP/SMTP + +### Deployment + +**self_hosted** – Docker Compose stack deployed to user's infrastructure (home server, VPS). Users manage environment variables (API keys, DB credentials), backups, and reverse proxy (Traefik/nginx) for HTTPS. Optional OIDC authentication for enterprise users; local account authentication default. + +--- + +## Tech Stack + +- **Language:** TypeScript (strict mode, ES2022 target) +- **Runtime:** Node.js 20 LTS +- **Backend Framework:** NestJS 10 (modules, dependency injection, guards, interceptors) +- **Frontend Framework:** React 18 (functional components, hooks) + Vite 5 (build tool) +- **Database:** PostgreSQL 16 (relational, JSONB support) +- **ORM:** TypeORM 0.3 (Active Record pattern, migrations, query builder) +- **State Management:** Zustand (lightweight, minimal boilerplate) +- **Data Fetching:** TanStack Query (React Query v5) for server state caching +- **Routing:** React Router v6 (nested routes, loaders) +- **Calendar UI:** FullCalendar (drag-drop, resource timeline, week view) +- **Drag-and-Drop:** react-beautiful-dnd (accessible drag-drop for task lists) +- **WebSocket:** socket.io (NestJS WS adapter, auto-reconnect) +- **Job Queue:** Bull (Redis-backed task queues, cron scheduling) +- **Cache:** Redis 7 (Bull queue storage, session cache) +- **Authentication:** Passport.js (JWT strategy, optional OIDC) +- **Validation:** class-validator, class-transformer (backend DTOs), Zod (frontend forms) +- **Styling:** TailwindCSS 3 + Radix UI (accessible component primitives) +- **Date Handling:** date-fns (timezone-aware, immutable) +- **Calendar Protocols:** ical.js (iCal parsing), node-caldav (CalDAV client) +- **External APIs:** @microsoft/microsoft-graph-client (Outlook/Graph), axios (ConnectWise REST), imap (email polling) +- **Testing:** Jest (backend unit/integration), Vitest (frontend unit), Supertest (API e2e) +- **Containerization:** Docker, Docker Compose v2 +- **Monorepo:** pnpm workspaces (shared types package) +- **Build Tools:** tsup (backend bundling), Vite (frontend HMR and build) + +--- + +## Phases & Completion Criteria + +### Phase 1: Foundation + +**Goal:** Establish project structure, core infrastructure, authentication, and basic database schema. Prove Docker Compose stack runs and REST API accepts requests. + +#### Completion Criteria + +- [ ] **Monorepo initialized** with pnpm workspaces: `packages/backend`, `packages/frontend`, `packages/shared-types`. Root `package.json` has workspace scripts: `pnpm dev`, `pnpm build`, `pnpm test`. Verify: `pnpm install && pnpm -r list` shows all three workspaces. 
+- [ ] **Backend NestJS app** scaffolded with modules: `AuthModule`, `UsersModule`, `TasksModule`, `ProjectsModule`, `InboxModule`. Verify: `pnpm --filter backend build` succeeds, `dist/` contains compiled JS. +- [ ] **Frontend React app** created with Vite, TailwindCSS configured, Radix UI installed. Basic route structure: `/login`, `/inbox`, `/calendar`, `/projects`, `/settings`. Verify: `pnpm --filter frontend dev` starts dev server on port 5173, homepage renders. +- [ ] **PostgreSQL database** schema created via TypeORM migrations: `User`, `InboxItem`, `Task`, `Project`, `CalendarConnection`, `CalendarEvent`, `ConnectWiseConnection`, `EmailConnection`, `ReferenceMaterial`, `ReschedulingEvent`, `Notification` entities with all fields and relationships from spec. Verify: `pnpm --filter backend migration:run` succeeds, `psql -d autoscheduler -c "\dt"` lists 11 tables. +- [ ] **Docker Compose** stack configured: `backend`, `postgres`, `redis`, `frontend` services. Backend exposes port 3000, frontend port 80, PostgreSQL port 5432 (internal only). Environment variables loaded from `.env` file. Verify: `docker compose up -d && docker compose ps` shows all services healthy, `curl http://localhost:3000/health` returns `{"status":"ok"}`. +- [ ] **Authentication implemented:** JWT-based auth with Passport.js. `/api/v1/auth/register`, `/api/v1/auth/login`, `/api/v1/auth/refresh` endpoints functional. Password hashing with bcrypt (12 rounds). JWT guard protects all routes except auth endpoints. Verify: `curl -X POST http://localhost:3000/api/v1/auth/register -d '{"email":"test@example.com","password":"SecurePass123!","name":"Test User","timezone":"America/New_York"}' -H "Content-Type: application/json"` returns 201 with JWT, subsequent `curl -H "Authorization: Bearer <token>" http://localhost:3000/api/v1/users/me` returns user object. +- [ ] **Frontend authentication flow:** Login form posts to `/api/v1/auth/login`, stores JWT in memory (Zustand), redirects to `/inbox`. Protected routes require auth token, redirect to `/login` if missing. Axios interceptor adds `Authorization` header. Verify: Manual test in browser: register user, log in, see redirect to inbox, refresh page maintains session. +- [ ] **Basic error handling:** Global exception filter in NestJS logs errors, returns standardized JSON error responses (status, message, timestamp). Frontend axios interceptor catches 401, clears token, redirects to login. Verify: `curl http://localhost:3000/api/v1/nonexistent` returns 404 JSON, `curl -H "Authorization: Bearer invalid" http://localhost:3000/api/v1/users/me` returns 401 JSON. +- [ ] **Logging configured:** NestJS Logger configured with timestamps, log levels (dev: debug, prod: info). Winston logger optional enhancement but basic Logger functional. Verify: Backend console shows structured logs on startup and per request. +- [ ] **Type safety enforced:** `shared-types` package exports DTOs (CreateTaskDto, TaskResponseDto, etc.) used by both backend validators and frontend Zod schemas. Verify: Change a field type in `shared-types`, run `pnpm build`, see TypeScript errors in both frontend and backend if mismatched. +- [ ] **Health checks:** `/health` endpoint returns database connection status, Redis connection status. Docker Compose health checks configured for backend (GET /health every 30s). Verify: `curl http://localhost:3000/health` returns `{"status":"ok","database":"connected","redis":"connected"}`, `docker compose ps` shows backend as `healthy`.
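
The health-check criterion above is small but easy to get subtly wrong: a failed probe should be reported in the payload rather than surfacing as a 500. Below is a minimal sketch of one way to satisfy it. The class name and the `REDIS_CLIENT` injection token are illustrative assumptions, not part of the spec; only the response shape comes from the criterion.

```typescript
// health.controller.ts — hypothetical sketch; assumes a TypeORM DataSource and an
// ioredis client registered under the (assumed) 'REDIS_CLIENT' provider token.
import { Controller, Get, Inject } from '@nestjs/common';
import { DataSource } from 'typeorm';
import Redis from 'ioredis';

@Controller('health')
export class HealthController {
  constructor(
    private readonly dataSource: DataSource,
    @Inject('REDIS_CLIENT') private readonly redis: Redis,
  ) {}

  @Get()
  async check() {
    // Probe each dependency and report connected/disconnected per service
    // instead of letting a failed probe throw.
    const databaseStatus = await this.dataSource
      .query('SELECT 1')
      .then(() => 'connected')
      .catch(() => 'disconnected');
    const redisStatus = await this.redis
      .ping()
      .then(() => 'connected')
      .catch(() => 'disconnected');
    const status =
      databaseStatus === 'connected' && redisStatus === 'connected' ? 'ok' : 'error';
    return { status, database: databaseStatus, redis: redisStatus };
  }
}
```
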
+ +**Phase 1 Verification Command:** + +```bash +pnpm install && pnpm build && docker compose up -d && sleep 10 && \ +curl -f http://localhost:3000/health && \ +curl -X POST http://localhost:3000/api/v1/auth/register \ + -d '{"email":"verify@test.com","password":"Test123!","name":"Verify User","timezone":"UTC"}' \ + -H "Content-Type: application/json" | grep -q "token" && \ +echo "✓ Phase 1 Complete" +``` + +--- + +### Phase 2: Core Features + +**Goal:** Implement GTD workflows (Inbox capture and processing, Next Actions, Projects, Someday/Maybe, Waiting For, Tickler), basic scheduling engine, and interactive calendar week view with drag-and-drop. + +#### Completion Criteria + +- [ ] **Inbox capture endpoints:** `POST /api/v1/inbox` creates unprocessed InboxItem with source=MANUAL. Fields: content (text), source (enum), sourceMetadata (JSON). Returns created item with ID. Verify: `curl -X POST -H "Authorization: Bearer <token>" http://localhost:3000/api/v1/inbox -d '{"content":"Test inbox item"}' -H "Content-Type: application/json"` returns 201 with `{"id":"uuid","content":"Test inbox item","processed":false}`. +- [ ] **Inbox list and processing:** `GET /api/v1/inbox` returns all unprocessed items for authenticated user. `POST /api/v1/inbox/:id/process` accepts action payload (e.g., `{"action":"task","context":"@desk","domain":"work","title":"Do thing"}`) and converts inbox item to Task, Project, or marks as Trash, setting `processed=true`. Verify: Create inbox item, process to task, GET inbox returns empty array, `GET /api/v1/tasks` includes new task. +- [ ] **Task CRUD:** Full CRUD endpoints for Tasks with validation: `POST /api/v1/tasks` (required: title, domain; optional: context, priority, dueDate, estimatedDuration, projectId), `GET /api/v1/tasks` (with filters: status, context, domain), `PATCH /api/v1/tasks/:id` (partial updates), `DELETE /api/v1/tasks/:id` (soft delete or hard delete based on policy). Task status enum: NEXT_ACTION, WAITING_FOR, SOMEDAY_MAYBE, TICKLER, COMPLETED. Context enum: DESK, PHONE, ERRAND, HOMELAB, ANYWHERE. Domain enum: WORK, HOMELAB, DAILY_ROUTINES, HOUSE, PROFESSIONAL_DEVELOPMENT. Verify: Create task with all fields, GET task by ID returns full object, PATCH updates priority, DELETE removes task. +- [ ] **Task filtering views:** Separate endpoints or query params for GTD views: `GET /api/v1/tasks?status=NEXT_ACTION` (Next Actions), `GET /api/v1/tasks?status=WAITING_FOR` (Waiting For items with follow-up dates), `GET /api/v1/tasks?status=SOMEDAY_MAYBE` (Someday/Maybe list), `GET /api/v1/tasks?status=TICKLER` (future Tickler items). Verify: Create 5 tasks with different statuses, each GET returns only matching tasks. +- [ ] **Project CRUD:** Full CRUD for Projects: `POST /api/v1/projects` (required: name, domain; optional: description, desiredOutcome, connectwiseProjectId), `GET /api/v1/projects` (filter by status: active, on-hold, completed, domain), `PATCH /api/v1/projects/:id`, `DELETE /api/v1/projects/:id`. Verify: Create project, assign task to project via `PATCH /api/v1/tasks/:id` with `projectId`, `GET /api/v1/projects/:id/tasks` returns associated tasks. +- [ ] **Reference material attachments:** `POST /api/v1/projects/:id/reference` accepts multipart file upload or URL/text reference. Creates ReferenceMaterial entity linked to project. File uploads stored in Docker volume, path saved in DB. `GET /api/v1/projects/:id` includes references array.
Verify: Upload PDF to project, GET project returns reference with file path, file accessible at `/uploads/:filename` route. +- [ ] **Waiting For follow-up dates:** Tasks with status=WAITING_FOR accept `followUpDate` timestamp. Scheduled job (cron) runs daily, identifies Waiting For items where `followUpDate <= NOW()`, creates Notification for user. Verify: Create Waiting For task with followUpDate tomorrow, manually trigger cron (`POST /api/v1/debug/trigger-cron`), next day notification appears in `GET /api/v1/notifications`. +- [ ] **Tickler activation:** Tasks with status=TICKLER have `ticklerDate`. Daily cron job checks for `ticklerDate <= NOW()`, converts to InboxItem with source=TICKLER, status=PROCESSED, creates notification. Verify: Create Tickler task for tomorrow, run cron job manually, InboxItem appears with content from task title, notification created. +- [ ] **User preferences:** `GET /api/v1/user/preferences` returns user's workingHours (JSON: `{"monday":{"start":"09:00","end":"17:00"},...}`), timezone, notificationPreferences (JSON: `{"email":true,"webhook":false,"webhookUrl":""}`). `PATCH /api/v1/user/preferences` updates preferences. Verify: Update working hours, GET preferences returns new hours. +- [ ] **Basic scheduling engine:** Service (`SchedulingService`) finds available time slots in user's calendar based on working hours and existing CalendarEvents. `POST /api/v1/schedule/regenerate` triggers scheduling: fetches all NEXT_ACTION tasks without `scheduledStart`, assigns `scheduledStart`/`scheduledEnd` within available slots, respecting `estimatedDuration` (default 30 min) and constraints (work contexts only during work hours). Persists scheduled times to Task table. Returns scheduled task count. Verify: Create 3 Next Action tasks (2 @desk, 1 @phone), set working hours 9-5, run regenerate, GET tasks shows scheduledStart/scheduledEnd within 9-5. +- [ ] **Context-based batching:** Scheduling engine groups consecutive tasks with same context when possible (e.g., 3 @phone tasks scheduled 10:00, 10:30, 11:00). Verify: Create 5 @phone tasks, regenerate schedule, GET schedule shows consecutive phone blocks. +- [ ] **Priority and deadline respect:** Scheduling engine sorts tasks by manual priority (1=highest) and dueDate before assigning slots. Higher priority tasks scheduled earlier in available slots. Verify: Create 3 tasks with priorities 1, 2, 3 and no due dates, regenerate, task with priority=1 gets earliest slot; create task with dueDate=today, regenerate, it schedules before lower-priority tasks with later due dates. +- [ ] **Frontend inbox UI:** React component renders unprocessed inbox items as list. Quick-add form at top posts to `/api/v1/inbox`. Each item has "Process" button opening modal with GTD decision tree (Is it actionable? → Yes: Is it multi-step? → Project vs. Next Action; No: Reference/Trash). Modal form submits to `/api/v1/inbox/:id/process`. Optimistic UI updates with React Query mutations. Verify: Manual browser test: add inbox item, process to Next Action with @desk context, see item disappear from inbox. +- [ ] **Frontend calendar week view:** FullCalendar component configured with timeGridWeek view, displays Tasks with `scheduledStart`/`scheduledEnd` as events. Color-coded by context (custom CSS or FullCalendar event color prop). Verify: Create 5 scheduled tasks, navigate to `/calendar`, see tasks rendered in week grid with correct times and colors. 
+- [ ] **Drag-and-drop manual scheduling:** FullCalendar `eventDrop` callback fires on drag, PATCH `/api/v1/tasks/:id` with new `scheduledStart`/`scheduledEnd`, sets `isLocked=true` to prevent auto-rescheduling. `eventResize` callback handles duration changes. Verify: Drag task in calendar, reload page, task remains at new time, `isLocked=true` in DB. +- [ ] **Task lock/unlock:** `POST /api/v1/tasks/:id/lock` sets `isLocked=true`, `POST /api/v1/tasks/:id/unlock` sets `isLocked=false`. Scheduling engine skips locked tasks. Lock icon displayed in calendar for locked tasks. Verify: Lock task, run schedule regenerate, task time unchanged; unlock, regenerate, task can move. +- [ ] **Next Actions list UI:** React component at `/next-actions` fetches `GET /api/v1/tasks?status=NEXT_ACTION`, renders grouped by context. Filter dropdown by context. Click task opens detail modal with edit form. Verify: Create 10 Next Actions across 3 contexts, filter by @phone, see only phone tasks. +- [ ] **Projects list UI:** React component at `/projects` fetches `GET /api/v1/projects`, renders cards with name, domain, active task count. Click card navigates to `/projects/:id` detail view showing tasks, references, edit form. Verify: Create 3 projects with tasks, navigate to project detail, see tasks listed, add reference material, see it appear. +- [ ] **Waiting For UI:** React component at `/waiting-for` fetches `GET /api/v1/tasks?status=WAITING_FOR`, displays with follow-up dates. Overdue follow-ups highlighted red. Verify: Create Waiting For task with past follow-up date, see red highlight. +- [ ] **Someday/Maybe UI:** React component at `/someday-maybe` fetches `GET /api/v1/tasks?status=SOMEDAY_MAYBE`, allows activation (status change to NEXT_ACTION). Verify: Create Someday task, click "Activate" button, task disappears from Someday list, appears in Next Actions. + +**Phase 2 Verification Command:** + +```bash +TOKEN=$(curl -s -X POST http://localhost:3000/api/v1/auth/login \ + -d '{"email":"verify@test.com","password":"Test123!"}' \ + -H "Content-Type: application/json" | jq -r .token) && \ +curl -f -X POST -H "Authorization: Bearer $TOKEN" http://localhost:3000/api/v1/inbox \ + -d '{"content":"Phase 2 verify"}' -H "Content-Type: application/json" && \ +TASK_ID=$(curl -s -X POST -H "Authorization: Bearer $TOKEN" http://localhost:3000/api/v1/tasks \ + -d '{"title":"Test task","domain":"WORK","context":"DESK"}' \ + -H "Content-Type: application/json" | jq -r .id) && \ +curl -f -X POST -H "Authorization: Bearer $TOKEN" \ + http://localhost:3000/api/v1/schedule/regenerate && \ +curl -s -H "Authorization: Bearer $TOKEN" \ + http://localhost:3000/api/v1/tasks/$TASK_ID | jq .scheduledStart | grep -q "T" && \ +echo "✓ Phase 2 Complete" +``` + +--- + +### Phase 3: Integration + +**Goal:** Implement external integrations (CalDAV, Microsoft Graph, ConnectWise Manage, IMAP email capture), real-time WebSocket notifications for rescheduling, conflict detection and automatic rescheduling, Weekly Review scheduling and interface. + +#### Completion Criteria + +- [ ] **CalDAV connection CRUD:** `POST /api/v1/connections/calendar` creates CalendarConnection with provider=CALDAV, calendarUrl, credentials (username/password encrypted at rest). `GET /api/v1/connections/calendar` lists user's connections. `DELETE /api/v1/connections/calendar/:id` removes connection. Verify: Create CalDAV connection with Nextcloud URL, GET returns connection with masked credentials. 
+- [ ] **CalDAV sync job:** `CalendarSyncService` using `node-caldav` library queries CalDAV server for events in date range (next 30 days). Creates/updates CalendarEvent entities with externalId, startTime, endTime. Cron job runs every 15 minutes. Manual trigger: `POST /api/v1/connections/calendar/:id/sync`. Verify: Configure CalDAV connection to test Nextcloud instance, create event in Nextcloud, trigger sync, `GET /api/v1/calendar/events?start=<today>&end=<+7days>` returns event. +- [ ] **Microsoft Graph calendar connection:** `POST /api/v1/connections/calendar` with provider=MICROSOFT_GRAPH initiates OAuth flow, stores refresh token. `CalendarSyncService` uses `@microsoft/microsoft-graph-client` to fetch events from Outlook. Verify: Mock Microsoft Graph client in tests to return sample events, trigger sync, events persisted. +- [ ] **Google Calendar API connection:** `POST /api/v1/connections/calendar` with provider=GOOGLE initiates OAuth flow (or accepts service account JSON). `CalendarSyncService` uses Google Calendar API via axios/googleapis. Verify: Mock Google API responses, trigger sync, events persisted with externalId. +- [ ] **ConnectWise Manage connection:** `POST /api/v1/connections/connectwise` accepts companyId, publicKey, privateKey, apiUrl, memberId. `ConnectWiseService` using axios queries `/service/tickets`, `/project/tickets`, `/project/projects` with conditions `owner/id={memberId}`. Creates InboxItems with source=CONNECTWISE, sourceMetadata includes ticketId, priority, SLA. Cron job runs hourly. Verify: Mock ConnectWise API responses (3 tickets), trigger sync, inbox has 3 items with sourceMetadata. +- [ ] **ConnectWise zero-ticket projects:** ConnectWise sync queries projects assigned to user with `/project/projects/:id/tickets` count=0. Creates InboxItem with content="Plan project: {projectName}" and sourceMetadata.connectwiseProjectId. Verify: Mock API returns project with zero tickets, sync creates planning task inbox item. +- [ ] **ConnectWise priority display:** Tasks sourced from ConnectWise store `connectwisePriority` and `connectwiseSLA` fields (strings). Displayed in UI for reference but do not affect scheduling priority (user manually sets `priority` integer). Verify: Process ConnectWise inbox item to task, task has `connectwisePriority="High"`, `priority=null`, user updates `priority=1`, scheduling uses user priority. +- [ ] **IMAP email connection:** `POST /api/v1/connections/email` with provider=IMAP accepts imapHost, imapPort, credentials, inboxFolder. `EmailSyncService` using `imap` library connects, fetches UNSEEN messages, creates InboxItem with content=email subject+body, sourceMetadata includes from, date. Marks email as SEEN. Cron job runs every 5 minutes. Verify: Mock IMAP responses, trigger sync, inbox items created from emails. +- [ ] **Microsoft Graph email connection:** `POST /api/v1/connections/email` with provider=MICROSOFT_GRAPH uses OAuth token to fetch messages from inbox folder via Graph API. Same InboxItem creation logic. Verify: Mock Graph API responses, sync creates inbox items. +- [ ] **Conflict detection on calendar sync:** After calendar sync, `ConflictDetectionService` compares new CalendarEvents against scheduled Tasks. If CalendarEvent overlaps Task's `scheduledStart`/`scheduledEnd` and task is not locked, marks task for rescheduling. Verify: Create scheduled task 10:00-11:00, sync calendar event 10:30-11:30, task marked for rescheduling.
+- [ ] **Automatic rescheduling on conflict:** `ReschedulingService` finds next available slot for displaced tasks (respecting working hours, context, priority), updates `scheduledStart`/`scheduledEnd`, creates ReschedulingEvent record with reason, original/new times. Verify: Trigger conflict, rescheduling runs, task moves to 11:30-12:30, ReschedulingEvent created. +- [ ] **WebSocket gateway setup:** NestJS WebSocket gateway (`@WebSocketGateway()`) with JWT authentication guard. Clients connect to `ws://localhost:3000`. On connection, server stores userId-to-socketId mapping. Verify: Frontend connects via socket.io-client, connection established, backend logs show the new WebSocket connection and its userId-to-socketId mapping. +- [ ] **Real-time rescheduling notifications:** When ReschedulingService reschedules task, emits WebSocket event `task:rescheduled` with payload `{taskId, originalStart, newStart, reason}` to user's socket. Frontend socket.io client listens, shows toast notification, refetches calendar via React Query invalidation. Verify: Open browser with WebSocket devtools, trigger reschedule via API, see WebSocket message received, toast appears. +- [ ] **Notification persistence:** All rescheduling events create Notification entity with type=RESCHEDULING, message, relatedEntityId=taskId. `GET /api/v1/notifications` returns unread notifications. `PATCH /api/v1/notifications/:id/read` marks as read. Frontend shows notification bell icon with count. Verify: Reschedule task, GET notifications returns 1 unread, click mark read, count decreases. +- [ ] **Email notification for rescheduling:** If user's `notificationPreferences.email=true`, `NotificationService` sends email via SMTP (nodemailer) with rescheduling details. Email template includes task title, old/new times, reason. Verify: Mock SMTP server (MailHog), trigger reschedule, check MailHog for email. +- [ ] **Webhook notification:** If `notificationPreferences.webhook=true` and `webhookUrl` configured, `NotificationService` posts JSON payload to webhook URL on rescheduling. Verify: Mock webhook endpoint (webhook.site), trigger reschedule, see POST request received with correct payload. +- [ ] **Weekly Review auto-scheduling:** On user creation or preferences update, `WeeklyReviewService` creates recurring Task with status=WEEKLY_REVIEW, `scheduledStart` at user-configured day/time (e.g., Friday 4 PM), duration 60 min, `isLocked=true`. Cron job checks weekly, ensures review task exists for upcoming week. Verify: Set weekly review time in preferences, trigger cron, GET tasks shows recurring review task at configured time. +- [ ] **Weekly Review interface:** `GET /api/v1/weekly-review` endpoint returns aggregated data: active projects count, projects without next actions (flagged), total Next Actions grouped by project, unprocessed inbox count, Waiting For items with overdue follow-ups, Someday/Maybe count. Frontend `/weekly-review` page displays checklist UI. Verify: Create 2 projects (1 with tasks, 1 without), 5 inbox items, GET weekly-review returns `{"activeProjects":2,"projectsWithoutNextActions":1,"inboxCount":5,...}`. +- [ ] **Weekly Review completion:** `POST /api/v1/weekly-review/complete` updates User.lastWeeklyReview timestamp. Verify: Complete review, GET user preferences shows lastWeeklyReview updated. +- [ ] **Rate limiting on external API calls:** Bull queue configured for calendar, ConnectWise, email sync jobs with concurrency=1, rate limiter (1 request per 2 seconds to respect API limits).
Verify: Queue 10 sync jobs, monitor logs, jobs execute sequentially with delay. +- [ ] **Error handling for external APIs:** Sync services wrap API calls in try-catch, log errors, create system Notification for user if sync fails (e.g., "ConnectWise sync failed: invalid credentials"). Failed jobs retry 3 times with exponential backoff (Bull retry strategy). Verify: Mock ConnectWise API to return 401, trigger sync, see error notification created, job retries logged. +- [ ] **Encryption for credentials:** Database credentials (IMAP passwords, API keys) encrypted with AES-256 using secret from environment variable before persisting. Decrypted on retrieval. Use `crypto` module or `@nestjs/config` with encryption utility. Verify: Inspect database, credentials columns show encrypted strings; GET connection via API returns functional connection (decrypt succeeds). + +**Phase 3 Verification Command:** + +```bash +TOKEN=$(curl -s -X POST http://localhost:3000/api/v1/auth/login \ + -d '{"email":"verify@test.com","password":"Test123!"}' \ + -H "Content-Type: application/json" | jq -r .token) && \ +curl -f -X POST -H "Authorization: Bearer $TOKEN" \ + http://localhost:3000/api/v1/connections/calendar \ + -d '{"provider":"CALDAV","calendarUrl":"http://mock","credentials":{"username":"test","password":"pass"}}' \ + -H "Content-Type: application/json" && \ +curl -f -X GET -H "Authorization: Bearer $TOKEN" \ + http://localhost:3000/api/v1/weekly-review | jq .inboxCount | grep -E '^[0-9]+$' && \ +echo "✓ Phase 3 Complete" +``` + +--- + +### Phase 4: Polish + +**Goal:** Comprehensive documentation, deployment hardening, performance optimization, accessibility audit, error boundary improvements, and final testing. Prepare for production self-hosted deployment. + +#### Completion Criteria + +- [ ] **README.md complete:** Root README includes: project overview, architecture diagram (Mermaid or ASCII), prerequisites (Docker, Docker Compose, Node.js for dev), quick-start instructions (clone, `cp .env.example .env`, `docker compose up`), environment variable documentation (all required vars listed with descriptions), default ports, access URLs. Verify: Fresh clone on new machine, follow README, app starts successfully. +- [ ] **Docker Compose production configuration:** `docker-compose.prod.yml` with: PostgreSQL persistent volume, Redis persistent volume, backend health checks, restart policies (`unless-stopped`), resource limits (CPU/memory), nginx serving frontend static files with gzip, security headers (Helmet). Verify: `docker compose -f docker-compose.prod.yml up -d`, all services start with resource constraints visible in `docker stats`. +- [ ] **Environment variable validation:** Backend startup validates required env vars (DATABASE_URL, REDIS_URL, JWT_SECRET, ENCRYPTION_KEY) using `@nestjs/config` with Joi schema. Missing vars log error and exit process. Verify: Remove DATABASE_URL from `.env`, start backend, see error "Missing required env var: DATABASE_URL" and exit code 1. +- [ ] **Database migrations documentation:** `packages/backend/README.md` documents migration workflow: `pnpm migration:generate -n MigrationName`, `pnpm migration:run`, `pnpm migration:revert`. Includes note on production migrations (run before deploying new backend version). 
Verify: Generate dummy migration, run it, verify in DB, re \ No newline at end of file diff --git a/README.md b/README.md new file mode 100644 index 0000000..e13dc09 --- /dev/null +++ b/README.md @@ -0,0 +1,49 @@ +# nick-tracker + +Multi-source task capture system that ingests tasks from manual web form, REST API, email (IMAP/Microsoft Graph), and ConnectWise Manage sync into an unprocessed inbox for later GTD clarification + +## Tech Stack + +- **Language**: TypeScript +- **Runtime**: Node.js 20 +- **Framework**: NestJS 10 +- **Testing**: Jest (backend), Vitest (frontend) +- **Build**: Docker Compose, pnpm workspaces, tsup + +## Getting Started + +1. Read `PROMPT.md` for full project requirements +2. Follow `GUIDE.md` for step-by-step instructions +3. Track progress in `prd.json` + +## Development with Ralph Method + +```bash +# Start a Ralph loop to implement this project +/ralph-wiggum:ralph-loop "$(cat PROMPT.md)" --max-iterations 50 --completion-promise "PROJECT_COMPLETE" +``` + +## Project Structure + +``` +. +├── PROMPT.md # Main Ralph prompt +├── prd.json # Feature tracking +├── progress.txt # Progress log +├── GUIDE.md # Step-by-step guide +├── CLAUDE.md # Claude Code config +├── docs/ # Documentation +│ ├── idea-dump.md +│ ├── architecture.md +│ ├── features.md +│ └── ... +├── agent_docs/ # Agent context +│ ├── tech_stack.md +│ ├── code_patterns.md +│ └── testing.md +└── src/ # Source code +``` + +--- + +*Generated with [Ralph PRD Generator](https://github.com/your-username/ralph-vibe)* diff --git a/agent_docs/code_patterns.md b/agent_docs/code_patterns.md new file mode 100644 index 0000000..1fca143 --- /dev/null +++ b/agent_docs/code_patterns.md @@ -0,0 +1,27 @@ +# Code Patterns + +## Project Conventions + + +### TypeScript Guidelines + +- Use strict mode +- No `any` types +- Use interfaces for object shapes +- Use type guards for narrowing +- Export types from `types/index.ts` + +### API Patterns + +- Use middleware for cross-cutting concerns +- Validate all input +- Return consistent error responses +- Use proper HTTP status codes +- Log all requests + +## Error Handling + +- Always catch and handle errors +- Log errors with context +- Return user-friendly messages +- Never expose internal details diff --git a/agent_docs/tech_stack.md b/agent_docs/tech_stack.md new file mode 100644 index 0000000..e4e166d --- /dev/null +++ b/agent_docs/tech_stack.md @@ -0,0 +1,51 @@ +# Tech Stack Decisions + +# Tech Stack + +- **Language**: TypeScript +- **Runtime**: Node.js 20 +- **Framework**: NestJS 10 +- **Testing**: Jest (backend), Vitest (frontend) +- **Build Tool**: Docker Compose, pnpm workspaces, tsup + +## Libraries + +- TypeORM +- PostgreSQL +- @nestjs/jwt +- @nestjs/passport +- @nestjs/websockets +- @nestjs/platform-socket.io +- @nestjs/schedule +- @nestjs/config +- class-validator +- class-transformer +- bcrypt +- date-fns +- ical.js +- node-caldav +- @microsoft/microsoft-graph-client +- imap +- axios +- bull +- ioredis +- helmet +- express-rate-limit +- React 18 +- Vite 5 +- React Router +- TanStack Query (React Query) +- Zustand +- FullCalendar +- react-beautiful-dnd +- socket.io-client +- date-fns +- React Hook Form +- Zod +- TailwindCSS +- Radix UI + + +## Rationale + +TypeScript provides type safety across full stack for complex scheduling logic and GTD workflows. NestJS offers robust REST API structure, dependency injection for external integrations (CalDAV, ConnectWise, IMAP, Microsoft Graph), scheduled jobs for recurring reviews, and WebSocket support. 
React with a calendar library (FullCalendar or react-big-calendar) handles the interactive drag-drop week view. PostgreSQL over SQLite for better concurrency with multiple capture sources and complex scheduling queries. Docker Compose orchestrates backend, database, and frontend nginx container for simple self-hosted deployment. diff --git a/agent_docs/testing.md b/agent_docs/testing.md new file mode 100644 index 0000000..bc73659 --- /dev/null +++ b/agent_docs/testing.md @@ -0,0 +1,41 @@ +# Testing Guide + +## Framework + +Jest (backend), Vitest (frontend) + +## Test Structure + +``` +tests/ +├── unit/ # Unit tests +├── integration/ # Integration tests +└── e2e/ # End-to-end tests +``` + +## Running Tests + + +```bash +# Run all tests +npm run test + +# Run with coverage +npm run test -- --coverage + +# Run specific test file +npm run test -- path/to/test.ts +``` + +## Coverage Requirements + +- Minimum 80% coverage +- All public APIs must be tested +- All error paths must be tested + +## Test Patterns + +- Arrange-Act-Assert pattern +- One assertion per test when possible +- Descriptive test names +- Mock external dependencies diff --git a/docs/architecture.md b/docs/architecture.md new file mode 100644 index 0000000..c77d93d --- /dev/null +++ b/docs/architecture.md @@ -0,0 +1,33 @@ +# Architecture + +## Application Type + +**web** + +The application is explicitly described as a 'self-hosted web application' with a React or Vue SPA frontend and calendar week view interface, accessed via browser rather than native desktop or mobile apps. + +## Interface Types + +- rest_api +- websocket + +REST API is required for external integrations (iOS Shortcuts, CLI scripts, browser extensions, capture endpoint). WebSocket enables real-time updates for automatic task rescheduling when calendar conflicts arise and notifications for displaced tasks without page refresh. + +## Persistence + +**local_db** + +Designed for self-hosted deployment with SQLite or PostgreSQL options specified. While PostgreSQL could be remote_db, the self-hosted context and SQLite option indicate local database persistence on the same infrastructure where the app runs. + +## Deployment + +**self_hosted** + +Explicitly stated as 'self-hosted' with containerized Docker Compose deployment for user-managed infrastructure, not cloud platforms or app stores. Users deploy and maintain on their own servers. + +## Suggested Tech Stack + +- **Language**: TypeScript +- **Framework**: Node.js (NestJS backend) + React (Vite frontend) + PostgreSQL + +TypeScript provides type safety across full stack for complex scheduling logic and GTD workflows. NestJS offers robust REST API structure, dependency injection for external integrations (CalDAV, ConnectWise, IMAP, Microsoft Graph), scheduled jobs for recurring reviews, and WebSocket support. React with a calendar library (FullCalendar or react-big-calendar) handles the interactive drag-drop week view. PostgreSQL over SQLite for better concurrency with multiple capture sources and complex scheduling queries. Docker Compose orchestrates backend, database, and frontend nginx container for simple self-hosted deployment. 
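
As a concrete illustration of the WebSocket interface choice above, the sketch below shows one way the backend could push reschedule notifications to a connected browser session. The gateway and method names are assumptions for illustration, and JWT validation is reduced to a placeholder; only the `task:rescheduled` event name and per-user delivery come from the spec.

```typescript
// task-events.gateway.ts — illustrative sketch, not the implementation.
import {
  OnGatewayConnection,
  WebSocketGateway,
  WebSocketServer,
} from '@nestjs/websockets';
import { Server, Socket } from 'socket.io';

@WebSocketGateway({ cors: true })
export class TaskEventsGateway implements OnGatewayConnection {
  @WebSocketServer()
  server!: Server;

  handleConnection(client: Socket) {
    // The real app would verify the JWT from the handshake; here we just read
    // a userId from the auth payload and join a per-user room.
    const userId = client.handshake.auth?.userId as string | undefined;
    if (!userId) {
      client.disconnect(true);
      return;
    }
    client.join(`user:${userId}`);
  }

  // Called by the rescheduling logic after it moves a task.
  notifyRescheduled(
    userId: string,
    payload: { taskId: string; originalStart: string; newStart: string; reason: string },
  ) {
    this.server.to(`user:${userId}`).emit('task:rescheduled', payload);
  }
}
```
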
diff --git a/docs/data-models.md b/docs/data-models.md new file mode 100644 index 0000000..90590a9 --- /dev/null +++ b/docs/data-models.md @@ -0,0 +1,233 @@ +# Data Models + + +## User + +| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| email | string | Yes | +| passwordHash | string | Yes | +| name | string | Yes | +| timezone | string | Yes | +| workingHours | JSON | Yes | +| notificationPreferences | JSON | Yes | +| lastWeeklyReview | timestamp | No | +| createdAt | timestamp | Yes | +| updatedAt | timestamp | Yes | + +### Relationships + +- Has many InboxItems +- Has many Tasks +- Has many Projects +- Has many CalendarConnections + + +## InboxItem + +| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| userId | UUID | Yes | +| content | text | Yes | +| source | enum | Yes | +| sourceMetadata | JSON | No | +| processed | boolean | Yes | +| processedAt | timestamp | No | +| createdAt | timestamp | Yes | + +### Relationships + +- Belongs to User +- Can convert to Task or Project + + +## Task + +| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| userId | UUID | Yes | +| projectId | UUID | No | +| title | string | Yes | +| description | text | No | +| status | enum | Yes | +| context | enum | No | +| domain | enum | Yes | +| priority | integer | No | +| dueDate | timestamp | No | +| estimatedDuration | integer | No | +| scheduledStart | timestamp | No | +| scheduledEnd | timestamp | No | +| isLocked | boolean | Yes | +| completedAt | timestamp | No | +| connectwiseTicketId | string | No | +| connectwisePriority | string | No | +| connectwiseSLA | string | No | +| waitingForDetails | text | No | +| followUpDate | timestamp | No | +| ticklerDate | timestamp | No | +| createdAt | timestamp | Yes | +| updatedAt | timestamp | Yes | + +### Relationships + +- Belongs to User +- Belongs to Project (optional) +- Has many ReschedulingEvents + + +## Project + +| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| userId | UUID | Yes | +| name | string | Yes | +| description | text | No | +| desiredOutcome | text | No | +| domain | enum | Yes | +| status | enum | Yes | +| connectwiseProjectId | string | No | +| completedAt | timestamp | No | +| createdAt | timestamp | Yes | +| updatedAt | timestamp | Yes | + +### Relationships + +- Belongs to User +- Has many Tasks +- Has many ReferenceMaterials + + +## ReferenceMaterial + +| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| projectId | UUID | Yes | +| title | string | Yes | +| content | text | No | +| url | string | No | +| filePath | string | No | +| createdAt | timestamp | Yes | + +### Relationships + +- Belongs to Project + + +## CalendarConnection + +| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| userId | UUID | Yes | +| provider | enum | Yes | +| calendarUrl | string | No | +| credentials | JSON | Yes | +| isActive | boolean | Yes | +| lastSyncAt | timestamp | No | +| createdAt | timestamp | Yes | + +### Relationships + +- Belongs to User +- Has many CalendarEvents + + +## CalendarEvent + +| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| calendarConnectionId | UUID | Yes | +| externalId | string | Yes | +| title | string | Yes | +| startTime | timestamp | Yes | +| endTime | timestamp | Yes | +| isAllDay | boolean | Yes | +| syncedAt | timestamp | Yes | + +### Relationships + +- Belongs to CalendarConnection + + +## ConnectWiseConnection + 
+| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| userId | UUID | Yes | +| companyId | string | Yes | +| publicKey | string | Yes | +| privateKey | string | Yes | +| apiUrl | string | Yes | +| memberId | string | Yes | +| isActive | boolean | Yes | +| lastSyncAt | timestamp | No | +| createdAt | timestamp | Yes | + +### Relationships + +- Belongs to User + + +## EmailConnection + +| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| userId | UUID | Yes | +| provider | enum | Yes | +| imapHost | string | No | +| imapPort | integer | No | +| credentials | JSON | Yes | +| inboxFolder | string | Yes | +| isActive | boolean | Yes | +| lastCheckAt | timestamp | No | +| createdAt | timestamp | Yes | + +### Relationships + +- Belongs to User + + +## ReschedulingEvent + +| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| taskId | UUID | Yes | +| originalStart | timestamp | Yes | +| originalEnd | timestamp | Yes | +| newStart | timestamp | Yes | +| newEnd | timestamp | Yes | +| reason | string | Yes | +| notificationSent | boolean | Yes | +| createdAt | timestamp | Yes | + +### Relationships + +- Belongs to Task + + +## Notification + +| Field | Type | Required | +|-------|------|----------| +| id | UUID | Yes | +| userId | UUID | Yes | +| type | enum | Yes | +| title | string | Yes | +| message | text | Yes | +| relatedEntityId | UUID | No | +| read | boolean | Yes | +| sentAt | timestamp | Yes | + +### Relationships + +- Belongs to User + diff --git a/docs/features.md b/docs/features.md new file mode 100644 index 0000000..799cf48 --- /dev/null +++ b/docs/features.md @@ -0,0 +1,207 @@ +# Features + + +## 1. GTD Inbox Capture + +**ID**: `gtd_inbox_capture` + +Multi-source task capture system that ingests tasks from manual web form, REST API, email (IMAP/Microsoft Graph), and ConnectWise Manage sync into an unprocessed inbox for later GTD clarification + +### User Story + +As a user, I want to capture tasks from multiple sources into a single inbox, so that I can process them later using GTD methodology without losing any input + +### Acceptance Criteria + +- [ ] Manual tasks can be submitted via web form quick-add and appear in inbox +- [ ] REST API endpoint accepts task capture from external tools and creates inbox items +- [ ] IMAP or Microsoft Graph email monitor converts incoming emails to inbox items +- [ ] ConnectWise Manage sync creates inbox items for new service/project tickets and zero-ticket projects +- [ ] All inbox items retain source metadata (timestamp, origin, attachments) until processed + + +## 2. 
GTD Processing Workflow + +**ID**: `gtd_processing_workflow` + +Interactive inbox processing interface that guides users through GTD clarification: converting raw inbox items into Next Actions with context tags, Projects, Waiting For items, Someday/Maybe, Reference Material, Tickler items, or Trash + +### User Story + +As a user, I want to process inbox items using GTD decision tree, so that each item becomes actionable or is appropriately filed + +### Acceptance Criteria + +- [ ] Inbox view displays unprocessed items with processing workflow controls +- [ ] User can clarify items into: Next Action (@context), Project, Waiting For, Someday/Maybe, Reference, Tickler, or Trash +- [ ] Next Actions require context label assignment (@desk, @phone, @errand, @homelab, @anywhere) +- [ ] Waiting For items accept optional follow-up date +- [ ] Tickler items accept future date and automatically surface to inbox on that date +- [ ] Processed items disappear from inbox and appear in appropriate GTD list + + +## 3. ConnectWise Manage Integration + +**ID**: `connectwise_integration` + +Read-only sync from ConnectWise Manage that imports service tickets, project tickets, and projects assigned to user. Projects with zero tickets surface as planning tasks. ConnectWise priority/SLA displayed for reference only; user assigns manual priority + +### User Story + +As a ConnectWise user, I want my assigned tickets and projects to flow into my GTD system automatically, so that I can manage work context alongside personal tasks + +### Acceptance Criteria + +- [ ] ConnectWise API integration syncs assigned service tickets as inbox items +- [ ] ConnectWise project tickets sync as inbox items with project association +- [ ] ConnectWise projects with zero tickets create planning task inbox items +- [ ] ConnectWise priority and SLA data displayed on task for reference but does not affect scheduling +- [ ] User can manually assign priority to all ConnectWise-sourced tasks +- [ ] Sync runs on configurable schedule and detects ticket status changes + + +## 4. Intelligent Calendar Scheduling + +**ID**: `intelligent_calendar_scheduling` + +Automatic scheduling engine that pulls from CalDAV calendars (Nextcloud, Google Calendar, Outlook via Microsoft Graph) and places actionable tasks into available time slots, respecting working hours, context constraints, deadlines, and manual priority. Supports drag-drop manual override and task locking + +### User Story + +As a user, I want the system to automatically schedule my next actions into my calendar based on context and availability, so that I have a realistic daily plan without manual time blocking + +### Acceptance Criteria + +- [ ] Engine reads existing events from CalDAV/Google/Outlook calendars +- [ ] Tasks scheduled only during user-defined working hours per day-of-week +- [ ] Work-context tasks (@desk, @phone) constrained to work hours; personal tasks schedulable anytime +- [ ] Scheduling batches tasks by context (consecutive @phone calls, grouped deep work) +- [ ] Manual priority rankings respected; deadlines enforced +- [ ] When calendar conflicts arise from new meetings, displaced tasks automatically reschedule +- [ ] Users can drag-drop tasks to override placement and lock tasks to fixed slots +- [ ] Weekly Review block auto-scheduled at recurring time + + +## 5. Interactive Calendar Week View + +**ID**: `calendar_week_view` + +React SPA with interactive week-view calendar displaying scheduled tasks and calendar events. 
Supports drag-and-drop task rescheduling, manual time adjustments, and real-time updates when scheduling changes occur + +### User Story + +As a user, I want a visual week view of my scheduled tasks and events with drag-and-drop editing, so that I can see and adjust my plan at a glance + +### Acceptance Criteria + +- [ ] Week view renders all scheduled tasks and synced calendar events +- [ ] Tasks draggable to different time slots and days +- [ ] Drag-drop updates trigger rescheduling of dependent tasks +- [ ] Tasks visually color-coded by context (@desk, @phone, etc.) +- [ ] Locked tasks display lock indicator and resist auto-rescheduling +- [ ] Real-time updates via WebSocket when scheduler makes changes +- [ ] Click-to-edit task details and priority in modal + + +## 6. Weekly Review Interface + +**ID**: `weekly_review_interface` + +Dedicated GTD Weekly Review interface with auto-scheduled recurring review block. Shows all active projects, next actions per project, unprocessed inbox count, waiting-for items, and someday/maybe list for systematic review + +### User Story + +As a GTD practitioner, I want a comprehensive weekly review interface, so that I can maintain clarity on all commitments and keep my system current + +### Acceptance Criteria + +- [ ] Weekly Review block auto-scheduled at user-configured recurring time +- [ ] Review interface displays all active projects with status +- [ ] Next actions grouped by project with completion status +- [ ] Unprocessed inbox item count prominently displayed +- [ ] Waiting For items listed with follow-up dates +- [ ] Someday/Maybe list accessible for potential activation +- [ ] Review completion checkbox updates last-review timestamp + + +## 7. GTD Contexts and Life Domains + +**ID**: `gtd_contexts_and_domains` + +Context label system (@desk, @phone, @errand, @homelab, @anywhere) and domain organization covering Work (ConnectWise tasks), Homelab (Proxmox, networking, 3D printing, NAS), Daily Routines (meals, exercise, supplements), House (maintenance, errands, cleaning), and Professional Development (Azure certification) + +### User Story + +As a user, I want to organize tasks by context and life domain, so that I can work efficiently based on my current situation and maintain balance across life areas + +### Acceptance Criteria + +- [ ] Tasks taggable with context labels (@desk, @phone, @errand, @homelab, @anywhere) +- [ ] Tasks assignable to domains: Work, Homelab, Daily Routines, House, Professional Development +- [ ] Filtering views by context show only relevant tasks +- [ ] Scheduling engine respects context constraints (work contexts during work hours) +- [ ] Domain views aggregate tasks and projects per life area +- [ ] Context-based batching groups similar tasks in schedule + + +## 8. Waiting For and Tickler System + +**ID**: `waiting_for_and_tickler` + +Waiting For list tracks items delegated or awaiting external input with optional follow-up dates. 
Tickler/Deferred items stored with future activation dates and automatically surface to inbox on specified date + +### User Story + +As a user, I want to track delegated tasks and future reminders, so that I follow up appropriately and activate tasks at the right time + +### Acceptance Criteria + +- [ ] Waiting For list displays all items awaiting external action +- [ ] Waiting For items accept optional follow-up date +- [ ] Items with follow-up dates highlighted when date arrives +- [ ] Tickler items stored with future date and hidden until activation +- [ ] Scheduled job checks daily for Tickler items reaching activation date +- [ ] Activated Tickler items automatically appear in inbox for processing + + +## 9. GTD Project Management + +**ID**: `project_management` + +Project hierarchy supporting multi-step outcomes with next actions, reference material attachments, notes, and project status tracking. ConnectWise projects with zero tickets surface as planning tasks requiring work breakdown + +### User Story + +As a user, I want to manage projects with next actions and reference materials, so that I maintain forward motion on multi-step goals + +### Acceptance Criteria + +- [ ] Projects created with name, description, desired outcome, and domain +- [ ] Each project can have multiple next actions with completion tracking +- [ ] Reference material (files, links, notes) attachable to projects +- [ ] Project status tracked (active, on-hold, completed) +- [ ] Projects without next actions flagged in Weekly Review +- [ ] ConnectWise projects with zero tickets create planning task to define work breakdown +- [ ] Project completion requires all next actions completed + + +## 10. Notifications and Rescheduling Alerts + +**ID**: `notifications_and_rescheduling` + +Real-time notification system via WebSocket, email, and optional webhook when automatic rescheduling occurs due to calendar conflicts, when Waiting For follow-ups are due, or when Tickler items activate + +### User Story + +As a user, I want to be notified when my schedule changes automatically, so that I stay aware of my updated commitments + +### Acceptance Criteria + +- [ ] WebSocket push notifications to active browser sessions when tasks reschedule +- [ ] Email notifications sent for rescheduling events if user preferences allow +- [ ] Webhook endpoint configurable for external notification integrations +- [ ] Notification includes displaced task, original time, new time, and reason +- [ ] Waiting For follow-up notifications sent when follow-up date arrives +- [ ] Tickler activation notifications sent when item surfaces to inbox +- [ ] Notification preferences configurable per notification type + diff --git a/docs/idea-dump.md b/docs/idea-dump.md new file mode 100644 index 0000000..f8f8e1a --- /dev/null +++ b/docs/idea-dump.md @@ -0,0 +1,12 @@ +# Original Idea + +AutoScheduler: Self-Hosted GTD Task Scheduler with ConnectWise Integration +AutoScheduler is a self-hosted web application that implements the Getting Things Done methodology with automatic calendar scheduling. It ingests tasks from multiple sources: a manual capture inbox, email (via IMAP or Microsoft Graph API), and read-only sync from ConnectWise Manage (service tickets, project tickets, and projects assigned to the user). Projects imported from ConnectWise with zero tickets surface as planning tasks requiring work breakdown. 
The system supports full GTD constructs: Inbox for capture, Next Actions with context labels (@desk, @phone, @errand, @homelab, @anywhere), Waiting For items with optional follow-up dates, Someday/Maybe for uncommitted ideas, Reference Material attachments on projects, and Tickler/Deferred items that surface to inbox on a specified date. All task priorities are manually assigned by the user; ConnectWise priority/SLA data is displayed for reference only. + +The scheduling engine pulls from CalDAV-compatible calendars (Nextcloud, Google Calendar via API, or Microsoft Graph for Outlook) and places actionable tasks into available time slots. Scheduling respects user-defined working hours per day-of-week, with work-context tasks constrained to work hours and personal tasks schedulable outside them. The engine batches tasks by context when possible (consecutive @phone calls, grouped deep work blocks) while respecting deadlines and manual priority rankings. When calendar conflicts arise from new meetings, displaced tasks automatically reschedule to the next available slot. Users can manually override any placement via drag-drop, and lock specific tasks to fixed time slots. A recurring Weekly Review block is auto-scheduled, with a dedicated review interface showing all active projects, next actions per project, unprocessed inbox items, waiting-for items, and the someday/maybe list. + +The personal life domain covers four areas: Homelab (Proxmox, networking, 3D printing, NAS projects), Daily Routines (meals, exercise, supplements), House (maintenance, errands, cleaning), and Professional Development (Microsoft Azure certification study). Work domain tasks flow from ConnectWise Manage and email, processed through the GTD inbox. The tech stack should prioritise self-hosting simplicity: containerised deployment (Docker Compose), SQLite or PostgreSQL for persistence, and a React or Vue SPA frontend with a calendar week view. Authentication via local accounts or OIDC. Notifications for rescheduling events via webhook or email. + +Capture Methods +The system must support multiple task capture mechanisms for the GTD inbox: a quick-add form in the web UI for manual entry, a REST API endpoint for external integrations (iOS Shortcuts, CLI scripts, browser extensions), an email-to-inbox address or IMAP folder monitor that converts emails to inbox items, and optionally a Telegram/Signal bot or webhook receiver for mobile capture. Inbox items arrive as raw text with optional metadata (source, timestamp, attached links) and remain unprocessed until the user clarifies them into actionable tasks, projects, reference material, or trash during the processing workflow. 
+ diff --git a/docs/interfaces.md b/docs/interfaces.md new file mode 100644 index 0000000..e2ffe5f --- /dev/null +++ b/docs/interfaces.md @@ -0,0 +1,62 @@ +# Interface Contracts + +## REST API + +**Type**: rest_endpoints + +### Endpoints + +| Method | Path | Description | +|--------|------|-------------| +| POST | /api/v1/auth/register | Register new user account | +| POST | /api/v1/auth/login | Authenticate user and return JWT | +| POST | /api/v1/auth/refresh | Refresh JWT token | +| GET | /api/v1/inbox | Get all unprocessed inbox items | +| POST | /api/v1/inbox | Create inbox item via quick capture | +| POST | /api/v1/inbox/:id/process | Process inbox item into task/project/etc | +| DELETE | /api/v1/inbox/:id | Delete/trash inbox item | +| GET | /api/v1/tasks | Get tasks with filtering (context, status, domain) | +| POST | /api/v1/tasks | Create new task | +| PATCH | /api/v1/tasks/:id | Update task details, priority, or status | +| DELETE | /api/v1/tasks/:id | Delete task | +| POST | /api/v1/tasks/:id/schedule | Manually schedule or reschedule task | +| POST | /api/v1/tasks/:id/lock | Lock task to fixed time slot | +| POST | /api/v1/tasks/:id/unlock | Unlock task for auto-scheduling | +| GET | /api/v1/tasks/waiting-for | Get all Waiting For items | +| GET | /api/v1/tasks/someday-maybe | Get Someday/Maybe list | +| GET | /api/v1/tasks/tickler | Get future Tickler items | +| GET | /api/v1/projects | Get all projects | +| POST | /api/v1/projects | Create new project | +| PATCH | /api/v1/projects/:id | Update project details or status | +| DELETE | /api/v1/projects/:id | Delete project | +| GET | /api/v1/projects/:id/tasks | Get all tasks for a project | +| POST | /api/v1/projects/:id/reference | Add reference material to project | +| GET | /api/v1/calendar/events | Get calendar events for date range | +| GET | /api/v1/schedule | Get scheduled tasks for date range | +| POST | /api/v1/schedule/regenerate | Trigger full schedule regeneration | +| GET | /api/v1/weekly-review | Get Weekly Review data | +| POST | /api/v1/weekly-review/complete | Mark Weekly Review as completed | +| GET | /api/v1/connections/calendar | Get calendar connections | +| POST | /api/v1/connections/calendar | Create calendar connection | +| PATCH | /api/v1/connections/calendar/:id | Update calendar connection | +| DELETE | /api/v1/connections/calendar/:id | Remove calendar connection | +| POST | /api/v1/connections/calendar/:id/sync | Trigger manual calendar sync | +| GET | /api/v1/connections/connectwise | Get ConnectWise connection status | +| POST | /api/v1/connections/connectwise | Create ConnectWise connection | +| PATCH | /api/v1/connections/connectwise/:id | Update ConnectWise connection | +| DELETE | /api/v1/connections/connectwise/:id | Remove ConnectWise connection | +| POST | /api/v1/connections/connectwise/:id/sync | Trigger manual ConnectWise sync | +| GET | /api/v1/connections/email | Get email connection status | +| POST | /api/v1/connections/email | Create email connection | +| PATCH | /api/v1/connections/email/:id | Update email connection | +| DELETE | /api/v1/connections/email/:id | Remove email connection | +| GET | /api/v1/notifications | Get user notifications | +| PATCH | /api/v1/notifications/:id/read | Mark notification as read | +| GET | /api/v1/user/preferences | Get user preferences and settings | +| PATCH | /api/v1/user/preferences | Update user preferences | + +## WebSocket Events + +**Type**: websocket_events + + diff --git a/docs/research-notes.md b/docs/research-notes.md new file mode 
100644 index 0000000..c18ec85 --- /dev/null +++ b/docs/research-notes.md @@ -0,0 +1,1239 @@ +# Research Notes + +## Query: NestJS TypeScript REST API WebSocket architecture best practices 2026 + +For a modern NestJS backend that serves both a **REST API** and **WebSockets**, treat HTTP and WS as first‑class but separate concerns, built on a modular, layered architecture, with clear domain boundaries, shared application services, and infrastructure focused on scaling (Redis, microservices) and robustness. + +Below is an architecture and best‑practices checklist you can use as a reference design for 2025–2026 projects. + +--- + +## 1. High‑level architecture + +**Core principles** + +- **Hexagonal / clean architecture**: + - **Domain layer**: entities, value objects, domain services (framework‑agnostic). + - **Application layer**: use cases/services orchestrating domain logic. + - **Interface layer**: REST controllers, WebSocket gateways, GraphQL resolvers, etc. + - **Infrastructure layer**: database adapters, message brokers, cache, 3rd‑party APIs. + +- **Nest modules for vertical slicing** + - One module per domain (e.g. `UsersModule`, `ChatModule`, `OrdersModule`), each exposing: + - REST **controllers** for HTTP + - **gateways** for WebSocket events + - shared **providers** (services, repositories). + - This modular approach is explicitly recommended for scalable NestJS apps.[7][8] + +- **Transport separation, logic reuse** + - REST controllers and WebSocket gateways should **call the same application services**, not duplicate business logic. + - Gateways are just another interface adapter, as described in the Nest docs on WebSocket gateways.[6] + +--- + +## 2. REST API best practices (NestJS + TypeScript) + +- **Controllers** + - Thin controllers that: + - Map DTOs to method arguments. + - Delegate to service/use‑case classes. + - Map results to HTTP responses. + - Use route versioning (`/v1`, `/v2`) and Nest’s versioning support for breaking changes. + +- **DTOs, validation, transformation** + - Use `class-validator` + `class-transformer` and global `ValidationPipe` for all REST inputs. + - Keep REST DTOs separate from WebSocket payload DTOs when they differ. + +- **Error handling** + - Use **global exception filter** (e.g. `HttpExceptionFilter`) to normalize API error shape. + - Map domain errors to HTTP status codes consistently. + +- **Security** + - JWT or OAuth2 for auth, Nest Guards for authorization. + - Rate‑limit sensitive endpoints with Nest interceptors or a gateway like API Gateway / NGINX. + +--- + +## 3. WebSocket architecture in NestJS + +### 3.1. Gateways as interface layer + +- **Use Nest WebSocket gateways** + - A gateway class annotated with `@WebSocketGateway()` is the entry point for WS connections.[6] + - Example (Socket.IO): + + ```ts + @WebSocketGateway({ namespace: '/chat', cors: true }) + export class ChatGateway { + @WebSocketServer() server: Server; + + @SubscribeMessage('message') + handleMessage( + @MessageBody() data: string, + @ConnectedSocket() client: Socket, + ) { + client.broadcast.emit('message', data); + } + } + ``` + + This pattern is standard in 2024–2025 NestJS WebSocket guides.[2][3][6] + +- **Lifecycle hooks** + - Implement `afterInit`, `handleConnection`, `handleDisconnect` to manage: + - Connection registry (online presence). 
+ - Resource allocation / cleanup.[1][2][3][6] + +- **Namespaces and rooms** + - Use **namespaces** per domain (`/chat`, `/notifications`, `/trading`) and **rooms** per context (user, group, document) to keep broadcasting efficient.[2] + +### 3.2. WebSocket best practices (2025–2026) + +**Security**[1][2][3][4] +- Authenticate during the **handshake**: + - Pass JWT in query, header, or cookie; validate in a guard or middleware. +- Authorize events: + - Use per‑event guards or authorization services. + - Prevent clients from subscribing to rooms they are not allowed to see. +- Validate all payloads: + - Use pipes or schema validation (e.g. Zod) for WS events, similar to HTTP.[4] + +**Connection handling**[1][2][3] +- Track connections per user (userId → socketIds). +- Clean up on `handleDisconnect` (presence, locks, subscriptions). +- Avoid long‑running work in WS handlers; delegate to async services/queues. + +**Performance & scalability**[1][2][3][7] +- Offload heavy work to: + - Background jobs (Bull/BullMQ), microservices, or message queues. +- Use **Redis pub/sub** or similar for horizontal scaling: + - Shared adapter for Socket.IO so messages propagate across instances.[1][2] +- Monitor: + - Event loop lag, memory, open sockets, message throughput.[2][7] + +**Resilience**[1][2] +- Configure client‑side **reconnect** and exponential backoff. +- Handle network partitions, stale sockets, and replay/duplicate events at the application layer. + +--- + +## 4. Integrating REST and WebSockets cleanly + +- **Pattern: REST for CRUD, WS for realtime** + - REST: + - Resource creation, updates, queries, pagination. + - WebSocket: + - Push updates (notifications, status changes, live data). + - Collaborative operations (presence, typing indicators, etc.). + +- **Event flow (typical pattern)** + 1. Client calls REST API to change state (e.g. create message). + 2. Application service persists change, publishes domain event. + 3. WS gateway (or a microservice) listens to events and **broadcasts** to interested clients (rooms/users). + +- **Avoid duplicating write paths** + - Prefer **one canonical mutation path** (often REST or a command bus) and use WS mostly for reads/updates propagation. + - If you accept writes over WS (e.g. chat messages), a service should handle both HTTP and WS commands identically. + +--- + +## 5. Scaling and microservices + +- **Nest microservices** + - Nest supports microservices and multiple transports (Redis, NATS, Kafka, MQTT, etc.).[5] + - Use microservices for: + - High‑traffic domains (chat, notifications, analytics). + - Decoupling heavy real‑time loads from main REST API. + +- **Pattern** + - API gateway / BFF exposes REST + WebSocket. + - Internal microservices handle business logic and communicate via message broker. + - Gateway uses Redis pub/sub or broker to broadcast results to WS clients. + +--- + +## 6. Cross‑cutting concerns (2026‑ready) + +- **Type safety** + - Use TypeScript generics and shared **contract types** for REST DTOs and WS payloads. + - Consider a shared package (`@app/contracts`) for request/response/event types. + +- **Logging and observability**[1][2][3][7] + - Structured logging from controllers and gateways (requestId / correlationId). + - Metrics: number of connections, messages per second, failed events, response times. + - Tracing: OpenTelemetry across REST and WS flows. + +- **Configuration & environment** + - Use Nest `ConfigModule` for environment‑based configuration (DB/Redis URLs, CORS, allowed origins). 
+ - Keep WS CORS rules aligned with HTTP CORS. + +- **Testing** + - Unit test: + - Services and domain logic in isolation. + - Integration test: + - REST endpoints via `@nestjs/testing`. + - WebSockets using Socket.IO client in tests to assert message flows. + +--- + +## 7. Practical checklist + +When designing a NestJS REST + WebSocket service for 2026, ensure: + +- **Architecture** + - Domain‑driven modules, clean layering, shared services. +- **REST** + - Versioned, validated, documented (Swagger), with unified error format. +- **WebSockets** + - Gateway per domain, namespaces/rooms, lifecycle hooks, validated payloads, authenticated handshakes.[1][2][3][6] +- **Scaling** + - Redis adapter for WS, microservices or queues for heavy tasks.[1][2][5][7] +- **Security & robustness** + - Guards, rate‑limit on HTTP, message validation, connection management, observability.[1][2][3][4][7][8] + +If you share your specific use case (e.g., trading app, collaborative editor, chat, IoT), I can propose a concrete module layout and message/REST design aligned with these practices. + +### Sources + +- [Source 1](https://saigon.digital/blog/implementing-websockets-with-nestj/) +- [Source 2](https://www.videosdk.live/developer-hub/websocket/nest-js-websocket) +- [Source 3](https://dev.to/ezilemdodana/real-time-applications-with-nestjs-and-websockets-5afk) +- [Source 4](https://blog.stackademic.com/nestjs-websockets-authentication-authorization-pipes-f18605e3c393) +- [Source 5](https://docs.nestjs.com/microservices/basics) +- [Source 6](https://docs.nestjs.com/websockets/gateways) +- [Source 7](https://www.brilworks.com/blog/optimize-your-nest-js-app-performance/) +- [Source 8](https://fabwebstudio.com/blog/building-scalable-and-secure-enterprise-applications-with-the-nest-js-framework) + +## Query: React Vite calendar drag-drop FullCalendar react-big-calendar performance optimization 2026 + +For heavy drag-and-drop calendar UIs in React/Vite (e.g., FullCalendar or react-big-calendar) with many events, you get the best performance by combining **library-level optimizations** (virtualization, progressive/partial rendering) with **React-level memoization and minimal rerenders**. + +Below is a focused checklist, split by library and then general React/Vite strategies. + +--- + +## 1. When to use FullCalendar vs react-big-calendar vs alternatives + +- **FullCalendar** + - Strong feature set (drag/drop, resource views, Scheduler, etc.). 
+ - Historically suffers when rendering many events because it **renders all events in the DOM** and historically re-renders too much on drag/drop.[2] + - From **v6.1.18**, event rerendering was optimized so that *only modified events* are rerendered, not all.[2] + - Roadmap: v7 adds optimizations; v7.1+ mentions **virtual rendering** as a goal.[2] v8/v9 roadmap continues performance work.[4] +- **react-big-calendar** + - React-friendly; uses Flexbox instead of table layout, which was originally cited as a possible performance improvement over FullCalendar’s table layout.[2] + - No built‑in virtualization; performance drops with hundreds+ of events similar to FullCalendar.[5] +- **High-performance alternatives for 2025–2026** + - **Bryntum Calendar / Scheduler**: virtual rendering, minimal DOM footprint, advanced performance features.[3][5] + - **DayPilot**: progressive rendering, on-demand loading, partial updates, optimized for heavy workloads.[3] + - **Planby**: React timeline/calendar component with **virtual rendering**, reported ~3× faster than FullCalendar for 500+ events.[1][3] + +If you need **thousands of events with smooth drag/drop**, consider Bryntum, DayPilot, or Planby before trying to push FullCalendar/react-big-calendar to their limits.[1][3][5] + +--- + +## 2. FullCalendar + React/Vite performance strategies + +### 2.1 Use the latest FullCalendar with optimized rerenders + +- Use **FullCalendar v6.1.18+** or v7 when available: + - Event updates now rerender **only modified events**, fixing the “all events rerender on drag/drop or update” issue.[2] + - This greatly cuts CPU time when dragging or updating single events in large views. +- For frequent serial updates (e.g., rapidly mutating events), use: + ```js + const options = { + rerenderDelay: 100, // ms + }; + ``` + This batches rerenders and significantly reduces main-thread work.[2] + +### 2.2 Reduce DOM and event complexity + +- Filter data **before** passing it to FullCalendar. + - Only pass events in (or near) the visible date range instead of your whole dataset. + - Use backend pagination or API parameters to fetch only what is needed. +- Avoid unnecessary custom DOM in `eventContent`: + - Keep event render content minimal; heavy React trees inside each event will dominate render cost. + - Prefer simple markup and minimal React state inside event content. + +### 2.3 Avoid re-creating props on every render + +In your React wrapper around FullCalendar: + +- Memoize **events** and other large props: + ```ts + const events = useMemo(() => transformRawEvents(rawEvents), [rawEvents]); + ``` +- Memoize callbacks passed to FullCalendar (e.g., `eventDrop`, `eventClick`) using `useCallback` so React doesn’t think props changed every render. + +### 2.4 Defer heavy work off the main thread + +- For large transforms (e.g., normalizing thousands of events), use: + - Web Workers + - Debounced/batched updates (e.g., only recompute after user stops dragging for X ms). +- Precompute layout data on the server if possible (e.g., start/end times and conflicts) so the client only renders. + +--- + +## 3. react-big-calendar performance strategies + +react-big-calendar lacks built-in virtualization, so the focus is on **minimizing React work**: + +- Use **`React.memo`** for all custom components (event renderer, toolbar, custom headers). 
+- Memoize the `events` array and **do not recreate it** on every render: + ```ts + const events = useMemo(() => toBigCalendarEvents(rawEvents), [rawEvents]); + ``` +- Avoid storing large event lists in multiple layers of state; one source of truth is enough. +- When drag/drop is enabled: + - Update the single affected event in place and reuse the same array reference when possible, or use a keyed immutable update that doesn’t require rebuilding the whole list. +- Keep custom event and slot renderers simple; avoid heavy trees inside each cell. + +If you still experience lag with 1000+ events, consider switching to a **virtualized scheduler** (Bryntum/DayPilot/Planby).[1][3][5] + +--- + +## 4. General React performance patterns (for any calendar) + +These apply for FullCalendar, react-big-calendar, or alternatives: + +- **Avoid global rerenders** + - Use state libraries that support fine-grained updates (e.g., Zustand, Jotai, Redux with careful selectors) so updating one event doesn’t rerender the entire app. +- **Memoize** everything passed into the calendar: + - `events`, `resources`, `views`, handlers like `onEventDrop`, `onSelectSlot`, etc. +- **Virtualization where possible** + - If your calendar library exposes a way to control rendering of rows or resources, implement your own virtualization or use a library that already does this (Bryntum, DayPilot, Planby).[1][3][5] +- **Throttle drag/drop-driven updates** + - Do not persist to server or update global state on every drag movement. + - Use `onEventDrop` (or equivalent) for final commits; only show local/optimistic feedback during drag. +- **Keep React DevTools and logs off** in production; they can distort performance tests. + +--- + +## 5. Vite-specific considerations + +Vite itself is very fast; issues are nearly always in runtime React, not bundling. Still: + +- Use production builds (`vite build` + serve) when testing performance; dev mode adds overhead. +- Configure code splitting to keep the calendar and its heavy dependencies in separate chunks so initial load is smaller. +- Avoid bundling multiple calendar libraries simultaneously unless needed. + +--- + +## 6. If you’re starting a new project in 2025–2026 + +For a **React + Vite calendar with drag/drop and many events**: + +- If you need **enterprise-level scale** (thousands of events, many resources, smooth interaction): + - Consider **Bryntum Calendar/Scheduler** or **DayPilot React Scheduler** for built-in virtual rendering and progressive loading.[3][5] +- If you need a **timeline-style UI** (TV guide, bookings, shifts) with virtual scroll: + - Consider **Planby**, which was specifically built for large event sets and is reported ~3× faster than FullCalendar with 500+ events.[1] +- If you are already invested in **FullCalendar**: + - Upgrade to the latest v6.1.18+ and adopt `rerenderDelay`, data filtering, and memoization as above.[2][3] + +If you share your current stack (FullCalendar or react-big-calendar), approximate event count, and whether you use resources/timeline views, I can propose a more concrete configuration and small code snippets tailored to your case. 
+ +### Sources + +- [Source 1](https://dev.to/kozerkarol/how-i-built-a-lightweight-react-calendar-thats-3-faster-than-fullcalendar-1bj) +- [Source 2](https://github.com/fullcalendar/fullcalendar/issues/3003) +- [Source 3](https://dhtmlx.com/blog/best-react-scheduler-components-dhtmlx-bryntum-syncfusion-daypilot-fullcalendar/) +- [Source 4](https://fullcalendar.io/roadmap) +- [Source 5](https://bryntum.com/blog/react-fullcalendar-vs-big-calendar/) +- [Source 6](https://sourceforge.net/software/product/FullCalendar/) +- [Source 7](https://www.youtube.com/watch?v=keTcXT145CI) +- [Source 8](https://javascript.plainenglish.io/reacts-top-libraries-for-2026-the-ultimate-guide-to-building-faster-cc1415d7fc4c) + +## Query: NestJS PostgreSQL Docker Compose self-hosted deployment security best practices + +For a self‑hosted NestJS + PostgreSQL stack on Docker Compose, focus on hardening **containers**, **networking**, **secrets**, and **database** configuration, plus general app security. + +Below is a concise checklist you can adapt. + +--- + +## 1. Docker & Compose hardening + +- **Use multi‑stage builds & small base images** + - Build in a `builder` stage and copy only the compiled `dist` + production deps into a minimal runtime (e.g. `node:alpine`).[3][5] + - Avoid shipping dev tools, build tools, tests, and `.git` into the final image. + +- **Run as non‑root** + - Set `USER node` (or another unprivileged user) in the final stage, not root.[3][4] + - Ensure mounted volumes and files are readable by that user, not world‑writable. + +- **Set production environment** + - `NODE_ENV=production` in the final image so that frameworks and libraries use hardened, production defaults.[3] + +- **Read‑only filesystem where possible** + - For the API container, keep the filesystem mostly read‑only and write only to explicit volumes (logs, temp, etc.). + +- **Limit container capabilities** + - In `docker-compose.yml` add: + ```yaml + cap_drop: + - ALL + read_only: true + ``` + and selectively add what you truly need. + +--- + +## 2. Network isolation & exposure + +- **Use a private Docker network** + - Attach NestJS and PostgreSQL to a **dedicated user‑defined network** so only those services can talk to each other.[2] + - Do **not** publish the DB port on the host unless truly required: + ```yaml + services: + api: + networks: + - app_net + + db: + networks: + - app_net + # avoid: ports: ["5432:5432"] + networks: + app_net: + driver: bridge + ``` + +- **Restrict PostgreSQL listen addresses** + - In `postgresql.conf`, set: + ```conf + listen_addresses = '0.0.0.0' # inside Docker; but reachable only via app_net + ``` + or even the explicit container IP if you manage it carefully.[2] + +- **Single public entrypoint** + - Only expose the NestJS container (or, better, a reverse proxy like Nginx/Traefik) to the internet. + - PostgreSQL must never be directly reachable from the public network. + +--- + +## 3. Secrets & configuration (NestJS + Postgres) + +- **Avoid hard‑coded secrets and plain env files** + - Do *not* check `.env` into version control. + - For production, use **Docker secrets** or an external secret manager: + ```yaml + services: + db: + environment: + POSTGRES_PASSWORD_FILE: /run/secrets/pg_passwd + secrets: + - pg_passwd + + secrets: + pg_passwd: + external: true + ```[2] + +- **Separate config per environment** + - Use different `.env`/secret sets for dev, staging, prod. + - Ensure DB name/user/password differ per environment. 
+ +- **NestJS config management** + - Use `@nestjs/config` or a similar centralized config module; never commit secrets into code. + - Validate config (e.g. via `Joi`) on startup to avoid misconfigurations. + +--- + +## 4. PostgreSQL security in Docker + +- **Use a maintained image & pinned versions** + - Use official/maintained images (`postgres:X.Y` or `bitnami/postgresql`) and pin a major/minor version to avoid surprises.[2][3] + +- **Strong credentials and least privilege** + - Use strong passwords for the `POSTGRES_USER` and for the application DB user. + - Create a dedicated DB user for the NestJS app with only required privileges (no `SUPERUSER`). + +- **Persistent volumes with proper permissions** + - Mount a volume for data: + ```yaml + volumes: + - pgdata:/var/lib/postgresql/data + ``` + - Ensure the volume is only accessible to the postgres user inside the container. + +- **TLS/SSL for DB connections** + - For self‑hosting where network hops extend beyond a single host (or you don’t fully trust the network), enable **SSL/TLS** in PostgreSQL and require SSL in the NestJS connection string.[2] + +- **Connection limits & resource tuning** + - Set reasonable limits (e.g. `max_connections`, `work_mem`) to mitigate DoS from misbehaving app instances. + +--- + +## 5. NestJS application security + +- **Framework‑level security** + - Disable detailed error messages and stack traces in production. + - Use security middleware: Helmet for HTTP headers, rate‑limiting, and CORS with strict allowed origins.[1] + +- **Authentication & session security** + - Use a vetted auth library (JWT, session management, or a solution like SuperTokens) and store tokens securely (e.g. HTTP‑only cookies).[1] + - Implement CSRF protection if you use cookie‑based auth. + +- **Input validation & sanitization** + - Use `class-validator` and `class-transformer` globally to validate DTOs. + - Sanitize inputs to mitigate injection; never concatenate untrusted data into SQL—use an ORM or parameterized queries. + +- **Logging & monitoring** + - Centralize logs per container and avoid logging secrets. + - Enable basic health checks in Docker (e.g. `pg_isready` for Postgres, `/health` endpoint for NestJS).[1][3] + +--- + +## 6. Compose example (security‑aware skeleton) + +```yaml +version: "3.9" + +services: + api: + build: + context: . + dockerfile: Dockerfile + environment: + NODE_ENV: production + DB_HOST: db + DB_PORT: 5432 + DB_USER: app_user + DB_NAME: app_db + # DB_PASSWORD via secret or env in production + depends_on: + - db + networks: + - app_net + ports: + - "80:3000" + restart: unless-stopped + user: "1000:1000" # non-root + read_only: true + + db: + image: postgres:16 + restart: unless-stopped + environment: + POSTGRES_DB: app_db + POSTGRES_USER: app_user + POSTGRES_PASSWORD_FILE: /run/secrets/db_password + volumes: + - pgdata:/var/lib/postgresql/data + networks: + - app_net + healthcheck: + test: ["CMD-SHELL", "pg_isready -U app_user"] + interval: 5s + timeout: 5s + retries: 5 + secrets: + - db_password + +networks: + app_net: + driver: bridge + +volumes: + pgdata: + +secrets: + db_password: + external: true +``` + +--- + +## 7. Operational practices + +- Keep images up to date with security patches; automate rebuilds and deploys. +- Regularly back up PostgreSQL (physical or logical backups) and test restores. +- Limit SSH and Docker daemon access; treat it as production infrastructure. +- Use a firewall / security groups to only allow app and admin traffic where needed. 
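+
+To make the app-level items from section 5 concrete, here is a minimal hardened bootstrap sketch — `AppModule` and the `CORS_ORIGIN` variable are placeholders for your own module and configuration, so adapt the exact options to your deployment:
+
+```ts
+// main.ts — minimal hardening sketch (placeholders noted above).
+import { NestFactory } from '@nestjs/core';
+import { ValidationPipe } from '@nestjs/common';
+import helmet from 'helmet';
+import { AppModule } from './app.module';
+
+async function bootstrap() {
+  const app = await NestFactory.create(AppModule);
+
+  // Security headers and strict CORS: deny by default, allow only configured origins.
+  app.use(helmet());
+  app.enableCors({
+    origin: process.env.CORS_ORIGIN?.split(',') ?? [],
+    credentials: true,
+  });
+
+  // Validate every DTO and strip/reject unknown properties.
+  app.useGlobalPipes(
+    new ValidationPipe({ whitelist: true, forbidNonWhitelisted: true, transform: true }),
+  );
+
+  // Let Docker stop the container cleanly (connections drain, lifecycle hooks run).
+  app.enableShutdownHooks();
+
+  await app.listen(3000);
+}
+
+bootstrap();
+```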
+ +If you share your current `Dockerfile` and `docker-compose.yml`, I can point out specific changes to harden your setup. + +### Sources + +- [Source 1](https://supertokens.com/blog/how-to-secure-a-nestjs-app) +- [Source 2](https://sliplane.io/blog/best-practices-for-postgres-in-docker) +- [Source 3](https://dev.to/manuchehr/dockerize-secure-nestjs-app-with-postgres-redis-56md) +- [Source 4](https://wanago.io/2023/01/16/api-nestjs-docker-compose/) +- [Source 5](https://forums.docker.com/t/best-practices-for-using-docker-in-development-vs-production-nestjs-nextjs-monorepo/149461) +- [Source 6](https://blog.logrocket.com/containerized-development-nestjs-docker/) +- [Source 7](https://hackernoon.com/how-to-set-up-postgresql-with-nestjs-and-docker-for-fast-local-development-a-quick-guide) + +## Query: TypeScript monorepo project structure NestJS React common mistakes pitfalls + +A solid TypeScript monorepo with **NestJS (backend)** and **React (frontend)** usually fails not because of tools, but because of structure and boundaries. Below are the most common mistakes and how to avoid them, specifically for a Nest + React monorepo. + +--- + +## 1. Treating the monorepo as one big TS project + +**Mistake:** Single `tsconfig.json` at the root, all code in one program, no **project references**. + +**Problems:** + +- Slow `tsc` and IDE responsiveness as the repo grows.[3][5] +- No clear build order between backend, frontend, and shared libs. +- Harder to run isolated builds in CI. + +**Better:** + +- Root `tsconfig.base.json` with shared compiler options.[1][3] +- Each app/lib has its own `tsconfig.json` and uses **`references`** to shared packages.[3][5] +- Build with `tsc --build` (or via Nx/Turbo) from root so TS respects the dependency graph.[3][5] + +--- + +## 2. No clear separation between apps and shared libraries + +**Mistake:** Nest and React importing each other’s code directly via relative paths instead of through **shared packages**. + +**Problems:** + +- Circular dependencies (e.g., React importing from `apps/api/src/...` and Nest importing from `apps/web/src/...`). +- Accidental leaking of backend-only code to the frontend bundle (e.g., Node APIs in browser). + +**Better structure:** + +- `apps/api` – NestJS app +- `apps/web` – React app +- `packages/shared-domain` – pure domain logic, types, DTOs (no Nest/React-specific code) +- `packages/shared-config` – environment/config types, config helpers (no framework globals) + +Use **package boundaries**: + +- Frontend imports only from `packages/*`. +- Backend imports from `packages/*` plus its own `apps/api/*`. + +--- + +## 3. Sharing “too much” code between Nest and React + +**Mistake:** Putting everything common (including Nest decorators, pipes, React hooks) into one “shared” package. + +**Problems:** + +- Shared package becomes framework-dependent and unusable on the other side. +- React app may accidentally import Nest-only code, causing bundling/runtime failures. + +**Better:** + +- Keep **shared packages framework-agnostic**: domain models, validation schemas, DTOs, API types. +- Have framework-specific adapters: + - `packages/nest-adapters` (uses shared DTOs but also Nest decorators). + - `packages/react-hooks` (uses shared types/DTOs but React-specific logic). + +--- + +## 4. Ignoring module boundaries and coupling Nest modules with React routes + +**Mistake:** Nest modules knowing about React routing or component structure, or React directly calling Nest internal modules instead of HTTP APIs. 
+ +**Problems:** + +- Tight coupling across layers; refactoring either side becomes expensive. +- Impossible to test backend separately without frontend. + +**Better:** + +- The **boundary between Nest and React is always a protocol**: + - REST/GraphQL schema, or + - shared **API type definitions** in a common package. +- React talks only to HTTP endpoints; Nest exposes controllers/services internally, not React-specific abstractions. + +--- + +## 5. Bad path alias and import strategy + +**Mistake:** + +- Using long relative paths (`../../../`) everywhere. +- Path aliases defined differently in **TypeScript vs bundler** (e.g., Vite/Webpack) vs Node runtime. +- Using `tsconfig` paths without aligning them with your workspace tool.[1][3][7] + +**Problems:** + +- Code compiles in editor but fails at runtime. +- Confusing circular imports and build errors. + +**Better:** + +- Define **root-level** `tsconfig.base.json` with `baseUrl` and `paths` and extend it from app/lib `tsconfig`s.[1][3][7] +- Make sure bundler and test runner resolve aliases the same way (e.g., Jest `moduleNameMapper`, Vite/Webpack `alias`). +- Use package imports (`@project/shared-domain`) instead of deep internal paths where possible. + +--- + +## 6. Missing or misusing workspace tooling (Yarn/NPM/pnpm + Nx/Turbo) + +**Mistake:** + +- Manual `cd apps/api && npm run build` everywhere. +- No topological build order or caching.[3][4] + +**Problems:** + +- Rebuilding everything on every CI run. +- Subtle build-order bugs: React built before the shared package it uses, etc. + +**Better:** + +- Use **workspaces** (Yarn/pnpm/npm) for package linking and dependency management.[2][3][4] +- Use a monorepo tool like **Nx** or **Turborepo** to: + - infer dependency graph, + - run `build`/`test` in **topological order** with caching.[3][4] +- Expose a single root command: e.g. `nx run-many --target=build` or `yarn workspaces foreach --topological-dev run build`.[3][4] + +--- + +## 7. Inconsistent tooling config (ESLint, Prettier, Jest/Vitest) + +**Mistake:** + +- Each app has its own slightly-different ESLint/Prettier/Jest config. +- Some packages use strict TS rules, others don’t. + +**Problems:** + +- Inconsistent code quality and formatting. +- Harder onboarding and surprise build failures. + +**Better:** + +- Root **shared config** files: + - `eslint.base.js` and app-level small extensions. + - `prettier.config` at root.[5] + - Shared Jest/Vitest base config; each app adds its own transforms. +- Ensure test runners understand TS project references and path aliases. + +--- + +## 8. Wrong granularity of packages + +**Mistake:** + +- Either: one giant `shared` package with everything. +- Or: dozens of tiny packages for every small utility function. + +**Problems:** + +- Giant shared package: no clear boundaries, difficult to version. +- Tiny packages: dependency graph and tooling overhead become unmanageable. + +**Better:** + +- Package around **cohesive domains**, not individual functions: + - `shared-domain`, `shared-api-types`, `shared-ui` (if you truly have cross-app UI), etc. +- Keep packages **independent and acyclic**: avoid cycles in dependencies.[5] + +--- + +## 9. Not using TypeScript project references correctly + +**Mistake:** + +- Setting `references` in `tsconfig` but still running plain `tsc` or `tsc -p` without `--build`.[3] + +**Problems:** + +- You get none of the incremental build benefits. 
+- Editors and CI may behave differently.[3][5] + +**Better:** + +- Use `tsc --build` (or `tsc -b`) from the root to respect project references and incremental builds.[3][5] +- Ensure each referenced project has: + - `"composite": true` + - `"declaration": true` +- Use watch mode (`tsc -b --watch`) during development where appropriate.[5] + +--- + +## 10. Environment and config confusion between Nest and React + +**Mistake:** + +- Using the same `.env` or config loading code in both server and client without differentiating secrets vs public values. +- Directly importing server-only config from React. + +**Problems:** + +- Secrets leaked to frontend bundles. +- Hard-to-debug environment mismatch between apps. + +**Better:** + +- Shared **config types** in a package (`Config`, `PublicConfig`). +- Implementation separated: + - Nest reads from process env, files, secrets managers. + - React uses build-time env injection (e.g., Vite/CRA/Next pattern). +- Only **public subset** goes to the client. + +--- + +## 11. Sharing runtime-only Nest/Node constructs with React + +**Mistake:** + +- Putting Node-only things (e.g., `fs`, Nest interceptors) in “shared” code imported by React. + +**Problems:** + +- Bundle fails at build or runtime in browser. +- Tree-shaking doesn’t always save you. + +**Better:** + +- Shared packages should be **runtime-agnostic** or clearly server-only/client-only. +- Use separate entry points: + - `main.ts` / `server.ts` in Nest. + - `index.tsx` / `client.tsx` in React. +- If you must share code that is server-only, keep it in a `packages/server-*` namespace, never used by React. + +--- + +## 12. CI/CD not aligned with monorepo structure + +**Mistake:** + +- CI builds and tests every app and package on every change. +- Or worse, CI doesn't understand cross-package dependencies. + +**Problems:** + +- Slow pipelines, flaky deployments. +- Backend deployed without updated shared package, etc. + +**Better:** + +- Use your monorepo tool’s **affected**/changed commands: + - Nx “affected” commands, or Turborepo’s hashing/caching. +- Ensure CI: + - Restores cache for `node_modules` and build outputs. + - Runs builds in dependency order. +- Optionally: one Docker image at root containing all services instead of one per service if that fits your deployment model.[4] + +--- + +If you share your current structure (folders + `tsconfig`s) I can point out concrete, repo-specific pitfalls and give an adjusted layout for Nest + React + shared packages. + +### Sources + +- [Source 1](https://graphite.com/guides/typescript-in-monorepo) +- [Source 2](https://news.ycombinator.com/item?id=32594915) +- [Source 3](https://nx.dev/blog/managing-ts-packages-in-monorepos) +- [Source 4](https://escape.tech/blog/setup-typescript-monorepo/) +- [Source 5](https://dev.to/mxro/the-ultimate-guide-to-typescript-monorepos-5ap7) +- [Source 6](https://blog.bitsrc.io/monorepo-from-hate-to-love-97a866811ccc) +- [Source 7](https://monorepo.tools/typescript) +- [Source 8](https://www.youtube.com/watch?v=TeOSuGRHq7k) + +## Query: NestJS scheduled jobs CalDAV Microsoft Graph API integration testing approaches + +You’re essentially looking for **ways to test a NestJS app that has scheduled jobs which talk to CalDAV and Microsoft Graph** (calendar sync-type scenario). Below is a concise strategy focusing on *testing approaches*, not full implementation. + +--- + +## 1. 
Architectural testability pre‑conditions
+
+To make testing possible, structure your code so that:
+
+- A **scheduler layer** only triggers methods on a **domain/service layer**.
+- The domain/service layer depends on interfaces like:
+  - `CalDavClient` (e.g. `ICalendarProvider` / `ICalDavClient`)
+  - `GraphClient` (Microsoft Graph)
+- Actual HTTP calls are only in those client classes; they are **injected** via Nest DI (`@Injectable` providers with `useClass`/`useFactory`).[2]
+
+This lets you:
+
+- Unit test the service by **mocking clients**.
+- Integration test by **swapping real vs fake HTTP** implementations.
+
+---
+
+## 2. Testing scheduled jobs (NestJS Cron / Scheduler)
+
+Assuming you use `@nestjs/schedule`:
+
+- Put schedule decorators on a thin job class:
+
+```ts
+@Injectable()
+export class CalendarSyncJob {
+  constructor(private readonly syncService: CalendarSyncService) {}
+
+  @Cron('0 * * * *') // every hour, for example
+  async handleCron() {
+    await this.syncService.syncAllAccounts();
+  }
+}
+```
+
+### Unit test of the job:
+
+- Use `@nestjs/testing` and **mock `CalendarSyncService`**:
+
+```ts
+const syncService = { syncAllAccounts: jest.fn().mockResolvedValue(undefined) };
+
+const module = await Test.createTestingModule({
+  providers: [
+    CalendarSyncJob,
+    {
+      provide: CalendarSyncService,
+      useValue: syncService,
+    },
+  ],
+}).compile();
+
+const job = module.get(CalendarSyncJob);
+await job.handleCron();
+expect(syncService.syncAllAccounts).toHaveBeenCalledTimes(1);
+```
+
+No real time passes, no network involved: you call `handleCron()` directly.
+
+---
+
+## 3. Unit testing CalDAV / Graph integration services
+
+Example service:
+
+```ts
+@Injectable()
+export class CalendarSyncService {
+  constructor(
+    private readonly caldavClient: CalDavClient,
+    private readonly graphClient: GraphClient,
+  ) {}
+
+  async syncUser(userId: string) {
+    const caldavEvents = await this.caldavClient.getEvents(userId);
+    const msEvents = await this.graphClient.getEvents(userId);
+    // diff + write changes
+  }
+}
+```
+
+### Unit test approach:
+
+- Replace `CalDavClient` and `GraphClient` with **Jest mocks** or fake implementations.
+- Cover:
+  - Happy path (events synced correctly).
+  - Conflicts / duplicates.
+  - Error handling (CalDAV fails, Graph fails, partial sync).
+- Assert:
+  - Correct calls to `.createEvent`, `.updateEvent`, `.deleteEvent`, etc.
+  - Correct transformation between CalDAV and Graph schemas.
+
+No Nest specifics required beyond DI; this is plain unit testing.
+
+---
+
+## 4. Integration testing with Nest’s testing module
+
+Here you want to test the **real Nest module wiring** but still avoid hitting external systems.
+
+### Strategy
+
+1. Use `Test.createTestingModule` with your real modules.
+2. Override external clients with **HTTP-mocking or fake in-memory servers**.
+
+Examples:
+
+- For HTTP clients (Axios, `@nestjs/axios`):
+  - Use `nock` or similar to mock CalDAV/Graph endpoints.
+- Alternatively:
+  - Provide fake `CalDavClient`/`GraphClient` that behave like a small in-memory server.
+
+```ts
+const module = await Test.createTestingModule({
+  imports: [AppModule],
+})
+  .overrideProvider(CalDavClient)
+  .useClass(FakeCalDavClient)
+  .overrideProvider(GraphClient)
+  .useClass(FakeGraphClient)
+  .compile();
+```
+
+Then:
+
+- Resolve your `CalendarSyncService` or job and call it.
+- Assert on side effects (DB state, logs, events).
+
+This validates Nest DI wiring and internal logic while avoiding real network calls.
+
+---
+
+## 5. 
End‑to‑end tests (E2E) with “real” external systems + +If you need high‑confidence tests: + +- Spin up: + - **Test CalDAV server** (e.g. Radicale or DAViCal in Docker). + - **Microsoft Graph test tenant** (with dedicated test accounts). +- Use Nest E2E tests (`@nestjs/testing` + `supertest`) to: + - Call REST endpoints that trigger sync, or + - Call job handlers directly while Nest app is bootstrapped as in production. + +Key ideas: + +- Use **separate env/config** for E2E: test credentials, test URLs. +- Clean up test data (delete created events) at the end of each test suite. + +These tests are slower and brittle; run them in CI only, not on every quick dev run. + +--- + +## 6. Dealing with time / schedule semantics + +Scheduled jobs are time‑based by nature; tests should not depend on real time: + +- **Never** wait for cron triggers in tests. +- Expose job handlers (e.g. `handleCron()`) and call directly. +- If your code uses `Date.now()` / `new Date()` for “now”: + - Inject a `Clock` or use Jest’s fake timers to control time. + - This is important for tests like “events starting in next 10 minutes”. + +--- + +## 7. Authentication / tokens for Graph & CalDAV + +For unit/integration tests with mocks: + +- Don’t generate real tokens—mock the token provider / auth client. +- If you must hit real Graph: + - Use **app‑only auth** with client credentials for tests (service principal). + - Store secrets in CI’s secret store; load via Nest config module.[2] + +For CalDAV: + +- Use dedicated test users with credentials stored in env variables for E2E; mock them in unit/integration tests. + +--- + +## 8. Testing concurrency and request context (optional) + +If your scheduled jobs need *per‑tenant context* or something similar, you might use **AsyncLocalStorage** or `nestjs-cls` to carry contextual data through async calls.[1] + +For tests: + +- When calling job handlers directly, explicitly set the context: + - Either via your own context service’s `runWithContext(...)`. + - Or by injecting `ClsService` and manually `set` needed values before calling the service.[1] + +This ensures the same context behavior as in HTTP requests but within scheduled jobs. + +--- + +## 9. Recommended layering for testability + +A practical layering that makes all of the above straightforward: + +- **Job layer** + - `CalendarSyncJob` (only schedule + call into service). +- **Domain/service layer** + - `CalendarSyncService` (contains sync logic). + - Depends only on interfaces + repository (DB) + clock. +- **Integration layer** + - `CalDavClient` (wraps CalDAV HTTP). + - `GraphClient` (wraps Microsoft Graph SDK/HTTP). +- **Infrastructure** + - Config, auth/token provider, HTTP module, repositories. + +Then: + +- Unit tests target domain/service. +- Integration tests target domain + fake clients/DB. +- E2E tests target whole app + real external systems. + +--- + +If you share your current Nest module layout (which modules/services you have), I can outline very concrete test module setups and example Jest test files for each level (unit, integration, E2E) tailored to CalDAV + Graph. 
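+
+As a small illustration of the clock-injection idea from section 6 — `ClockService` and `UpcomingEventsService` are hypothetical names used only for this sketch, not part of any Nest API, and the test assumes a standard Nest + Jest setup:
+
+```ts
+import { Injectable } from '@nestjs/common';
+import { Test } from '@nestjs/testing';
+
+@Injectable()
+export class ClockService {
+  now(): Date {
+    return new Date();
+  }
+}
+
+@Injectable()
+export class UpcomingEventsService {
+  constructor(private readonly clock: ClockService) {}
+
+  // True if the event starts within the next `windowMinutes` from "now".
+  startsSoon(eventStart: Date, windowMinutes = 10): boolean {
+    const delta = eventStart.getTime() - this.clock.now().getTime();
+    return delta >= 0 && delta <= windowMinutes * 60_000;
+  }
+}
+
+describe('UpcomingEventsService', () => {
+  it('treats events inside the window as starting soon', async () => {
+    const fixedNow = new Date('2026-01-11T09:00:00Z');
+
+    const module = await Test.createTestingModule({
+      providers: [
+        UpcomingEventsService,
+        // Pin "now" instead of waiting for real time to pass.
+        { provide: ClockService, useValue: { now: () => fixedNow } },
+      ],
+    }).compile();
+
+    const service = module.get(UpcomingEventsService);
+
+    expect(service.startsSoon(new Date('2026-01-11T09:05:00Z'))).toBe(true);
+    expect(service.startsSoon(new Date('2026-01-11T09:30:00Z'))).toBe(false);
+  });
+});
+```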
+ +### Sources + +- [Source 1](https://docs.nestjs.com/recipes/async-local-storage) +- [Source 2](https://www.freecodecamp.org/news/the-nestjs-handbook-learn-to-use-nest-with-code-examples/) +- [Source 3](https://www.youtube.com/watch?v=8_X0nSrzrCw) +- [Source 4](https://www.youtube.com/watch?v=2byT7fYT8UE) +- [Source 5](https://www.geeksforgeeks.org/node-js/how-to-integrate-google-calendar-in-node-js/) +- [Source 6](https://developers.google.com/workspace/calendar/api/quickstart/nodejs) +- [Source 7](https://docs.nestjs.com/techniques/events) + +## Query: PostgreSQL real-time updates WebSocket NestJS concurrency performance optimization 2026 + +You can build high-performance real‑time updates on PostgreSQL + NestJS + WebSockets, but you must be deliberate about where you get change events from and how you fan them out to clients. + +Below is a concise, production‑oriented approach for 2025/2026‑era stacks. + +--- + +## 1. Choose your Postgres → “events” mechanism + +**Avoid polling** for anything beyond trivial loads. Use one of: + +### A. `LISTEN/NOTIFY` + WebSocket bridge (simple, good for moderate scale) + +- Postgres emits events via `pg_notify` from triggers. +- A small service listens via `LISTEN` and pushes over WebSockets to clients.[3][6] +- Example of a bridge: **pg_eventserv**, which converts LISTEN/NOTIFY into standard WebSocket messages to web clients.[3] + +**Pros:** + +- Simple mental model; small infra footprint.[3] +- Great latency (single round‑trip DB → app → WS). + +**Cons & constraints:** + +- `NOTIFY` payloads are limited (~8KB). +- `LISTEN/NOTIFY` is not designed for very high fan‑out or tens of thousands of topics. +- No built‑in message durability; if consumers disconnect, they miss events.[3] + +**When to use:** +Dashboards, admin panels, low/medium‑traffic SaaS, 100s–low 1000s of concurrent WebSocket clients. + +--- + +### B. Logical replication / WAL streaming (scales much better) + +Use **logical replication slots** (or a library built on them) to stream changes, then fan them out. + +- Trigger.dev describes using **Postgres replication slots + ElectricSQL** as their real‑time backbone.[4] +- Flow: Postgres writes to WAL → replication slot captures changes → ElectricSQL processes and pushes to clients via long‑poll/WS.[4] + +**Performance numbers from Trigger.dev:** + +- ~**20,000 updates/second** processed. +- **500GB+** of Postgres inserts daily. +- **Sub‑100ms latency** to browsers.[4] + +**Pros:** + +- Much higher throughput and lower DB overhead than triggers + NOTIFY.[4] +- Can support **historical subscriptions** (subscribe to objects created before opening the page).[4] +- Strong consistency guarantees; Postgres remains the single source of truth.[4] + +**Cons:** + +- More infra and operational complexity (replication slots, separate service like ElectricSQL or your own change‑consumer)[4]. +- Need to ensure replication slots don’t bloat WAL. + +**When to use:** +High‑throughput real‑time feeds, large multi‑tenant apps, “activity feed” or “runs/jobs” style products at scale. + +--- + +### C. 
External real‑time services on top of Postgres + +If you do not want to manage the event bridge: + +- **Supabase Realtime** + - Elixir/Phoenix service that can **listen to Postgres changes and send them over WebSockets**, plus broadcast and presence features.[2] + - Works via logical replication or CDC extensions (`postgres_cdc_rls`).[2] + +- **Ably LiveSync + Postgres** + - Neon’s guide shows using serverless Postgres with an **outbox table + trigger that calls `pg_notify`**, and Ably for WS fan‑out.[5] + +**When to use:** +You want real‑time updates, presence, and fan‑out without writing the whole infra yourself. + +--- + +## 2. NestJS architecture for WebSockets + concurrency + +### A. NestJS WebSocket gateway + +Use `@WebSocketGateway()` and channels/rooms per logical subscription: + +```ts +@WebSocketGateway({ cors: { origin: '*' } }) +export class RealtimeGateway { + @WebSocketServer() + server: Server; // for socket.io + + @SubscribeMessage('subscribeToItem') + handleSubscribe( + @MessageBody() data: { itemId: string }, + @ConnectedSocket() client: Socket, + ) { + client.join(`item:${data.itemId}`); + } + + publishUpdate(itemId: string, payload: any) { + this.server.to(`item:${itemId}`).emit('item:update', payload); + } +} +``` + +Your Postgres‑event consumer (LISTEN/NOTIFY or WAL) injects the gateway and calls `publishUpdate`. + +### B. Concurrency and scaling + +To scale beyond a single NestJS instance: + +- Run NestJS behind a load balancer. +- Use **sticky sessions** for WS if you use in‑memory rooms, or move to **Redis adapter** for socket.io so rooms work across nodes. +- Offload expensive work (e.g. heavy projections) to background workers; gateway process should be light and mostly I/O. + +For **write contention** in Postgres: + +- Keep transactions short and indexes lean. +- Where possible, use **append‑only tables** (events, logs, runs) instead of frequent UPDATEs; this plays well with WAL‑based approaches.[4] + +--- + +## 3. Reducing load on Postgres + +Across the real‑time designs above, common optimizations: + +- **Initial state vs. live updates** + - Client fetches initial state via a **regular HTTP/REST/GraphQL** request (can be cached). + - WebSocket is only for **incremental updates**, not full re‑queries.[4] + +- **Avoid “per‑client” DB connections** + - Trigger.dev notes that each WebSocket needing its own query and connection is expensive at scale.[4] + - Instead, centralize DB access in 1–N backend services that multiplex changes to many clients. + +- **Outbox pattern** + - App writes to main tables and to an **outbox table** in the same transaction. + - A trigger publishes `pg_notify` for new outbox rows.[5] + - A separate process reads/deletes from outbox and broadcasts. + - This keeps writes coherent and avoids doing heavy work inside triggers.[5] + +--- + +## 4. Recommended concrete patterns in 2026‑ish stack + +If you are: + +- **Small/medium NestJS app** (≤ 10k concurrent sockets, modest write volume): + - Postgres triggers + `LISTEN/NOTIFY`. + - A NestJS background service (or pg_eventserv) listens and calls a `RealtimeGateway` to broadcast.[3][6] + - Optional: outbox table to decouple triggers from business events.[5] + +- **Growing app or high‑throughput**: + - Move to **logical replication / CDC** (ElectricSQL, Supabase Realtime, or custom consumer). 
+ - Keep NestJS as the **API + auth + WS edge**, but the “change capture” logic lives in a separate service.[2][4] + +- **You want to outsource real‑time infra**: + - Supabase Realtime as the WS/event layer on top of Postgres.[2] + - Or Ably/LiveSync with an outbox + NOTIFY bridge from Postgres.[5] + +--- + +If you share your current scale (writes/sec, expected concurrent clients) and whether you can add extra infra (Redis, separate change‑consumer), I can outline an exact NestJS + Postgres + WS architecture and minimal code skeleton tailored to that. + +### Sources + +- [Source 1](https://www.youtube.com/watch?v=51Mzx0c6Z30) +- [Source 2](https://github.com/supabase/realtime) +- [Source 3](https://www.crunchydata.com/blog/real-time-database-events-with-pg_eventserv) +- [Source 4](https://trigger.dev/blog/how-we-built-realtime) +- [Source 5](https://neon.com/guides/real-time-comments) +- [Source 6](https://dev.to/bias/build-reactive-realtime-apps-with-postgres-49ej) +- [Source 7](https://www.postgresql.org) +- [Source 8](https://www.amplifilabs.com/post/postgres-breakout-year-agent-protocols-rise-and-a-six-bug-rce-wake-up-call) + diff --git a/docs/tech-stack.md b/docs/tech-stack.md new file mode 100644 index 0000000..3eb14f4 --- /dev/null +++ b/docs/tech-stack.md @@ -0,0 +1,44 @@ +# Tech Stack + +- **Language**: TypeScript +- **Runtime**: Node.js 20 +- **Framework**: NestJS 10 +- **Testing**: Jest (backend), Vitest (frontend) +- **Build Tool**: Docker Compose, pnpm workspaces, tsup + +## Libraries + +- TypeORM +- PostgreSQL +- @nestjs/jwt +- @nestjs/passport +- @nestjs/websockets +- @nestjs/platform-socket.io +- @nestjs/schedule +- @nestjs/config +- class-validator +- class-transformer +- bcrypt +- date-fns +- ical.js +- node-caldav +- @microsoft/microsoft-graph-client +- imap +- axios +- bull +- ioredis +- helmet +- express-rate-limit +- React 18 +- Vite 5 +- React Router +- TanStack Query (React Query) +- Zustand +- FullCalendar +- react-beautiful-dnd +- socket.io-client +- date-fns +- React Hook Form +- Zod +- TailwindCSS +- Radix UI diff --git a/prd.json b/prd.json new file mode 100644 index 0000000..8dcd548 --- /dev/null +++ b/prd.json @@ -0,0 +1,96 @@ +{ + "project": "nick-tracker", + "version": "1.0.0", + "features": [ + { + "id": "gtd_inbox_capture", + "phase": 1, + "name": "GTD Inbox Capture", + "description": "Multi-source task capture system that ingests tasks from manual web form, REST API, email (IMAP/Microsoft Graph), and ConnectWise Manage sync into an unprocessed inbox for later GTD clarification", + "priority": 1, + "passes": false, + "acceptance": "Manual tasks can be submitted via web form quick-add and appear in inbox" + }, + { + "id": "gtd_processing_workflow", + "phase": 1, + "name": "GTD Processing Workflow", + "description": "Interactive inbox processing interface that guides users through GTD clarification: converting raw inbox items into Next Actions with context tags, Projects, Waiting For items, Someday/Maybe, Reference Material, Tickler items, or Trash", + "priority": 2, + "passes": false, + "acceptance": "Inbox view displays unprocessed items with processing workflow controls" + }, + { + "id": "connectwise_integration", + "phase": 2, + "name": "ConnectWise Manage Integration", + "description": "Read-only sync from ConnectWise Manage that imports service tickets, project tickets, and projects assigned to user. Projects with zero tickets surface as planning tasks. 
+
+---
+
+## 4. Recommended concrete patterns in 2026‑ish stack
+
+If you are:
+
+- **Small/medium NestJS app** (≤ 10k concurrent sockets, modest write volume):
+  - Postgres triggers + `LISTEN/NOTIFY`.
+  - A NestJS background service (or pg_eventserv) listens and calls a `RealtimeGateway` to broadcast.[3][6]
+  - Optional: outbox table to decouple triggers from business events.[5]
+
+- **Growing app or high‑throughput**:
+  - Move to **logical replication / CDC** (ElectricSQL, Supabase Realtime, or custom consumer).
+  - Keep NestJS as the **API + auth + WS edge**, but the “change capture” logic lives in a separate service.[2][4]
+
+- **You want to outsource real‑time infra**:
+  - Supabase Realtime as the WS/event layer on top of Postgres.[2]
+  - Or Ably/LiveSync with an outbox + NOTIFY bridge from Postgres.[5]
+
+---
+
+If you share your current scale (writes/sec, expected concurrent clients) and whether you can add extra infra (Redis, separate change‑consumer), I can outline an exact NestJS + Postgres + WS architecture and minimal code skeleton tailored to that.
+
+### Sources
+
+- [Source 1](https://www.youtube.com/watch?v=51Mzx0c6Z30)
+- [Source 2](https://github.com/supabase/realtime)
+- [Source 3](https://www.crunchydata.com/blog/real-time-database-events-with-pg_eventserv)
+- [Source 4](https://trigger.dev/blog/how-we-built-realtime)
+- [Source 5](https://neon.com/guides/real-time-comments)
+- [Source 6](https://dev.to/bias/build-reactive-realtime-apps-with-postgres-49ej)
+- [Source 7](https://www.postgresql.org)
+- [Source 8](https://www.amplifilabs.com/post/postgres-breakout-year-agent-protocols-rise-and-a-six-bug-rce-wake-up-call)
+
diff --git a/docs/tech-stack.md b/docs/tech-stack.md
new file mode 100644
index 0000000..3eb14f4
--- /dev/null
+++ b/docs/tech-stack.md
@@ -0,0 +1,44 @@
+# Tech Stack
+
+- **Language**: TypeScript
+- **Runtime**: Node.js 20
+- **Framework**: NestJS 10
+- **Testing**: Jest (backend), Vitest (frontend)
+- **Build Tool**: Docker Compose, pnpm workspaces, tsup
+
+## Libraries
+
+- TypeORM
+- PostgreSQL
+- @nestjs/jwt
+- @nestjs/passport
+- @nestjs/websockets
+- @nestjs/platform-socket.io
+- @nestjs/schedule
+- @nestjs/config
+- class-validator
+- class-transformer
+- bcrypt
+- date-fns
+- ical.js
+- node-caldav
+- @microsoft/microsoft-graph-client
+- imap
+- axios
+- bull
+- ioredis
+- helmet
+- express-rate-limit
+- React 18
+- Vite 5
+- React Router
+- TanStack Query (React Query)
+- Zustand
+- FullCalendar
+- react-beautiful-dnd
+- socket.io-client
+- date-fns
+- React Hook Form
+- Zod
+- TailwindCSS
+- Radix UI
diff --git a/prd.json b/prd.json
new file mode 100644
index 0000000..8dcd548
--- /dev/null
+++ b/prd.json
@@ -0,0 +1,96 @@
+{
+  "project": "nick-tracker",
+  "version": "1.0.0",
+  "features": [
+    {
+      "id": "gtd_inbox_capture",
+      "phase": 1,
+      "name": "GTD Inbox Capture",
+      "description": "Multi-source task capture system that ingests tasks from manual web form, REST API, email (IMAP/Microsoft Graph), and ConnectWise Manage sync into an unprocessed inbox for later GTD clarification",
+      "priority": 1,
+      "passes": false,
+      "acceptance": "Manual tasks can be submitted via web form quick-add and appear in inbox"
+    },
+    {
+      "id": "gtd_processing_workflow",
+      "phase": 1,
+      "name": "GTD Processing Workflow",
+      "description": "Interactive inbox processing interface that guides users through GTD clarification: converting raw inbox items into Next Actions with context tags, Projects, Waiting For items, Someday/Maybe, Reference Material, Tickler items, or Trash",
+      "priority": 2,
+      "passes": false,
+      "acceptance": "Inbox view displays unprocessed items with processing workflow controls"
+    },
+    {
+      "id": "connectwise_integration",
+      "phase": 2,
+      "name": "ConnectWise Manage Integration",
+      "description": "Read-only sync from ConnectWise Manage that imports service tickets, project tickets, and projects assigned to user. Projects with zero tickets surface as planning tasks. ConnectWise priority/SLA displayed for reference only; user assigns manual priority",
+      "priority": 3,
+      "passes": false,
+      "acceptance": "ConnectWise API integration syncs assigned service tickets as inbox items"
+    },
+    {
+      "id": "intelligent_calendar_scheduling",
+      "phase": 2,
+      "name": "Intelligent Calendar Scheduling",
+      "description": "Automatic scheduling engine that pulls from CalDAV calendars (Nextcloud, Google Calendar, Outlook via Microsoft Graph) and places actionable tasks into available time slots, respecting working hours, context constraints, deadlines, and manual priority. Supports drag-drop manual override and task locking",
+      "priority": 4,
+      "passes": false,
+      "acceptance": "Engine reads existing events from CalDAV/Google/Outlook calendars"
+    },
+    {
+      "id": "calendar_week_view",
+      "phase": 2,
+      "name": "Interactive Calendar Week View",
+      "description": "React SPA with interactive week-view calendar displaying scheduled tasks and calendar events. Supports drag-and-drop task rescheduling, manual time adjustments, and real-time updates when scheduling changes occur",
+      "priority": 5,
+      "passes": false,
+      "acceptance": "Week view renders all scheduled tasks and synced calendar events"
+    },
+    {
+      "id": "weekly_review_interface",
+      "phase": 3,
+      "name": "Weekly Review Interface",
+      "description": "Dedicated GTD Weekly Review interface with auto-scheduled recurring review block. Shows all active projects, next actions per project, unprocessed inbox count, waiting-for items, and someday/maybe list for systematic review",
+      "priority": 6,
+      "passes": false,
+      "acceptance": "Weekly Review block auto-scheduled at user-configured recurring time"
+    },
+    {
+      "id": "gtd_contexts_and_domains",
+      "phase": 3,
+      "name": "GTD Contexts and Life Domains",
+      "description": "Context label system (@desk, @phone, @errand, @homelab, @anywhere) and domain organization covering Work (ConnectWise tasks), Homelab (Proxmox, networking, 3D printing, NAS), Daily Routines (meals, exercise, supplements), House (maintenance, errands, cleaning), and Professional Development (Azure certification)",
+      "priority": 7,
+      "passes": false,
+      "acceptance": "Tasks taggable with context labels (@desk, @phone, @errand, @homelab, @anywhere)"
+    },
+    {
+      "id": "waiting_for_and_tickler",
+      "phase": 3,
+      "name": "Waiting For and Tickler System",
+      "description": "Waiting For list tracks items delegated or awaiting external input with optional follow-up dates. Tickler/Deferred items stored with future activation dates and automatically surface to inbox on specified date",
+      "priority": 8,
+      "passes": false,
+      "acceptance": "Waiting For list displays all items awaiting external action"
+    },
+    {
+      "id": "project_management",
+      "phase": 4,
+      "name": "GTD Project Management",
+      "description": "Project hierarchy supporting multi-step outcomes with next actions, reference material attachments, notes, and project status tracking.
ConnectWise projects with zero tickets surface as planning tasks requiring work breakdown", + "priority": 9, + "passes": false, + "acceptance": "Projects created with name, description, desired outcome, and domain" + }, + { + "id": "notifications_and_rescheduling", + "phase": 4, + "name": "Notifications and Rescheduling Alerts", + "description": "Real-time notification system via WebSocket, email, and optional webhook when automatic rescheduling occurs due to calendar conflicts, when Waiting For follow-ups are due, or when Tickler items activate", + "priority": 10, + "passes": false, + "acceptance": "WebSocket push notifications to active browser sessions when tasks reschedule" + } + ] +} \ No newline at end of file diff --git a/progress.txt b/progress.txt new file mode 100644 index 0000000..86f3d74 --- /dev/null +++ b/progress.txt @@ -0,0 +1,7 @@ +# Progress Log - nick-tracker +# Format: [TIMESTAMP] [ITERATION] [STATUS] - [DETAILS] +# Agent: Append only, never modify previous entries + +--- + +[2026-01-11T04:59:40.694Z] [0] [INIT] - Project scaffold created diff --git a/prompts/phase1-prompt.txt b/prompts/phase1-prompt.txt new file mode 100644 index 0000000..8888cc9 --- /dev/null +++ b/prompts/phase1-prompt.txt @@ -0,0 +1,58 @@ +# Phase 1: Foundation + +## Context + +Read PROMPT.md for full project requirements and context. +This prompt focuses ONLY on Phase 1: Foundation. + +## Phase Objective + +Project setup, core infrastructure, and initial configuration + +## Phase 1 Tasks + +- [ ] GTD Inbox Capture: Multi-source task capture system that ingests tasks from manual web form, REST API, email (IMAP/Microsoft Graph), and ConnectWise Manage sync into an unprocessed inbox for later GTD clarification + - Acceptance: Manual tasks can be submitted via web form quick-add and appear in inbox +- [ ] GTD Processing Workflow: Interactive inbox processing interface that guides users through GTD clarification: converting raw inbox items into Next Actions with context tags, Projects, Waiting For items, Someday/Maybe, Reference Material, Tickler items, or Trash + - Acceptance: Inbox view displays unprocessed items with processing workflow controls + +## Working Instructions + +1. Read PROMPT.md to understand the full project context +2. Focus ONLY on the tasks listed above for this phase +3. For each task: + - Implement the feature + - Write tests + - Run: npm run build && npm run test && npm run lint + - Update prd.json to set passes: true for completed features + - Append progress to progress.txt + - Commit with conventional commit message + +## Constraints + +- Always run tests before committing +- Never commit failing code +- Do not implement features from other phases +- Make reasonable decisions - do not ask questions +- Update prd.json when features complete + +## Verification + +After completing all Phase 1 tasks: +```bash +npm run build && npm run test && npm run lint +``` + +All commands must pass with zero errors. + +## Completion + +When ALL Phase 1 tasks are complete and verified: +- All features for this phase pass their acceptance criteria +- prd.json shows passes: true for all Phase 1 features +- Build, test, and lint all pass + +Output: PHASE_1_COMPLETE + +If blocked and cannot proceed: +Output: ABORT_BLOCKED diff --git a/prompts/phase2-prompt.txt b/prompts/phase2-prompt.txt new file mode 100644 index 0000000..1d4b1d7 --- /dev/null +++ b/prompts/phase2-prompt.txt @@ -0,0 +1,60 @@ +# Phase 2: Core + +## Context + +Read PROMPT.md for full project requirements and context. 
+This prompt focuses ONLY on Phase 2: Core. + +## Phase Objective + +Main functionality and core features implementation + +## Phase 2 Tasks + +- [ ] ConnectWise Manage Integration: Read-only sync from ConnectWise Manage that imports service tickets, project tickets, and projects assigned to user. Projects with zero tickets surface as planning tasks. ConnectWise priority/SLA displayed for reference only; user assigns manual priority + - Acceptance: ConnectWise API integration syncs assigned service tickets as inbox items +- [ ] Intelligent Calendar Scheduling: Automatic scheduling engine that pulls from CalDAV calendars (Nextcloud, Google Calendar, Outlook via Microsoft Graph) and places actionable tasks into available time slots, respecting working hours, context constraints, deadlines, and manual priority. Supports drag-drop manual override and task locking + - Acceptance: Engine reads existing events from CalDAV/Google/Outlook calendars +- [ ] Interactive Calendar Week View: React SPA with interactive week-view calendar displaying scheduled tasks and calendar events. Supports drag-and-drop task rescheduling, manual time adjustments, and real-time updates when scheduling changes occur + - Acceptance: Week view renders all scheduled tasks and synced calendar events + +## Working Instructions + +1. Read PROMPT.md to understand the full project context +2. Focus ONLY on the tasks listed above for this phase +3. For each task: + - Implement the feature + - Write tests + - Run: npm run build && npm run test && npm run lint + - Update prd.json to set passes: true for completed features + - Append progress to progress.txt + - Commit with conventional commit message + +## Constraints + +- Always run tests before committing +- Never commit failing code +- Do not implement features from other phases +- Make reasonable decisions - do not ask questions +- Update prd.json when features complete + +## Verification + +After completing all Phase 2 tasks: +```bash +npm run build && npm run test && npm run lint +``` + +All commands must pass with zero errors. + +## Completion + +When ALL Phase 2 tasks are complete and verified: +- All features for this phase pass their acceptance criteria +- prd.json shows passes: true for all Phase 2 features +- Build, test, and lint all pass + +Output: PHASE_2_COMPLETE + +If blocked and cannot proceed: +Output: ABORT_BLOCKED diff --git a/prompts/phase3-prompt.txt b/prompts/phase3-prompt.txt new file mode 100644 index 0000000..1edfeee --- /dev/null +++ b/prompts/phase3-prompt.txt @@ -0,0 +1,60 @@ +# Phase 3: Integration + +## Context + +Read PROMPT.md for full project requirements and context. +This prompt focuses ONLY on Phase 3: Integration. + +## Phase Objective + +External services, error handling, and system integration + +## Phase 3 Tasks + +- [ ] Weekly Review Interface: Dedicated GTD Weekly Review interface with auto-scheduled recurring review block. 
Shows all active projects, next actions per project, unprocessed inbox count, waiting-for items, and someday/maybe list for systematic review + - Acceptance: Weekly Review block auto-scheduled at user-configured recurring time +- [ ] GTD Contexts and Life Domains: Context label system (@desk, @phone, @errand, @homelab, @anywhere) and domain organization covering Work (ConnectWise tasks), Homelab (Proxmox, networking, 3D printing, NAS), Daily Routines (meals, exercise, supplements), House (maintenance, errands, cleaning), and Professional Development (Azure certification) + - Acceptance: Tasks taggable with context labels (@desk, @phone, @errand, @homelab, @anywhere) +- [ ] Waiting For and Tickler System: Waiting For list tracks items delegated or awaiting external input with optional follow-up dates. Tickler/Deferred items stored with future activation dates and automatically surface to inbox on specified date + - Acceptance: Waiting For list displays all items awaiting external action + +## Working Instructions + +1. Read PROMPT.md to understand the full project context +2. Focus ONLY on the tasks listed above for this phase +3. For each task: + - Implement the feature + - Write tests + - Run: npm run build && npm run test && npm run lint + - Update prd.json to set passes: true for completed features + - Append progress to progress.txt + - Commit with conventional commit message + +## Constraints + +- Always run tests before committing +- Never commit failing code +- Do not implement features from other phases +- Make reasonable decisions - do not ask questions +- Update prd.json when features complete + +## Verification + +After completing all Phase 3 tasks: +```bash +npm run build && npm run test && npm run lint +``` + +All commands must pass with zero errors. + +## Completion + +When ALL Phase 3 tasks are complete and verified: +- All features for this phase pass their acceptance criteria +- prd.json shows passes: true for all Phase 3 features +- Build, test, and lint all pass + +Output: PHASE_3_COMPLETE + +If blocked and cannot proceed: +Output: ABORT_BLOCKED diff --git a/prompts/phase4-prompt.txt b/prompts/phase4-prompt.txt new file mode 100644 index 0000000..44d91a8 --- /dev/null +++ b/prompts/phase4-prompt.txt @@ -0,0 +1,58 @@ +# Phase 4: Polish + +## Context + +Read PROMPT.md for full project requirements and context. +This prompt focuses ONLY on Phase 4: Polish. + +## Phase Objective + +Documentation, optimization, testing, and final packaging + +## Phase 4 Tasks + +- [ ] GTD Project Management: Project hierarchy supporting multi-step outcomes with next actions, reference material attachments, notes, and project status tracking. ConnectWise projects with zero tickets surface as planning tasks requiring work breakdown + - Acceptance: Projects created with name, description, desired outcome, and domain +- [ ] Notifications and Rescheduling Alerts: Real-time notification system via WebSocket, email, and optional webhook when automatic rescheduling occurs due to calendar conflicts, when Waiting For follow-ups are due, or when Tickler items activate + - Acceptance: WebSocket push notifications to active browser sessions when tasks reschedule + +## Working Instructions + +1. Read PROMPT.md to understand the full project context +2. Focus ONLY on the tasks listed above for this phase +3. 
For each task: + - Implement the feature + - Write tests + - Run: npm run build && npm run test && npm run lint + - Update prd.json to set passes: true for completed features + - Append progress to progress.txt + - Commit with conventional commit message + +## Constraints + +- Always run tests before committing +- Never commit failing code +- Do not implement features from other phases +- Make reasonable decisions - do not ask questions +- Update prd.json when features complete + +## Verification + +After completing all Phase 4 tasks: +```bash +npm run build && npm run test && npm run lint +``` + +All commands must pass with zero errors. + +## Completion + +When ALL Phase 4 tasks are complete and verified: +- All features for this phase pass their acceptance criteria +- prd.json shows passes: true for all Phase 4 features +- Build, test, and lint all pass + +Output: PHASE_4_COMPLETE + +If blocked and cannot proceed: +Output: ABORT_BLOCKED