# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
ComfyUI-based image-to-video generation service deployed on RunPod Serverless. It accepts base64-encoded images and text prompts via the RunPod API, runs them through ComfyUI workflows, and returns the generated videos.
## Architecture

```
RunPod API Request → handler.py → ComfyUI Server (port 8188) → GPU Inference → Response
                                        ↓
                      Network Volume (/runpod-volume) for models
```
Key flow in `handler.py`:

- `start_comfyui()` - launches the ComfyUI server
- `upload_image()` - uploads the base64 image to ComfyUI
- `inject_wan22_params()` - injects parameters into workflow nodes
- `queue_workflow()` - submits the workflow to the ComfyUI queue
- `poll_for_completion()` - polls until done (max 600s)
- `fetch_output()` - retrieves the generated video as base64
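For reference, `queue_workflow()` and `poll_for_completion()` correspond to ComfyUI's standard HTTP API (POST `/prompt` to enqueue, GET `/history/<prompt_id>` to check completion). The sketch below shows that round-trip in isolation; it is an outline of the idea, not the exact implementation in `handler.py`:

```python
import json
import time
import urllib.request

COMFY = "http://127.0.0.1:8188"

def queue_workflow(workflow: dict) -> str:
    """Submit an API-format workflow; ComfyUI responds with a prompt_id."""
    req = urllib.request.Request(
        f"{COMFY}/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["prompt_id"]

def poll_for_completion(prompt_id: str, timeout: int = 600) -> dict:
    """Poll /history until the prompt appears as finished or the timeout hits."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        with urllib.request.urlopen(f"{COMFY}/history/{prompt_id}") as resp:
            history = json.load(resp)
        if prompt_id in history:  # entry appears once execution has finished
            return history[prompt_id]
        time.sleep(2)
    raise TimeoutError(f"workflow {prompt_id} did not finish within {timeout}s")
```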
## Build Commands

```bash
# Build Docker image
docker build -t comfyui-runpod:latest .

# Push to Gitea registry
docker push gitea.voyager.sh/nick/comfyui-serverless:latest
```
CI/CD is handled by Gitea Actions and triggers on pushes to the `main` branch.
## Local Testing

```bash
docker run --gpus all -p 8188:8188 \
  -v /path/to/models:/runpod-volume/models \
  comfyui-runpod:latest
```
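Once the container is up, a quick sanity check is to hit ComfyUI's `/system_stats` endpoint (a standard ComfyUI route) to confirm the server is responding and sees the GPU; this is just a local check, not part of the handler:

```python
import json
import urllib.request

# Confirm the ComfyUI server inside the container is responding and sees the GPU.
with urllib.request.urlopen("http://localhost:8188/system_stats", timeout=10) as resp:
    stats = json.load(resp)

print(json.dumps(stats["devices"], indent=2))  # device name, VRAM totals, etc.
```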
## API Input Schema

```json
{
  "image": "base64 encoded image (required)",
  "prompt": "positive prompt (required)",
  "negative_prompt": "optional",
  "resolution": 720,
  "steps": 8,
  "split_step": 4,
  "timeout": 600
}
```
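When the endpoint is deployed, this schema goes inside the `input` object of a standard RunPod Serverless request. A client-side sketch (the endpoint ID, API key, and image file are placeholders):

```python
import base64
import requests

ENDPOINT_ID = "your-endpoint-id"   # placeholder
API_KEY = "your-runpod-api-key"    # placeholder

with open("input.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "input": {
        "image": image_b64,
        "prompt": "a sailboat drifting through morning fog",
        "resolution": 720,
        "steps": 8,
        "split_step": 4,
    }
}

# /runsync blocks until the job finishes; for long generations, prefer
# /run plus polling /status/<job_id>.
resp = requests.post(
    f"https://api.runpod.ai/v2/{ENDPOINT_ID}/runsync",
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=700,
)
print(resp.json())
```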
## Workflow Node Mapping (Wan22-I2V-Remix)
| Node ID | Purpose |
|---|---|
| 148 | LoadImage (input) |
| 134 | CLIPTextEncode (positive prompt) |
| 137 | CLIPTextEncode (negative prompt) |
| 147 | Resolution |
| 150 | Steps |
| 151 | Split Step |
| 117 | SaveVideo (output) |
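These node IDs are the keys that `inject_wan22_params()` writes into before queueing. A rough sketch of what that injection might look like; the input field names (`image`, `text`, `value`) depend on the actual node types in the exported API-format workflow and are assumptions here:

```python
def inject_wan22_params(workflow: dict, image_name: str, params: dict) -> dict:
    """Write request parameters into the Wan22-I2V-Remix workflow by node ID."""
    workflow["148"]["inputs"]["image"] = image_name                        # LoadImage
    workflow["134"]["inputs"]["text"] = params["prompt"]                   # positive prompt
    workflow["137"]["inputs"]["text"] = params.get("negative_prompt", "")  # negative prompt
    workflow["147"]["inputs"]["value"] = params.get("resolution", 720)     # Resolution
    workflow["150"]["inputs"]["value"] = params.get("steps", 8)            # Steps
    workflow["151"]["inputs"]["value"] = params.get("split_step", 4)       # Split Step
    return workflow
```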
## Stack
- CUDA 12.8.1, Python 3.12, PyTorch 2.8.0+cu128
- SageAttention 2.2 (compiled from source with `--no-build-isolation`)
- Nunchaku 1.0.2
- 12 ComfyUI custom nodes (see Dockerfile)
## Key Considerations
- Models are stored on the RunPod Network Volume at `/runpod-volume/models/`
- Cold start is ~30-60s for ComfyUI initialization
- Large outputs (>10MB) are returned as file paths, not base64
- Workflow files live in the `workflows/` directory (API format)
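A minimal sketch of the >10MB rule above, assuming the generated video lands somewhere on the network volume (the real `fetch_output()` may locate the file and shape the response differently):

```python
import base64
import os

MAX_INLINE_BYTES = 10 * 1024 * 1024  # 10MB cutoff noted above

def build_output(video_path: str) -> dict:
    """Return the video inline as base64 when small, otherwise as a file path."""
    if os.path.getsize(video_path) <= MAX_INLINE_BYTES:
        with open(video_path, "rb") as f:
            return {"video_base64": base64.b64encode(f.read()).decode()}
    return {"video_path": video_path}  # e.g. a path under /runpod-volume
```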