Ways of Working
How to Build Client Apps on the Platform
The Pattern (Proven by CRM Demo)
Each client app is a separate stack: its own React frontend, its own BFF (backend-for-frontend) API, and its own database. The BFF contains all domain-specific logic and consumes Condelo platform APIs for AI/intelligence capabilities. Zero client-specific code lives in the platform.
- Create a space for the client with appropriate sources and feeds configured
- Build the client app as a separate stack (React frontend + BFF API + DB)
- BFF consumes Condelo APIs via API Key + X-Space-Id for all AI/intelligence features
- Use webhooks for data ingestion from the client's systems (CRM, ERP, etc.)
- Use SSE events for real-time updates from the platform
- Domain-specific aggregation lives in the client's BFF, not in the platform
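The API-key handshake in these bullets can be sketched as a thin fetch wrapper in the BFF. Only the `X-Space-Id` header comes from this document; the Bearer scheme, paths, and the `CondeloConfig` shape are illustrative assumptions:

```typescript
// Hypothetical thin client the BFF uses for all platform calls.
interface CondeloConfig {
  baseUrl: string;   // e.g. https://api.condelo.ai
  apiKey: string;    // partner API key
  spaceId: string;   // sent as X-Space-Id on every request
}

function condeloHeaders(cfg: CondeloConfig): Record<string, string> {
  return {
    // Assumption: the platform accepts the key as a Bearer token.
    Authorization: `Bearer ${cfg.apiKey}`,
    "X-Space-Id": cfg.spaceId,
    "Content-Type": "application/json",
  };
}

async function condeloGet<T>(cfg: CondeloConfig, path: string): Promise<T> {
  const res = await fetch(`${cfg.baseUrl}${path}`, { headers: condeloHeaders(cfg) });
  if (!res.ok) throw new Error(`Condelo API ${res.status} on ${path}`);
  return res.json() as Promise<T>;
}
```

Every domain route in the BFF would go through a wrapper like this, so auth and space scoping are set in exactly one place.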
What We Should Formalize
1. App Starter Template (saves 1-2 weeks per client app)
condelo-app-template/
├── app/ # React frontend
│ ├── lib/
│ │ ├── condelo-client.ts # Typed Condelo API client
│ │ ├── sse-client.ts # Real-time event subscription
│ │ └── types.ts # Shared response types
│ ├── components/
│ │ ├── DataTable.tsx # Reusable data display
│ │ ├── KPICard.tsx # Metric display
│ │ ├── ActivityFeed.tsx # Real-time event feed
│ │ └── Chart.tsx # Visualization wrapper
│ └── routes/
│ └── _index.tsx # Dashboard template
├── bff/ # Backend-for-frontend API
│ ├── src/
│ │ ├── routes/ # Domain-specific routes
│ │ ├── services/ # Domain logic, Condelo API calls
│ │ └── index.ts
│ └── package.json
├── .env.example
└── package.json
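As a sketch of what `sse-client.ts` might wrap, here is a parser for raw SSE frames. The `event:`/`data:` field handling follows the SSE wire format; the event names the platform actually emits are not assumed:

```typescript
// One parsed server-sent event. Field names follow the SSE wire format;
// the payloads themselves are whatever the platform emits.
interface SseEvent {
  event: string;   // defaults to "message" per the SSE spec
  data: string;    // concatenated data lines
  id?: string;
}

// Parse a single SSE frame (the text between blank lines on the stream).
function parseSseFrame(frame: string): SseEvent {
  const evt: SseEvent = { event: "message", data: "" };
  const dataLines: string[] = [];
  for (const line of frame.split("\n")) {
    if (line.startsWith(":")) continue;                 // comment / keep-alive
    const i = line.indexOf(":");
    if (i === -1) continue;
    const field = line.slice(0, i);
    const value = line.slice(i + 1).replace(/^ /, "");  // strip one leading space
    if (field === "event") evt.event = value;
    else if (field === "data") dataLines.push(value);
    else if (field === "id") evt.id = value;
  }
  evt.data = dataLines.join("\n");
  return evt;
}
```

In the browser, `EventSource` does this parsing for you; a hand-rolled parser like this is only needed if the BFF consumes the stream server-side over plain fetch.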
2. Webhook Integration Pattern
- Client systems push data via `POST /webhook/source/:sourceId` (platform capability)
- Data gets ingested, embedded, and analyzed automatically by the platform
- Agent runs trigger on new data, generating inferences
- Client BFF reads inferences via platform APIs and applies domain-specific logic
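The push step could look like this from the client system's side. The payload fields here are hypothetical; only the `/webhook/source/:sourceId` route shape comes from the pattern above:

```typescript
// Hypothetical payload a client CRM pushes into the platform.
// Field names are illustrative, not a confirmed schema.
interface WebhookPayload {
  externalId: string;
  kind: string;                    // e.g. "crm.contact.updated"
  occurredAt: string;              // ISO-8601 timestamp
  body: Record<string, unknown>;
}

function webhookUrl(baseUrl: string, sourceId: string): string {
  return `${baseUrl}/webhook/source/${encodeURIComponent(sourceId)}`;
}

async function pushToCondelo(
  baseUrl: string,
  sourceId: string,
  payload: WebhookPayload,
): Promise<number> {
  const res = await fetch(webhookUrl(baseUrl, sourceId), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.status; // ingestion, embedding, and analysis happen asynchronously
}
```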
Development Plan
Team Model: 2 Developers + Claude Code
Realistic velocity estimate:
- 2 developers, each using Claude Code effectively = ~3-4x velocity vs raw coding
- Effective output: ~6-8 developer-weeks per calendar week
- After reviews, testing, debugging, and coordination overhead: ~4-5 effective dev-weeks/week
How to work as a team of 2 with Claude Code:
- Each dev runs their own Claude Code session on a separate branch
- Merge to main frequently (daily) to avoid conflicts
- Use feature branches: `feat/llm-package`, `feat/deployment`, etc.
- One dev reviews the other's PRs before merge (even if Claude wrote it — human eyes catch integration issues)
- Weekly 30-min sync: what shipped, what's blocked, what's next
Phase 0: Foundation (Weeks 1-3) — CRITICAL PATH
Dev 1: Deployment & Infrastructure
- Dockerfiles for API, Data Plane, Web, Workers (multi-stage builds)
- `docker-compose.prod.yml` with Caddy reverse proxy, environment configs
- Provision Hetzner VPS (EU region), set up UFW firewall
- Domain setup (api.condelo.ai, etc.), Caddy auto-TLS
- Environment-aware CORS (replace hardcoded localhost ports)
- GitHub Actions CI: lint → typecheck → test → build Docker images
- GitHub Actions CD: SSH deploy to VPS on main push
- Automated daily backups: `pg_dump` + Qdrant snapshot → Cloudflare R2
- Secrets management via environment variables on VPS (not .env files)
Dev 2: @condelo/llm Package + Usage Metering
- Create `packages/llm/` — unified LLM abstraction
  - Merge `apps/api/src/lib/llm/` and `apps/data-plane/src/lib/llm/` into one package
- Metered client wrapper: extracts `response.usage` after every call
- Cost estimation: model pricing lookup + token count → cost per call
- Usage logger: writes to `usage_events` table with `{ orgId, spaceId, taskType, model, promptTokens, completionTokens, cost }`
- Provider registry: OpenAI, OpenRouter, Ollama, LM Studio, custom
- Unified config: DB settings + env var fallbacks (single source of truth)
- `usage_events` table in `packages/db/src/schema/`
  - Columns: id, org_id, space_id, event_type (llm_chat, llm_embedding, doc_process, agent_run, api_call), model, tokens_in, tokens_out, estimated_cost, metadata (JSONB), created_at
  - Partitioned by month (for query performance at scale)
- Update API to import from `@condelo/llm` (remove `apps/api/src/lib/llm/`)
- Update Data Plane to import from `@condelo/llm` (remove `apps/data-plane/src/lib/llm/`)
- Verify: all 38 LLM call sites now auto-log to `usage_events`
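The metered-wrapper and cost-estimation items can be sketched as follows. The per-token prices are placeholders, not real model pricing; the real lookup would live in a maintained table:

```typescript
// Placeholder per-1M-token prices in USD. Illustrative only — the
// production pricing lookup must be kept current per provider.
const PRICING: Record<string, { inPerM: number; outPerM: number }> = {
  "gpt-4o-mini": { inPerM: 0.15, outPerM: 0.6 },
  "local-ollama": { inPerM: 0, outPerM: 0 },
};

interface UsageEvent {
  model: string;
  promptTokens: number;
  completionTokens: number;
  estimatedCost: number; // USD
}

// What the metered wrapper does with `response.usage` after each call,
// before writing the row to usage_events.
function meterUsage(
  model: string,
  promptTokens: number,
  completionTokens: number,
): UsageEvent {
  const price = PRICING[model] ?? { inPerM: 0, outPerM: 0 };
  const estimatedCost =
    (promptTokens / 1_000_000) * price.inPerM +
    (completionTokens / 1_000_000) * price.outPerM;
  return { model, promptTokens, completionTokens, estimatedCost };
}
```

Because every provider goes through the one wrapper, the "all 38 call sites auto-log" guarantee falls out of the package boundary rather than per-site discipline.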
Phase 1: Auth, Orgs & Route Reorganization (Weeks 3-5)
Dev 1: API Key Management + Route Groups
- `api_keys` table (key_hash, org_id, space_id, scopes[], rate_limit, last_used_at)
- API key auth middleware (validate key → set org/space context, alongside existing Bearer auth)
- Basic rate limiting per key (Redis-backed token bucket)
- Reorganize `apps/api/src/routes/` into logical groups:
  - `core/` — health, spaces, documents, sources, webhook, threads, messages
  - `intelligence/` — feeds, agents, inferences, explorations, stories, surfaces
  - `engagement/` — signals, notifications, events, experiences
  - `admin/` — settings, wiki, billing, organizations, usage
- Update `index.ts` route registration to use the new structure
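The Redis-backed token bucket can be illustrated with an in-memory sketch; production would keep the bucket state in Redis so every API instance shares it:

```typescript
// In-memory sketch of the per-key token bucket. The interface is the
// same whether state lives here or in Redis.
interface Bucket {
  tokens: number;
  lastRefill: number; // ms timestamp of last refill calculation
}

class TokenBucketLimiter {
  private buckets = new Map<string, Bucket>();

  constructor(private capacity: number, private refillPerSec: number) {}

  // Returns true if the request identified by apiKey may proceed.
  allow(apiKey: string, now: number = Date.now()): boolean {
    const b = this.buckets.get(apiKey) ?? { tokens: this.capacity, lastRefill: now };
    const elapsedSec = (now - b.lastRefill) / 1000;
    // Refill proportionally to elapsed time, capped at capacity.
    b.tokens = Math.min(this.capacity, b.tokens + elapsedSec * this.refillPerSec);
    b.lastRefill = now;
    const allowed = b.tokens >= 1;
    if (allowed) b.tokens -= 1;
    this.buckets.set(apiKey, b);
    return allowed;
  }
}
```

The `rate_limit` column on `api_keys` would feed the per-key `capacity`/`refillPerSec` values.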
Dev 2: Organization Model + Stripe Connect
- `organizations` table (id, name, slug, stripe_connect_account_id, created_at)
- `organization_members` table (org_id, user_id, role: owner/admin/member)
- Spaces belong to orgs (add `org_id` to spaces, keep `user_id` for creator tracking)
- RLS policy updates: org membership check instead of direct user ownership
- Org management API routes (CRUD, invite members)
- Stripe Connect platform setup + partner onboarding flow
- End-user subscription billing via Stripe Connect
- Automatic 30/70 split via `application_fee_percent`
- Webhook handler for Stripe events
Phase 2: Operator Dashboard + Partner Experience (Weeks 5-7)
Dev 1: Operator Dashboard
- Admin-only section in web app (or separate lightweight admin app)
- Partner list: name, status, plan, Stripe Connect status
- Per-partner usage dashboard (aggregate from `usage_events`):
  - LLM tokens consumed (by model, by task type)
  - Documents processed
  - Agent runs
  - Storage used
  - API calls
- Revenue dashboard: total platform revenue, per-partner commission, payouts
- Health dashboard: service status, queue depths, error rates
- Partner onboarding wizard: create org → provision space → generate API keys → Stripe Connect onboarding → configure webhooks
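The per-partner rollup would normally be a SQL `GROUP BY` over `usage_events`; this TypeScript sketch shows the intended output shape, using the column names defined in Phase 0:

```typescript
// Row shape mirrors the usage_events columns from Phase 0.
interface UsageRow {
  org_id: string;
  event_type: "llm_chat" | "llm_embedding" | "doc_process" | "agent_run" | "api_call";
  tokens_in: number;
  tokens_out: number;
  estimated_cost: number;
}

type TypeAgg = { tokens: number; cost: number };

// Roll rows up per org and event type — in production this is a single
// GROUP BY query; plain TS here to show the dashboard's data shape.
function aggregateUsage(rows: UsageRow[]): Map<string, Map<string, TypeAgg>> {
  const byOrg = new Map<string, Map<string, TypeAgg>>();
  for (const r of rows) {
    const perType = byOrg.get(r.org_id) ?? new Map<string, TypeAgg>();
    const agg = perType.get(r.event_type) ?? { tokens: 0, cost: 0 };
    agg.tokens += r.tokens_in + r.tokens_out;
    agg.cost += r.estimated_cost;
    perType.set(r.event_type, agg);
    byOrg.set(r.org_id, perType);
  }
  return byOrg;
}
```

The monthly partitioning from Phase 0 is what keeps this query cheap once `usage_events` grows.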
Dev 2: Partner Dashboard + App Template
- Partner-facing dashboard (what the app partner sees):
  - Their revenue, your commission, their net payout
  - End-user subscription metrics
  - Platform usage breakdown
- API key management (create, rotate, revoke)
- App starter template (`condelo-app-template/`):
  - Typed Condelo API client with auth
  - SSE event subscription helper
  - Common components (DataTable, KPICard, ActivityFeed, Chart)
  - Example dashboard route
  - `.env.example` with Condelo config
- Package the CRM compliance app as the reference implementation
Phase 3: Hardening & First Paying Client (Weeks 7-10)
Dev 1: Production Hardening
- Comprehensive error handling and standardized error codes
- Request validation tightening (Zod on all inputs)
- Performance profiling (slow queries, N+1s, unnecessary LLM calls)
- Better Stack dashboards: request latency, error rate, queue depth
- Alerting rules: downtime, high error rate, queue backlogs, LLM API failures
- Load testing (k6) to establish baselines and find bottlenecks
- Security hardening: rate limit auth endpoints, validate webhook signatures
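Webhook signature validation can be sketched with Node's `crypto` HMAC primitives. The hex encoding and SHA-256 choice here are assumptions; match them to whatever scheme the client's system actually signs with:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sign the raw (unparsed) request body — signing a re-serialized JSON
// object would break on key ordering and whitespace differences.
function signPayload(secret: string, rawBody: string): string {
  return createHmac("sha256", secret).update(rawBody).digest("hex");
}

function verifySignature(secret: string, rawBody: string, signatureHex: string): boolean {
  const expected = Buffer.from(signPayload(secret, rawBody), "hex");
  const given = Buffer.from(signatureHex, "hex");
  if (given.length !== expected.length) return false; // cheap gate first
  return timingSafeEqual(given, expected);            // constant-time compare
}
```

`timingSafeEqual` rather than `===` matters here: string comparison leaks how many leading bytes matched, which is exactly what a signature-forging attacker probes for.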
Dev 2: First Client Deployment
- Deploy CRM compliance app to production for paying client
- Client onboarding: org setup, space config, API keys, webhook integration
- Stripe Connect: set up end-user billing for the client's app
- Monitor usage, costs, and revenue for first 2 weeks
- Fix issues, iterate based on real production usage
Phase 4: Second Client + Platform Polish (Weeks 10-14)
Both developers:
- Second client engagement: requirements → domain route → app build → deploy
- Use app template — validate it saves significant time
- Iterate on template based on second build experience
- Usage-based billing refinement (adjust pricing if margins aren't right)
- Automated backup testing (restore drill)
- Documentation: partner onboarding guide, API reference
Deployment Strategy (VPS now → AWS/GCP later)
Phase 1: VPS + Docker Compose (Months 1-6, 1-5 clients)
Why start here: it mirrors your dev setup exactly, it's the cheapest option, you control everything, and no vendor lock-in decisions are needed yet.
Recommended setup:
- Hetzner dedicated (EU region for data residency) — AX102 or similar (8 CPU, 64GB RAM, 2x NVMe, ~€75/mo)
- All services in Docker Compose (Postgres, Redis, Qdrant, MinIO, API, Data Plane, Web)
- Caddy as reverse proxy (automatic HTTPS via Let's Encrypt, zero-config TLS)
- Automated daily backups: `pg_dump` + Qdrant snapshots → Cloudflare R2 (cheap S3-compatible storage)
- GitHub Actions: build Docker images → SSH deploy to VPS
- UFW firewall: only 80/443 open, services communicate on internal Docker network
Estimated cost: ~€100-150/mo total (VPS + R2 backup storage + domain)
What makes this production-ready despite being a single VPS:
- Docker Compose restart policies handle process crashes
- Caddy handles TLS termination
- Health checks trigger alerts via Better Stack
- Automated backups protect against data loss
- RLS provides client data isolation
- The bottleneck isn't infrastructure — it's workload. A beefy VPS handles 5+ clients easily
Phase 2: Managed Services (Months 6-12, 5-15 clients)
Migration trigger: When you need HA (uptime SLA), or a client demands it, or you're hitting resource limits.
What changes:
- Postgres → Neon or RDS (managed, automated backups, read replicas)
- Redis → Upstash (serverless, auto-scaling)
- Qdrant → Qdrant Cloud (managed, no maintenance)
- MinIO → Cloudflare R2 or S3 (no egress fees on R2)
- App containers → Fly.io or Railway (auto-deploy from Docker images)
- Estimated cost: ~£300-600/mo
Phase 3: AWS/GCP (12+ months, 15+ clients or enterprise demands)
Migration trigger: Enterprise client requiring AWS/GCP compliance, multi-region needs, or scaling past what managed PaaS handles.
What changes:
- ECS Fargate or GKE for container orchestration
- RDS/Cloud SQL for Postgres
- ElastiCache/Memorystore for Redis
- S3/GCS for storage
- CloudFront/Cloud CDN for client app delivery
- Estimated cost: £500-1,500/mo
Key principle: Dockerize everything from day 1. If your services run in Docker containers, migrating between VPS → PaaS → cloud is mostly changing deployment config, not rewriting code. This is why the Dockerfiles are Phase 0 priority.