HydraCore

Architecture

The composition model and how the primitives fit together.

HydraCore's architecture is a small set of orthogonal primitives that compose into agent instances. The same primitives power a tradesperson chatbot, an AI website builder, and a property-management vertical — the differences are recipe configuration, not platform forks.

Instance = Runtime × Brain × Channels[]

Every running agent is an Instance row in the control plane that points to:

  • A runtime (OpenClaw / Hermes / goose) executing on a tenant-owned VPS over WireGuard.
  • A brain: the agent's agent_config — system prompt, model policy, skill list, MCP server list, voice config, approval rules.
  • Zero or more channel endpoints: routing bindings for WhatsApp / Voice / SMS / Telegram / Web / Email / Image-and-file.

The runtime is interchangeable; the brain and channels travel with the recipe.
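The Instance = Runtime × Brain × Channels[] decomposition can be sketched as plain data. This is a minimal illustration, not the control plane's actual schema; the field and class names are assumptions drawn from the description above.

```python
from dataclasses import dataclass, field

@dataclass
class Brain:
    """The agent's agent_config (fields are illustrative, not exhaustive)."""
    system_prompt: str
    model_policy: str = "default"
    skills: list[str] = field(default_factory=list)
    mcp_servers: list[str] = field(default_factory=list)

@dataclass
class Instance:
    """Instance = Runtime x Brain x Channels[] from the control plane's view."""
    runtime: str                # "openclaw" | "hermes" | "goose"
    brain: Brain
    channels: list[str] = field(default_factory=list)  # zero or more bindings

# A tradesperson chatbot and an AI website builder differ only in data:
tradesperson = Instance(
    runtime="openclaw",
    brain=Brain(system_prompt="You are a plumbing assistant.",
                skills=["quote", "booking"]),
    channels=["whatsapp", "voice"],
)
```

Swapping the runtime string leaves the brain and channels untouched, which is the point: the runtime is interchangeable while the recipe carries everything else.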

Platform primitives

  • Runtimes: the long-running process on each VPS. The AgentRuntime Protocol abstracts health / send_message / get_usage / restart so the chat proxy is runtime-agnostic.
  • Skills (Haldir): MCP-mounted, attestation-verified tools the agent can call. Skill activation is per-recipe via preset YAML definitions.
  • Workflows: heartbeat-resumable durable executions, used for multi-step playbooks (workflow_records + scheduled_actions).
  • Channels: provider-managed inbound surfaces. Per-tenant or per-platform credentials, with identity resolution layered on top.
  • Entity Store: tenant-defined schemas, JSONB-backed, GIN-indexed, RLS-isolated. Airtable-for-agents.
  • Knowledge / RAG: pgvector search over documents stored in R2.
  • Forms: Typeform-style forms with magic-link auth, R2 uploads, and conditional logic.
  • Scheduling: cron / interval / one-shot / relative triggers, timezone-aware.
  • Auth: customer connections via an OAuth broker, capability grants, an AES-256-GCM JIT credential vault, and a worker principal.
  • Billing: multi-cost-type credit ledger (tokens, voice minutes, WhatsApp messages, SMS, plugin tool calls); Stripe Connect for tenant payouts.
  • Voice: pipeline mode (STT → LLM → TTS) and Realtime (Gemini Live), plus outbound batch and SmallWebRTC.
  • Composition: recipes, merge (7 strategies), auto-wire, manifest override, sidecar injection.
  • Security: a 10-layer model (RLS, customer isolation, capability grants, AES-256-GCM, proxy key, channel trust, identity resolution, runtime guardrails, webhook verification, infrastructure).
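The AgentRuntime Protocol mentioned under Runtimes can be pictured as a structural interface. A minimal sketch, assuming the four operations named above; the exact signatures and return shapes are illustrative, not the real protocol definition.

```python
from typing import Protocol

class AgentRuntime(Protocol):
    """The four operations the chat proxy relies on (signatures assumed)."""
    def health(self) -> bool: ...
    def send_message(self, text: str) -> str: ...
    def get_usage(self) -> dict: ...
    def restart(self) -> None: ...

class FakeRuntime:
    """In-memory stand-in; any runtime with these methods satisfies the protocol."""
    def health(self) -> bool:
        return True
    def send_message(self, text: str) -> str:
        return f"echo: {text}"
    def get_usage(self) -> dict:
        return {"tokens": 0}
    def restart(self) -> None:
        pass

def proxy_forward(rt: AgentRuntime, text: str) -> str:
    """The proxy only sees the protocol, never OpenClaw/Hermes/goose specifics."""
    if not rt.health():
        rt.restart()
    return rt.send_message(text)
```

Because the proxy codes against the protocol rather than a concrete runtime, swapping OpenClaw for Hermes or goose changes nothing on the proxy side.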

How a deploy flows

  1. Wizard / API submits CreateInstanceRequest (or DeployRequest).
  2. Route handler validates tier + subscription + manifest availability + experimental gate.
  3. provision_instance Celery task runs the 17-step orchestration: resolve manifest → compose cloud-init → run policy gates → encrypt bundle to R2 → create VPS → return.
  4. VPS boots cloud-init → bootstraps WireGuard → calls back for the bundle → decrypts → starts the runtime under systemd.
  5. Runtime registers with the chat proxy; the wizard's deploy stepper reaches running.
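The control-plane half of the flow (step 3) condenses to a linear orchestration. This sketch compresses the 17 steps into the five named above; every helper here is a hypothetical stand-in, not the real Celery task's code.

```python
# Hypothetical stand-ins for the real orchestration steps (most steps elided).
def resolve_manifest(req):
    return {"recipe": req["recipe"]}

def run_policy_gates(manifest):
    pass  # would raise on a policy violation, aborting the provision

def compose_cloud_init(manifest):
    return f"#cloud-config\n# recipe={manifest['recipe']}"

def encrypt_bundle_to_r2(manifest):
    return "r2://bundles/<encrypted>"  # pre-signed location the VPS calls back for

def create_vps(cloud_init):
    return {"id": "vps-123"}

def provision_instance(request: dict) -> dict:
    """Condensed sketch: resolve -> gates -> bundle -> VPS -> return."""
    manifest = resolve_manifest(request)
    run_policy_gates(manifest)
    bundle = encrypt_bundle_to_r2(manifest)
    vps = create_vps(compose_cloud_init(manifest))
    return {"vps_id": vps["id"], "bundle": bundle, "status": "provisioning"}
```

Note the task returns once the VPS is created; steps 4 and 5 happen on the VPS itself, which is why the bundle is encrypted to R2 and fetched by callback rather than pushed.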

Reading further

The deeper specs live at:

  • docs/HYDRACORE_EVOLUTION_PLAN_v5.md — master plan across all tracks.
  • docs/PLATFORM_VISION.md — v2.0 vision + tenant hierarchy.
  • docs/ARCHITECTURE.md — v3.0 composition-model deep dive.
  • docs/HALDIR_SPEC.md — skill + workflow attestation.
  • docs/CREDIT_SYSTEM_SPEC.md — multi-cost-type billing.
  • docs/RUNTIME_PLATFORM.md — runtime track plan + ranking.

These are repository-internal markdown today; this site will absorb them as the docs surface matures.