Q.01
What's the difference between AI Workflow Automation and AI Agents on this site?
Workflow Automation is a deterministic flow with AI nodes — the path is mostly fixed, AI helps with the judgment-call steps (classify, enrich, summarise, route), and the system runs on a schedule or a trigger. Agents are open-ended — the model decides which tool to call next, in what order, with state and retries across multiple steps. If the workflow's path is mostly known, it's automation; if the path has to be reasoned out at run-time, it's an agent. Most briefs pick one cleanly; we route the ones that don't in week one.
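The distinction can be sketched in a few lines of code. Everything below is an illustrative stub (the `classify`, tool, and decision functions are hypothetical stand-ins for real model calls), not our implementation:

```python
def classify(ticket: str) -> str:
    """Stand-in for an AI classification node with a fixed output contract."""
    return "billing" if "invoice" in ticket else "support"

def workflow(ticket: str) -> str:
    # Deterministic flow: the path is fixed; AI only makes the judgment call.
    category = classify(ticket)
    return f"routed to {category}"

def agent(goal: str, max_steps: int = 5) -> str:
    # Agent loop: the "model" (here a trivial stub) picks the next tool at
    # run-time, carrying state and a step budget across iterations.
    tools = {"lookup": lambda s: s["goal"].upper()}
    state = {"goal": goal, "history": []}
    for _ in range(max_steps):
        # A real agent would ask the model which tool to call next.
        action = "lookup" if not state["history"] else "finish"
        if action == "finish":
            return state["history"][-1]
        state["history"].append(tools[action](state))
    raise RuntimeError("agent exceeded its step budget")
```

The structural difference is the loop: the workflow's branches are written down ahead of time, while the agent re-decides its next step on every iteration.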
Q.02
How do you choose between n8n, Make, Zapier, and first-party serverless?
Per flow, not per project. Zapier is fastest when the flow is short, the connectors exist, and the buyer's team owns operating the result. Make handles richer logic and visual debugging without going custom. n8n (self-hosted) is the right call when the flow needs custom code nodes, sits inside the client's network, or needs costs decoupled from a per-task billing model. First-party serverless (Lambda, Cloud Functions, Vercel Functions, Inngest, Temporal) is the right call when reliability, durable execution, or vendor-lock concerns dominate. The architecture readout in week one names the platform per flow with the trade-offs written down.
Q.03
Where does AI actually go in a workflow?
The honest places: classification (route this ticket / record / lead), enrichment (extract structured data from unstructured text), summarisation (collapse a thread into a short note for a human reviewer), draft generation (a first-pass reply, summary, or follow-up that a human approves), and routing decisions (which branch of the flow to take). AI nodes always come with a structured output contract (JSON schema or function-calling), retries on bad responses, and a per-node cost ceiling. We don't add AI nodes for novelty.
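The contract-plus-retries pattern above can be sketched as a small wrapper. This is a minimal illustration under stated assumptions: `call_model` is a hypothetical callable returning raw text plus an estimated cost, and the "schema" check is reduced to required keys rather than full JSON Schema validation:

```python
import json

class BadResponse(Exception):
    """Raised when the model never satisfies the output contract."""

def ai_node(call_model, required_keys, max_retries=2, cost_ceiling_usd=0.05):
    """Wrap a model call with a structured output contract, retries on bad
    responses, and a per-node cost ceiling."""
    spent = 0.0
    for _attempt in range(max_retries + 1):
        raw, cost = call_model()              # (text, estimated cost in USD)
        spent += cost
        if spent > cost_ceiling_usd:
            raise RuntimeError(f"cost ceiling exceeded: ${spent:.3f}")
        try:
            data = json.loads(raw)
            if all(key in data for key in required_keys):
                return data                   # contract satisfied
        except json.JSONDecodeError:
            pass                              # bad response: fall through, retry
    raise BadResponse(f"no valid response in {max_retries + 1} attempts")
```

A flow node that can only ever return schema-valid JSON, or fail loudly within a known budget, is what keeps AI steps safe inside an otherwise deterministic pipeline.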
Q.04
How do you stop the Zapier-graveyard problem in production?
Three layers. First, every flow has one named owner inside the client's team — written into the runbook. Second, observability ships before production access — success / error dashboards, alerting on silent failure, and a per-flow heartbeat. Third, dead-letter queues and idempotency keys so a partial failure can be replayed safely. Most graveyards we inherit fail on layer two — they were built with no monitoring at all.
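Layer three (dead-letter queues plus idempotency keys) looks roughly like this in miniature. All names are illustrative, with plain Python containers standing in for a real queue and store:

```python
def run_step(step_fn, payload, idempotency_key, processed, dead_letters):
    """Run one flow step. Idempotency keys make replays a safe no-op;
    failures land in a dead-letter queue instead of vanishing silently."""
    if idempotency_key in processed:
        return processed[idempotency_key]     # already done; skip the re-run
    try:
        result = step_fn(payload)
        processed[idempotency_key] = result
        return result
    except Exception as exc:
        dead_letters.append((idempotency_key, payload, str(exc)))
        return None

def replay_dead_letters(step_fn, processed, dead_letters):
    """Drain the dead-letter queue; idempotency keeps double-runs harmless."""
    pending, dead_letters[:] = list(dead_letters), []
    for key, payload, _reason in pending:
        run_step(step_fn, payload, key, processed, dead_letters)
```

The point of the pairing: the dead-letter queue guarantees a failed run is preserved, and the idempotency key guarantees replaying it cannot double-charge, double-send, or double-write.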
Q.05
What about runaway costs when AI is in the loop?
Per-node cost ceilings, alerting on per-day spend deltas, and a circuit-breaker that pauses the flow above a threshold the buyer sets. Cost dashboards land before the production rollout step, not after. Most cost incidents we've seen come from a model node inside a high-volume flow with no per-run ceiling — that's the failure mode we design out first.
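The circuit-breaker piece is simple enough to sketch. This is a hypothetical minimal version (real deployments would persist the counter and wire the trip to alerting):

```python
class SpendCircuitBreaker:
    """Pause a flow once per-day spend crosses a buyer-set threshold;
    runs are refused until the breaker is explicitly reset."""

    def __init__(self, daily_ceiling_usd: float):
        self.ceiling = daily_ceiling_usd
        self.spent_today = 0.0
        self.paused = False

    def record(self, cost_usd: float) -> None:
        self.spent_today += cost_usd
        if self.spent_today >= self.ceiling:
            self.paused = True        # trip: stop the flow, alert the owner

    def allow_run(self) -> bool:
        return not self.paused

    def reset(self) -> None:
        # e.g. at midnight, or after a human reviews the spend spike
        self.spent_today, self.paused = 0.0, False
```

Checking `allow_run()` before every model node is what turns a surprise invoice into a paused flow and an alert.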
Q.06
What's the smallest engagement you'll take?
Production-grade automation work isn't a one-week capability. The most common starting shape is a 2–4 week scoped engagement — either inside an Overflow Pods SOW (a queue of bounded automation jobs) or as the first quarter of a Capability Expansion engagement (standing AI Workflow Automation up as a new service line). Single-flow audits and Zapier-graveyard rescues are quoted as a tighter window against the same weekly rate.
Q.07
How does the pricing work for a multi-track or multi-capability AI pod?
The starting weekly rate for a single-capability AI pod is $3,200 per week. Multi-track AI pods (Automation + RAG, Automation + Agents) and pods that mix capabilities (Automation + Backend, Automation + UI/UX) are quoted at the highest applicable rate. Final SOW is scoped against the brief; the rate is the floor, not a ceiling.
Q.08
What's the right way to support an automation system after launch?
Most automations graduate cleanly into a Support & Maintenance retainer post-launch — third-party API changes (the connectors that quietly deprecate), model drift on AI nodes, schema changes upstream, observability upkeep, and v1.x feature work inside a monthly envelope. Either the same pod or a smaller maintenance crew carries it on the same MSA, no second sales cycle.
Q.09
Do we own the work the pod produces?
Yes. IP ownership and assignment on delivered flows, code, prompts, runbooks, and supporting artefacts is written into the MSA — the work belongs to your agency (and onwards to your client per your own client contract) on payment of the relevant invoice. The Salt Technologies templates are counsel-reviewed and shared before signing.