Q.01
What's the difference between AI Agents and RAG on this site?
RAG grounds an answer in your client's content — the model retrieves and cites, but doesn't act. Agents take actions — they call tools, update systems of record, kick off workflows. Most agentic systems we ship combine both (an agent with a RAG tool in its toolbox), but the *primary* capability on the SOW is the one that defines the system. We make this call inside the brief & feasibility step in week one.
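The "agent with a RAG tool in its toolbox" pattern can be sketched in a few lines. This is illustrative only, with stubbed data and hypothetical names (`retrieve`, `create_ticket`, `run_agent`), not any specific framework's API; a real agent would let the model choose the tool.

```python
# Minimal sketch of "RAG as a tool in an agent's toolbox".
# All names and data are illustrative, not a specific framework's API.

def retrieve(query: str) -> list[str]:
    """RAG tool: return passages from the client's content (stubbed here)."""
    corpus = {
        "refund policy": "Refunds are issued within 14 days of purchase.",
        "shipping": "Orders ship within 2 business days.",
    }
    return [text for topic, text in corpus.items() if topic in query.lower()]

def create_ticket(summary: str) -> str:
    """Action tool: writes to a system of record (stubbed here)."""
    return f"TICKET-001: {summary}"

TOOLS = {"retrieve": retrieve, "create_ticket": create_ticket}

def run_agent(task: str) -> str:
    # A real agent lets the model pick tools; the routing is hard-coded here
    # to show the shape: ground the answer where possible, otherwise act.
    passages = TOOLS["retrieve"](task)
    if passages:
        return passages[0]               # RAG path: retrieve and cite
    return TOOLS["create_ticket"](task)  # agentic path: take an action
```

The point of the sketch: retrieval is just one callable among several, which is why the primary capability on the SOW is a scoping decision rather than a technical fork.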
Q.02
How do you stop an agent from doing the wrong thing in production?
Three layers. First, scoped tool permissions — the agent literally can't call tools it isn't authorised for. Second, an eval suite that runs in CI on every prompt or tool change against a labelled task set the buyer signs off on. Third, human-in-the-loop checkpoints on irreversible actions (sending money, deleting data, contacting a customer). All three are part of default scope; we don't ship without them.
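Two of those layers (scoped permissions and human-in-the-loop checkpoints) can be sketched as a thin wrapper around tool calls. Everything here is hypothetical: the class names, the `IRREVERSIBLE` set, and the approver callback are illustrative, not a shipped interface; the eval-suite layer lives in CI rather than in runtime code.

```python
# Sketch of layer 1 (scoped tool permissions) and layer 3 (human-in-the-loop
# checkpoints on irreversible actions). All names are illustrative.

IRREVERSIBLE = {"send_payment", "delete_record", "email_customer"}

class ToolPermissionError(Exception):
    pass

class Agent:
    def __init__(self, allowed_tools: set[str], approver=None):
        self.allowed_tools = allowed_tools  # layer 1: the agent's tool scope
        self.approver = approver            # layer 3: human approval callback

    def call_tool(self, name: str, payload: dict) -> str:
        # Layer 1: the agent literally cannot call out-of-scope tools.
        if name not in self.allowed_tools:
            raise ToolPermissionError(f"{name} is not in this agent's scope")
        # Layer 3: irreversible actions are held unless a human approves.
        if name in IRREVERSIBLE:
            if self.approver is None or not self.approver(name, payload):
                return f"{name} held for human approval"
        return f"{name} executed"

# A support agent that can look up orders and email customers, but whose
# customer emails queue for approval because no approver is wired in.
support_agent = Agent(allowed_tools={"lookup_order", "email_customer"})
```

Note that `send_payment` fails at the permission layer before the checkpoint layer is ever consulted; the layers are defence in depth, not alternatives.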
Q.03
Which agent platform do you use?
We don't have a single default. The choice between the OpenAI Agents SDK, LangGraph, Anthropic tool use, the Vercel AI SDK, and custom orchestration turns on platform maturity, your team's existing stack, and the latency and cost envelope for the workload. The architecture readout in week one names the platform, with the trade-offs written down so your team and your client can refer back.
Q.04
Whose API keys and model-provider accounts do you use?
By default the client's. We work inside their OpenAI, Anthropic, or other model-provider accounts under named seats. Where you (the agency) hold the account on behalf of the client, we operate inside yours. Either way, the account ownership and cost-attribution boundary is documented in the SOW so there's no ambiguity at month-end.
Q.05
How do you handle runaway agent costs?
A cost ceiling per agent run, alerting on day-over-day spend deltas, and a circuit breaker that pauses the agent surface above a threshold the buyer sets. Cost dashboards land before the production rollout step, not after. We've seen enough cost incidents in our previous lives to design for them up front.
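The ceiling-plus-breaker mechanism can be sketched in a few lines. The thresholds, class name, and method are illustrative assumptions, not a shipped interface; in production the limits would come from buyer-set config and spend would be metered from provider billing data.

```python
# Sketch of a per-run cost ceiling plus a daily circuit breaker.
# All names and thresholds are illustrative, not a real interface.

class BudgetExceeded(Exception):
    pass

class CostGuard:
    def __init__(self, per_run_ceiling: float, daily_ceiling: float):
        self.per_run_ceiling = per_run_ceiling
        self.daily_ceiling = daily_ceiling
        self.daily_spend = 0.0
        self.paused = False  # circuit-breaker state

    def charge(self, run_cost: float) -> None:
        # Once tripped, the whole agent surface stays paused until reset.
        if self.paused:
            raise BudgetExceeded("agent surface paused by circuit breaker")
        # Per-run ceiling: reject any single run over budget.
        if run_cost > self.per_run_ceiling:
            raise BudgetExceeded(f"run cost {run_cost:.2f} over per-run ceiling")
        self.daily_spend += run_cost
        # Daily ceiling: trip the breaker rather than keep spending.
        if self.daily_spend > self.daily_ceiling:
            self.paused = True

guard = CostGuard(per_run_ceiling=0.50, daily_ceiling=100.0)
```

The design choice worth noting: the per-run ceiling raises immediately, while the daily ceiling trips a latch, so a spend spike halts the surface instead of silently degrading it.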
Q.06
What's the smallest engagement you'll take?
Production agent systems aren't a one-week capability. The most common starting shape is a 4–6 week pod inside an Invisible Delivery Team SOW, sized around the live brief and the integration surface. For shorter agent work (a tightly scoped feature, an eval-suite stand-up, a guardrail audit), we'll quote a tighter window against the same weekly rate.
Q.07
How does the pricing work for a multi-track or multi-capability AI pod?
The starting weekly rate for a single-capability AI pod is $3,200 per week. Multi-track AI pods (Agents + RAG, Agents + Integrations) and pods that mix capabilities (Agents + UI/UX, Agents + Backend) are quoted at the highest applicable rate. Final SOW is scoped against the brief; the rate is the floor, not a ceiling.
Q.08
What's the right way to support an agent system after launch?
Most agent systems graduate cleanly into a Support & Maintenance retainer post-launch — model drift monitoring, eval regression on new model versions, guardrail tuning, and v1.x feature work inside a monthly envelope. Either the same pod or a smaller maintenance crew carries it on the same MSA, no second sales cycle.
Q.09
Do we own the work the pod produces?
Yes. IP ownership and assignment of delivered code, prompts, evals, tool wrappers, and supporting artefacts are written into the MSA: the work belongs to your agency (and onwards to your client per your own client contract) on payment of the relevant invoice. The Salt Technologies templates are counsel-reviewed and shared before signing.