Q.01
Why is AI Implementation a hub instead of a single capability?
Because the four AI tracks behind it solve different problems and use different toolchains. Agents take actions, RAG grounds answers in content, Workflow Automation runs reliable internal pipelines, and Integrations wire AI into existing tools. Most briefs combine two of those — the hub work figures out which combination, then the build runs inside the right child capability. One page, one rate, one pod, four possible builds.
Q.02
What's the difference between this and a freelance prompt engineer?
Prompt engineering is one piece of one of the four tracks. AI Implementation engagements are end-to-end systems work — feasibility, architecture, model and routing strategy, evals, guardrails, observability, integration, runbook. Prompt design is part of the work; it isn't the work.
Q.03
Are these senior engineers, or junior bench dressed up as senior?
Senior. Every AI Implementation engagement is staffed with engineers who have shipped real AI systems to production inside agencies and product companies — not a junior pool with a senior badge in the project channel. If continuity is ever at risk on a long-arc engagement, we tell you before the next sprint review.
Q.04
Whose API keys and model-provider accounts do you use?
By default the client's. We work inside their OpenAI, Anthropic, or other model-provider accounts under named seats. Where you (the agency) hold the account on behalf of the client, we operate inside yours. Either way, the account ownership and cost-attribution boundary is documented in the SOW so there's no ambiguity at month-end.
Q.05
Can you ship a chatbot demo by next Friday?
Sometimes — but it's almost never the right answer. If the brief deserves a real production system, we say so before the calendar invite goes out. Most demo-shaped engagements end up needing rebuild work for evals, guardrails, and integration; we'd rather route the brief into a Proposal Rescue Desk feasibility window first and ship the right thing once.
Q.06
Do you handle compliance and data-residency questions?
Yes — they're part of the architecture readout in week one. Model selection, data routing, retention, PII handling, and (where applicable) HIPAA / SOC 2 / EU data-residency posture are written into the readout. We work inside the compliance envelope your client already operates in; we don't invent a parallel one.
Q.07
How does the pricing work for a multi-track or multi-capability pod?
The starting weekly rate for a single-capability AI pod is $3,200 per week. Multi-track pods (e.g. RAG + Integrations) and multi-capability pods (AI + UI/UX, AI + Backend) are quoted at the highest applicable rate. The final SOW is scoped against the brief; the rate is a floor, not a ceiling.
Q.08
What's the right way to support an AI system after launch?
Most AI systems graduate cleanly into a Support & Maintenance retainer post-launch — model drift monitoring, prompt regression on new model versions, cost optimisation, eval upkeep, and v1.x feature work inside a monthly envelope. Either the same pod or a smaller maintenance crew carries it on the same MSA.
Q.09
Do we own the work the pod produces?
Yes. IP ownership and assignment on delivered code, prompts, evals, and supporting artefacts is written into the MSA — the work belongs to your agency (and onwards to your client per your own client contract) on payment of the relevant invoice. The Salt Technologies templates are counsel-reviewed and shared before signing.