AgencyRelay
Capability hub · AI Implementation

White-label AI implementation that ships, not just demos

Strategy and architecture across the four AI tracks — agents, retrieval, automation, integrations — followed by build inside your delivery process. Not a research project. Not a chat box.

  • Hub for four AI capability tracks
  • Feasibility, architecture, then build
  • White-label safe by default
What an AI engagement looks like

Hub work, then build inside the right track

  • Format: Strategy + architecture, then build pod
  • Cadence: Weekly delivery · async daily standup
  • Tracks: Agents · RAG · Automation · Integrations
  • Starting at $3,200 / week
Final SOW is scoped against your brief. AI Implementation sits at the top of the single-capability rate band because of the architecture work layered on top of the build itself.
When agencies bring us in

Four moments where the AI brief lands harder than the in-house bench

These are the conversations agency owners describe when an AI brief is in front of them and the in-house team is two people deep on prompt experiments but hasn't yet shipped a production system.

Signal 01 / 04

A client wants AI in their product and you said yes before scoping it

The pitch landed. The buyer is excited. The spec is one paragraph. We come in for a feasibility readout — what's actually buildable, on what timeline, against which model, with what guardrails — before the SOW gets locked at the wrong shape.

Signal 02 / 04

The team has shipped a chatbot demo and the buyer wants something real

The demo got applause. Production wants evals, fallback paths, rate limits, citation discipline, and a response-time budget. We re-architect from a demo into a system that survives a billing-day incident review.
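
As a deliberately simplified illustration, here is a minimal TypeScript sketch of the fallback-path and response-time discipline that separates a demo from a production system. `callModel` and the four-second budget are hypothetical stand-ins, not the delivered architecture.

```ts
// Minimal sketch: enforce a response-time budget around a model call and
// fall back to a deterministic, pre-approved answer instead of an
// unbounded wait. `callModel` stands in for the provider SDK in use.
async function callModel(prompt: string): Promise<string> {
  // ... provider SDK call goes here
  return `model answer for: ${prompt}`;
}

async function answerWithBudget(prompt: string, budgetMs = 4000): Promise<string> {
  const budget = new Promise<never>((_, reject) =>
    setTimeout(() => reject(new Error("response-time budget exceeded")), budgetMs),
  );
  try {
    // Whichever settles first wins: the model answer or the budget timer.
    return await Promise.race([callModel(prompt), budget]);
  } catch {
    // Fallback path: deterministic copy the client has signed off on,
    // plus (in a real system) a log line for the incident review.
    return "I couldn't produce a confident answer in time. A teammate has been notified.";
  }
}
```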

Signal 03 / 04

Internal automation work is hitting the limits of the AI tools the team picked

n8n + a model + good intentions only carries a workflow so far. We come in to redesign for reliability — orchestration, retries, observability, ownership boundaries — without throwing away the working pieces.
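
A minimal sketch of that reliability layer, assuming plain TypeScript rather than any specific orchestrator; `step` and the `onAttempt` hook are hypothetical names:

```ts
// Wrap a flaky workflow step in retries with exponential backoff, and
// emit one observability event per failed attempt.
async function withRetries<T>(
  step: () => Promise<T>,
  opts: { attempts: number; baseDelayMs: number } = { attempts: 3, baseDelayMs: 500 },
  onAttempt: (attempt: number, err: unknown) => void = () => {},
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= opts.attempts; attempt++) {
    try {
      return await step();
    } catch (err) {
      lastErr = err;
      onAttempt(attempt, err); // observability hook: log line / metric per failure
      if (attempt < opts.attempts) {
        // Backoff doubles each time: 500 ms, 1 s, 2 s, ...
        await new Promise((r) => setTimeout(r, opts.baseDelayMs * 2 ** (attempt - 1)));
      }
    }
  }
  // Retries exhausted: hand off to the orchestrator's failure path
  // (dead-letter queue, alert, named owner) rather than failing silently.
  throw lastErr;
}
```

The working n8n pieces usually stay; this layer sits around the steps that talk to models and third-party APIs.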

Signal 04 / 04

AI is becoming a real service line and needs a leader behind it

Demand is recurring. The team can't keep saying yes on the strength of a single specialist. We stand AI Implementation up as a new service line via Capability Expansion — capability lead, pricing template, delivery bench behind it.

What this hub is — and isn't

Strategy + architecture + build across the four AI tracks. Not an ML research lab, not a chat box vendor.

What it covers
  • AI feasibility readouts for live client briefs (often inside a Proposal Rescue Desk window)
  • Reference architecture across agents, retrieval, workflow automation, and integrations
  • Eval suites, guardrails, and the "I don't know" path before anything ships to a user (see the sketch after this list)
  • Model selection — OpenAI, Anthropic, open-weights, multi-model routing — with a defensible reason
  • Production-grade build inside the right child capability (Agents, RAG, Automation, Integrations)
  • Observability, cost dashboards, and a written runbook for your team
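
The "I don't know" path in particular is worth seeing in miniature. A minimal sketch, assuming a vector search that returns scored passages; `search` and the 0.55 threshold are hypothetical, and in a real engagement the threshold comes from the eval suite, not from a guess:

```ts
// Guardrail sketch: if retrieval confidence is too low, refuse to answer
// rather than let the model improvise over thin context.
type Hit = { text: string; score: number };

async function search(query: string): Promise<Hit[]> {
  // ... vector-store query goes here (pgvector, Pinecone, Weaviate, ...)
  return [];
}

async function groundedAnswer(query: string): Promise<string> {
  const hits = await search(query);
  const bestScore = hits[0]?.score ?? 0;
  if (bestScore < 0.55) {
    // The "I don't know" path: explicit, logged, and shown to the user.
    return "I don't have a grounded answer for that in the indexed material.";
  }
  // Otherwise the retrieved passages go to the model as cited context.
  return `Answer grounded in ${hits.length} retrieved passages.`;
}
```
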
What it doesn't do
  • Train foundation models or fine-tune at research-lab scale
  • Ship a chat surface when integration into existing systems is the real answer
  • One-off prompt-engineering deliverables without a feasibility or architecture readout behind them
  • Direct-to-client pitching — the pod sits inside your team, not in front of the client
  • Recruit, place, or staff-augment a developer onto your payroll
The four AI tracks under this hub

AI Implementation routes a brief into one of four child capabilities

Most AI briefs we see are a mix of two of these tracks. The hub engagement decides which track owns the build — the architecture readout in week one names the primary, the supporting, and the track that's being asked for but isn't actually the answer.

See the full capability bench
How an AI engagement runs

From brief to architecture readout in a week, then build

AI work runs feasibility-first. The hub engagement decides whether the brief is buildable and which child capability owns the build, before any production code ships.

  1. Step 01 · Days 1–4

    Brief & feasibility

    Working session with your delivery lead and the buyer-side stakeholder. We pressure-test the brief against current model capability, compliance posture, and your team's existing stack. NDA and SOW signed under Salt Technologies, Inc.

  2. Step 02 · Week 1

    Architecture readout

    A short, written architecture readout: chosen track (or tracks), model and routing strategy, evals, guardrails, observability, and the things we are explicitly not building. Your team and your client refer back to it for the rest of the engagement.

  3. Step 03 · Weeks 2–8+

    Build inside the right child capability

    Build runs inside Agents, RAG, Workflow Automation, or Integrations — under the same SOW, the same delivery lead, and the same pod. Weekly working review with named owners on both sides. Evals tracked from the first build (see the eval sketch after these steps).

  4. Step 04 · Week 8+

    Production handoff

    Production deployment, observability dashboards, cost monitoring, and a written runbook your team can operate without us. Post-launch support transitions cleanly into a Support & Maintenance retainer for model drift, dependency upgrades, and v1.x feature work.
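
To make "evals tracked from the first build" concrete, here is a minimal sketch of the shape such a suite takes; the golden-set cases and `generate` are hypothetical, and real suites are larger and run in CI:

```ts
// Golden-set eval sketch: prompts with checks that must keep passing as
// prompts, models, and retrieval change underneath them.
type EvalCase = { prompt: string; mustInclude: string };

const goldenSet: EvalCase[] = [
  { prompt: "What is the refund window?", mustInclude: "30 days" },
  { prompt: "Who handles billing questions?", mustInclude: "billing@" },
];

// The production answer path under test.
async function generate(prompt: string): Promise<string> {
  return ""; // stubbed in this sketch
}

async function runEvals(): Promise<void> {
  let passed = 0;
  for (const c of goldenSet) {
    const answer = await generate(c.prompt);
    if (answer.includes(c.mustInclude)) passed++;
    else console.warn(`EVAL FAIL: "${c.prompt}"`);
  }
  // The pass rate lands on the observability dashboard next to cost and latency.
  console.log(`Evals: ${passed}/${goldenSet.length} passed`);
}
```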

How to engage

Three engagement shapes — pick the one that matches your brief

AI Implementation is most often launched as a new service line via Capability Expansion. The other two shapes are the entry points when a live brief is already on the table.

Capability rate
$3,200 per week

Starting rate for a single-capability AI pod. Multi-track pods (e.g. RAG + Integrations) and pods that mix capabilities are quoted at the highest applicable rate. Final SOW is scoped against the brief.

Stack & deliverables

Senior AI engineers, your tools, ship-ready output

We work inside the AI tooling your team and your client already use — no parallel platform, no "we'll just rebuild it our way" surprise.

Models & orchestration
  • OpenAI · Anthropic
  • Open-weights (Llama, Mistral)
  • LangChain · LlamaIndex
  • Vercel AI SDK
Retrieval & data
  • pgvector · Pinecone · Weaviate
  • Unstructured · LlamaParse
  • Reranker pipelines
  • Eval datasets
Automation & integrations
  • n8n · Make · Zapier
  • HubSpot · Slack · CRMs
  • Webhooks + queues (sketched below)
  • First-party serverless
Outputs we ship
  • Architecture readout + decision log
  • Production AI system inside the right track
  • Eval suite + observability dashboard
  • Cost monitoring + alerting
  • Runbook for your team
  • Optional support retainer
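
One pattern from that stack worth spelling out is "webhooks + queues": acknowledge the webhook fast, do the slow AI work asynchronously. A minimal sketch; the in-memory array is a placeholder for whatever queue the stack already has (SQS, BullMQ, a database table):

```ts
// Decouple webhook receipt from model work so a slow or failing model
// call never turns into a dropped webhook.
type Job = { event: string; payload: unknown };
const queue: Job[] = [];

// Webhook handler: validate, enqueue, return 200 inside the sender's timeout.
function handleWebhook(event: string, payload: unknown): { status: number } {
  queue.push({ event, payload });
  return { status: 200 };
}

// Worker: drains the queue and runs the slow step, typically wrapped in
// the retry layer sketched earlier.
async function drainQueue(processJob: (job: Job) => Promise<void>): Promise<void> {
  while (queue.length > 0) {
    const job = queue.shift()!;
    await processJob(job);
  }
}
```
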
Operating principles

Partner-safe inside your top AI accounts

Every AI Implementation engagement runs on the same operating spine that protects long-arc retainers and Dedicated Partner Pods — contracted through Salt Technologies, Inc.

Principle

No client-facing footprint

We don't email your client, join their calls, or appear in the proposal — unless you explicitly white-list a named engineer in the SOW.

Principle

Inside your accounts

We work in your GitHub, your model-provider accounts, your hosting, and your shared channel under aliases that fit your team's naming.

Principle

Mutual no-poach

Mutual non-solicitation written into every MSA, with a defined window after the engagement ends. Same clause across every track.

Principle

Salt Technologies, Inc.

MSA, NDA, and engagement SOW are issued by Salt — the Delaware C-Corp behind AgencyRelay.

The same operating spine sits underneath every AgencyRelay capability. Read the no-poach and confidentiality page for the contractual instruments behind these defaults.

AI Implementation FAQ

What agency owners ask before sizing an AI engagement

Direct answers to the questions that come up on almost every AI Implementation scoping call.

See full FAQ
  • Q.01

    Why is AI Implementation a hub instead of a single capability?

    Because the four AI tracks behind it solve different problems and use different toolchains. Agents take actions, RAG grounds answers in content, Workflow Automation runs reliable internal pipelines, and Integrations wire AI into existing tools. Most briefs combine two of those — the hub work figures out which combination, then build runs inside the right child capability. One page, one rate, one pod, four possible builds.

  • Q.02

    What's the difference between this and a freelance prompt engineer?

    Prompt engineering is one piece of one of the four tracks. AI Implementation engagements are end-to-end systems work — feasibility, architecture, model and routing strategy, evals, guardrails, observability, integration, runbook. Prompt design is part of the work; it isn't the work.

  • Q.03

    Are these senior engineers, or junior bench dressed up as senior?

    Senior. Every AI Implementation engagement is staffed with engineers who have shipped real AI systems to production inside agencies and product companies — not a junior pool with a senior badge in the project channel. If continuity is ever at risk on a long-arc engagement, we tell you before the next sprint review.

  • Q.04

    Whose API keys and model-provider accounts do you use?

    By default the client's. We work inside their OpenAI, Anthropic, or other model-provider accounts under named seats. Where you (the agency) hold the account on behalf of the client, we operate inside yours. Either way, the account ownership and cost-attribution boundary is documented in the SOW so there's no ambiguity at month-end.

  • Q.05

    Can you ship a chatbot demo by next Friday?

    Sometimes — but it's almost never the right answer. If the brief deserves a real production system, we say so before the calendar invite goes out. Most demo-shaped engagements end up needing rebuild work for evals, guardrails, and integration; we'd rather route the brief into a Proposal Rescue Desk feasibility window first and ship the right thing once.

  • Q.06

    Do you handle compliance and data-residency questions?

    Yes — they're part of the architecture readout in week one. Model selection, data routing, retention, PII handling, and (where applicable) HIPAA / SOC 2 / EU data-residency posture are written into the readout. We work inside the compliance envelope your client already operates in; we don't invent a parallel one.

  • Q.07

    How does the pricing work for a multi-track or multi-capability pod?

    The starting weekly rate for a single-capability AI pod is $3,200 per week. Multi-track pods (e.g. RAG + Integrations) and multi-capability pods (AI + UI/UX, AI + Backend) are quoted at the highest applicable rate. Final SOW is scoped against the brief; the rate is the floor, not a ceiling.

  • Q.08

    What's the right way to support an AI system after launch?

    Most AI systems graduate cleanly into a Support & Maintenance retainer post-launch — model drift monitoring, prompt regression on new model versions, cost optimisation, eval upkeep, and v1.x feature work inside a monthly envelope. Either the same pod or a smaller maintenance crew carries it on the same MSA.

  • Q.09

    Do we own the work the pod produces?

    Yes. IP ownership and assignment on delivered code, prompts, evals, and supporting artefacts is written into the MSA — the work belongs to your agency (and onwards to your client per your own client contract) on payment of the relevant invoice. The Salt Technologies templates are counsel-reviewed and shared before signing.

Bring the brief, get the right shape

Tell us the AI work you're sizing — we'll respond with a clean read on track, pod shape, and starting rate.

Agents, RAG, workflow automation, integrations, or all four. Whichever mix it is, the conversation starts with the work — not with a deck.

Operating defaults: MSA / NDA / SOW issued by Salt Technologies, Inc. · US-aligned working hours · No-poach commitments · White-label safe by default