Step 3 of 10

Phase 2 — Resolve your first model

Time: ~15 minutes
Prerequisites: Phase 1 complete (packages installed, project created, Gateway Key in .env, Restormel Doctor passes)
You'll need: Terminal access, a running dev server (or curl/httpie), your Gateway Key and Project ID from the Dashboard

This phase wires a single Restormel Resolve call into your backend. By the end, your app can ask Restormel "which provider and model should I use for this request?" and get a concrete answer. You verify the response shape, then plug it into the feature-flag branch from Phase 0.

Step 2.1 — Understand the resolve flow

Before writing code, understand what happens when your backend calls resolve:

  1. Your backend sends a POST to the resolve endpoint with your project ID and environment.
  2. Restormel evaluates the project's routes (fallback chain) and policies (allowlists, budgets, etc.).
  3. Restormel returns a JSON object telling you which provider, model, and key source to use.
  4. Your backend calls the AI provider directly using that information.

Restormel does not proxy the AI request. It tells you where to send it; you send it yourself. This keeps latency low and means your provider API keys never transit through Restormel (unless you've stored provider credentials in the dashboard and want Restormel to supply them).

Step 2.2 — Test resolve with curl

Before writing any application code, confirm the resolve endpoint works with a raw HTTP call. This isolates Restormel from your app so you can verify the plumbing.

```bash
curl -X POST \
  "https://restormel.dev/keys/dashboard/api/projects/${RESTORMEL_PROJECT_ID}/resolve" \
  -H "Authorization: Bearer ${RESTORMEL_GATEWAY_KEY}" \
  -H "Content-Type: application/json" \
  -d '{ "environmentId": "production" }'
```

Tip — If you use the Cloud API through the Zuplo gateway, the URL is your gateway URL and you authenticate with your consumer key (zpka_…). For this walkthrough we use the dashboard API directly with the Gateway Key. See Cloud API for the gateway path.

You'll see

A JSON response wrapped under data:

```json
{
  "data": {
    "routeId": "route_123",
    "providerType": "openai",
    "modelId": "gpt-4o",
    "explanation": "route=route_123 step=0 provider=openai model=gpt-4o"
  }
}
```

Reading the response:

| Field | Meaning |
| --- | --- |
| data.routeId | The matched route ID (created in the dashboard in Phase 3) |
| data.providerType | The AI provider to call (e.g. openai, anthropic, google) |
| data.modelId | The model to use; may be null if none configured |
| data.explanation | A human-readable trace of the resolution path (useful for debugging) |

If you have not created a route for this environmentId yet, resolve returns 404 with { "error": "no_route" }. That is expected — you create your first route in Phase 3.

How to test

The curl command returns HTTP 200 and a JSON body with data.providerType. If you get:

  • 401 Unauthorized — your Gateway Key is wrong or missing. Check RESTORMEL_GATEWAY_KEY.
  • 404 Not Found — your project ID is wrong. Check RESTORMEL_PROJECT_ID against the dashboard.
  • 404 no_route — your project has no active route for the environmentId you sent. Create a route in Phase 3.
  • no_key_available — your route's steps point at providers that are not currently usable. Create a route with at least one enabled step (Phase 3). If you only want Restormel to choose a route while you supply provider credentials in your own app, use local resolve (Step 2.6) instead of the dashboard resolve endpoint.

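If you are scripting these checks, the status/body combinations above can be folded into a small diagnostic helper. This is an illustrative sketch — the diagnoseResolve name and the result kinds are ours, not part of the Restormel API:

```ts
// Illustrative only: map a resolve HTTP status + parsed body to the
// troubleshooting cases listed above.
type ResolveDiagnosis =
  | { kind: 'ok'; providerType: string | null }
  | { kind: 'bad_gateway_key' }
  | { kind: 'bad_project_id' }
  | { kind: 'no_route' }
  | { kind: 'no_key_available' }
  | { kind: 'unknown'; status: number };

function diagnoseResolve(status: number, body: unknown): ResolveDiagnosis {
  const error =
    typeof body === 'object' && body !== null ? (body as { error?: string }).error : undefined;
  if (status === 200) {
    const data = (body as { data?: { providerType?: string | null } }).data;
    return { kind: 'ok', providerType: data?.providerType ?? null };
  }
  if (status === 401) return { kind: 'bad_gateway_key' };
  if (status === 404 && error === 'no_route') return { kind: 'no_route' };
  if (status === 404) return { kind: 'bad_project_id' };
  if (error === 'no_key_available') return { kind: 'no_key_available' };
  return { kind: 'unknown', status };
}
```

Calling this after every curl-equivalent request gives you one switchable value to log or assert on instead of re-reading raw bodies.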
Step 2.3 — Create a typed resolve client

Add a small server-side module that calls the resolve endpoint. This becomes the single place your app asks Restormel for routing decisions.

Next.js / generic Node (TypeScript):

```ts
// src/lib/server/restormel.ts
interface ResolveRequest { environmentId: string; routeId?: string; }
interface ResolveResponse {
  data: { routeId: string; providerType: string | null; modelId: string | null; explanation: string; };
}
const RESTORMEL_BASE_URL = process.env.RESTORMEL_BASE_URL ?? 'https://restormel.dev/keys/dashboard';
const RESTORMEL_GATEWAY_KEY = process.env.RESTORMEL_GATEWAY_KEY ?? '';
const RESTORMEL_PROJECT_ID = process.env.RESTORMEL_PROJECT_ID ?? '';

export async function restormelResolve(request: ResolveRequest): Promise<ResolveResponse> {
  const url = `${RESTORMEL_BASE_URL}/api/projects/${RESTORMEL_PROJECT_ID}/resolve`;
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Authorization': `Bearer ${RESTORMEL_GATEWAY_KEY}`, 'Content-Type': 'application/json' },
    body: JSON.stringify(request),
  });
  if (!res.ok) {
    const body = await res.text();
    throw new Error(`Restormel resolve failed (${res.status}): ${body.slice(0, 200)}`);
  }
  return res.json() as Promise<ResolveResponse>;
}
```

SvelteKit: Same implementation — SvelteKit server modules use the same Node/fetch APIs. Import with import { restormelResolve } from '$lib/server/restormel';

You'll see

A new file at src/lib/server/restormel.ts (or equivalent). No UI or behaviour changes yet.

How to test

```ts
// scripts/test-resolve.ts (run with tsx or ts-node)
import { restormelResolve } from '../src/lib/server/restormel';

const result = await restormelResolve({ environmentId: 'production' });
console.log('Resolved:', JSON.stringify(result, null, 2));
```

```bash
npx tsx scripts/test-resolve.ts
```

You should see the same JSON structure as the curl response from Step 2.2.
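If you'd rather not hit the network while iterating, the parsing path can also be exercised with a stubbed fetch. The sketch below inlines a simplified copy of restormelResolve that accepts a fetch implementation, so it is self-contained — in your project you would instead mock global fetch with your test runner, or add a similar injection point to the real client from Step 2.3:

```ts
// Sketch: verify the client's parsing without network access.
// This is a simplified, inlined copy of the Step 2.3 client for illustration.
type ResolveResponse = {
  data: { routeId: string; providerType: string | null; modelId: string | null; explanation: string };
};

async function restormelResolve(
  request: { environmentId: string },
  fetchImpl: typeof fetch = fetch, // injection point for tests
): Promise<ResolveResponse> {
  const res = await fetchImpl('https://restormel.dev/keys/dashboard/api/projects/proj_test/resolve', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(request),
  });
  if (!res.ok) throw new Error(`Restormel resolve failed (${res.status})`);
  return res.json() as Promise<ResolveResponse>;
}

// A fake fetch that returns the sample response from Step 2.2.
const fakeFetch: typeof fetch = async () =>
  new Response(
    JSON.stringify({
      data: {
        routeId: 'route_123',
        providerType: 'openai',
        modelId: 'gpt-4o',
        explanation: 'route=route_123 step=0 provider=openai model=gpt-4o',
      },
    }),
    { status: 200, headers: { 'Content-Type': 'application/json' } },
  );
```

This requires Node 18+ for the global fetch and Response types.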

Step 2.4 — Wire resolve into the feature flag

Connect the resolve client to the feature-flag branch you created in Phase 0 (Step 0.5). This is the moment your app can optionally route through Restormel — but still defaults to the old path.

```ts
// src/lib/server/resolve-provider.ts
import { USE_RESTORMEL_KEYS } from '../feature-flags';
import { restormelResolve } from './restormel';
// legacyResolve is your existing provider-selection logic from Phase 0 —
// adjust this import path to wherever it lives in your project.
import { legacyResolve } from './legacy-resolve';

export async function resolveProvider(preferredModel?: string) {
  if (USE_RESTORMEL_KEYS) {
    const result = await restormelResolve({
      environmentId: process.env.RESTORMEL_ENVIRONMENT_ID ?? 'production',
    });
    return {
      provider: result.data.providerType ?? process.env.DEFAULT_AI_PROVIDER ?? 'openai',
      model: result.data.modelId ?? preferredModel ?? null,
      source: 'restormel',
    };
  }
  return legacyResolve(preferredModel);
}
```
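The nullish-coalescing cascade in the Restormel branch is worth seeing in isolation. The helper below is purely illustrative (pickProviderAndModel is our name, not part of Restormel) and shows how provider and model fall back when the resolve response leaves fields null:

```ts
// Illustrative pure helper mirroring the fallback cascade above.
interface ResolvedData { providerType: string | null; modelId: string | null; }

function pickProviderAndModel(
  data: ResolvedData,
  opts: { defaultProvider?: string; preferredModel?: string },
) {
  return {
    // resolved provider wins; otherwise your configured default; otherwise 'openai'
    provider: data.providerType ?? opts.defaultProvider ?? 'openai',
    // resolved model wins; otherwise the caller's preferred model; otherwise null
    model: data.modelId ?? opts.preferredModel ?? null,
    source: 'restormel' as const,
  };
}
```

Factoring the cascade out like this makes it trivial to unit-test without any network calls.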

You'll see

Your existing routing code still runs (the flag is false). The new path exists but is not active.

How to test

Legacy path (flag off): pnpm dev — make a request that triggers an AI call; it should behave identically to before.

Restormel path (flag on): USE_RESTORMEL_KEYS=true pnpm dev — make the same request; it should now resolve via Restormel. Turn the flag back off after testing. Production cutover happens in Phase 6.

Pitfall — If the Restormel path returns no_key_available and your app crashes, add error handling: catch the resolve error and fall back to the legacy path. This is your safety net during the parallel-run period.

Step 2.5 — Add error handling and local fallback

The resolve call is a network request. It can fail (network error, Restormel downtime, misconfiguration). Your app should handle this gracefully.

```ts
// src/lib/server/resolve-provider.ts — updated body of resolveProvider
if (USE_RESTORMEL_KEYS) {
  try {
    const result = await restormelResolve({ environmentId: process.env.RESTORMEL_ENVIRONMENT_ID ?? 'production' });
    return {
      provider: result.data.providerType ?? process.env.DEFAULT_AI_PROVIDER ?? 'openai',
      model: result.data.modelId ?? preferredModel ?? null,
      source: 'restormel',
    };
  } catch (err) {
    console.error('[restormel] Resolve failed, falling back to legacy:', err);
  }
}
return legacyResolve(preferredModel);
```

You'll see

If Restormel is unreachable or returns an error, your app logs the error and continues with the legacy routing path. No user-facing impact.

How to test

Temporarily set an invalid Gateway Key and enable the flag: RESTORMEL_GATEWAY_KEY=rk_invalid USE_RESTORMEL_KEYS=true pnpm dev. Make a request that triggers an AI call. You should see a console error and the request should succeed via the legacy path. Restore your real Gateway Key after testing.
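You can also exercise the fall-back-on-error behaviour without touching the network or your env vars. The sketch below extracts the try/catch pattern into a small wrapper (the names are illustrative, not part of Restormel) and feeds it a resolver that always throws:

```ts
// Sketch: the catch-and-fall-back pattern from Step 2.5, parameterised so a
// deliberately failing resolver can be passed in for testing.
interface ProviderChoice { provider: string; model: string | null; source: string; }

async function resolveWithFallback(
  restormel: () => Promise<ProviderChoice>,
  legacy: () => ProviderChoice,
): Promise<ProviderChoice> {
  try {
    return await restormel();
  } catch (err) {
    // log and degrade gracefully, exactly as in Step 2.5
    console.error('[restormel] Resolve failed, falling back to legacy:', err);
    return legacy();
  }
}
```

Note the `await` inside the try block: without it, a rejected promise would escape the catch and the fallback would never run.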

Implementors: See "Prompts for this phase" below for a sequenced set of prompts (review → implement).

Step 2.6 — (Optional) Use the npm package resolve locally

If your backend is Node/TypeScript and you prefer to resolve locally (no HTTP call to the dashboard), you can use @restormel/keys in-process. This uses the same routing engine but runs locally. Useful if you want zero added latency, manage provider keys in your own env vars, and don't need dashboard-configured routes yet.

See the full code sample in the repo at docs/walkthrough/04-phase-2-resolve.md (Step 2.6). You get the same resolution shape but resolved locally.

Tip — You can start with local resolve (this step) and switch to HTTP resolve (Step 2.3) later when you're ready to use dashboard-managed routes and policies. The resolveProvider wrapper from Step 2.4 makes this a one-line change.
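One way to keep that swap a one-line change is to code resolveProvider against a narrow resolver interface. The sketch below is illustrative: the interface and names are ours, and localResolver's body is a placeholder — its real implementation comes from the @restormel/keys sample in the repo:

```ts
// Sketch: a narrow Resolver interface so HTTP resolve (Step 2.3) and local
// resolve (this step) are interchangeable behind resolveProvider.
interface Resolver {
  resolve(environmentId: string): Promise<{ providerType: string | null; modelId: string | null }>;
}

const httpResolver: Resolver = {
  async resolve(_environmentId) {
    // in a real project, delegate to restormelResolve from Step 2.3 here
    throw new Error('wire restormelResolve in here');
  },
};

const localResolver: Resolver = {
  async resolve(_environmentId) {
    // placeholder result — replace with the in-process @restormel/keys call
    // from the repo sample (docs/walkthrough/04-phase-2-resolve.md)
    return { providerType: 'openai', modelId: null };
  },
};

// The "one-line change": point resolveProvider at whichever resolver you want.
const activeResolver: Resolver = localResolver;
```

Switching from local to dashboard-managed routing later means changing only the activeResolver assignment.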

Prompts for this phase

These prompts are optional. Use them if you're implementing Phase 2 with a coding agent.

Checkpoint checklist: mark each step complete as you finish it.

Checkpoint

You now have:

  • A typed resolve client that calls the Restormel API (src/lib/server/restormel.ts).
  • (Optional) A local resolve instance using @restormel/keys.
  • A resolveProvider function that branches on the feature flag: Restormel path (with error fallback) or legacy path.
  • A test script that confirms the resolve endpoint works.
  • Error handling that falls back to legacy routing if Restormel is unreachable.

Your app still defaults to the legacy routing path. You've confirmed the Restormel path works when the flag is on.