Go Dual-Provider LLM Routing (OpenAI + Claude): Timeout Tiers, Cost Caps, and Fallback Control

If your Go service relies on one LLM provider, two failures hurt the most: timeout spikes and billing spikes. A real production setup is not just “add another provider”; it is a single control plane for routing, timeout tiers, cost caps, and fallback. This guide gives you a practical OpenAI + Claude dual-provider pattern with one priority: keep uptime first, then optimize quality. ...
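
A minimal sketch of that control plane, assuming illustrative `callFn` implementations that would wrap each provider's SDK: every provider gets its own timeout tier, and fallback only fires on timeouts or retryable errors.

```go
package llmroute

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// callFn abstracts a provider call; real implementations would wrap the
// OpenAI and Anthropic SDKs (names here are illustrative).
type callFn func(ctx context.Context, prompt string) (string, error)

type provider struct {
	name    string
	timeout time.Duration // per-provider timeout tier
	call    callFn
}

// routeWithFallback tries providers in priority order. Each attempt gets
// its own deadline, so a slow primary cannot eat the whole request budget.
func routeWithFallback(ctx context.Context, prompt string, providers []provider) (string, error) {
	var lastErr error
	for _, p := range providers {
		attemptCtx, cancel := context.WithTimeout(ctx, p.timeout)
		out, err := p.call(attemptCtx, prompt)
		cancel()
		if err == nil {
			return out, nil
		}
		lastErr = fmt.Errorf("%s: %w", p.name, err)
		// Only fall through on retryable failures; a hard 4xx should surface as-is.
		if !errors.Is(err, context.DeadlineExceeded) && !isRetryable(err) {
			return "", lastErr
		}
	}
	return "", fmt.Errorf("all providers failed: %w", lastErr)
}

func isRetryable(err error) bool {
	// Placeholder: inspect provider-specific status codes (429/5xx) here.
	return true
}
```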

April 8, 2026 · 2 min · mengboy

OpenAI Responses + Go Stream Recovery: Delta Persistence, Resume Tokens, and Duplicate Chunk Dedup

In production, the painful part is not “streaming is slow.” It’s “streaming breaks and then duplicates output after reconnect.” This guide gives you a practical recovery loop: delta persistence + resume token + idempotent dedup, so reconnection does not replay garbage. ...
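
A minimal sketch of the dedup half of that loop, assuming the stream (or your own delta log) tags each chunk with a monotonic sequence number; the last applied sequence doubles as the resume token.

```go
package streamstate

import (
	"strings"
	"sync"
)

// Delta is one streamed chunk. Seq is a monotonically increasing index
// assigned per response by the server or by our own persistence layer.
type Delta struct {
	Seq  int
	Text string
}

// StreamState persists enough to resume: the last applied sequence number
// is the resume token, and the builder holds the accumulated output.
type StreamState struct {
	mu      sync.Mutex
	lastSeq int
	builder strings.Builder
}

// Apply is idempotent: a chunk replayed after reconnect (Seq <= lastSeq)
// is dropped instead of being appended twice.
func (s *StreamState) Apply(d Delta) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if d.Seq <= s.lastSeq {
		return false // duplicate from replay; dedup instead of re-appending
	}
	s.lastSeq = d.Seq
	s.builder.WriteString(d.Text)
	return true
}

// ResumeToken is what we send on reconnect so the delta log can replay
// only chunks after this point.
func (s *StreamState) ResumeToken() int {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.lastSeq
}

// Text returns everything applied so far (what we would persist as deltas).
func (s *StreamState) Text() string {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.builder.String()
}
```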

March 23, 2026 · 4 min · mengboy

Go + OpenAI Responses Agent Memory Layering: Short-Term Context, Long-Term Index, and Cost Caps

In production Go agents, the first thing that breaks is usually not model quality. It is memory management: context grows, bills spike, and answers drift. Use a 3-layer memory design, sketched below:

L1: short-term conversational window (seconds)
L2: rolling summary (minutes)
L3: long-term retrieval memory (days) ...
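
A compact sketch of the three layers, with illustrative names and a stand-in `summarize` callback in place of a real summarization call:

```go
package memory

// Memory is a hypothetical 3-layer container; names are illustrative.
type Memory struct {
	// L1: last N raw turns, evicted first when the window fills.
	ShortTerm []string
	// L2: rolling summary that absorbs evicted L1 turns.
	Summary string
	// L3: handle to a long-term retrieval index (vector store, etc.).
	LongTermQuery func(q string, k int) []string
}

const maxShortTerm = 8

// AddTurn pushes a new turn into L1; overflow is folded into the L2
// summary rather than silently dropped, so the gist survives eviction.
func (m *Memory) AddTurn(turn string, summarize func(old, evicted string) string) {
	m.ShortTerm = append(m.ShortTerm, turn)
	for len(m.ShortTerm) > maxShortTerm {
		evicted := m.ShortTerm[0]
		m.ShortTerm = m.ShortTerm[1:]
		m.Summary = summarize(m.Summary, evicted)
	}
}
```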

March 18, 2026 · 3 min · mengboy

Handling OpenAI 429/5xx Storms in Go: Token Bucket, Exponential Backoff, and Circuit Breakers

Most Go teams are not killed by a single API error. They are killed by a retry storm they created themselves. ...
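
A token bucket is the first line of defense. This runnable sketch uses the real `golang.org/x/time/rate` limiter to cap outbound traffic no matter how many goroutines are retrying:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: at most 5 requests/second with a burst of 10.
	// A retry storm cannot push outbound traffic past this ceiling.
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	for i := 0; i < 20; i++ {
		// Wait blocks until a token is available or the context expires,
		// so backpressure is applied before the request leaves the process.
		if err := limiter.Wait(ctx); err != nil {
			fmt.Println("budget exhausted, dropping request:", err)
			return
		}
		fmt.Println("sending request", i)
	}
}
```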

March 18, 2026 · 3 min · mengboy

OpenAI Batch API with Go: Offline Batching, Failure Replay, and Cost Boundaries

Short answer: if your workload is delay-tolerant, batchable, and replay-safe, move it from online calls to the Batch API. The savings are real, but only if you design splitting, failure routing, and replay first. Many teams treat the Batch API as a cheaper sync endpoint. That usually creates a replay mess instead of stable savings. A conservative rollout starts with cost boundaries and SLOs, then implements offline batching and controlled replay. ...
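
A sketch of replay-safe splitting, with hypothetical `Item` and `batchID` names: because batch IDs hash stable member IDs, re-running a failed split yields identical IDs that downstream dedup can key on.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Item is one unit of delay-tolerant work.
type Item struct {
	ID      string // stable business ID, the key to replay safety
	Payload string
}

// splitBatches chunks work into fixed-size batches. Deterministic IDs mean
// a replayed split produces the same batches, not a second copy of the work.
func splitBatches(items []Item, size int) map[string][]Item {
	batches := make(map[string][]Item)
	for start := 0; start < len(items); start += size {
		end := start + size
		if end > len(items) {
			end = len(items)
		}
		chunk := items[start:end]
		batches[batchID(chunk)] = chunk
	}
	return batches
}

// batchID hashes member IDs so the same chunk always maps to the same batch.
func batchID(chunk []Item) string {
	h := sha256.New()
	for _, it := range chunk {
		h.Write([]byte(it.ID))
	}
	return hex.EncodeToString(h.Sum(nil))[:16]
}

func main() {
	items := []Item{{ID: "a"}, {ID: "b"}, {ID: "c"}}
	for id, chunk := range splitBatches(items, 2) {
		fmt.Println("batch", id, "size", len(chunk))
	}
}
```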

March 13, 2026 · 3 min · mengboy

OpenAI Responses Structured Outputs with Go: Schema Evolution, Bad-Case Fallbacks, and Gradual Rollback

The hardest part of Structured Outputs is not getting JSON once. It is surviving schema changes without turning production into a small fire with excellent logs and terrible business results. Once a Go service starts evolving prompts and response contracts, the usual failure modes show up fast: a new required field breaks older consumers, an enum expands and strict validation kills valid requests, or one bad sample drags the whole chain into retries and rollback panic. ...
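
A sketch of lenient decoding under schema evolution, with hypothetical types: required fields stay hard errors, while an expanded enum degrades to a default instead of killing a valid request.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AnswerV2 adds an optional field; older producers simply omit it.
type AnswerV2 struct {
	Text     string   `json:"text"`
	Category string   `json:"category"`
	Score    *float64 `json:"score,omitempty"` // new in v2; pointer marks it optional
}

var knownCategories = map[string]bool{"billing": true, "support": true}

// decodeAnswer validates leniently: missing required fields are hard
// errors, but unknown enum values fall back to "other" so strict
// validation does not reject otherwise-valid responses mid-rollout.
func decodeAnswer(raw []byte) (AnswerV2, error) {
	var a AnswerV2
	if err := json.Unmarshal(raw, &a); err != nil {
		return a, fmt.Errorf("malformed payload: %w", err)
	}
	if a.Text == "" {
		return a, fmt.Errorf("missing required field: text")
	}
	if !knownCategories[a.Category] {
		a.Category = "other" // tolerate enum expansion
	}
	return a, nil
}

func main() {
	out, err := decodeAnswer([]byte(`{"text":"hi","category":"refunds"}`))
	fmt.Println(out, err) // category degrades to "other", request survives
}
```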

March 11, 2026 · 6 min · mengboy

OpenAI Realtime + Go in Production: WebRTC Token Rotation, Interruption Recovery, and End-to-End Latency Budgets

If you plan to put OpenAI Realtime into production, do not let a passing demo fool you. What usually breaks the system is not the model itself. It is non-rotating short-lived auth, missing interruption state, and zero end-to-end latency budgeting. Miss those three and your voice UX starts sounding like an angry walkie-talkie. ...
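
A sketch of the rotation half, assuming a hypothetical `mint` callback that creates a short-lived credential server-side; the loop refreshes at 80% of remaining lifetime so a client never inherits a dying token.

```go
package realtimeauth

import (
	"context"
	"sync"
	"time"
)

// ephemeralToken models a short-lived Realtime credential; mint is a
// stand-in for the server-side call that creates one (illustrative only).
type ephemeralToken struct {
	value   string
	expires time.Time
}

type tokenSource struct {
	mu   sync.RWMutex
	cur  ephemeralToken
	mint func(ctx context.Context) (ephemeralToken, error)
}

// run refreshes the token well before expiry and retries quickly on failure.
func (t *tokenSource) run(ctx context.Context) {
	for {
		tok, err := t.mint(ctx)
		if err == nil {
			t.mu.Lock()
			t.cur = tok
			t.mu.Unlock()
		}
		// Rotate at 80% of remaining lifetime; retry soon after a failure.
		wait := 5 * time.Second
		if err == nil {
			wait = time.Until(tok.expires) * 8 / 10
		}
		select {
		case <-ctx.Done():
			return
		case <-time.After(wait):
		}
	}
}

// Token returns the current credential for handing to a client.
func (t *tokenSource) Token() string {
	t.mu.RLock()
	defer t.mu.RUnlock()
	return t.cur.value
}
```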

March 9, 2026 · 6 min · mengboy

Go + OpenAI Responses: Connection Pooling and Timeout Budgets from HTTP/2 Reuse to Error-Budget Control

When Go services call the OpenAI Responses API in production, the real failures are rarely about model quality. Most incidents come from transport instability: weak connection pooling, conflicting timeout layers, and retry storms. This guide gives you a practical baseline: HTTP/2 reuse, layered timeout budgets, bounded retries, and error-budget driven operations. ...
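
A baseline `http.Client` showing the layered-budget idea; the values are illustrative starting points, not recommendations.

```go
package llmhttp

import (
	"net"
	"net/http"
	"time"
)

// newLLMClient builds a client tuned for long-lived HTTP/2 reuse against a
// single API host, with a total-request ceiling layered over finer-grained
// dial, handshake, and header timeouts.
func newLLMClient() *http.Client {
	transport := &http.Transport{
		ForceAttemptHTTP2:   true, // multiplex requests over one TLS connection
		MaxIdleConns:        100,
		MaxIdleConnsPerHost: 100, // the default of 2 throttles a single-host workload
		IdleConnTimeout:     90 * time.Second,
		DialContext: (&net.Dialer{
			Timeout:   5 * time.Second, // connection-establishment tier
			KeepAlive: 30 * time.Second,
		}).DialContext,
		TLSHandshakeTimeout:   5 * time.Second,
		ResponseHeaderTimeout: 30 * time.Second, // time-to-first-byte tier
	}
	return &http.Client{
		Transport: transport,
		// Outermost budget: covers dial + TLS + headers + body. For
		// streaming responses, prefer per-request context deadlines instead.
		Timeout: 120 * time.Second,
	}
}
```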

March 6, 2026 · 3 min · mengboy

OpenAI Responses + Go: Taming Retry Storms with Idempotency Keys, Jittered Backoff, and Circuit Breakers

The most expensive outage is not a single failure; it is a failure amplified by retries. In an OpenAI Responses + Go tool-calling stack, a client missing idempotency keys, jittered backoff, and breaker thresholds can turn 10 failing requests into 1000 downstream calls in minutes. ...
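
A sketch of the first two pieces, assuming your gateway or provider honors an `Idempotency-Key` header (treat the header name as an assumption): the key stays constant across attempts so retries are the same logical operation, while full-jitter backoff keeps clients from retrying in lockstep.

```go
package retry

import (
	"context"
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

// doWithRetry rebuilds the request each attempt (bodies cannot be reused),
// pins one Idempotency-Key across all attempts, and sleeps with full
// jitter between them.
func doWithRetry(ctx context.Context, client *http.Client, build func() (*http.Request, error), key string, maxAttempts int) (*http.Response, error) {
	base := 200 * time.Millisecond
	for attempt := 0; attempt < maxAttempts; attempt++ {
		req, err := build()
		if err != nil {
			return nil, err
		}
		req = req.WithContext(ctx)
		req.Header.Set("Idempotency-Key", key) // same key on every attempt

		resp, err := client.Do(req)
		if err == nil && resp.StatusCode < 500 && resp.StatusCode != http.StatusTooManyRequests {
			return resp, nil
		}
		if resp != nil {
			resp.Body.Close()
		}

		// Full jitter: sleep a random duration in [0, base*2^attempt).
		sleep := time.Duration(rand.Int63n(int64(base) << attempt))
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(sleep):
		}
	}
	return nil, fmt.Errorf("gave up after %d attempts", maxAttempts)
}
```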

March 4, 2026 · 3 min · mengboy

Taming Context Explosion in OpenAI Assistants/Responses with Go: Truncation, Summary Backfill, and Cost Caps

Long-running agent sessions usually fail the same way: context keeps growing, latency spikes, costs blow up, and answer quality gets worse. That is rarely a model-quality issue. It is almost always missing context governance. ...
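
A minimal sketch of that governance loop, with a crude length/4 token estimate standing in for a real tokenizer and a hypothetical `summarize` callback:

```go
package ctxgov

// Turn is one message in the conversation window.
type Turn struct {
	Role string
	Text string
}

// approxTokens is a rough estimate; a real implementation would use a tokenizer.
func approxTokens(s string) int { return len(s) / 4 }

// govern enforces a hard token budget: the oldest turns are evicted first
// and folded into a summary that is backfilled at the head of context, so
// the model keeps the gist without paying for the full history.
func govern(turns []Turn, summary string, budget int, summarize func(old string, t Turn) string) ([]Turn, string) {
	total := approxTokens(summary)
	for _, t := range turns {
		total += approxTokens(t.Text)
	}
	for total > budget && len(turns) > 1 {
		evicted := turns[0]
		turns = turns[1:]
		total -= approxTokens(evicted.Text)
		oldSummary := summary
		summary = summarize(summary, evicted)
		total += approxTokens(summary) - approxTokens(oldSummary)
	}
	return turns, summary
}
```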

March 2, 2026 · 2 min · mengboy