<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Responses API on Mengboy Tech Notes</title>
    <link>https://www.mfun.ink/tags/responses-api/</link>
    <description>Recent content in Responses API on Mengboy Tech Notes</description>
    <generator>Hugo -- 0.156.0</generator>
    <language>zh-cn</language>
    <lastBuildDate>Wed, 11 Mar 2026 01:08:00 +0000</lastBuildDate>
    <atom:link href="https://www.mfun.ink/tags/responses-api/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>OpenAI Responses Structured Outputs &#43; Go: Schema Evolution, Bad-Case Fallbacks, and Gradual Rollback</title>
      <link>https://www.mfun.ink/2026/03/11/openai-responses-structured-outputs-go-schema-evolution-fallback-rollback/</link>
      <pubDate>Wed, 11 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/11/openai-responses-structured-outputs-go-schema-evolution-fallback-rollback/</guid>
      <description>&lt;p&gt;The place where Structured Outputs most often blows up is not the model misbehaving, but &lt;strong&gt;treating your schema as holy writ that never changes&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Once production enters a schema-evolution phase, the most common incidents follow a pattern: a newly added field crashes older consumers, an expanded enum gets falsely rejected by strict validation, and one bad sample drags the whole chain down, until you end up rolling back in the middle of the night, writing your own horror story.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses Structured Outputs with Go: Schema Evolution, Bad-Case Fallbacks, and Gradual Rollback</title>
      <link>https://www.mfun.ink/english/post/openai-responses-structured-outputs-go-schema-evolution-fallback-rollback/</link>
      <pubDate>Wed, 11 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-responses-structured-outputs-go-schema-evolution-fallback-rollback/</guid>
      <description>&lt;p&gt;The hardest part of Structured Outputs is not getting JSON once. It is surviving schema changes without turning production into a small fire with excellent logs and terrible business results.&lt;/p&gt;
&lt;p&gt;Once a Go service starts evolving prompts and response contracts, the usual failure modes show up fast: a new required field breaks older consumers, an enum expands and strict validation kills valid requests, or one bad sample drags the whole chain into retries and rollback panic.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go &#43; OpenAI Responses: Connection Pooling and Timeout Budgets from HTTP/2 Reuse to Error-Budget Control</title>
      <link>https://www.mfun.ink/english/post/go-openai-responses-connection-pool-timeout-budget/</link>
      <pubDate>Fri, 06 Mar 2026 01:13:12 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/go-openai-responses-connection-pool-timeout-budget/</guid>
      <description>&lt;p&gt;When Go services call the OpenAI Responses API in production, the real failures are rarely about model quality. Most incidents come from transport instability: weak connection pooling, conflicting timeout layers, and retry storms.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical baseline: HTTP/2 reuse, layered timeout budgets, bounded retries, and error-budget driven operations.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Connection Pooling and Timeout Budgets for Go Calling OpenAI Responses: From HTTP/2 Reuse to a Closed Error-Budget Loop</title>
      <link>https://www.mfun.ink/2026/03/06/go-openai-responses-connection-pool-timeout-budget/</link>
      <pubDate>Fri, 06 Mar 2026 01:13:12 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/06/go-openai-responses-connection-pool-timeout-budget/</guid>
      <description>&lt;p&gt;When a production Go service calls OpenAI Responses, the biggest pitfall is not model accuracy but transport jitter: an unstable connection pool, misconfigured timeout budgets, and stacked retries that knock the service over.&lt;/p&gt;
&lt;p&gt;This post gives a deployable baseline: HTTP/2 connection reuse, layered timeouts, error budgets, and backoff with retries, with the goal of keeping 5xx and timeout rates in a controllable range while making bottlenecks quick to locate.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses &#43; Go: Taming Tool-Call Retry Storms with Idempotency Keys, Jittered Backoff, and Circuit-Breaker Thresholds</title>
      <link>https://www.mfun.ink/2026/03/04/openai-responses-go-retry-storm-idempotency-backoff-circuit-breaker/</link>
      <pubDate>Wed, 04 Mar 2026 01:10:40 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/04/openai-responses-go-retry-storm-idempotency-backoff-circuit-breaker/</guid>
      <description>&lt;p&gt;The scariest thing in production is not a single failure, but &lt;strong&gt;a failure amplified by retries&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;In an OpenAI Responses + Go tool-calling chain, without idempotency keys, jittered backoff, and circuit-breaker thresholds, 10 requests can quickly become 1000 downstream calls, and the bill explodes along with the latency.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses &#43; Go: Taming Retry Storms with Idempotency Keys, Jittered Backoff, and Circuit Breakers</title>
      <link>https://www.mfun.ink/english/post/openai-responses-go-retry-storm-idempotency-backoff-circuit-breaker/</link>
      <pubDate>Wed, 04 Mar 2026 01:10:40 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-responses-go-retry-storm-idempotency-backoff-circuit-breaker/</guid>
      <description>&lt;p&gt;The most expensive outage is not a single failure: it is a failure amplified by retries.&lt;/p&gt;
&lt;p&gt;In an OpenAI Responses + Go tool-calling stack, missing idempotency keys, jittered backoff, and breaker thresholds can turn 10 failing requests into 1000 downstream calls in minutes.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Taming Context Explosion in OpenAI Assistants/Responses with Go: Truncation Strategies, Summary Backfill, and Cost Caps</title>
      <link>https://www.mfun.ink/2026/03/02/openai-assistants-responses-go/</link>
      <pubDate>Mon, 02 Mar 2026 12:44:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/02/openai-assistants-responses-go/</guid>
      <description>&lt;p&gt;Run an agent in production long enough and you hit the same pit: the context keeps growing, latency spikes, costs run out of control, and answers drift off target more and more easily.&lt;/p&gt;
&lt;p&gt;The model has not gotten dumber; usually context governance is missing: what should be kept was not kept, what should be dropped was not dropped, and the summaries that should backfill are broken.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Taming Context Explosion in OpenAI Assistants/Responses with Go: Truncation, Summary Backfill, and Cost Caps</title>
      <link>https://www.mfun.ink/english/post/openai-assistants-responses-go/</link>
      <pubDate>Mon, 02 Mar 2026 12:44:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-assistants-responses-go/</guid>
      <description>&lt;p&gt;Long-running agent sessions usually fail the same way: context keeps growing, latency spikes, costs blow up, and answer quality gets worse.&lt;/p&gt;
&lt;p&gt;That is rarely a model-quality issue. It is almost always missing context governance.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses API Streaming in Go: Timeouts, Retries, and Observability</title>
      <link>https://www.mfun.ink/english/post/openai-responses-api-streaming-go-timeout-retry-observability/</link>
      <pubDate>Mon, 23 Feb 2026 01:15:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-responses-api-streaming-go-timeout-retry-observability/</guid>
      <description>&lt;p&gt;Production streaming fails in two predictable ways: users wait while the stream silently drops, and your logs say &amp;ldquo;timeout&amp;rdquo; without telling you where it actually broke.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical Go pattern for OpenAI Responses API streaming with strict timeout boundaries, safe retries, and useful telemetry.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Engineering Practice for OpenAI Responses API Streaming in Go: Timeouts, Retries, and Observability</title>
      <link>https://www.mfun.ink/2026/02/23/openai-responses-api-streaming-go-timeout-retry-observability/</link>
      <pubDate>Mon, 23 Feb 2026 01:15:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/02/23/openai-responses-api-streaming-go-timeout-retry-observability/</guid>
      <description>&lt;p&gt;Streaming generation in production has two worst fears: the user is still waiting when your connection drops first, and the logs are full of errors without telling you which layer blew up.&lt;/p&gt;
&lt;p&gt;This post gives you a Go engineering template you can apply directly: turning streaming calls to the OpenAI Responses API into a production-grade pipeline that is &lt;strong&gt;timeout-bounded, retryable, and observable&lt;/strong&gt;.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
