<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>AI Engineering on Mengboy Tech Notes</title>
    <link>https://www.mfun.ink/categories/ai-engineering/</link>
    <description>Recent content in AI Engineering on Mengboy Tech Notes</description>
    <generator>Hugo -- 0.156.0</generator>
    <language>zh-cn</language>
    <lastBuildDate>Mon, 06 Apr 2026 01:17:34 +0000</lastBuildDate>
    <atom:link href="https://www.mfun.ink/categories/ai-engineering/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Claude Code &#43; GitHub Actions CI Self-Healing Pipeline: Error Attribution, Minimal Patches, and Human Approval Gates</title>
      <link>https://www.mfun.ink/english/post/claude-code-github-actions-ci-self-healing-pipeline/</link>
      <pubDate>Mon, 06 Apr 2026 01:17:34 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/claude-code-github-actions-ci-self-healing-pipeline/</guid>
      <description>&lt;p&gt;If your CI keeps failing and engineers keep babysitting logs, you&amp;rsquo;re paying an invisible velocity tax. A production-grade AI self-healing pipeline is not &amp;ldquo;let the agent edit anything&amp;rdquo;. It&amp;rsquo;s a controlled loop: &lt;strong&gt;attribution, patching, approval, rollback&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This post gives you a deployable baseline: Claude Code proposes a minimal fix patch, GitHub Actions enforces risk gates and regression checks, and humans only approve at high-impact checkpoints.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude API Rate-Limit Storm Playbook: Adaptive Concurrency, Jittered Backoff, and Quota Isolation</title>
      <link>https://www.mfun.ink/english/post/claude-api-rate-limit-storm-adaptive-concurrency-backoff-quota-isolation/</link>
      <pubDate>Fri, 03 Apr 2026 01:15:05 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/claude-api-rate-limit-storm-adaptive-concurrency-backoff-quota-isolation/</guid>
      <description>&lt;p&gt;When Claude API starts returning 429 under high load, most systems don&amp;rsquo;t just slow down—they collapse: queue buildup, retry storms, upstream timeout chains, and pager noise.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Handling Claude API Rate-Limit Avalanches under High Concurrency: Adaptive Concurrency, Jittered Backoff, and Quota Isolation</title>
      <link>https://www.mfun.ink/2026/04/03/claude-api-rate-limit-storm-adaptive-concurrency-backoff-quota-isolation/</link>
      <pubDate>Fri, 03 Apr 2026 01:15:05 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/04/03/claude-api-rate-limit-storm-adaptive-concurrency-backoff-quota-isolation/</guid>
      <description>&lt;p&gt;When Claude API starts returning 429 under high concurrency, many systems don&amp;rsquo;t merely slow down; they collapse outright: queue buildup, retry storms, upstream timeouts, and chained downstream alerts.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses Streaming in Production: Backpressure, Chunk Reassembly, and Timeout Budget</title>
      <link>https://www.mfun.ink/english/post/openai-responses-streaming-backpressure-chunk-reassembly-timeout-budget/</link>
      <pubDate>Fri, 27 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-responses-streaming-backpressure-chunk-reassembly-timeout-budget/</guid>
      <description>&lt;p&gt;Most streaming failures are not about “can it stream”, but “does it stay stable under load”: broken chunks, stuck clients, timeout cascades, and retry storms.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses Streaming in Production: Backpressure Control, Chunk Reassembly, and a Closed-Loop Timeout Budget</title>
      <link>https://www.mfun.ink/2026/03/27/openai-responses-streaming-backpressure-chunk-reassembly-timeout-budget/</link>
      <pubDate>Fri, 27 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/27/openai-responses-streaming-backpressure-chunk-reassembly-timeout-budget/</guid>
      <description>&lt;p&gt;What breaks streaming in production is usually not &amp;ldquo;can it stream at all&amp;rdquo; but &lt;strong&gt;jitter the moment traffic ramps up&lt;/strong&gt;: token fragmentation, stuck clients, timeout avalanches, and retry storms.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude &#43; OpenAI Model Routing Gateway: Latency Tiers, Cost Caps, and Quality Guardrails</title>
      <link>https://www.mfun.ink/english/post/claude-openai-model-routing-gateway-latency-cost-quality/</link>
      <pubDate>Wed, 25 Mar 2026 01:16:31 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/claude-openai-model-routing-gateway-latency-cost-quality/</guid>
      <description>&lt;p&gt;Connecting both Claude and OpenAI in production is the easy part. The hard part is keeping the system stable across the quality-latency-cost triangle.&lt;br&gt;
Without a routing gateway, you usually get latency spikes, runaway bills, and ugly cascading failures.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude &#43; OpenAI Model Routing Gateway in Practice: Latency Tiers, Cost Thresholds, and Quality Gatekeeping</title>
      <link>https://www.mfun.ink/2026/03/25/claude-openai-model-routing-gateway-latency-cost-quality/</link>
      <pubDate>Wed, 25 Mar 2026 01:16:31 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/25/claude-openai-model-routing-gateway-latency-cost-quality/</guid>
      <description>&lt;p&gt;Once you wire both Claude and OpenAI into production, the real challenge is not &amp;ldquo;can the calls go through&amp;rdquo; but &lt;strong&gt;how to run stably inside the quality-latency-cost triangle&lt;/strong&gt;.&lt;br&gt;
Without a routing gateway, the usual outcome is latency jitter at peak, runaway bills, and a site-wide avalanche the moment something fails.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses &#43; Go Stream Recovery: Delta Persistence, Resume Tokens, and Duplicate Chunk Dedup</title>
      <link>https://www.mfun.ink/english/post/openai-responses-go-stream-resume-delta-dedup/</link>
      <pubDate>Mon, 23 Mar 2026 01:13:09 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-responses-go-stream-resume-delta-dedup/</guid>
      <description>&lt;p&gt;In production, the painful part is not “streaming is slow.” It’s “streaming breaks and then duplicates output after reconnect.”&lt;br&gt;
This guide gives you a practical recovery loop: &lt;strong&gt;delta persistence + resume token + idempotent dedup&lt;/strong&gt;, so reconnection does not replay garbage.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses in Go Multi-Tenant Quota Governance: Token Buckets, Budget Circuit Breakers, and Cost Attribution</title>
      <link>https://www.mfun.ink/english/post/openai-responses-go-multitenant-quota-governance/</link>
      <pubDate>Fri, 20 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-responses-go-multitenant-quota-governance/</guid>
      <description>&lt;p&gt;Most multi-tenant AI platforms fail for two boring reasons: one tenant saturates shared capacity, and finance discovers the burn too late.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical Go blueprint: token-bucket throttling, budget circuit breakers, and request-level cost attribution.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Multi-Tenant Quota Governance for OpenAI Responses in Go: Token-Bucket Throttling, Budget Circuit Breakers, and Cost Attribution</title>
      <link>https://www.mfun.ink/2026/03/20/openai-responses-go-multitenant-quota-governance/</link>
      <pubDate>Fri, 20 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/20/openai-responses-go-multitenant-quota-governance/</guid>
      <description>&lt;p&gt;Multi-tenant AI services usually die from two things: &lt;strong&gt;one tenant blowing through the global quota&lt;/strong&gt; and &lt;strong&gt;a bill that has already exploded by the time anyone notices at month end&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This post gives you a directly deployable Go blueprint: token-bucket throttling + budget circuit breakers + cost attribution, with the goal of &amp;ldquo;survive first, refine later&amp;rdquo;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go &#43; OpenAI Responses Agent Memory Layering: Short-Term Context, Long-Term Index, and Cost Caps</title>
      <link>https://www.mfun.ink/english/post/go-openai-responses-agent-memory-layering/</link>
      <pubDate>Wed, 18 Mar 2026 16:33:52 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/go-openai-responses-agent-memory-layering/</guid>
      <description>&lt;p&gt;In production Go agents, the first thing that breaks is usually not model quality. It is memory management: context grows, bills spike, and answers drift.&lt;/p&gt;
&lt;p&gt;Use a 3-layer memory design:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;L1: short-term conversational window (seconds)&lt;/li&gt;
&lt;li&gt;L2: rolling summary (minutes)&lt;/li&gt;
&lt;li&gt;L3: long-term retrieval memory (days)&lt;/li&gt;
&lt;/ul&gt;</description>
    </item>
    <item>
      <title>Go &#43; OpenAI Responses Agent Memory Layering in Practice: Short-Term Context, Long-Term Index, and Cost Caps</title>
      <link>https://www.mfun.ink/2026/03/18/go-openai-responses-agent-memory-layering/</link>
      <pubDate>Wed, 18 Mar 2026 16:33:52 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/18/go-openai-responses-agent-memory-layering/</guid>
      <description>&lt;p&gt;When you build an agent in Go, the thing most likely to sink you is not reasoning quality but runaway &amp;ldquo;memory&amp;rdquo;: context keeps growing, bills keep climbing, and answers drift further off target.&lt;/p&gt;
&lt;p&gt;This post gives you a deployable three-layer design:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;L1: short-term conversational context (seconds, highly relevant)&lt;/li&gt;
&lt;li&gt;L2: mid-term summary memory (minutes, compressed)&lt;/li&gt;
&lt;li&gt;L3: long-term retrieval memory (days, vector index)&lt;/li&gt;
&lt;/ul&gt;</description>
    </item>
    <item>
      <title>Handling OpenAI 429/5xx Storms from Go Services: Token Bucket, Exponential Backoff, and Circuit-Breaker Recovery</title>
      <link>https://www.mfun.ink/2026/03/18/go-openai-429-5xx-storm-defense-token-bucket-backoff-circuit-breaker/</link>
      <pubDate>Wed, 18 Mar 2026 01:14:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/18/go-openai-429-5xx-storm-defense-token-bucket-backoff-circuit-breaker/</guid>
      <description>&lt;p&gt;You are not beaten by the OpenAI API&amp;rsquo;s &amp;ldquo;occasional errors&amp;rdquo;; you are beaten by the &lt;strong&gt;retry storm that concurrency amplifies&lt;/strong&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Handling OpenAI 429/5xx Storms in Go: Token Bucket, Exponential Backoff, and Circuit Breakers</title>
      <link>https://www.mfun.ink/english/post/go-openai-429-5xx-storm-defense-token-bucket-backoff-circuit-breaker/</link>
      <pubDate>Wed, 18 Mar 2026 01:14:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/go-openai-429-5xx-storm-defense-token-bucket-backoff-circuit-breaker/</guid>
      <description>&lt;p&gt;Most Go teams are not killed by a single API error. They are killed by a retry storm they created themselves.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses &#43; GitHub Actions PR Risk Gate: Automated Evals, Tiered Blocking, and One-Click Rollback</title>
      <link>https://www.mfun.ink/english/post/openai-responses-github-actions-pr-risk-gate/</link>
      <pubDate>Mon, 16 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-responses-github-actions-pr-risk-gate/</guid>
      <description>&lt;p&gt;You don&amp;rsquo;t need an AI reviewer that “sounds smart.” You need a gate that &lt;strong&gt;stops risky PRs before they hit main&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This post shows a production-ready minimum setup: OpenAI Responses generates structured risk output, GitHub Actions enforces tiered policies, and critical failures can trigger a one-click rollback.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A PR Risk Gate with OpenAI Responses &#43; GitHub Actions: Automated Evals, Tiered Blocking, and One-Click Rollback</title>
      <link>https://www.mfun.ink/2026/03/16/openai-responses-github-actions-pr-risk-gate/</link>
      <pubDate>Mon, 16 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/16/openai-responses-github-actions-pr-risk-gate/</guid>
      <description>&lt;p&gt;You don&amp;rsquo;t need an AI reviewer that &amp;ldquo;chats well&amp;rdquo;; you need a risk gate that &lt;strong&gt;blocks bad changes from reaching main&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This post gives a minimal production-ready setup: OpenAI Responses generates structured review verdicts, GitHub Actions enforces tiered blocking, and high-risk findings trigger an automatic rollback to a safe commit.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Cutting Costs with OpenAI Batch API &#43; Go: Offline Batching, Failure Replay, and Cost Boundaries</title>
      <link>https://www.mfun.ink/2026/03/13/openai-batch-api-go-cost-control-offline-batching-failure-replay/</link>
      <pubDate>Fri, 13 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/13/openai-batch-api-go-cost-control-offline-batching-failure-replay/</guid>
      <description>&lt;p&gt;One-line takeaway: if your calls are &lt;strong&gt;delay-tolerant, batchable, and replayable&lt;/strong&gt;, sink them from online requests into the Batch API. The savings are the most visible win, but only after batch splitting, failure routing, and the replay pipeline are in place.&lt;/p&gt;
&lt;p&gt;Many teams use the Batch API as a &amp;ldquo;cheap synchronous endpoint&amp;rdquo;; the result is not savings but failed samples piling up into an incident pool. The genuinely conservative path: define cost boundaries and SLOs first, then build offline batching and failure replay.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Batch API with Go: Offline Batching, Failure Replay, and Cost Boundaries</title>
      <link>https://www.mfun.ink/english/post/openai-batch-api-go-cost-control-offline-batching-failure-replay/</link>
      <pubDate>Fri, 13 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-batch-api-go-cost-control-offline-batching-failure-replay/</guid>
      <description>&lt;p&gt;Short answer: if your workload is &lt;strong&gt;delay-tolerant, batchable, and replay-safe&lt;/strong&gt;, move it from online calls to Batch API. The savings are real, but only if you design splitting, failure routing, and replay first.&lt;/p&gt;
&lt;p&gt;Many teams treat Batch API as a cheaper sync endpoint. That usually creates a replay mess instead of stable savings. A conservative rollout starts with cost boundaries and SLOs, then implements offline batching and controlled replay.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses Structured Outputs &#43; Go: Schema Evolution, Bad-Sample Fallbacks, and Gradual Rollback</title>
      <link>https://www.mfun.ink/2026/03/11/openai-responses-structured-outputs-go-schema-evolution-fallback-rollback/</link>
      <pubDate>Wed, 11 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/11/openai-responses-structured-outputs-go-schema-evolution-fallback-rollback/</guid>
      <description>&lt;p&gt;The easiest way to crash with Structured Outputs is not &amp;ldquo;the model disobeying&amp;rdquo; but &lt;strong&gt;treating your schema as an edict that never changes&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Once production enters a schema-evolution phase, the classic incidents follow: a newly added field crashes old consumers, an expanded enum gets valid requests killed by strict validation, and bad samples drag down the whole chain, until you are rolling back in the middle of the night, as if writing your own thriller.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses Structured Outputs with Go: Schema Evolution, Bad-Case Fallbacks, and Gradual Rollback</title>
      <link>https://www.mfun.ink/english/post/openai-responses-structured-outputs-go-schema-evolution-fallback-rollback/</link>
      <pubDate>Wed, 11 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-responses-structured-outputs-go-schema-evolution-fallback-rollback/</guid>
      <description>&lt;p&gt;The hardest part of Structured Outputs is not getting JSON once. It is surviving schema changes without turning production into a small fire with excellent logs and terrible business results.&lt;/p&gt;
&lt;p&gt;Once a Go service starts evolving prompts and response contracts, the usual failure modes show up fast: a new required field breaks older consumers, an enum expands and strict validation kills valid requests, or one bad sample drags the whole chain into retries and rollback panic.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Realtime &#43; Go in Production: WebRTC Token Rotation, Interruption Recovery, and End-to-End Latency Budgets</title>
      <link>https://www.mfun.ink/english/post/openai-realtime-go-webrtc-auth-recovery-latency-budget/</link>
      <pubDate>Mon, 09 Mar 2026 01:13:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-realtime-go-webrtc-auth-recovery-latency-budget/</guid>
      <description>&lt;p&gt;If you plan to put OpenAI Realtime into production, do not let a passing demo fool you.&lt;/p&gt;
&lt;p&gt;What usually breaks the system is not the model itself. It is &lt;strong&gt;non-rotating short-lived auth, missing interruption state, and zero end-to-end latency budgeting&lt;/strong&gt;. Miss those three and your voice UX starts sounding like an angry walkie-talkie.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Realtime &#43; Go in Production: WebRTC Auth Rotation, Interruption Recovery, and End-to-End Latency Budgets</title>
      <link>https://www.mfun.ink/2026/03/09/openai-realtime-go-webrtc-auth-recovery-latency-budget/</link>
      <pubDate>Mon, 09 Mar 2026 01:13:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/09/openai-realtime-go-webrtc-auth-recovery-latency-budget/</guid>
      <description>&lt;p&gt;If you plan to take OpenAI Realtime to production, don&amp;rsquo;t be fooled by a demo that merely runs.&lt;/p&gt;
&lt;p&gt;What actually breaks the system is usually not the model itself but &lt;strong&gt;short-lived auth that never rotates, interruption recovery without a state machine, and an end-to-end latency budget that was never set&lt;/strong&gt;. Skip those three and the voice experience feels like arguing with a laggy walkie-talkie.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go &#43; OpenAI Responses: Connection Pooling and Timeout Budgets from HTTP/2 Reuse to Error-Budget Control</title>
      <link>https://www.mfun.ink/english/post/go-openai-responses-connection-pool-timeout-budget/</link>
      <pubDate>Fri, 06 Mar 2026 01:13:12 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/go-openai-responses-connection-pool-timeout-budget/</guid>
      <description>&lt;p&gt;When Go services call the OpenAI Responses API in production, the real failures are rarely about model quality. Most incidents come from transport instability: weak connection pooling, conflicting timeout layers, and retry storms.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical baseline: HTTP/2 reuse, layered timeout budgets, bounded retries, and error-budget driven operations.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Connection Pools and Timeout Budgets for Calling OpenAI Responses from Go: From HTTP/2 Reuse to a Closed Error-Budget Loop</title>
      <link>https://www.mfun.ink/2026/03/06/go-openai-responses-connection-pool-timeout-budget/</link>
      <pubDate>Fri, 06 Mar 2026 01:13:12 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/06/go-openai-responses-connection-pool-timeout-budget/</guid>
      <description>&lt;p&gt;When production Go services call OpenAI Responses, the usual pitfall is not &amp;ldquo;the model is inaccurate&amp;rdquo; but transport jitter: unstable connection pools, misconfigured timeout budgets, and stacked retries that knock the service over.&lt;/p&gt;
&lt;p&gt;This post gives a deployable baseline: HTTP/2 connection reuse, layered timeouts, error budgets, and backoff retries, aiming to push 5xx and timeout rates into a controllable range while making bottlenecks quick to locate.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses &#43; Go Tool-Call Retry-Storm Governance: Idempotency Keys, Jittered Backoff, and Circuit-Breaker Thresholds</title>
      <link>https://www.mfun.ink/2026/03/04/openai-responses-go-retry-storm-idempotency-backoff-circuit-breaker/</link>
      <pubDate>Wed, 04 Mar 2026 01:10:40 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/04/openai-responses-go-retry-storm-idempotency-backoff-circuit-breaker/</guid>
      <description>&lt;p&gt;The scariest thing in production is not a single failure but &lt;strong&gt;a failure amplified by retries&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;In an OpenAI Responses + Go tool-calling chain, without idempotency keys, jittered backoff, and breaker thresholds, 10 requests can quickly become 1000 downstream calls, and bills and latency explode together.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses &#43; Go: Taming Retry Storms with Idempotency Keys, Jittered Backoff, and Circuit Breakers</title>
      <link>https://www.mfun.ink/english/post/openai-responses-go-retry-storm-idempotency-backoff-circuit-breaker/</link>
      <pubDate>Wed, 04 Mar 2026 01:10:40 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-responses-go-retry-storm-idempotency-backoff-circuit-breaker/</guid>
      <description>&lt;p&gt;The most expensive outage is not a single failure — it is a failure amplified by retries.&lt;/p&gt;
&lt;p&gt;In an OpenAI Responses + Go tool-calling stack, missing idempotency, jittered backoff, and breaker thresholds can turn 10 failing requests into 1000 downstream calls in minutes.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Taming Context Explosion with OpenAI Assistants/Responses in Go: Truncation Strategies, Summary Backfill, and Cost Caps</title>
      <link>https://www.mfun.ink/2026/03/02/openai-assistants-responses-go/</link>
      <pubDate>Mon, 02 Mar 2026 12:44:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/02/openai-assistants-responses-go/</guid>
      <description>&lt;p&gt;Run a production agent long enough and you hit the same pit: context keeps growing, latency spikes, costs run away, and answers drift off target more easily.&lt;/p&gt;
&lt;p&gt;That is not the model &amp;ldquo;getting dumber&amp;rdquo;; it is usually missing context governance: what should be kept isn&amp;rsquo;t, what should be dropped isn&amp;rsquo;t, and the summaries meant to compress the rest are broken.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Taming Context Explosion in OpenAI Assistants/Responses with Go: Truncation, Summary Backfill, and Cost Caps</title>
      <link>https://www.mfun.ink/english/post/openai-assistants-responses-go/</link>
      <pubDate>Mon, 02 Mar 2026 12:44:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-assistants-responses-go/</guid>
      <description>&lt;p&gt;Long-running agent sessions usually fail the same way: context keeps growing, latency spikes, costs blow up, and answer quality gets worse.&lt;/p&gt;
&lt;p&gt;That is rarely a model-quality issue. It is almost always missing context governance.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go &#43; OpenAI API Timeout Troubleshooting: DNS, TLS, Proxy, and Connection Pool</title>
      <link>https://www.mfun.ink/english/post/go-openai-api-timeout-troubleshooting-dns-tls-proxy-connection-pool/</link>
      <pubDate>Mon, 02 Mar 2026 01:12:10 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/go-openai-api-timeout-troubleshooting-dns-tls-proxy-connection-pool/</guid>
      <description>&lt;p&gt;When OpenAI API calls start timing out in production, the real problem is usually not “OpenAI is down.”&lt;/p&gt;
&lt;p&gt;The real problem is you don’t know which hop is failing: DNS, TLS handshake, proxy path, or your own connection pool.&lt;/p&gt;</description>
    </item>
    <item>
      <title>GitHub Actions &#43; AI Agent Auto-Fix Pipeline: Failure Tiers, Regression Gates, and Security Guardrails</title>
      <link>https://www.mfun.ink/english/post/github-actions-ai-agent-auto-fix-pipeline/</link>
      <pubDate>Fri, 27 Feb 2026 01:18:38 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/github-actions-ai-agent-auto-fix-pipeline/</guid>
      <description>&lt;p&gt;When CI keeps failing, the real risk is not “slow fixes” — it is “fast bad fixes.”
This guide gives you a practical &lt;strong&gt;GitHub Actions + AI Agent auto-fix pipeline&lt;/strong&gt; with failure tiering, strict edit boundaries, and merge-time gates.&lt;/p&gt;</description>
    </item>
    <item>
      <title>GitHub Actions &#43; AI Agent Auto-Fix Pipeline: Failure Tiering, Regression Tests, and Safety Gates</title>
      <link>https://www.mfun.ink/2026/02/27/github-actions-ai-agent-auto-fix-pipeline/</link>
      <pubDate>Fri, 27 Feb 2026 01:18:38 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/02/27/github-actions-ai-agent-auto-fix-pipeline/</guid>
      <description>&lt;p&gt;When production CI goes red run after run, what teams fear most is not &amp;ldquo;fixing slowly&amp;rdquo; but &amp;ldquo;breaking more while fixing&amp;rdquo;.
This post gives you a deployable &lt;strong&gt;GitHub Actions + AI Agent auto-fix pipeline&lt;/strong&gt;: tier the failures first, then restrict the AI&amp;rsquo;s edit scope, and finally backstop with regression and safety gates.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Agents SDK with Go: Tool Calling, Session Memory, and Error Recovery</title>
      <link>https://www.mfun.ink/english/post/openai-agents-sdk-go-tool-calling/</link>
      <pubDate>Wed, 25 Feb 2026 01:18:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-agents-sdk-go-tool-calling/</guid>
      <description>&lt;p&gt;Most teams can connect an LLM in a demo. The real pain starts in production: multi-step tasks, flaky tool calls, unclear retries, and rising cost.&lt;/p&gt;
&lt;p&gt;This guide gives you a pragmatic Go-first blueprint for shipping an Agent workflow that can survive real incidents.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses API Streaming in Go: Timeouts, Retries, and Observability</title>
      <link>https://www.mfun.ink/english/post/openai-responses-api-streaming-go-timeout-retry-observability/</link>
      <pubDate>Mon, 23 Feb 2026 01:15:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/openai-responses-api-streaming-go-timeout-retry-observability/</guid>
      <description>&lt;p&gt;Production streaming fails in two predictable ways: users wait while the stream silently drops, and your logs say &amp;ldquo;timeout&amp;rdquo; without telling you where it actually broke.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical Go pattern for OpenAI Responses API streaming with strict timeout boundaries, safe retries, and useful telemetry.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude Code &#43; Codex for Multi-Model Development: Cost, Speed, and Quality (Practical Workflow)</title>
      <link>https://www.mfun.ink/english/post/claude-code-codex-multi-model-collaboration/</link>
      <pubDate>Sun, 15 Feb 2026 10:30:00 +0800</pubDate>
      <guid>https://www.mfun.ink/english/post/claude-code-codex-multi-model-collaboration/</guid>
      <description>&lt;p&gt;If you still use one model for everything, you usually pay in one of three ways: higher cost, slower delivery, or more rework.&lt;/p&gt;
&lt;p&gt;A better setup is role-based collaboration: Claude Code for planning and quality gates, Codex for fast implementation and batch edits.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
