<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Claude on Mengboy Tech Notes</title>
    <link>https://www.mfun.ink/tags/claude/</link>
    <description>Recent content in Claude on Mengboy Tech Notes</description>
    <generator>Hugo -- 0.156.0</generator>
    <language>zh-cn</language>
    <lastBuildDate>Wed, 08 Apr 2026 01:22:53 +0000</lastBuildDate>
    <atom:link href="https://www.mfun.ink/tags/claude/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Go Dual-Provider LLM Routing (OpenAI &#43; Claude): Timeout Tiers, Cost Caps, and Fallback Control</title>
      <link>https://www.mfun.ink/english/post/go-dual-model-routing-openai-claude-timeout-cost-fallback/</link>
      <pubDate>Wed, 08 Apr 2026 01:22:53 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/go-dual-model-routing-openai-claude-timeout-cost-fallback/</guid>
      <description>&lt;p&gt;If your Go service relies on one LLM provider, two failures hurt the most: timeout spikes and billing spikes. A real production setup is not just “add another provider”; it is a single control plane for routing, timeout tiers, cost caps, and fallback.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical OpenAI + Claude dual-provider pattern with one priority: keep uptime first, then optimize quality.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Dual-Stack Model Routing for Go Services (OpenAI/Claude): Timeout Tiers, Cost Caps, and Degradation Fallback</title>
      <link>https://www.mfun.ink/2026/04/08/go-dual-model-routing-openai-claude-timeout-cost-fallback/</link>
      <pubDate>Wed, 08 Apr 2026 01:22:53 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/04/08/go-dual-model-routing-openai-claude-timeout-cost-fallback/</guid>
      <description>&lt;p&gt;When you run a single model provider in production, the two scariest failures are sudden timeout spikes and runaway bills. A truly workable solution is not as simple as “wiring up one more model”; it puts routing, timeouts, cost, and fallback into a single control plane.&lt;/p&gt;
&lt;p&gt;This post gives you a dual-stack routing framework you can drop straight into Go, with three goals: stability first, cost under control, and fast damage control on failure.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude 3.7 &#43; OpenAI Responses Dual-Stack Degradation Playbook: Timeout Probing, Circuit Cutover, and Error-Budget Dashboard</title>
      <link>https://www.mfun.ink/english/post/claude-openai-dual-stack-degrade-runbook/</link>
      <pubDate>Wed, 01 Apr 2026 01:19:20 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/claude-openai-dual-stack-degrade-runbook/</guid>
      <description>&lt;p&gt;Running both Claude and OpenAI in production sounds resilient until a &lt;strong&gt;slow failure&lt;/strong&gt; hits: latency climbs, 429s spike, quality drifts, and everything still looks “up.”&lt;/p&gt;
&lt;p&gt;This guide gives you a practical dual-stack degradation runbook: timeout probing first, circuit-based cutover second, and an error-budget dashboard to keep business impact bounded.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude 3.7 &#43; OpenAI Responses Dual-Stack Degradation in Practice: Timeout Probing, Circuit Cutover, and an Error-Budget Dashboard</title>
      <link>https://www.mfun.ink/2026/04/01/claude-openai-dual-stack-degrade-runbook/</link>
      <pubDate>Wed, 01 Apr 2026 01:19:20 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/04/01/claude-openai-dual-stack-degrade-runbook/</guid>
      <description>&lt;p&gt;When you run Claude and OpenAI together in production, what you fear most is not a single-point failure but a &lt;strong&gt;slow failure&lt;/strong&gt;: timeouts multiply, 429s get denser, quality drifts, and the system still “looks alive.”&lt;/p&gt;
&lt;p&gt;This post gives a dual-stack degradation plan you can apply directly: timeout probing first, then circuit-based cutover, and finally an error-budget dashboard to keep the business impact bounded.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude &#43; OpenAI Dual-Provider Gateway Failover: Health Probes, Circuit Breaking, and SLA Fallback</title>
      <link>https://www.mfun.ink/english/post/claude-openai-dual-provider-gateway-failover-sla/</link>
      <pubDate>Mon, 30 Mar 2026 01:14:00 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/claude-openai-dual-provider-gateway-failover-sla/</guid>
      <description>&lt;p&gt;If your production stack calls both Claude and OpenAI, the hard part is not API integration. The hard part is keeping user experience stable when one provider starts throwing 429/5xx spikes, regional latency, or timeout storms.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical dual-provider gateway playbook: health probes, circuit breaking, SLA-aware fallback, and observability loops. The goal is not “never fail.” The goal is &lt;strong&gt;controlled failure with controlled cost and controlled latency&lt;/strong&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude &#43; OpenAI Dual-Provider Gateway Failover: Health Probes, Circuit-Breaker Switching, and SLA Fallback Strategy</title>
      <link>https://www.mfun.ink/2026/03/30/claude-openai-dual-provider-gateway-failover-sla/</link>
      <pubDate>Mon, 30 Mar 2026 01:14:00 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/30/claude-openai-dual-provider-gateway-failover-sla/</guid>
      <description>&lt;p&gt;When your production system integrates both Claude and OpenAI, the hard part is not “getting the API connected” but &lt;strong&gt;serving steadily while failures are happening&lt;/strong&gt;. One provider’s intermittent 429/5xx errors, regional jitter, or model timeouts can wreck the downstream experience.&lt;/p&gt;
&lt;p&gt;This post gives you a dual-provider gateway plan you can land directly: health probes, circuit-breaker switching, tiered SLA fallback, and an observability loop. The goal is not “never fail” but &lt;strong&gt;controlled failure, controlled cost, and a controlled experience&lt;/strong&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude &#43; OpenAI Model Routing Gateway: Latency Tiers, Cost Caps, and Quality Guardrails</title>
      <link>https://www.mfun.ink/english/post/claude-openai-model-routing-gateway-latency-cost-quality/</link>
      <pubDate>Wed, 25 Mar 2026 01:16:31 +0000</pubDate>
      <guid>https://www.mfun.ink/english/post/claude-openai-model-routing-gateway-latency-cost-quality/</guid>
      <description>&lt;p&gt;Connecting both Claude and OpenAI in production is the easy part. The hard part is keeping the system stable across the quality-latency-cost triangle.&lt;br&gt;
Without a routing gateway, you usually get latency spikes, runaway bills, and ugly cascading failures.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude &#43; OpenAI Model Routing Gateway in Practice: Latency Tiers, Cost Thresholds, and Quality Gatekeeping</title>
      <link>https://www.mfun.ink/2026/03/25/claude-openai-model-routing-gateway-latency-cost-quality/</link>
      <pubDate>Wed, 25 Mar 2026 01:16:31 +0000</pubDate>
      <guid>https://www.mfun.ink/2026/03/25/claude-openai-model-routing-gateway-latency-cost-quality/</guid>
      <description>&lt;p&gt;Once you put Claude and OpenAI into production together, the real challenge is not “can I get the calls working” but &lt;strong&gt;how to run stably inside the quality-latency-cost triangle&lt;/strong&gt;.&lt;br&gt;
Without a routing gateway, the most common outcome is latency jitter at peak hours, runaway bills, and a site-wide cascade when anything goes wrong.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude / Codex / OpenAI CLI Workflow Comparison: How to Choose for Dev Productivity</title>
      <link>https://www.mfun.ink/2026/02/09/claude-codex-openai-cli-workflow-comparison/</link>
      <pubDate>Mon, 09 Feb 2026 23:28:00 +0800</pubDate>
      <guid>https://www.mfun.ink/2026/02/09/claude-codex-openai-cli-workflow-comparison/</guid>
      <description>&lt;p&gt;If you treat AI as nothing more than a “chat tool,” the three look about the same; once you enter a real development workflow, the differences become very clear.&lt;/p&gt;
&lt;p&gt;My conclusion up front: &lt;strong&gt;prefer Codex for day-to-day coding and in-repo changes, use Claude for long-form reasoning and plan decomposition, and use OpenAI CLI for standardized automation and cross-tool chaining.&lt;/strong&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude vs Codex vs OpenAI CLI: Which Workflow Actually Improves Dev Productivity</title>
      <link>https://www.mfun.ink/english/post/claude-codex-openai-cli-workflow-comparison/</link>
      <pubDate>Mon, 09 Feb 2026 23:28:00 +0800</pubDate>
      <guid>https://www.mfun.ink/english/post/claude-codex-openai-cli-workflow-comparison/</guid>
      <description>&lt;p&gt;If you use AI as a chatbot only, these tools feel similar. In real engineering workflows, they behave very differently.&lt;/p&gt;
&lt;p&gt;My conclusion first: &lt;strong&gt;use Codex for repo-native coding changes, Claude for deep reasoning and long-form planning, and OpenAI CLI for standardized automation pipelines.&lt;/strong&gt;&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
