<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Mengboy Tech Notes</title>
    <link>https://www.mfun.ink/en/</link>
    <description>Recent content on Mengboy Tech Notes</description>
    <generator>Hugo -- 0.156.0</generator>
    <language>en</language>
    <lastBuildDate>Wed, 08 Apr 2026 01:22:53 +0000</lastBuildDate>
    <atom:link href="https://www.mfun.ink/en/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Go Dual-Provider LLM Routing (OpenAI &#43; Claude): Timeout Tiers, Cost Caps, and Fallback Control</title>
      <link>https://www.mfun.ink/en/2026/04/08/go-dual-model-routing-openai-claude-timeout-cost-fallback/</link>
      <pubDate>Wed, 08 Apr 2026 01:22:53 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/04/08/go-dual-model-routing-openai-claude-timeout-cost-fallback/</guid>
      <description>&lt;p&gt;If your Go service relies on one LLM provider, two failure modes hurt the most: timeout spikes and billing spikes. A real production setup is not just “add another provider”; it is a single control plane for routing, timeout tiers, cost caps, and fallback.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical OpenAI + Claude dual-provider pattern with one priority: keep uptime first, then optimize quality.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude Code &#43; GitHub Actions CI Self-Healing Pipeline: Error Attribution, Minimal Patches, and Human Approval Gates</title>
      <link>https://www.mfun.ink/en/2026/04/06/claude-code-github-actions-ci-self-healing-pipeline/</link>
      <pubDate>Mon, 06 Apr 2026 01:17:34 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/04/06/claude-code-github-actions-ci-self-healing-pipeline/</guid>
      <description>&lt;p&gt;If your CI keeps failing and engineers keep babysitting logs, you&amp;rsquo;re paying an invisible velocity tax. A production-grade AI self-healing pipeline is not &amp;ldquo;let the agent edit anything&amp;rdquo;. It&amp;rsquo;s a controlled loop: &lt;strong&gt;attribution, patching, approval, rollback&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This post gives you a deployable baseline: Claude Code proposes a minimal fix patch, GitHub Actions enforces risk gates and regression checks, and humans only approve at high-impact checkpoints.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude API Rate-Limit Storm Playbook: Adaptive Concurrency, Jittered Backoff, and Quota Isolation</title>
      <link>https://www.mfun.ink/en/2026/04/03/claude-api-rate-limit-storm-adaptive-concurrency-backoff-quota-isolation/</link>
      <pubDate>Fri, 03 Apr 2026 01:15:05 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/04/03/claude-api-rate-limit-storm-adaptive-concurrency-backoff-quota-isolation/</guid>
      <description>&lt;p&gt;When Claude API starts returning 429 under high load, most systems don&amp;rsquo;t just slow down—they collapse: queue buildup, retry storms, upstream timeout chains, and pager noise.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude 3.7 &#43; OpenAI Responses Dual-Stack Degradation Playbook: Timeout Probing, Circuit Cutover, and Error-Budget Dashboard</title>
      <link>https://www.mfun.ink/en/2026/04/01/claude-openai-dual-stack-degrade-runbook/</link>
      <pubDate>Wed, 01 Apr 2026 01:19:20 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/04/01/claude-openai-dual-stack-degrade-runbook/</guid>
      <description>&lt;p&gt;Running both Claude and OpenAI in production sounds resilient—until a &lt;strong&gt;slow failure&lt;/strong&gt; hits: latency climbs, 429s spike, quality drifts, and everything still looks “up.”&lt;/p&gt;
&lt;p&gt;This guide gives you a practical dual-stack degradation runbook: timeout probing first, circuit-based cutover second, and an error-budget dashboard to keep business impact bounded.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude &#43; OpenAI Dual-Provider Gateway Failover: Health Probes, Circuit Breaking, and SLA Fallback</title>
      <link>https://www.mfun.ink/en/2026/03/30/claude-openai-dual-provider-gateway-failover-sla/</link>
      <pubDate>Mon, 30 Mar 2026 01:14:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/30/claude-openai-dual-provider-gateway-failover-sla/</guid>
      <description>&lt;p&gt;If your production stack calls both Claude and OpenAI, the hard part is not API integration. The hard part is keeping user experience stable when one provider starts throwing 429/5xx spikes, regional latency, or timeout storms.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical dual-provider gateway playbook: health probes, circuit breaking, SLA-aware fallback, and observability loops. The goal is not “never fail.” The goal is &lt;strong&gt;controlled failure with controlled cost and controlled latency&lt;/strong&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses Streaming in Production: Backpressure, Chunk Reassembly, and Timeout Budget</title>
      <link>https://www.mfun.ink/en/2026/03/27/openai-responses-streaming-backpressure-chunk-reassembly-timeout-budget/</link>
      <pubDate>Fri, 27 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/27/openai-responses-streaming-backpressure-chunk-reassembly-timeout-budget/</guid>
      <description>&lt;p&gt;Most streaming failures are not about whether it can stream, but whether it stays stable under load: broken chunks, stuck clients, timeout cascades, and retry storms.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude &#43; OpenAI Model Routing Gateway: Latency Tiers, Cost Caps, and Quality Guardrails</title>
      <link>https://www.mfun.ink/en/2026/03/25/claude-openai-model-routing-gateway-latency-cost-quality/</link>
      <pubDate>Wed, 25 Mar 2026 01:16:31 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/25/claude-openai-model-routing-gateway-latency-cost-quality/</guid>
      <description>&lt;p&gt;Connecting both Claude and OpenAI in production is the easy part. The hard part is keeping the system stable across the quality-latency-cost triangle.&lt;br&gt;
Without a routing gateway, you usually get latency spikes, runaway bills, and ugly cascading failures.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses &#43; Go Stream Recovery: Delta Persistence, Resume Tokens, and Duplicate Chunk Dedup</title>
      <link>https://www.mfun.ink/en/2026/03/23/openai-responses-go-stream-resume-delta-dedup/</link>
      <pubDate>Mon, 23 Mar 2026 01:13:09 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/23/openai-responses-go-stream-resume-delta-dedup/</guid>
      <description>&lt;p&gt;In production, the painful part is not “streaming is slow.” It’s “streaming breaks and then duplicates output after reconnect.”&lt;br&gt;
This guide gives you a practical recovery loop: &lt;strong&gt;delta persistence + resume token + idempotent dedup&lt;/strong&gt;, so reconnection does not replay garbage.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses in Go Multi-Tenant Quota Governance: Token Buckets, Budget Circuit Breakers, and Cost Attribution</title>
      <link>https://www.mfun.ink/en/2026/03/20/openai-responses-go-multitenant-quota-governance/</link>
      <pubDate>Fri, 20 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/20/openai-responses-go-multitenant-quota-governance/</guid>
      <description>&lt;p&gt;Most multi-tenant AI platforms fail for two boring reasons: one tenant saturates shared capacity, and finance discovers the burn too late.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical Go blueprint: token-bucket throttling, budget circuit breakers, and request-level cost attribution.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go &#43; OpenAI Responses Agent Memory Layering: Short-Term Context, Long-Term Index, and Cost Caps</title>
      <link>https://www.mfun.ink/en/2026/03/18/go-openai-responses-agent-memory-layering/</link>
      <pubDate>Wed, 18 Mar 2026 16:33:52 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/18/go-openai-responses-agent-memory-layering/</guid>
      <description>&lt;p&gt;In production Go agents, the first thing that breaks is usually not model quality. It is memory management: context grows, bills spike, and answers drift.&lt;/p&gt;
&lt;p&gt;Use a 3-layer memory design:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;L1: short-term conversational window (seconds)&lt;/li&gt;
&lt;li&gt;L2: rolling summary (minutes)&lt;/li&gt;
&lt;li&gt;L3: long-term retrieval memory (days)&lt;/li&gt;
&lt;/ul&gt;</description>
    </item>
    <item>
      <title>Handling OpenAI 429/5xx Storms in Go: Token Bucket, Exponential Backoff, and Circuit Breakers</title>
      <link>https://www.mfun.ink/en/2026/03/18/go-openai-429-5xx-storm-defense-token-bucket-backoff-circuit-breaker/</link>
      <pubDate>Wed, 18 Mar 2026 01:14:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/18/go-openai-429-5xx-storm-defense-token-bucket-backoff-circuit-breaker/</guid>
      <description>&lt;p&gt;Most Go teams are not killed by a single API error. They are killed by a retry storm they created themselves.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses &#43; GitHub Actions PR Risk Gate: Automated Evals, Tiered Blocking, and One-Click Rollback</title>
      <link>https://www.mfun.ink/en/2026/03/16/openai-responses-github-actions-pr-risk-gate/</link>
      <pubDate>Mon, 16 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/16/openai-responses-github-actions-pr-risk-gate/</guid>
      <description>&lt;p&gt;You don&amp;rsquo;t need an AI reviewer that “sounds smart.” You need a gate that &lt;strong&gt;stops risky PRs before they hit main&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;This post shows a production-ready minimum setup: OpenAI Responses generates structured risk output, GitHub Actions enforces tiered policies, and critical failures can trigger a one-click rollback.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Batch API with Go: Offline Batching, Failure Replay, and Cost Boundaries</title>
      <link>https://www.mfun.ink/en/2026/03/13/openai-batch-api-go-cost-control-offline-batching-failure-replay/</link>
      <pubDate>Fri, 13 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/13/openai-batch-api-go-cost-control-offline-batching-failure-replay/</guid>
      <description>&lt;p&gt;Short answer: if your workload is &lt;strong&gt;delay-tolerant, batchable, and replay-safe&lt;/strong&gt;, move it from online calls to Batch API. The savings are real, but only if you design splitting, failure routing, and replay first.&lt;/p&gt;
&lt;p&gt;Many teams treat Batch API as a cheaper sync endpoint. That usually creates a replay mess instead of stable savings. A conservative rollout starts with cost boundaries and SLOs, then implements offline batching and controlled replay.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses Structured Outputs with Go: Schema Evolution, Bad-Case Fallbacks, and Gradual Rollback</title>
      <link>https://www.mfun.ink/en/2026/03/11/openai-responses-structured-outputs-go-schema-evolution-fallback-rollback/</link>
      <pubDate>Wed, 11 Mar 2026 01:08:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/11/openai-responses-structured-outputs-go-schema-evolution-fallback-rollback/</guid>
      <description>&lt;p&gt;The hardest part of Structured Outputs is not getting JSON once. It is surviving schema changes without turning production into a small fire with excellent logs and terrible business results.&lt;/p&gt;
&lt;p&gt;Once a Go service starts evolving prompts and response contracts, the usual failure modes show up fast: a new required field breaks older consumers, an enum expands and strict validation kills valid requests, or one bad sample drags the whole chain into retries and rollback panic.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Realtime &#43; Go in Production: WebRTC Token Rotation, Interruption Recovery, and End-to-End Latency Budgets</title>
      <link>https://www.mfun.ink/en/2026/03/09/openai-realtime-go-webrtc-auth-recovery-latency-budget/</link>
      <pubDate>Mon, 09 Mar 2026 01:13:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/09/openai-realtime-go-webrtc-auth-recovery-latency-budget/</guid>
      <description>&lt;p&gt;If you plan to put OpenAI Realtime into production, do not let a passing demo fool you.&lt;/p&gt;
&lt;p&gt;What usually breaks the system is not the model itself. It is &lt;strong&gt;non-rotating short-lived auth, missing interruption state, and zero end-to-end latency budgeting&lt;/strong&gt;. Miss those three and your voice UX starts sounding like an angry walkie-talkie.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go &#43; OpenAI Responses: Connection Pooling and Timeout Budgets from HTTP/2 Reuse to Error-Budget Control</title>
      <link>https://www.mfun.ink/en/2026/03/06/go-openai-responses-connection-pool-timeout-budget/</link>
      <pubDate>Fri, 06 Mar 2026 01:13:12 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/06/go-openai-responses-connection-pool-timeout-budget/</guid>
      <description>&lt;p&gt;When Go services call the OpenAI Responses API in production, the real failures are rarely about model quality. Most incidents come from transport instability: weak connection pooling, conflicting timeout layers, and retry storms.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical baseline: HTTP/2 reuse, layered timeout budgets, bounded retries, and error-budget driven operations.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses &#43; Go: Taming Retry Storms with Idempotency Keys, Jittered Backoff, and Circuit Breakers</title>
      <link>https://www.mfun.ink/en/2026/03/04/openai-responses-go-retry-storm-idempotency-backoff-circuit-breaker/</link>
      <pubDate>Wed, 04 Mar 2026 01:10:40 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/04/openai-responses-go-retry-storm-idempotency-backoff-circuit-breaker/</guid>
      <description>&lt;p&gt;The most expensive outage is not a single failure — it is a failure amplified by retries.&lt;/p&gt;
&lt;p&gt;In an OpenAI Responses + Go tool-calling stack, missing idempotency, jittered backoff, and breaker thresholds can turn 10 failing requests into 1000 downstream calls in minutes.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Taming Context Explosion in OpenAI Assistants/Responses with Go: Truncation, Summary Backfill, and Cost Caps</title>
      <link>https://www.mfun.ink/en/2026/03/02/openai-assistants-responses-go/</link>
      <pubDate>Mon, 02 Mar 2026 12:44:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/02/openai-assistants-responses-go/</guid>
      <description>&lt;p&gt;Long-running agent sessions usually fail the same way: context keeps growing, latency spikes, costs blow up, and answer quality gets worse.&lt;/p&gt;
&lt;p&gt;That is rarely a model-quality issue. It is almost always missing context governance.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go &#43; OpenAI API Timeout Troubleshooting: DNS, TLS, Proxy, and Connection Pool</title>
      <link>https://www.mfun.ink/en/2026/03/02/go-openai-api-timeout-troubleshooting-dns-tls-proxy-connection-pool/</link>
      <pubDate>Mon, 02 Mar 2026 01:12:10 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/02/go-openai-api-timeout-troubleshooting-dns-tls-proxy-connection-pool/</guid>
      <description>&lt;p&gt;When OpenAI API calls start timing out in production, the real problem is usually not “OpenAI is down.”&lt;/p&gt;
&lt;p&gt;The real problem is you don’t know which hop is failing: DNS, TLS handshake, proxy path, or your own connection pool.&lt;/p&gt;</description>
    </item>
    <item>
      <title>GitHub Actions &#43; AI Agent Auto-Fix Pipeline: Failure Tiers, Regression Gates, and Security Guardrails</title>
      <link>https://www.mfun.ink/en/2026/02/27/github-actions-ai-agent-auto-fix-pipeline/</link>
      <pubDate>Fri, 27 Feb 2026 01:18:38 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/27/github-actions-ai-agent-auto-fix-pipeline/</guid>
      <description>&lt;p&gt;When CI keeps failing, the real risk is not “slow fixes” — it is “fast bad fixes.”
This guide gives you a practical &lt;strong&gt;GitHub Actions + AI Agent auto-fix pipeline&lt;/strong&gt; with failure tiering, strict edit boundaries, and merge-time gates.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Agents SDK with Go: Tool Calling, Session Memory, and Error Recovery</title>
      <link>https://www.mfun.ink/en/2026/02/25/openai-agents-sdk-go-tool-calling/</link>
      <pubDate>Wed, 25 Feb 2026 01:18:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/25/openai-agents-sdk-go-tool-calling/</guid>
      <description>&lt;p&gt;Most teams can connect an LLM in a demo. The real pain starts in production: multi-step tasks, flaky tool calls, unclear retries, and rising cost.&lt;/p&gt;
&lt;p&gt;This guide gives you a pragmatic Go-first blueprint for shipping an Agent workflow that can survive real incidents.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses API Streaming in Go: Timeouts, Retries, and Observability</title>
      <link>https://www.mfun.ink/en/2026/02/23/openai-responses-api-streaming-go-timeout-retry-observability/</link>
      <pubDate>Mon, 23 Feb 2026 01:15:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/23/openai-responses-api-streaming-go-timeout-retry-observability/</guid>
      <description>&lt;p&gt;Production streaming fails in two predictable ways: users wait while the stream silently drops, and your logs say &amp;ldquo;timeout&amp;rdquo; without telling you where it actually broke.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical Go pattern for OpenAI Responses API streaming with strict timeout boundaries, safe retries, and useful telemetry.&lt;/p&gt;</description>
    </item>
    <item>
      <title>A Stable FreshRSS &#43; RSSHub Deployment on Linux (Docker Compose)</title>
      <link>https://www.mfun.ink/en/2026/02/20/freshrss-rsshub-linux-deployment/</link>
      <pubDate>Fri, 20 Feb 2026 01:11:51 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/20/freshrss-rsshub-linux-deployment/</guid>
      <description>&lt;p&gt;Getting FreshRSS + RSSHub running is easy. Keeping it stable is the hard part: lost state after restarts, repeated 403 errors, broken proxy headers, and risky upgrades.&lt;/p&gt;
&lt;p&gt;This guide gives a production-ready Linux baseline focused on &lt;strong&gt;long-term stability, observability, and safe upgrades&lt;/strong&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Redis Distributed Lock Best Practices (with Common Misuse Cases)</title>
      <link>https://www.mfun.ink/en/2026/02/19/redis-distributed-lock-best-practices/</link>
      <pubDate>Thu, 19 Feb 2026 09:55:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/19/redis-distributed-lock-best-practices/</guid>
      <description>&lt;p&gt;In high-concurrency scenarios, distributed locks are essential for ensuring data consistency. However, many developers&amp;rsquo; understanding of Redis distributed locks stops at &amp;ldquo;SETNX&amp;rdquo;, leading to frequent production incidents.&lt;/p&gt;
&lt;p&gt;This article comprehensively covers the correct usage of Redis distributed locks from principles, implementation, common misuse cases to production-grade solutions.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Nginx Reverse Proxy WebSocket Disconnects: 7 Common Traps and Reliable Fixes</title>
      <link>https://www.mfun.ink/en/2026/02/18/nginx-websocket-disconnect-traps/</link>
      <pubDate>Wed, 18 Feb 2026 08:20:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/18/nginx-websocket-disconnect-traps/</guid>
      <description>&lt;p&gt;If your WebSocket keeps reconnecting every few seconds, it&amp;rsquo;s usually not your app code. In most production incidents, the root cause is an incomplete reverse-proxy chain in Nginx (or one proxy layer before it).&lt;/p&gt;
&lt;p&gt;This guide gives you a copy-paste checklist to stabilize WebSocket connections fast.&lt;/p&gt;</description>
    </item>
    <item>
      <title>RAG Accuracy Playbook: Retrieval Recall, Re-Ranking, and Evaluation Loop</title>
      <link>https://www.mfun.ink/en/2026/02/17/rag-retrieval-rerank-eval-loop/</link>
      <pubDate>Tue, 17 Feb 2026 10:56:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/17/rag-retrieval-rerank-eval-loop/</guid>
      <description>&lt;p&gt;If your RAG system feels unreliable, switching to a more expensive LLM is usually the wrong first move. In most cases, the bottleneck is retrieval quality: weak recall, poor ranking, and no measurement loop.&lt;/p&gt;
&lt;p&gt;This guide gives a practical path: make recall broader, make ranking sharper, then close the loop with offline + online evaluation.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Hugo Auto Deploy with GitHub Actions: Safe Config and Troubleshooting</title>
      <link>https://www.mfun.ink/en/2026/02/16/github-actions-hugo-auto-deploy-safe-config/</link>
      <pubDate>Mon, 16 Feb 2026 11:59:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/16/github-actions-hugo-auto-deploy-safe-config/</guid>
      <description>&lt;p&gt;Your local &lt;code&gt;hugo&lt;/code&gt; build works, but GitHub Actions fails randomly. Classic.
The root cause is usually not the workflow syntax. It is environment drift, missing permissions, and unstable dependencies.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude Code &#43; Codex for Multi-Model Development: Cost, Speed, and Quality (Practical Workflow)</title>
      <link>https://www.mfun.ink/en/2026/02/15/claude-code-codex-multi-model-collaboration/</link>
      <pubDate>Sun, 15 Feb 2026 10:30:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/15/claude-code-codex-multi-model-collaboration/</guid>
      <description>&lt;p&gt;If you still use one model for everything, you usually pay in one of three ways: higher cost, slower delivery, or more rework.&lt;/p&gt;
&lt;p&gt;A better setup is role-based collaboration: Claude Code for planning and quality gates, Codex for fast implementation and batch edits.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go Memory Leak Triage in Production: pprof &#43; FlameGraph Step by Step</title>
      <link>https://www.mfun.ink/en/2026/02/14/go-memory-leak-pprof-flamegraph/</link>
      <pubDate>Sat, 14 Feb 2026 21:20:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/14/go-memory-leak-pprof-flamegraph/</guid>
      <description>&lt;p&gt;If your Go service RSS keeps climbing, drops after restart, then climbs again, you likely have a memory retention problem (or an actual leak pattern).&lt;/p&gt;
&lt;p&gt;Do not start with random code edits. Run a clean evidence chain: &lt;strong&gt;metrics trend check → pprof snapshots → FlameGraph comparison → object growth path → regression validation&lt;/strong&gt;.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses API &#43; MCP in Practice: From Function Calling to Agent Workflows</title>
      <link>https://www.mfun.ink/en/2026/02/11/openai-responses-api-mcp-agent-workflow/</link>
      <pubDate>Wed, 11 Feb 2026 23:15:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/11/openai-responses-api-mcp-agent-workflow/</guid>
      <description>&lt;p&gt;If you&amp;rsquo;ve already used function calling but keep writing glue code for every non-trivial task, you&amp;rsquo;re likely at the point where &lt;strong&gt;Responses API + MCP&lt;/strong&gt; makes more sense.&lt;/p&gt;
&lt;p&gt;This guide is practical: how to move from single tool calls to a scalable agent workflow where retrieval, execution, validation, and write-back follow a consistent structure.&lt;/p&gt;</description>
    </item>
    <item>
      <title>WSL2 &#43; Docker Network Troubleshooting: Fix DNS Timeouts and Image Pull Failures</title>
      <link>https://www.mfun.ink/en/2026/02/11/wsl2-docker-network-troubleshooting/</link>
      <pubDate>Wed, 11 Feb 2026 22:50:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/11/wsl2-docker-network-troubleshooting/</guid>
      <description>&lt;p&gt;If your &lt;strong&gt;WSL2 + Docker&lt;/strong&gt; setup suddenly fails with &lt;code&gt;docker pull&lt;/code&gt; timeouts, &lt;code&gt;Temporary failure in name resolution&lt;/code&gt;, or containers that start but cannot access the internet, don&amp;rsquo;t nuke your environment yet. Most cases are recoverable in 15 minutes.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical sequence: identify whether the fault is DNS, proxy/VPN, virtual NIC, or Docker daemon config—then apply the smallest fix first.&lt;/p&gt;</description>
    </item>
    <item>
      <title>MCP in Practice: Automated Browser Debugging with DevTools MCP</title>
      <link>https://www.mfun.ink/en/2026/02/10/mcp-devtools-mcp-auto-browser-debugging/</link>
      <pubDate>Tue, 10 Feb 2026 22:10:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/10/mcp-devtools-mcp-auto-browser-debugging/</guid>
      <description>&lt;p&gt;MCP sounds great in theory, but real-world setup often fails at browser debugging: AI cannot reach Chrome, cannot inspect network requests, and cannot collect performance traces. This guide gives you a copy-paste setup that works on Windows + WSL.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Claude vs Codex vs OpenAI CLI: Which Workflow Actually Improves Dev Productivity</title>
      <link>https://www.mfun.ink/en/2026/02/09/claude-codex-openai-cli-workflow-comparison/</link>
      <pubDate>Mon, 09 Feb 2026 23:28:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/09/claude-codex-openai-cli-workflow-comparison/</guid>
      <description>&lt;p&gt;If you use AI as a chatbot only, these tools feel similar. In real engineering workflows, they behave very differently.&lt;/p&gt;
&lt;p&gt;My conclusion first: &lt;strong&gt;use Codex for repo-native coding changes, Claude for deep reasoning and long-form planning, and OpenAI CLI for standardized automation pipelines.&lt;/strong&gt;&lt;/p&gt;</description>
    </item>
    <item>
      <title>Software Engineering History: From Software Crisis to AI Co-Creation</title>
      <link>https://www.mfun.ink/en/2025/12/31/software-engineering-history-ai/</link>
      <pubDate>Wed, 31 Dec 2025 12:30:15 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2025/12/31/software-engineering-history-ai/</guid>
      <description>&lt;p&gt;Large language models are changing how we clarify requirements, generate code, and design tests, and many teams feel that traditional workflows are being rewritten. To understand what is truly changing, it helps to place today inside the longer history of software engineering.&lt;/p&gt;
&lt;p&gt;This article walks through the major stages of software engineering and ends with the AI-era variables and a simple checklist so you can map your current problems to the right time scale.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Configure Chrome DevTools MCP in WSL</title>
      <link>https://www.mfun.ink/en/2025/12/28/chrome-devtools-mcp-wsl/</link>
      <pubDate>Sun, 28 Dec 2025 22:26:08 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2025/12/28/chrome-devtools-mcp-wsl/</guid>
      <description>&lt;p&gt;Chrome DevTools MCP lets MCP clients connect to Chrome&amp;rsquo;s remote debugging endpoint. Because WSL2 and Windows are isolated at the network layer, you need port forwarding and a firewall rule. The commands below are split into clear steps.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Installing Cursor MCP Servers in WSL for a Seamless Dev Experience</title>
      <link>https://www.mfun.ink/en/2025/05/13/installing-cursor-mcp-in-wsl/</link>
      <pubDate>Tue, 13 May 2025 23:17:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2025/05/13/installing-cursor-mcp-in-wsl/</guid>
      <description>&lt;p&gt;For developers who love the power of Linux tools but work on Windows, the Windows Subsystem for Linux (WSL) is a game-changer. Cursor, the AI-first code editor, can further enhance this setup by integrating with Model Context Protocol (MCP) servers. Running these MCP servers directly within your WSL environment keeps your development workflow clean and consolidated. This guide will walk you through configuring Cursor to use MCP servers running in WSL.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Complete Guide to Migrating from Hexo to Hugo</title>
      <link>https://www.mfun.ink/en/2025/05/04/hexo-to-hugo-migration/</link>
      <pubDate>Sun, 04 May 2025 13:30:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2025/05/04/hexo-to-hugo-migration/</guid>
      <description>Detailed step-by-step instructions for smoothly migrating your Hexo blog to Hugo, including article conversion, multilingual setup, and metadata preservation</description>
    </item>
    <item>
      <title>About Me</title>
      <link>https://www.mfun.ink/en/about/</link>
      <pubDate>Sat, 03 May 2025 12:00:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/about/</guid>
      <description>&lt;h2 id=&#34;about-me&#34;&gt;About Me&lt;/h2&gt;
&lt;p&gt;I&amp;rsquo;m Mengboy. I work a lot on backend systems, cloud-native tooling, and automation, and I turn hard-earned debugging experience into reusable notes.&lt;/p&gt;
&lt;p&gt;This blog is not about generic theory. It&amp;rsquo;s about &lt;strong&gt;real problems + executable solutions&lt;/strong&gt;: commands you can run, configs you can apply, and workflows you can reuse.&lt;/p&gt;
&lt;h2 id=&#34;what-i-write-here&#34;&gt;What I Write Here&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Engineering Practice&lt;/strong&gt;: troubleshooting and performance tuning for Go, Redis, Nginx, Docker, Linux/WSL&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI in Production Workflows&lt;/strong&gt;: MCP, automation pipelines, and integrating AI into day-to-day development&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Deployment &amp;amp; Ops&lt;/strong&gt;: stable release strategies, rollback safety, and production-minded setup&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reusable Methods&lt;/strong&gt;: decision frameworks, checklists, and templates that save time&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;tech-preferences&#34;&gt;Tech Preferences&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Backend&lt;/strong&gt;: Go, Python&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Infrastructure&lt;/strong&gt;: Docker, Kubernetes, Linux&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Data Layer&lt;/strong&gt;: MySQL, Redis, MongoDB&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Engineering Habits&lt;/strong&gt;: incremental delivery, scripting-first, docs-first&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id=&#34;writing-principles&#34;&gt;Writing Principles&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;Conclusion first, then explanation.&lt;/li&gt;
&lt;li&gt;Focus on why failures happen and how to locate them fast.&lt;/li&gt;
&lt;li&gt;Include copy-paste-ready commands and configuration whenever possible.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If a post helps you solve a real issue faster, it has done its job.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
