<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Go on Mengboy Tech Notes</title>
    <link>https://www.mfun.ink/en/categories/go/</link>
    <description>Recent content in Go on Mengboy Tech Notes</description>
    <generator>Hugo -- 0.156.0</generator>
    <language>en</language>
    <lastBuildDate>Mon, 02 Mar 2026 12:44:00 +0000</lastBuildDate>
    <atom:link href="https://www.mfun.ink/en/categories/go/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Taming Context Explosion in OpenAI Assistants/Responses with Go: Truncation, Summary Backfill, and Cost Caps</title>
      <link>https://www.mfun.ink/en/2026/03/02/openai-assistants-responses-go/</link>
      <pubDate>Mon, 02 Mar 2026 12:44:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/02/openai-assistants-responses-go/</guid>
      <description>&lt;p&gt;Long-running agent sessions usually fail the same way: context keeps growing, latency spikes, costs blow up, and answer quality gets worse.&lt;/p&gt;
&lt;p&gt;That is rarely a model-quality issue. It is almost always missing context governance.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go &#43; OpenAI API Timeout Troubleshooting: DNS, TLS, Proxy, and Connection Pool</title>
      <link>https://www.mfun.ink/en/2026/03/02/go-openai-api-timeout-troubleshooting-dns-tls-proxy-connection-pool/</link>
      <pubDate>Mon, 02 Mar 2026 01:12:10 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/03/02/go-openai-api-timeout-troubleshooting-dns-tls-proxy-connection-pool/</guid>
      <description>&lt;p&gt;When OpenAI API calls start timing out in production, the real problem is usually not “OpenAI is down.”&lt;/p&gt;
&lt;p&gt;The real problem is you don’t know which hop is failing: DNS, TLS handshake, proxy path, or your own connection pool.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Agents SDK with Go: Tool Calling, Session Memory, and Error Recovery</title>
      <link>https://www.mfun.ink/en/2026/02/25/openai-agents-sdk-go-tool-calling/</link>
      <pubDate>Wed, 25 Feb 2026 01:18:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/25/openai-agents-sdk-go-tool-calling/</guid>
      <description>&lt;p&gt;Most teams can connect an LLM in a demo. The real pain starts in production: multi-step tasks, flaky tool calls, unclear retries, and rising cost.&lt;/p&gt;
&lt;p&gt;This guide gives you a pragmatic Go-first blueprint for shipping an Agent workflow that can survive real incidents.&lt;/p&gt;</description>
    </item>
    <item>
      <title>OpenAI Responses API Streaming in Go: Timeouts, Retries, and Observability</title>
      <link>https://www.mfun.ink/en/2026/02/23/openai-responses-api-streaming-go-timeout-retry-observability/</link>
      <pubDate>Mon, 23 Feb 2026 01:15:00 +0000</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/23/openai-responses-api-streaming-go-timeout-retry-observability/</guid>
      <description>&lt;p&gt;Production streaming fails in two predictable ways: users wait while the stream silently drops, and your logs say &amp;ldquo;timeout&amp;rdquo; without telling you where it actually broke.&lt;/p&gt;
&lt;p&gt;This guide gives you a practical Go pattern for OpenAI Responses API streaming with strict timeout boundaries, safe retries, and useful telemetry.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Go Memory Leak Triage in Production: pprof &#43; FlameGraph Step by Step</title>
      <link>https://www.mfun.ink/en/2026/02/14/go-memory-leak-pprof-flamegraph/</link>
      <pubDate>Sat, 14 Feb 2026 21:20:00 +0800</pubDate>
      <guid>https://www.mfun.ink/en/2026/02/14/go-memory-leak-pprof-flamegraph/</guid>
      <description>&lt;p&gt;If your Go service’s RSS (resident set size) keeps climbing, drops after a restart, then climbs again, you likely have a memory retention problem (or an actual leak pattern).&lt;/p&gt;
&lt;p&gt;Do not start with random code edits. Run a clean evidence chain: &lt;strong&gt;metrics trend check → pprof snapshots → FlameGraph comparison → object growth path → regression validation&lt;/strong&gt;.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
