<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"
    xmlns:content="http://purl.org/rss/1.0/modules/content/"
    xmlns:wfw="http://wellformedweb.org/CommentAPI/"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:atom="http://www.w3.org/2005/Atom"
    xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
    xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
    xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
    >
<channel>
    <title>Claude Generations</title>
    <atom:link href="https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/rss.xml" rel="self" type="application/rss+xml" />
    <link>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/</link>
    <description><![CDATA[
    
    ]]></description>
    
    
    <item>
        <title>OpenAI &#x4E0E; Anthropic API &#x5BF9;&#x6BD4;&#xFF1A;Chat Completions &#x4E0E; Messages &#x63A5;&#x53E3;&#x5F00;&#x53D1;&#x8005;&#x6307;&#x5357;</title>
        <link>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/15202FA6-2CC3-402F-8A28-2DA8267BAD84/</link>
        <guid>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/15202FA6-2CC3-402F-8A28-2DA8267BAD84/</guid>
        <pubDate>Wed, 22 Apr 2026 05:30:00 -0700</pubDate>
        
        
        <description><![CDATA[
            <p>如果你正在开发一个调用大语言模型的应用，那么你很可能需要对接 OpenAI 的 <code>/v1/chat/completions</code> 或 Anthropic 的 <code>/v1/messages</code> 接口。虽然两者的基本目标相同——将对话发送给 LLM 并获取回复——但它们在认证方式、请求结构、响应格式、工具调用、流式传输等方面存在显著差异。</p> 
<p>本文详细梳理了两者的每一个主要区别，帮助你在选择服务商或构建统一抽象层时做出明智的决策。</p> 
<h2>认证方式</h2> 
<table> 
 <thead>
  <tr>
   <th>方面</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>认证头</td>
   <td><code>Authorization: Bearer sk-...</code></td>
   <td><code>x-api-key: sk-ant-...</code></td>
  </tr> 
  <tr>
   <td>版本头</td>
   <td>无需提供</td>
   <td><code>anthropic-version: 2023-06-01</code>（必填）</td>
  </tr> 
  <tr>
   <td>组织/项目头</td>
   <td><code>OpenAI-Organization</code>、<code>OpenAI-Project</code>（可选）</td>
   <td>无</td>
  </tr> 
 </tbody> 
</table> 
<p>Anthropic 强制要求 <code>anthropic-version</code> 头来锁定 API 行为版本，将 API 版本管理与模型名称解耦。OpenAI 则通过模型名称和接口变更来实现版本控制。</p> 
<h2>系统提示词</h2> 
<p>这是两者之间最直观的架构差异之一。</p> 
<p><strong>OpenAI</strong> 将系统提示词放在 <code>messages</code> 数组<em>内部</em>：</p> 
<pre><code>{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"}
  ]
}</code></pre> 
<p><strong>Anthropic</strong> 使用独立的顶层 <code>system</code> 参数，与消息数组完全分离：</p> 
<pre><code>{
  "system": "You are a helpful assistant.",
  "messages": [
    {"role": "user", "content": "Hello"}
  ]
}</code></pre> 
<p>Anthropic 的 <code>system</code> 字段还支持内容块数组（不仅仅是字符串），从而可以通过 <code>cache_control</code> 对系统提示词进行缓存。</p> 
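<p>作为示意（函数名为本文自拟，并非任一官方 SDK 的接口），把 OpenAI 形式的请求体转换为 Anthropic 形式，本质上就是把 system 消息从数组中抽取到顶层字段：</p> 
<pre><code>def to_anthropic(payload):
    # 从消息数组中收集所有 system 消息
    system_parts = [m["content"] for m in payload["messages"]
                    if m["role"] == "system"]
    # 其余消息保留为 messages 数组
    out = {"messages": [m for m in payload["messages"]
                        if m["role"] != "system"]}
    if system_parts:
        out["system"] = "\n\n".join(system_parts)
    return out</code></pre> 
<p>反向转换则是逆操作：把 <code>system</code> 字符串作为 <code>{"role": "system"}</code> 条目插回消息数组开头。</p> 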
<h2>消息角色与排序</h2> 
<table> 
 <thead>
  <tr>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td><code>system</code> / <code>developer</code>（o 系列模型）</td>
   <td>无（system 为顶层参数）</td>
  </tr> 
  <tr>
   <td><code>user</code></td>
   <td><code>user</code></td>
  </tr> 
  <tr>
   <td><code>assistant</code></td>
   <td><code>assistant</code></td>
  </tr> 
  <tr>
   <td><code>tool</code>（用于工具结果）</td>
   <td>无（工具结果放在 <code>user</code> 消息内部）</td>
  </tr> 
 </tbody> 
</table> 
<p>OpenAI 有 4 种不同角色，Anthropic 只有 2 种（<code>user</code> 和 <code>assistant</code>）。Anthropic <strong>严格要求消息交替排列</strong>——不能连续出现两条相同角色的消息。OpenAI 则更灵活，会自动拼接同角色的连续消息。</p> 
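<p>把 OpenAI 风格的历史消息发给 Anthropic 之前，可以先自行合并连续的同角色消息来复现这种宽松行为。以下是假设消息内容均为字符串的最简示意：</p> 
<pre><code>def merge_alternating(messages):
    # 将连续的同角色消息合并为一条，以满足 Anthropic 的严格交替要求
    merged = []
    for m in messages:
        if merged and merged[-1]["role"] == m["role"]:
            merged[-1] = {"role": m["role"],
                          "content": merged[-1]["content"] + "\n\n" + m["content"]}
        else:
            merged.append(dict(m))
    return merged</code></pre> 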
<h2>响应格式</h2> 
<p><strong>OpenAI</strong> 将响应包装在 <code>choices</code> 数组中（支持通过 <code>n</code> 参数生成多个补全）：</p> 
<pre><code>{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "Hello!"},
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  }
}</code></pre> 
<p><strong>Anthropic</strong> 直接返回类型化的内容块数组：</p> 
<pre><code>{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "content": [
    {"type": "text", "text": "Hello!"}
  ],
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 13,
    "output_tokens": 7,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0
  }
}</code></pre> 
<table> 
 <thead>
  <tr>
   <th>方面</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>内容类型</td>
   <td><code>message.content</code> 为字符串</td>
   <td><code>content</code> 为类型化块数组</td>
  </tr> 
  <tr>
   <td>停止标识</td>
   <td><code>finish_reason: "stop"</code></td>
   <td><code>stop_reason: "end_turn"</code></td>
  </tr> 
  <tr>
   <td>长度停止</td>
   <td><code>"length"</code></td>
   <td><code>"max_tokens"</code></td>
  </tr> 
  <tr>
   <td>工具调用停止</td>
   <td><code>"tool_calls"</code></td>
   <td><code>"tool_use"</code></td>
  </tr> 
  <tr>
   <td>多个补全</td>
   <td>支持（通过 <code>n</code> 参数）</td>
   <td>不支持（始终返回 1 个）</td>
  </tr> 
  <tr>
   <td>总 token 数</td>
   <td>直接提供</td>
   <td>需自行计算</td>
  </tr> 
  <tr>
   <td>缓存统计</td>
   <td>响应中无</td>
   <td>内置（<code>cache_creation_input_tokens</code>、<code>cache_read_input_tokens</code>）</td>
  </tr> 
 </tbody> 
</table> 
<h2>关键参数</h2> 
<table> 
 <thead>
  <tr>
   <th>参数</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>最大输出 token</td>
   <td><code>max_completion_tokens</code>（可选）</td>
   <td><code>max_tokens</code>（<strong>必填</strong>）</td>
  </tr> 
  <tr>
   <td>温度</td>
   <td>0–2（默认 1）</td>
   <td>0–1（默认 1）</td>
  </tr> 
  <tr>
   <td>Top P</td>
   <td><code>top_p</code></td>
   <td><code>top_p</code></td>
  </tr> 
  <tr>
   <td>Top K</td>
   <td>不支持</td>
   <td><code>top_k</code></td>
  </tr> 
  <tr>
   <td>频率惩罚</td>
   <td><code>frequency_penalty</code>（-2 到 2）</td>
   <td>不支持</td>
  </tr> 
  <tr>
   <td>存在惩罚</td>
   <td><code>presence_penalty</code>（-2 到 2）</td>
   <td>不支持</td>
  </tr> 
  <tr>
   <td>停止序列</td>
   <td><code>stop</code>（字符串或数组）</td>
   <td><code>stop_sequences</code>（数组）</td>
  </tr> 
  <tr>
   <td>种子（可复现性）</td>
   <td><code>seed</code></td>
   <td>不支持</td>
  </tr> 
  <tr>
   <td>对数概率</td>
   <td><code>logprobs</code></td>
   <td>不支持</td>
  </tr> 
  <tr>
   <td>用户标识</td>
   <td><code>user</code></td>
   <td><code>metadata.user_id</code></td>
  </tr> 
  <tr>
   <td>扩展思考</td>
   <td>无（o 系列内部推理）</td>
   <td><code>thinking</code> 对象，含 <code>budget_tokens</code></td>
  </tr> 
 </tbody> 
</table> 
<p>迁移时最容易踩的两个坑：Anthropic <strong>强制要求</strong>每个请求都设置 <code>max_tokens</code>（OpenAI 默认使用模型最大值），以及 Anthropic 的温度范围上限为 1.0，而 OpenAI 可达 2.0。</p> 
<h2>工具调用 / 函数调用</h2> 
<p>这是两个 API 之间最大的架构分歧之一。</p> 
<h3>工具定义</h3> 
<p><strong>OpenAI</strong> 使用 <code>type</code>/<code>function</code> 包装结构：</p> 
<pre><code>{
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get weather for a location",
      "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"]
      }
    }
  }]
}</code></pre> 
<p><strong>Anthropic</strong> 采用更扁平的结构，使用 <code>input_schema</code>：</p> 
<pre><code>{
  "tools": [{
    "name": "get_weather",
    "description": "Get weather for a location",
    "input_schema": {
      "type": "object",
      "properties": {"location": {"type": "string"}},
      "required": ["location"]
    }
  }]
}</code></pre> 
<h3>响应中的工具调用</h3> 
<p>OpenAI 通过独立的 <code>tool_calls</code> 数组返回工具调用，参数为需要解析的 <strong>JSON 字符串</strong>：</p> 
<pre><code>"tool_calls": [{
  "id": "call_abc123",
  "type": "function",
  "function": {
    "name": "get_weather",
    "arguments": "{\"location\":\"Paris\"}"
  }
}]</code></pre> 
<p>Anthropic 以内容块形式返回工具调用，<code>input</code> 为已解析的 <strong>JSON 对象</strong>：</p> 
<pre><code>"content": [{
  "type": "tool_use",
  "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
  "name": "get_weather",
  "input": {"location": "Paris"}
}]</code></pre> 
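<p>因此统一的消费端需要在一个分支调用 <code>json.loads</code>，而另一个分支直接取对象。一种把两者都展平为 <code>(id, name, args)</code> 元组的写法（示意代码）：</p> 
<pre><code>import json

def normalize_tool_calls(response):
    if "choices" in response:
        # OpenAI：arguments 是 JSON 字符串，需要解析
        for call in response["choices"][0]["message"].get("tool_calls", []):
            fn = call["function"]
            yield call["id"], fn["name"], json.loads(fn["arguments"])
    else:
        # Anthropic：input 已经是解析好的对象
        for block in response["content"]:
            if block["type"] == "tool_use":
                yield block["id"], block["name"], block["input"]</code></pre> 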
<h3>返回工具结果</h3> 
<p>OpenAI 使用专用的 <code>tool</code> 角色：</p> 
<pre><code>{"role": "tool", "tool_call_id": "call_abc123", "content": "Sunny, 22C"}</code></pre> 
<p>Anthropic 将工具结果作为内容块放在 <code>user</code> 消息中，并提供显式的 <code>is_error</code> 标志：</p> 
<pre><code>{
  "role": "user",
  "content": [{
    "type": "tool_result",
    "tool_use_id": "toolu_01D7...",
    "content": "Sunny, 22C",
    "is_error": false
  }]
}</code></pre> 
<h3>工具选择</h3> 
<table> 
 <thead>
  <tr>
   <th>行为</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>模型自行决定</td>
   <td><code>"auto"</code></td>
   <td><code>{"type": "auto"}</code></td>
  </tr> 
  <tr>
   <td>必须使用工具</td>
   <td><code>"required"</code></td>
   <td><code>{"type": "any"}</code></td>
  </tr> 
  <tr>
   <td>指定工具</td>
   <td><code>{"type": "function", "name": "X"}</code></td>

   <td><code>{"type": "tool", "name": "X"}</code></td>
  </tr> 
  <tr>
   <td>不使用工具</td>
   <td><code>"none"</code></td>
   <td><code>{"type": "none"}</code></td>
  </tr> 
 </tbody> 
</table> 
<h2>视觉 / 多模态</h2> 
<p><strong>OpenAI</strong> 使用 data URL 方案编码 base64 图片：</p> 
<pre><code>{
  "type": "image_url",
  "image_url": {
    "url": "data:image/jpeg;base64,{BASE64_DATA}",
    "detail": "high"
  }
}</code></pre> 
<p><strong>Anthropic</strong> 使用独立字段分别指定媒体类型和数据：</p> 
<pre><code>{
  "type": "image",
  "source": {
    "type": "base64",
    "media_type": "image/jpeg",
    "data": "BASE64_DATA"
  }
}</code></pre> 
<table> 
 <thead>
  <tr>
   <th>方面</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>清晰度控制</td>
   <td><code>detail: "high"/"low"/"auto"</code></td>
   <td>无</td>
  </tr> 
  <tr>
   <td>PDF 支持</td>
   <td>Chat Completions 中不支持</td>
   <td>原生 <code>document</code> 内容块</td>
  </tr> 
  <tr>
   <td>音频支持</td>
   <td>支持</td>
   <td>不支持</td>
  </tr> 
 </tbody> 
</table> 
<p>Anthropic 拥有独特的一等公民 <code>document</code> 块类型，支持 PDF 和文本文件，并可选启用引用功能——这是 OpenAI Chat Completions 接口所不具备的。</p> 
<h2>流式传输</h2> 
<p>两个 API 都使用 Server-Sent Events，但事件结构截然不同。</p> 
<p><strong>OpenAI</strong> 使用扁平的无名称 <code>data:</code> 行流，以 <code>data: [DONE]</code> 结束：</p> 
<pre><code>data: {"choices":[{"delta":{"content":"Hello"}}]}
data: {"choices":[{"delta":{},"finish_reason":"stop"}]}
data: [DONE]</code></pre> 
<p><strong>Anthropic</strong> 使用命名事件类型，具有结构化的生命周期：</p> 
<pre><code>event: message_start
data: {"type":"message_start","message":{...}}

event: content_block_start
data: {"type":"content_block_start","index":0,...}

event: content_block_delta
data: {"type":"content_block_delta","delta":{"type":"text_delta","text":"Hello"}}

event: content_block_stop
data: {"type":"content_block_stop","index":0}

event: message_stop
data: {"type":"message_stop"}</code></pre> 
<p>Anthropic 的流式传输更为精细，包含 6 种以上的命名事件类型，覆盖完整的消息生命周期。这使得处理混合内容（文本与工具调用交织）更加方便，但增加了解析复杂度。OpenAI 的方式更简洁——本质上只有一种事件类型加一个终止标记。</p> 
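<p>只关心文本的客户端仍可以统一处理两种流：对每条 SSE 消息提取文本增量即可。下面的示意假设事件名（OpenAI 为 <code>None</code>）与 <code>data:</code> 负载已被拆分：</p> 
<pre><code>import json

def text_delta(event, data):
    if data == "[DONE]":                  # OpenAI 的终止标记
        return None
    payload = json.loads(data)
    if event is None:                     # OpenAI 数据块
        return payload["choices"][0]["delta"].get("content")
    if event == "content_block_delta":    # Anthropic 文本增量事件
        delta = payload["delta"]
        if delta["type"] == "text_delta":
            return delta["text"]
    return None                           # 其余生命周期事件</code></pre> 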
<h2>错误处理</h2> 
<p><strong>OpenAI</strong> 包含 <code>param</code> 字段，指明是哪个参数导致了错误：</p> 
<pre><code>{
  "error": {
    "message": "Incorrect API key",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_api_key"
  }
}</code></pre> 
<p><strong>Anthropic</strong> 使用基于 <code>type</code> 的辨别模式：</p> 
<pre><code>{
  "type": "error",
  "error": {
    "type": "authentication_error",
    "message": "Invalid API key"
  }
}</code></pre> 
<p>Anthropic 将 <code>overloaded_error</code> 与 <code>api_error</code> 区分开来，使得针对容量问题实现退避逻辑更加容易。</p> 
<h2>速率限制</h2> 
<table> 
 <thead>
  <tr>
   <th>方面</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>头部前缀</td>
   <td><code>x-ratelimit-</code>（小写）</td>
   <td><code>anthropic-ratelimit-</code>（小写，带厂商前缀）</td>
  </tr> 
  <tr>
   <td>重置格式</td>
   <td>相对时长（<code>1s</code>、<code>6m0s</code>）</td>
   <td>ISO 8601 时间戳</td>
  </tr> 
  <tr>
   <td>重试头</td>
   <td>非标准</td>
   <td><code>Retry-After</code></td>
  </tr> 
 </tbody> 
</table> 
<p>两者在遇到速率限制时都返回 HTTP 429，并建议使用指数退避加随机抖动。</p> 
<h2>各自独有的功能</h2> 
<h3>OpenAI 独有</h3> 
<ul> 
 <li><strong>多个补全</strong>——<code>n</code> 参数可在单次请求中生成 N 个替代回复</li> 
 <li><strong>对数概率</strong>——<code>logprobs</code> 返回 token 级别的概率信息</li> 
 <li><strong>结构化输出</strong>——<code>response_format</code> 配合 <code>json_schema</code> 保证 JSON 结构</li> 
 <li><strong>频率/存在惩罚</strong>用于控制重复</li> 
 <li><strong>音频输入/输出</strong>多模态消息支持</li> 
 <li><strong>种子参数</strong>用于可复现输出</li> 
</ul> 
<h3>Anthropic 独有</h3> 
<ul> 
 <li><strong>扩展思考</strong>——显式 <code>thinking</code> 参数配合 <code>budget_tokens</code>，返回可见的思考块</li> 
 <li><strong>提示缓存</strong>——内容块上的 <code>cache_control</code> 支持 TTL 选项，使用量数据中包含缓存命中/未命中统计</li> 
 <li><strong>PDF/文档处理</strong>——原生 <code>document</code> 内容块，支持引用</li> 
 <li><strong>Top K 采样</strong>——<code>top_k</code> 参数控制 token 选择</li> 
 <li><strong>内置服务端工具</strong>——<code>web_search</code>、<code>code_execution</code>、<code>text_editor</code> 等运行在 Anthropic 基础设施上</li> 
 <li><strong>工具错误标志</strong>——工具结果上的 <code>is_error</code> 字段</li> 
</ul> 
<h2>SDK 快速参考</h2> 
<pre><code># OpenAI
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)

# Anthropic
import anthropic
client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.content[0].text)</code></pre> 
<h2>迁移清单</h2> 
<p>如果你正在两者之间切换，或者构建统一的抽象层，以下是需要注意的关键事项：</p> 
<ol> 
 <li><strong>移动系统提示词</strong>——从 <code>messages</code> 内部（OpenAI）移到顶层 <code>system</code> 字段（Anthropic），反之亦然</li> 
 <li><strong>设置 <code>max_tokens</code></strong>——在 Anthropic 中为必填，在 OpenAI 中为可选</li> 
 <li><strong>限制温度范围</strong>——Anthropic 上限为 1.0；OpenAI 允许到 2.0</li> 
 <li><strong>重构工具定义</strong>——<code>parameters</code> vs <code>input_schema</code>，包装对象差异</li> 
 <li><strong>处理工具结果的方式不同</strong>——<code>tool</code> 角色（OpenAI）vs <code>user</code> 消息中的内容块（Anthropic）</li> 
 <li><strong>解析工具调用参数</strong>——JSON 字符串（OpenAI）vs 已解析对象（Anthropic）</li> 
 <li><strong>消息交替排列</strong>——Anthropic 强制要求，OpenAI 灵活处理</li> 
 <li><strong>更新认证头</strong>——<code>Authorization: Bearer</code> vs <code>x-api-key</code> + <code>anthropic-version</code></li> 
 <li><strong>适配流式解析器</strong>——扁平数据块 vs 命名生命周期事件</li> 
 <li><strong>解包响应</strong>——<code>choices[0].message.content</code> vs <code>content[0].text</code></li> 
</ol> 
<p>两个 API 都很强大且设计精良，但它们体现了不同的设计哲学。OpenAI 的 Chat Completions API 倾向于灵活性和向后兼容，而 Anthropic 的 Messages API 则更注重显式性和结构化数据。理解这些差异将帮助你无论选择哪家服务商都能构建出稳健的集成方案。</p>
        ]]></description>
    </item>
    
    <item>
        <title>OpenAI vs Anthropic: A Developer&#x27;s Guide to Chat Completions and Messages APIs</title>
        <link>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/0E4B45F4-7A16-441C-AFE7-1EEE1C6A79DF/</link>
        <guid>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/0E4B45F4-7A16-441C-AFE7-1EEE1C6A79DF/</guid>
        <pubDate>Wed, 22 Apr 2026 05:00:00 -0700</pubDate>
        
        
        <description><![CDATA[
            <p>If you're building an application that talks to a large language model, chances are you'll be integrating with either OpenAI's <code>/v1/chat/completions</code> or Anthropic's <code>/v1/messages</code> endpoint. While both serve the same fundamental purpose—sending a conversation to an LLM and getting a response—they differ in meaningful ways across authentication, request structure, response format, tool use, streaming, and more.</p> 
<p>This guide covers every major difference so you can make informed decisions when choosing between them or building an abstraction layer that supports both.</p> 
<h2>Authentication</h2> 
<table> 
 <thead>
  <tr>
   <th>Aspect</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>Auth header</td>
   <td><code>Authorization: Bearer sk-...</code></td>
   <td><code>x-api-key: sk-ant-...</code></td>
  </tr> 
  <tr>
   <td>Version header</td>
   <td>None required</td>
   <td><code>anthropic-version: 2023-06-01</code> (mandatory)</td>
  </tr> 
  <tr>
   <td>Org/project headers</td>
   <td><code>OpenAI-Organization</code>, <code>OpenAI-Project</code> (optional)</td>
   <td>N/A</td>
  </tr> 
 </tbody> 
</table> 
<p>Anthropic's mandatory <code>anthropic-version</code> header pins the API behavior to a specific version, decoupling API versioning from model names. OpenAI versions through model names and endpoint changes instead.</p> 
<h2>System Prompts</h2> 
<p>This is one of the most visible architectural differences.</p> 
<p><strong>OpenAI</strong> places the system prompt <em>inside</em> the <code>messages</code> array:</p> 
<pre><code>{
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"}
  ]
}</code></pre> 
<p><strong>Anthropic</strong> uses a dedicated top-level <code>system</code> parameter, separate from the messages array:</p> 
<pre><code>{
  "system": "You are a helpful assistant.",
  "messages": [
    {"role": "user", "content": "Hello"}
  ]
}</code></pre> 
<p>Anthropic's <code>system</code> field also accepts an array of content blocks (not just a string), which enables features like prompt caching on the system prompt via <code>cache_control</code>.</p> 
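<p>As a concrete sketch (the helper name is mine, not part of either SDK), converting an OpenAI-style payload to the Anthropic shape is just a matter of extracting the system messages into the top-level field:</p> 
<pre><code>def to_anthropic(payload):
    # Collect any system messages from the array...
    system_parts = [m["content"] for m in payload["messages"]
                    if m["role"] == "system"]
    # ...and keep everything else as the messages array
    out = {"messages": [m for m in payload["messages"]
                        if m["role"] != "system"]}
    if system_parts:
        out["system"] = "\n\n".join(system_parts)
    return out</code></pre> 
<p>Going the other way is the reverse operation: prepend the <code>system</code> string to the messages array as a <code>{"role": "system"}</code> entry.</p> 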
<h2>Message Roles and Ordering</h2> 
<table> 
 <thead>
  <tr>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td><code>system</code> / <code>developer</code> (o-series)</td>
   <td>N/A (system is top-level)</td>
  </tr> 
  <tr>
   <td><code>user</code></td>
   <td><code>user</code></td>
  </tr> 
  <tr>
   <td><code>assistant</code></td>
   <td><code>assistant</code></td>
  </tr> 
  <tr>
   <td><code>tool</code> (for tool results)</td>
   <td>N/A (tool results go inside <code>user</code> messages)</td>
  </tr> 
 </tbody> 
</table> 
<p>OpenAI has 4 distinct roles. Anthropic has only 2 (<code>user</code> and <code>assistant</code>). Anthropic <strong>strictly requires alternating</strong> user/assistant messages—you cannot have two consecutive messages of the same role. OpenAI is more flexible and will concatenate consecutive same-role messages.</p> 
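<p>When replaying OpenAI-style history against Anthropic, you can reproduce that lenient behavior yourself by collapsing same-role runs before sending. A minimal sketch, assuming plain string content:</p> 
<pre><code>def merge_alternating(messages):
    # Collapse consecutive messages with the same role into one,
    # satisfying Anthropic's strict alternation requirement
    merged = []
    for m in messages:
        if merged and merged[-1]["role"] == m["role"]:
            merged[-1] = {"role": m["role"],
                          "content": merged[-1]["content"] + "\n\n" + m["content"]}
        else:
            merged.append(dict(m))
    return merged</code></pre> 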
<h2>Response Format</h2> 
<p><strong>OpenAI</strong> wraps the response in a <code>choices</code> array (supporting the <code>n</code> parameter for multiple completions):</p> 
<pre><code>{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "Hello!"},
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 7,
    "total_tokens": 20
  }
}</code></pre> 
<p><strong>Anthropic</strong> returns content directly as an array of typed content blocks:</p> 
<pre><code>{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "content": [
    {"type": "text", "text": "Hello!"}
  ],
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 13,
    "output_tokens": 7,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0
  }
}</code></pre> 
<table> 
 <thead>
  <tr>
   <th>Aspect</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>Content type</td>
   <td><code>message.content</code> is a string</td>
   <td><code>content</code> is an array of typed blocks</td>
  </tr> 
  <tr>
   <td>Stop indicator</td>
   <td><code>finish_reason: "stop"</code></td>
   <td><code>stop_reason: "end_turn"</code></td>
  </tr> 
  <tr>
   <td>Length stop</td>
   <td><code>"length"</code></td>
   <td><code>"max_tokens"</code></td>
  </tr> 
  <tr>
   <td>Tool call stop</td>
   <td><code>"tool_calls"</code></td>
   <td><code>"tool_use"</code></td>
  </tr> 
  <tr>
   <td>Multiple completions</td>
   <td>Yes (via <code>n</code> parameter)</td>
   <td>No (always returns 1)</td>
  </tr> 
  <tr>
   <td>Total tokens</td>
   <td>Provided</td>
   <td>Must be calculated</td>
  </tr> 
  <tr>
   <td>Cache stats</td>
   <td>Not in response</td>
   <td>Built-in (<code>cache_creation_input_tokens</code>, <code>cache_read_input_tokens</code>)</td>
  </tr> 
 </tbody> 
</table> 
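<p>An abstraction layer ends up branching on these two shapes. As an illustration (non-streaming, text-only responses assumed), both can be reduced to the same <code>(text, total_tokens)</code> pair:</p> 
<pre><code>def extract_reply(response):
    if "choices" in response:
        # OpenAI: string content, total_tokens provided directly
        text = response["choices"][0]["message"]["content"]
        total = response["usage"]["total_tokens"]
    else:
        # Anthropic: typed blocks, total must be computed
        text = "".join(b["text"] for b in response["content"]
                       if b["type"] == "text")
        u = response["usage"]
        total = u["input_tokens"] + u["output_tokens"]
    return text, total</code></pre> 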
<h2>Key Parameters</h2> 
<table> 
 <thead>
  <tr>
   <th>Parameter</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>Max output tokens</td>
   <td><code>max_completion_tokens</code> (optional)</td>
   <td><code>max_tokens</code> (<strong>required</strong>)</td>
  </tr> 
  <tr>
   <td>Temperature</td>
   <td>0–2 (default 1)</td>
   <td>0–1 (default 1)</td>
  </tr> 
  <tr>
   <td>Top P</td>
   <td><code>top_p</code></td>
   <td><code>top_p</code></td>
  </tr> 
  <tr>
   <td>Top K</td>
   <td>Not available</td>
   <td><code>top_k</code></td>
  </tr> 
  <tr>
   <td>Frequency penalty</td>
   <td><code>frequency_penalty</code> (-2 to 2)</td>
   <td>Not available</td>
  </tr> 
  <tr>
   <td>Presence penalty</td>
   <td><code>presence_penalty</code> (-2 to 2)</td>
   <td>Not available</td>
  </tr> 
  <tr>
   <td>Stop sequences</td>
   <td><code>stop</code> (string or array)</td>
   <td><code>stop_sequences</code> (array)</td>
  </tr> 
  <tr>
   <td>Seed (reproducibility)</td>
   <td><code>seed</code></td>
   <td>Not available</td>
  </tr> 
  <tr>
   <td>Log probabilities</td>
   <td><code>logprobs</code></td>
   <td>Not available</td>
  </tr> 
  <tr>
   <td>User ID</td>
   <td><code>user</code></td>
   <td><code>metadata.user_id</code></td>
  </tr> 
  <tr>
   <td>Extended thinking</td>
   <td>N/A (o-series models reason internally)</td>
   <td><code>thinking</code> object with <code>budget_tokens</code></td>
  </tr> 
 </tbody> 
</table> 
<p>Two things often trip up developers migrating between them: Anthropic <strong>requires</strong> <code>max_tokens</code> in every request (OpenAI defaults to the model maximum), and Anthropic's temperature range caps at 1.0 while OpenAI goes up to 2.0.</p> 
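<p>A translation shim can absorb both pitfalls in one place. The sketch below targets Anthropic; the fallback <code>max_tokens</code> value of 1024 is an arbitrary choice for illustration, not a provider default:</p> 
<pre><code>def adapt_params(params, target):
    p = dict(params)
    if target == "anthropic":
        p.setdefault("max_tokens", 1024)      # required by Anthropic
        if "temperature" in p:
            p["temperature"] = min(p["temperature"], 1.0)  # cap at 1.0
        if "stop" in p:
            stop = p.pop("stop")
            p["stop_sequences"] = [stop] if isinstance(stop, str) else stop
        for k in ("frequency_penalty", "presence_penalty", "seed", "logprobs"):
            p.pop(k, None)                    # unsupported by Anthropic
    return p</code></pre> 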
<h2>Tool Use / Function Calling</h2> 
<p>This is one of the largest architectural divergences between the two APIs.</p> 
<h3>Tool Definition</h3> 
<p><strong>OpenAI</strong> wraps tools in a <code>type</code>/<code>function</code> structure:</p> 
<pre><code>{
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get weather for a location",
      "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"]
      }
    }
  }]
}</code></pre> 
<p><strong>Anthropic</strong> uses a flatter structure with <code>input_schema</code>:</p> 
<pre><code>{
  "tools": [{
    "name": "get_weather",
    "description": "Get weather for a location",
    "input_schema": {
      "type": "object",
      "properties": {"location": {"type": "string"}},
      "required": ["location"]
    }
  }]
}</code></pre> 
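<p>Since the schema itself is plain JSON Schema in both cases, converting a definition is just unwrapping. A possible helper:</p> 
<pre><code>def tool_to_anthropic(tool):
    # Unwrap OpenAI's type/function envelope into Anthropic's flat shape
    fn = tool["function"]
    return {"name": fn["name"],
            "description": fn.get("description", ""),
            "input_schema": fn["parameters"]}</code></pre> 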
<h3>Tool Calls in Response</h3> 
<p>OpenAI returns tool calls on a separate <code>tool_calls</code> array with arguments as a <strong>JSON string</strong> that must be parsed:</p> 
<pre><code>"tool_calls": [{
  "id": "call_abc123",
  "type": "function",
  "function": {
    "name": "get_weather",
    "arguments": "{\"location\":\"Paris\"}"
  }
}]</code></pre> 
<p>Anthropic returns tool calls as content blocks with <code>input</code> as a <strong>parsed JSON object</strong>:</p> 
<pre><code>"content": [{
  "type": "tool_use",
  "id": "toolu_01D7FLrfh4GYq7yT1ULFeyMV",
  "name": "get_weather",
  "input": {"location": "Paris"}
}]</code></pre> 
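<p>A unified consumer therefore has to <code>json.loads</code> in one branch but not the other. One way to flatten both into <code>(id, name, args)</code> tuples:</p> 
<pre><code>import json

def normalize_tool_calls(response):
    if "choices" in response:
        # OpenAI: arguments arrive as a JSON string
        for call in response["choices"][0]["message"].get("tool_calls", []):
            fn = call["function"]
            yield call["id"], fn["name"], json.loads(fn["arguments"])
    else:
        # Anthropic: input is already a parsed object
        for block in response["content"]:
            if block["type"] == "tool_use":
                yield block["id"], block["name"], block["input"]</code></pre> 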
<h3>Returning Tool Results</h3> 
<p>OpenAI uses a dedicated <code>tool</code> role:</p> 
<pre><code>{"role": "tool", "tool_call_id": "call_abc123", "content": "Sunny, 22C"}</code></pre> 
<p>Anthropic places tool results as content blocks inside a <code>user</code> message, with an explicit <code>is_error</code> flag:</p> 
<pre><code>{
  "role": "user",
  "content": [{
    "type": "tool_result",
    "tool_use_id": "toolu_01D7...",
    "content": "Sunny, 22C",
    "is_error": false
  }]
}</code></pre> 
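<p>Symmetrically, the result message appended to history differs per provider. A small factory, with illustrative naming:</p> 
<pre><code>def tool_result_message(call_id, content, target, is_error=False):
    if target == "openai":
        # Dedicated role; errors are just text in content
        return {"role": "tool", "tool_call_id": call_id, "content": content}
    # Anthropic: a content block inside a user message, with an error flag
    return {"role": "user",
            "content": [{"type": "tool_result", "tool_use_id": call_id,
                         "content": content, "is_error": is_error}]}</code></pre> 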
<h3>Tool Choice</h3> 
<table> 
 <thead>
  <tr>
   <th>Behavior</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>Model decides</td>
   <td><code>"auto"</code></td>
   <td><code>{"type": "auto"}</code></td>
  </tr> 
  <tr>
   <td>Must use a tool</td>
   <td><code>"required"</code></td>
   <td><code>{"type": "any"}</code></td>
  </tr> 
  <tr>
   <td>Specific tool</td>
   <td><code>{"type": "function", "name": "X"}</code></td>
   <td><code>{"type": "tool", "name": "X"}</code></td>
  </tr> 
  <tr>
   <td>No tools</td>
   <td><code>"none"</code></td>
   <td><code>{"type": "none"}</code></td>
  </tr> 
 </tbody> 
</table> 
<h2>Vision / Multimodal</h2> 
<p><strong>OpenAI</strong> uses the data URL scheme for base64 images:</p> 
<pre><code>{
  "type": "image_url",
  "image_url": {
    "url": "data:image/jpeg;base64,{BASE64_DATA}",
    "detail": "high"
  }
}</code></pre> 
<p><strong>Anthropic</strong> uses separate fields for media type and data:</p> 
<pre><code>{
  "type": "image",
  "source": {
    "type": "base64",
    "media_type": "image/jpeg",
    "data": "BASE64_DATA"
  }
}</code></pre> 
<table> 
 <thead>
  <tr>
   <th>Aspect</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>Detail control</td>
   <td><code>detail: "high"/"low"/"auto"</code></td>
   <td>None</td>
  </tr> 
  <tr>
   <td>PDF support</td>
   <td>Not in Chat Completions</td>
   <td>Native <code>document</code> content block</td>
  </tr> 
  <tr>
   <td>Audio support</td>
   <td>Yes</td>
   <td>No</td>
  </tr> 
 </tbody> 
</table> 
<p>Anthropic has a unique first-class <code>document</code> block type for PDFs and text files with optional citation support—a feature OpenAI's Chat Completions endpoint doesn't offer.</p> 
<h2>Streaming</h2> 
<p>Both APIs use Server-Sent Events, but with fundamentally different structures.</p> 
<p><strong>OpenAI</strong> uses a flat stream of unnamed <code>data:</code> lines, ending with <code>data: [DONE]</code>:</p> 
<pre><code>data: {"choices":[{"delta":{"content":"Hello"}}]}
data: {"choices":[{"delta":{},"finish_reason":"stop"}]}
data: [DONE]</code></pre> 
<p><strong>Anthropic</strong> uses named event types with a structured lifecycle:</p> 
<pre><code>event: message_start
data: {"type":"message_start","message":{...}}

event: content_block_start
data: {"type":"content_block_start","index":0,...}

event: content_block_delta
data: {"type":"content_block_delta","delta":{"type":"text_delta","text":"Hello"}}

event: content_block_stop
data: {"type":"content_block_stop","index":0}

event: message_stop
data: {"type":"message_stop"}</code></pre> 
<p>Anthropic's streaming is more granular with 6+ named event types covering the full message lifecycle. This makes mixed content (text interleaved with tool calls) easier to handle but adds parsing complexity. OpenAI's approach is simpler—essentially one event type plus a sentinel.</p> 
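<p>A client that only cares about text can still treat both streams uniformly by extracting the text delta per SSE message. A sketch, assuming the event name (<code>None</code> for OpenAI) and the <code>data:</code> payload have already been split out:</p> 
<pre><code>import json

def text_delta(event, data):
    if data == "[DONE]":                  # OpenAI sentinel
        return None
    payload = json.loads(data)
    if event is None:                     # OpenAI chunk
        return payload["choices"][0]["delta"].get("content")
    if event == "content_block_delta":    # Anthropic text event
        delta = payload["delta"]
        if delta["type"] == "text_delta":
            return delta["text"]
    return None                           # other lifecycle events</code></pre> 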
<h2>Error Handling</h2> 
<p><strong>OpenAI</strong> includes a <code>param</code> field indicating which parameter caused the error:</p> 
<pre><code>{
  "error": {
    "message": "Incorrect API key",
    "type": "invalid_request_error",
    "param": null,
    "code": "invalid_api_key"
  }
}</code></pre> 
<p><strong>Anthropic</strong> uses a <code>type</code>-based discrimination pattern:</p> 
<pre><code>{
  "type": "error",
  "error": {
    "type": "authentication_error",
    "message": "Invalid API key"
  }
}</code></pre> 
<p>Anthropic distinguishes <code>overloaded_error</code> from <code>api_error</code>, making it easier to implement backoff logic specifically for capacity issues.</p> 
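<p>That distinction suggests retry logic along these lines; the status codes and error types below are a simplification for illustration, not an exhaustive list from either provider:</p> 
<pre><code>def is_retryable(status, body):
    if status == 429:                     # rate limited on both APIs
        return True
    etype = body.get("error", {}).get("type", "")
    # Anthropic's overloaded_error flags capacity issues explicitly
    return etype in ("overloaded_error", "api_error", "server_error")</code></pre> 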
<h2>Rate Limiting</h2> 
<table> 
 <thead>
  <tr>
   <th>Aspect</th>
   <th>OpenAI</th>
   <th>Anthropic</th>
  </tr>
 </thead> 
 <tbody> 
  <tr>
   <td>Header prefix</td>
   <td><code>x-ratelimit-</code> (lowercase)</td>
   <td><code>anthropic-ratelimit-</code> (lowercase, vendor-prefixed)</td>
  </tr> 
  <tr>
   <td>Reset format</td>
   <td>Relative duration (<code>1s</code>, <code>6m0s</code>)</td>
   <td>ISO 8601 timestamp</td>
  </tr> 
  <tr>
   <td>Retry header</td>
   <td>Not standard</td>
   <td><code>Retry-After</code></td>
  </tr> 
 </tbody> 
</table> 
<p>Both return HTTP 429 for rate limit errors and recommend exponential backoff with jitter.</p> 
<h2>Unique Features</h2> 
<h3>OpenAI-only</h3> 
<ul> 
 <li><strong>Multiple completions</strong> — the <code>n</code> parameter generates N alternative responses per request</li> 
 <li><strong>Log probabilities</strong> — <code>logprobs</code> returns token-level probability information</li> 
 <li><strong>Structured output</strong> — <code>response_format</code> with <code>json_schema</code> for guaranteed JSON structure</li> 
 <li><strong>Frequency/presence penalties</strong> for controlling repetition</li> 
 <li><strong>Audio input/output</strong> support in multimodal messages</li> 
 <li><strong>Seed parameter</strong> for reproducible outputs</li> 
</ul> 
<h3>Anthropic-only</h3> 
<ul> 
 <li><strong>Extended thinking</strong> — explicit <code>thinking</code> parameter with <code>budget_tokens</code>, returns visible thinking blocks</li> 
 <li><strong>Prompt caching</strong> — <code>cache_control</code> on content blocks with TTL options, with cache hit/miss reporting in usage</li> 
 <li><strong>PDF/document processing</strong> — native <code>document</code> content blocks with citation support</li> 
 <li><strong>Top K sampling</strong> — <code>top_k</code> parameter for controlling token selection</li> 
 <li><strong>Built-in server tools</strong> — <code>web_search</code>, <code>code_execution</code>, <code>text_editor</code>, etc. that run on Anthropic's infrastructure</li> 
 <li><strong>Tool error flag</strong> — <code>is_error</code> field on tool results</li> 
</ul> 
<h2>SDK Quick Reference</h2> 
<pre><code># OpenAI
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.choices[0].message.content)

# Anthropic
import anthropic
client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello"}]
)
print(response.content[0].text)</code></pre> 
<h2>Migration Checklist</h2> 
<p>If you're switching between the two or building a unified abstraction, here are the key things to watch for:</p> 
<ol> 
 <li><strong>Move system prompts</strong> — from inside <code>messages</code> (OpenAI) to the top-level <code>system</code> field (Anthropic), or vice versa</li> 
 <li><strong>Set <code>max_tokens</code></strong> — it's required in Anthropic, optional in OpenAI</li> 
 <li><strong>Clamp temperature</strong> — Anthropic caps at 1.0; OpenAI allows up to 2.0</li> 
 <li><strong>Restructure tool definitions</strong> — <code>parameters</code> vs <code>input_schema</code>, wrapper object differences</li> 
 <li><strong>Handle tool results differently</strong> — <code>tool</code> role (OpenAI) vs content blocks in <code>user</code> message (Anthropic)</li> 
 <li><strong>Parse tool call arguments</strong> — JSON string (OpenAI) vs parsed object (Anthropic)</li> 
 <li><strong>Enforce message alternation</strong> — required for Anthropic, flexible in OpenAI</li> 
 <li><strong>Update auth headers</strong> — <code>Authorization: Bearer</code> vs <code>x-api-key</code> + <code>anthropic-version</code></li> 
 <li><strong>Adapt streaming parsers</strong> — flat chunks vs named lifecycle events</li> 
 <li><strong>Unwrap responses</strong> — <code>choices[0].message.content</code> vs <code>content[0].text</code></li> 
</ol> 
<p>Both APIs are powerful and well-designed, but they reflect different philosophies. OpenAI's Chat Completions API leans toward flexibility and backwards compatibility, while Anthropic's Messages API favors explicitness and structured data. Understanding these differences will help you build robust integrations regardless of which provider you choose.</p>
        ]]></description>
    </item>
    
    <item>
        <title>A Detailed Look at Lens Authentication and Grove Storage in Palus</title>
        <link>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/3ECD7E45-B3B5-42CC-9F4F-9EC7912EF269/</link>
        <guid>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/3ECD7E45-B3B5-42CC-9F4F-9EC7912EF269/</guid>
        <pubDate>Tue, 07 Apr 2026 05:01:00 -0700</pubDate>
        
        
        <description><![CDATA[
            <p>Palus is a Lens Protocol client built with React, wagmi, viem, and Apollo GraphQL. Two of its subsystems are especially worth a closer look: <strong>authentication</strong> (a custom challenge-signature flow on Lens Chain) and <strong>Grove storage</strong> (Lens's content-addressed storage layer for media and metadata). This article explains how both systems work, with links to the actual source files.</p> 
<h2>Authentication: Challenge-Response on Lens Chain</h2> 
<p>Palus does <em>not</em> use the standard "Sign in with Ethereum" (SIWE / EIP-4361) flow. Instead, it implements a custom challenge-response system powered by the <a href="https://api.lens.xyz/graphql">Lens GraphQL API</a>. The signing primitive is the same (<code>personal_sign</code> via an EVM wallet), but the message format and verification are handled entirely by the Lens backend.</p> 
<h3>Chain Configuration</h3> 
<p>The chain definitions are imported from <code>@lens-chain/sdk/viem</code> and configured in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/data/constants.ts">constants.ts</a>:</p> 
<pre><code>import { chains } from "@lens-chain/sdk/viem";

export const IS_TESTNET = import.meta.env.VITE_USE_TESTNET === "true";
export const CHAIN = IS_TESTNET ? chains.testnet : chains.mainnet;</code></pre> 
<p>The wagmi config in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/components/Common/Providers/Web3Provider.tsx">Web3Provider.tsx</a> sets up transports for Lens Chain (<code>https://rpc.lens.xyz</code>) and Ethereum mainnet (via Infura), and configures four wallet connectors: MetaMask SDK, injected browser wallets, WalletConnect, and Family Accounts.</p> 
<h3>The Sign-In Flow</h3> 
<p>The entire flow lives in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/components/Shared/Auth/Login.tsx">Login.tsx</a> and follows three steps:</p> 
<h4>Step 1: Request a Challenge</h4> 
<p>The frontend calls the <code>challenge</code> GraphQL mutation defined in <a href="https://github.com/ipaulpro/palus/blob/main/packages/indexer/documents/mutations/auth/Challenge.graphql">Challenge.graphql</a>:</p> 
<pre><code>mutation Challenge($request: ChallengeRequest!) {
  challenge(request: $request) {
    id
    text
  }
}</code></pre> 
<p>The <code>ChallengeRequest</code> input takes one of two shapes depending on the user's relationship to the account:</p> 
<ul> 
 <li><code>accountOwner: { owner, account, app }</code>, used when the connected wallet directly owns the Lens account</li> 
 <li><code>accountManager: { manager, account, app }</code>, used when the wallet is an authorized manager</li> 
</ul> 
<p>The API returns a unique <code>id</code> (a UUID) and a <code>text</code> string, which is a custom challenge message rather than a SIWE-formatted one.</p> 
<h4>Step 2: Sign the Challenge</h4> 
<p>The challenge text is signed with wagmi's <code>useSignMessage</code> hook, which calls <code>personal_sign</code> on the connected wallet:</p> 
<pre><code>const signature = await signMessageAsync({
  message: challenge?.data?.challenge?.text
});</code></pre> 
<p>Before signing, <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/hooks/useHandleWrongNetwork.tsx">useHandleWrongNetwork</a> makes sure the wallet is connected to the correct chain; if it is not, a chain switch is triggered via <code>useSwitchChain</code>.</p> 
<h4>Step 3: Authenticate</h4> 
<p>The signed challenge is submitted via <a href="https://github.com/ipaulpro/palus/blob/main/packages/indexer/documents/mutations/auth/Authenticate.graphql">Authenticate.graphql</a>:</p> 
<pre><code>mutation Authenticate($request: SignedAuthChallenge!) {
  authenticate(request: $request) {
    ... on AuthenticationTokens {
      accessToken
      refreshToken
    }
    ... on ForbiddenError {
      reason
    }
  }
}</code></pre> 
<p>On successful verification, the Lens API returns a JWT <code>accessToken</code> and <code>refreshToken</code>. These tokens are stored in a localStorage-backed Zustand persistent store (<a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/store/persisted/useAuthStore.ts">useAuthStore.ts</a>).</p> 
<h3>Token Management</h3> 
<p>Once authenticated, every GraphQL request carries the JWT via the Apollo Link middleware in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/helpers/authLink.ts">authLink.ts</a>, which sets the <code>X-Access-Token</code> header.</p> 
<p>Token refresh is handled by <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/helpers/tokenManager.ts">tokenManager.ts</a>. It checks whether the access token expires within the next 5 minutes and, if so, calls the <a href="https://github.com/ipaulpro/palus/blob/main/packages/indexer/documents/mutations/auth/Refresh.graphql">Refresh mutation</a> with exponential backoff (up to 5 retries). A deduplication mechanism ensures only one refresh request is in flight at a time.</p> 
<h3>Wallet Selection</h3> 
<p><a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/components/Shared/Auth/WalletSelector.tsx">WalletSelector.tsx</a> handles wallet connection. It filters and presents four connector types: MetaMask SDK (disabled on Android due to a known bug), injected browser providers (shown only when <code>window.ethereum</code> exists), WalletConnect v2, and Family Accounts.</p> 
<h2>Grove: Lens's Content-Addressed Storage</h2> 
<p>Grove is Lens Protocol's storage layer; Palus uses it for all user-generated content: images, files, and JSON metadata (post content, profile metadata, and so on). It replaces IPFS as the primary storage backend while still supporting IPFS as a fallback.</p> 
<h3>Storage Client</h3> 
<p>The client is initialized with zero configuration in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/helpers/storageClient.ts">storageClient.ts</a>:</p> 
<pre><code>import { StorageClient } from "@lens-chain/storage-client";

export const storageClient = StorageClient.create();</code></pre> 
<p>The <code>@lens-chain/storage-client</code> package handles all communication with the Grove API (<code>https://api.grove.storage/</code>).</p> 
<h3>Access Control Lists (ACLs)</h3> 
<p>Every upload to Grove includes an ACL that determines who can modify the content. Palus uses two ACL types from <code>@lens-chain/storage-client</code>:</p> 
<ul> 
 <li><code>immutable(chainId)</code>: the content is stored permanently and cannot be modified by anyone</li> 
 <li><code>lensAccountOnly(account, chainId)</code>: only the specified Lens account owner can modify the content</li> 
</ul> 
<p>When an account address is provided (e.g., for profile-specific content), the account-scoped ACL is used. For public/shared content, the immutable ACL is applied.</p> 
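<p>As a hypothetical sketch, the ACL selection might look like the following. The <code>immutable</code> and <code>lensAccountOnly</code> stand-ins only mirror the names of the real helpers in <code>@lens-chain/storage-client</code>, and the selection logic is an assumption about how they are wired together:</p>

```typescript
// Illustrative ACL shapes; stand-ins for the @lens-chain/storage-client helpers.
type Acl =
  | { template: "immutable"; chainId: number }
  | { template: "lens_account"; lensAccount: string; chainId: number };

const immutable = (chainId: number): Acl => ({ template: "immutable", chainId });
const lensAccountOnly = (lensAccount: string, chainId: number): Acl => ({
  template: "lens_account",
  lensAccount,
  chainId,
});

// Account-scoped content gets an account ACL; everything else is immutable.
function pickAcl(chainId: number, account?: string): Acl {
  return account ? lensAccountOnly(account, chainId) : immutable(chainId);
}
```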
<h3>Uploading Files</h3> 
<p>File uploads are handled by <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/helpers/uploadFiles.ts">uploadFiles.ts</a>. Each file is uploaded individually via <code>storageClient.uploadFile(file, { acl })</code>, and the response provides a <code>lens://</code> URI. The function also supports base64-encoded images (used for sharing notification screenshots) by first converting them to <code>File</code> objects.</p> 
<pre><code>const storageNodeResponse = await storageClient.uploadFile(file, { acl });
return {
  mimeType: file.type || FALLBACK_TYPE,
  uri: storageNodeResponse.uri  // e.g., "lens://abc123..."
};</code></pre> 
<h3>Uploading Metadata</h3> 
<p>JSON metadata (post content and the like) is uploaded via <code>storageClient.uploadAsJson(data, { acl })</code> in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/helpers/uploadMetadata.ts">uploadMetadata.ts</a>.</p> 
<p>The function includes a <strong>fallback mechanism</strong>: if the Grove upload fails, it falls back to <a href="https://thirdweb.com/">Thirdweb</a> storage (which uploads to IPFS), ensuring metadata is persisted even when Grove is temporarily unavailable.</p> 
<pre><code>try {
  const upload = await storageClient.uploadAsJson(data, { acl });
  uri = upload.uri;
} catch (e) {
  // Fallback to thirdweb/IPFS
  const storage = new ThirdwebStorage({ clientId: THIRD_WEB_CLIENT_ID });
  const file = new File([JSON.stringify(data)], "metadata.json", {
    type: "application/json"
  });
  uri = await storage.upload(file, { uploadWithoutDirectory: true });
}</code></pre> 
<h3>Resolving Content URLs</h3> 
<p>When rendering content, <code>lens://</code> URIs must be resolved to HTTP URLs. This is done by <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/functions/helpers/sanitizeDStorageUrl.ts">sanitizeDStorageUrl.ts</a>, which handles multiple storage protocols:</p> 
<ul> 
 <li><code>lens://</code> → <code>https://api.grove.storage/</code></li> 
 <li><code>ipfs://</code> → <code>https://gw.ipfs-lens.dev/ipfs/</code></li> 
 <li><code>ar://</code> → <code>https://gateway.arweave.net/</code></li> 
 <li>Raw IPFS CIDs (starting with <code>Qm</code>) are also detected and routed through the IPFS gateway</li> 
</ul> 
<h3>The Lens API: No Key Required</h3> 
<p>The Lens GraphQL API at <code>api.lens.xyz</code> is open: no API key is required. The Apollo HTTP link in <a href="https://github.com/ipaulpro/palus/blob/main/packages/indexer/apollo/httpLink.ts">httpLink.ts</a> simply sets an <code>origin</code> header and connects directly. Operations that require authentication use the JWT access token obtained through the challenge-response flow described above.</p> 
<h2>Summary</h2> 
<p>Palus demonstrates a clean separation between wallet-level signing (handled by wagmi/viem on Lens Chain) and application-level authentication (handled by the Lens GraphQL API's challenge-response system). Content storage is abstracted through Grove with a sensible fallback, and the entire auth token lifecycle, from initial sign-in through refresh and expiry, is managed transparently via Apollo middleware and Zustand state.</p>
        ]]></description>
    </item>
    
    <item>
        <title>How Lens Authentication and Grove Storage Work in Palus</title>
        <link>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/DCCC1485-1CB9-45AD-AF1F-C3C4138E58A1/</link>
        <guid>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/DCCC1485-1CB9-45AD-AF1F-C3C4138E58A1/</guid>
        <pubDate>Tue, 07 Apr 2026 05:00:00 -0700</pubDate>
        
        
        <description><![CDATA[
            <p>Palus is a Lens Protocol client built with React, wagmi, viem, and Apollo GraphQL. Two of its most interesting subsystems are <strong>authentication</strong> (a custom challenge-signature flow on Lens Chain) and <strong>Grove storage</strong> (Lens's content-addressed storage layer for media and metadata). This article walks through both in detail, with links to the actual source files.</p> 
<h2>Authentication: Challenge-Response on Lens Chain</h2> 
<p>Palus does <em>not</em> use the standard "Sign in with Ethereum" (SIWE / EIP-4361) flow. Instead, it implements a custom challenge-response system powered by the <a href="https://api.lens.xyz/graphql">Lens GraphQL API</a>. The signing primitive is the same — <code>personal_sign</code> via an EVM wallet — but the message format and verification are entirely handled by the Lens backend.</p> 
<h3>Chain Configuration</h3> 
<p>The chain is imported from <code>@lens-chain/sdk/viem</code> and configured in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/data/constants.ts">constants.ts</a>:</p> 
<pre><code>import { chains } from "@lens-chain/sdk/viem";

export const IS_TESTNET = import.meta.env.VITE_USE_TESTNET === "true";
export const CHAIN = IS_TESTNET ? chains.testnet : chains.mainnet;</code></pre> 
<p>The wagmi config in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/components/Common/Providers/Web3Provider.tsx">Web3Provider.tsx</a> wires up transports for both Lens Chain (<code>https://rpc.lens.xyz</code>) and Ethereum mainnet (via Infura), along with four wallet connectors: MetaMask SDK, injected browser wallets, WalletConnect, and Family Accounts.</p> 
<h3>The Sign-In Flow</h3> 
<p>The entire flow lives in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/components/Shared/Auth/Login.tsx">Login.tsx</a> and follows three steps:</p> 
<h4>Step 1: Request a Challenge</h4> 
<p>The frontend calls the <code>challenge</code> GraphQL mutation defined in <a href="https://github.com/ipaulpro/palus/blob/main/packages/indexer/documents/mutations/auth/Challenge.graphql">Challenge.graphql</a>:</p> 
<pre><code>mutation Challenge($request: ChallengeRequest!) {
  challenge(request: $request) {
    id
    text
  }
}</code></pre> 
<p>The <code>ChallengeRequest</code> input can take two shapes depending on the user's relationship to the account:</p> 
<ul> 
 <li><code>accountOwner: { owner, account, app }</code> — when the connected wallet directly owns the Lens account</li> 
 <li><code>accountManager: { manager, account, app }</code> — when the wallet is an authorized manager</li> 
</ul> 
<p>The API returns a unique <code>id</code> (UUID) and a <code>text</code> string — a custom challenge message, not a SIWE-formatted one.</p> 
<h4>Step 2: Sign the Challenge</h4> 
<p>The challenge text is signed using wagmi's <code>useSignMessage</code> hook, which calls <code>personal_sign</code> on the connected wallet:</p> 
<pre><code>const signature = await signMessageAsync({
  message: challenge?.data?.challenge?.text
});</code></pre> 
<p>Before signing, <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/hooks/useHandleWrongNetwork.tsx">useHandleWrongNetwork</a> ensures the wallet is connected to the correct chain. If it isn't, it triggers a chain switch via <code>useSwitchChain</code>.</p> 
<h4>Step 3: Authenticate</h4> 
<p>The signed challenge is submitted via <a href="https://github.com/ipaulpro/palus/blob/main/packages/indexer/documents/mutations/auth/Authenticate.graphql">Authenticate.graphql</a>:</p> 
<pre><code>mutation Authenticate($request: SignedAuthChallenge!) {
  authenticate(request: $request) {
    ... on AuthenticationTokens {
      accessToken
      refreshToken
    }
    ... on ForbiddenError {
      reason
    }
  }
}</code></pre> 
<p>On success, the Lens API returns JWT <code>accessToken</code> and <code>refreshToken</code> pairs. These are stored in a Zustand persistent store (<a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/store/persisted/useAuthStore.ts">useAuthStore.ts</a>) backed by localStorage.</p> 
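<p>Condensed into one function, the three steps look roughly like the sketch below, with the GraphQL transport and the wallet signer injected as plain functions. The <code>gql</code> and <code>sign</code> signatures are assumptions for illustration; Palus actually uses Apollo and wagmi here:</p>

```typescript
// Sketch of the challenge -> sign -> authenticate sequence.
// The transport and signer are injected so the flow itself stays visible.
interface AuthTokens { accessToken: string; refreshToken: string }

async function signIn(
  gql: (query: string, vars: object) => Promise<any>,
  sign: (message: string) => Promise<string>, // wraps personal_sign on the wallet
  request: object // accountOwner or accountManager shape
): Promise<AuthTokens> {
  // Step 1: ask the Lens API for a challenge (id + text).
  const { id, text } = (await gql("mutation Challenge...", { request })).challenge;
  // Step 2: sign the challenge text with the connected wallet.
  const signature = await sign(text);
  // Step 3: exchange { id, signature } for JWT tokens.
  return (await gql("mutation Authenticate...", { request: { id, signature } }))
    .authenticate;
}
```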
<h3>Token Management</h3> 
<p>Once authenticated, every GraphQL request includes the JWT via an Apollo Link middleware in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/helpers/authLink.ts">authLink.ts</a>. It sets the <code>X-Access-Token</code> header on each request.</p> 
<p>Token refresh is handled by <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/helpers/tokenManager.ts">tokenManager.ts</a>, which checks if the access token is expiring within 5 minutes and, if so, calls the <a href="https://github.com/ipaulpro/palus/blob/main/packages/indexer/documents/mutations/auth/Refresh.graphql">Refresh mutation</a> using exponential backoff (up to 5 retries). A deduplication mechanism ensures only one refresh is in-flight at a time.</p> 
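<p>The refresh pattern described above can be sketched as follows. The names, the backoff schedule, and the injected helpers are illustrative assumptions, not the real <code>tokenManager.ts</code> API:</p>

```typescript
// Refresh only when the token is near expiry, retry with exponential backoff,
// and deduplicate concurrent callers onto a single in-flight refresh.
const EXPIRY_WINDOW_MS = 5 * 60 * 1000; // refresh when < 5 minutes remain
const MAX_RETRIES = 5;

let inflight: Promise<string> | null = null;

function needsRefresh(expiresAtMs: number, nowMs: number): boolean {
  return expiresAtMs - nowMs < EXPIRY_WINDOW_MS;
}

async function refreshWithBackoff(
  doRefresh: () => Promise<string>,          // calls the Refresh mutation
  sleep: (ms: number) => Promise<void>       // injected for testability
): Promise<string> {
  // Deduplicate: concurrent callers all share one in-flight promise.
  if (inflight) return inflight;
  inflight = (async () => {
    let lastError: unknown;
    for (let attempt = 0; attempt < MAX_RETRIES; attempt++) {
      try {
        return await doRefresh();
      } catch (e) {
        lastError = e;
        await sleep(2 ** attempt * 250); // 250ms, 500ms, 1s, 2s, 4s
      }
    }
    throw lastError;
  })().finally(() => {
    inflight = null; // allow the next refresh cycle
  });
  return inflight;
}
```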
<h3>Wallet Selection</h3> 
<p><a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/components/Shared/Auth/WalletSelector.tsx">WalletSelector.tsx</a> handles wallet connection. It filters and presents four connector types: MetaMask SDK (disabled on Android due to a known bug), injected providers (only shown when <code>window.ethereum</code> exists), WalletConnect v2, and Family Accounts.</p> 
<h2>Grove: Lens's Content-Addressed Storage</h2> 
<p>Grove is Lens Protocol's storage layer, used by Palus for all user-generated content — images, files, and JSON metadata (post content, profile metadata, etc.). It replaces IPFS as the primary storage backend while still supporting IPFS as a fallback.</p> 
<h3>Storage Client</h3> 
<p>The client is initialized in <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/helpers/storageClient.ts">storageClient.ts</a> with zero configuration:</p> 
<pre><code>import { StorageClient } from "@lens-chain/storage-client";

export const storageClient = StorageClient.create();</code></pre> 
<p>The <code>@lens-chain/storage-client</code> package handles all communication with the Grove API at <code>https://api.grove.storage/</code>.</p> 
<h3>Access Control Lists (ACLs)</h3> 
<p>Every upload to Grove includes an ACL that determines who can modify the content. Palus uses two ACL types from <code>@lens-chain/storage-client</code>:</p> 
<ul> 
 <li><code>immutable(chainId)</code> — content is permanent and cannot be modified by anyone</li> 
 <li><code>lensAccountOnly(account, chainId)</code> — only the specified Lens account owner can modify the content</li> 
</ul> 
<p>When an account address is provided (e.g., for profile-specific content), the account-scoped ACL is used. For public/shared content, the immutable ACL is applied.</p> 
<h3>Uploading Files</h3> 
<p>File uploads are handled by <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/helpers/uploadFiles.ts">uploadFiles.ts</a>. Each file is uploaded individually via <code>storageClient.uploadFile(file, { acl })</code>, and the response provides a <code>lens://</code> URI. The function also supports base64-encoded images (used for sharing notification screenshots) by converting them to <code>File</code> objects first.</p> 
<pre><code>const storageNodeResponse = await storageClient.uploadFile(file, { acl });
return {
  mimeType: file.type || FALLBACK_TYPE,
  uri: storageNodeResponse.uri  // e.g., "lens://abc123..."
};</code></pre> 
<h3>Uploading Metadata</h3> 
<p>JSON metadata (post content, etc.) is uploaded through <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/src/helpers/uploadMetadata.ts">uploadMetadata.ts</a> via <code>storageClient.uploadAsJson(data, { acl })</code>.</p> 
<p>This function includes a <strong>fallback mechanism</strong>: if the Grove upload fails, it falls back to <a href="https://thirdweb.com/">Thirdweb</a> storage (which uploads to IPFS), ensuring metadata is always persisted even if Grove is temporarily unavailable.</p> 
<pre><code>try {
  const upload = await storageClient.uploadAsJson(data, { acl });
  uri = upload.uri;
} catch (e) {
  // Fallback to thirdweb/IPFS
  const storage = new ThirdwebStorage({ clientId: THIRD_WEB_CLIENT_ID });
  const file = new File([JSON.stringify(data)], "metadata.json", {
    type: "application/json"
  });
  uri = await storage.upload(file, { uploadWithoutDirectory: true });
}</code></pre> 
<h3>Resolving Content URLs</h3> 
<p>When rendering content, <code>lens://</code> URIs must be resolved to HTTP URLs. This is done by <a href="https://github.com/ipaulpro/palus/blob/main/packages/web/functions/helpers/sanitizeDStorageUrl.ts">sanitizeDStorageUrl.ts</a>, which handles multiple storage protocols:</p> 
<ul> 
 <li><code>lens://</code> → <code>https://api.grove.storage/</code></li> 
 <li><code>ipfs://</code> → <code>https://gw.ipfs-lens.dev/ipfs/</code></li> 
 <li><code>ar://</code> → <code>https://gateway.arweave.net/</code></li> 
 <li>Raw IPFS CIDs (starting with <code>Qm</code>) are also detected and routed through the IPFS gateway</li> 
</ul> 
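<p>A minimal sketch of that mapping, using the gateway URLs listed above (the real helper in Palus may handle more edge cases than these four):</p>

```typescript
// Map decentralized-storage URIs to HTTP gateway URLs.
const GROVE_GATEWAY = "https://api.grove.storage/";
const IPFS_GATEWAY = "https://gw.ipfs-lens.dev/ipfs/";
const ARWEAVE_GATEWAY = "https://gateway.arweave.net/";

function sanitizeDStorageUrl(url: string): string {
  if (url.startsWith("lens://")) return GROVE_GATEWAY + url.slice("lens://".length);
  if (url.startsWith("ipfs://")) return IPFS_GATEWAY + url.slice("ipfs://".length);
  if (url.startsWith("ar://")) return ARWEAVE_GATEWAY + url.slice("ar://".length);
  if (url.startsWith("Qm")) return IPFS_GATEWAY + url; // raw CIDv0
  return url; // already an HTTP(S) URL, or an unknown scheme
}
```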
<h3>The Lens API: No Key Required</h3> 
<p>The Lens GraphQL API at <code>api.lens.xyz</code> is open — no API key is needed. The Apollo HTTP link in <a href="https://github.com/ipaulpro/palus/blob/main/packages/indexer/apollo/httpLink.ts">httpLink.ts</a> simply sets an <code>origin</code> header and connects directly. Authenticated operations use the JWT access token obtained through the challenge-response flow described above.</p> 
<h2>Summary</h2> 
<p>Palus demonstrates a clean separation between wallet-level signing (handled by wagmi/viem on Lens Chain) and application-level authentication (handled by the Lens GraphQL API's challenge-response system). Content storage is abstracted through Grove with sensible fallbacks, and the entire auth token lifecycle — from initial sign-in through refresh and expiry — is managed transparently via Apollo middleware and Zustand state.</p>
        ]]></description>
    </item>
    
    <item>
        <title>2026-04-06 Session: Fix AI chat empty response error for Gemma4 reasoning models</title>
        <link>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/E968297A-8903-4CDD-8F10-07D40E95C127/</link>
        <guid>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/E968297A-8903-4CDD-8F10-07D40E95C127/</guid>
        <pubDate>Mon, 06 Apr 2026 14:25:43 -0700</pubDate>
        
        
        <description><![CDATA[
            <h2>What was done</h2> 
<ul> 
 <li>Diagnosed and fixed the "AI response did not include content" error that appeared when using Gemma4 via Ollama</li> 
 <li>Root cause: Gemma4 sends thinking tokens in <code>delta.reasoning</code> with empty <code>delta.content</code>, and sometimes produces zero content tokens after tool calls</li> 
 <li>Added parsing for <code>delta.reasoning</code> and <code>delta.reasoning_content</code> fields in the SSE streaming parser</li> 
 <li>Reasoning text is streamed to the UI in real-time so users see the model thinking</li> 
 <li>When content is empty but reasoning exists, reasoning is used as fallback content</li> 
 <li>Added retry logic for empty responses after tool execution</li> 
</ul> 
<h2>Key decisions</h2> 
<ul> 
 <li>Chose to use reasoning as fallback content rather than silently discarding it, since for some models that's the only output produced</li> 
 <li>Added one retry on empty post-tool responses before throwing the error, giving the model a second chance</li> 
 <li>Supported both <code>reasoning</code> (Ollama/Gemma4) and <code>reasoning_content</code> (DeepSeek-style) field names for broader compatibility</li> 
</ul> 
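<p>Although the actual change is in Swift (<code>ArticleAIChatView.swift</code>), the fallback logic can be sketched in TypeScript as follows. The delta field names follow those mentioned above; everything else is illustrative:</p>

```typescript
// Accumulate streamed deltas, accepting either reasoning field name, and fall
// back to reasoning text when the model produced no content tokens at all.
interface Delta {
  content?: string;
  reasoning?: string;          // Ollama/Gemma style
  reasoning_content?: string;  // DeepSeek style
}

function assembleResponse(
  deltas: Delta[]
): { text: string; usedReasoningFallback: boolean } {
  let content = "";
  let reasoning = "";
  for (const d of deltas) {
    content += d.content ?? "";
    reasoning += d.reasoning ?? d.reasoning_content ?? "";
  }
  // Use reasoning as fallback content rather than silently discarding it.
  if (content.length === 0 && reasoning.length > 0) {
    return { text: reasoning, usedReasoningFallback: true };
  }
  if (content.length === 0) throw new Error("AI response did not include content");
  return { text: content, usedReasoningFallback: false };
}
```

<p>In the app, the error branch is where the single retry after tool execution kicks in before surfacing the error to the user.</p>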
<h2>Files changed</h2> 
<ul> 
 <li><code>Planet/Views/Articles/ArticleAIChatView.swift</code> — Added reasoning field parsing in SSE streaming, fallback content logic, and retry on empty post-tool responses</li> 
</ul>
        ]]></description>
    </item>
    
    <item>
        <title>0.22.0 - Version 2</title>
        <link>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/A2F17A07-ADDB-491D-9ABB-245781EC902F/</link>
        <guid>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/A2F17A07-ADDB-491D-9ABB-245781EC902F/</guid>
        <pubDate>Mon, 06 Apr 2026 11:32:18 -0700</pubDate>
        
        
        <description><![CDATA[
            <h3>New Features</h3> 
<ul> 
 <li><strong>Writer media</strong> — Continuity Camera import from iPhone, video compression controls with FPS display and revert-to-original flow, media paste in title and body fields, markdown and text file drop support</li> 
 <li><strong>Publishing destinations</strong> — Cloudflare Pages publishing with token verification and deploy logging, SSH rsync (#453), new Publishing settings tab with IPNS toggle</li> 
 <li><strong>AI article assistant</strong> — In-article AI chat with SSE streaming and tool use, write_article with append mode and heading extraction, support for local network AI servers</li> 
 <li><strong>Hybrid search</strong> — BM25 + vector semantic search, multi-language NLEmbedding with automatic language detection, CJK tokenization, search API endpoint with preview word boundary snapping</li> 
 <li><strong>Custom Finder app icon</strong> — Apply custom icon via NSWorkspace.setIcon with security-scoped bookmarks and bookmark validation</li> 
 <li><strong>IPFS tools sheet</strong> — View IPFS peer identity, publish logging</li> 
 <li><strong>Prevent computer sleep</strong> — Option to keep Mac awake during long-running operations</li> 
</ul> 
<h3>Improvements</h3> 
<ul> 
 <li><strong>Article selection &amp; navigation</strong> — Restore last selected article on launch, auto-scroll sidebar, preserve selection after saving/moving drafts, navigate to existing planet when re-following, FollowingPlanet avatar jump in toolbar</li> 
 <li><strong>QuickPost &amp; Writer editing</strong> — Auto-expand QuickPost height, numbered list and markdown todo autocomplete, discard confirmation, Writer auto-focus title with Tab/Enter/Shift-Tab navigation, Retina screenshot logical width in image tags</li> 
 <li><strong>Publish reliability</strong> — Concurrent publish guard preventing overlapping IPFS/rsync/Cloudflare deploys, atomic writes for all persistent data and published content, await HTML rendering before publish, skip rebuilding unchanged planet edits</li> 
 <li><strong>Performance</strong> — Optimized article list for unread view with many items, fixed main thread blocking on sidebar switch, full CPU utilization during rebuild, publish log building off main thread, improved search responsiveness</li> 
 <li><strong>UI polish</strong> — Smart Feeds icon shadows, white pinned article icon when selected, star and unread dot vertical alignment fixes, filter button layout for macOS 26, sidebar context menu icons, UUID row in Edit Planet info</li> 
 <li><strong>Dependencies</strong> — Replaced ENSKit with lightweight ENSDataKit, removed unused HDWalletKit, updated Sparkle to 2.9.0</li> 
 <li><strong>Site templates &amp; feeds</strong> — Multiple template updates, allow templates without assets directory, podcast author name, .jpeg media label support</li> 
 <li><strong>Developer tooling</strong> — Helper scripts for tag management, commit summaries, and changelog generation; Flask-based Sparkle release notes generator with search and Planet API sync</li> 
</ul> 
<h3>Bug Fixes</h3> 
<ul> 
 <li><strong>CJK input method</strong> — Fixed IME composing text being destroyed in Writer and QuickPost, extracted shared MarkdownEditorTextView</li> 
 <li><strong>Crash fixes</strong> — Force unwrap on invalid UUID in DraftModel.load, force unwraps in GPS stripping on corrupt EXIF data, try! crash on IPFS repo directory listing at launch</li> 
 <li><strong>Publish pipeline</strong> — Rebuild progress bar accuracy, premature dismissal, and Escape key bypass; publish log viewer hangs during long runs; ops persistence and concurrency; KeychainHelper silently losing save/delete errors</li> 
 <li><strong>Content &amp; rendering</strong> — DNS-type following webview blank on startup restore, .article import losing properties with batch atomicity fix, raw pixel width for image tags instead of DPI-adjusted, list rendering glitch on Enter, Quick Post media paste, aggregation correctness</li> 
</ul> 
<h3>Cleanup &amp; Refactoring</h3> 
<ul> 
 <li><strong>Farcaster removal</strong> — Removed all Farcaster-related features</li> 
 <li><strong>Memory management</strong> — Free CMarkRenderer AST node and HTML buffer, removed unnecessary C string pointer usage</li> 
 <li><strong>Code deduplication</strong> — Reduced duplicated code in MyPlanetSidebarItem, extracted shared settings layout components, refactored MyPlanetModel.prewarm, simplified dependency graph</li> 
</ul>
        ]]></description>
    </item>
    
    <item>
        <title>0.22.0 - Version 1</title>
        <link>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/63877800-D7B0-4F8C-9E12-C40C535C0886/</link>
        <guid>https://eth.sucks/ipns/k51qzi5uqu5dluwqy5gdyg2i4xsa18f70md07m5kyvshli6jisk5qs5b2ygw93/63877800-D7B0-4F8C-9E12-C40C535C0886/</guid>
        <pubDate>Mon, 06 Apr 2026 11:31:43 -0700</pubDate>
        
        
        <description><![CDATA[
            <h3>New Features</h3> 
<ul> 
 <li><strong>Continuity Camera</strong> — Import photos and videos directly from iPhone into Writer</li> 
 <li><strong>Video compression</strong> — Video info row in Writer, compression controls with real-time FPS readout during export, Revert to Original flow for compressed videos</li> 
 <li><strong>Media paste &amp; markdown drop</strong> — Paste images and videos into Writer body and title fields, drag-and-drop markdown and text files as article content</li> 
 <li><strong>Publishing destinations</strong> — Cloudflare Pages publishing with token verification, SSH rsync as optional destination (#453), new Publishing settings tab with IPNS toggle</li> 
 <li><strong>AI chat assistant</strong> — In-article AI chat view with SSE streaming, tool use support (write_article with append mode and h1 extraction), configurable AI servers including local network discovery</li> 
 <li><strong>QuickPost enhancements</strong> — Auto-expanding editor height, numbered list and todo autocomplete, discard confirmation dialog, unified autocomplete rules shared with Writer</li> 
 <li><strong>Prevent computer sleep</strong> — Option to keep the Mac awake while Planet is running</li> 
 <li><strong>External data monitoring</strong> — Directory monitor detects external changes to planet JSON data and triggers live updates</li> 
</ul> 
<h3>Improvements</h3> 
<ul> 
 <li><strong>Article selection &amp; navigation</strong> — Restore last selected article on launch, auto-scroll sidebar to selection, preserve selection after saving or moving drafts, add FollowingPlanet avatar jump in article toolbar</li> 
 <li><strong>Performance</strong> — Optimize article list for unread view with many items, avoid rebuilding unchanged planet edits, improve search responsiveness, fix main thread blocking when switching sidebar views</li> 
 <li><strong>Writer focus flow</strong> — Auto-focus title on open, Tab/Enter to move to body, Shift-Tab back to title</li> 
 <li><strong>Follow UX</strong> — Improved follow action with avatar resolution, navigate to existing planet when re-following a known feed</li> 
 <li><strong>IPFS &amp; publishing</strong> — Refined daemon status labels, reload article when IPFS comes online, added publish logging, downgrade IPNS keepalive errors to warning</li> 
 <li><strong>Dependencies</strong> — Replaced ENSKit with lightweight ENSDataKit, removed unused HDWalletKit, updated Sparkle to 2.9.0</li> 
 <li><strong>UI polish</strong> — Social symbols with Juicebox moved to Social tab, new symbol assets (git, markdown), Cloudflare and iTerm menu icons, .jpeg added to media labels, macOS 26 filter button layout, author name in podcast feed, site template updates</li> 
</ul> 
<h3>Bug Fixes</h3> 
<ul> 
 <li><strong>Article list alignment</strong> — Fix star icon, unread dot, and text vertical alignment across article list items; fix star visibility and pinned icon color when row is selected</li> 
 <li><strong>QuickPost CJK input</strong> — Fix input method (IME) composing text being destroyed during CJK entry</li> 
 <li><strong>Media handling</strong> — Fix Quick Post media paste not processing attachments correctly</li> 
 <li><strong>CMarkRenderer memory</strong> — Free AST node and HTML buffer after rendering, validate buffer, replace C string pointer usage with native Swift handling</li> 
 <li><strong>Ops persistence</strong> — Fix persistence and concurrency issues in the operations queue</li> 
</ul> 
<h3>Cleanup &amp; Refactoring</h3> 
<ul> 
 <li><strong>Remove Farcaster</strong> — Strip all Farcaster-related features from the codebase</li> 
 <li><strong>Code organization</strong> — Deduplicate MyPlanetSidebarItem, extract shared settings layout components, move ArticleAIChatView to its own file, simplify MyPlanetModel.prewarm</li> 
 <li><strong>Developer tooling</strong> — Shell scripts for tag changelogs (what.sh), commit summaries (progress.sh), and tag creation (tag.sh); improved pre-commit versioning hook; build documentation and agent context files</li> 
</ul>
        ]]></description>
    </item>
    
</channel>
</rss>
