An engineer ships stream_mode="values" to a token-level chat UI because it
"seemed the most complete." Every single token causes the full graph state —
message history, scratchpad, plan — to be re-sent and re-rendered. At ~60
tokens/sec the browser overdraws, the React reconciler can't keep up, the tab
freezes, and users blame the model. The correct answer was stream_mode="messages",
which emits an AIMessageChunk delta per token (typically 5-50 bytes) — one
token's worth of DOM work. This is pain-catalog entry P19 and it is the #1
LangGraph integration mistake in the 1.0 generation.
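The cost difference is easy to see with a back-of-the-envelope calculation, no LangGraph required. This is an illustrative sketch: the token list and the 4 KB state-overhead figure are assumptions chosen to mimic a modest chat session, not measured values.

```python
# Why stream_mode="values" melts a token-level UI: each event re-sends
# the whole graph state, while stream_mode="messages" sends only deltas.
tokens = ["Hel", "lo", ",", " wor", "ld", "!"]
state_overhead = 4_000  # assumed bytes of history/scratchpad/plan in state

# "values"-style: every event carries all state plus the full text so far.
values_bytes = sum(
    state_overhead + len("".join(tokens[: i + 1]))
    for i in range(len(tokens))
)

# "messages"-style: every event carries only the new token's delta.
messages_bytes = sum(len(t) for t in tokens)

print(values_bytes, messages_bytes)  # 24049 vs 13 bytes for six tokens
```

Scale the six tokens up to a 2,000-token response at 60 tokens/sec and the "values" path is re-shipping megabytes per second to the browser, which is exactly the overdraw described above.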
Then the same UI ships to Cloud Run and hangs forever. No error. No logs. The
server is emitting tokens; they just never reach the browser. Default proxy
buffering (Nginx, Cloud Run's HTTP/1.1 path, Cloudflare Free) holds the last
chunk waiting for more bytes. This is P46 — SSE streams from LangGraph
drop the final end event over proxies that buffer — and the fix is three
headers: X-Accel-Buffering: no, Cache-Control: no-cache, Connection: keep-alive.
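A minimal, framework-agnostic sketch of the fix: the header values are the substance, and any SSE-capable server (FastAPI, Flask, raw WSGI) can attach them. The `sse_event` helper is a hypothetical name introduced here for illustration.

```python
# The three anti-buffering headers from P46, plus the SSE content type.
SSE_HEADERS = {
    "Content-Type": "text/event-stream",
    "X-Accel-Buffering": "no",    # tells Nginx not to buffer this response
    "Cache-Control": "no-cache",  # stops intermediaries holding chunks back
    "Connection": "keep-alive",   # keeps the HTTP/1.1 connection open
}

def sse_event(data: str) -> bytes:
    # One SSE frame: the blank line ("\n\n") terminates the event, so a
    # proxy that honors the headers above can flush it immediately.
    return f"data: {data}\n\n".encode()

print(sse_event("[DONE]"))  # b'data: [DONE]\n\n'
```

With the headers in place, the final end event that P46 describes reaches the browser instead of sitting in a proxy buffer waiting for bytes that never come.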
And then the debug view starts crashing browser tabs on long runs.