Pulse/internal
rcourtman e5eb15918e Sanitize LLM control tokens from OpenAI-compatible responses
Some local models (llama.cpp, LM Studio) output internal control tokens
like <|channel|>, <|constrain|>, <|message|> instead of using proper
function calling. These tokens leak into the UI, creating a poor UX.

This adds sanitization to strip these control tokens from both streaming
and non-streaming responses before they reach the user.
2026-02-03 13:12:17 +00:00
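The sanitization described above can be sketched as a small regex-based filter. This is a minimal illustration, not the actual Pulse implementation: the function name and regex are assumptions, and a real streaming path would also need to buffer partial chunks so a token split across two deltas (e.g. `<|chan` + `nel|>`) is still caught.

```go
package main

import (
	"fmt"
	"regexp"
)

// controlTokenRe matches llama.cpp / LM Studio style control tokens
// of the form <|name|>, e.g. <|channel|>, <|constrain|>, <|message|>.
var controlTokenRe = regexp.MustCompile(`<\|[^|>]*\|>`)

// sanitizeControlTokens strips control tokens from model output before
// it reaches the UI. (Hypothetical helper; Pulse's actual code may differ.)
func sanitizeControlTokens(s string) string {
	return controlTokenRe.ReplaceAllString(s, "")
}

func main() {
	raw := "<|channel|>final<|message|>Hello from the model."
	fmt.Println(sanitizeControlTokens(raw))
	// → finalHello from the model.
}
```

For non-streaming responses this can run once over the full completion; for streaming, the same filter would be applied per chunk after reassembling any token fragment held over from the previous chunk.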