rcourtman
e5eb15918e
Sanitize LLM control tokens from OpenAI-compatible responses
Some local models (llama.cpp, LM Studio) output internal control tokens
like <|channel|>, <|constrain|>, <|message|> instead of using proper
function calling. These tokens leak into the UI, creating a poor UX.
This adds sanitization to strip these control tokens from both streaming
and non-streaming responses before they reach the user.
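The sanitization described above can be sketched as a small token filter. This is a minimal illustration only, not the actual implementation: the regex, the function name, and the assumption that every control token has the form `<|name|>` are all mine.

```python
import re

# Matches control tokens of the form <|name|>, e.g. <|channel|>,
# <|constrain|>, <|message|>. The exact token grammar is an assumption.
CONTROL_TOKEN_RE = re.compile(r"<\|[^<|>]*\|>")

def sanitize_control_tokens(text: str) -> str:
    """Strip internal LLM control tokens from response text."""
    return CONTROL_TOKEN_RE.sub("", text)
```

Note that in the streaming case a control token can be split across two deltas, so a real filter would also need to buffer a trailing partial `<|` prefix at chunk boundaries before emitting text to the client.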
2026-02-03 13:12:17 +00:00