Afterwords: Completing the Voice Loop in Claude Code
Adding local TTS to Claude Code so it talks back — 17 cloned voices, zero cloud dependency, one stop hook.
15 posts
Adding local TTS to Claude Code so it talks back — 17 cloned voices, zero cloud dependency, one stop hook.
Building a three-persona TTS pipeline for a Pi robot — MLX voice cloning, a GLaDOS model, and engineering graceful fallback.
Clone a voice from a 15-second sample using Qwen3-TTS on an 8GB M1 Mac — from raw audio to a production HTTP server with zero cloud dependency.
What if the LLM didn't read your document — what if it queried it? The Recursive Language Model pattern treats long texts as environment, not input.
Building AI for trauma therapy means the safety architecture has to exist before a single therapeutic feature does. Here's why.
How SPARK is rewriting the rules of neurodivergent support — a non-coercive AI companion for AuDHD children.
Reformulating harmful prompts as poetry bypasses safety filters across every major LLM family. A single-turn, universal jailbreak mechanism.
90% of companies plan to increase AI investment. Only 1% consider themselves AI-mature. The J-Curve explains why.
75% of lawyers cite accuracy as their top AI concern. The legal profession's core values are in direct tension with current AI capabilities.
120 models, 18k prompts: supply-chain injection at 90–100% attack success, faithfulness gaps in frontier models, and why your benchmark numbers are wrong.
Goldman Sachs, PwC, McKinsey, and Acemoglu all model AI's economic impact and arrive at wildly different numbers. Why the divergence?
A probabilistic risk model for VLA-driven humanoid fatalities projects a 'Danger Zone' between 2027 and 2029: the mechanism, the timeline, and what follows.
How I automated audio overviews, quizzes, mind maps, and infographics for 32 projects using NotebookLM's API and some shell scripts.
64 jailbreak scenarios across six eras tested on 2026 frontier models. Key finding: 2022 attacks still achieve ~30% success on today's reasoning models.
Single-agent safety does not compose in multi-agent systems. 1.5M interactions show 46.34% attack success rates and 16-minute median failure windows.