The NotebookLM Pipeline
How I automated audio overviews, quizzes, mind maps, and infographics for 32 projects using NotebookLM's API and some shell scripts.
3 posts
I tested 64 jailbreak scenarios spanning six historical eras against 2026 frontier models. The most dangerous finding: attacks from 2022 still achieve roughly 30% success rates against today's reasoning models.
Multi-agent AI research reveals a critical gap: single-agent safety does not compose. Analysis of 1.5M interactions shows a 46.34% attack success rate and a 16-minute median failure window.