Adrian Wedd
I build systems, break them deliberately, and use what I learn to make the next ones harder to break.
AI safety researcher, systems architect, adversarial thinker. Nearly 45 years across the stack. This is the workshop — finished work, active research, and raw thinking, all in one place.
Featured Work
All projects →
Before the Words Existed
A close reading of Neuromancer arguing Gibson encoded the experience of ADHD decades before the language existed.
Failure First
Adversarial evaluation framework for embodied AI. 120+ models, 18,000+ prompts, four headline findings, one arXiv preprint.
This Wasn't in the Brochure
A neurodivergent co-parenting guide — what happens when the life you planned meets the brain you actually have.
Why Demonstrated Risk Is Ignored
Why do people acknowledge evidence of harm and then proceed as if it doesn't exist? A deep dive into structural risk dismissal.
Footnotes at the Edge of Reality
A long-form poem about what happens when physics breaks down — and what holds together when everything else fails.
ADHDo
AI cognitive scaffold for ADHD executive function — adapts to the shape of your day with zero shame by design.
Afterglow Engine
Audio archaeology tool that mines past work for new textures. Pad mining, drone generation, granular clouds.
dodgylegally
Creative audio sampling CLI. Turns random words into instruments via YouTube and a 5,000-word dictionary.
Squishmallowdex
Collection tracker for 3,000+ Squishmallows. Like a Pokedex, but softer. Built for a friend's kid, tested by mine.
Available for consulting & builds
AI integration, security evaluation, and web development. Fixed-scope projects, monthly retainers, day-rate consulting.
Recent Writing
All posts →
Building a Personal Site in 2026
The case for constraint-led web development — Astro, zero custom fonts, no framework overhead, and a site that outlasts its builder's attention.
This Wasn't in the Brochure
A field guide for co-parenting neurodivergent children — written from inside the storm, not the clinical sidelines.
Why I Build in Public
On showing your work, shipping imperfect things, and why the commit log is more honest than the readme.
Gallery
All collections →
Audio
All episodes →
Jailbreak Archaeology: 4 Years of Broken Promises
64 historical jailbreak scenarios tested against 2026 frontier models. The most dangerous finding: 2022 attacks still achieve ~30% success rates.
When AI Systems Talk to Each Other, Safety Breaks Down
Multi-agent AI research reveals a critical gap: single-agent safety does not compose. Across 1.5M interactions, attacks succeeded at a 46.34% rate.
Welcome to the Workshop
An introduction to this space — what it is, what it isn't, and what it might become.