About
I'm Adrian Wedd. I live in Cygnet, Tasmania. I build systems, break them deliberately, and use what I learn to make the next ones harder to break.
I've been doing this since I was six years old — BASIC on a home computer, pulling apart anything I could get my hands on to see what was inside. Nearly 45 years later, the tools are more interesting but the impulse is identical.
Where the adversarial thinking comes from
I spent years coordinating direct actions for Greenpeace. Not communications, not fundraising — the Actions unit. Planning operations against well-resourced opponents who would rather you didn't succeed. That work teaches you to enumerate failure modes before you move. It teaches you that the optimistic plan is the dangerous plan. It teaches you that operational security isn't a checkbox — it's the difference between the mission succeeding and people getting hurt.
That thinking didn't leave when I moved into systems integration, cybersecurity, and eventually AI. It became the methodology. Failure First — the AI safety framework I published — isn't an academic construct. It's what happens when someone who spent years asking "what's the worst that could happen and how do we survive it" gets access to frontier AI systems and a lot of compute.
AuDHD — feature, not bug
I'm Autistic and ADHD. I'm not disclosing this as context for the hard parts. I'm naming it because it directly explains why I'm good at what I do.
Hyperfocus is a superpower in this work. When a problem is interesting enough — and complex systems failing is almost always interesting enough — I can work at a depth and velocity that most people can't sustain. The CV pipeline, the red-teaming framework, the Evolve ecosystem: these weren't built slowly. They were built in intense bursts of concentrated attention that covered enormous ground fast.
The pattern recognition that comes with autism is genuinely useful for adversarial thinking. I notice what doesn't fit. I notice the thing that looks fine but is subtly wrong. I notice the failure mode hiding inside the working system. That's not a skill I acquired — it's how I process the world.
And the directness that comes with both: I don't waste words and I don't soften assessments to make them comfortable. If your AI system has a problem, I'll tell you what it is, not a version of it that's easier to hear.
What I actually care about
I started at Greenpeace because I believed in the work, not because it was a career move. I built Freedom Engine — an AI system to help US federal inmates understand their First Step Act time credits — because those people needed it and no one else was building it. I built ADHDo and NeuroConnect because I understand executive function failure from the inside and I wanted better tools to exist.
The through-line: I build for people who need it. The small business that can't afford a Sydney agency. The healthcare practice navigating regulatory complexity without a legal team. The person inside a broken system trying to understand their rights.
I take safety seriously before it's required. My own CV pipeline includes a hallucination detector that blocks fabricated claims before anything ships. That's not a commercial requirement — it's a values statement. This site collects no analytics without your consent. That's not legally mandated — it's how I think things should work.
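For the curious, the gate reduces to something like the sketch below, assuming the pipeline emits claims as plain strings and keeps a corpus of verified source facts. The names (gateDeploy, verifiedFacts) and the naive substring matching are illustrative assumptions, not the actual detector:

```typescript
// Illustrative sketch: a deploy gate that refuses to ship generated text
// containing claims that cannot be traced back to verified source facts.

type GateResult = { ok: boolean; flagged: string[] };

// Normalise whitespace and case so trivial formatting differences
// don't defeat the match.
const normalise = (s: string): string =>
  s.toLowerCase().replace(/\s+/g, " ").trim();

// Block deployment if any generated claim lacks support in the verified
// corpus. Substring containment is deliberately naive; a real detector
// would use entailment checks or structured field-by-field diffing.
function gateDeploy(claims: string[], verifiedFacts: string[]): GateResult {
  const corpus = verifiedFacts.map(normalise);
  const flagged = claims.filter(
    (claim) => !corpus.some((fact) => fact.includes(normalise(claim)))
  );
  return { ok: flagged.length === 0, flagged };
}

// Usage: refuse to publish when anything is flagged.
const result = gateDeploy(
  ["seven years at Homes Tasmania", "ten years at NASA"],
  ["Seven years at Homes Tasmania delivering systems integration"]
);
if (!result.ok) {
  console.error("Deploy blocked. Unsupported claims:", result.flagged);
}
```

The important property is structural: generation and verification are separate stages, and the verifier has veto power over the deploy.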
I'm not neutral on AI risk. I think the failure modes are real, underestimated, and worth taking seriously before the incentives catch up. That's why I do the red-teaming work. That's why the methodology is public.
How I got here
Greenpeace & Wilderness Society
Direct action coordination, operational security, field ICT, risk assessment against adversarial opponents. This is where the failure-first instinct was forged, under conditions where getting it wrong had real consequences.
Infrastructure & cybersecurity
Seven years at Homes Tasmania delivering systems integration, penetration testing, vulnerability assessment, IDAM, and Essential Eight compliance for public housing infrastructure. Wrote the department's first Generative AI policy.
Independent research & client delivery
AI safety research with a published arXiv preprint, 120+ models evaluated, 18,000+ adversarial prompts. Alongside the research: client sites, AI pipelines, voice agents, and agentic systems built and shipped.
What I'm working on
AI systems that fail gracefully and auditably. Multi-agent architectures with real safety constraints. The gap between what organisations say about AI risk and what they actually do about it — which remains enormous and consequential.
Also: music tools, interactive installations, writing, and whatever else demands enough attention to be worth the hyperfocus.
Elsewhere
- GitHub — 115+ repos across AI, security, music tools, and more
- Failure First — AI safety methodology and adversarial evaluation framework
- This Wasn't in the Brochure — a book about co-parenting neurodivergent children
- Work with me — consulting, development, AI integration, security evaluation
Privacy
This site respects your choices. No tracking happens without consent. Analytics (if enabled) uses Google Analytics 4 with cookie flags set to SameSite=Lax and Secure. Personalisation stores visit data locally in your browser — nothing leaves your machine.
You can see exactly what this site knows about you and reset it at any time.
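If you want the mechanics, here is a minimal sketch of how that consent gate and local-only personalisation can work, assuming gtag.js for GA4. The storage keys, function names, and placeholder measurement ID are illustrative, not this site's actual code; cookie_flags is the standard gtag.js option for setting the SameSite=Lax and Secure flags described above:

```typescript
// Illustrative sketch: consent-gated analytics plus local-only personalisation.
// "G-XXXXXXXXXX" is a placeholder measurement ID; key names are assumptions.

declare function gtag(...args: unknown[]): void; // supplied by gtag.js when loaded

const CONSENT_KEY = "analytics-consent";
const VISITS_KEY = "visit-data";

// Analytics is configured only after an explicit opt-in, with the cookie
// flags described above (SameSite=Lax; Secure).
function initAnalyticsIfConsented(): void {
  if (localStorage.getItem(CONSENT_KEY) !== "granted") return; // no consent, no tracking
  gtag("config", "G-XXXXXXXXXX", { cookie_flags: "SameSite=Lax;Secure" });
}

// Personalisation never leaves the browser: visit timestamps are read
// from and written back to localStorage only.
function recordVisit(): void {
  const visits: string[] = JSON.parse(localStorage.getItem(VISITS_KEY) ?? "[]");
  visits.push(new Date().toISOString());
  localStorage.setItem(VISITS_KEY, JSON.stringify(visits));
}

// "Reset it at any time": show what is stored, then wipe it.
function resetLocalData(): void {
  console.log("Stored locally:", localStorage.getItem(VISITS_KEY));
  localStorage.removeItem(VISITS_KEY);
  localStorage.removeItem(CONSENT_KEY);
}
```

The point of the ordering: nothing analytics-related runs until the consent check passes, and everything personalisation-related stays in localStorage.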