Now

Updated February 2026. This is a /now page.

Focus

Multi-agent AI safety. Specifically: what happens when individually safe agents are composed into systems, and why their safety properties don't carry over to the composition. The research sits under the Failure First umbrella.

Building

  • This site — shipping improvements in weekly sprints
  • Agentic Index — benchmarking agent capabilities
  • A pipeline that turns project documentation into audio overviews, infographics, and quizzes using NotebookLM Studio

Writing

Publishing more of the research that has been sitting in private repos. Recent pieces cover jailbreak archaeology and multi-agent safety failures. Working on a longer piece about why demonstrated risk is systematically ignored.

Looking for

Interesting work in AI safety, systems thinking, or infrastructure. Open to research collaborations, technical advisory roles, or full-time positions where the problems are real and the approach is honest.

Location

Tasmania, Australia.

Recent Activity

Live from GitHub.