Now

Updated March 2026. This is a /now page.

Focus

Multi-agent AI safety. Specifically: what happens when individually safe agents are composed into systems, and why the safety properties don't transfer. The research sits under the Failure First umbrella — 120+ models, 18,000+ adversarial prompts, four headline findings.

Building

  • Client projects — AI integration, automation, and infrastructure for small businesses that need it most
  • This site — weekly sprints, shipping in public
  • PAOS — a local-first agentic OS that runs on your hardware, not someone else's
  • A NotebookLM pipeline that turns project docs into audio overviews, infographics, and quizzes at build time

Writing

Publishing research that's been sitting in private repos. Working on a piece about why demonstrated risk is systematically ignored, and what that pattern looks like after two decades of enumerating failure modes professionally.

Available for

Client work, research collaboration, and technical advisory. I'm particularly useful when the problem sits at the intersection of AI, infrastructure, and something that matters — healthcare, legal access, community systems, governance. Open to consulting engagements and conversations that start with a real problem, not a brief.

Location

Tasmania, Australia.

This Site

7 posts · 32 projects · 26 images · 5 episodes

3,539 words written across 7 posts.
