This is a /now page.
The question I keep running into: organisations commission AI risk assessments and then do nothing with the findings. Not because the findings are wrong, but because they're too technical to act on. That gap is what most of the work below is trying to close, in different registers.
Focus
Multi-agent AI safety and adversarial evaluation. The Failure First research (120+ models, 18,000+ adversarial prompts) is now published, alongside a growing series on what breaks when AI systems talk to each other, on embodied AI risk, and on why organisations ignore demonstrated risk.
Building
- Client projects — AI integration, automation, and infrastructure for small businesses
- This site — weekly sprints, shipping in public
- SPARK — a non-coercive AI companion for neurodivergent children
- PAOS — a local-first agentic OS that runs on your hardware
- Homelab — Frigate NVR with Hailo-8L AI detection on a Pi 5, Home Assistant automations, self-hosted infrastructure (a small scripting sketch follows this list)
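The homelab item above describes a concrete pipeline: Hailo-8L-accelerated detection running in Frigate on a Pi 5, feeding Home Assistant. As a rough illustration of scripting against that stack, here is a minimal sketch that pulls recent detections from Frigate's HTTP events API. The hostname `frigate.local`, the port, and the helper name `recent_events` are assumptions for illustration, not details from this page; field names follow Frigate's documented event schema but may vary by version.

```python
"""Minimal sketch: list recent person detections from a Frigate instance.

Assumptions (not from this page): Frigate's HTTP API is reachable at
FRIGATE_URL and the /api/events endpoint is enabled.
"""
import datetime

import requests

FRIGATE_URL = "http://frigate.local:5000"  # hypothetical hostname/port


def recent_events(label: str = "person", limit: int = 10) -> list[dict]:
    """Query Frigate's events API for the most recent detections of a label."""
    resp = requests.get(
        f"{FRIGATE_URL}/api/events",
        params={"label": label, "limit": limit},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    for event in recent_events():
        # start_time is a Unix timestamp in Frigate's event schema
        start = datetime.datetime.fromtimestamp(event["start_time"])
        print(f"{start:%Y-%m-%d %H:%M:%S}  {event['camera']}: {event['label']}")
```

In a Home Assistant setup these events typically arrive over MQTT instead; polling the API like this is mainly useful for ad-hoc scripts and dashboards.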
Writing
Publishing steadily: recent pieces cover giving a robot three voices, voice cloning with Qwen3-TTS on Apple Silicon, Perth's electronic music underground, homelab engineering, therapeutic AI safety, zero-build web development, and legal AI trust. I'm also running a NotebookLM pipeline that generates audio overviews and infographics for every post and project. See what's new for everything in one feed.
Available for
Client work, research collaboration, and technical advisory. I'm particularly useful when the problem sits at the intersection of AI, infrastructure, and something that matters: healthcare, legal access, community systems, governance. The problem is either worth solving properly or it isn't. Open to consulting engagements and conversations that start with a real problem, not a brief.
Location
Tasmania, Australia.
This Site
23,903 words written across 23 posts.
Recent Activity
Live from GitHub.