working on ollama, the easiest way to make use of LLMs
my work at ollama bounces around the entire stack, from inference to the API. i spend most of my time on ai agents, model capabilities, function/tool calling, structured outputs, and general systems engineering. if you find issues with anything ollama-related, please reach out to me on x or linkedin
previously ran a startup called extensible ai, where i worked on ai agent reliability, extensitrace (a thread-safe tracing library for agents), DAGent (agents as directed acyclic graphs), and online tool use for agents
used to work on distributed systems at tesla and autodesk in scala, go, and python. built on-device ml pipelines in c++ at apple. also did some pm along the way.
regularly make latte art, sometimes do muay thai, and like to get good at new things
recently i've also been making a ton of random but useful personal tools with ollama launch, like watchy (a background task manager with a TUI and an LLM agent), ducky (natural language to bash), and zuko (a read-only CLI wrapper that prevents destructive actions)
writings
- The Era of Generalists 2026-03-09
- The simplest and fastest way to setup OpenClaw 2026-02-23
- Subagents and web search in Claude Code 2026-02-16
- OpenClaw 2026-02-01
- ollama launch 2026-01-23
- Building Reliable AI Agents 2025-12-25
- Web search in ollama 2025-09-24
- Sampling and structured outputs in LLMs 2025-09-10
- Streaming responses with tool calling 2025-05-28
- Structured outputs in ollama 2024-12-06
- Functions as tools in ollama 2024-11-25