parth sareen

working on ollama, the easiest way to make use of LLMs

my work at ollama spans the entire stack, from inference to the API. i spend most of my time on ai agents, model capabilities, function/tool calling, structured outputs, and general systems engineering. if you find issues with anything ollama-related, please reach out to me on x or linkedin

previously ran a startup called extensible ai, where i worked on ai agent reliability, extensitrace (a thread-safe tracing library for agents), DAGent (agents as directed acyclic graphs), and online tool use for agents

used to work on distributed systems at tesla and autodesk using scala, go, and python. built on-device ml pipelines in c++ at apple. did some pm too at some point.

regularly make latte art, sometimes do muay thai, and like to get good at new things

recently i've also been making a ton of random but useful personal tools with ollama, like watchy (a background task manager with a TUI and LLM agent), ducky (natural language to bash), and zuko (a read-only CLI wrapper that prevents destructive actions)

writings

hobbies