working on ollama, the easiest way to run and use llms on your computer and in the cloud
my work at ollama bounces around the entire stack, from inference to the API. i spend most of my time on ai agents, model capabilities, function/tool calling, structured outputs, and general systems engineering. if you find issues with ollama integrations, please reach out to me on x or linkedin
ollama run parthsareen/me
previously ran a startup called extensible ai where i worked on ai agent reliability, extensitrace (a thread-safe tracing library for agents), DAGent (agents as directed acyclic graphs), and online tool use for agents
used to work on distributed systems at tesla and autodesk with scala, go, and python. built on-device ml pipelines in c++ at apple. did some pm too at some point.
regularly make latte art, sometimes do muay thai, and like to get good at new things
writings
- Building Reliable AI Agents 2025-12-25
- Web search in ollama 2025-09-24
- Sampling and structured outputs in LLMs 2025-09-10
- Streaming responses with tool calling 2025-05-28
- Structured outputs in ollama 2024-12-06
- Functions as tools in ollama 2024-11-25