Yet another generalist

It’s going to be a very casual post today and it might even leave you with nothing. Just a brain dump of some of the thoughts I’ve been having recently. Every em-dash is written by yours truly.

I, myself, am a generalist. And I think I always have been. I’ve never been satisfied with doing the same thing for very long. I’ve also enjoyed coming up with fun ways of solving problems and don’t care as much about the tools I’m using.

At some point I got really good at learning things quickly. I wonder if skipping all those classes and studying the night before had anything to do with it. And when you can learn anything pretty quickly (I’m talking getting 70-80% good at pretty much anything), you start to wonder: why not keep doing new things?

I went to Waterloo for Mechatronics Engineering (fancy for robots). I find it funny that even my program was known for being a jack of all trades and master of none – because that’s how I feel on a day-to-day basis now. You got to learn a bunch, from fluid mechanics to power systems to even machine learning.

For those unfamiliar with Waterloo – most students do 6 internships throughout their undergrad.

The 6 I did were:

  • Ritual (QA & lots of scripting)
  • Deloitte (Natural Language Processing - the OG pre-LLM stuff)
  • Tesla (Distributed Systems)
  • Apple (On-device Automatic Speech Recognition)
  • Apple (Product Management)
  • Tesla (More Distributed Systems)

I also did a bunch of stints while being in school; working part-time at various startups doing product, designing systems, or one-off ML projects.

A few things stood out to me at every internship. My final internship ended up being relatively breadth-first: I was coming off a product internship, so it felt easy to tie together many concepts and ship something end-to-end. However, that usually doesn’t show as much technical prowess (in the pre-LLM era). At the end of my internship, my mentor at Tesla told me I had two options:

  1. Go deep into a certain area and complement it with other skills, which lends itself well to corporate culture

  2. Be good at doing many different things and work at a startup

His points actually held a lot of merit. And they still do to this day – to an extent. When I first started using Cursor (~ Oct 2023), I quickly realized that there was going to be a step-wise change in how I’d be able to learn and build anything.

I first felt this feeling of “nothing will ever be the same again” with ChatGPT in Nov 2022, then again when I tried Cursor, and once more when I built agents on top of llama3.1:8b (which is a bad idea now, but it felt so cool to even get a tool call out of something running fully locally).


An email from Cursor I found in my inbox from Oct '23

In a way it felt like the playing field had evened out a bit. Especially if you got started early with trying out these tools. If you had done enough reps of actually learning programming and engineering, then AI was a tool which could change how you work.

Given this, the first point – “go deep and complement with other skills” – seems to apply less and less for quite a lot of skills. I think it holds true for the ones which evolve quickly, where there are a lot of new techniques or methods you have to stay up-to-date on. But for a lot of existing technologies, it just doesn’t apply anymore.

Maybe you like databases? You can cover a surface-level view in an hour, and as long as you describe your problems well to an agent, you’ll get reasonable results. So the value of deep, tacit knowledge held by a few is now a $20 subscription away.

I felt the step-wise change again more recently, back in November 2025, when I was able to put agents in a loop to not just write code but also verify the correctness of what I was building (a.k.a. verification loops).

In my head, a bunch of concepts like RL reward design and verification loops collapse into one: optimizing a model or an agent to achieve a certain task. Eno has a great talk on verification loops for agents. [1] This verification loop lets you build more autonomous agents which can take a task closer to completion, if not all the way.

Over the last few years, it seems that what remains is either extremely deep engineering work – I’m talking low-level work, grinding kernels, multi-threading, distributed systems design, AI-research engineering – or work which just needs to get done and doesn’t require deep technical expertise. Turns out, more often than not, you just need to get work done, and someone needs to orchestrate what that work is. And soon enough, meeting a quality bar is just going to be a matter of a Ralph loop [2] or an Autoresearch [3] task.

My job has changed significantly over just the last few months. Back in Sep/Oct, I was writing a bunch of code through careful prompting, generating smaller bits with AI since the quality was just not up to my bar. Now, it feels like for the most part I am orchestrating work, diving into the nitty-gritty only at times to make better architectural decisions. An argument to be made is that the model is just missing some of the context that I don’t even find worthwhile to write down, but use to inform certain abstractions.

I’m fascinated by this: I don’t think we’ve fully solved the necessary interfaces or architectures to effectively use LLMs – even with all the fancy harnesses and tools we have now. Something feels missing to be able to reach a flow state and feel the creativity again. I’ve often compared my code to art – purely from the aspect of creative expression – but I do much less of that now. What gets generated doesn’t really feel like mine unless I’ve made a significant stroke after the agent-generated code.

I am somehow having the most fun I’ve ever had building things, while also feeling sad about missing the process of creating. The ability to create this much, this instantly, has never existed before. Most of my day-to-day is not just writing software anymore (or even prompting it); it’s become more about product planning, analyzing metrics, looking at Grafana, or at my 16 split terminals to monitor my agents. While I miss the process of creating certain things through code – feeling the blissful flow state, or even just solving a bug – I overall feel like I’ve gained superpowers as a generalist, where I now have the ability to go end-to-end from building a feature, to shipping it, to the marketing around it.

Through some luck, I use a good chunk of what I’ve done through my internships, part-time work, full-time work, and even my own startup almost daily at Ollama. Part of it is the fact that it’s a startup, and there is a lot of different work to be done.

Even my first internship, doing QA at Ritual, was insightful as it taught me how to break software – which in turn taught me how to build it better. At Deloitte, I worked in a prototyping lab focused on AI to improve internal workflows and products – I did a lot of Natural Language Processing (NLP) work there. My first Tesla internship was with the same overall team I ended up on for my last one, since I really enjoyed working on large-scale distributed systems. Both were definitely my grindiest internships, but also the most rewarding, and I’m very thankful for that. My time at Apple for both internships was pretty cool: the first time around I got to build a new mini-SDK to do speaker diarization on-device, and for the other I did a lot of churn analysis for Siri as a product manager.

It’s going to take some time for most people to catch up to using these models, harnesses, and the bajillion tools which are spinning up. But the direction is pretty clear: generalists are going to start winning out in most settings. People who are adaptable, roll with the punches, can dive into different kinds of work, and are able to provide value – and more interestingly, their opinion – will succeed in the coming few years.

I think specialists are also going to do great in their own niche – there’s just going to be a lot less of it. If you’re a specialist, feel the grain of the times, keep a pulse on the research, and make bets on what to get good at. The people who are just okay at this stuff are going to struggle the most, especially if they’re not up-to-date with leveraging these tools.

If you are a generalist: I think we tend to get fixated on just getting the thing done and can be a bit hasty, instead of spending a bit more time in the weeds. This is usually where specialists are able to come up with better or more elegant solutions. We’re also kinda one-shotted by AI, as we’re able to get to a satisfactory 70-80% with ease, but the rest is sometimes a struggle.

At this point, I’d argue that the ability to go deep and learn whatever is needed for the task to a greater degree (to the extent of doing whatever needs to be done) is what will set us apart. At least for the next 6 months to a year. Hard to know these days.


References

[1] Eno’s talk on verification loops and RL reward design

[2] The Ralph Loop

[3] Andrej Karpathy’s Autoresearch