Keeping Thought Alive in the Gaps Between Executions & Using LLMs as a Common-Sense Powerhouse

Lately, I've been using GPT to upgrade my planning process through personal OKRs. My long-term planning and time management haven't felt particularly effective, so I turned to LLMs for a fresh perspective.

With this silicon-based intelligence's input, I ended up with a more refined and actionable OKR (see above). Compared to my original version, its suggestions were significantly more effective. One major reason is that LLMs bring a kind of common sense I don't naturally possess: more realistic time estimates and better-structured paths forward. It feels like an unlocked superpower, almost a third eye for planning.

LLMs function like a distilled version of collective human intelligence, much as small models distill knowledge from larger ones using high-quality data. I like to imagine them as trained on the aggregated outputs of countless human "models," extracting the most generalizable patterns. What emerges is a form of common sense, refined through an implicit averaging of diverse perspectives.
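To make the analogy concrete, here's a minimal sketch of knowledge distillation in the Hinton et al. (2015) sense: a student model is trained to match a teacher's temperature-softened output distribution. This is an illustrative PyTorch snippet, not part of my workflow:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target loss: the student learns to match the teacher's
    temperature-softened output distribution."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL divergence between softened distributions; the T^2 factor
    # rescales gradients back to the original magnitude.
    return F.kl_div(soft_student, soft_teacher, reduction="batchmean") * T * T
```

The temperature smooths the teacher's distribution so the student also learns from the relative probabilities of "wrong" answers, which is where much of the generalizable signal lives.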

This also reminds me of a key idea from Noise: averaging many independent judgments cancels out individual error, so collective estimates can be remarkably accurate. The classic example is Galton's county-fair contest, where the crowd's collective estimate of an ox's weight averaged out to be nearly spot on. I see that average as common sense. Ray Dalio echoes a similar point in Principles: common sense is invaluable, and we should actively leverage it.
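A toy simulation makes the noise-cancellation mechanism concrete. The numbers below are made up; the point is that individual guesses can be wildly off while their average lands close to the truth:

```python
import numpy as np

rng = np.random.default_rng(0)
true_weight = 543  # hypothetical ox weight in kg
# 800 independent guesses, each quite noisy (std dev ~60 kg)
guesses = rng.normal(loc=true_weight, scale=60, size=800)

print(f"typical individual error: {np.mean(np.abs(guesses - true_weight)):.1f} kg")
print(f"error of the crowd's mean: {abs(guesses.mean() - true_weight):.1f} kg")
# Independent errors cancel: the mean's error shrinks roughly as 1/sqrt(N).
```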

So, when stepping into a completely new domain, tapping into an LLM's common sense at the outset can provide surprisingly good initial estimates. That's a massive leverage point.
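In practice, this can be a single API call. Here's a sketch using the OpenAI Python client; the model name and prompt are placeholders rather than recommendations:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

resp = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[
        {"role": "system", "content": "You are a pragmatic planning assistant."},
        {"role": "user", "content": (
            "I'm entering a new domain: building a RAG side project. "
            "Give me a realistic time estimate and a week-by-week plan, "
            "and flag the assumptions behind each estimate."
        )},
    ],
)
print(resp.choices[0].message.content)
```

Asking the model to flag its assumptions matters: the common-sense estimate is only a starting point, and you want to know which parts to verify yourself.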


Here are a couple of things I found interesting and will try out next week:

  1. Start Actions with a "Question-Driven" Approach

    • Before beginning a task, ask: What key assumption am I testing?

    Examples:

    • Job search → "I'm reaching out to these people today to test whether AI startups care more about LLM research skills or engineering skills."
    • Portfolio project → "Do users actually want this feature? Is there a smaller way to validate this need?"
    • Paper reading → "Why am I reading this? What does it contribute to my LLM research?"
  2. End Actions with a 5-Minute Micro-Reflection

    • After a task, quickly jot down 1-2 sentences answering:
      • What new insights did I get from this?
      • How can I optimize this next time?

    This can be a Notion section called Execution Insights, updated daily in 5 minutes (a plain-text fallback is sketched after this list). Examples:

    • Found that "AI unicorn" in a cold email intro gets more responses than "LLM research."
    • For a distributed systems assignment, the professor cares more about implementation details than benchmark numbers.
    • Discovered that chunking methods significantly impact retrieval precision in LLMs—worth testing in my side project.
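If Notion feels heavy, a plain-text fallback does the same job. Here's a minimal sketch (the file name and entry format are my own invention) that appends a dated micro-reflection to a Markdown log:

```python
from datetime import date
from pathlib import Path

LOG = Path("execution_insights.md")  # hypothetical file name

def log_insight(insight: str, next_time: str) -> None:
    """Append a dated 1-2 sentence micro-reflection to the running log."""
    entry = (
        f"\n## {date.today().isoformat()}\n"
        f"- Insight: {insight}\n"
        f"- Optimize next time: {next_time}\n"
    )
    with LOG.open("a", encoding="utf-8") as f:
        f.write(entry)

log_insight(
    "'AI unicorn' in a cold-email intro gets more replies than 'LLM research'.",
    "A/B test subject lines on the next batch of outreach emails.",
)
```

The tooling is beside the point; what matters is that the reflection takes under five minutes and leaves a searchable trail.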

In short: keep thinking alive in the gaps between executions.