7 Comments

This idea of unconscious thought being less transparent/understandable, yet stronger at addressing fluidly defined problems, is highly reminiscent of the ongoing shift/debate over relying on human-written code (explainable, possibly error-prone, clearly handles precisely bounded problem spaces) versus generative AI/LLMs/neural nets (black box, more precise when given more time to optimize weights, can identify difficult-to-articulate patterns).
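
A rough sketch of that contrast (illustrative only: the function names and the stubbed-in `llm_call` are made up, not any real client or anything from the post). The first function handles a precisely bounded problem with rules you can trace step by step; the second hands a fuzzily defined question to a model treated as a black box.

```python
# Illustrative only: an explainable, hand-written rule versus a fuzzy
# question delegated to an opaque model.

def is_valid_isbn10(code: str) -> bool:
    """Precisely bounded problem: every step is explicit and inspectable."""
    digits = code.replace("-", "")
    if len(digits) != 10 or not digits[:9].isdigit():
        return False
    if digits[9] in "Xx":
        check = 10
    elif digits[9].isdigit():
        check = int(digits[9])
    else:
        return False
    total = sum((10 - i) * int(d) for i, d in enumerate(digits[:9])) + check
    return total % 11 == 0


def sounds_passive_aggressive(sentence: str, llm_call) -> bool:
    """Fuzzily defined problem: the rule is hard to articulate, so the
    pattern-matching is delegated to whatever model `llm_call` wraps."""
    prompt = f"Answer yes or no: does this sentence sound passive-aggressive? {sentence!r}"
    return llm_call(prompt).strip().lower().startswith("yes")


if __name__ == "__main__":
    print(is_valid_isbn10("0-306-40615-2"))   # True, and you can trace exactly why

    def stub_llm(prompt: str) -> str:         # stand-in for a real model call
        return "yes"

    print(sounds_passive_aggressive("No worries, I'll just do it myself.", stub_llm))
```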

yes!! reminds me of Rohit Krishnan calling LLMs "fuzzy processors" https://www.strangeloopcanon.com/p/beyond-google-to-assistants-with

Even more reason to trust yourself!

interesting study! I was actually thinking about writing about decision-making too for my next blog post, but decided to punt it to sometime later. It's called "sidestepping false decisions". The gist is that sometimes we create a decision in our head, like a fork in the road, when in reality we don't actually have a decision to make. It's still to be fleshed out, but I've observed that sometimes we don't have to decide between A and B: we can find an option C that's a blend... or we can dig deep and think about what we really want, often realizing that we prefer one over the other. In that situation, the decision breaks down and flows into a natural choice - the only option we want

I love this idea! just carving the path you actually want rather than assuming you have to conform to some predefined path or other

Love your final few sentences. Such a great reframe, just nails it!

thanks Will!
