Vishal Misra
vishalmisra.bsky.social
Vice Dean, Computing and AI @CUSEAS. Dean of Cricket Analytics @SFOUnicorns. Tweets on academia, cricket, dad/bad/wry jokes. Opinions of the guy I'm pointing at.
Key insight: LLMs encode algorithmic reasoning during training. Chain-of-thought prompting isn't 'teaching' new skills; it's triggering computational pathways that already exist. DeepSeek's (and similar) approaches made these pathways the default, while other models require explicit activation.
February 19, 2025 at 2:42 AM
TokenProbe's visualization shows how prompting unlocks these 'latent algorithms': when asked to solve 33×117 directly, the answer tokens show uncertainty (lighter green). But prompt for step-by-step reasoning, and high-confidence tokens (solid green) reveal systematic computation emerging.
February 19, 2025 at 2:41 AM
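The coloring idea described above can be sketched in a few lines. This is a minimal illustration, not TokenProbe itself: it assumes the tool shades each generated token by the model's per-token probability, and the log-probability values below are entirely hypothetical stand-ins for what a real model might emit (note that 33×117 = 33×100 + 33×17 = 3300 + 561 = 3861).

```python
import math

def confidence_shade(logprob):
    """Map a token log-probability to a qualitative shade.
    High-probability tokens render as solid 'green', borderline ones as
    'light green', and low-probability tokens as 'pale'. The thresholds
    here are arbitrary choices for illustration."""
    p = math.exp(logprob)
    if p >= 0.9:
        return "green"
    elif p >= 0.5:
        return "light green"
    return "pale"

# Hypothetical per-token logprobs: answering 33*117 in one shot vs.
# emitting an explicit step-by-step decomposition (33*117 = 3861).
direct = [("3", -0.6), ("8", -0.9), ("6", -1.2), ("1", -0.7)]
stepwise = [("33*100=3300", -0.05), ("33*17=561", -0.1),
            ("3300+561=3861", -0.02)]

for tok, lp in direct:
    print(f"direct   {tok}: {confidence_shade(lp)}")
for tok, lp in stepwise:
    print(f"stepwise {tok}: {confidence_shade(lp)}")
```

With real data, the per-token probabilities would come from a model's scores (e.g. softmaxed logits at each generation step); the pattern the post describes is that the stepwise trace clusters in the high-confidence band while the direct answer does not.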