Hardik
hardikvala.com
founder @ qaggle.com, one over one

prev: yc s23, ml eng @ google, apple

i don't endorse anything I say
using None as an index value for torch tensors adds a new dim. ludicrously simple when visualized.
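A quick sketch in numpy (numpy here so it runs anywhere; the rule is identical for torch tensors):

```python
import numpy as np

t = np.arange(6).reshape(2, 3)  # shape (2, 3)

# None in an index slot inserts a new size-1 axis at that position
a = t[None]        # shape (1, 2, 3): new leading dim
b = t[:, None]     # shape (2, 1, 3): new middle dim
c = t[:, :, None]  # shape (2, 3, 1): new trailing dim
```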
November 21, 2025 at 2:03 PM
finally made sense of all the ways you can index tensors in pytorch / numpy #TensorViz
November 20, 2025 at 1:00 PM
Creating these visuals is addictive, but it's getting out of hand. I had to see the difference between torch `chunk` and `unbind`, but this visual doesn't need to exist. #TensorViz
November 13, 2025 at 7:30 PM
Not my best work. Created a visual of torch's transpose op over 3d tensors, but I don't know if the random coloring is instructive or confusing. #TensorViz
November 12, 2025 at 8:27 PM
TIL `tensor[None]` is just a shorthand for `tensor.unsqueeze(0)` in torch.
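A minimal sketch of the same idea in numpy, where `np.expand_dims` plays the `unsqueeze` role:

```python
import numpy as np

t = np.arange(12).reshape(3, 4)

# t[None] prepends a size-1 axis, same as np.expand_dims(t, 0);
# this mirrors torch, where tensor[None] == tensor.unsqueeze(0)
front = t[None]
assert front.shape == (1, 3, 4)
assert np.array_equal(front, np.expand_dims(t, 0))
```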
November 12, 2025 at 2:44 AM
seriously!? every time i open the app
November 11, 2025 at 7:42 PM
Visual explainer for the pytorch op, `torch.unbind` #TensorViz
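For anyone who prefers code to pictures, a small numpy sketch of what `unbind` does (the `unbind` helper here is my own stand-in, not a numpy API):

```python
import numpy as np

def unbind(t, dim=0):
    """numpy sketch of torch.unbind: slice along `dim`,
    dropping that dim from each returned piece."""
    return tuple(np.moveaxis(t, dim, 0))

t = np.arange(6).reshape(2, 3)
rows = unbind(t, dim=0)  # 2 pieces, each shape (3,)
cols = unbind(t, dim=1)  # 3 pieces, each shape (2,)
```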
November 10, 2025 at 7:38 PM
From Seth Godin's book, The Dip.
November 9, 2025 at 11:36 PM
Finished the visual explainer for the torch op, `chunk`. #TensorViz
November 7, 2025 at 8:22 PM
Started a visual explainer for the torch op, `chunk`. The full explainer will have different arg combinations for no. of chunks and dim.
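A small numpy sketch of `chunk`'s splitting rule (the `chunk` helper is my own stand-in mimicking torch's behavior: pieces of size ceil(n / chunks) along `dim`, with a smaller last piece when n isn't divisible):

```python
import math
import numpy as np

def chunk(t, chunks, dim=0):
    """numpy sketch of torch.chunk semantics."""
    n = t.shape[dim]
    step = math.ceil(n / chunks)
    return [np.take(t, np.arange(i, min(i + step, n)), axis=dim)
            for i in range(0, n, step)]

t = np.arange(10)
parts = chunk(t, 3)  # sizes 4, 4, 2, like torch.chunk(t, 3)
```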
November 6, 2025 at 11:19 PM
Framework lock-in is definitely not a day 0 / pre-PMF concern.

mlechner.substack.com/p/why-we-sta...
November 5, 2025 at 10:35 PM
whytorch.org is quietly addictive. It's been fun playing with pytorch tensors in a visual format. Not affiliated with the project. Just a fan.
November 5, 2025 at 9:11 PM
torch.outer(a, b), visualized #TensorViz

(drawn w/ bit.ly/tensordiagram)
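In code form, a quick numpy sketch (`np.outer` matches `torch.outer` here):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([10, 20])

# outer(a, b)[i, j] == a[i] * b[j], shape (len(a), len(b))
out = np.outer(a, b)

# same thing via broadcasting: a column times a row
assert np.array_equal(out, a[:, None] * b[None, :])
```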
November 2, 2025 at 7:46 PM
Any time you see t[:, None] on a 1d tensor, it's just a transpose: the row becomes a column #TensorViz
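A quick numpy check (note that `.T` on a 1d array is a no-op, so `[:, None]` is how you actually get the column):

```python
import numpy as np

t = np.array([1, 2, 3])  # shape (3,): a "row"

col = t[:, None]         # shape (3, 1): a column vector
assert col.shape == (3, 1)

# .T on a 1d array changes nothing, which is why the
# None-indexing trick is the idiomatic way to do this
assert t.T.shape == (3,)
```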
November 2, 2025 at 12:04 AM
if you, like me, forget how tensors get reshaped.

(drawn with bit.ly/tensordiagram)
October 31, 2025 at 8:22 PM
Fed up with trying to imagine chains of tensor transformations in my head, I wrote a library to relieve my pain: github.com/hardik-vala/.... It lets you visualize torch, jax, tf, and numpy tensors in notebooks and other python contexts, for understanding and debugging. Enjoy.
October 30, 2025 at 5:51 PM
is playground mode a new feature in colab? liking it for messing around with a notebook you authored that's also shared
October 29, 2025 at 5:49 PM
When you create a typo in the process of fixing one. Entropy refuses to go down.
October 28, 2025 at 9:07 PM
LFG
October 24, 2025 at 5:52 PM
• 𝗚𝗿𝗮𝗱𝗶𝗲𝗻𝘁-𝗳𝗿𝗲𝗲 𝗹𝗲𝗮𝗿𝗻𝗶𝗻𝗴: Emerging approaches to learning that are real-time and efficient and don't require gradient updates. Things like model merging, context priming, etc. Nothing new, but I love the term from Sara Hooker's talk.

Thanks to the Flower team for organizing!
October 2, 2025 at 11:26 PM
• Cool applications: e.g. 𝗖𝗮𝗻𝗰𝗲𝗿 𝗿𝗲𝘀𝗲𝗮𝗿𝗰𝗵 𝗶𝗻 𝘀𝗽𝗮𝗰𝗲!
• 𝗡𝗲𝘁𝘄𝗼𝗿𝗸 𝗯𝗮𝗻𝗱𝘄𝗶𝗱𝘁𝗵 𝗶𝘀 𝗶𝗺𝗽𝗿𝗼𝘃𝗶𝗻𝗴 to make decentralized training feasible
October 2, 2025 at 11:26 PM
Takeaways from the Flower AI conference last week on decentralized / federated AI:

• 𝗦𝗵𝗶𝗳𝘁 𝗶𝗻 𝗯𝗿𝗮𝗻𝗱𝗶𝗻𝗴 from "federated" and "privacy", to "decentralized" and "open".
October 2, 2025 at 11:26 PM
One day, I would like my name to be immortalized as an LLM vocab token.
September 25, 2025 at 12:34 AM
omg, the ONNX runtime is wicked fast. ran some perf benchmarks for ModernBERT on cpu vs gpu (rtx 4070) and ONNX was 3x faster than vanilla torch. not superior on cpu tho.
September 17, 2025 at 4:30 AM
So any model trained/fine-tuned with llama outputs — even if it's not based on llama's architecture at all — must be prefixed with "llama" 😲 .

e.g. huggingface.co/ai4privacy/l... (ModernBert fine-tune)

License: www.llama.com/llama3_1/lic...
September 17, 2025 at 1:44 AM