Abhishek Divekar
adivekar.bsky.social
ML Science Lead @Amazon; prev @UT Austin. Team Lead for India at the International AI Olympiad 2025.
Reposted by Abhishek Divekar
It brings me no pleasure to report that completing a minor task you've been avoiding (1) is not very hard and (2) makes you feel better afterwards
September 16, 2025 at 4:42 PM
Logo drop! 🇮🇳 This is what Team India will wear for its historic first appearance at the International AI Olympiad!

The theme: 8 feathers for our 8 incredible Olympians. Let's cheer them on!

#IOAI2025 #TeamIndia #AI
July 30, 2025 at 3:25 PM
You should try cheese naan, if you haven’t.
July 13, 2025 at 12:45 PM
I would love this! I currently use MacWhisper and the iOS app version of it is just lame.
Also, please support parakeet2; it is genuinely better and faster than Whisper v3 large turbo, in my experience.
July 13, 2025 at 9:10 AM
I’m wary of it on Bsky, the other website, LinkedIn, Substack…short-message platforms where it’s unexpected because it’s convoluted to enter compared to a regular dash.
In Word docs and LaTeX, it’s common.
July 7, 2025 at 3:53 AM
You can summarize for a certain style, level of writing, and amount of attention the reader can spare.
For a highly technical paper in a field I know nothing about, I want a very simplified summary.
When I know I need to go deep, I use a summary to give the narrative along a specific axis.
June 17, 2025 at 4:33 AM
Not just expensive, it’s waymo expensive™️
June 6, 2025 at 10:20 AM
Spare a paper for the low rank part? (also, graceless collage is spot on)
June 6, 2025 at 10:17 AM
You supposedly read this article, yet still wrote a complicated sentence with a bunch of jargon.
June 3, 2025 at 10:47 AM
Reposted by Abhishek Divekar
I wrote a very long blog post about AI writing. I hope you'll read it.

meresophistry.substack.com/p/the-mental...
The mental tyranny of AI writing
An arduously long blog post
meresophistry.substack.com
March 29, 2025 at 7:10 PM
How so? Like, who are they marketing for/against?
May 26, 2025 at 3:06 PM
A Jurafsky x LeCun paper is bound to be interesting.
May 26, 2025 at 3:04 PM
I won’t comment on the legal aspect since I am not a lawyer. I will say that this is super hard to enforce realistically…I’ve had my papers summarised by websites with some of the details wrong, but I can’t definitively say it wasn’t human-written. The summary was a substantial transformation.
May 24, 2025 at 2:04 PM
I don’t think a license will stop this…someone looking to plagiarise can already bypass most licenses by using an LLM to slightly paraphrase content and claim the content is derivative.
May 24, 2025 at 1:55 PM
Honestly, what’s the reason to make it open source then? What is the situation you are trying to avoid?
May 24, 2025 at 1:41 PM
Jury seems to be out on this tbh bsky.app/profile/emol...
I grew up in the Indian education system with pretty poor teachers, so I taught myself a lot. I’d have loved something that made that easier.
The state of research on AI and education from controlled studies: growing evidence that, when used as a tutor with instructor guidance, AI seems to have quite significant positive effects. When used alone to get help with homework, it can act as a shortcut that hurts learning.

Still early days.
May 22, 2025 at 3:16 AM
That’s the fish mate
May 20, 2025 at 6:42 PM
I feel extremely seen.
May 20, 2025 at 3:47 PM
Did we read the same post? I see it as poking fun at a (very) bad author, but how is it at all corrosive?
May 20, 2025 at 3:39 PM
Reposted by Abhishek Divekar
I want to share my latest (very short) blog post: "Active Learning vs. Data Filtering: Selection vs. Rejection."

What is the fundamental difference between active learning and data filtering?

Well, obviously, the difference is that:

1/11
May 17, 2025 at 11:47 AM
I would click the cat-protocols post immediately. I wouldn’t even think about it.
May 15, 2025 at 4:35 AM
I didn’t install neuralink, how can you read my thoughts 🤔
May 13, 2025 at 2:46 PM
Actually, if you read the blog till the end, it mentions that the 14.3% is because it is over-helpful (benign hallucinations).
May 7, 2025 at 2:53 AM
Reposted by Abhishek Divekar
DeepSeek-R1 Thoughtology: Let’s <think> about LLM reasoning

142-page report diving into the reasoning chains of R1. It spans 9 unique axes: safety, world modeling, faithfulness, long context, etc.
April 13, 2025 at 3:04 AM