Albert Thomas
@albertcthomas.bsky.social
Research engineer at Huawei. https://albertcthomas.github.io/blog/
Also, github.com/QwenLM/Qwen2... gives prompts you can try, such as "Read all the text in the image" (in case you were not aware of this notebook). Not sure this will lead to a drastic change.
Qwen2.5-VL/cookbooks/ocr.ipynb at main · QwenLM/Qwen2.5-VL
Qwen2.5-VL is the multimodal large language model series developed by Qwen team, Alibaba Cloud. - QwenLM/Qwen2.5-VL
github.com
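In case it helps, a minimal sketch of trying that prompt outside the notebook, following the usual Qwen2.5-VL loading path with the transformers library; the checkpoint name and image path below are assumptions, not taken from the cookbook.

from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Assumed checkpoint; any Qwen2.5-VL instruct model should work the same way.
model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# The OCR prompt mentioned above, applied to a placeholder image path.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "document.png"},
            {"type": "text", "text": "Read all the text in the image"},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, _ = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, padding=True, return_tensors="pt").to(model.device)

# Generate and decode only the newly produced tokens.
output_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])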
May 19, 2025 at 1:55 PM
What's the difference between your model and the MLX version?
May 19, 2025 at 1:50 PM
And reviewing will be the bottleneck, although we can use LLMs to help us write the reviews (not saying that the review should be done by the LLM alone).
May 15, 2025 at 7:44 PM
Yes, I was going to ask for more details about LangChain, given how popular it seems to be.
December 23, 2024 at 1:51 AM
Thanks for the pointer!
December 6, 2024 at 10:00 AM
Really? mamba still seems to be faster than conda; I might just need to update my conda :)
December 4, 2024 at 11:51 PM
Never mind, I saw someone asked the same question below :)
December 4, 2024 at 11:49 PM
Why is it considered an anti-pattern to have global environments?
December 4, 2024 at 10:10 PM
Yes! I often do the same when I am in the debugger
November 29, 2024 at 7:40 PM
Ok, release updates cannot be automatic :)
November 26, 2024 at 5:36 PM
Wow, this is nice, thanks a lot for sharing! Does this configure automatic updates as well? What about release updates? I find myself stuck with old Ubuntu releases...
November 26, 2024 at 10:07 AM
Reposted by Albert Thomas
Here’s a little script I made which I use to get a server up and running automatically (after answering a few questions, including “what’s your name”) in just a few minutes.

You can even fully automate it with a few environment variables.
github.com/AnswerDotAI/...
github.com
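Not the actual script (that lives at the link above), just a rough sketch of the pattern it describes: answer a few questions interactively, or set environment variables to fully automate it. The questions and variable names below are made up for illustration.

import os

def ask(question, env_var, default=None):
    # If the environment variable is set, use it (fully automated mode);
    # otherwise fall back to an interactive prompt.
    value = os.environ.get(env_var)
    if value:
        return value
    suffix = f" [{default}]" if default else ""
    answer = input(f"{question}{suffix} ").strip()
    return answer or default

# Hypothetical questions and variable names, for illustration only.
name = ask("What's your name?", "SERVER_SETUP_NAME")
port = ask("SSH port?", "SERVER_SETUP_SSH_PORT", default="22")
print(f"Setting up the server for {name} (SSH on port {port})...")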
November 26, 2024 at 9:35 AM
👋
November 24, 2024 at 1:00 PM