Jamie Tidman
jamietidman.bsky.social
CTO at Japeto.ai - building secure language AI products for healthcare and local government.
Europe != UK
October 21, 2025 at 10:25 AM
Alexa powered by Amazon Titan sounds even less useful than it currently is. Titan is by far the worst commercial LLM I've used.
February 5, 2025 at 9:30 PM
There's a bit in Silicon Valley where they give admin access to an Adderall-fueled teenage coder who destroys the entire system, and I feel like that's about to play out on a massive scale
February 4, 2025 at 4:10 PM
This still makes sense canonically given this version of Lorca was from the evil timeline.

Every car is a Cybertruck in the mirror universe
January 29, 2025 at 2:23 PM
I agree with your last point wholeheartedly! My point is that Deepseek is a vindication of LLMs as a technology, not a threat or a sign of failure.
January 27, 2025 at 11:01 PM
Yes, it is unfair given the advances are proportional to the investment. Deepseek makes that investment more rational, not less.
January 27, 2025 at 10:50 PM
Fair enough. I’d say given the advances in AI over the last 3 years anything short of ASI is apparently going to be defined as “not enough progress”!
January 27, 2025 at 10:48 PM
“Little to no progress”!!
January 27, 2025 at 10:43 PM
An extremely significant development in AI efficiency = overhyped?

That doesn't make sense to me.
January 27, 2025 at 8:35 PM
I can see a market for this for some exasperated CISO who fields so many dumb cybersecurity questions that he says FINE WE’RE STORING ALL OUR DATA ON THE MOON
January 25, 2025 at 1:02 PM
For a large swathe of the population "OpenAI" and "AI" are the same, unfortunately.
January 24, 2025 at 1:49 PM
T630
January 23, 2025 at 7:05 PM
llama-cpp-python
January 23, 2025 at 7:02 PM
For us they are a useful local test proxy for our production cloud environment, which uses L4s. They still have their uses.
January 23, 2025 at 6:50 PM
Yep. This is not the build for tokens per second. Our use case for this is batch processing - it's not fast enough for real-time chat on larger models.
January 22, 2025 at 8:47 PM
For us, it was buying a very old Dell PowerEdge server for £100 and putting 4 Tesla P40s in it.

Very slow, but it has 96GB VRAM and runs 70B models comfortably.
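A rough back-of-the-envelope check of why 96 GB is comfortable for a 70B model. The quantization level, overhead factor, and KV-cache allowance below are my own illustrative assumptions, not details from the posts:

```python
# Back-of-the-envelope VRAM check: does a 70B model fit in 4x Tesla P40?
# Assumptions (illustrative, not from the original posts): 4-bit quantization
# (~0.5 bytes/param plus ~15% overhead for scales and buffers) and a modest
# KV-cache allowance for a few thousand tokens of context.

GB = 1024**3

num_params = 70e9
bytes_per_param = 0.5        # ~4-bit quantized weights
quant_overhead = 1.15        # quantization scales, zero-points, misc buffers

weights_gb = num_params * bytes_per_param * quant_overhead / GB
kv_cache_gb = 5              # rough allowance; grows with context length

total_gb = weights_gb + kv_cache_gb
vram_gb = 4 * 24             # four Tesla P40s at 24 GB each = 96 GB

print(f"weights ~{weights_gb:.1f} GB, total ~{total_gb:.1f} GB of {vram_gb} GB")
print("fits" if total_gb < vram_gb else "does not fit")
```

Under these assumptions the quantized weights land around 35-40 GB, leaving ample headroom in 96 GB, which is consistent with the "runs 70B models comfortably" claim; the same arithmetic also shows why a single 24 GB card cannot hold such a model.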
January 22, 2025 at 8:35 PM