rnatr
@rnatr.com
Civic Tech / Data Science / AI - All opinions are my own. 95% things I'm reading and 5% things I'm doing.
If they are going to build on a beach, clearly they should be using the Seabees. I joke, but the situation is still awful.
March 22, 2025 at 5:43 PM
I would love to hear strategies for dealing with this. I have been on the "no one" side before, and sometimes it is like shouting into the void.
January 4, 2025 at 11:52 AM
Having worked alongside them, I can guarantee there is a story and a name for this. I don't happen to know it, though.
December 14, 2024 at 6:26 PM
But when looking at how my clock is performing vs. the monitors worldwide, I get some interesting results. Practically everything is within ±5 ms. However, there are spikes in offset every day, but only for three European monitors in NL and DE, at 0100-0300 UTC. The others are fine.
December 8, 2024 at 11:45 AM
I've absolutely done this, but I also measure a host of metrics in parallel with a set of prompts. Latency, cost, and reliability are also worth considering. For some applications, multiple models are used based on performance.
December 7, 2024 at 6:00 PM
If you want to do some rigorous evaluations over a set of inputs, MLflow worked well for me with the different services over their APIs. I wrote a bunch of metrics for latency and cost for the services, and with that data I was able to argue which model was the best value for our use case.
December 4, 2024 at 1:07 AM
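(The post doesn't spell out the setup, but here is a minimal sketch of what that kind of MLflow comparison could look like. The model names, per-token prices, and the call_model() wrapper are all placeholders standing in for whatever provider SDK you actually use.)

```python
import time
import mlflow

# Hypothetical per-1K-token prices; substitute your provider's real rates.
PRICE_PER_1K_TOKENS = {"model-a": 0.0005, "model-b": 0.003}

def call_model(model_name: str, prompt: str) -> tuple[str, int]:
    # Placeholder: swap in your provider's SDK call here.
    # Returns (completion_text, total_tokens_used).
    return "stub completion", 250

prompts = ["Summarize this ticket ...", "Classify this email ..."]

for model_name in PRICE_PER_1K_TOKENS:
    with mlflow.start_run(run_name=model_name):
        mlflow.log_param("model", model_name)
        latencies, costs = [], []
        for prompt in prompts:
            start = time.perf_counter()
            _, tokens = call_model(model_name, prompt)
            latencies.append(time.perf_counter() - start)
            costs.append(tokens / 1000 * PRICE_PER_1K_TOKENS[model_name])
        # One run per model; compare these metrics in the MLflow UI
        # to argue which model is the best value for the use case.
        mlflow.log_metric("mean_latency_s", sum(latencies) / len(latencies))
        mlflow.log_metric("total_cost_usd", sum(costs))
```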
Shower thoughts: if you survive an encounter, you will upgrade your equipment :D Also, the gear is a threat to those who would challenge you. We are starting with fast fashion and ending up with the clothes that last (which might be the tacticool leather).
December 2, 2024 at 12:08 PM