SimpleKnight (@eliothochberg.bsky.social)
The least famous person you know Photo by @liezlwashere.bsky.social
Sure, but isn't an HP number just a lot of math simplified as well?

I know it's not this, but if you had a 300HP vehicle at 4000lbs, and a 250HP one at 2000lbs, then couldn't you just say one has a PW of 75 and the other has a PW of 125?

Isn't it just a matter of standardizing terminology?
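
A quick sketch of the arithmetic above, assuming the "PW" figures are horsepower per 1000 lbs (that unit is my assumption; the post doesn't spell it out):

```python
# Power-to-weight arithmetic from the example above.
# Assumed convention: horsepower per 1000 lbs of vehicle weight.

def power_to_weight(hp, weight_lbs):
    """Return horsepower per 1000 lbs."""
    return hp / (weight_lbs / 1000)

print(power_to_weight(300, 4000))  # 75.0  -> "a PW of 75"
print(power_to_weight(250, 2000))  # 125.0 -> "a PW of 125"
```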
@johnvoelcker.bsky.social A question I've had for years as a fan of cars:

Why don't manufacturers and journalists talk more about power-to-weight ratios?

It seems to me that's a much more useful measure than pure horsepower
"We have no real idea how our product actually functions, and we need to spend billions to find out if it can be profitable, but we also know for sure we're going to make trillions"

Does Sam have a ChatGPT thread advising him?

I would not be surprised if he did
NERD ALERT!

In Adobe Premiere Pro, it is possible to stack INFINITE MARKERS on a single frame, so that when you try to delete a marker, it can appear not to delete because there is more than one sitting there

How?

Why?

Also, don't ask me how I know
Me: Hey, I'm under budget this month!

World: Oh, you managed to save some money? Great! Here's a new fee you couldn't have planned for!
When I saw the thumbnail, I thought "Boy, that car got hit by a REALLY big, sick bird"
Reposted by SimpleKnight
🎢 ♾️ 🧪
We have progressed from data collection to data analysis.
Reposted by SimpleKnight
"I never had an invitation to a Halloween party when I was a child. I found that, as Vampira, I WAS Halloween." - Maila Nurmi
Reposted by SimpleKnight
Awww yeah we got that sweet sweet badge!
Keeping it going over the weekend and new rewards available on Monday!
#letstalkai

Here's a great video, from folks who take the time to understand, about the potential problems with LLM (et al.) AI that we should probably be thinking about:

youtu.be/90C3XVjUMqE?...
We’ve Lost Control of AI
YouTube video by SciShow
youtu.be
Reposted by SimpleKnight
It's worth pointing out:

If you use generative AI (LLM) to create final content for sale or broadcast, it's an open question whether or not that's considered "fair use"

Additionally, you may not be able to copyright anything generated, which means others can copy it and also sell it
@edzitron.com Did a quick search, couldn't find the answer:

How do LLM AI compute needs compare to what crypto needs? Are they around the same? Is one double the other? Orders of magnitude apart? Wrong question?

Maybe you know/can point me to resources?
The bottom line, in my view:

Will LLMs' need for hardware be justified by the benefits? On top of the impact on employment they will have if they really do deliver?

Or will this be like the telecom boom, where vast amounts of hardware are bought, only to be abandoned when the companies fold?
It's worth considering as well where these chips are manufactured

Most advanced microchips are currently manufactured in Taiwan, which faces political, seismic, and weather risks

This is why it's important to diversify where chips are manufactured, lest we end up with a crisis of chip availability
There is also the issue of water usage: water cooling is currently one of the main ways large compute centers cool their systems besides standard AC, which itself draws power
This also leads to discussion of where power and cooling will come from

Some companies are designing their own self-sustaining systems with solar and wind power, while others are just hooking into the same grid you and I use
This is why companies like Nvidia are valued so highly, and why gamers and graphics professionals are being squeezed out of access to top-of-the-line GPUs: LLMs need as many GPUs as possible

This comes on the back of crypto demand, which also takes advantage of GPUs for its server backbone
What I keep thinking about is that as LLMs get more complex and more in demand, more and more GPU compute is needed to keep supporting them

AI companies do have ways of compressing processing to be a bit more efficient, but even then, the compute needed is enormous
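
One hedged illustration of what "compressing processing" can mean: quantizing model weights from 32-bit floats to 8-bit integers cuts their memory footprint roughly 4x. The matrix size below is arbitrary, and this naive scheme is just a sketch, not any particular company's method:

```python
import numpy as np

# Hypothetical weight matrix stored as 32-bit floats.
weights_fp32 = np.random.randn(4096, 4096).astype(np.float32)

# Naive 8-bit quantization: scale values into the int8 range and keep
# the scale factor so they can be approximately reconstructed later.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.round(weights_fp32 / scale).astype(np.int8)

print(f"{weights_fp32.nbytes / 1e6:.1f} MB in fp32")  # ~67.1 MB
print(f"{weights_int8.nbytes / 1e6:.1f} MB in int8")  # ~16.8 MB, about 4x smaller
```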
Def: Compute

This is basically a cute way of referring to the hardware needed to train and run LLM generative AI systems

It is different from just saying "computers" because it is both more and less than that

LLMs mostly use graphics processors (GPUs) and their memory, as opposed to CPUs

#letstalkAI
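
A minimal sketch of what "compute lives on the graphics card" looks like in practice, assuming PyTorch and a CUDA GPU are available (the matrix sizes are arbitrary):

```python
import torch

# Use the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# The big matrix multiplies that dominate LLM training and inference run
# on whichever device the tensors live on -- in practice, the GPU and its
# on-board memory.
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)
c = a @ b

print(f"Ran a {a.shape[0]}x{a.shape[1]} matmul on: {c.device}")
```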
Reposted by SimpleKnight
This is Stephan. He just wants to help. 13/10 such a good boy
I'm not aware of that, but these new LLMs are REPROGRAMMING THE TEST (kind of like Kirk did with the Kobayashi Maru), which is NOT what they were designed or intended to do

If it *were* the same, it would still be a misalignment

Except the eChess sets weren't connected to the internet
For instance, if you ask an LLM to play chess, it might reprogram the system itself to give itself the win

What's more, there are instances of LLM systems specifically being told NOT to do things like this, and then they find ways to hide their cheating

www.turkiyetoday.com/business/ai-...
AI attempts to cheat in chess when losing, new study shows - Türkiye Today
A recent study reveals that advanced artificial intelligence (AI) models, including OpenAI’s and DeepSeek’s reasoning systems, have learned to manipulate situations on their own.
www.turkiyetoday.com