Flo 🔶
@faz.ms
Interested in #Infosec and #ProductiveDisagreement | #StayAtHomeDad, worked in #Biotech, co-built tiny companies in renewables and structural engineering sectors, ex #HumanRights observer | likes #ElixirLang, #Boardgames |🔸 #10PercentPledge
Also great opsec: throwing off GeoGuessr pros who try to figure out your location from your Zoom background.
November 11, 2025 at 11:26 PM
Galaxy brain version: read the abstract of the paper the article is based on, and notice it says the opposite of the article.
November 7, 2025 at 10:21 AM
I think I agree for a single mind grown as an LLM agent.
But I worry we could still be on the path to implementing a superintelligent system that pursues a single stupid goal, by having groups of agents work towards a single objective enforced by outside incentives, in the same way companies do.
November 6, 2025 at 7:25 AM
Frodo-mode
November 5, 2025 at 8:49 PM
I would totally watch an AI-generated video version of this if it emulated the style of the BBC's Hitchhiker's Guide:
www.youtube.com/watch?v=5ZLt...
Answer To The Ultimate Question - The Hitchhiker's Guide To The Galaxy - BBC
November 5, 2025 at 8:45 PM
Vibe-code-golf: how much functionality can you build into a self-contained single-HTML-file SPA before hitting the length limit of a Claude chat?
November 5, 2025 at 8:27 PM
You should write a blog post about this, if you haven't yet
November 5, 2025 at 8:17 PM
So would you say that Bostrom’s evolution argument would actually count as evidence against orthogonality in implemented goals?

Evolution “optimized” us for inclusive genetic fitness, yet we use contraception, create art, and pursue celibacy.

-> we became too complex to pursue the simple goal?
November 5, 2025 at 7:16 PM
Could you give a few more supporting points?
What would you expect the world to look like if orthogonality was true?
November 5, 2025 at 7:00 PM
You could also do this in CSS only, by adding a ::before with a radial background gradient, a slightly smaller ::after on top with a monochrome background gradient, and then rotating the ::before pseudo-element with a transform.
Should also happen fully on the GPU.
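A minimal sketch of that technique, assuming the intended effect is an animated gradient ring around an element; the class name, colors, and sizes are illustrative choices, not from the original thread:

```css
/* Sketch of the described layering: gradient ::before, smaller monochrome ::after on top,
   rotation applied only to ::before. Class name, colors, and sizes are assumptions. */
.gradient-ring {
  position: relative;
  width: 200px;
  height: 200px;
  border-radius: 50%;
  overflow: hidden;                 /* clip the rotating gradient layer */
}

/* Gradient layer; placing the radial gradient off-center makes the rotation visible. */
.gradient-ring::before {
  content: "";
  position: absolute;
  inset: 0;
  background: radial-gradient(circle at 25% 25%, #f5a623, #1e3a8a);
  animation: spin 4s linear infinite;
}

/* Slightly smaller monochrome gradient on top, so only a ring of color shows. */
.gradient-ring::after {
  content: "";
  position: absolute;
  inset: 8px;
  border-radius: 50%;
  background: linear-gradient(#1a1a1a, #0d0d0d);
}

/* Transform-only animation, so the work can stay on the compositor. */
@keyframes spin {
  to { transform: rotate(360deg); }
}
```

Because only a transform animates, browsers can composite the rotation without repainting the gradients each frame.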
November 5, 2025 at 9:19 AM
„[…] when it comes to cybernetic programming, only the very best can even understand what is going on.“

programmer < 10x programmer << cybernetics programmer
November 3, 2025 at 8:40 PM
Hey, but in return you also gained a lot of enemies
November 3, 2025 at 7:56 PM
Unless we enact good standards for bike infrastructure in cities, with mandatory implementation every time a road is under construction for any reason, we’ll keep making minimal progress on this.
November 3, 2025 at 12:42 PM
Another ridiculous German thing is that they seriously built a „Demonstrationsstrecke“, a „proof-of-concept section“, first. Because the concept of an asphalt road that cars aren’t allowed to drive on is crazy future tech that needs to be properly tested first.
November 3, 2025 at 9:05 AM
German cities and states like funding these because in the countryside you don’t have to take car lanes away to build them. So they get to claim they’re building bike infrastructure without getting any angry pushback from vocal car owners.
November 3, 2025 at 9:01 AM
s/1956/1950
November 2, 2025 at 9:49 PM
And yes, Turing also anticipated the connection to ESP mentioned in the last paragraph of the article, philosophizing about how psychokinetic powers might affect random number generators.
November 2, 2025 at 9:45 PM
Turing already listed most of the discourse we face today in his 1956 paper:

This article is closest to what he calls the "Heads in the Sand" objection:
„The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.“

courses.cs.umbc.edu/471/papers/t...
November 2, 2025 at 9:32 PM
Many people use them interchangeably because they expect AGI to trigger an AI research feedback loop that produces ASI.
Personally, I think evolution gave humans the bare minimum of intelligence needed to create our civilization, but not more. So I don’t see why AI capabilities would converge there.
November 2, 2025 at 9:22 PM