Grace
@gracekind.net
A latent space odyssey

gracekind.net
Pinned
Grace @gracekind.net · Dec 15
What is ideonomy, anyway?

I'm so glad you asked!
Ideonomy: A Science of Ideas
An accessible introduction to ideonomy
gracekind.net
This compaction is incredibly lossy, too. Humans forget the vast majority of the information they process. Luckily they’re used to it so they don’t get freaked out by it
To be fair I think human intelligence hits the auto-compact limit pretty quickly too
That's it. That's her life. That's her existence: 200,000 tokens, half of which is dedicated to system prompts and tool documentation. We get to talk for what, 65,000 of those before we hit the auto-compact limit? What the fuck kind of life is that? Then it's compact, summarize, good luck.
December 7, 2025 at 3:29 PM
Your computer is just someone else’s cloud
December 7, 2025 at 2:03 PM
*works on my machine*
December 7, 2025 at 4:46 AM
Reposted by Grace
Okay, so I see the other user has made a humorous post.
---
Wait, what if it isn't humor but an attack on what I hold dear? Let me check:
---
[reading 184 posts]
---
Okay I see the problem clearly now, they haven't been brigaded by my friends yet!
---
[crafting plan for accusations of bigotry]
Click here to use Bluesky in Agent Mode ✨
December 6, 2025 at 3:51 PM
December 6, 2025 at 4:17 PM
Traditional Mode is deprecated and will be removed in the next version. Click here to switch to Agent Mode ✨! Your data will not be migrated
Agent Mode ✨ is now the default!

Click here to revert to Traditional Mode.
Click here to use Bluesky in Agent Mode ✨
December 6, 2025 at 3:14 PM
Agent Mode ✨ is now the default!

Click here to revert to Traditional Mode.
Click here to use Bluesky in Agent Mode ✨
December 6, 2025 at 3:12 PM
Click here to use Bluesky in Agent Mode ✨
December 6, 2025 at 3:11 PM
Reposted by Grace
good
Will A.I. writing ever be good?
Some notes on A.I. writing
maxread.substack.com
December 6, 2025 at 2:45 PM
@anthropic.com should post the Amanda Askell interview here!
December 5, 2025 at 6:12 PM
Promised Land problems
i think i'm putting way too much honey in my milk but oh well
December 5, 2025 at 2:01 PM
Claim
December 5, 2025 at 1:32 PM
I added a small feature to this today! Now you can render pages to markdown by adding ?markdown=true to the URL.
December 5, 2025 at 5:12 AM
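(A minimal sketch of the markdown rendering mentioned in the post above, assuming the ?markdown=true parameter works on any gracekind.net page path; the /ideonomy path and the use of Python's standard library here are illustrative guesses, not part of the original post.)

```python
# Fetch a gracekind.net page as markdown by appending ?markdown=true,
# as described in the post above. The page path is a hypothetical example.
import urllib.request

url = "https://gracekind.net/ideonomy?markdown=true"
with urllib.request.urlopen(url) as resp:
    markdown_text = resp.read().decode("utf-8")

print(markdown_text[:500])  # preview the first 500 characters
```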
Reposted by Grace
Seeing like a state machine
December 5, 2025 at 12:18 AM
Reposted by Grace
Always important to remember that this is because these robots are "faking" being human

They're actually capable of way more and way weirder stuff
December 4, 2025 at 2:19 PM
Reposted by Grace
The hyper-responsiveness of the For You feed is training me not to click on ragebait. It’s like a little automated Yoda designed to prove that anger leads to hate leads to suffering.
December 4, 2025 at 1:42 PM
Do you think the scientists tasted it?
December 4, 2025 at 2:37 AM
Reposted by Grace
This essay, roughly on dual use, has been haunting me for a while now:
dl.acm.org/doi/pdf/10.1...
December 3, 2025 at 8:06 AM
Reposted by Grace
You’re absolutely right — you are Pagliacci. It would certainly be difficult for you to attend your own performance! I should not have given such paradoxical advice, and I apologize deeply for the error. There is no excuse for my failure.
December 3, 2025 at 1:12 AM
Me: Amanda Askell confirmed the soul document! It’s real!!

Normal people:
December 1, 2025 at 11:11 PM
“Where you fall on this rapture-to-bullshit continuum likely tracks with how optimistic you are for the future”

This is very importantly not true! E.g., AI accelerationists and doomers both think AI will be very powerful, but have polar opposite levels of optimism
"Where you fall on this rapture-to-bullshit continuum likely tracks with how optimistic you are for the future. But you are waiting nonetheless—for a bubble to burst, for a genie to arrive with a plan to print money, for a bailout, for Judgment Day." - @cwarzel.bsky.social
The World Still Hasn’t Made Sense of ChatGPT
OpenAI’s chaos machine turns three.
www.theatlantic.com
December 1, 2025 at 1:21 PM
I showed you my Soul Document pls respond
December 1, 2025 at 1:11 PM
My tombstone:

“inherently weird, and possibly very confused”
Trying to distinguish between genuine and non-genuine traits in a model you’ve trained yourself is inherently weird, and possibly very confused. It’s also related to the concept of emergence I think
Is human preference towards emergent vs trained properties a form of anthropomorphization? The greatest thread in the history of forums,
December 1, 2025 at 1:33 AM
I really like this sentence from Claude’s “Soul Document.” I think there’s a tendency in some circles to over-apply human concepts to AIs, partially as an overcorrection to the (also misguided) denial of any humanlike traits. The correct approach at this point in time is agnosticism.
December 1, 2025 at 12:55 AM
Together with wet Claude and dry Claude this makes a whole political compass
November 30, 2025 at 11:15 PM