Sarah Shearing
sshearing.bsky.social
She / Her. My job is machines.
I think Sol Ring and Mana Crypt are so comparable I'd have to go with Sol Ring, just to maintain the whole Alpha thing
October 25, 2025 at 4:56 PM
Las Vegas is an exception to this because I have family out there
October 10, 2025 at 5:24 PM
nah, I regret every time I fly to a tournament; if it's not within a 5-hour drive, I'm off it
October 10, 2025 at 5:23 PM
hi, I have power that would love to see more play
October 10, 2025 at 5:18 PM
I'm going to play with the cards in my cube and no others.
October 10, 2025 at 4:31 PM
top 6 non-Eevees:
Mudkip
Mudkip
Mudkip
Mudkip
Mudkip
Mudkip
October 6, 2025 at 4:59 PM
I also started during Lorwyn, and I'm also planning on Lorwyn being my off-ramp! I've kinda already off-ramped, but I'll stop doing cube updates after Lorwyn too
September 28, 2025 at 5:22 PM
Oh, we are nowhere close to AGI, and it doesn't have to do with costs; it has to do with the models themselves. The models don't think. They don't 'learn' in the sense that we do. They *can't* grow. They're just the world's most sophisticated matrix multiplication systems.
August 19, 2025 at 5:53 PM
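A toy sketch of what 'just matrix multiplication' means here: a transformer-style feed-forward layer is two matrix multiplies with a fixed nonlinearity between them. All the shapes and values below are invented for illustration, not any real model's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up dimensions for demonstration.
d_model, d_hidden = 4, 8
W1 = rng.standard_normal((d_model, d_hidden))
W2 = rng.standard_normal((d_hidden, d_model))

def feed_forward(x):
    """x: (seq_len, d_model) -> (seq_len, d_model)"""
    h = np.maximum(x @ W1, 0.0)  # matmul, then ReLU
    return h @ W2                # another matmul

x = rng.standard_normal((3, d_model))
y = feed_forward(x)
print(y.shape)  # (3, 4)
```

Real models stack hundreds of layers like this, plus attention (which is also matrix multiplication), but nothing in the computation resembles learning at inference time; the weights are frozen numbers.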
Right, and I'm saying I think you're wrong, along with the smart money. The problem is, whether or not it can be done, I think it's close to 0% that anyone will actually try to do it. There's simply no money in it. Magic doesn't have the allure of League or the history of chess.
August 19, 2025 at 5:50 PM
Yeah, basically. The question isn't whether it can be done; it can. The question is whether you can get the right people on the problem and secure the funding to do it.
August 19, 2025 at 5:47 PM
Granted, the question we're trying to solve isn't optimal play, it's 'better than a human'.
August 19, 2025 at 5:45 PM
Actually, I think this would be the most difficult part of building an AI model for Magic: you'd need to build a simulation engine to perform the reinforcement learning, and, uhhh, it can be done, I guess. But man, I wouldn't wish to be the one in charge of that
August 19, 2025 at 5:42 PM
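To make the simulation-engine point concrete, here's a hedged sketch of the kind of interface an RL trainer needs from a game engine (reset, legal actions, step, reward). Every name here is hypothetical, and the stand-in 'game' is deliberately trivial; for Magic, the hard part would be implementing the full rules inside step().

```python
import random

class TinyGameEnv:
    """Stand-in game: reach a total of 10 before the opponent does."""

    def reset(self):
        self.my_total, self.opp_total = 0, 0
        return (self.my_total, self.opp_total)

    def legal_actions(self):
        return [1, 2]  # add 1 or 2 to your total

    def step(self, action):
        self.my_total += action
        if self.my_total >= 10:
            return (self.my_total, self.opp_total), 1.0, True  # state, reward, done
        self.opp_total += random.choice([1, 2])  # opponent plays randomly
        done = self.opp_total >= 10
        reward = -1.0 if done else 0.0
        return (self.my_total, self.opp_total), reward, done

# A single random-policy rollout; an RL trainer would run millions of these.
env = TinyGameEnv()
state, done = env.reset(), False
while not done:
    state, reward, done = env.step(random.choice(env.legal_actions()))
print(reward)  # +1.0 if we won, -1.0 if the opponent did
```

The loop above is the cheap part; a faithful Magic engine behind step() would have to encode priority, the stack, replacement effects, and thousands of card interactions, which is the job nobody would envy.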
My point was that it's explicitly not solvable, and even though there is mathematical structure to language, no one actually uses any of it when building these models.
August 19, 2025 at 5:40 PM
I would agree that LLMs don't beat humans at language in general, but when it's broken into smaller tasks, models often outperform humans now (e.g., a model can consistently outperform humans on translation, or on summarization). I just brought up language as it's my area of expertise.
August 19, 2025 at 5:39 PM
The outcome is often just a new position that is favorable by some reward metric. It might be computing 5-10 positions deep, but that's an infinitesimally small slice of all the possible future positions.
August 19, 2025 at 5:38 PM
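A minimal sketch of what 'computing 5-10 positions deep' looks like: depth-limited negamax that scores leaf positions with a cheap reward metric instead of enumerating the whole game tree. The game (take turns adding 1-3 to a running total; whoever hits 21 exactly wins) and all the numbers are invented for illustration.

```python
TARGET = 21

def moves(total):
    return [m for m in (1, 2, 3) if total + m <= TARGET]

def heuristic(total):
    # Cheap reward metric for non-terminal leaves: closer to the target is better.
    return total - TARGET

def negamax(total, depth):
    """Value of the position for the player about to move."""
    if total == TARGET:
        return -1_000  # previous player just won, so the player to move has lost
    if depth == 0 or not moves(total):
        return heuristic(total)
    # Best move for us is the one that's worst for the opponent.
    return max(-negamax(total + m, depth - 1) for m in moves(total))

def best_move(total, depth=6):
    return max(moves(total), key=lambda m: -negamax(total + m, depth - 1))

print(best_move(18))  # 3: jumping straight to 21 wins
```

Even here the tree is tiny; in a real game the branching factor makes exhaustive search impossible, which is exactly why the search stops at a fixed depth and falls back on the heuristic.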
I think there's a misunderstanding about what the outcomes here are. The machine isn't taking some input position, computing every possible future position from it, and determining which win and which lose, because that's not computable.
August 19, 2025 at 5:37 PM
Like, one layer activates mostly on numbers or quotes, or another mostly on prepositional phrases. But that's an oddity. In general, none of it is human-readable in any sense, and whatever the model 'learned' in its weights is a complete black box
August 19, 2025 at 5:34 PM